
How to run LLMs locally

Hosted by Dr. Alvaro Cintas

Timestamps

[00:00] Start session with intro and housekeeping
[01:53] Recap major GPT releases and closed-source trend
[03:58] Explain rise of open-source after Meta leak
[05:23] Compare open vs closed models head-to-head
[07:00] Break down key differences: cost, control, access
[09:24] Cover downsides of open-source: quality, oversight
[10:18] Highlight top open models: LLaMA 2, Mistral
[11:00] Introduce leaderboards for benchmark comparisons
[12:53] Share top reasons to run models locally
[15:39] Review tools for local LLMs: Ollama, Jan, LM Studio (see the Ollama sketch after the timestamps)
[17:05] Install GPT4All on Mac and Windows (a GPT4All usage sketch also follows the timestamps)
[20:00] Choose models and note system requirements
[22:07] Demo chat with fast local response
[24:14] Test...
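
For the tooling segment at [15:39], here is a minimal sketch of querying a locally running model. It assumes Ollama is installed, serving on its default port 11434, and that a model has already been downloaded with `ollama pull llama2`; the model tag and prompt text are illustrative, not taken from the session.

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes Ollama is running (default port 11434) and `ollama pull llama2`
# has already downloaded the model; everything stays on your own machine.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",          # any locally pulled model tag works
        "prompt": "Explain why running LLMs locally helps with privacy.",
        "stream": False,            # return one JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the model's generated text
```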

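The GPT4All install and chat demo ([17:05]–[22:07]) can also be driven from code. This is a hedged sketch using GPT4All's Python bindings (`pip install gpt4all`); the model filename is one example from the GPT4All catalog, not necessarily the model used in the workshop.

```python
# Sketch of the local chat demo using GPT4All's Python bindings.
# The model filename below is an example; GPT4All downloads it to a
# local cache on first use, then generation runs fully offline.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # small model, modest RAM needs

with model.chat_session():  # keeps multi-turn conversation context
    reply = model.generate(
        "Give two reasons to run an LLM on my own laptop.",
        max_tokens=200,
    )
    print(reply)
```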
