Workshop · LLMs

How to run LLMs locally


Access: Members only
Topics: 1 topic
Updated: Aug 19, 2025


Replay locked; included with the Trial or Pro plans.


Workshop overview

What you'll learn

Timestamps

[00:00] Start session with intro and housekeeping
[01:53] Recap major GPT releases and closed-source trend
[03:58] Explain rise of open-source after Meta leak
[05:23] Compare open vs closed models head-to-head
[07:00] Break down key differences: cost, control, access
[09:24] Cover downsides of open-source: quality, oversight
[10:18] Highlight top open models: LLaMA 2, Mistral
[11:00] Introduce leaderboards for benchmark comparisons
[12:53] Share top reasons to run models locally
[15:39] Review tools for local LLMs: Ollama, Jan, LM
[17:05] Install GPT4All on Mac and Windows
[20:00] Choose models and note system requirements
[22:07] Demo chat with fast local response
[24:14] Test...
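The [20:00] segment above covers choosing a model that fits your machine's resources. As a minimal sketch of that decision, the snippet below picks an Ollama model tag based on available RAM; the model names, RAM threshold, and variable names are illustrative assumptions, not taken from the workshop replay.

```shell
# Hypothetical sketch: pick an Ollama model tag by available RAM.
# Assumes Ollama is installed; model tags and the 16 GB threshold
# are examples, not recommendations from the session.
ram_gb=8                       # set to your machine's RAM in GB
if [ "$ram_gb" -ge 16 ]; then
  model="llama2:13b"           # larger model for 16 GB+ machines
else
  model="llama2:7b"            # smaller 7B model for ~8 GB machines
fi
echo "ollama run $model"       # prints the command rather than running it
```

You would then run the printed command (e.g. `ollama run llama2:7b`) in a terminal to start a local chat session.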


Instructors

Dr. Alvaro Cintas

PhD AI Professor

Resources

Unlock to access slides, templates, and workshop downloads.

Topics

Coding

