
How to run OpenAI’s open weight models locally on your computer

Open Source
The Rundown


Hosted by Dr. Alvaro Cintas



Timestamps:

[00:02] Introduce workshop topic and recording access
[01:32] Share screen for live demos
[02:12] Discuss new GPT-5 impressions
[03:16] Outline GPT-OSS session goals
[05:23] Recap OpenAI model history
[07:15] Explain reasoning model concept
[08:59] Highlight first open-source release in 5 years
[10:19] Compare 120B vs 20B model specs
[13:43] Review benchmark performance results
[18:24] Emphasize offline use and privacy benefits
[26:26] Demonstrate running models with Ollama (sketch after this list)
[33:09] Showcase LM Studio interface and features (sketch below)
[46:24] Suggest smaller models for lower-spec machines
[47:58] Present online GPT-OSS playground option
[50:25] Demonstrate speed using Groq platform (sketch below)
[60:42] Conclude with Q&A and future workshop...
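
For reference alongside the [26:26] Ollama demo, here is a minimal Python sketch of the same idea, assuming Ollama is installed with its local server running and the 20B model already pulled; the gpt-oss:20b tag and the ollama Python package call come from Ollama's documentation, not from the workshop recording.

# Minimal sketch: chat with gpt-oss 20B through a local Ollama server.
# Assumes the model has been pulled first, e.g.:  ollama pull gpt-oss:20b
import ollama

response = ollama.chat(
    model="gpt-oss:20b",  # model tag per Ollama's library; adjust if yours differs
    messages=[{"role": "user", "content": "Explain what an open-weight model is in one sentence."}],
)
print(response["message"]["content"])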
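
For the [33:09] LM Studio segment, the app can also expose a local OpenAI-compatible server (default address http://localhost:1234/v1), so a sketch along these lines should work once a gpt-oss build is loaded; the model identifier below is an assumption, so use whatever name LM Studio shows for your download.

# Minimal sketch: query a model loaded in LM Studio via its local
# OpenAI-compatible server. The api_key value is a placeholder; the local
# server does not check it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # assumed identifier; copy the name LM Studio displays
    messages=[{"role": "user", "content": "Summarize the benefits of running models offline."}],
)
print(completion.choices[0].message.content)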
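
And for the [50:25] Groq demo, the same chat call can simply be pointed at Groq's hosted OpenAI-compatible endpoint for much faster generation. This is a sketch, not the workshop's exact code: it assumes a GROQ_API_KEY environment variable, and the openai/gpt-oss-120b model id should be verified against Groq's current model list.

# Minimal sketch: the same chat call pointed at Groq's hosted endpoint
# instead of a local server.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],  # assumes the key is set in your environment
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # assumed id; check Groq's model list
    messages=[{"role": "user", "content": "What hardware do I need to run a 20B model locally?"}],
)
print(completion.choices[0].message.content)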


Comments (2)



SCHUSTER JUERGEN CHRISTOPH

8/10/2025

Still don't get a grasp of how to fine-tune a model. Wouldn't it be time, Alvaro, to finally fulfill your promise 😊


SCHUSTER JUERGEN CHRISTOPH

8/10/2025

How can a local open-source model snitch to the government, as shown in this benchmark: https://youtu.be/pnMgVIpBNf8?si=n_oZB_CkIvlTyvwV&t=764 ? It means they're logging your input anyway and sending it to their masters whenever they see fit.