
Three Years of ChatGPT

Today marks three years since ChatGPT was launched. In this short article I reflect on how far LLMs have come in just a few years, from getting early access to GPT-4 to now running open models that surpass it, and share two graphs that illustrate both the progress in open-weight models and the increasingly close race between OpenAI, Google, and Anthropic (with Google currently in the lead).

Today marks three years since ChatGPT was launched, and it’s wild to think about how much has changed since then. I still remember getting early access to GPT-4 while working on the first version of Copilot at Microsoft and realizing that LLMs were going to reshape the entire industry.

Fast forward to today: I can now run models locally that outperform the original GPT-4 on two prosumer RTX Pro 6000 GPUs. That alone would’ve sounded like science fiction three years ago.
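To make the "192 GB of VRAM" figure concrete, here is a rough back-of-envelope sketch of how much memory a model's weights alone require at different quantization levels. The parameter counts and bit widths below are illustrative assumptions, not specific models from this post, and the estimate ignores KV cache and activation overhead:

```python
def weight_vram_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate GB needed just to hold the weights:
    params × bits / 8 bytes, ignoring KV cache and activations."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# Two RTX Pro 6000 cards at 96 GB each give 192 GB total.
budget_gb = 2 * 96

# Hypothetical model sizes and quantization levels for illustration.
for params_b, bits in [(70, 16), (120, 8), (235, 4)]:
    need = weight_vram_gb(params_b, bits)
    verdict = "fits" if need <= budget_gb else "does not fit"
    print(f"{params_b}B @ {bits}-bit: ~{need:.0f} GB ({verdict} in {budget_gb} GB)")
```

The takeaway is that aggressive quantization is what makes frontier-sized open models viable on prosumer hardware: a 4-bit model needs roughly an eighth of the memory of its 16-bit weights.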

And the broader landscape has shifted just as dramatically. Google, which many people initially wrote off despite having introduced transformers back in 2017, is now arguably in the lead with Gemini 3, with a gap that may widen thanks to its early and aggressive bets on TPUs. OpenAI, Google, Anthropic, and others are now in an incredibly tight race at the frontier.

I’m genuinely excited to see what the next three years bring!

I’ll leave you with a couple of graphs that capture this progress. The first compares the relative “intelligence index” of four open-weight models you can run today within 192 GB of VRAM against GPT-4 and GPT-4o.

[Graph: intelligence index of four open-weight models vs. GPT-4 and GPT-4o]

The second shows how close the race between OpenAI, Google, and Anthropic has become recently, with Google currently in the lead.

[Graph: frontier-model race between OpenAI, Google, and Anthropic over time]

This post is licensed under CC BY 4.0 by the author.