Archives
- 30 Dec The age of hyper-personalized software
- 27 Dec Running MiniMax-M2.1 Locally with Claude Code on Dual RTX Pro 6000
- 25 Dec Guide to installing and running the best models on a dual RTX Pro 6000 rig with vLLM
- 21 Dec Injecting Knowledge into LLMs via Fine-Tuning
- 30 Nov Three Years of ChatGPT
- 16 Nov Getting Started with Running LLMs Locally
- 02 Nov Silicon Valley's New Secret: Chinese Base Models
- 26 Oct Speeding up local LLM inference 2x with Speculative Decoding
- 21 Oct Open Weights, Borrowed GPUs
- 11 Oct Harnessing GPT-OSS Built-in Tools