Quantization Series | Part 1. Foundations: What is Quantization? (30 April 2026)
DeepSeek V4 Pro at 2-Bit?! | Local AI Cluster vs 1.6T Params 🤯 (29 April 2026)
Let's Run DeepSeek V4 Flash vs Pro - Local AI Coding, Maths & Logic TESTED 🧐 (27 April 2026)
Deepseek V4 Local Ai Running = PITA! (24 April 2026)
Should You Buy nVidia RTX 5070ti 16gb GPU for Local AI? Qwen 3.6 Agents? (22 April 2026)
I Just Tried The Brand New Ternary Model And It's Great! (21 April 2026)
1 top FREE model, 2 formats… one is WAY FASTER (21 April 2026)
Qwen3.6 vs Gemma4 Local Ai Performance Benchmarking (17 April 2026)
How Do We Get MASSIVE Model To Run On Device? Quantization Explained (14 April 2026)
Should You Buy the nVidia RTX 3080 for Local AI? Gemma 4? (13 April 2026)
Let's Run MiniMax M2.7 - #1 Coding Local AI for Agents & OpenClaw? 🧐 (13 April 2026)
Google's New Quantization is a Game Changer (11 April 2026)
GLM 5.1 at 2-Bit?! 🤯 Can Local AI Extreme Quantisation Be GOOD? (10 April 2026)
GLM 5.1 - Coding, Apps & Maths TESTED | #1 Local AI Got Smarter 🤯 (8 April 2026)
After This, 16GB Feels Different (8 April 2026)
Bonsai 1bit Local AI Model + 2bit TurboQuant - Will it Run OpenClaw? 🤯 (2 April 2026)
bitnet (1 April 2026)
LLM Compression Explained: Build Faster, Efficient AI Models (31 March 2026)
How to Run TurboQuant - "Lossless" Quantization for Local AI TESTED ✅ (29 March 2026)
Testing Google's TurboQuant Approach: I Got 5x Compression with 99.5% Accuracy! (25 March 2026)
KittenTTS - The Nano TTS (22 February 2026)
One Setting 3x'd My LLM Speed… Same hardware (17 February 2026)
New top open-source AI image model just dropped! Ultra-fast & light (16 January 2026)
THIS is the REAL DEAL 🤯 for local LLMs (12 September 2025)
DeepSeek 3.1 FULL Just Launched! (21 August 2025)
Reverse-engineering GGUF | Post-Training Quantization (18 July 2025)
Training models with only 4 bits | Fully-Quantized Training (19 June 2025)
Can LLMs Run On 1 Bit? (18 June 2025)
The myth of 1-bit LLMs | Extreme Quantization (28 May 2025)
How LLMs survive in low precision | Quantization Fundamentals (19 May 2025)