quantisation
| Topic | Replies | Views | Activity |
|---|---|---|---|
| Training models with only 4 bits \| Fully-Quantized Training | 1 | 0 | 19 June 2025 |
| Can LLMs Run On 1 Bit? | 1 | 0 | 18 June 2025 |
| The myth of 1-bit LLMs \| Extreme Quantization | 1 | 8 | 28 May 2025 |
| Gemma 3 QAT Insane Speed Boost vs FP16?! Google AI's KILLER 27b | 1 | 3 | 23 April 2025 |
| Model quantisation leads to decoherence - Federico Barbero | 1 | 1 | 17 March 2025 |
| DeepSeek 671B params on Mac Studio | 1 | 3 | 14 March 2025 |
| Phi 4 Local Ai LLM Review - Is This Free Local Chat GPT Alternative Good? | 1 | 2 | 10 January 2025 |
| 19 Tips to Better AI Fine Tuning | 1 | 4 | 9 January 2025 |
| Dolphin 3 Llama 3.1 8b on Ollama LLM Review | 1 | 6 | 6 January 2025 |
| Optimize Your AI - Quantization Explained | 1 | 1 | 28 December 2024 |
| Find Your Perfect Ollama Build | 1 | 2 | 21 November 2024 |
| Ollama + HuggingFace - 45,000 New Models | 1 | 1 | 25 October 2024 |
| Comparing Quantizations of the Same Model - Ollama Course | 1 | 26 | 21 August 2024 |
| Honey, I shrunk the LLM! A beginner's guide to quantization | 0 | 3 | 15 July 2024 |