The physics behind Flow Matching models | 12 April 2026
Subham Sahoo: The Future of Discrete Diffusion | 8 April 2026
How is hardware reshaping LLM design? | 3 March 2026
Why are diffusion LLMs so fast? | 9 February 2026
An image is NxN words | Transformers in vision: ViT, DiT, MMDiT | 3 February 2026
The relationship between convolution & self-attention | 14 January 2026
Why are Transformers replacing CNNs? | 1 December 2025
Inside a Real RAG Pipeline (Continua AI Case Study) | 17 November 2025
David Petrou: Building social AI after 17 years at Google | 8 November 2025
Transformers & Diffusion LLMs: What's the connection? | 6 November 2025
Text diffusion: A new paradigm for LLMs | 6 October 2025
Hierarchical Reasoning Model: Substance or Hype? | 8 September 2025
The physics behind diffusion models | 18 August 2025
Reverse-engineering GGUF | Post-Training Quantization | 18 July 2025
Training models with only 4 bits | Fully-Quantized Training | 19 June 2025
How LLMs survive in low precision | Quantization Fundamentals | 19 May 2025