DX Today | No-Hype Podcast & News About AI & DX

TurboQuant: Google's 6x KV Cache Compression, the Pied Piper Moment, and the New Inference Cost Math - May 7, 2026

Published: May 7, 2026

Duration: 12:40


Google Research dropped TurboQuant at ICLR 2026, a two-stage vector quantization algorithm that compresses LLM key-value caches to roughly three bits per coordinate while delivering an eight-fold attention speedup on H100 GPUs. The economic ripple is enormous: inference now accounts for 85% of enterprise AI spend, and TurboQuant's 6x memory cut could halve that bill, which is exactly why Micron and SK Hynix took a hit when the news broke. Hosted by Chris and Laura. The...
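To see why bits-per-coordinate matters so much, here is a back-of-envelope memory calculation, not TurboQuant's actual algorithm. The model shape below (80 layers, 8 KV heads, head dim 128, a 128K-token context) is a hypothetical Llama-style configuration chosen purely for illustration; the point is that dropping from 16-bit to ~3-bit coordinates shrinks the cache by 16/3 ≈ 5.3x, in the same ballpark as the headline 6x figure.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bits_per_coord):
    """Approximate KV cache size for one sequence.

    Counts every stored coordinate (keys + values) and scales by the
    per-coordinate bit width. Ignores quantization metadata (scales,
    codebooks), which adds a small overhead in practice.
    """
    coords = layers * kv_heads * head_dim * seq_len * 2  # x2: keys and values
    return coords * bits_per_coord / 8  # bits -> bytes

# Hypothetical 80-layer model with grouped-query attention, 128K context.
fp16_cache = kv_cache_bytes(80, 8, 128, 128_000, 16)
q3_cache = kv_cache_bytes(80, 8, 128, 128_000, 3)

print(f"fp16 cache: {fp16_cache / 2**30:.1f} GiB")   # ~39 GiB
print(f"3-bit cache: {q3_cache / 2**30:.1f} GiB")    # ~7 GiB
print(f"compression: {fp16_cache / q3_cache:.1f}x")  # ~5.3x
```

At these sizes the fp16 cache alone can exceed a single GPU's spare memory, which is why per-coordinate bit width, rather than raw FLOPs, increasingly drives long-context inference cost.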