
Are LLMs Glorified Compression Algorithms? A New Take from an Ancient Perspective
Published: November 26, 2025
Duration: 35:13
Research
The debate over Large Language Models (LLMs) often uses the term "glorified compression algorithm" as a modern litmus test, separating reductionists who view LLMs as "blurry JPEGs" of the internet from proponents who see compression as the very proof of emergent intelligence. By synthesizing information theory with ancient philosophy, we find that LLMs are indeed powerful compression systems, but the nature of this compression elevates the process to a cognitive function.
LLMs are mathematically optimized to minimize cross-entropy loss, which is equivalent to minimizing the number of bits required to represent their training data (a minimal sketch of this equivalence follows below). This process...
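To make that equivalence concrete, here is a minimal Python sketch under assumed values: the probabilities below are hypothetical and stand in for whatever a trained model would assign to the true next token. The point is only that the average cross-entropy, converted from nats to bits, is exactly the per-token code length an ideal lossless coder driven by the model would need.

```python
# Minimal sketch (toy numbers, not from the episode): average cross-entropy
# loss, converted from nats to bits, equals the bits per token an ideal
# arithmetic coder guided by the model would spend on the data.
import math

# Hypothetical probabilities the model assigns to the actual next token
# at each position of a tiny corpus.
predicted_probs_of_true_tokens = [0.50, 0.25, 0.80, 0.10]

# Cross-entropy loss as usually reported during training (natural log, nats).
loss_nats = -sum(math.log(p) for p in predicted_probs_of_true_tokens) / len(
    predicted_probs_of_true_tokens
)

# Dividing by ln(2) converts nats to bits: the average code length per token
# that a lossless coder driven by the model's predictions would achieve.
bits_per_token = loss_nats / math.log(2)

print(f"cross-entropy: {loss_nats:.4f} nats/token")
print(f"ideal code length: {bits_per_token:.4f} bits/token")
```

Lowering the loss and shrinking the compressed size of the training data are, on this view, the same optimization seen from two angles.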