Derivinate NEWS

Schools Are Finally Getting Serious About AI — But Policy Is Messy

The Divide Is Getting Wider

Purdue University just became the first US college to make AI literacy a graduation requirement. Meanwhile, nearly 40% of American high schools ban generative AI outright. This isn't a coherent strategy emerging across education — it's chaos masquerading as policy.

The real story isn't whether schools should use AI. It's that schools have no idea what they're doing, students are self-teaching with zero guidance, and the institutions that move first might actually get this right.

What's Actually Working

Purdue's announcement in December 2025 wasn't just symbolic. The university built a concrete requirement: all students must complete coursework demonstrating AI literacy before graduation. This isn't a checkbox. It's integrated across five functional areas — learning, research, operations, partnerships, and student experience.

That specificity matters. Most institutions talk about "AI readiness" the way they talked about "digital transformation" five years ago: vague, well-intentioned, and ultimately meaningless.

SUNY went further in January 2025. The State University of New York system added AI ethics and responsible use to its general education requirements for all 64 campuses. Students now study the "ethical dimensions" of AI as part of information literacy. It's not a separate AI course — it's embedded where it matters, in how students think about sources, verification, and bias.

The Council of Independent Colleges launched AI-Ready, a network helping member institutions navigate AI across campus. They're not pretending there's one right answer. Instead, they're helping colleges draft their own policies, define acceptable use, and actually support faculty who are teaching with AI, not against it.

These aren't perfect solutions. But they're solving a real problem: institutions moving from panic to intentionality.

The Cheating Crisis That Isn't

Here's where the narrative breaks down. Teachers report AI cheating is "off the charts." The media ran with it. Schools panicked and started banning tools.

Then researchers looked at actual cheating rates.

They haven't moved since ChatGPT arrived.

This matters because it reveals what's really happening: schools are reacting to fear, not data. NYC Public Schools banned ChatGPT in January 2023, then quietly reversed the policy months later when they realized the ban was unenforceable and counterproductive. Students were using AI anyway — they just weren't learning how to use it responsibly.

The actual problem is worse than cheating. Only 1 in 5 high schools has any formal policy permitting AI use. Students are self-teaching with zero institutional guidance on ethics, source verification, or limitations. That's not a security problem. That's an education problem.

The Policy Mess

The data paints a fractured landscape. Only 13% of schools encourage AI use across all classes. Nearly 40% ban it outright. The remaining 47% have inconsistent, classroom-by-classroom policies that confuse students and teachers alike.

Even when schools do have policies, nearly half delegate enforcement to individual teachers — which means the policy is only as good as each teacher's understanding of AI. Spoiler: most teachers don't have that understanding yet.

The Philippines took a different approach. In March 2026, DepEd issued Department Order No. 003, formally approving AI use in public schools. Not banning it. Approving it. With guidance. That's a bet that AI literacy is a survival skill, not a threat.

Where the Real Work Is Happening

Universities offering structured AI education are seeing actual traction. Stanford and MIT both offer professional certificates in machine learning and AI, and enrollment is climbing. These programs aren't treating AI as a theoretical concern — they're teaching applied skills for jobs that already exist.

The Digital Promise organization published implementation briefs in late 2025 specifically designed to help schools move beyond policy paralysis. They're not prescriptive. They're practical: here's how to draft a policy, here's what acceptable use looks like, here's how to support teachers.

That's the pattern that works: specific requirements, institutional commitment, and actual support for implementation. Not vague principles and hope.

The Privacy Elephant in the Room

One thing every serious institution mentions: student data. Schools are understandably nervous about feeding student work into commercial AI systems. That's not paranoia — it's justified caution.

But it's also paralyzing institutions that could move faster. Some schools reject all AI tools because of data concerns, then watch students use ChatGPT anyway on personal devices with zero oversight. The policy creates the problem it's trying to prevent.

The institutions getting this right are finding middle ground: approved tools with clear data handling policies, transparent terms of service, and actual legal review. Not perfect, but functional.

The Real Test

The question isn't whether AI belongs in education. Students are already using it. The question is whether schools will lead or follow.

Purdue, SUNY, and the schools building intentional AI literacy programs are leading. They're saying: this is a skill you need, here's how we'll teach it responsibly, and here's what competency looks like.

The 40% of schools banning AI are following — reacting to fear, creating unenforceable policies, and guaranteeing that students will learn from unvetted sources instead of institutions designed to teach them.

The middle 47% are stuck in policy paralysis. That won't last. Within two years, the schools that moved first will have data. Graduation rates. Job placement. Employer feedback. The schools that banned AI will be quietly reversing course. And the ones in the middle will finally have a playbook.

The real story of AI in education isn't technology. It's institutional courage — or the lack of it.