By Lew Ludwig

I’ve been invited to the Conference Board of the Mathematical Sciences (CBMS) annual meeting to discuss AI in the classroom. I know what questions will come up—they’re the ones people ask me at every webinar and workshop: How do we secure assessment in an AI-driven world? How do I use AI in the classroom effectively?
I gently push back on both questions, not because they’re wrong to ask, but because they let technology drive our pedagogy. It’s the clicker craze all over again: everyone wanted student response systems because they were cool and novel, but few stopped to ask what students were actually learning. The question isn’t “How do I use AI?” but “What do my students need to learn, and does AI help or hinder that?”
And as for securing assessment? AI didn’t suddenly break our ability to evaluate students fairly; it just made it impossible to ignore that our evaluations were never as secure as we pretended. Even before ChatGPT, students had Chegg, GroupMe, solution manuals, and friends in the dorm. Take-home work was never a pure window into individual understanding. When a chatbot can ace our two-midterms-and-a-final setup, it confirms what we suspected all along: crammable, pattern-driven exams were measuring short-term recall more than durable learning. AI hasn’t broken academic integrity; it has revealed that honor codes and stern syllabus paragraphs were never a substitute for designing tasks that actually require students to think with us, not just perform for us.
So what do we do? Here’s the deeper question AI forces us to confront: if a machine can differentiate polynomials faster and more accurately than any human ever will, why are we still requiring every student to master that skill? The answer used to be “because you might need it someday.” That excuse just evaporated. What students actually need—the mathematics AI can’t replace—is judgment: knowing when a statistical claim is trustworthy, whether a model’s assumptions hold, what question to ask in the first place. That’s not calculus for everyone. That’s statistics, data science, and quantitative reasoning—the pathways we’ve been advocating for years but never fully committed to. AI just removed our last excuse for inertia.
The mathematical community has been preparing for this moment for more than a decade. NCTM’s Catalyzing Change called for ending student tracking and moving beyond calculus as the singular goal of high school mathematics. ASA’s GAISE reports argued for statistical literacy for all students, not just those bound for STEM careers. AMATYC’s IMPACT standards promoted “New Mathematical Pathways” that recognize different students need different mathematical preparation. The Dana Center’s Launch Years Initiative has been working with states to align high school and college mathematics around multiple coherent routes: calculus for future engineers, statistics and data science for students heading into social sciences and business, quantitative reasoning for informed citizenship. The vision has been clear: broaden access, create relevance, meet students where their futures actually lead.
Let me tell you about a high school in the Mountain West. Last spring, their longtime AP Statistics teacher retired. The math department needed to fill the position. They had veteran teachers with advanced degrees, decades of classroom experience, deep relationships with students and families. When the department chair raised the question of who might teach AP Stats, the room went quiet. It wasn’t resistance or lack of interest. It was something more fundamental: most teachers hadn’t engaged with statistics since a single required course in college, if that. The traditional high school mathematics curriculum simply hadn’t required it. Their entire careers had been built around algebra, geometry, precalculus, and calculus. That’s what the standards emphasized. That’s what parents and colleges expected. That’s what it meant to be a successful math teacher. And now, after twenty years of mastering that content, the prospect of teaching statistics felt like starting over in a new mathematical domain. Not because they couldn’t learn it, but because everything in their professional training said good teachers already know what they teach.
Eventually, they asked the newest teacher in the building—a 24-year-old who’d spent the previous year teaching Algebra I and II—if he’d be willing to add AP Statistics to his load. Math and computer science degree. Zero teaching credentials. Never took an education course. Never student-taught. And crucially: his lone encounter with statistics was a college probability and statistics sequence—helpful foundation, but not preparation for teaching AP Stats. His path to this moment was unconventional: emergency substitute for AP Calculus one semester, full-time Algebra teacher the next. Before any of that, herding sheep in New Zealand.
This should have been a disaster.
Instead, every Sunday night he spends a few hours with AI, working through the next week’s content. By Monday morning, he knows enough to teach it. His students are engaged. The parents are satisfied. The department chair is relieved.
The veteran teachers in his building have decades of pedagogical expertise he’s only beginning to develop. But they couldn’t teach statistics because doing so would mean admitting—to themselves, to colleagues, to students—that they were still learning. Meanwhile, this novice treats every week as a learning opportunity, and his students see that as authenticity, not weakness. He succeeds not despite his inexperience, but because he never learned that learning was shameful.
This is where AI changes everything. Not because it makes teaching easier, but because it makes learning new material manageable for teachers already managing everything else that comes with the job. Consider the cognitive load difference. A novice teacher has to figure out everything simultaneously: classroom management, pacing, assessment design, building relationships, and learning content. It’s overwhelming, which is why so many leave the profession in their first five years. But a veteran teacher already has most of that figured out. They know how students learn. They know how to explain a concept three different ways when the first approach doesn’t land. They know how to read a room, adjust on the fly, build the relationships that make hard work feel worthwhile. All that expertise doesn’t disappear when the content changes—it becomes their foundation.
What they need is content expertise. And that’s exactly what AI can provide—not as a replacement for their teaching skills, but as a way to extend them. A veteran teacher can take AI-generated explanations of the Central Limit Theorem and immediately recognize which approach will work for their students. They can spot the three most common p-value misconceptions and design classroom activities that address them before they even arise. They can take a sequence of probability problems and adjust the scaffolding based on years of watching students struggle with abstraction. The AI provides the mathematical content. The veteran teacher provides everything that makes that content learnable. They’re not starting over—they’re applying decades of expertise to new territory.
This is the lever we’ve been missing. Not curriculum materials—we have those. Not policy support—NCTM, ASA, and AMATYC have provided that. What we’ve lacked is a practical way to help experienced teachers become learners again without asking them to sacrifice their professional identity in the process. AI doesn’t replace their expertise. It lets them extend it into domains they never had time to master.
When I address the CBMS, I won’t have a simple answer about securing assessment or using AI in the classroom. What I will offer is this: AI has exposed what was already broken, but it’s also revealed what’s finally possible. We have the vision for multiple mathematical pathways. We have the policy frameworks. We have the curriculum. What we haven’t had is a practical way to help the teachers we already have—experienced, skilled, committed professionals—expand their content expertise without abandoning their identities as experts.
AI isn’t the enemy of good teaching. It’s the tool that might finally let us build the mathematics education our students actually need. The question isn’t whether AI will change education. It’s whether we’ll use this moment of disruption to finally implement the reforms we’ve been talking about for a decade, or whether we’ll spend another ten years patching a system that was already failing before ChatGPT ever arrived.
I know which conversation I’d rather have. Are we ready to have it together?
AI Disclosure: This piece reflects my experiences and ideas. I used Claude to help me edit and sharpen the writing, practicing the human-AI partnership I described.
A Note on This Month’s Image: This month’s image features Charley, our family’s faithful Goldendoodle. He passed away after a courageous battle with lymphoma. His main goal in life seemed to be bringing joy and happiness to others—something he achieved every single day he was with us.
And the young 24-year-old hobbit from our story, taking the road less traveled? That’s my son Bjorn, who is now in his second year at Waterford School.

Lew Ludwig is a professor of mathematics and the Director of the Center for Learning and Teaching at Denison University. An active member of the MAA, he recently served on the project team for the MAA Instructional Practices Guide and was the creator and senior editor of the MAA’s former Teaching Tidbits blog.