By Lew Ludwig
“Alright, now that we understand how to write with a dominant impression, our next essay will use a structure known as ‘they say/I say,’ which is like entering a conversation already in progress.”
“Now that we have completed our empathy map and have a better understanding of our stakeholders, we will start prototyping a solution to test.”
Three years ago, as a mathematician directing our teaching center, I couldn't imagine saying these lines to my classes, much less understanding what they meant. But now, as we approach the third anniversary of ChatGPT, I find myself teaching introductory writing and design thinking. Who knew this viral app, which reached one million users in five days, would upend my career, education, and society itself?
This semester I’m teaching two brand-new courses, neither of which has ever been offered at our university. The first is W101: Finding Your Voice in the Age of AI, a first-year writing workshop. The second is INDT 184: When the Liberal Arts Meets AI, an interdisciplinary course where we explore issues created by AI and develop solutions. Both exist because of this nascent technology. But here's the ironic part: I couldn't teach either of them without it.
Let me explain. While I write well enough to get a few of you to read this column, I'm not a trained writer, and I'm definitely not trained in teaching writing. However, I bring real assets to the classroom: I know how to motivate students, I'm competent at facilitating learning, and from my teaching center days I have a wealth of pedagogical techniques, like the jigsaw method, the carousel method, and quick writes, that I can deploy strategically. What I didn't know was the first thing about the dominant impression or the “they say/I say” method of argumentation. And I knew very little about design thinking beyond a TED Talk or two.
Enter my unlikely teaching assistant: generative AI. Like Bilbo venturing beyond the Shire, I’ve needed a guide for this unfamiliar territory. With AI, I create both flawed and exemplary writing samples for students to analyze and dissect—the non-examples approach any good mathematician would use. I design scaffolded writing projects that blend contract grading and portfolio assessment, carefully calibrated to keep students engaged without overwhelming them. For the design thinking course, I develop hands-on brainstorming activities after students conduct stakeholder interviews, helping them transform raw data into empathy maps that reveal what their research subjects truly need.
Not everything worked. My first attempt at using AI to generate a rubric produced something laughably generic. I quickly learned that AI is a tool, not a replacement; I still need to bring my pedagogical judgment to every interaction. This human-AI partnership goes deeper than generating examples and activities.
Here's what humbles me: my students are tackling the hard problems I’m still figuring out myself. In W101, they’re wrestling with complex ethical issues caused by AI: creative work and copyright, labor and employment, mental health and well-being, truth and misinformation. We used “AI and the environment” as our running example throughout a four-week unit, not just to teach the “they say/I say” style, but to help them critically examine AI’s environmental costs and then apply that same scrutiny to the streaming, social media, and video calls we’ve normalized without question.
The pattern intensifies in my interdisciplinary course. On the first day, I set an audacious goal: identify a problem AI is causing on campus and propose a solution. I knew our students were struggling with this technology as much as we faculty were, but we rarely created space to hear their voices. This course would be that space.
They did not disappoint. My students identified a cascade of concerns: without clear ethical boundaries, students default to using AI as a shortcut, which weakens their independent voice and critical thinking skills. Sound familiar? These are precisely the problems my W101 students are researching. In fact, these problems are the story of higher education right now. While we educators puzzle over how to respond to AI, our students are already living with, and thoughtfully analyzing, its consequences. Both courses, the writing workshop and the interdisciplinary design thinking course, converge on the same realization: we're all navigating uncertainty together.
I expected to feel like an impostor all semester. Instead, I’ve discovered that not knowing everything creates space for genuine intellectual partnership with students—something harder to achieve when you’re the unquestioned expert in the room.
Returning to my Tolkien throughline, I'm not the mathematician who started this journey three years ago. I've become something I never trained to be—and I needed AI to help me get here. This makes some of my colleagues uncomfortable. It makes me uncomfortable sometimes too.
But here's what I've learned: the discomfort is the point. We're all being asked to become educators we weren't trained to be. The technology that disrupted our classrooms is also the tool that helps us adapt to that disruption. That's the paradox we're living in—and teaching through.
You don't need to teach writing or design thinking to experience this transformation. Whatever your discipline, AI is reshaping what expertise means and how we transfer knowledge to students.
The question isn't whether you'll be changed by this journey, but whether you'll be intentional about that change. The alternative—clinging to expertise in our narrow domains while the world transforms around us—leaves our students to navigate AI alone, without the guidance they need and deserve.
I never imagined I’d be here, teaching courses I couldn’t teach without the technology that made them necessary. But perhaps that’s the most important lesson of all: none of us can predict where this journey leads. We can only decide whether we’ll take it, and whether we’ll bring our students along as partners rather than passengers. Three years in, I’m still learning what that partnership looks like. I suspect I’ll be learning for years to come.
AI Disclosure: This piece reflects my experiences and ideas. I used Claude to help me edit and sharpen the writing, practicing the human-AI partnership I described.

Lew Ludwig is a professor of mathematics and the Director of the Center for Learning and Teaching at Denison University. An active member of the MAA, he recently served on the project team for the MAA Instructional Practices Guide and was the creator and senior editor of the MAA’s former Teaching Tidbits blog.