There and Back Again: I Worry

When a committee meeting on AI 'best practices' turns into something else entirely, one mathematician learns that two small words might open more doors than any amount of expertise.


By Lew Ludwig

Image generated using Gemini, with prompts created by ChatGPT.

I was invited to advise a committee tasked with developing "best practices" for AI use at an institution. Before the meeting, I received the questions they planned to discuss. Some reflected genuine curiosity. But others troubled me. They weren't asking what I thought — they were framing my positions in the weakest possible light and asking me to defend them. It felt less like an invitation and more like a cross-examination.

Even so, I showed up. I'd been invited as the "expert," after all. Surely there would be room for a broader perspective once the discussion got going.

I was wrong.

The Finger Test

At the start of the meeting, I tried to establish some common ground. I asked everyone to rate their direct experience with AI on a scale of one to five — just hold up fingers. No judgment. No right answer.

The results were telling. A couple of threes. Several twos. Several ones. And one person who joked they'd hold up their middle finger instead.

I laughed along. But something shifted in that moment. I started to wonder what kind of conversation I was actually walking into.

I pressed forward anyway. I shared what I've been noticing: yes, AI has driven a wedge between students and faculty. That damage is real. But it's not because students and faculty suddenly became worse people. Much of the blame belongs to Silicon Valley's reckless rollout of these tools into education — without care, without thought, without institutional support.

But here's what's worse, I said: now it's faculty against faculty.

The Underground

One group of faculty strongly opposes AI and regards anyone who uses it with suspicion. As a result, many faculty who do use AI — who find it genuinely helpful — are doing so quietly. Behind closed doors. Almost underground.

A lot of these are junior faculty. Teaching four courses with three new preps. Using AI as a thought partner to help explain a concept, generate an example, manage the crushing weight of it all. They're not doing anything unethical. They're surviving. But they don't feel safe saying so openly.

Instead of the conversation expanding, it's contracting. Instead of understanding growing, it's polarizing.

I wanted to move the room away from anti-AI reflexes and toward something more careful. Something more honest.

I tried.

What Happened Next

Over the next fifteen minutes, the discussion slid into general complaints. How much they dislike AI. How it's ruining education. How it shouldn't be used.

I tried to redirect. This committee's charge was to propose "best practices" for the institution's departments. I asked what they meant by "best practices" in the first place.

They didn't have a clear answer.

So I offered one. Best practices don't simply appear because a committee decides they should. They emerge over time — through experience, testing, reflection, evidence, shared professional scrutiny. Right now, in education, people are still trying things. Some ideas are promising. But very little has risen clearly enough to deserve the label "best." And I wasn't sure any of us were ready to write that document yet.

I could see it land wrong. Could feel the defensiveness rise.

I withdrew after that. Kept quiet. Stopped trying to redirect the room. By the end, I wished them luck and encouraged them to keep an open mind — though I could hear the lack of optimism in my own voice.

Driving home, I kept replaying the meeting. And that's when I remembered something.

I Worry

A colleague — offering what I can only call wizardly advice — had once told me that when you want to open a conversation that's already closing, try starting with "I worry" instead of "you're wrong."

"I worry," I could have said, "that we're writing policy based on fear rather than understanding."

"I worry that junior faculty are going underground because they don't feel safe being honest about how they're using these tools."

"I worry that the wedge between faculty is only going to deepen if we don't create space for real conversation."

"I worry that while we debate, our students are making their own choices about AI — without our guidance, without our wisdom, and without our care."

There's something about that framing. It opens a door instead of slamming one shut. It signals concern without forcing the other side into immediate defensiveness. It says: I'm not certain. I'm troubled. I want to understand. That's not weakness. That's honesty. And honesty, I've learned, creates more space for dialogue than certainty ever will.

The Real Problem

Here's what troubles me most. This isn't just about one difficult meeting. It's about a pattern I'm seeing across institutions: we're trying to write the rules for a technology most of us haven't spent enough time with. And when that happens, fear tends to fill the gaps where experience should be.

The committee's instinct was to shut the door — restrict AI, discourage its use, hope the problem goes away. I understand the impulse. But when we refuse to engage with this technology, we don't take control. We surrender our agency. The tools keep evolving. The companies keep shipping. The students keep using them. And we've removed ourselves from the only conversation that matters — how these tools should actually be used.

The underground grows. Certainty gets mistaken for wisdom. And the faculty divide deepens.

Like any good hobbit, I'd rather be sitting at home next to the fire with my feet up. But I'm not willing to cede this conversation to Silicon Valley or to fear masquerading as policy.

Next time I'm invited into a room like that one, I'll start differently. Not with expertise. Not with data. Not with a finger test. I'll start with two words: I worry. And I'll mean them.

AI Disclosure: This piece was written in partnership with Claude, which helped me organize the structure, edit for clarity, and identify gaps. The ideas, experiences, and commitment are entirely mine.


Lew Ludwig is a professor of mathematics and the Director of the Center for Learning and Teaching at Denison University. An active member of the MAA, he served on the project team for the MAA Instructional Practices Guide and was the creator and senior editor of the MAA's former Teaching Tidbits blog. His new book, The Science of Learning Meets AI, co-authored with Todd Zakrajsek, was published in April 2026.