There and Back Again: Why I’m Finally Trying Alternative Grading

Three years ago, in the early months after ChatGPT arrived, I predicted that “Alternative Grading in the Age of AI” would be the next best-selling book on the faculty development circuit. Here we are three years later, and while that book has yet to materialize, I’m finally ready to dip my Hobbit-like toes into this realm.

For years, I avoided alternative grading. Not because I didn’t believe in it—experts like David Clark, Robert Talbert, and Rachel Weir have laid out compelling frameworks. I avoided it because the organizational infrastructure felt overwhelming. The complexity of managing standards, tracking individual student progress across multiple attempts, and generating reassessment variations kept me stuck in my traditional points-based grading rut.

Last semester changed that. Using AI to teach writing and design thinking—courses I'd never taught—showed me something crucial: AI can handle exactly this kind of organizational complexity. This semester, I'm teaching Linear Algebra and Differential Equations, a course squarely in my wheelhouse. If AI can help me teach unfamiliar content, it can help me reimagine how I assess familiar content.

Before ChatGPT arrived, I knew traditional points-based assessments were failing us. They created a transactional game where students optimized for points rather than understanding, trading minimum effort for maximum points. Deep down, many of us recognized this problem, but we soldiered on. Then ChatGPT laid waste to our illusions. Paid models can comfortably pass, if not ace, any undergraduate assessment we throw at them. Students know it. They're farming out assignments, outsourcing their thinking, asking the obvious question: “Why bother learning this when AI can do it for me?”

For decades, we avoided alternative grading because the traditional system kind of worked. Students accumulated points, we accumulated grades, and everyone understood the transaction. It wasn’t perfect—students crammed, forgot, gamed the system—but it was manageable. We had inertia on our side.

Here’s the irony I couldn’t have predicted in early 2023: the same technology that broke my traditional assessments is the tool that makes alternative grading feasible. AI excels at exactly this kind of complex coordination. It can’t decide what mastery looks like in differential equations, but it can manage the systems that let me say to each student, “Here’s where you are, what you still need to demonstrate, and how you can show me,” instead of just handing back a score.

I’m not building this from scratch. Clark and Talbert’s Grading for Growth outlines four pillars: clearly defined standards, helpful feedback, marks that indicate progress, and reassessment without penalty. The framework is there, but translating it to my Linear Algebra and Differential Equations course—that’s the work AI makes possible.

For this first pass, I’m keeping the system simple and workable. I’m drafting about a dozen standards—the “must be able to do” moves in Linear Algebra and Differential Equations. Each standard will be assessed on a four-level rubric (Emerging / Developing / Proficient / Exemplary). Students will get lots of feedback early, but the goal is straightforward: earn “Proficient” on each standard.

Reassessment is the key. When a student isn’t there yet, they can try again using a new version of the prompt. Before that second attempt, they complete a short prep step such as corrections, a brief reflection, or targeted practice. I’m still working out the details, but the north star is clear. Grades should communicate mastery, not point accumulation.
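I haven’t built any of this yet, but here is a minimal sketch, in Python, of the kind of bookkeeping I’m asking AI to help me manage. Everything in it is a hypothetical placeholder: the three sample standards, the StudentRecord class, and the student named Alex are illustrations, not my actual course materials.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Level(IntEnum):
    """The four rubric levels, ordered so comparisons work."""
    EMERGING = 1
    DEVELOPING = 2
    PROFICIENT = 3
    EXEMPLARY = 4


# A few sample standards; placeholders, not my actual course list.
STANDARDS = [
    "LA1: Solve a linear system by row reduction",
    "LA2: Decide whether a set of vectors is linearly independent",
    "DE1: Solve a first-order linear ODE with an integrating factor",
]


@dataclass
class StudentRecord:
    name: str
    # Full attempt history per standard, so reassessments are preserved.
    attempts: dict[str, list[Level]] = field(default_factory=dict)

    def record_attempt(self, standard: str, level: Level) -> None:
        self.attempts.setdefault(standard, []).append(level)

    def status(self, standard: str) -> Level | None:
        """Best level earned so far; a retry can never lower a mark."""
        history = self.attempts.get(standard)
        return max(history) if history else None

    def still_needed(self) -> list[str]:
        """Standards not yet at Proficient: the student's to-do list."""
        return [
            s for s in STANDARDS
            if (self.status(s) or Level.EMERGING) < Level.PROFICIENT
        ]


# One student, two attempts on one standard: the second try counts,
# and the first carries no penalty.
alex = StudentRecord("Alex")
alex.record_attempt(STANDARDS[0], Level.DEVELOPING)
alex.record_attempt(STANDARDS[0], Level.PROFICIENT)
print(alex.still_needed())  # the two standards Alex hasn't yet shown
```

The point of the sketch is the design choice, not the code: keep every attempt, report only the best mark, and let something like still_needed() turn the gradebook into a to-do list. Generating fresh reassessment prompts and summarizing who still needs which standard is exactly the clerical work I plan to hand off to AI.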

With that plan on the table, I’m surprised by what does and doesn’t worry me. I’m not anxious about learning the system. I actually find that energizing, as last semester’s writing course taught me. I’m also not worried about public accountability. If anything, telling the MAA community is insurance against backing out.

What gnaws at me is student resistance. These students have spent more than 13 years in points-based systems. They know how to play that game. They’ve been trained to ask “How many points is this worth?” and optimize accordingly. Now I’m asking them to unlearn that reflex and trust a different approach, one where reassessment is expected and mastery matters more than accumulating points.

But the moment I start writing standards, I hit the real question. What does it mean to “understand” differentiation when AI can compute derivatives instantly? What’s worth assessing when the execution is automated? What does learning look like when AI can do the math?

AI hasn’t just exposed that this system was broken. It’s removed our last excuse for not fixing it. When a chatbot can differentiate polynomials faster and more accurately than any human ever will, when it can solve differential equations while we’re still setting up the problem, the question “Why should students learn this?” becomes impossible to dodge with “because they might need it someday.”

So we’re left with harder questions. What mathematical thinking can’t be automated? What does mastery mean when machines can execute but can’t judge? How do we design learning experiences that develop the judgment, reasoning, and sense-making that AI fundamentally lacks?

Alternative grading focuses on mastery rather than points, on feedback rather than scores, on growth rather than performance. It might be one answer. Or it might not. But it forces us to articulate what we actually value in student learning, stripped of the comfortable camouflage that points-based systems provide.

The real risk isn’t that my experiment fails. It’s that we keep avoiding this reckoning, keep patching the old system, keep pretending that AI hasn’t fundamentally changed what learning mathematics needs to mean. Back in early 2023, I predicted someone would write a book on alternative grading in the age of AI. Three years later, I’m realizing we’re all writing that book now, whether we planned to or not.

So here I am, toes officially dipped. I haven’t implemented anything yet, and I haven’t faced that first wave of student resistance. All I’ve done is commit.

I’ll report back on what I learn. The resistance. The mid-semester adjustments. Whether that “why bother” question finds an answer.

What small experiment might work in your wheelhouse? If this piece nudges you to try it, I’ll count that as a win. The adventure begins in two weeks. I’m nervous. I’m excited. I’m committed. See you on the other side.

AI Disclosure: This piece was written in partnership with Claude, which helped me organize the structure, edit for clarity, and identify gaps. The ideas, experiences, and commitment to the experiment are entirely mine.


Lew Ludwig is a professor of mathematics and the Director of the Center for Learning and Teaching at Denison University. An active member of the MAA, he recently served on the project team for the MAA Instructional Practices Guide and was the creator and senior editor of the MAA’s former Teaching Tidbits blog.