
"In a hole in the ground there lived a hobbit." Tolkien's opening line has always resonated with me because it captures something essential about human nature: when the world outside gets overwhelming, we retreat to our hobbit-holes. We close the round door, light the fire, and tend to what we know.
I suspect many of you are feeling the pull toward your own hobbit-holes right now. The news cycles have been relentless. Whether you're processing national headlines, navigating institutional pressures, or simply trying to stay steady in your own department, your cognitive load is maxed out.
There's solid psychological research backing what we all feel intuitively: when people experience chronic stress or threat, they become more rigid in their thinking. We default to familiar routines. We resist new challenges. It's not weakness—it's a survival mechanism psychologists call "threat rigidity." Our brains, already working overtime to process external chaos, simply don't have energy left for venturing into unfamiliar territory. When change fatigue sets in, we cling more tightly to what we already know how to do.
So here's what I'm not going to do this month: I'm not going to ask you to explore some exciting new AI capability or fundamentally rethink your pedagogy. Instead, I want to meet you in your hobbit-hole and help you quietly do what you're already doing—but better.
If you're like many of the faculty I talk to, you're already using AI. Maybe not in your classroom yet, but for your own work. You're using it to draft difficult emails to struggling students. You're generating quiz variations when you need a makeup exam. You're brainstorming ways to revise the assignment that never quite works. You're doing it quietly, maybe even a bit guiltily, but you're doing it because it genuinely helps. So if AI is supposed to make your life easier, why does it sometimes feel like it's creating extra work?
You've probably run into this frustration: you spend 30 minutes building context with ChatGPT about your specific course situation. You explain your student population, your learning objectives, the constraints you're working within. You get genuinely useful suggestions. Then you return the next day to continue the conversation, and it's like talking to a stranger. You’re back to re-explaining everything. Or worse—you're deep into conversation and the AI starts contradicting something it said twenty minutes ago, or it asks you a question you already answered.
If you haven't hit this wall yet, you will.
Why It Forgets (The Short Version)
Every AI model has a "context window"—a finite limit on how much text it can actively consider. When your conversation gets long enough, the early parts get compressed or dropped entirely. This isn't a bug; it's by design. Processing everything from the beginning every time would be prohibitively expensive and slow.
Think of the context window as the AI's breadcrumb trail—its attempt to remember where you've been. But as conversations grow, the wind can pick up and sweep those breadcrumbs away. The good news? You can leave your own breadcrumbs, stored somewhere the wind can't reach.
Strategy 1: Leave Yourself Better Breadcrumbs (5 Minutes)
The quickest fix is to be deliberate about what you save. Instead of trusting the AI's trail, you create your own markers—ones you control.
When you're ending a productive session, try this prompt (formatted here for easy copying):
"Before we end, create a summary I can use to resume this work later. Include:
1. The main goal or problem we were working on
2. Key decisions, conclusions, or insights we reached
3. Important constraints, context, or parameters we established
4. Open questions or unresolved issues
5. Where we stopped and the logical next step
Format this so I can paste it directly into a new conversation."
Save that summary somewhere you can find it—a Google Doc, a notes file, wherever you keep project materials.
When you're ready to continue, start a fresh conversation with this structure:
"I'm continuing a previous conversation. Here's the summary of where we left off:
[Paste your summary here]
Please confirm you understand this context, note any clarifying questions, and then let's pick up at [specific next step]."
That confirmation step is crucial. It catches misunderstandings before you're ten minutes deep into a conversation heading the wrong direction. This pairing is also completely platform-agnostic: whether you're using ChatGPT, Claude, or Gemini, these prompts work the same way.
Strategy 2: Build a Proper Workspace (30-Minute Setup)
Breadcrumbs are great for quick returns. But if you're working on something all semester—a course redesign, a new unit, an evolving assessment strategy—you need something more permanent than breadcrumbs. You need a shelf.
Both ChatGPT and Claude now offer "Projects": dedicated workspaces where you can store instructions and reference files that stay available across conversations. Think of it as adding a library shelf to your workspace: a place where the important context can live between conversations. You may still need to point it in the right direction: "Review the project instructions" or "Check the syllabus I uploaded."
The key is building thoughtful project instructions—they shape how the AI approaches everything in that workspace. You don't have to draft these instructions from scratch: you can have the AI interview you and develop them collaboratively. The whole process takes about 30 minutes. Here's the prompt:
"I want you to help me think through and develop project instructions for an LLM project folder. These instructions will live in the project, ready for the AI to review when I point it there.
Act as a collaborative thinking partner with expertise in mathematics pedagogy at the college level. For each element below, don't just ask me a question and move on—help me brainstorm, probe my initial answers, offer examples I can react to, and surface considerations I might not have thought of. Push back if my answers are vague.
The elements we need to develop:
1. Scope and goal — What is this project about? What are you trying to accomplish?
2. Your role and context — Who are you? What's your background? What kind of institution do you teach at?
3. Key constraints — What are you working with? (student population, course structure, resources, limitations)
4. AI persona — How should I behave toward you? Should I act as a peer who challenges ideas? A supportive brainstorming partner? An editor who catches blind spots? How much pushback do you want?
5. Communication style — How should I talk to you? Do you want concise responses or detailed explanations? What level of pedagogical literacy should I assume? What frustrates you about AI conversations?
6. Workflow — How should our work proceed over time? Should we work in phases? Do you want check-ins before moving forward? Iterative refinement or comprehensive planning upfront?
7. Reference expertise — What knowledge or frameworks should I draw on? Are there specific books, pedagogical approaches, or institutional values I should align with?
8. Preferred formats — How do you want outputs structured? LaTeX? Plain text? Specific document formats? Particular organizational schemes?
9. Running summary — Is there prior work or context to carry forward from previous conversations?
10. What to avoid — What should I NOT do or suggest? What approaches, tools, or assumptions should I steer clear of?
Work through these one at a time. When we've fully explored each element, synthesize everything into a concise draft I can paste directly into my project instructions.
Start with scope and goal—and don't let me off the hook with a one-sentence answer."
This turns the AI into a brainstorming partner rather than a passive note-taker. It will probe vague answers, suggest possibilities you haven't considered, and help you think through what actually matters for your work.
Once you've built those instructions and saved them in your project settings, every new conversation within that project starts with that shelf within reach. You save yourself hours of repetitive explanation all term.
Making Your Hole More Comfortable
I started this piece by acknowledging that we're all retreating to our hobbit-holes right now—and that this is a reasonable response to an unreasonable amount of external pressure.
But here's the thing about a well-organized hobbit-hole: it's not about hiding from the world. It's about having a space that actually works for you. A place where you don't have to start from scratch every time you sit down to work.
That's what these strategies offer. Not a grand new vision for AI in education. Just practical ways to make the work you're already doing a little less exhausting. If you only have five minutes, use the exit summaries. If you can invest thirty minutes once, set up a project with a library shelf of instructions and files you can return to all semester.
I'm not asking you to step onto the road right now—not when so many of us are struggling to keep our feet. I'm suggesting you furnish your hobbit-hole properly. The round door will still be there when you're ready.
But right now? Make your hole more comfortable.
AI Disclosure: This piece was written in partnership with Claude, which helped me organize the structure, edit for clarity, and identify gaps. The ideas and experiences are mine.

Lew Ludwig is a professor of mathematics and the Director of the Center for Learning and Teaching at Denison University. An active member of the MAA, he recently served on the project team for the MAA Instructional Practices Guide and was the creator and senior editor of the MAA’s former Teaching Tidbits blog.