The Homework Machine: What AI Is Really Doing in Classrooms – 1319


Watch the episode on YouTube.

Show Notes

About the Guest(s)

Justin Reich is an Associate Professor of Digital Media at MIT in the Comparative Media Studies/Writing program and the director of the Teaching Systems Lab. He is a longtime educator and host of the TeachLab podcast. His research focuses on how learning technologies shape teaching and learning in real classrooms and what actually happens when schools adopt new tools. He brings a thoughtful, historically grounded perspective to how generative AI is transforming education.

Jesse Dukes is a journalist, comedian, and audio storyteller with a long career producing narrative audio. He works with MIT’s Teaching Systems Lab on The Homework Machine project, bringing teachers’ and students’ voices into the public conversation about AI in schools. Previously at WBEZ Chicago, he has produced award‑winning radio and documentary work and has a special talent for capturing humanity and humor in complex educational stories.

Episode Summary

Generative AI is entering classrooms quickly—but not evenly, and not without complications. In this conversation, Justin Reich and Jesse Dukes share what they’ve learned while creating The Homework Machine, a seven‑part narrative podcast about how students and teachers are navigating AI in real time.

They discuss why AI is not an unambiguous good, the wide variation in how schools are responding, and why teachers are often left to figure things out on their own without guidance or professional development. They explore the dangers of repeating past mistakes made with the early web, including flawed digital literacy frameworks and rushed adoption of new technologies.

Tim, Justin, and Jesse talk about the potential of AI to support accessibility, instructional design, and differentiation—while also examining the risks of over‑relying on automation in the craft of teaching. They describe four emerging strategies educators are using to handle student misuse of AI, as well as the importance of humility, domain expertise, and listening to teachers and students as the field evolves.

The conversation also steps briefly into the behind‑the‑scenes process of producing a narrative podcast—including the story behind the real harpsichord used in the show’s sound design—and wraps with a lighthearted karaoke question.

Read the transcript

Jesse Dukes
AI has not been unambiguously good. It has raised challenges and complications.

Justin Reich
None of this is inevitable. OpenAI and Anthropic and DeepSeek would like us to believe this is inevitable. It’s not inevitable. The technologies that we choose to bring into schools are our choices.

Tim Villegas
Hey friends. Welcome back to Think Inclusive: Real conversations about building schools where every learner belongs. I’m your host, Tim Villegas. Today’s episode is about how generative AI is showing up in classrooms and what that actually means for educators. We’re trying to balance access, integrity, and inclusion.

This conversation digs into the messy middle—the hopes, the headaches, and the big questions about what helps students learn versus what simply feels new. Our guests today are Justin Reich, Associate Professor of Digital Media in the Comparative Media Studies/Writing Department at MIT, director of the Teaching Systems Lab, longtime educator, and host of the TeachLab podcast, as well as Jesse Dukes, journalist, comedian, and veteran audio storyteller working with MIT’s Teaching Systems Lab on the Homework Machine project.

We talk about why AI has landed unevenly in schools, how teachers are experimenting to support students with different needs, and why listening closely to classroom voices matters more than rushing toward the next shiny framework. They also share what they’re learning from hundreds of students and teachers trying to figure out when AI supports learning and when it gets in the way.

And here’s something fun to look forward to: you’ll hear how a real harpsichord ended up in the production of The Homework Machine. Before we meet our guests, I want to tell you about our sponsor. This episode is brought to you by IXL. IXL is an all‑in‑one platform for K–12 that helps boost student achievement, empowers teachers, and tracks progress in one place.

As students practice, IXL adapts to their individual needs so that every learner gets just‑right support and challenge, and each student gets a personalized learning plan to close gaps. Check it out at ixl.com/inclusive. Again, that’s ixl.com/inclusive.

All right, after a quick break, it’s time to think inclusive with Justin Reich and Jesse Dukes. Catch you on the other side.

Tim Villegas
Jesse Dukes and Justin Reich, welcome to the Think Inclusive Podcast. Really excited for this conversation.

Jesse Dukes
Thanks. It’s great to be here.

Justin Reich
Thanks for having us, Tim.

Tim Villegas
All right. You are here because of the limited‑series podcast that you have called The Homework Machine. And I just want to say upfront—it is fantastic.

And thank you. I think I’m five episodes in out of seven. I can’t exactly remember, but I think “Busted” was the last one I listened to.

Justin Reich
It might be the best. Might be my favorite episode.

Tim Villegas
It is really good. And I really appreciate the storytelling and how relevant it is—how relevant it is to our audience.

I feel like the podcast was written for… you know, like, I think I am your audience.

Jesse Dukes
Good, good. I’m glad. We didn’t know you when we were writing the scripts, but I’m glad it feels that way.

Justin Reich
Like, “I think this would be a good one for Tim.”

Tim Villegas
Yeah. Oh, thanks guys. And I think that how our audience—and how I—listen to it may be a little bit different than maybe what you were looking for as far as audience goes. The people that listen to Think Inclusive are really laser‑focused on how this can improve the lives of learners with disabilities; how it can increase access and opportunity; accessibility in curriculum and in schools in general.

And so that’s kind of my lens as I’ve been listening and hopefully our conversation too. We’ve had discussions on this podcast about AI and the implications of how it affects learners with disabilities. So I’m wondering if you could set the table here. Whoever wants to go first: what’s the big idea you want people to know about what you’ve learned about how AI has impacted education?

Jesse Dukes
There are a couple things I could say to start. If we’re talking about education broadly and not specifically the domains of accessibility and learners with disabilities, I think the one big idea is that AI has not been unambiguously good. It has raised challenges and complications. That’s not to say there aren’t promising potential affordances of AI—particularly in accessibility and helping learners with disabilities.

I think those are domains in which there’s reason to be optimistic about AI’s capabilities as we come to understand what it’s capable of doing more and more. But to me, the big idea is: this has been something that is challenging for schools, for teachers, and for students to make sense of and figure out.

Justin Reich
I’d say one place we could start is with methods. A thing we believe strongly is that when there are interesting and important challenges in education, the first thing to do is listen very closely to the people who are closest to those challenges. If you’re a school leader or district leader—heaven forbid you work in a state office or nonprofit—your day‑to‑day is probably far removed from what’s actually happening in classrooms. And that is the place where our attention should turn first.

Really listen to students and teachers. Tell us what’s happening. What’s going on? What are you seeing? What are you doing? How are you experiencing this?

I think we often try to design schools for the way we wish people learned, rather than the way they actually learn; for the way we wish schools operated, rather than the way they actually operate.

So there’s a method piece that ties to a belief that many of the good ideas about GPTs—either the best possible applications or the most important constraints—are going to come from the discoveries and experimentation of practicing educators and students. They can tell us a lot about when they’re learning, when they’re not, what works for them, what doesn’t.

They don’t always make good decisions, but a lot of times they can tell you a lot about what they’re thinking as they’re making bad decisions.

Tim Villegas
Yeah. As I’ve been listening to these episodes, I don’t know if this is a perfect comparison, but when the internet and search happened in the late nineties, I was working at a university library. My job was to help people find research, and we had these search‑engine‑type things specifically for research articles.

Jesse Dukes
WorldCat and Lexis.

Tim Villegas
There you go. Yeah.

Jesse Dukes
I was there, man.

Tim Villegas
And then all of a sudden my boss says, “Yeah, so there’s this thing called Google that people are using.” And I was just like, what is happening? It completely changed everything. And I feel like with GPTs, with chat features, we’re at another moment where everything is completely changing and there’s no going back.

What I’m hearing from the people you talk to in this series is that the institutions—the schools—they just don’t know. They don’t know what they don’t know, so they don’t have guidance. So it leaves a lot of teachers feeling like it’s the wild, wild west. Either they’re going to be extremely strict and ban everything, or they’re going to be more lenient and take situations into account. But nobody is telling anyone what to do. I’m wondering if you could expand on that.

Justin Reich
This is something Jesse and I have been thinking a lot about. It’s really important to start with the idea that no one knows what to do.

You picked one of my favorite historical examples: the arrival of the web. I was teaching high school history students how to do historical work on the web in 2003. And you were teaching people how to search in the late nineties. The first peer‑reviewed publication that showed verified, demonstrably good ways to teach people how to search the web was published in 2019.

That’s a 20‑year gap.

Not only that, but you probably remember that a lot of what we taught in the late nineties and early 2000s later proved to be totally wrong. Not just benignly wrong—harmful to the development of a generation’s ability to search the web, probably detrimental to our democracy.

We taught people the CRAAP test, these checklist methods—“.edu is more trustworthy than .com”—and “don’t use Wikipedia.” A whole set of checklists that seemed sensible at the time. The experts convinced themselves they had a good approach. Then research showed: no, no, no. When people use these methods in real-world search contexts, they perform poorly—worse than people who use different strategies.

To me, that is a real summons for humility. We don’t want to do that again.

We shouldn’t race to be first. Anyone who says, “Here’s the first framework,” “Here’s the first acronym,” should be treated with suspicion. We don’t want to be first—we want to be right. We need high‑quality research. We need to look back at what we did well and what we got wrong with the web, social media, mobile phones, and apply the best lessons to this moment.

So you’ll hear Jesse and me bring a lot of humility to these questions.

And then, I have to push back on something you said earlier: none of this is inevitable. OpenAI and Anthropic and DeepSeek would like us to believe that it is. But it’s not. The technologies we choose to bring into schools are our choices.

If they harm learning, we fight them. If they help learning, we bring them in and figure out how to make them fit.

We are fortunate that generative AI is arriving at a time when educators across the world are saying, “Remember how we said mobile phones were inevitable? That we could never put the toothpaste back in the tube?” Well, maybe we can. Maybe we should. Maybe banning phones from schools is the right thing because, despite all the benefits, they introduce a host of factors that undermine learning and social development.

If that’s what we discover about generative AI in three years, five years, seven years, then we shouldn’t describe it as inevitable. We should describe it as “under probation.”

If it’s bad, we fire it.

Tim Villegas
Jesse, I have a question for you. It has to do with your observation of how willing teachers were to talk about this. You said you interviewed a lot of teachers. Is this one of the top things on teachers’ minds right now? Is it at the top of their list?

Jesse Dukes
Not always. Often, not really. Maybe it’s in the top five things they’re thinking about—but also, we’re emerging from the pandemic, and teachers are thinking: Have my students re‑socialized? Can they play together at recess? Can they work in groups?

Many schools are dealing with absenteeism. There’s burnout. Funding and personnel issues. Teachers are thinking, “How am I going to plan? I promised my colleague I’d cover sixth period because they’re sick and now I won’t have my planning time.” They’re thinking about those immediate stressors.

So AI is in there, but usually the first concern is cheating, because that’s the most urgent challenge. You put out fires before you explore how a tool might be helpful.

I don’t think I talked to any teacher who said it was the biggest concern. Many said: “There are a lot of things challenging me right now… and AI is just one of them.”

Justin Reich
And Jesse, would you say that in schools with fewer resources—schools serving students with greater needs—it was less common for AI to be at the top of the list because their more urgent issues were things like absenteeism or resource shortages?

Jesse Dukes
Yes. There might be exceptions, but yes—on the whole, the better‑resourced the school and the better‑resourced the students, the more space there was to have conversations about AI. Those schools were more likely to have someone in the building who made it their job to think about AI.

Tim Villegas
I’d imagine it’s going to continue to be challenging because any local school district can essentially make decisions about what they do with curriculum. A school district and school board decide how they run the school and what they teach, even with state standards. So potentially you could have two schools in the same county that handle AI differently. School District A allows it with parameters; School District B completely bans it.

What are the conversations you’ve had with administrators or superintendents? How are they thinking about tackling this problem?

Jesse Dukes
Not as many administrators—just a handful. Part of that is that it’s been hard to get calls back when you call the superintendent’s office, often because they’re dealing with what feels like a more urgent challenge—funding, absenteeism, something like that.

But I would say, to your point: it’s not just that one school or one district might have a completely different attitude toward AI. You could have two teachers in the same building with different attitudes and approaches. We partnered with RAND on a survey—the results are coming out soon—and we asked teachers if they’d received any updated policy guidance from their district or school. Only one quarter said they had.

We asked the same question about professional development. Again, only one quarter had received any. This was fall 2024, about a year ago. My sense is that some districts have caught up, but many still haven’t convened PD, haven’t met with curriculum people or department heads, haven’t updated academic integrity guidance. There’s still quite a bit of lag.

Justin Reich
I really like that question about pathways schools could take with new technology, and I think it’s helpful to put it in historical context.

Some technologies have proven durable, and schools are glad they invested—like bringing the web into schools. Others? Well, a lot of schools are probably very glad they never invested in smart boards. Every school that didn’t buy a smart board is probably like, “Yep, saved money and didn’t waste time.”

You could imagine we eventually learn that getting GPTs to spit out text is actually easy. Maybe prompting isn’t that hard. But discerning the quality of output? That might be very hard. One unfortunate thing about GPTs is that even on similar tasks, sometimes they give great output, and sometimes terrible output—both delivered with the same confidence.

And the main way to discern between good and bad output may be domain expertise. And the heart of domain expertise is knowledge.

To judge whether GPT gives you good guidance about fixing a plumbing problem, you need to know something about plumbing. There’s no acronym for that. There’s no AI literacy trick. It’s knowledge.

The good news? Schools have been trying to teach knowledge for centuries. So maybe the most important thing to prepare kids for an AI future isn’t AI literacy—it’s being well‑rounded, informed people.

The story I just told could also be wrong, of course. Maybe there are domains where we can build useful strategies. But from the outset, we cannot confidently say that a school that goes all‑in on AI will prepare students better for the future than a school that doubles down on what it already does well.

And if history is any guide, schools are actually not very good at technology literacy. Look at what we’ve produced: we are not great at social media literacy; web search literacy has been a disaster; and many students still struggle with something as simple as saving a file.

Curriculum is already packed. Nobody says, “We won’t teach world history anymore because we need room for AI literacy.” Instead, we wedge it into the 37 minutes the librarian has with fourth graders twice a year.

That doesn’t work.

And maybe AI literacy won’t work either.

Tim Villegas
And what does going all‑in on AI even mean anyway?

Coming up after the break, we dig into how uneven the rollout of AI has been in schools and why teachers in different contexts are feeling its impact differently. Justin and Jesse talk about what they’re hearing from classrooms, the gaps in guidance, and how educators are trying to navigate the wild west of new tools while still focusing on what helps students learn.

But first, a word from our sponsor. This episode is sponsored by Adaptiverse. If you’re a special education teacher, you already know the time problem. Every week educators spend 15 to 20 hours manually adapting curriculum, especially for students with complex communication needs—non‑speaking learners, AAC users, students with apraxia or language‑based disabilities. Creating personalized materials, visual supports, multiple expressive pathways—it’s critical work, but exhausting, and it pulls teachers away from actually teaching.

Adaptiverse changes that. Teachers describe what they’re teaching and who they’re teaching, and the platform generates rigorous academic lessons personalized to each student’s needs with built‑in scaffolding and multiple ways to demonstrate understanding. These aren’t watered‑down worksheets—they’re rigorous lessons that presume competence.

Adaptiverse is built on 60+ years of combined education experience and is already being used to create more than 2,000 lessons in 35 states. Educators call it “lifesaving” and “irreplaceable.” If you’re ready to get those hours back, visit adaptiverseapp.com to learn more.

Tim Villegas
The most exciting things I’ve come across are organizations customizing generative AI for specific purposes. I’ll give an example. There’s an organization in Maryland called Adaptiverse, and they focus on helping teachers modify lessons. If you have a middle school social studies lesson, that lesson probably isn’t tuned for certain learners right out of the box.

This chatbot will take the lesson and the learner profile and make it more accessible. There are many applications like this. Another example: teachers are learning better instructional techniques by taking a lesson from Teachers Pay Teachers or elsewhere, and having GPT rearrange or redesign it to be more engaging or to include evidence‑based practices. That’s the most exciting application for teachers. Have you heard anything like that?

Jesse Dukes
Yeah. What you’re describing is among the most exciting stories we’ve heard. In fact, our upcoming episodes are full of stories like this. In some cases, it’s EdTech built around AI—like platforms built on the ChatGPT API with modules for exactly these tasks. In other cases, it’s a creative teacher experimenting with ChatGPT.

One teacher—Eric Timmons—told me unambiguously that AI has been great for his practice. It’s saving him time, giving him new ideas, and he shares it with his students. He teaches film in Santa Ana, California. Most of his students are Latino, many qualify for free or reduced lunch. They make movies, and he has them all four years of high school.

He wanted culturally responsive text for his students. He found an academic article on gentrification in Santa Ana and had his students read it. He’s a fan of PBLWorks and Project Zero. He asked ChatGPT to design in‑class activities based on those frameworks. It took a minute or two. And as a veteran teacher, he could shape it further for his students.

Most teachers who find benefits from AI tend to use it for differentiating texts or adapting curriculum. But as Justin likes to point out—and I’ll let him say it…

Justin Reich
The teachers who are doing this effectively usually could do the task themselves. The AI is just doing it faster, and they have the expertise to evaluate whether it did the job well. They can discern: “Oh, GPT made a good modification of this lesson,” or “It did a good job turning this article into a Project Zero thinking routine.”

A novice teacher, by contrast, will just trust the output—might trust GPT even when it’s bad, even when it doesn’t create a good lesson.

One of the most striking teacher learning experiences I had was visiting High Tech High, which some of your listeners may know. It’s a project‑based learning school in Southern California. I visited when it was about fifteen years in, and the original art teacher was still there. Their school is all about projects, and thousands of educators had wandered through his classroom over the years.

There was a moment when we were talking, and he suddenly stopped—eyes kind of intense—and said, “Do the project. The most important thing you can do to make effective project‑based learning in your classroom is to do the project yourself before assigning it to students.”

There’s a craftsmanship in that. You don’t know if you have a good project until you’ve created the output yourself.

That reminds me that the act of tinkering with curriculum—the act of modifying documents—is part of the learning and preparation process for teachers. If the machine just spits out suggestions, an expert can pick and choose the good ones. A novice has no idea. But both experts and novices deepen understanding when they build materials themselves.

You figure out what matters as you design. For example, when I taught ancient Athens, I eventually realized the real heart of the lesson was the debate between aristocracy and meritocracy. I didn’t know that when I started teaching it; I learned it through designing lessons, reading materials, and working with students. That discovery process might be bypassed if a machine does the preliminary thinking for you.

We can start with research we already have. There are reasons teachers feel they need Teachers Pay Teachers. But we know when teachers grab random supplemental resources off the internet, they often get materials poorly aligned with standards, not well designed, without coherent learning progressions. In contrast, curriculum developed by departments or teams tends to be better.

If you’re using AI to take high‑quality materials and differentiate them, that’s promising. The way you phrased it, Tim—“How do I pressure‑test this curriculum?”—that’s compelling. Much more compelling than “How do I churn this stuff out as fast as possible?”

From research, I’d bet on pressure‑testing high‑quality curriculum before I’d bet on making garbage from Teachers Pay Teachers slightly less garbage.

Jesse Dukes
Garbáge‑y. I think “garbage‑y.” Or garbasse? From the French?

Tim Villegas
So there is no real shortcut. There’s no shortcut to being a good teacher, just like there isn’t a shortcut to being a good student.

In all the examples I’ve heard in The Homework Machine, students make a decision to use AI to finish a project or paper. They all have different reasons, but they haven’t learned anything. They may check something off their to‑do list, but it doesn’t make them a better student.

And for teachers—who should be masters of their craft—there’s no shortcut to designing lessons, reading curriculum, deciding what to include or not include, differentiating for students. That is skill. AI can’t replace that.

Jesse Dukes
Yeah. I would say there are probably no shortcuts to being a good teacher. But there may be administrative tasks that are not worth the time teachers put into them. Maybe ChatGPT or other AI can help with those.

I’ve had teachers tell me they use ChatGPT to draft letters or emails to parents, or to draft self‑evaluations. Some teachers talk about maybe using ChatGPT to help offer mid‑project feedback—formative assessment.

But that also raises a question: if a tool that’s really good at generating formulaic writing can handle a task, was the task valuable in the first place? If the self‑evaluation process is so formulaic that ChatGPT can do it well, is it really a meaningful self‑evaluation?

But yes, teachers do see ways AI might help with the administrative tasks.

Justin Reich
We should be cautious of efficiency arguments. I was standing in my kitchen today thinking about this. Efficiency arguments have played us for a century. “Vacuum cleaners will make housework easy!” “Dishwashers will free up your time!” But it’s still a lot of work. Historians and sociologists of work can tell you all about how efficiency gains often change expectations, not workload.

And education provides ready examples. The Scantron machine made grading efficient. But does any educator today say, “I’m so glad the Scantron made multiple‑choice so central to our system—really helped learning”? Probably not.

Another example: online gradebooks. I went to my daughter’s middle school parent night, and the whole time was spent showing parents how to navigate PowerSchool and Google Classroom—how to read the gradebook codes, what kinds of zeros count or don’t count. I wanted to hear what books the kids were reading, what projects they were doing, what the approach to math instruction was. But instead it was all grade surveillance.

Online gradebooks were sold as efficiency tools, but they changed parent‑teacher communication fundamentally. Instead of conversations about learning, it became “track the data constantly.” That was not the intended use, but that’s what happened.

So with GPTs, in any domain claiming efficiency, we need to ask: What might this change unintentionally? And why might that be bad?

Educators generally understand that efficiency is not always good. A worksheet students can race through quickly is usually not as powerful as a task with productive struggle or friction.

Jesse Dukes
Just to clarify: are you saying teachers and school leaders should not experiment with AI to see if they can automate tasks, gain efficiencies, or create accessibility benefits? Should they avoid trying things that might help them spend more time building relationships with students?

Justin Reich
I think they should not assume efficiency will be the outcome.

Ask any veteran teacher who has watched technology change dramatically over the past 20 years: “Can you spend more time on the most important parts of teaching because of all the technology in your building?” I don’t think many will say yes.

Tim Villegas
I’m just remembering how many times I had to fight with the smart board or Promethean board in the middle of a lesson because it wasn’t working.

Justin Reich
Exactly. GPT is similar. A friend told me recently, “I had GPT make some quizzes, but they had stuff that was wrong.” There’s a saying in software engineering: “For every hour you save with a copilot, you’ll spend an hour fixing what the copilot broke.” That’s the quality assurance cost.

The quiz example is exactly that.

So, you know, this turns into a 30‑second podcast ad for AI‑powered EdTech: “This podcast brought to you by MagicSchool… by SchoolAI…” And Jesse and I would be like: “Don’t waste your money on this garbage. No—experiment with this garbage.” But… well, I mean…

Tim Villegas
Yeah, we won’t—I won’t tell you who our sponsor is for this podcast.

Justin Reich
Actually, a thing I’ve been saying over and over is: I’ve taken money from every technology company you can think of—Google, Microsoft, CZI…

Jesse Dukes
Google is supporting our research right now. And our contact there is really interested in what we’re learning about the sociotechnical dimension of the technology they spend so much time developing.

Tim Villegas
Yeah. Okay, so one more question—and this is for Justin—and then I want to get into the weeds a bit about the podcast production side. Because Ariel worked with you, right?

Jesse Dukes
Yep.

Tim Villegas
Ariel S. and Black. Ariel and I go way back—we produced a podcast together called Trailer Park.

Jesse Dukes
I’m a fan of Trailer Park, the podcast.

Tim Villegas
Okay, good.

Jesse Dukes
And I was living in a trailer that looks like the trailer on the Trailer Park podcast cover when you launched that podcast. I lived about nine months in a camper trailer before settling in Los Angeles.

Tim Villegas
Oh, that’s wild.

Jesse Dukes
So I was a Trailer Park fan in a trailer.

Tim Villegas
You’re one of the OG trailers… trailer parkers.

Jesse Dukes
I don’t know—one of those. Again, look up the noun form.

Tim Villegas
Okay, so Justin: how have you AI‑proofed how you’re working with students? Or have you?

Justin Reich
Dang—I haven’t. That’s what I have to do this week. My first class is on Wednesday. Jesse and I, right after this, are going to record a little segment. And I’ve been thinking about it a lot.

I have some assignments that took me twenty years of practice to “web‑proof.” I have assignments you cannot answer with a Google search. And GPT does an okay job with them.

One example: I teach a class called Learning, Media, and Technology. Students take a piece of EdTech and identify whether it’s inspired more by direct instruction or by constructionism/apprenticeship—more Dewey or more Thorndike. They explain why, then reimagine the technology as if it were designed by the other theoretical camp. It’s a great assignment. You can’t Google it.

GPT does an okay job.

Last year, I asked my students what I should do about GPT, and they said, “We’re not going to use it. Your class is harder than the other electives we could have taken. If we wanted to blast through something, we’d take something else.”

And I was like, “Oh… you flattered me right out of this dilemma.”

This year I’m trying something different. There’s good scientific inquiry about recognition vs. generation as learning activities. So instead of writing the essay themselves, I’m going to have students write ten versions using GPT. I’ll say:

“Ask GPT to do ten versions of this exercise. Pick a technology, map it to the frameworks, flip it. Generate ten drafts.”

Then: read across all ten.

You couldn’t have done this before—writing ten versions takes too long. But now you can see patterns, strengths, weaknesses. Then they’ll pick one and refine it.

I’m going to introduce that next Wednesday. In a month, I’ll test the assignment. We’ll see how it goes.

Tim Villegas
So the ten—are they going to do ten discrete prompts? Or…?

Justin Reich
Yes. Ask GPT ten times. Pick ten different technologies. Run the framework comparison. Flip it. Let GPT write the essay ten times.

Tim Villegas
Got it.

Justin Reich
Then read across them and refine one.

I’ll probably encourage them strongly to do it this way, because last year they said, “Nah, we’ll just write the essay.”

I also teach Introduction to Media Studies. For decades, we’ve had students not use media for a day. For this generation, that’s profound—no phone for a day, no screens. For that essay, I’m going to do the thing everyone says you can’t do: tell them not to use ChatGPT.

I’ll have to work on the front end to convince them that it’s a rich learning experience. But I’ve spent twenty years teaching students to write better. I know I can help them do better if they do the work themselves—and I’m almost positive that if they lean on the AI crutch, I can’t help them the same way.

So those are two examples. Not AI‑proofing, exactly—one leans in, the other abstains.

Jesse Dukes
If I can tie Justin’s answer back to The Homework Machine: in Episode 3, we talk about three strategies teachers use to keep students from using AI to bypass learning.

Justin has proposed a fourth strategy.

The first is Monitor and Communicate—or “plead with your students.” We have one teacher talking to her students about “dendrite connections failing.”

The second is Detection—Turnitin, homemade detection tricks, hidden text.

The third is In‑Class Handwriting—pen and paper, composition notebooks, “I’m watching you do this.”

The fourth, which Justin described, is Lean In:

“You want to use AI? Fine. Use it ten times. Use it until you hate it.”

Justin Reich
Full Bruce Bogtrotter from Matilda. You take one bite of cake, you eat the whole cake.

Jesse Dukes
Exactly. Some teachers are doing interesting experiments: encouraging students to use AI, then evaluating AI output as part of the assignment. That seems promising. I’m curious to hear how Justin’s experiment goes.

Tim Villegas
Well, The Homework Machine—every single episode I’ve listened to has been fantastic. I encourage everyone to listen and subscribe wherever you get your podcasts. And this is a seven‑part limited series, correct?

Jesse Dukes
Yes, yes. And there may be some bonus episodes. It was brought about by very generous funding from specific funders—you can hear them in the credits. And if we’re able to convince more funders, or the same funders, that a second season would be a good idea, we’d be excited to do that.

We’ve also talked about maybe Season 2 being The Distraction Machine, all about mobile phone bans and restrictions and how that’s playing out in schools. I think that would be a fun season. Justin and I probably have an overdue conversation about what we do next after The Homework Machine, but right now we’re still in the midst of production. That’s all I can think about. I’ve got to edit Episode 5 pretty soon—I’ve got to crank on that in a few minutes.

Tim Villegas
I love it. I love it. We are right at the top of the hour—can you stay a little longer?

Jesse Dukes
I can. Yeah, I can do another five or ten minutes.

Tim Villegas
Okay, great. One podcast‑related question—the process. The storytelling is really compelling, and the sound design and music are wonderful. Justin, you were saying the harpsichord is a literal harpsichord?

Justin Reich
A real harpsichord from the MIT Music and Theater Department.

Tim Villegas
That is—

Jesse Dukes
Somewhere there’s an email chain that’s just Justin to me, me to Justin: “Do you know where I can get a harpsichord for one day when I’m in Boston or Cambridge?” And Justin—actually, I believe Elsa, Justin’s wife, came up with the harpsichord connect.

Justin Reich
I think I went to the music department, but yes.

Tim Villegas
Here’s me not realizing MIT had a Music and Theater Department.

Justin Reich
It’s actually really good. We just built a giant new building. A huge number of students play instruments—which probably doesn’t surprise you. Kids who are nerdy about machines and science also like violins and pianos and guitars.

Tim Villegas
Yeah.

Justin Reich
What is a musical instrument but a technology for your hands?

Jesse Dukes
For their parents!

Tim Villegas
Oh yes.

Justin Reich
Part of why I knew they’d have a harpsichord is that I used to teach on the floor where all the practice instruments were. There was a hallway full of semi‑soundproof practice rooms, and you’d walk by and hear beautiful music on your way to class to lecture students about MOOCs or something like that.

Jesse Dukes
When I was a kid in Charlottesville, Virginia—before my dad went back to grad school—he was a piano tuner. He would tune the piano, and I think the harpsichord as well, for the University of Virginia’s music department. So I was aware that universities often have an infrastructure around these kinds of instruments.

Tim Villegas
Fantastic.

Tim Villegas
The choice to make this a narrative series—had this been a dream of yours? Like, “I want to tell this story and I want to tell it this way”? How did that come about?

Jesse Dukes
I can take a stab at that, and Justin can tell me if he agrees.

Justin brought me in when I was a freelancer. We had done work together before. When the TeachLab podcast started, I was a consultant—I was working full‑time at WBEZ then—but I helped Justin and his team with some production tasks, ideation, thinking through formats, some early editing when they were getting started. TeachLab was mostly an interview‑driven podcast.

In late 2023, Justin and I realized it would be interesting to talk to teachers and students about this topic. We thought maybe we could find some grant funding. When we got our first grant—from the Spencer Foundation—I already thought: these stories are incredible. These are teachers adapting to a disruptive moment. There are wonderful human stories here. It would be great to tell them as a narrative podcast, not just through academic publishing or two‑way interviews.

We didn’t have funding for a full narrative project at first, but we kept applying for grants, and we had a good track record. And because narrative audio is my background, and because I had a vision for it, Justin agreeably went along once the resources were there. And once I started sharing initial field stories, I think that helped.

Does that line up with your memory, Justin?

Justin Reich
Yeah. It’s funny how it developed. A crucial piece was that Jesse helped us with a project funded by the Office of the President—we were writing about AI and education. We decided we should talk to teachers and students, not just sit on Mass Ave and issue opinions about schools without talking to anybody.

Jesse helped pull together material from those first interviews, then did more interviews, and the project snowballed.

The generous support from the Jameel World Education Lab, the Spencer Foundation, MIT’s Social and Ethical Responsibilities of Computing initiative, and eventually a Google Academic Research Award all helped. It was a moment where, if you were an academic, getting small grants for AI research was easier than it might be later.

And we already had a podcast infrastructure, and Jesse’s background in narrative production. And philosophically, my research group believes deeply in talking to teachers and students first. The expertise closest to the problem is often the most valuable.

Tim Villegas
Thank you for indulging me with that behind‑the‑scenes look.

We’ve been talking about how teachers are navigating AI in real time—the trials, the experiments, the very human moments that come with figuring out something new. Before we wrap up, I asked Justin and Jesse a mystery question that takes things in a more musical direction. Here’s how that went.

Tim Villegas
If I can have you for one more question—I like to end my interviews with a mystery question. Typically, they’re written by my 13‑year‑old, but I’m out of questions. So I asked my friend, Copilot.

Justin, give me a number between one and ten.

Justin Reich
Four.

Tim Villegas
All right. What is your go‑to karaoke song—even if you can’t sing?

Justin Reich
I have more of a “walk‑on” song. My walk‑on song is “Rodeo Clowns” by G. Love and Jack Johnson.

“Disco ball spinning, all the music and the women and the shots of tequila…”

It has a great drumbeat. You time it so you push through the paper or hop over the wrestling ring ropes right when the drum hits.

Tim Villegas
Oh my gosh.

Justin Reich
And I would karaoke that. But you can also do “Nothing Compares 2 U.”

Tim Villegas
What about you, Jesse?

Jesse Dukes
I was thinking I should get a walk‑on song—I don’t really have one. A few years ago, I won second place at the WBEZ Holiday Party karaoke contest with my rendition of Billy Joel’s “Piano Man.” It’s an excellent anthemic sing‑along, especially when people have been drinking a little. I’ll stand by “Piano Man” as an excellent karaoke song. And it’s in my range—I can just about hit the high note.

I don’t think “Nothing Compares 2 U” is in Justin’s range, but I don’t think he’d let that stop him.

Tim Villegas
My wife and I like to sing “Total Eclipse of the Heart” as a duo. I used to play music—I was in bands—but karaoke was not my favorite thing. I’d rather hide behind the guitar.

Jesse Dukes
I love karaoke. I wish we, as a culture, sang together more. Karaoke gets people doing that. Secretly, we all want to be in a group singing. But something about 21st‑century life doesn’t make space for it.

I was also in bands, and I probably enjoy that more, but I’m always up for karaoke.

Tim Villegas
Jesse Dukes and Justin Reich, thank you so much for being on the Think Inclusive Podcast.

Jesse Dukes
Tim, this was a fabulous conversation. Thanks for your great questions.

Justin Reich
And for giving The Homework Machine a careful listen. It’s a real honor. Thanks so much, Tim.

Tim Villegas
That was Justin Reich and Jesse Dukes. Here’s what I’m taking away from this conversation: educators are still sorting out what AI actually means for day‑to‑day learning, and it’s important to slow down, listen to the people closest to kids, and stay grounded in the core work of teaching.

Justin and Jesse reminded us that new tools don’t replace good thinking or strong relationships, and that inclusion still comes down to designing environments where every learner can participate and grow. And that’s right in line with MCIE’s mission—removing barriers, centering learner variability, and making sure students with disabilities are fully included in their neighborhood schools and classrooms.

Here’s one practical step for educators: pick one upcoming lesson and run a small experiment. Use an AI tool to draft a few variations of a task or text, and then look at them with a critical eye. Ask yourself: does any version reduce barriers for a student who often gets stuck?

The goal isn’t to let the AI decide, but to help you notice new possibilities you might refine with your own expertise.

Share this episode with a colleague who’s building inclusive schools. Rate and review us on Apple Podcasts or Spotify. Follow Think Inclusive wherever you get your podcasts. Shout‑out to everyone who attended the Educating All Learners Alliance Community of Action this past weekend. I had the opportunity to be a coach this time around, and big thanks to Aurora Dreger for leading that event. I hope to be part of it next time.

What’s on your mind right now? I’d love to hear about it. You can always email me at tvillegas@mcie.org.

Okay—time for the credits.

Think Inclusive is brought to you by me, Tim Villegas. I write, edit, mix, master—I basically wear all the podcast hats and the baseball caps. This show is a proud production of the Maryland Coalition for Inclusive Education. Scheduling and extra production help from Jill Wagoner. Our original music is by Miles Kredich, with extra vibes from Melod.ie.

Big thanks to our sponsors: ixl.com and Adaptiverse. Visit ixl.com/inclusive and adaptiverseapp.com.

Fun fact: over 70% of all images posted on social media are either AI‑generated or enhanced with AI. And that doesn’t even include videos, which are even harder to detect as authentic.

So what does this all mean? I don’t know. But it’s really hard to believe anything you see on the internet anymore. How are you feeling about AI? Are you adopting an AI practice in your teaching or in your life? I’d love to know about it. Email me at tvillegas@mcie.org. I read every single message.

And if you’ve made it this far, you’re officially part of the Think Inclusive Inclusion Crew. Want to help us keep moving the needle forward for inclusion? Head to mcie.org and click the donate button. Give $5, $10, $20. It helps us keep partnering with schools and districts to move inclusive practices forward and support educators doing the work.

Find us on the socials almost everywhere @ThinkInclusive. Thanks for hanging out, and remember—inclusion always works.


Key Takeaways

  • AI in schools is not universally positive. Teachers face real challenges—cheating, uneven policies, privacy concerns, and unclear expectations.
  • There’s no shared roadmap. Most teachers haven’t received guidance or professional development on AI, leaving individual educators to experiment on their own.
  • History warns against rushing. Early web literacy efforts were misguided for decades. Educators must approach AI with humility, research, and caution.
  • AI shows promise for accessibility and differentiation. Tools that adapt text, modify lessons, or support learners with disabilities can meaningfully reduce barriers.
  • Good teaching still requires human expertise. AI can speed up tasks, but it can’t replace the craft of lesson design, deep domain knowledge, or the relationships that drive learning.

Resources

Thank you to our sponsors!
