The Good and Bad of ChatGPT in Schools

This week on Gadget Lab, WIRED and NPR team up to cover the debate about students and teachers using generative AI in the classroom.
[Image: One orange broken pencil among a pattern of intact white pencils on a black backdrop. Photograph: MirageC/Getty Images]

The worst part of going to school is all the homework. Nothing strikes dread in a student’s heart quite like facing down a deadline on a seven-page essay. That’s why some of them may find it tempting to turn those hours of work into a task that can be breezed through in a matter of seconds by an AI-powered app. Generative tools like ChatGPT have wormed their way into the school system, causing panic among teachers and administrators. While some schools have banned the tech outright, others are embracing it as a tool to teach students how to tell the difference between reality and science fiction.

This week, we're bringing you a special show about the perils and opportunities of AI in the classroom. This episode is a collaboration between WIRED and the NPR show 1A. It's the second episode in a series called “Know It All,” which focuses on all the ways AI is affecting our world.

Show Notes

Listen to every episode of Know It All: 1A and WIRED’s Guide to AI. Read more from WIRED about how chatbots are coming for the classroom.

Pia Ceres can be found on Twitter @lapiaenrose. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

If you have feedback about the show, take our brief listener survey. Doing so will earn you a chance to win a $1,000 prize.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed.

Transcript

Lauren Goode: Hi Gadget Lab listeners. We are taking a break from our usual show this week. Instead, we're bringing you a special collaboration between WIRED and the NPR show 1A. It's a four-part series called Know It All, 1A and WIRED's Guide to AI. It's about how AI will affect everything, from education to health care to national security. The series features conversations with people at the forefront of this AI transformation and people whose lives are being directly affected by it.

The episode we're sharing here is called "ChatGPT in the Classroom." As you might expect from that title, it's all about how AI generators are affecting life in schools. After all, what's the point of giving kids essays as homework if they can just offload their work onto an AI that can generate a response in seconds? This show features WIRED's Pia Ceres, as well as professors and teachers dealing with AI in the classroom right now.

Lauren Goode: You can hear more at 1A's website, the1a.org/series. We'll have a link in the show notes. Enjoy the show and stay human.

[Gadget Lab intro theme music transitions to 1A theme music.]

Celeste Headlee: From WAMU and NPR in Washington, this is 1A. Hi, I'm Celeste Headlee in for Jenn White. Today on 1A, we go back to school. Since its launch last year, ChatGPT has been hailed as a dramatic step forward for AI, but it's also started an online arms race to invent new tools to detect when students are cheating, and if students see it as a quick shortcut, what's stopping overworked educators from doing the same? Today, we'll look for some answers to that question and your questions too. Email us at 1a@wamu.org or talk to us using our app, 1A Vox Pop. ChatGPT has become one of the most popular sites online, reaching more than 100 million monthly users within just two months of its launch last November. It took Instagram two and a half years to reach that number. Users can type in a prompt and quickly get whatever they asked for written by artificial intelligence, and it didn't take long for students to give it a try.

Kay: I teach freshman English at a local university, and three of my students turned in chat-bot-written papers this past week. I spent my entire weekend trying to confirm that they were chat-bot-written, then trying to figure out how I'm going to confront them, how I'm going to turn them in as plagiarists, which is what they are, and how I'm going to penalize their grade. This is not pleasant, and it's not a good temptation. These young men's academic careers now hang in the balance because they've been caught cheating.

Celeste Headlee: Thanks for that message, Kay. We continue our series today, Know It All, 1A and WIRED's Guide to AI. We have partnered with the digitally focused news outlet, and today we'll talk about artificial intelligence in the classroom. Later in the hour, we'll hear from a student at Princeton University whose peers enthusiastically embraced ChatGPT. So he developed an app to detect when AI wrote a piece of text. But first, let's get the basics with Pia Ceres. She's a senior digital producer at WIRED. Hi Pia.

Pia Ceres: Hi. Thanks so much for having me.

Celeste Headlee: Great to have you. And Lalitha Vasudevan, sorry about that, Lalitha. She's a professor of technology and education at Columbia University's Teachers College and also the college's vice dean for digital innovation. Lalitha, thanks so much for being with us.

Lalitha Vasudevan: Thanks so much for having me on.

Celeste Headlee: So Pia, it's incredible how much students are already using ChatGPT. How is it appearing in classrooms?

Pia Ceres: That is such a great question. So I guess just a little bit of background on ChatGPT and how it works, I think, will give us a better understanding of how students have been adopting it. I think what makes ChatGPT so distinctive, compared to predecessors like GPT-3, is that the interface is more accessible than ever. So truly anyone with internet access and an email address could theoretically access this tool, which is what we're focused on here in this conversation on education. Any child with an email address could easily use this tool, and right now, unfortunately, we are seeing some cases of students who are using ChatGPT, as in this voicemail, to plagiarize their papers. But we're also seeing … and this is more the focus of the story I was working on for WIRED, some teachers trying to proactively discuss ChatGPT and even incorporate it into some of their lessons.

Celeste Headlee: Yeah, we got a tweet from Damien who says, “As a professor, I've been devising multiple assignments that are meant to teach students how to use and assess GPT-type tools in their work so they can get a hands-on understanding of the limitations and implications of the technology.” We'll definitely be talking about that today, but Lalitha, ChatGPT itself is quite new, but AI is not new. How long have you been thinking about AI and the ways it might be used or maybe banned in the classroom?

Lalitha Vasudevan: Yeah, thank you for that, and I really appreciate what Pia just said about how accessible this particular tool is. But AI has been increasingly creeping into the education sphere for a while now, and my colleagues and I at Teachers College, we're in the technology and education program, have spent a lot of time really thinking about what automation means for various aspects of teaching. And I think we collectively take the approach that automation is not about supplanting the role of the educator or the teaching-learning relationship, but when you have districts like New York City that rely on Google Classroom to help support students' learning to provide opportunities for students, not just to be able to navigate these different tools, but also to create increased access for students, for all students enrolled in the district. And this is the question we have around the use of AI technology. So I think people have been using various forms of AI in increased uses of virtual reality, augmented reality, to really help build out simulations when other forms of access were not available. We see the use of various AI tools to assist with some basic assessment. Now there's a spectrum here. We don't want to necessarily outsource everything to artificial intelligence tools, but like Pia mentioned, there are teachers who are trying to be thoughtful about how the existing tools can support the kinds of teaching that I'm trying to do in the classroom.

Celeste Headlee: Which is different when you're talking about assessments or an analysis than it is when you're trying to teach someone how to write, right? So let's explain exactly the difference between, say, plagiarizing Wikipedia, which is quite easy to detect, and ChatGPT. You spoke with Kelly Gibson, an English teacher who includes the Great Gatsby in her curriculum, and she asks her students to write a 300-word essay on what the green light symbolizes in the Great Gatsby. So 1A producer Chris Remington asked ChatGPT to do the assignment for him.

Chris Remington: In F. Scott Fitzgerald's novel, the Great Gatsby, the green light is a powerful symbol that recurs throughout the story. The green light is first introduced in chapter one as a distant yet visible beacon across the water, visible from the end of Daisy and Tom's dock. The light represents hope, dreams, and aspirations, but it also has a deeper meaning. The green light represents Gatsby's unattainable dream of being with Daisy. Gatsby sees Daisy as the epitome of everything he desires, and the green light represents his hope of one day winning her over. Gatsby believes that the past can be recreated and that he can relive his former romance with Daisy.

Celeste Headlee: I mean, that's not brilliant writing, but holy cow, that's close. Pia, look, I have a surprising number of friends who are professors and teachers. They're freaked out. They're very concerned that students are just going to submit essays written by ChatGPT. Is that a reasonable fear?

Pia Ceres: I think that that is a totally valid first response, and the teacher that I spoke with for this story, Kelly Gibson, that was her response as well. I think it's important to recognize and validate the experiences and feelings of teachers and professors right now, especially coming out of the tumult that was emergency remote learning in 2020. I think a lot of what I've heard from teachers is not just a direct response to ChatGPT, but just continued exhaustion built up from the shutdowns of March 2020. So I absolutely hear teachers who are feeling fearful or just tired, because it wasn't too long ago that technology, somewhat similarly, upended their lives and their practices. On the other hand, I feel like panic, while it's a valid response, is a one-note, alarm-bell kind of response. And it might be, as some teachers see it, an invitation to instead step back and ask, well, if a chatbot could answer a question like what does the green light symbolize in the Great Gatsby, was that a question worth asking my students anyway? Was it a question—

Celeste Headlee: Shots fired.

Pia Ceres: —that let students actually demonstrate understanding?

Celeste Headlee: Pia Ceres getting spicy. She's a senior digital producer at WIRED. Also with me is Lalitha Vasudevan. She's a professor at Columbia University's Teachers College, and we're hearing from you. Jay emails, “At the university where I work, students are asking whether using AI like ChatGPT counts as plagiarism. While using AI to do your work for you may be useful in the professional world, we need students to write their own papers so they can understand the concepts they need to learn. How do we ensure that AI is being used honestly?” Lalitha, do you have a response to Jay's question?

Lalitha Vasudevan: I think it's a really valid one, and again, I want to echo a couple of points that Pia made in response, and that is truly about how hard teachers work and the level of exhaustion that is prevalent across teachers right now. They've been going nonstop in continually volatile conditions for a few years. And I think the question about plagiarism versus what is this thing? Is ChatGPT an author? Is it a sounding board? Is it an editor? Is it a conversation partner? One way to answer that question, I would say, would be to invite teachers into that conversation and students into that conversation. You can easily Google phrases and it'll show you where they appear. Universities and colleges have been using Turnitin for a number of years to address this very thing, but I would make two points. One is, the question around cheating and ChatGPT is the tip of the iceberg in terms of what we could potentially see as the value of such a tool for learning. Cheating obscures the real set of questions that I think we want to move toward, but that doesn't negate the concerns that teachers have. And the second point really echoes something Pia said, and I'll put a little bit of a spin on it: not only does it call into question what kinds of assignments we are asking students to perform and produce in schools, but how are we supporting teachers to ask different types of questions themselves?

Celeste Headlee: Interesting. We are talking about AI and ChatGPT, and we're speaking with Lalitha Vasudevan, a professor at Columbia University's Teachers College, and also Pia Ceres, a senior digital producer for WIRED magazine. Coming up, we'll meet a teacher who fears AI could bring an end to high school English as we know it.

[break]

Celeste Headlee: AI can write essays, it can make music, or at least remix it. I'm Celeste Headlee, and this is 1A. I'm not an AI, by the way. I'm at WAMU and NPR. I'm joined by Lalitha Vasudevan. She's a professor at Columbia University's Teachers College, and also with me is Pia Ceres. She's a senior digital producer at WIRED. And let's bring a new voice into the conversation. Daniel Herman is a teacher at Maybeck High School in Berkeley, California. He wrote an essay for The Atlantic magazine called “The End of High School English.” Daniel, good to have you with us.

Daniel Herman: Good morning. Thank you for having me.

Celeste Headlee: And we would love to hear from you as well. Jack tweets us this: “College professor here. I checked an essay recently using three different AI detectors. One showed the essay almost entirely written by AI, another showed 42 percent by AI, and the third showed none, all human written. How can we trust the detectors?” And we'll be talking about that with the guy who built one of those detectors. But we would love to hear from all of you. If you are a parent, teacher, or student, how is AI showing up at your school? You can send us an email at 1a@wamu.org. So Daniel, when did you first start experimenting with ChatGPT, which only came out in November, but still, when did you start?

Daniel Herman: Yeah, it was, I guess, in early December. I started seeing things show up on my Twitter feed like a lot of people did, I think, rewrite famous Hamlet soliloquies in the voice of Donald Trump, that sort of thing. And I made an account, and of course the very first thing I did was start plugging in prompts for assignments that I give my students, and I'll never forget that moment, my heart started racing and I really couldn't believe what I was seeing appear on my screen pretty much instantaneously.

Celeste Headlee: Because it was so close to what you read from students.

Daniel Herman: Oh yeah, totally. And I think for the past few decades, it's been a bit of an arms race for teachers to craft assignments that are, what I would call, unhackable. And it immediately became clear that the definition of what unhackable meant had changed irrevocably.

Celeste Headlee: OK, so let's dig into what you mean by unhackable. One of the assignments you give your students is to write a 12- to 18-page paper comparing two of the great literary works. Here is part of a ChatGPT essay that compares Homer's epic Odyssey and Dante's Inferno.

Chris Remington: “Homer's Odyssey and Dante's Inferno are two epic poems that represent important works in the history of literature. Both works explored themes of journey, heroism, and human nature, but they also have significant differences in their style, content, and purpose. One of the most significant differences between the two works is their time periods. Homer's Odyssey was written about 800 BC while Dante's Inferno was written in the early 14th century. This temporal difference results in different styles and language use. The Odyssey uses a straightforward narrative style while the Inferno is written in a more complex and poetic form, featuring metaphors, allegories and symbolism.”

Celeste Headlee: So Daniel, would you immediately realize that was written by ChatGPT and not one of your students?

Daniel Herman: Yeah, I think so. There's a certain level of idiosyncrasy to any student's writing. I listened to your show yesterday and heard one of your guests say that AI models merely give the average of the data they're culled from. Another way to say that is that it delivers the conventions of a form, or the standards and norms associated with a task—in this case, correct grammar, syntax, punctuation, which, let's be honest, is more than we can say for many high school and college students.

Celeste Headlee: There are no mistakes in it. That's how you'd know. But before we get back to our other two guests as well, I want to have you respond to something that Pia said and other people are saying. We got this tweet from a listener, a former high school teacher, who says, “Teaching has fundamentally failed to keep up with the world. A person from the 1950s would find most aspects of society dramatically changed, but a typical high school classroom almost entirely recognizable.” And Pia said maybe ChatGPT is letting teachers know that they're asking the wrong questions.

Daniel Herman: Yeah, 100 percent. I'm with Pia. Honestly, who cares about the green light in the Great Gatsby? We've collectively decided on this very narrow definition of what writing is and what we need students to be able to learn how to do. There's just this certain standard of expository writing, or sometimes narrative essay writing. One would never say that a student needs to be able to write a sonnet or a short story to graduate from high school, but everybody assumes, for some reason, this dreaded five-paragraph essay. But very few of my students are going to become literary scholars. So why would this particular task about the green light be valued over way more interesting questions, like what does it mean to desire something? What does it mean to be excluded? These are all things that the Great Gatsby is offering, and there's a lot of data and research that shows that sort of writing—spontaneous, expressive, reflective writing—can actually be really beneficial for students' well-being. And I really appreciate what the two guests have said about the crisis of teacher mental health. But I really think that we need to also focus on the fact that nobody needs to be told that teenagers are not OK and are experiencing their own mental health crisis. And for them to have to focus on the literary background of The Odyssey and Inferno, maybe let's throw that away and do something way more useful for them and for the teachers.

Celeste Headlee: Jerry disagrees with you. One of our listeners says, “Was the question about the meaning of the green light worth asking? Absolutely. Not all students in a class can handle the deepest questions. Some will need lower-level questions to help them develop their thinking. And as someone who often employs people, I would love to find people who could really write.” But Pia, let me put this question to you. When researchers tried out ChatGPT, they found that the AI client passed the US medical licensing exam. That is what students take before they become doctors. It also passed the final exam of a core course at the prestigious Wharton Business School. So how are we to understand the implications of an AI being able to pass these tests that make someone an MBA or a doctor?

Pia Ceres: That is a great existential question for education.

Celeste Headlee: It's an existential … It feels very real.

Pia Ceres: I think by existential, I really mean a fundamental question.

Celeste Headlee: I see.

Pia Ceres: Education and what it means. This calls to mind a conversation that I had with a professor who teaches master's-level courses, about really taking ChatGPT as an invitation to take a step back and have a bird's-eye view of the institutions that we have built formalized education on. He observes that in his years of teaching, there's been so much more of an emphasis on testing and testing and testing students, instead of a more expansive idea of what education is. Is it dialogue-based? Should it be rooted in students' personal experiences? Should it be more interpersonal? And I know that sounds very abstract and vague compared to a test on pen and paper that we are all familiar with, but I think it's worth again using ChatGPT as an invitation to maybe think bigger about what assessment should look like.

Celeste Headlee: Maybe we need to go back to the medieval practice of having all exams be oral and in Latin, but … Maybe not the Latin part. Lalitha, is it effective to make an outright ban on a client like ChatGPT? Would that work?

Lalitha Vasudevan: No. And districts are trying this or have tried this. It has two, I think, deleterious effects. One is it inhibits the ability for experimentation within the classroom space. It prevents the teachers and students from creating some community around collaboratively investigating this tool. But it also takes away access for students for whom the school's technology might be the only access to this kind of technology they have. I think the other piece that connects to what Pia just said has to do with the fact that education has been edging toward what might feel now like a sea change. I think we had a full upending happen with Covid. People were forced into what was rightly called emergency teaching. And so I think where we are now, I really liked Daniel's term, “unhackable.” I think where we are now is we have an opportunity. People's attention and orientation toward potentially new tools in teaching, for better or worse, was opened up a few years ago with Covid. But we have an opportunity now to think about how we might really answer the question: What is school for? Who is school for? And how can school be a place of greater belonging, not only for teachers, but also for young people?

Celeste Headlee: Many of you have a lot to say about this, and we seem to have some agreement. From Guy and Charles: Guy says, “A decade or so ago, I was talking to a friend's high-school-aged child. I was surprised to hear he wasn't required to do his math problems with a pencil as long as he could do them with a calculator. The AI writing program seems to be a logical extension of that. Why be required to write when you can write with the push of a button?” And Charles emails, “As the computers get smarter, people get increasingly stupid.” We are speaking with Lalitha Vasudevan, a professor at Columbia University's Teachers College. Also with us is Daniel Herman. He teaches English at Maybeck High School in Berkeley, California. And Pia Ceres is senior digital producer for WIRED. I'm Celeste Headlee. You're listening to 1A and we're also hearing from you.

Kelly Gibson: “First, students are expected to read a novel and come to class with a very solid knowledge of that book, just like the good old days. Secondly, they will be given a prompt by which they will develop their own thesis statement in class, with me, with no computers. The goal is to not only make a very precise thesis statement, but actually couple that with a request for the AI bot to write this essay. And the more precise they are, the better off they will be. They will give those to me, and I will generate all of those essays using the AI bot. The next day they will come to class and using pen and paper, not computers, I will hand them the essays that the bot generated for them, the one they generated based on their individual thesis statement. And they will be given a graphic organizer and asked to deconstruct what happened.”

Celeste Headlee: So that voice is Kelly Gibson, a high school teacher in Oregon whom Pia interviewed. Gibson was initially concerned about ChatGPT, and what you just heard is her strategy for working with it, which she explained in a TikTok video. Daniel, I wonder what you make of Kelly Gibson's solution to the rise of ChatGPT.

Daniel Herman: I think that sounds fantastic. I'm really thinking about how I can reorient my own classes toward reading. For me, in my life, books are just about the best thing that there is. And I think about all the students who are assigned a text and are so consumed by anxiety about the paper that will have to be written that they just suffer through the entire book. An exercise like that really opens them up to a way more flexible and creative experience of reading, because they're not worried about producing this product on the receiving end.

Celeste Headlee: Renee emails, “ChatGPT can be a significant tool for teachers. It saves a lot of time searching for articles and then having to revise the text to be accessible for students. ChatGPT can also generate examples for students to see as models. Embrace this tool. It's here.” Speaking of accessibility, let's turn to our voicemail box. This message is from Anne in Atlanta.

Anne: I work with a lot of attention deficit disordered adults. One of my recent persons suggested chat AI helped that person get started on writing assignments that were so difficult due to the procrastination that so often accompanies that diagnosis. I was wondering about your audience's thoughts on this use.

Celeste Headlee: So Pia, can you answer that? What are the potential benefits of this technology, especially for people like the adults with ADD whom Anne works with, who might struggle to write an essay?

Pia Ceres: Yeah, I think that this could potentially unlock so many more learning supports for students who historically have been underserved in institutions of formal education. So I think specifically about students with learning disabilities or students for whom English is not their first language. Being able to have something that can generate maybe sentence starters for them, or give hints at syntax or structures that can help them brainstorm ideas, could actually be quite supportive. And I think that there's an emotional aspect to this as well. One common practice that I grew up with in high school was peer edits, so passing papers with your neighbors and then marking up their margins with red pen. I think that some students might feel, as Daniel mentioned, a level of anxiety around that. But an AI chatbot won't judge you for making mistakes. It can't. It's a nonhuman entity. And I think it was Kelly, and other teachers as well, who suggested using ChatGPT to generate essays that students can then critique and edit themselves. That way they can practice the skill of editing, which requires an awareness of what good writing is, in a nonjudgmental space.

Celeste Headlee: Yeah, I think AI can be quite snarky. Siri really is quite passive aggressive at times. Kathleen is a former teacher and emails us, “Apparently I retired at exactly the right time,” and a listener tweets, “I foresee the words, ‘Prompt engineer,’ showing up in job applications and résumés by the summer and it being an entire job by the end of the year as all schools ban kids from developing those exact skills.” And speaking of developing skills, in the minute or so we have left, Daniel, you argue in your Atlantic piece that ChatGPT could, quote, “Bring the end of writing as a gatekeeper, a metric for intelligence and a teachable skill.” How would that happen? How would AI make writing skills irrelevant?

Daniel Herman: Yeah, I'm really interested in this question. Anyone who teaches writing, or English teachers and tutors who are listening to this, knows how much of teaching English is just absolute tedium. Helping students understand that you need an apostrophe to make something possessive, that this isn't a possessive phrase, that this is how to do an MLA citation, or even just spelling; the standardization of spelling is just a made-up thing. There's so much of writing education that's focused on that. If we can just give that over to the chatbot and we don't have to worry about apostrophes anymore, that seems great. The second thing is that there are many, many ways to have an experience with a piece of text and to demonstrate learning about a piece of text.

Celeste Headlee: Yeah, without writing.

Daniel Herman: You can … Yeah, totally. I mean, you can do a drawing, you can do a presentation, but we've always assumed that writing is an essential way to engage with text. And maybe that's not true anymore. And maybe that's OK.

Celeste Headlee: Well, AI is going to change education, like it or not. We've been speaking with Daniel Herman. He teaches English at Maybeck High School in the Bay Area. Daniel, thank you so much.

Daniel Herman: Thank you for having me.

Celeste Headlee: Also sticking with us is Lalitha Vasudevan. She's a professor at Columbia University's Teachers College. And Pia Ceres is a senior digital producer at WIRED. Coming up, we'll meet a student at Princeton who saw how ChatGPT was being used by his peers and decided to do something about it. I'm Celeste Headlee. We'll hear more from you and our guests in just a moment.

[break]

Celeste Headlee: Now let's get back to our conversation about artificial intelligence in classrooms. Still with us are Pia Ceres, senior digital producer at WIRED, and Lalitha Vasudevan, professor at Columbia University's Teachers College. Today's conversation is the second in our series on AI, in partnership with WIRED. Yesterday we began, as we usually do, by reading an introduction scripted ahead of time.

"As AI technology continues to evolve at a rapid pace, it's changing the way we live, work and even think. In this series, we'll hear from experts, innovators, and thought leaders who are on the forefront of AI research and development. We'll dive into the ethical and social implications of AI, explore the latest breakthroughs, and examine the impact of this cutting-edge technology on our lives."

So that was not written by one of our incredibly talented producers, but by a machine. The result was not perfect, but frankly, it wasn't obviously written by AI. Our next guest probably could have caught that cheat with a little help from, well, artificial intelligence. Edward Tian is a senior at Princeton University, and he developed a free web program, GPT Zero, that detects when text is machine-written. We entered yesterday's intro into GPT Zero, and it told us, quote, “The text is likely to be written entirely by AI.” Check and check. You caught us, Edward. Welcome.

Edward Tian: Hi. Thank you so much for having me.

Celeste Headlee: So can you explain in terms that even I could understand how you caught us?

Edward Tian: Yeah, well, so I built GPT Zero over winter break and released it in January, and the initial version uses these ideas of variance in human writing, that in human writing we have creativity, we have short-term memory, which spurs bursts in creativity, versus this machine writing which is pretty constant over time. So it started with that baseline. Since then it's been a month, and the program is a lot better now. So I have a team working on it, and we're taking AI data and human data and training a model to be better and better at detecting AI.

Celeste Headlee: So your software actually scores written texts for things like “burstiness” and “perplexity.” How do you measure burstiness?

Edward Tian: Burstiness is measured as variance in writing. It's funny, because burstiness was a term I borrowed from linguistics, but in the last months I've slowly seen it seep into the machine learning lexicon, which has been really cool to observe. But essentially it's plotting that perplexity variable over the course of a text and measuring the variance in the writing.
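
[To make the idea above a bit more concrete, here is a minimal sketch of a variance-of-perplexity “burstiness” score. This is not GPT Zero's actual code; the use of GPT-2 as the scoring model, the naive sentence split, and plain variance as the summary statistic are illustrative assumptions based on how Tian describes the approach.]

```python
# Illustrative sketch only (see assumptions above), not GPT Zero's implementation.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    # Perplexity of one sentence under GPT-2: lower means the model finds it more predictable.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    # "Burstiness" here is the variance of sentence-level perplexities:
    # human writing tends to swing between plain and surprising sentences,
    # while model-generated text tends to stay flatter.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [sentence_perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)

sample = (
    "The green light symbolizes hope. It also symbolizes longing. "
    "Honestly, my dog once ate an entire sock."
)
print(f"burstiness: {burstiness(sample):.2f}")
```

[Under this toy definition, higher variance across sentences points toward more human-sounding writing; a real detector would be trained and calibrated far more carefully.]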

Celeste Headlee: So when you created GPT Zero, which I understand you did over winter break from school, as one does, why did you do it? I mean, it seems as a student you'd be motivated to not do this, right?

Edward Tian: Yeah. Well, initially I wasn't just approaching it from the education use case, although I understand that for a lot of teachers and students, this technology was suddenly thrust upon us. I was more approaching it from the technology perspective, that when we are releasing these admittedly brilliant and innovative technologies like ChatGPT and generative AI, we also have to really release the safeguards so that they're adopted responsibly, not months or not years after the technology's released, but right away. And that's where I thought, hey, maybe I could step in as well.

Celeste Headlee: OK. So I want to read you this tweet from Damien, who says, "I work as a professor and AI ethics researcher and philosopher. I worry about how suspicion of AI's use will be invariably levied against some students more than others. If we aren't careful, these tools will expand disparities in educational outcomes." What's your response?

Edward Tian: Yeah, so that's an incredibly good point. I would say the GPT Zero app I released in January was like a lot of other apps people have been releasing, which is to say very imperfect, and it had a black-and-white approach: this is either human or AI. In the last month, our approach has been to completely shift away from that. So if you try the app now, it's not going to say this is AI or this is human. It's going to highlight the portions of an essay that are more likely to be AI-generated. And we did this for two reasons. One is we have a GPT Zero educators community of around 4,000 teachers, and they told us this is something they want, because students aren't writing entire essays with AI. They're mixing. But two, it shifts away from being a catch tool that the teacher pulls out at the end and toward something that highlights portions and starts a conversation between teacher and student about what's an acceptable level of AI involvement, which might not be the entire essay, but might be portions.

Celeste Headlee: So as you mentioned, you have a team now, six people I understand, who are updating your program, improving it, but at the same time, I have to imagine the folks at ChatGPT are also updating their program and improving that. Is this just going to be sort of an escalating arms race?

Edward Tian: Yeah, it's an interesting question. So I would say two things. One is, it's absolutely an arms race, but more in the real world than in the lab development of these technologies, because everybody's building these classifiers with some of the same logic, perplexity and burstiness, that GPT Zero initially used. But no one's really tuning it to the education use case, and that's what we're focusing on. So one, we want to build this classifier to be specific to education and trained on student essays instead of being a general classifier. And two, yeah, we want to be talking to teachers about how they want this tool to work. And then on the lab side, I think it's too early to say. There are some metrics, in terms of perplexity and burstiness, that might be innate to all of these generative AI models, whether it's GPT-3, 3.5, or 4. But it could also be that as these models get better and better, we need to train the detection model to be better and better, which so far costs a lot less money and is a lot easier to do than training another GPT.

Celeste Headlee: So you are still, at this point, a student. What would you tell teachers who are really worried about ChatGPT? We've heard from a lot of people who say, well, this means that teachers were asking pro forma questions. They need to change the questions that they ask. They need to change the ways they educate. What would you tell teachers and professors?

Edward Tian: Well, the first thing is, the biggest piece of feedback I've gotten from teachers and professors is that it was reassuring that GPT Zero came out so early, even if they weren't using the tool. I don't know if that makes sense. It was just reassuring to know someone was working on this problem. But two, I would say that … Well, it is going to change how things are taught in schools. I would say that students still need to write. At the end of the day, these ChatGPT technologies aren't coming out with anything new, and it might replace certain portions of essays. And we're working together with teachers to navigate how, yeah, AI and human technologies mix, and what portions are really important skills and what portions are not. And yeah, we're excited to build the right tool to do that, because I think teachers also recognize that these technologies are here too.

Celeste Headlee: I feel like we can sum up your message to teachers as “Don't panic.”

Edward Tian: Yeah, that's totally true. And sure, maybe your Shakespeare essay, the student might use AI to write entirely, but if you're writing a niche summary of what you learned in class, ChatGPT doesn't have context. It's not coming up with anything new. It's only regurgitating what it's seen on the internet. Yeah, there's so many things that ChatGPT can't do.

Celeste Headlee: Edward Tian is a senior at Princeton University and the creator of GPT Zero. Thank you so much for speaking with us, Edward.

Edward Tian: Yeah, thanks so much for having me.

Celeste Headlee: Still with us, we have Pia Ceres, senior digital producer at WIRED, and Lalitha Vasudevan, professor at Columbia University's Teachers College. Pia, it sounds like, in your reporting at least, your message to teachers and professors is also: don't panic.

Pia Ceres: Yeah. That is the headline of the story. I would say you're allowed to panic and feel your feelings, but now what? Where do we go from there? What was so hopeful to me about hearing Edward speak was that it feels like students are taking ownership over this sea change that's happening in their lives. And I want to bring it back to something that Lalitha said really early in the conversation, which is inviting students into dialog about this tool. All the teachers that I spoke with also felt panic initially, but eventually became strong advocates of not ignoring this tool's presence, but rather using it as a jumping-off point for their students to engage in a critical dialog about technology and academic integrity, and the role of writing in their own lives, because this kind of technology will change the world they live in, and they will also become the people shaping this technology.

Celeste Headlee: So Lalitha, ChatGPT has been free to use so far, but earlier this month OpenAI, the company behind it, announced a premium subscription that's going to cost about $20 a month. I wonder if you have concerns about equity issues when it comes to the expansion of AI in the classroom?

Lalitha Vasudevan: Yeah, certainly, and just to build on what Pia said, we want all students to be part of building that future. We want students who go to school to be prepared not only to graduate from high school and college, but to solve the problems that haven't yet been discovered. And I think moving to a subscription model is predictable. But we also know that, with varying results, other companies and organizations are working on their own chatbots and AI tools.

Celeste Headlee: That's true.

Lalitha Vasudevan: So I'm hopeful that as more people get involved and feel invested, and I think that to me is … Even as there's been so much fear, the fact that so many people are engaged is a hopeful sign to me. And I think we have a chance to do a few things as people using these tools. And one is, I think, to be part of the conversation around things like how are these … Yes, we have chatbot detectors. That's great to hear that that's being built, but some people have said, should we be citing AI-generated text? Should we be referencing it or naming it in some way so it becomes normalized and not sensationalized? I think it gives us a way to really open up the conversation even more about media literacy and critical literacy that scholars and teachers have been doing for a long time.

I think there are two other points I wanted to make about this, because we want more people to use these tools, because we want to demystify them, and we want the makers of these tools to be more responsible. One is to continue the teaching and learning relationships that I think all of your guests have talked about, really addressing those, because that also can feed or diminish equity and access to education opportunities. And the last thing is, and I say this as an education community but also as an educated community, we want people to ask better questions. We want students to really dive into their inquiries. We want teachers to deepen their inquiries. And I think only good things can come from people asking better questions, more questions. Both from an ethical perspective in terms of who has access, and in terms of how we use these tools, that's what's going to help us, I think, shape and agitate in productive ways.

Celeste Headlee: Yeah. I wonder, Pia, because perhaps the solution is using Sal Khan's method, Sal Khan of the Khan Academy, where you do the lectures at home and do the homework in class. Jeff emails us, “Maybe English teachers should have all essays done in class. I have long hated the idea of assigned homework. It's not necessary.” Do you think something like ChatGPT is going to reopen that long-standing debate about homework?

Pia Ceres: Oh, absolutely. I think that it will definitely explode our notions of what is the best use of time in class and what is the best use of learning time outside of class. So I think, to go back to what Daniel said earlier, something that I have been seeing teachers experiment more with is just switching up that format of multimodal learning, to find a better use of time in class—demonstrating learning in other ways outside of writing, having a dialog, drawing a picture about something that they've been reading in class. So I definitely think there's room for more creativity there.

Celeste Headlee: Pia, we have only about 30 seconds left, but I wonder, do you expect reporters and journalists to start using ChatGPT to write up their stories when they're on deadline?

Pia Ceres: Don't tell my editor any of this. No, I'm joking.

Celeste Headlee: I didn't say you. I just said people.

Pia Ceres: I think that that's something that every newsroom will have to navigate on their own. We're starting conversations at WIRED about it, but I think that remains to be seen and will be developed newsroom by newsroom.

Celeste Headlee: Interesting. That is Pia Ceres, senior digital producer with WIRED, and Lalitha Vasudevan is a professor of technology and education at Columbia University's Teachers College. She's also the college's vice dean for digital innovation. Pia and Lalitha, thank you so much for joining us today. We continue this series, Know It All, 1A and WIRED's Guide to AI, tomorrow with a conversation about artificial intelligence and health care. And WIRED has a newsletter if you want to learn more about how technology is changing our lives. It's called Fast Forward, and it explores the latest advances in AI as well as other technologies. You can sign up at WIRED.com/newsletter.

Today's producers were Chris Remington and Avery Jessa Chapnick. This program comes to you from WAMU, part of American University in Washington, distributed by NPR. I'm Celeste Headlee. We'll talk more soon. This is 1A.

[Music rises, then fades out.]

Lauren Goode: Hi, it's Lauren again. Thanks for listening to this special show. If you want to hear more of these conversations, you can find the entire Know It All series at the1a.org/series. That's one as in the numeral one, so it's the1a.org/series. Thanks to WAMU and NPR for the use of this episode. We'll be back to our regular programming next week. Until then, goodbye.

[Gadget Lab outro theme music plays]