The narrative around cheating students doesn’t tell the whole story. Meet the teachers who think generative AI could actually make learning better.

Will Douglas Heaven

The response from schools and universities was swift and decisive.

 

Just days after OpenAI dropped ChatGPT in late November 2022, the chatbot was widely denounced as a free essay-writing, test-taking tool that made it laughably easy to cheat on assignments.

 

Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network. Others soon joined. By January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia.

 

Several leading universities in the UK, including Imperial College London and the University of Cambridge, issued statements that warned students against using ChatGPT to cheat.

 

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” Jenna Lyle, a spokeswoman for the New York City Department of Education, told the Washington Post in early January.

 

This initial panic from the education sector was understandable. ChatGPT, available to the public via a web app, can answer questions and generate slick, well-structured blocks of text several thousand words long on almost any topic it is asked about, from string theory to Shakespeare. Each essay it produces is unique, even when it is given the same prompt again, and its authorship is (practically) impossible to spot. It looked as if ChatGPT would undermine the way we test what students have learned, a cornerstone of education.

But three months on, the outlook is a lot less bleak. I spoke to a number of teachers and other educators who are now reevaluating what chatbots like ChatGPT mean for how we teach our kids. Far from being just a dream machine for cheaters, many teachers now believe, ChatGPT could actually help make education better.

 

Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more.

 

Educational-tech companies including Duolingo and Quizlet, which makes digital flash cards and practice assessments used by half of all high school students in the US, have already integrated OpenAI’s chatbot into their apps. And OpenAI has worked with educators to put together a fact sheet about ChatGPT’s potential impact in schools. The company says it also consulted educators when it developed a free tool to spot text written by a chatbot (though its accuracy is limited).

 

“We believe that educational policy experts should decide what works best for their districts and schools when it comes to the use of new technology,” says Niko Felix, a spokesperson for OpenAI. “We are engaging with educators across the country to inform them of ChatGPT’s capabilities. This is an important conversation to have so that they are aware of the potential benefits and misuse of AI, and so they understand how they might apply it to their classrooms.”

 

But it will take time and resources for educators to innovate in this way. Many are too overworked, under-resourced, and beholden to strict performance metrics to take advantage of any opportunities that chatbots may present.

 

It is far too soon to say what the lasting impact of ChatGPT will be—it hasn’t even been around for a full semester. What’s certain is that essay-writing chatbots are here to stay. And they will only get better at standing in for a student on deadline—more accurate and harder to detect. Banning them is futile, possibly even counterproductive. “We need to be asking what we need to do to prepare young people—learners—for a future world that’s not that far in the future,” says Richard Culatta, CEO of the International Society for Technology in Education (ISTE), a nonprofit that advocates for the use of technology in teaching.

 

Tech’s ability to revolutionize schools has been overhyped in the past, and it’s easy to get caught up in the excitement around ChatGPT’s transformative potential. But this feels bigger: AI will be in the classroom one way or another. It’s vital that we get it right.

 

From ABC to GPT

Much of the early hype around ChatGPT was based on how good it is at test taking. In fact, this was a key point OpenAI touted when it rolled out GPT-4, the latest version of the large language model that powers the chatbot, in March. It could pass the bar exam! It scored a 1410 on the SAT! It aced the AP tests for biology, art history, environmental science, macroeconomics, psychology, US history, and more. Whew!

It’s little wonder that some school districts totally freaked out.

 

Yet in hindsight, the immediate calls to ban ChatGPT in schools were a dumb reaction to some very smart software. “People panicked,” says Jessica Stansbury, director of teaching and learning excellence at the University of Baltimore. “We had the wrong conversations instead of thinking, ‘Okay, it’s here. How can we use it?’”

 

“It was a storm in a teacup,” says David Smith, a professor of bioscience education at Sheffield Hallam University in the UK. Far from using the chatbot to cheat, Smith says, many of his students hadn’t even heard of the technology until he mentioned it to them: “When I started asking my students about it, they were like, ‘Sorry, what?’”

 

Even so, teachers are right to see the technology as a game changer. Chatbots built on large language models, including OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing Chat, are set to have a massive impact on the world. The technology is already being rolled out into consumer and business software. If nothing else, many teachers now recognize that they have an obligation to teach their students how this new technology works and what it can make possible. “They don’t want it to be vilified,” says Smith. “They want to be taught how to use it.”

Take cheating. In the view of Helen Crompton, a professor of instructional technology at Old Dominion University, if ChatGPT makes it easy to cheat on an assignment, teachers should throw out the assignment rather than ban the chatbot.

 

We need to change how we assess learning, says Culatta: “Did ChatGPT kill assessments? They were probably already dead, and they’ve been in zombie mode for a long time. What ChatGPT did was call us out on that.”

 

Critical thinking

Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, has noticed classroom discussions starting to change in the months since ChatGPT’s release. Although she first started to talk to her undergraduate students about the technology out of a sense of duty, she now thinks that ChatGPT could help teachers shift away from an excessive focus on final results. Getting a class to engage with AI and think critically about what it generates could make teaching feel more human, she says, “rather than asking students to write and perform like robots.”

 

This idea isn’t new. Generations of teachers have subscribed to a framework known as Bloom’s taxonomy, introduced by the educational psychologist Benjamin Bloom in the 1950s, in which basic knowledge of facts is just the bedrock on which other forms of learning, such as analysis and evaluation, sit. Teachers like Donahoe and Crompton think that chatbots could help teach those other skills.

 

In the past, Donahoe would give her students writing assignments in which they had to make an argument for something, and she would grade them on the text they turned in. This semester, she asked her students to use ChatGPT to generate an argument and then had them annotate it according to how effective they thought the argument was for a specific audience. Then they turned in a rewrite based on their criticism.

Breaking down the assignment in this way also helps students focus on specific skills without getting sidetracked. Donahoe found, for example, that using ChatGPT to generate a first draft helped some students stop worrying about the blank page and instead focus on the critical phase of the assignment. “It can help you move beyond particular pain points when those pain points aren’t necessarily part of the learning goals of the assignment,” she says.

 

Smith, the bioscience professor, is also experimenting with ChatGPT assignments. The hand-wringing around it reminds him of the anxiety many teachers experienced a couple of years ago during the pandemic. With students stuck at home, teachers had to find ways to set assignments whose answers couldn’t simply be Googled. But what he found was that Googling well—knowing what to ask for and what to make of the results—was itself a skill worth teaching.

 

Smith thinks chatbots could be the same way. If his undergraduate students want to use ChatGPT in their written assignments, he will assess the prompt as well as—or even rather than—the essay itself. “Knowing the words to use in a prompt and then understanding the output that comes back is important,” he says. “We need to teach how to do that.”

 

The new education

These changing attitudes reflect a wider shift in the role that teachers play, says Stansbury. Information that was once dispensed in the classroom is now everywhere: first online, then in chatbots. What educators must now do is show students not only how to find it, but what information to trust and what not to, and how to tell the difference. “Teachers are no longer gatekeepers of information, but facilitators,” she says.

 

In fact, teachers are finding opportunities in the misinformation and bias that large language models often produce. These shortcomings can kick off productive discussions, says Crompton: “The fact that it’s not perfect is great.”

 
