Since the release of ChatGPT, which can generate original text that seems like it was written by a human, educators have expressed concern about students using the tool to write their essays for them. So naturally, companies are rushing to create tools that they say can help detect when text is written by a bot.
But will these tools work? And even if they do at first, will this approach continue to be effective as AI gets more sophisticated?
Or does this new breed of AI require a new approach to checking for academic dishonesty?
On today’s episode of the EdSurge Podcast, we’re going behind the scenes on some up-to-the-minute efforts to detect ChatGPT — and other efforts to incorporate checks on the technology that would help educators.
To do that, EdSurge talked with educators and technologists at the forefront of exploring these questions, including:
- Edward Tian, a senior at Princeton University who built a bot detector called GPTZero;
- Eric Wang, vice president of AI at Turnitin, which plans to release an AI detector before the end of the academic year;
- Alfred Guy, director of undergraduate writing at Yale University, who is watching the growth of AI chatbots closely to see how to adjust his teaching;
- and Sal Khan, founder and CEO of the nonprofit Khan Academy, which last week announced a new tool that attempts to incorporate the AI behind ChatGPT into its online system for students.
Their efforts to tame this powerful new technology show that the stakes go beyond checking student work for cheating. What is the best way for educators to prepare students for a time when it will be hard to tell whether anything you read was written by a human or a bot?