If you’ve had any connection to the internet in the past few months, you’ll have heard about a scary new threat to education known as “ChatGPT”.
ChatGPT is a free online program that can produce tailor-made answers in essay form to almost any prompt that is typed in. Fundamentally, the fear comes from the idea that students can now outsource their school work to this tool and fool their teachers into giving them good grades that reflect no genuine learning. While it is true that teachers will have to adjust to the existence of this new online tool, I believe that any changes we will be forced to make are ones we should welcome.
Far from undermining education, ChatGPT is exposing weaknesses that were already there. The reality is that while ChatGPT can write credible poems and essays, it cannot write good ones. Its work is formulaic, uninteresting, and—strangely for an “intelligence” with direct access to millions of works—insubstantial. For instance, in response to an essay prompt I gave it, the bot wrote that the Korean War “was not a direct result of the Cold War but rather a byproduct of WWII,” a statement that manages to be both pedantic and incorrect. The AI “hallucinates,” confidently asserting false statements. It wrote that “it is often overlooked” that the Korean War was also a civil conflict. Well, no, that fact is very commonly noted. Real students sound less professorial and say smarter things.
While ChatGPT is correctly trumpeted as a breakthrough in readily accessible “artificial intelligence,” that does not mean that it is close to replicating the human intelligence that all students already possess.
As Yann LeCun, chief AI scientist at Meta and professor of computer science at NYU, recently explained, AIs like ChatGPT are trained entirely on text and lack the mental models of the world that every human possesses, the models that give us our capacity for “common sense.” They do not really understand. Rather, they know how to produce the words that commonly accompany understanding.
The situation reminds me of “Clever Hans”, a trained horse who amazed audiences in the early 20th century with his “ability” to do math and other intellectual feats. It turned out that the horse didn’t understand math at all. Rather, his owner had unintentionally trained him to read unconscious cues so precisely that he could consistently give the “correct” answer and get the treat. Something similar happens when large language models (LLMs) are fine-tuned: the feedback of human raters helps determine which of the model’s responses count as “good.” Thus, while ChatGPT may become another tool in students’ toolkit next to their calculator, Wikipedia, and Grammarly, it is still far from able to “do the work for them.” In an informal “Turing Test” I conducted with some of our faculty, in which they had to distinguish a real student essay from one generated by ChatGPT, they identified the real student work 70% of the time—bad odds for any would-be cheaters.
But even if ChatGPT never learns to fool teachers every time, a deeper problem remains: if all we expect students to do is replicate arguments they can find online, we are doing them a disservice. Fortunately, our strategy at Post Oak is to continually put students into developmentally appropriate environments where they must use their uniquely human abilities. Theoretical problems, i.e., problems out of a book, won’t suffice; these problems have all been worked over and solved. Any “Clever Hans” can learn to come up with the right answers. The real world presents an inexhaustible, ever-renewing source of situations, challenges, questions, and opportunities that require true human intellect to comprehend, and true human ingenuity, creativity, caring, and judgment to solve.
So how do Post Oak teachers know the difference between mere information retrieval and true application of knowledge?
One sure sign of application of knowledge is struggle. Students who struggle are transcending the textbook and learning to grapple with the kinds of problems encountered in the real world. At Post Oak, our teachers excel at creating environments that are rich in opportunities for thinking creatively, applying knowledge, transferring skills to unfamiliar contexts, and practicing citizenship, all in a way that is developmentally appropriate. Post Oak parents understand that growth necessarily involves struggle. The furrowed brow, the sweat, perhaps some frustration and complaining, the problem that takes days to solve, the essay topic that at first seems inscrutable and then slowly comes into focus after much (sometimes anguished) deliberation: these are the signs of a genuine human personality under construction. We know the experience was developmentally appropriate when students emerge on the other side with their enthusiasm for the work enhanced.
Thus, as an educational leader, a teacher, and a parent, I am challenged, excited, and ultimately grateful for the opportunity presented by the invasion of the chatbots. At Post Oak, we see their advent as a healthy challenge that will only reinforce our commitment to the “bold pathway” on which we have already embarked.