ChatGPT, the AI-powered chatbot, is something new. In response to prompts, it can answer questions, write essays, generate code, and otherwise do your creative bidding. It has already passed medical licensing, MBA, and law school exams; co-authored scientific papers; ghostwritten sermons; and inspired headlines about the death of the college essay and the obsolescence of white-collar jobs.
But ChatGPT is also something very old—a golem. In Jewish folklore, golems were artificial humans created via esoteric incantations (like a programming language) to perform tasks both menial and superhuman (like computer software).
Most famously, the Maharal of Prague, a 16th-century sage, is said to have fashioned a golem out of clay to protect the Jewish community from pogroms. Eventually, the golem ran amok, and the Maharal deactivated it by erasing the letter aleph from the word emet (“truth”) written on its forehead, leaving only met (“dead”).
Like a golem, ChatGPT doesn’t think for itself: it responds to commands. It takes those commands at face value, which is to say, without nuance or judgment. The Maharal’s wife ordered the golem to fetch water for the kitchen from a nearby brook. Unsupervised, the golem kept pouring water into the kitchen until it flooded. Oblivious to the consequences of its actions, the golem stopped only at the Maharal’s command, once water had burst into the courtyard.
ChatGPT can perform the white-collar equivalent of fetching water. At your command, it will draft emails, debug code, and argue a case. But like a golem, it requires close supervision. That’s because, as a large language model, ChatGPT bases its answers on statistical patterns in its training data, not on an experiential understanding of the world or a human sense of right and wrong.
Researchers use the term “hallucination” to describe AI’s tendency to confidently assert plausible-sounding falsehoods. Kevin Scott, Microsoft’s chief technology officer, has said of AI that “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.” ChatGPT and its eventual successors may not be capable of flooding the kitchen, but they can flood the internet, the public sphere, and the minds of unstable individuals with even more damaging lies.
Stack Overflow, a question-and-answer website for programmers, banned posting ChatGPT-generated content “because the average rate of getting correct answers from ChatGPT is too low.” Researchers at NewsGuard, which tracks online misinformation, found that ChatGPT could be induced to generate false narratives about controversial topics 80 per cent of the time. Although the bot has built-in safeguards to prevent abuse, crafty users have already found loopholes to make it bypass its own restrictions.
Two of OpenAI’s own researchers contributed to a report warning that “language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor.” If ChatGPT can help a high-school English student pretend she’s literate enough to write an A+ essay, it can provide the same persuasive veneer to conspiracy theorists, online scammers, and government-sponsored trolls.
Golems were said to possess a portion of da’at (knowledge) but to lack the other facets of intelligence: chokhmah (wisdom) and binah (understanding). While ChatGPT seems more advanced than the Maharal’s golem—crafted as it is out of gigabytes instead of clay—it is subject to the same limitations. Both artificial beings were given da’at, or data, but not the wisdom or judgment to interpret, contextualize, and actualize that data in a responsible way.
Moreover, the Maharal’s golem was created by a wise man for a noble task. ChatGPT was created by an amoral Silicon Valley startup whose CEO once said “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
AI-powered chatbots may be useful for taking on some of the drudge work of the knowledge economy. But like golems, they’re instruments of humans, not our replacements. And if they run amok—becoming tools to conjure up scams and propaganda, malware and misinformation—they should, like any golem, be rendered met.
Ben Shragge is the HJN’s digital editor. He lives in Boston with his wife and young daughter.