
When historian Jan Burzlaff of Cornell University examined how well artificial intelligence could summarize Holocaust testimonies, he encountered a telling failure. Asked to summarize the account of Luisa D., who survived the Holocaust as a seven-year-old, ChatGPT omitted a critical detail: Luisa’s mother cutting her own finger to give her daughter drops of blood — described as “the faintest trace of moisture” — to help her survive. The omission illustrates AI’s limits in capturing the profound emotional weight of historical narratives.
In his recent paper titled “Fragments, Not Prompts: Five Principles for Writing History in the Age of AI,” published on September 11, 2023, in the journal *Rethinking History*, Burzlaff argues that human historians are essential for interpreting the emotional and moral complexities behind historical events. He cautions that an increasing reliance on AI technology may lead academics to overlook the deeper meanings embedded within history.
Burzlaff asserts, “If AI falters with Holocaust testimony – the most extreme case of human suffering in modern history – it will distort more subtle histories too.” He emphasizes that the act of summarization can obscure the fractures, silences, and ethical dimensions that define such narratives.
A recent Microsoft study ranked historians second among the professions most threatened by AI. Burzlaff, who specializes in the history of Nazi Germany, contends that historians possess skills AI currently lacks, above all the ability to convey human suffering authentically. His ongoing research uses ChatGPT to summarize testimonies recorded in 1995 by Holocaust survivors in La Paz, Kraków, and Connecticut; in his tests, the tool consistently glossed over the emotional suffering of the people involved.
As tools like ChatGPT become more prevalent in education and public discourse, Burzlaff believes historians must critically assess what these systems can and cannot achieve. He writes, “They summarize but do not listen, reproduce but do not interpret, and excel at coherence but falter at contradiction.” He urges historians to reaffirm their role in recognizing the meaning of historical events, emphasizing the need for a nuanced understanding of human experience.
The impetus for Burzlaff’s article arose from his course, JWST 3825: The Past and Future of Holocaust Survivor Testimonies. This course juxtaposed close listening to survivor accounts with a responsible use of AI technologies like ChatGPT. Observing students engage with AI’s capabilities inspired him to formulate guidelines for historians and educators, particularly those addressing sensitive subjects such as trauma and genocide.
Burzlaff emphasizes that the stakes extend beyond the memory of the Holocaust: what is at risk, he argues, is how societies interpret their pasts in an era dominated by predictive technologies. He encourages historians to embrace the diversity of survivor accounts, recognizing that individual experiences cannot always be categorized neatly by algorithms.
“If historical writing can be done by a machine, then it was never historical enough,” Burzlaff concludes, advocating for an approach that prioritizes the ethical, intellectual, and stylistic responsibilities inherent in the discipline of history.
As the discourse surrounding AI’s role in academia continues to evolve, the challenge remains for historians to navigate these technological advancements while preserving the richness and complexity of the human experience.