A German professor has lost two years of academic work stored in OpenAI's ChatGPT after changing a single data privacy setting. The incident, which resulted in the permanent deletion of his entire chat history without warning, has ignited a debate over the reliability of generative AI tools for professional and academic use.
Key Takeaways
- Marcel Bucher, a professor at the University of Cologne, lost extensive academic materials stored in ChatGPT.
- The data was permanently deleted after he temporarily disabled the "data consent" option in the settings.
- There was no warning message or option to undo the deletion, and OpenAI support could not recover the files.
- The event has drawn sharp criticism online, with many questioning the professor's reliance on AI for core academic tasks.
A Routine Task Turns into a Digital Disaster
For Marcel Bucher, a professor of plant sciences at the University of Cologne, ChatGPT had become an indispensable daily assistant. Since signing up for the paid ChatGPT Plus plan two years ago, he had integrated the AI into nearly every facet of his academic life.
He used the tool to draft emails, structure grant applications, revise publications, and prepare course materials. According to an essay he penned for the journal Nature, the AI's ability to remember conversation context provided a stable and continuous workspace for his projects.
However, this reliance came to an abrupt end in August. Curious about the platform's functionality, Bucher decided to test a setting. "I temporarily disabled the ‘data consent’ option because I wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data," he wrote.
The result was immediate and catastrophic. "At that moment, all of my chats were permanently deleted and the project folders were emptied — two years of carefully structured academic work disappeared," Bucher explained. "No warning appeared. There was no undo option. Just a blank page."
What Was Lost?
Professor Bucher reported that the deleted data included a wide range of professional materials, such as:
- Drafts for emails and course descriptions
- Structured grant applications
- Revisions for academic publications
- Lecture preparations and exam questions
- Analysis of student responses
No Way Back: The Futility of Data Recovery
Initially believing it to be a temporary glitch, Bucher attempted several standard troubleshooting steps. He reinstalled the application and tried accessing his account through different web browsers, but his extensive archive of conversations remained gone.
His attempts to seek help from the company proved equally fruitless. He first encountered an AI support agent that was unable to assist. After eventually reaching a human representative at OpenAI, the conclusion was the same: the data was irrevocably lost.
While he had saved partial copies of some materials elsewhere, Bucher confirmed that "large parts of my work were lost forever." The incident highlights a critical vulnerability for professionals who may be entrusting significant intellectual property to platforms without robust data protection protocols.
A Cautionary Tale for an AI-Driven World
In his reflection on the event, Bucher pointed to a growing trend within academic and other professional institutions. "We are increasingly being encouraged to integrate generative AI into research and teaching," he noted, explaining that universities are actively experimenting with embedding these tools into their curricula.
However, he argues his experience reveals a fundamental flaw in this rapid adoption. He believes the tools were not developed with the necessary standards for professional work.
"These tools were not developed with academic standards of reliability and accountability in mind. If a single click can irrevocably delete years of work, ChatGPT cannot... be considered completely safe for professional use."
The episode serves as a stark reminder of the risks associated with cloud-based platforms where users have limited control over their data and the underlying infrastructure. It raises important questions about data ownership, backup procedures, and the true cost of convenience offered by AI assistants.
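One practical safeguard the episode points to is keeping local copies of anything stored in a chat platform. ChatGPT offers a built-in "Export data" option that emails the user an archive containing a `conversations.json` file; the sketch below shows how such an export could be unpacked into plain-text files for a personal backup. It is a minimal illustration, not an official tool: the key names (`title`, `mapping`, `message`, `content`, `parts`) reflect the layout commonly seen in these exports and may need adjusting if OpenAI changes the format.

```python
import json
from pathlib import Path

def archive_conversations(export_json: str, out_dir: str) -> int:
    """Write each exported conversation to its own .txt file.

    Assumes the export is a JSON list of conversation objects, each
    with a 'title' and a 'mapping' of message nodes (the layout seen
    in ChatGPT's "Export data" archive -- adjust keys if yours differs).
    Returns the number of conversations archived.
    """
    conversations = json.loads(export_json)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for i, conv in enumerate(conversations):
        title = conv.get("title") or f"conversation-{i}"
        lines = []
        for node in conv.get("mapping", {}).values():
            msg = (node or {}).get("message")
            if not msg:
                continue
            # Message text lives in content.parts as a list of strings.
            parts = msg.get("content", {}).get("parts", [])
            lines.extend(p for p in parts if isinstance(p, str) and p.strip())
        # Sanitize the title so it is safe as a filename.
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        (out / f"{safe}.txt").write_text("\n\n".join(lines), encoding="utf-8")
        count += 1
    return count
```

Run periodically against a fresh export, a script like this would have left Professor Bucher with a readable offline archive, independent of any setting inside the platform itself.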
Public Reaction: Scorn Over Sympathy
While the story might have elicited sympathy in a different context, the public response has been largely critical. On social media platforms like Bluesky, users were quick to condemn the professor's extensive use of AI for tasks they felt he should have been doing himself.
One user commented, "Amazing sob story: ‘ChatGPT deleted all the work I hadn’t done’."
Another took a harsher stance, writing, "Maybe next time, actually do the work you are paid to do *yourself*, instead of outsourcing it to the... plagiarism machine."
The Broader Debate
The incident touches on a wider societal skepticism toward generative AI. Concerns range from the technology's potential for generating misinformation and enabling academic dishonesty to its environmental impact and ethical sourcing of training data.
Some online commentators even questioned the authenticity of Bucher's essay in Nature, speculating that it too might have been written with AI assistance. The backlash reflects a growing divide between those who embrace AI as an inevitable future and those who view it with deep suspicion, particularly within creative and academic fields.
As organizations continue to push for greater AI integration, stories like Professor Bucher's underscore the critical need for transparent data policies, robust safety features, and a clear understanding of the limitations and risks involved in outsourcing intellectual work to a machine.