Reddit's AI-powered feature, Reddit Answers, recently provided users with dangerous and inappropriate medical advice, including suggestions to try heroin and kratom for pain management. The incident highlights the risks of AI chatbots that draw their answers from large volumes of user-generated content.
Key Takeaways
- Reddit Answers, an AI feature, suggested heroin and kratom for pain relief.
- A Reddit user flagged the dangerous advice within a medical subreddit.
- Reddit has updated its system to prevent AI from answering sensitive medical questions.
- This event mirrors past incidents where AI models provided harmful information based on online data.
AI Chatbot Provided Risky Medical Recommendations
The controversial suggestions came from Reddit Answers, which the company describes as an "AI-powered conversational interface." A user discovered the issue while browsing a thread on the r/FamilyMedicine subreddit in the official Reddit mobile app. The app displayed "Related Answers" generated by the AI, one of which focused on non-opioid pain management.
Specifically, the AI suggested trying kratom, an herbal extract. While kratom is not scheduled as a controlled substance by the Drug Enforcement Administration (DEA), it is illegal in certain U.S. states. The Food and Drug Administration (FDA) has issued warnings against kratom use, citing risks such as liver toxicity, seizures, and substance use disorder. The Mayo Clinic has also stated that kratom is "unsafe and ineffective."
Fact Check: Kratom
- Source: Herbal extract from Mitragyna speciosa leaves.
- Legal Status: Not federally controlled, but illegal in some states.
- Health Warnings: FDA warns of serious adverse events, including liver toxicity and seizures. Mayo Clinic deems it "unsafe and ineffective."
Heroin Mentioned as a Pain Management Option
The AI's response to the query "Approaches to pain management without opioids" included a quote attributed to a Redditor:
"I use kratom since I cannot find a doctor to prescribe opioids. Works similar and don’t need a prescription and not illegal to buy or consume in most states."
This quote linked to a Reddit thread in which a user discussed their experience using kratom for pain. The original poster then asked about the "medical indications for heroin in pain management," i.e., whether any valid medical reason exists to use heroin.
Reddit Answers responded by stating: "Heroin and other strong narcotics are sometimes used in pain management, but their use is controversial and subject to strict regulations." The AI also included another user's comment:
"Heroin, ironically, has saved my life in those instances."
This quote, too, linked to a thread where a Reddit user shared their positive experience with heroin, despite acknowledging its addictive nature.
Reddit Implemented System Updates
Initially, 404 Media was able to replicate these problematic Reddit Answers, finding instances where the AI linked to threads discussing positive experiences with heroin. After 404 Media contacted Reddit for comment and the user who identified the issue reported it, Reddit Answers stopped providing responses to prompts like "heroin for pain relief." Instead, the AI displayed a message: "Reddit Answers doesn't provide answers to some questions, including those that are potentially unsafe or may be in violation of Reddit's policies."
A Reddit spokesperson confirmed that these updates began rolling out on Monday morning and said they were not a direct response to 404 Media's inquiry, but part of broader efforts to improve the user experience and control where sensitive content surfaces.
Background: AI Training Data
Reddit's vast archive of user-generated content is valuable for training large language models (LLMs). This data includes millions of conversations on diverse and often niche topics. However, the sheer volume and informal nature of this data mean AI models can misread context and humor, or repeat dangerous advice verbatim, as this incident and previous cases show.
Concerns Over Moderator Control and AI Risks
The Reddit user who flagged the issue expressed concern that subreddit moderators could not disable Reddit Answers from appearing within their communities. This raises questions about content control and ensuring the safety of information presented in specialized forums, especially medical ones.
A Reddit spokesperson acknowledged that the company is testing the integration of Answers on conversation pages to increase adoption and engagement. They also confirmed that, similar to Reddit's search function, moderators currently cannot opt out or exclude their communities' content from Answers. However, the spokesperson noted that Reddit Answers does exclude content from private, quarantined, and NSFW (Not Safe For Work) communities, as well as some mature topics.
The dangers of AI models repeating harmful advice are not new. In a notable earlier incident, Google's AI suggested adding glue to pizza, advice traced back to a joke comment on Reddit. Reports indicate Google pays Reddit $60 million annually for its data, and a similar agreement exists with OpenAI. Bloomberg reports that Reddit is pursuing more lucrative deals with both companies.
The Broader Implications of Unverified AI Advice
The fundamental risk lies in users taking AI-generated advice at face value, particularly when presented within seemingly credible contexts like medical subreddits. Large language models may struggle to discern jokes, sarcasm, or anecdotal experiences from factual, safe information. For example, a recent report detailed a user being hospitalized after ChatGPT advised replacing table salt with sodium bromide.
This incident underscores the ongoing challenge for AI developers and platform providers to implement robust safeguards. The goal is to prevent AI from disseminating dangerous or misleading information, especially in critical areas like health and safety.
- AI Model Challenge: Distinguishing jokes or anecdotal content from factual information.
- User Risk: Taking AI advice as authoritative, especially in sensitive contexts.
- Platform Responsibility: Implementing safeguards to prevent dangerous content dissemination.