OpenAI has updated its usage policies to give clearer guidance on how its AI tools, including ChatGPT, may be used when seeking medical and legal information. The company emphasized that this is a clarification of its long-standing position rather than a new restriction, stating that its services are not a substitute for licensed professional advice.
The update comes amid growing public reliance on AI for sensitive queries and academic studies highlighting the potential for inaccurate or misleading information from large language models.
Key Takeaways
- OpenAI's October 29 policy update specifies that users cannot use its services for "tailored advice that requires a license" without professional involvement.
- The company stated this is not a change in its rules but a clarification of existing policy.
- Academic research has raised concerns about the accuracy and persuasiveness of AI-generated medical advice.
- ChatGPT remains a resource for understanding general health and legal topics, but not for personalized, professional guidance.
A Refined Policy for User Safety
OpenAI has refined the language in its usage policy to more directly address the issue of professional advice. The updated terms, effective October 29, now explicitly prohibit using the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
This is a more specific version of its previous policy from January, which had a broader restriction against activities that could significantly impact the “safety, wellbeing, or rights of others,” including “providing tailored legal, medical/health, or financial advice.”
In a statement, the company sought to dispel any misunderstanding that this was a new ban. An OpenAI spokesperson confirmed that the core principle remains unchanged.
“This is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
The move is seen as an effort to better educate users on the intended purpose and limitations of AI, ensuring they do not mistake its informational capabilities for professional, licensed consultation.
The Dangers of Unverified AI Health Information
The clarification from OpenAI is timely, as recent studies have highlighted the potential risks of relying on chatbots for health guidance. As tools like ChatGPT become a common first stop for people with medical questions, researchers are examining the quality of the information provided.
Why the Distinction Matters
The difference between general information and tailored advice is critical. An AI can explain what a common legal term means or describe the symptoms of a particular illness. However, it cannot apply that information to an individual's specific, unique circumstances, which is the role of a licensed lawyer or doctor who understands the full context of a situation.
A study led by researchers at the University of Waterloo tested ChatGPT-4 with a series of medical questions adapted from a licensing exam. The results showed a significant gap in reliability.
AI Accuracy Under Scrutiny
According to the University of Waterloo study, ChatGPT-4's performance on the medical exam questions fell well short of reliable:
- Only 31% of its answers were deemed entirely correct.
- Only 34% of its answers were judged to be clear.
These findings underscore the risk that users could receive information that is incomplete, out of context, or factually incorrect, potentially leading to harmful decisions about their health.
The Persuasive Power of Confident AI
Beyond simple accuracy, another concern is the convincing nature of AI-generated text. A separate study from the University of British Columbia, published in October, found that ChatGPT can be so persuasive that it influences patient-doctor interactions.
Researchers noted that the chatbot's language often comes across as more agreeable and confident than that of a human. This can make it difficult for a user to recognize when the information is inaccurate. The AI's confident tone can create a false sense of trustworthiness.
This phenomenon has led to real-world consequences, with some doctors reporting that patients arrive at appointments with preconceived notions based on AI-generated advice, making it more challenging to provide proper care.
Navigating the Future of AI and Information
OpenAI’s policy clarification reinforces a critical message for the age of AI: these powerful tools are assistants, not authorities. They can be incredibly useful for research, summarizing complex topics, and generating ideas.
However, when it comes to high-stakes fields like medicine and law, where personalized advice can have life-altering consequences, the involvement of a qualified human professional remains essential.
The updated policy serves as a formal reminder to users to approach AI-generated information with a healthy dose of skepticism and to always consult with a licensed expert for matters concerning their health, legal rights, and financial well-being. As AI technology continues to evolve, establishing clear boundaries for its use is a crucial step in ensuring it is deployed safely and responsibly.