A new report from academic publisher Wiley reveals a significant decline in scientists' trust in artificial intelligence, even as its use in research has surged. Preliminary findings from the publisher's 2025 report show a notable increase in concerns over AI's reliability, particularly its tendency to fabricate information, a phenomenon known as "hallucination."
Despite the technology becoming more advanced, skepticism within the scientific community is growing. This trend suggests that as researchers gain more hands-on experience with AI tools, they are becoming more aware of their limitations and potential risks, leading to a more cautious and critical perspective.
Key Takeaways
- A 2025 Wiley survey shows scientists' concern over AI hallucinations rose to 64%, up from 51% in 2024.
- AI adoption among researchers increased significantly, from 45% in 2024 to 62% in 2025.
- The belief that AI surpasses human abilities in research tasks dropped from over 50% to less than 33% in one year.
- Concerns regarding security, privacy, and ethics have also grown, with security and privacy worries up by 11 percentage points.
 
A Paradox of Rising Use and Falling Trust
New data indicates a complex relationship between the scientific community and artificial intelligence. While more researchers are integrating AI into their work, their confidence in the technology's capabilities is simultaneously decreasing. According to a preview of Wiley's 2025 report, AI usage among scientists jumped from 45% to 62% over the past year.
However, this increased adoption has been met with growing apprehension. The same report highlights a sharp rise in concerns about the fundamental reliability of AI systems. This suggests that practical application is exposing the technology's flaws, moving the conversation from hype to a more grounded assessment of its current state.
Understanding the Data
The findings are based on a comparison of survey data collected by Wiley for its annual reports in 2024 and 2025. These surveys poll a global community of researchers and scientists on their attitudes, usage patterns, and concerns related to artificial intelligence in an academic context.
The decline in optimism is stark. In 2024, more than half of the scientists surveyed believed AI already surpassed human abilities in a majority of research use cases. A year later, fewer than a third hold that view, marking a significant recalibration of expectations within the research community.
The Persistent Problem of AI Hallucinations
A primary driver of this growing mistrust is the issue of AI "hallucinations," where large language models (LLMs) present fabricated information as factual. The Wiley report found that concern over this issue among scientists surged from 51% in 2024 to 64% in 2025.
For researchers, whose work depends on precision and verifiable data, the tendency of AI to confidently invent facts is a critical flaw. This is not a trivial problem; AI-generated falsehoods have already caused significant disruptions in high-stakes fields.
- Legal System: Lawyers have faced sanctions for citing non-existent legal cases created by AI chatbots.
- Medical Practice: Inaccurate information from AI tools poses risks to patient diagnosis and treatment.
- Academic Research: Fabricated citations and data can undermine the integrity of scientific studies.
 
Further complicating the matter, recent tests indicate that as some AI models grow more capable on certain benchmarks, their rate of hallucination can actually increase. This suggests that simply scaling up existing technology may not solve this fundamental reliability problem.
Commercial Pressures and User Preference
Experts note a difficult commercial dynamic at play. Studies suggest that users generally prefer AI models that provide confident and direct answers, even if those answers are incorrect. An AI that frequently admits uncertainty or an inability to find information may be perceived as less capable, potentially driving users to competitors. This creates a disincentive for companies to prioritize accuracy over the appearance of confidence.
Broader Concerns Beyond Factual Accuracy
While hallucinations are a major point of contention, scientists' anxieties extend to other aspects of AI technology. The Wiley survey shows a broad-based increase in caution.
Concerns related to security and privacy saw an 11 percentage point increase from the previous year. As researchers handle sensitive data, the potential for breaches or misuse through AI platforms is a growing worry. Similarly, questions surrounding ethical AI development and a lack of transparency in how models are trained and operate have also intensified.
This trend aligns with previous research suggesting an inverse relationship between AI knowledge and trust. Several studies have concluded that individuals with a deeper understanding of how AI systems work tend to be more skeptical of their capabilities. In contrast, the most enthusiastic proponents of AI are often those with the least technical knowledge.
"These findings follow previous research which concluded that the more people learn about how AI works, the less they trust it. The opposite was also true — AI’s biggest fanboys tended to be those who understood the least about the tech."
As the initial hype surrounding generative AI begins to fade, it is being replaced by a more nuanced and critical evaluation from professionals who use it daily. The scientific community, by its very nature, is trained to be skeptical and to verify information rigorously. Their growing distrust serves as an important indicator of the technology's current limitations and the challenges that lie ahead in making AI a truly reliable tool for research and other critical applications.