Justice systems across Latin America are confronting a dual challenge involving artificial intelligence. While courts are increasingly adopting AI to manage overwhelming caseloads, they are simultaneously struggling to prosecute crimes committed using AI-generated content, leaving victims with limited legal options.
This technological gap is exposing significant vulnerabilities in legal frameworks that were not designed for deepfakes and other forms of synthetic media, forcing lawmakers and judges to adapt to a rapidly changing digital landscape.
Key Takeaways
- Courts in Latin America are finding it difficult to secure convictions in cases involving AI-generated deepfakes due to outdated laws.
- While less than 1% of deepfakes originate in the region, countries like Mexico, Brazil, and Colombia are seeing some of the world's fastest growth in deepfake creation.
- Simultaneously, justice systems are adopting AI tools to clear case backlogs, with 85% of Colombian judges reportedly using generative AI.
- Experts warn that regional AI regulations are often too abstract and fail to address specific local challenges, such as algorithmic bias in policing.
The Legislative Gap in Prosecuting Deepfakes
The proliferation of AI-generated images and videos has created new avenues for criminal activity, from political disinformation to personal harassment. However, prosecutors in several Latin American nations have found their hands tied by legal codes that do not specifically address these new forms of harm.
In one high-profile example from Colombia, an investigation into a shooting at a political rally for presidential candidate Miguel Uribe Turbay was significantly complicated by a flood of deepfake videos depicting the attack. Law enforcement had to dedicate extensive resources to verify and debunk the fabricated content.
Lucia Camacho, a public policy coordinator at the digital rights group Derechos Digitales, noted that authorities often lack the technical expertise to handle such cases. "Law enforcement doesn’t yet have the capacity to look at these judicial matters beyond just asking whether a piece of evidence is real or not," she stated. This limitation, she explained, can prevent victims from receiving adequate legal protection.
Challenges in Securing Convictions
Recent court cases in Mexico and Argentina highlight the difficulties prosecutors face. In Mexico, a 20-year-old man accused of using AI to create explicit images of more than 1,000 women and minors was acquitted due to a "lack of sufficient evidence to prove his involvement." Although the ruling has been appealed, the initial outcome demonstrates the challenge of linking a suspect to the creation of digital content under current laws.
Similarly, in Argentina, an 18-year-old allegedly created and distributed pornographic deepfake videos of at least 16 female classmates. Since creating such content is not a specific crime, the victims' attorney, José M. D’Antona, had to build a case based on existing digital crimes legislation and the psychological damage inflicted. Despite orders to remove the content, D’Antona confirmed that the victims' names remain associated with the material on some websites. "The damage persists," he said.
Global Deepfake Surge
According to a report by Security Hero, the creation of deepfake videos increased by 550% worldwide between 2019 and 2023. While Asia accounts for over 70% of this content, Latin America is experiencing a rapid rise in deepfake creation and distribution.
A Patchwork of Regulatory Responses
Governments in the region are beginning to respond, but progress is uneven. Brazil has banned the use of deepfakes in election campaigns, while Peru and Colombia have passed laws that classify the use of deepfakes as an aggravating factor in a crime. Argentina is considering a bill that would impose prison sentences of up to six years for creating malicious AI-generated content.
However, digital rights advocates argue that these efforts are not comprehensive enough. Franco Giandana, a policy analyst at Access Now, observed that many Latin American countries use the European Union's AI framework as a model but fail to create equally robust local versions.
"Often, the language is too abstract and there’s still little grasp of the national and regional challenges — not just to regulate AI but to build a coherent development strategy suited to our context," Giandana said.
This lack of local grounding is also evident in the use of other AI technologies, such as facial recognition systems deployed by police forces, which have led to wrongful arrests and raised concerns about racial bias.
The Risk of Algorithmic Bias
Police in Brazil have mistakenly arrested innocent individuals based on flawed facial recognition matches. Experts like Dilmar Villena of Hiperderecho explain that many AI systems are trained on data from predominantly white populations, leading to higher error rates or "false positives" when scanning Indigenous, Afro-descendant, and female faces.
Courts Embrace AI for Efficiency
While prosecutors grapple with AI-driven crime, judges are turning to AI to improve the efficiency of the justice system. Facing severe backlogs, courts across the region are implementing AI tools to automate repetitive tasks and classify cases.
In Colombia, Judge Juan Manuel Padilla made headlines in 2023 when he used ChatGPT to assist in drafting a ruling for a case involving an autistic child's medical treatment. This event set a precedent, and Colombia has since established guidelines for AI use in its courtrooms.
A 2025 study from the Universidad de los Andes found that approximately 85% of judges in Colombia now use free versions of AI tools like ChatGPT and Microsoft Copilot. However, the study also noted that most judges receive little to no formal training on the technology, forcing them to learn on their own.
AI Tools in Action
Several countries have deployed specialized AI systems to streamline judicial processes:
- Brazil: Uses an AI tool called SAJ Digital to accelerate case processing and reduce magistrates' workloads.
- Argentina: Implemented a system named Prometea, which has reportedly reduced the time to process legal opinions from 190 days to just one hour.
- Colombia: The Constitutional Court uses PretorIA, based on Argentina's technology, to sort through over 2,700 citizen legal requests daily.
Despite his reliance on AI for administrative tasks, Judge Padilla believes that regulation alone cannot solve the problem of deepfakes. He argues that the technology is evolving too quickly for any legal framework to keep up.
"Self-regulation is the only efficient path," he told Rest of World. "No legal framework will ever keep pace with the speed of AI." This perspective underscores the complex dilemma facing Latin America: how to harness the benefits of AI for justice while simultaneously protecting citizens from its potential for harm.