Global health organizations and non-governmental organizations (NGOs) are increasingly using artificial intelligence to create images depicting extreme poverty and human suffering. The practice has sparked significant debate among global health professionals, who warn that it creates a new form of "poverty porn" that reinforces harmful stereotypes while sidestepping the ethical complexities of real-world photography.
Key Takeaways
- NGOs are using AI-generated images for campaigns related to hunger, child marriage, and sexual violence.
- Primary motivations for using AI include lower costs and bypassing the need to obtain consent from real individuals.
- Critics have labeled the trend "poverty porn 2.0," arguing it perpetuates exaggerated and often racialized stereotypes.
- Stock photo websites now host numerous AI-generated images of poverty, making them easily accessible for campaigns.
- Some organizations, like the UN, have removed content after criticism, while others have updated their internal guidelines on AI use.
A New Era of Digital Depiction
Health and aid organizations are turning to AI image generators for their social media and advocacy campaigns. According to experts in the field, this trend is growing rapidly. Noah Arnold, who works with the Swiss-based ethical imagery organization Fairpicture, stated, "All over the place, people are using it. Some are actively using AI imagery, and others, we know that they’re experimenting at least."
The images often depict scenes designed to evoke strong emotional responses. Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, has been studying this phenomenon. He reports that these AI creations frequently rely on established visual clichés associated with hardship. "The images replicate the visual grammar of poverty – children with empty plates, cracked earth, stereotypical visuals," Alenichev explained.
Alenichev has collected over 100 such images used in campaigns against hunger and sexual violence. Examples he shared with journalists include exaggerated scenes of children in muddy water and a young African girl in a wedding dress with a tear on her cheek. He argues this represents a modern evolution of an old problem, which he calls "poverty porn 2.0" in a comment piece for The Lancet Global Health.
Why Are Organizations Using AI Images?
The shift towards synthetic imagery is driven by two main factors: cost and consent. Commissioning professional photography in remote or sensitive locations can be expensive. Furthermore, obtaining informed and ethical consent from vulnerable individuals, especially children, is a complex process. AI offers a seemingly simple alternative that is cheap and bypasses these consent procedures entirely. According to Alenichev, budget cuts, particularly from US funding sources, have made this low-cost option more attractive to NGOs.
The Role of Stock Photo Platforms
The proliferation of these images is facilitated by major stock photography websites. Platforms like Adobe Stock and Freepik now return dozens of AI-generated results for search terms like "poverty." Many of these images come with descriptive, and often problematic, captions.
Examples of captions found on these platforms include: "Photorealistic kid in refugee camp" and "Caucasian white volunteer provides medical consultation to young black children in African village." Alenichev noted the deeply ingrained biases in these depictions. "They are so racialised. They should never even let those be published because it’s like the worst stereotypes about Africa, or India, or you name it," he said.
The Market for Synthetic Images
On platforms like Freepik, a global community of users can generate and upload AI images. When a customer licenses one of these images, the creator receives a fee. Adobe sells licenses for some of these stereotypical images for approximately £60, creating a commercial incentive for their production.
Joaquín Abela, the CEO of Freepik, addressed the issue by placing some responsibility on the consumer. He stated that the platform's content is generated by its users and that Freepik has made efforts to "inject diversity" in other areas, such as corporate imagery. However, he suggested that controlling user-generated content that meets market demand is a monumental task. "It’s like trying to dry the ocean," Abela said. "If customers worldwide want images a certain way, there is absolutely nothing that anyone can do."
High-Profile Cases and Industry Response
Several prominent organizations have used AI-generated imagery, drawing both attention and criticism. In 2023, the Dutch branch of the UK-based charity Plan International used AI images in a video campaign against child marriage. The visuals included a girl with a black eye and a pregnant teenager. The charity stated its goal was to safeguard "the privacy and dignity of real girls."
In another instance, the United Nations posted a YouTube video featuring AI-generated "re-enactments" of sexual violence in conflict, including synthetic testimony from a Burundian woman. Following an inquiry from The Guardian, the video was removed.
"The video in question... has been taken down, as we believed it shows improper use of AI, and may pose risks regarding information integrity," a UN Peacekeeping spokesperson said. They affirmed the UN's commitment to supporting victims through "innovation and creative advocacy."
The responses from within the sector have been mixed. Following its 2023 campaign, Plan International has, as of 2024, "adopted guidance advising against using AI to depict individual children," a spokesperson confirmed. In contrast, Adobe declined to comment on the presence of these images on its platform.
Ethical Implications and Future Risks
The debate over AI-generated poverty images comes after years of discussion within the aid sector about ethical storytelling. Many professionals have worked to move away from exploitative imagery toward more dignified representations of people experiencing hardship. Kate Kardol, an NGO communications consultant, expressed her dismay at the new trend.
"It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal," Kardol said, noting that the images frightened her.
A significant long-term risk is the potential for these biased images to pollute future AI models. Generative AI systems learn from vast datasets of existing online content. Alenichev warns that as more stereotypical AI images are published, they will inevitably be absorbed into the training data for the next generation of AI. This could create a feedback loop, amplifying prejudice and making it even harder to generate nuanced, respectful imagery in the future.
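The dynamic Alenichev describes can be made concrete with a minimal toy simulation. This is only a sketch with purely illustrative numbers, not data from the article: it assumes each model generation over-produces the dominant cliché by a fixed factor, and that a fixed fraction of its outputs flows back into the pool that trains the next model.

```python
# Toy simulation of the feedback loop described above: each "generation" of an
# image model trains on a pool of online images, part of which now consists of
# stereotyped outputs from the previous generation. All parameters below are
# illustrative assumptions, not measurements.

STEREOTYPED_SHARE_START = 0.10  # assumed share of stereotyped images online today
MODEL_AMPLIFICATION = 1.5       # assumed: the model over-produces the dominant cliché
SYNTHETIC_FRACTION = 0.30       # assumed share of the next training pool that is AI-made

share = STEREOTYPED_SHARE_START
for generation in range(1, 6):
    # The model learns from the current pool and over-represents the cliché.
    model_output_share = min(1.0, share * MODEL_AMPLIFICATION)
    # Synthetic images flow back into the pool that trains the next model.
    share = (1 - SYNTHETIC_FRACTION) * share + SYNTHETIC_FRACTION * model_output_share
    print(f"generation {generation}: stereotyped share of training pool ~ {share:.2f}")
```

Under these assumed parameters, the stereotyped share of the training pool climbs with every generation until it saturates, which is the amplification effect researchers warn about.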