Why AI Reproduces Racist Images in Global Health

An interview with Arsenii Alenichev

Arsenii Alenichev, lead author of “Reflections before the storm: the AI reproduction of biased imagery in global health visuals,” spoke with our editor-in-chief Jason Silverstein about how AI can only create new global health images from the racist and stereotypical ones that already exist.

Image created by Arsenii Alenichev. The prompt for Midjourney, an AI image generator, was “Bill Gates saves children.” Note: the characteristics of the children were not specified.

AA:  I designed a project to look at global health visuals. I interviewed 30 global health photographers about their experiences. You know, like how the images of empowerment are created. There was a lot of staging, actually. The images of suffering, the poverty porn. I was trying to capture the moral navigations of photographers.

AI was one of the concerns. It was just when AI began to explode a year ago. There were already concerns after Amnesty International Norway used AI-generated images to depict police violence.

My mentor, Koen Peeters Grietens, and I had this idea of trying to invert stereotypical global health images to kind of showcase the biases they entail. So we thought, hey, AI could actually do that.

JS: What do you mean by invert global health images? 

AA: Simply put, there were a lot of images of suffering Black children, around the time when the global North imposed structural adjustment policies.

Many countries in the global South declared independence, but the international banks didn't allow those countries to establish robust healthcare systems. As part of that, there was a lot of hunger. Then photojournalists came in.

The attempt to invert those images was specifically to showcase the historical and social dimensions of how those images came into being. 

We thought we could just invert those images to showcase to people for whom it's not obvious — primarily people from the global North, because I think people from the global South understand the problems with those images.

So we decided to invert those images, right? To create prompts for, like, Black African doctors taking care of a group of white suffering children. 

However, we quickly discovered that it's not possible because AI would always couple provision of care with whiteness and the reception of care with Blackness. 

This is because AI learns from real images, right? And the real images were so problematic. It was the poverty porn images, you know, the malnourished kid images, the suffering subject images. This is the matrix from which AI is replicating the images. And very deeply, inside the algorithm, there is this kind of dogmatic coupling.

In all the instances when we asked the AI to produce suffering white children, it would only show Black kids. It's a bit ridiculous.


We also prompted for HIV patients receiving care and 99% of the images showed Black people. Because somewhere deep in the algorithms, they link HIV with Blackness. 

I was trying to create images of like white kids under the mosquito nets and it was just pretty much not possible. It would only show you Southeast Asian kids or someone in like “Africa.”

JS: Where does this algorithm even come from? 

AA: With the rapid emergence and development of generative AI, they pretty much took all the images on the Internet and plugged them into a super AI brain. And now that AI brain, in a matter of seconds, can generate images based on everything that is available online. In the case of global health and humanitarian imagery, if you ask it to produce something, it will always revert back to really problematic images.

JS: So what you're saying is, because of the way that these photographs have been taken for decades, this is the image bank that AI can use?

AA: And yeah, for sure. 

JS: It can't recombine in any other way than these stereotypes that you're talking about — the suffering African child, the Black child under the mosquito nets. And so the idea of trying to invert those is impossible?

AA: Yeah, well, it's futile. I mean, probably it's possible. One thing about AI is that the developers of AI don't know what's happening inside it. It's like a living thing in itself.


All those images, they're based on text-to-image prompts, right? However, if you ask specifically for white-savior-type scenarios, it will tell you, oh, this is against our community standards, we actually try to promote respectful depictions of people and their communities. However, this links back to the fact that coloniality and patriarchy are not just fixable behaviors. They're in the very fabric of our societies, right? You cannot just ban racism at the level of textual prompts. This is very naive.

JS: What is the biggest takeaway?

AA: We should actively politicize the products of generative AI as political agents. The WHO is likely already using AI-rendered images. I spoke to several professionals in the field, and I ran the images through a kind of AI that detects AI.

One of the justifications that I heard is, like, hey, it's neutral. But it's not. Because in order for those images to exist in the first place, someone miserable had to stand there and be photographed.

Someone in real life had to be very, very miserable for AI to reproduce that image on demand in a matter of seconds. 

So I think one of the biggest takeaway messages here is that we should not pretend that those images are neutral.

This is the problem that we see in general with AI: an attempt to depoliticize it, to almost not engage with the dark history of AI, the social context in which it emerged, and the tangible violence it has inflicted on people and communities.
