How we accidentally made AI sexist and racist

Samantha Hawley: Hi, I’m Sam Hawley, coming to you from Gadigal Land. This is ABC News Daily. It can write an essay, generate a fake image, help police catch an offender and even detect cancer. It feels like artificial intelligence is taking over everything humans do. But just like some humans, it can also be incredibly sexist and racist. Today, an AI expert on what to watch out for as the computer technology increasingly shapes our day to day lives.

Meredith Broussard: My name is Meredith Broussard. I’m a data journalism professor at NYU, and I’m the author of a new book called More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.

Samantha Hawley: All right, Meredith, it seems at the moment like AI is absolutely taking off. It’s being used in ways most of us probably aren’t even aware of.

Meredith Broussard: It’s a very powerful technology.

Reporter: We are in the midst now of a revolution, an artificial intelligence revolution.

Meredith Broussard: It is used in absolutely every sector nowadays, and you’re using AI many times a day without even realising it.

Reporter: Generative artificial intelligence is quickly taking over the internet and it’s becoming a permanent fixture of our everyday lives.

Meredith Broussard: When you do a Google search, you’re using AI. When you record a Zoom call and have it generate a transcript automatically, you’re using AI.

Reporter: Rhonda O’Keefe is about to take an ultrasound picture of this man’s heart with the help of artificial intelligence.

Reporter: ChatGPT. You’ve probably already heard of it, but this chatbot can write you almost anything you want.

Samantha Hawley: So it’s used in a huge amount of areas. But there is a concern, isn’t there, that it has the potential to be really biased?

Meredith Broussard: It really does. We imagine that AI is going to help us escape from human problems. There’s a kind of bias out there that I call technochauvinism, which is the idea that technological solutions are superior to others. So take the case of facial recognition, for example. Facial recognition works better on light skin than on dark skin. It works better on men than on women. It does not take into account trans and non-binary folks at all. And so when facial recognition is used in policing, it misidentifies people of colour more often. In one case, a man was misidentified by a facial recognition system and was wrongly arrested purely because of a false match.

Reporter: Robert Williams was wrongfully arrested after the Detroit Police Department’s facial recognition software falsely matched his driver’s license photo to security footage of a shoplifter.

Robert Williams: He showed me a picture. He said, well, first he said, when was the last time you was at the Shinola store? I said a couple of years ago. He said, “So that’s not you?” I said, “No, that’s not me.” He turns over another piece of paper. He says, “So I guess that’s not you either?” I held that piece of paper up to my face. I said, “I hope you don’t think all black people look alike.” He turned over another piece of paper and said, “So I guess the computer got it wrong?” And I’m like, “I guess so. Am I free to go?” And he was like, “We’re not detectives on your case, so we can’t let you go.” So they sent me back to my cell.

Samantha Hawley: That’s a really concerning example. So it can even mislead police to the point where they’re arresting the wrong person. So tell me, how does AI become biased? Because it’s not a human.

Meredith Broussard: Yeah, AI is not a human. And it’s also not the case that AI developers have set out to make biased software. You know, most software developers are just going to work trying to do a good job and are not at all malicious. What happens is that we all have unconscious bias. We’re all trying every day to become better people, but we are not yet perfect and we have unconscious bias. So the unconscious bias of the creator ends up in the technological system. And when you have small and homogeneous groups of people making technology as you do in Silicon Valley, for example, those technologies then get the collective blind spots of their creators.

Samantha Hawley: Right. So these human biases, they crop up in AI all the time. But to really understand how that happens, I think I need you to explain how AI works. What is it?

Meredith Broussard: So at the most basic level, AI is math. It’s really complicated, beautiful math. We’re talking about mathematical processes that happen on a computer. You take all of this data, you feed it into the computer and you say to the computer, “Hey, I want you to make a model that shows the mathematical patterns in this data.” And the computer says, “Sure, I can do that.” Then you can use that model to generate new things. So in the case of generative AI, you feed it a lot of text data and then it can generate new text, and the new text sounds like human-generated text because it follows the mathematical patterns in the human-generated text.

The problem is that society is not perfect. We don’t live in a perfect world. One of the examples I turn to a lot is mortgage approval algorithms. There was an investigation by The Markup recently which found that automated mortgage approval algorithms are 40 to 80% more likely to deny borrowers of colour compared to their white counterparts. And why is this? Because these mortgage approval algorithms were trained on data from the United States about who has gotten mortgages in the past. In the United States, we have a very long history of financial discrimination based on race and ethnicity. We have a long history of redlining, of residential segregation. And so the people who have gotten mortgages in the past are disproportionately not people of colour. So when you feed the computer data about mortgages in the past, you get the computer making decisions like that.

People often say, well, maybe we could improve the system by putting in better data. But unfortunately there is no such thing as better data in this case, because there is no perfect world in which financial discrimination in housing markets never happened.
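To make that dynamic concrete, here is a minimal sketch of the pattern Broussard describes: a model fitted to historically biased approval decisions learns to reproduce the disparity. Everything here is synthetic and hypothetical (the feature names, the size of the bias, the choice of scikit-learn’s logistic regression); it illustrates the training mechanism, not The Markup’s methodology or any real lender’s system.

```python
# Minimal sketch (synthetic data, hypothetical features): a model trained
# on historically biased approval decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: income and a group flag (0 = group A, 1 = group B).
group = rng.integers(0, 2, size=n)
income = rng.normal(60, 15, size=n)

# Historical approvals encode discrimination: group B applicants were
# approved less often at the same income level.
logit = 0.08 * (income - 60) - 1.2 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased history. Group membership is an explicit feature here
# for clarity; in real data, proxies such as postcode leak the same signal.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The fitted model now favours group A at identical incomes.
test_income = np.full(1000, 60.0)
for g in (0, 1):
    X_test = np.column_stack([test_income, np.full(1000, g)])
    rate = model.predict_proba(X_test)[:, 1].mean()
    print(f"group {g}: predicted approval probability {rate:.2f}")
```

Running the sketch shows the model assigning a lower approval probability to group B at identical incomes, purely because the historical labels it learned from were biased.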

Samantha Hawley: Gosh. Yeah. So banks are actually using AI to assess who they give loans to. And it sounds like AI can be really sexist and racist.

Meredith Broussard: Yeah, all the problems of the human world are manifest inside AI systems. AI doesn’t get us away from any of the essential problems of being human.

Samantha Hawley: And as the title of your book suggests, this is much more, isn’t it, Meredith, than a glitch? Are we too far down the road now with AI to fix this?

Meredith Broussard: I am ultimately hopeful. I think that we can understand how racism, how gender bias, how ability bias, how all of these biases work, and we can start to recognise when our technological systems are working against people. And there are a couple of things that make me really optimistic. One is the field of public interest technology. This is a new field, and it’s just what it sounds like: it’s about making technology that’s in the public interest. You can investigate algorithms to find out whether their decisions are just or fair.

And then, if I can get very nerdy for a moment, there is something called algorithmic auditing. It might make people cringe to hear the word audit, but algorithmic auditing is really interesting because it allows us to open up the black box of an algorithm and test it to find out if it’s being fair. The easiest way to explain it is: you look at what goes into an algorithm, you look at what comes out of an algorithm, and you evaluate how well it works for different subgroups of people. So if it’s a mortgage approval algorithm, well, is it offering mortgages to different gender groups at the same rate?
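As a rough illustration of the input/output comparison Broussard describes, here is a small sketch of a black-box audit: it calls a decision function on a set of applicants and compares approval rates across subgroups. The decision function, the applicant fields, and the 0.8 threshold (the US “four-fifths” rule of thumb) are assumptions made for the example, not part of any specific audit standard she names.

```python
# Black-box audit sketch: compare what comes out of a decision function
# across subgroups of what goes in. Model and data are hypothetical.
from collections import defaultdict

def audit_approval_rates(model_predict, applicants, group_key):
    """Compare approval rates of a black-box decision function by subgroup."""
    approvals = defaultdict(int)
    totals = defaultdict(int)
    for applicant in applicants:
        g = applicant[group_key]
        totals[g] += 1
        approvals[g] += int(model_predict(applicant))
    rates = {g: approvals[g] / totals[g] for g in totals}
    # Disparate-impact ratio: worst-off group's rate over best-off group's.
    # A common US rule of thumb flags ratios below 0.8 ("four-fifths rule").
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Usage with a toy decision function and synthetic applicants:
applicants = [
    {"income": 70, "gender": "woman"}, {"income": 70, "gender": "man"},
    {"income": 50, "gender": "woman"}, {"income": 90, "gender": "man"},
]
decide = lambda a: a["income"] >= 60  # stand-in for the audited algorithm
rates, ratio = audit_approval_rates(decide, applicants, "gender")
print(rates, f"disparate-impact ratio: {ratio:.2f}")
```

Because the audit needs only inputs and outputs, it can be run even when the model’s internals are proprietary, which is what makes this kind of external testing practical.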

Samantha Hawley: Mm. So, Meredith, you’re hopeful, I guess, that this inherent bias in AI can be fixed, that there is a solution. But if it can’t be fixed, what does that mean? I guess, you know, we can’t even really imagine, can we, where this technology is going to end up.

Meredith Broussard: We do tend to fear the unknown. We also tend to anthropomorphise things. We really want to attribute human-like qualities to AI, just like we really want to attribute human-like qualities to our pets. But AI is just math. It’s not an entity; it’s really cool new math.

Samantha Hawley: Meredith Broussard is a data journalism professor at New York University and the author of More Than a Glitch. If you want to know more about how schools and universities are dealing with ChatGPT, have a listen back to This Episode Was Written By An AI Bot from January the 27th. This episode was produced by Flint Duxfield, Veronica Apap, Sam Dunn and Chris Dengate, who also did the mix. Our supervising producer is Stephen Smiley. I’m Sam Hawley. Thanks for listening.
