If you think about artificial intelligence and the core components of what makes an AI, you have the fact that it’s given a goal and then will find a way to reach that goal. It’s then often very opaque as to how it reached that goal. If you’re building some unconscious bias into a machine, you might not know that it’s there; the output could be detrimental to women, and it’s very tough to work out exactly why that has happened.
In traditional technology, you can see [what has happened]: women dying in car crashes because the crash-test dummies were the shape of a man rather than the shape of a woman. [With AI there could be] similar life-or-death situations, in drug trials or in autonomous vehicles and things like that.
There are some examples of [gender bias in AI today]: Google ads displaying higher-paying [job] ads to men more often than to women. We can hypothesize other situations that could arise: what if women weren’t as able to get loans or mortgages or insurance?
I don’t have a dystopian view of AI. I don’t see killer robots. I’m much more focused on the narrow applications, and I think that if you look at every single one of those narrow applications there is a chance that it negatively affects women. I don’t think artificial intelligence is the issue here; it’s an additional issue rather than the cause. We’re talking about the risk that our unconscious sexism or unconscious racism seeps into the machines that we’re building.
How do we get anyone who’s building AI to think about these things? We need consumers to demand ethical AI. Not enough people see this as more than just a gender issue; it’s an actual, fundamental product issue.
—as told to Rachel Metz