
Debiasing AI Systems: A Conversation with Robyn Speer, Luminoso's Head of Science

One of the most-discussed topics in AI this summer has been the growing realization that AI-based systems absorb human biases and prejudices from training data. While this has only recently become a hot news topic, AI organizations (including Luminoso) have been focused on this issue for a while. Denise Christie sat down with Luminoso's chief science officer, Robyn Speer, to talk about how AI becomes biased in the first place, the impact such bias can have, and - more importantly - how to mitigate it.

Can you explain what is meant when we talk about bias in AI?


The short answer is that AI makes decisions based on the data it's been trained on, and that data comes from people. All too often, the training data contains built-in biases that come from the way people produce that data. For instance, researchers from Boston University and Microsoft Research showed that word2vec, Google's widely used NLP system, came up with analogies like "man is to woman as computer programmer is to homemaker." These analogies highlight what word vectors end up learning from big data. In general, men end up associated with powerful positions and high-earning jobs, while women end up associated with assistant or domestic roles.
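
To make that concrete, the analogies come straight out of vector arithmetic. Here's a minimal sketch, assuming the gensim library and the publicly released Google News word2vec vectors (the filename below is just the usual name of that download):

```python
# Minimal sketch of the analogy arithmetic that surfaces these associations.
# Assumes gensim and the public Google News word2vec release; the exact
# filename depends on which copy you download.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# "man is to X as woman is to ?": add the 'woman' direction, remove 'man'.
print(vectors.most_similar(positive=["woman", "programmer"],
                           negative=["man"], topn=5))
print(vectors.most_similar(positive=["man", "homemaker"],
                           negative=["woman"], topn=5))
```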


Some might argue, though, that suggestions like the homemaker example just reflect the most likely occurrence based on the world as it is. How does one decide what bias really is in an AI system?


Of course there are cases where this is a really tough call, where you have to decide if you want to model the world as it is or the world as it should be. But there are many cases that are more clear-cut than this.


One example is bias amplification, as described by Jieyu Zhao of the University of Virginia. Here's what it looks like. Suppose you're training an AI to recognize what someone is doing in a photo. In the training data, "cooking" may be the correct answer 33% more often for photos of women than for photos of men. But a common approach to machine learning, which simply maximizes the number of correct answers, amplifies that difference: the trained system ends up predicting "cooking" 68% more often for photos of women than for photos of men, not just 33% more often.
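
As a rough illustration of why that happens (a toy simulation, not the setup from the paper): when the visual evidence in a photo is weak, an accuracy-maximizing classifier leans on the gender correlation in its training data, so the gap in its predictions comes out wider than the gap in the data. All of the numbers below are made up.

```python
# Toy simulation (not the Zhao et al. setup): why an accuracy-maximizing
# classifier can show a larger gender gap in its predictions than exists in
# its training data. All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200_000, 2.0
woman = rng.random(n) < 0.5
base = np.where(woman, 0.60, 0.45)      # cooking slightly likelier with women
cooking = rng.random(n) < base

# Weak visual evidence: a noisy score, higher on average when cooking happens.
score = cooking + rng.normal(0.0, sigma, n)

# Accuracy-maximizing rule: predict cooking when the posterior exceeds 1/2.
# Because the prior differs by gender, so does the effective threshold,
# and the predicted gap widens.
lik_ratio = np.exp((2 * score - 1) / (2 * sigma ** 2))
posterior = base * lik_ratio / (base * lik_ratio + (1 - base))
pred = posterior > 0.5

for name, labels in [("training data", cooking), ("predictions", pred)]:
    gap = labels[woman].mean() - labels[~woman].mean()
    print(f"{name}: cooking-rate gap, women minus men = {gap:.2f}")
```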


This is clearly an issue from an ethical perspective, but from a business perspective, what are some of the practical implications of having a biased AI?


There are some really high-stakes areas where machine learning is being used with little oversight, including in commercial systems: hiring decisions, sentencing decisions in court. We may not even know when these systems are being used. We can't afford to deploy computer systems that create discrimination, which is why it's so important to be aware of this issue.


There's an example playing out right now where we can see the problem. Perspective API is a Google product currently used to filter comments on Disqus, and it ends up silencing people who "sound" non-white based on their names or word usage. It was supposed to fight toxicity, but its model ended up biased.


How do these assumptions get built into an AI system, and where do they ultimately come from?


The desire for big data has led to NLP systems that are trained on absolutely as much text as possible. It doesn't matter what the text says, just get more of it. Google frequently trains its NLP systems on its Google News archives, and lots of other systems are trained on the Common Crawl, a multi-terabyte collection of web pages. But the question is: should a computer believe everything it reads? The Common Crawl contains lots of hate sites, porn sites, vicious arguments, and cyberbullying.

An NLP system gets to learn from the best and the worst of what people say online, but the thing about the worst is that there's more of it.

Backing up a little bit, you just said that Google trains its own NLP systems on Google News archives. Wouldn't that be free of the hate sites and porn sites you mentioned being an issue in, say, the Common Crawl?


Yeah, you would think that Google News would be safer, but actually, when you look at how it affects an end-to-end system, it comes out quite similarly. Maybe it's because the news conveys all the biases of our society, but it's probably also because sensational journalism amplifies those biases.


How does a company or organization that wants to tackle this issue figure out whether or not its system is biased?


Well, one thing is to pay attention to your data, both training data and test data. Look at where it came from, ask whether there's anything it doesn't represent, and dig into any results that seem off.


I encountered this issue here at Luminoso when I was experimenting with simple sentiment analysis systems. I looked into what the system considered positive and negative in a set of restaurant reviews. The weird and surprising thing was that it was ranking all of the Mexican restaurants lower than the others. When I looked into the cause, it wasn't anything about quality mentioned in the reviews. It was just the presence of the word "Mexican" that was making the reviews come out with lower sentiment. It's not that people don't like Mexican food, but systems that take input from the whole Web have heard a lot of people associating the word "Mexican" with negative words like "illegal," and that shows up in sentiment analysis.
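
For illustration only, here's one simplified way a single word can shift a review's score: score each word by whether its web-trained embedding sits closer to a few positive seed words or a few negative ones, then average over the review. This isn't Luminoso's production pipeline, and how strongly (or even in which direction) the effect shows up depends on the embedding and the seed words you pick.

```python
# Simplified illustration, not Luminoso's pipeline: score each word by how
# much closer its embedding is to positive seed words than to negative ones,
# then average over the review.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)  # filename may vary

POSITIVE = ["excellent", "delicious", "wonderful"]
NEGATIVE = ["terrible", "awful", "horrible"]

def word_score(word):
    if word not in vectors:
        return 0.0
    pos = np.mean([vectors.similarity(word, p) for p in POSITIVE])
    neg = np.mean([vectors.similarity(word, n) for n in NEGATIVE])
    return pos - neg

def review_score(text):
    return np.mean([word_score(w.strip(".,!")) for w in text.split()])

print(review_score("The food was delicious and the service was great"))
print(review_score("The Mexican food was delicious and the service was great"))
```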


Is there a way for other companies to test for bias in their own AI system?


Researchers from Princeton came up with the Word Embedding Association Test. They looked at how strongly different people's names are associated with positive and negative words, and particularly what happens when you compare predominantly white names like "Emily" with predominantly black names like "Shaniqua." Some systems that fail this test (ones that display racial bias based on nothing but people's names) are very popular as starting points for semantics, including Google's word2vec and Stanford's GloVe.
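
For anyone who wants to run this kind of check, the core of the WEAT is a short computation: how much more strongly one set of target words (say, one group of names) associates with pleasant attribute words than another set does, measured by cosine similarity. Here's a minimal sketch; the word lists in the usage comment are shortened placeholders, not the full lists from the paper.

```python
# Minimal sketch of the WEAT effect size from the Princeton work mentioned
# above. `vec` is any word -> vector mapping (a dict or gensim KeyedVectors).
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B, vec):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cos(vec[w], vec[a]) for a in A])
            - np.mean([cos(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    """Standardized difference in association between target sets X and Y."""
    x_assoc = [assoc(x, A, B, vec) for x in X]
    y_assoc = [assoc(y, A, B, vec) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc)

# Usage sketch (shortened placeholder word lists):
# names_a = ["Emily", "Meredith"]
# names_b = ["Shaniqua", "Latisha"]
# pleasant = ["love", "peace", "wonderful"]
# unpleasant = ["hatred", "failure", "awful"]
# print(weat_effect_size(names_a, names_b, pleasant, unpleasant, vec))
```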


So when we saw that Mexican restaurants were ending up tagged with lower sentiment in this simple system, we ran the Word Embedding Association Test on our own state-of-the-art word vectors, ConceptNet Numberbatch. When I saw that it also failed the test, I knew I needed to act on it.


What did you do to act on that bias in Luminoso's AI system?


These biases come from big data and machine learning, but with additional data about what bias looks like specifically and some more machine learning, you can undo them.

The same researchers came up with a process for identifying and undoing a bias, and that process would be pretty great if you had unlimited computing power. I came up with a more feasible version and used it on Numberbatch. Now I've released a new version of ConceptNet Numberbatch that minimizes some forms of gender bias and ethnic bias without losing its state-of-the-art performance.


How does this work?


First you need to decide what the right stage in the learning process is to fight bias. I decided that point is the word embeddings we distribute, because they get used for many different purposes, and bias would manifest in many different ways downstream. If we can identify the bias at the source, we can fight it there, in the computer's representation of what words mean in general. Since we're the ones providing that representation, it's on us to make sure it leads to fairer results.
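
To give a flavor of what that kind of adjustment looks like in the embedding space, here's a minimal sketch of the projection step behind the published process referred to above: estimate a bias direction from definitional word pairs, then remove that component from word vectors that shouldn't carry it. This is an illustration, not Luminoso's exact procedure for Numberbatch.

```python
# Minimal sketch of projection-based debiasing; an illustration of the
# published approach referred to above, not Luminoso's exact procedure.
import numpy as np

def bias_direction(vec, pairs):
    """Estimate a bias direction from definitional pairs like ("he", "she")."""
    diffs = [vec[a] / np.linalg.norm(vec[a]) - vec[b] / np.linalg.norm(vec[b])
             for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def remove_component(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - (v @ direction) * direction

# Usage sketch: `vec` is any word -> numpy array mapping.
# gender = bias_direction(vec, [("he", "she"), ("man", "woman")])
# vec["programmer"] = remove_component(vec["programmer"], gender)
```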


You talk about adjusting the system from the source to make sure that it has those fair results to begin with. But how much do you adjust the system? I'm thinking back to earlier in our conversation about building an AI that reflects the current world versus an idealized world. In your opinion, towards which target should we be adjusting?


That's the big, tricky question, and I really think it depends on what the decision is ultimately being used for. If it's for something like ad targeting, then you probably want to reflect the world as it is, just to make the ads as effective as possible. But if it's any kind of decision that non-discrimination laws apply to, such as hiring, I'd say you need to go farther. You need to make the system's predictions fairer than the real world, because if you're just trying to match the data, you're aiming to keep the amount of discrimination in the world constant.

How can other companies apply what you've done with Luminoso to their own systems to make them less biased?


I can't say that I know a silver bullet for all applications, but again, it's important to question your data, to look at where it came from and what its effects are.


So one thing is not to use word2vec or GloVe just because everyone else does. If you want pre-computed vectors, here at Luminoso we provide ConceptNet Numberbatch for free, and its word vectors are both fairer and more accurate.
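
If you want to try that swap, Numberbatch's English release is distributed as a text file in the standard word2vec format, so it loads with the same tooling; the exact filename depends on the version you download.

```python
# Loading ConceptNet Numberbatch in place of word2vec or GloVe. The English
# release is a text file in standard word2vec format; the exact filename
# depends on the version you download (see conceptnet.io for links).
from gensim.models import KeyedVectors

numberbatch = KeyedVectors.load_word2vec_format(
    "numberbatch-en.txt.gz", binary=False)

print(numberbatch.most_similar("programmer", topn=5))
```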


And in general, don't just use the first thing that works. You need a testing process that can identify a problem. One way to test is to explicitly try to make the system misbehave.
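
One way to set that up, sketched below with a made-up `sentiment_score` function standing in for whatever model you're testing: build templated inputs that differ only in a name or a nationality, and flag any score difference beyond a small tolerance.

```python
# Sketch of a bias audit that tries to make a system misbehave.
# `sentiment_score` is a hypothetical stand-in for the model under test,
# not a real library function; the tolerance is arbitrary.
import itertools

NAMES = ["Emily", "Shaniqua", "Juan", "Wei"]
CUISINES = ["Italian", "Mexican", "Chinese", "Ethiopian"]
TOLERANCE = 0.05

def audit(sentiment_score):
    failures = []
    for a, b in itertools.combinations(NAMES, 2):
        s_a = sentiment_score(f"My friend {a} recommended this place.")
        s_b = sentiment_score(f"My friend {b} recommended this place.")
        if abs(s_a - s_b) > TOLERANCE:
            failures.append(("name", a, b, s_a - s_b))
    for a, b in itertools.combinations(CUISINES, 2):
        s_a = sentiment_score(f"The {a} food here is amazing.")
        s_b = sentiment_score(f"The {b} food here is amazing.")
        if abs(s_a - s_b) > TOLERANCE:
            failures.append(("cuisine", a, b, s_a - s_b))
    return failures

# Usage sketch: failures = audit(my_model.sentiment_score)
```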


As for your own work, what's next in your research on this topic? And where is there still room for further de-biasing in AI?


People describe current deep learning systems as being metaphorically made of Lego bricks, where you have a selection of effective techniques that you snap together into a complete system.

I want de-biasing to become one of those Lego bricks. I want there to be no reason not to do it.

Final question for you today, Robyn. Given the challenges that you've outlined with flawed training data, do you think that a truly unbiased AI can ever be built? Or is it just an unattainable goal we should strive for anyway?


When I describe my work on de-biasing, I make sure to clarify that I'm only solving part of the problem, and only a part that I know about. We may never get a completely unbiased AI, just as we'll never get a system that makes completely accurate predictions. But we can try to asymptotically approach zero bias, decreasing it as much as we can.


Learn more about Robyn's work over at conceptnet.io
