In 2016, Microsoft released an AI chatbot called Tay on Twitter, which tweeted based on its interactions with human users. What was meant to be a light-hearted PR stunt turned sour for the company when the chatbot began to post inflammatory and offensive tweets, mimicking the right-wing internet trolls who bombarded it with racist and sexist phrases.

What may have been an embarrassing faux pas for the company in fact highlights an underlying feature of AI: its results are highly dependent on the data it learns from.


How easily a system can be swayed by the data it is fed is known as AI bias. As AI is deployed in finance, recruitment and the criminal justice system, this issue is becoming increasingly important, and one that the technology industry is waking up to.


Last month, the UK government announced that the Centre for Data Ethics and Innovation would be conducting an investigation into the algorithms used in finance and criminal justice, focusing on whether unintentional bias could be at play.


But what is AI bias, and can it ever be stamped out?

The root of AI bias: bad data or bad people?

One person striving to eliminate unintended bias is Alyssa Rochwerger. A former IBM employee, Rochwerger is now VP of Product at Figure Eight. The company uses ‘human-in-the-loop’ intelligence to train machine learning algorithms.


According to MIT research, there are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. Using the example of facial recognition software, Rochwerger explains that when a system is fed data, in this case images of faces, the images are categorised, and from these categories the system learns to identify future images based on common characteristics.


If fewer examples are collected for a particular category, a system will find it harder to recognise that category in the future, unintentionally favouring certain characteristics over others. It is not the AI itself that causes bias, therefore, but rather the data it is trained on.
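This effect is easy to reproduce in miniature. The sketch below is a minimal illustration, using synthetic numeric features rather than real face images, of how an under-represented category ends up being recognised less reliably: a simple classifier is trained on a deliberately imbalanced dataset and recall is reported per category.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for image features: two categories drawn from
# overlapping distributions, with category 1 heavily under-represented.
n_majority, n_minority = 5000, 150
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n_majority, 8)),
    rng.normal(loc=0.8, scale=1.0, size=(n_minority, 8)),
])
y = np.array([0] * n_majority + [1] * n_minority)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Recall per category: the under-represented category is recognised far
# less reliably, even though overall accuracy looks respectable.
for label in (0, 1):
    print(f"category {label}: recall {recall_score(y_test, pred, pos_label=label):.2f}")
print(f"overall accuracy: {(pred == y_test).mean():.2f}")
```

Overall accuracy stays high because the majority category dominates the test set, which is exactly why per-category metrics are needed to surface this kind of bias.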

“AI is biased based on the data it was trained on, and based on the humans that created those systems.”

This is how bias, often unintentionally, can creep in. As Rochwerger explains, “AI is biased based on the data it was trained on, and based on the humans that created those systems. They encode their own biases and their own world views into those systems so it is inherently reflective of those biases.”


Beyond facial recognition, this unintended bias has real-world consequences and may mean systems favour certain groups over others when assessing who should be approved for a loan or who should be hired for a job.


It can also have far more severe consequences. In 2016, according to CNBC, it was reported that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, used by judges in some US states to help decide parole and other sentencing conditions, exhibited racial bias. Because it was trained on historical data, existing biases suggesting that certain groups were more likely to reoffend were amplified.

The struggle to teach AI nuance without infringing on privacy 

Although it may be easy for bias to enter an AI system, it is far harder to remove it. Rochwerger explains that eliminating unintended bias poses a significant challenge because of the way AI and machine learning operate: by trying to fit images into discrete categories, systems struggle with nuance.


She says, “What do you mean by human? Do you mean adult, child, or infant? Do you mean every single type of ethnicity and background? What categories are you trying to adhere to? Are you differentiating between male and female, which again has all sorts of nuances to it if you’re thinking about non-binary gender? With race as well, it’s a very grey area and there’s a lot of continuum; it’s not simple.


“What these computer systems are best at is classifying things into discrete categories and they’re not particularly good at grey. So you’re trying to apply a very rigid decision tree to a very grey area, and systems often do not have the benefit of context.”

“What these computer systems are best at is classifying things into discrete categories and they’re not particularly good at grey.”

She explains that rectifying the situation often requires large volumes of data:


“Collecting large volumes of data, so 100,000 examples from every category you’re trying to recognise...for example 100,000 examples of African American males between the ages of 30 and 35. That data collection task is very difficult. Where are you going to get all of those images at scale, appropriately categorised, in a way that you are not taking someone’s data that they did not give you the rights to, and where you can be sure the picture is truly of the category?”


Along with this comes the problem of collecting the data in a way that does not infringe on individuals’ privacy.


Rochwerger explains that organisations approach this in a variety of ways:


“Sometimes companies have access to data. For example, when you upload data to Facebook, Facebook retains rights to it, and they own that data in a way that allows them to learn from it and label it and use it in a way that allows them to create their system. Other companies take different approaches. IBM takes the approach of ‘we don’t own your data, you need to explicitly give us the rights to it’, so they use open source or they will purchase data from photographers.”

Limited people, limited data: challenging a lack of diversity in technology

Rochwerger explains that the issue stems in part from a lack of diversity in the technology industry itself. With women of colour particularly under-represented, it is not surprising that bias creeps in. When a particular ethnicity, social group or world view is over-represented among those responsible for sourcing the data and designing the systems that use it, it is far easier for under-represented groups to be missed.


She says, “We interact with artificial intelligence all the time, and many of them were designed by very limited groups of people using very limited datasets and they aren’t serving the population in the best way possible.”

“In general, most organisations are not taking this as seriously as it is a business risk to their enterprise.”

Although some companies are doing good work to tackle unintended bias, a lack of awareness is leading many to continue to use inherently biased systems, often without realising.


Amazon faced criticism over Rekognition, its facial recognition system, with some claiming that it is less accurate at recognising the faces of women with darker skin tones.


However, Rochwerger believes that a lot of good work is being done to raise awareness of this issue: “Some [organisations] are starting to have really good conversations about it. I know that IBM is very forward thinking around creating products that are more transparent and fair in understanding and measuring bias in systems, and I know Google is doing a lot as well. But in general, most organisations are not taking this as seriously as it is a business risk to their enterprise.”

Harnessing the human element to widen the training pool 

One way in which Figure Eight is approaching the problem is through harnessing the human element. The company works with other organisations to utilise the ability humans have to spot subtleties that AI systems may miss, widening the pool of people used to train an algorithm.


Rochwerger explains, “We try and help our clients deal with that ambiguity… one way to do this is for any image that is ambiguous, you can get five people to rate it, or ten people, and then take the [majority view on the] image. Or you can take any disagreement and create a new category, because you didn’t think of this category when you were starting out, and re-label the data with this in mind.”
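Figure Eight has not published its aggregation code, but the principle Rochwerger describes, having several people rate each ambiguous item and then acting on their agreement or disagreement, can be sketched roughly as follows. The function name, agreement threshold and example labels are illustrative assumptions, not the company's actual pipeline.

```python
from collections import Counter

def aggregate_label(ratings, min_agreement=0.8):
    """Majority-vote the human ratings for one image, flagging items where
    annotators disagree so they can be re-examined, for instance as
    candidates for a category that was missed at the outset."""
    counts = Counter(ratings)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(ratings)
    if agreement < min_agreement:
        return None, agreement  # ambiguous: send back for review or a new category
    return label, agreement

# Illustrative ratings from five annotators per image (hypothetical data).
examples = {
    "img_001": ["adult", "adult", "adult", "adult", "adult"],
    "img_002": ["adult", "child", "adult", "child", "child"],
}
for image_id, ratings in examples.items():
    label, agreement = aggregate_label(ratings)
    status = label if label is not None else "needs review"
    print(f"{image_id}: {status} (agreement {agreement:.0%})")
```

Items that fall below the agreement threshold are the ones Rochwerger suggests revisiting, either to be relabelled or treated as evidence that a new category is needed.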


Rochwerger believes that it is important to carefully examine datasets for bias: “If you collect a million images, and you see that only 5% of the dataset has Asians in it and 80% of the dataset has people of Caucasian descent, that’s a problem and I need to get more data to represent the Asian population better. There’s a lot of choices you can make to guard against bias in the dataset.”
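A dataset audit of the kind she describes can be as simple as counting how often each group appears and comparing that with a target share. The sketch below is a hypothetical illustration; the group names, counts and target share are made up and are not Figure Eight's figures.

```python
# Hypothetical counts of demographic labels in a collected image dataset;
# the group names, counts and target share are illustrative only.
counts = {"caucasian": 800_000, "asian": 50_000, "black": 100_000, "other": 50_000}
target_share = 0.25  # e.g. aim for roughly even coverage across four groups

total = sum(counts.values())
for group, n in sorted(counts.items()):
    share = n / total
    shortfall = max(0, round(target_share * total) - n)
    note = f"collect ~{shortfall:,} more examples" if shortfall else "ok"
    print(f"{group:>10}: {share:6.1%}  {note}")
```

Choosing the target share is itself a design decision: "representative of reality" and "evenly balanced" are different goals, and which is appropriate depends on what the system is being asked to do.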


It is possible that the problem will be better addressed as the wider issue of diversity in the technology industry is also rectified. Rochwerger is optimistic that the situation will continue to improve: “I think that our industry is capable of evolving to be more inclusive. The stats are hard to look at, but if you take a long-view approach it is improving over time.”
