When Joshua Bada tried to renew his passport online, an automatic facial detection system told him there was a problem.

According to the system, his mouth was open. Strict passport rules require an applicant’s mouth to be closed. But Bada, a 28-year-old black man, had done as instructed. It was the facial detection system that had got it wrong.


In the comment box, he wrote: “My mouth is closed, I just have big lips.”


Bada is not alone in his experience. In recent years, there have been numerous accounts of AI systems demonstrating bias – be it in facial recognition systems or automated CV filters – against women and ethnic minorities.


AI bias stems from a lack of diversity in the datasets used to train the systems. In other words, if an algorithm is trained using images that are predominantly of white men, the algorithm – unsurprisingly – will be more accurate at recognising white men. Conversely, this means the error rate for recognising women with darker skin is far higher.
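To see what that means in practice, a common first check is to measure a system’s error rate separately for each demographic group rather than quoting a single headline accuracy. The short Python sketch below illustrates the idea; the group labels, predictions and numbers are invented for the example and do not come from any study mentioned here.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute a classifier's error rate separately for each group.

    `records` is a list of (group, predicted_label, true_label) tuples,
    e.g. the output of a face-matching or CV-screening model evaluated
    on a labelled test set.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Illustrative, made-up evaluation results: if the training data skews
# towards one group, error rates for other groups are typically higher.
results = [
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "match", "match"),
    ("darker-skinned women", "no match", "match"),
    ("darker-skinned women", "match", "match"),
]
print(error_rate_by_group(results))
# {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
```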


In Bada’s case, it was an inconvenience that shouldn’t have happened. But when facial recognition is increasingly being used in areas such as policing, inconvenience could turn into injustice.


The recent history of AI is full of such examples where algorithms designed to make our lives easier are instead making them more unfair.


AI ethics is by no means a new field, but its role in addressing this problem has increasingly become a key focus for businesses that employ AI systems.


But the topic is broad in its scope – applying to anything from autonomous drones to driverless cars – sometimes making it daunting for an organisation to know where to start.

Companies need to “look in the mirror”

A good starting point for an organisation to ensure their AI is ethical is for them to “look in the mirror and decide what kind of company they want to be,” says robot ethicist Dr Aimee van Wynsberghe, who is an assistant professor at Delft University of Technology in the Netherlands.


“Do they want to be a differentiator in this space, where they are really trying to do this right, or are they going to sort of follow other companies that are doing it irresponsibly?”


It’s a sentiment shared by John Harvie who, as director at global consultancy firm Protiviti, advises businesses in areas such as robotic process automation.


“Increasingly, the idea of shareholder value being the only measure of a board or a firm’s success is becoming outdated,” he says.


“And I think some will be looking to be judged against a much more balanced set of metrics that take into account the firm's contribution to society at large, both from an environmental perspective, from a customer perspective and from a society in general perspective.”


Many large technology companies, such as Google, IBM and Microsoft, have published their own ethical AI principles. The idea is for these publicly available principles to act as a statement – an anchor – for the company’s AI aims, and as a promise that consumers can hold them to.

“The idea of shareholder value being the only measure of a board or a firm’s success is becoming outdated.”

However, not everyone has the same resources as these tech giants to carry out in-depth research into AI ethics. At the vanguard of AI ethics in Europe – and arguably the world – is the European Commission's High-Level Expert Group on AI, an independent group of 52 experts who have created Ethics Guidelines for Trustworthy Artificial Intelligence.


Currently in a pilot phase until December this year, the AI ethics framework contains seven “key requirements that AI systems should meet in order to be deemed trustworthy”.


They are:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability

Wynsberghe, who sits on the High-Level Expert Group, says: “The idea here was, how could we create a tool that would be useful for businesses if they want to do it right, if they want to do it in an ethical way, to establish trust on the part of customers.”

Setting up an ethicist team

Once a company has published its own AI ethics principles, perhaps drawing from the European Commission's guidelines, Wynsberghe recommends setting up or modifying internal processes to make those principles a reality.


A key mechanism for that is to set up an ethics board that works alongside the technical team. Under this "co-development" approach, the technical people will focus on how the algorithm is trained and built, while the ethicists will explore the unintended consequences of the AI.


These people do not need to be fully qualified ethicists, says Wynsberghe. The key thing is to have someone with an “ethicist’s lens” who can look at the quantifiable risks in a scientific manner.

“Instead of fairness just with the algorithm, a company needs to think about fairness on a much broader scale.”

The concept is similar to that of a red team – professionals tasked with stress testing a company’s security – from the cybersecurity world, only applied to AI development.


“But if you really want to get at what it means to create AI in a fair way, it means that you don't just think about it for the one application you're talking about,” says Wynsberghe.


“The company has to have these other mechanisms in place to make sure that they are aligned with their vision, with what they want to do at every step of the way. So I would say, instead of fairness just with the algorithm, a company needs to be thinking about fairness on a much broader scale.”

Understanding when human oversight is needed

Human agency and oversight, one of the seven key requirements, says that “proper oversight mechanisms need to be ensured”. That can be achieved with a human-in-the-loop approach, to make sure that AI systems “empower human beings”.


But the level of oversight depends on the size and type of the company, as well as the type of AI involved, and what the purpose of the AI is, says Wynsberghe.


For example, human oversight is not a “necessary requirement” for an algorithm that helps to build a PowerPoint presentation, she explains.


Where human oversight is needed, however, is when AI is involved in making a life-changing decision.

“You may have to defend the fact that you have rejected that loan and there has to be an explanation as to why.”

“The human oversight is that no AI is making the decision on its own,” adds Wynsberghe. “It's meant to provide an additional source of information to the human.”
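One way teams translate this into practice is to treat the model’s output as a recommendation and route high-stakes or low-confidence cases to a human reviewer. The sketch below is a hypothetical illustration of that pattern; the case categories and confidence threshold are assumptions for the example, not something prescribed by the EU guidelines.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    decision: str      # the model's suggested outcome, e.g. "reject"
    confidence: float  # model confidence between 0 and 1

HIGH_STAKES = {"loan", "hiring", "parole"}  # illustrative categories only

def final_decision(case_type: str, rec: Recommendation, human_review):
    """Return the final outcome, deferring to a human where oversight is needed.

    For high-stakes or low-confidence cases the model output is treated as
    one input to a human reviewer, not as the decision itself.
    """
    if case_type in HIGH_STAKES or rec.confidence < 0.9:
        return human_review(rec)   # the person decides, informed by the AI
    return rec.decision            # low-stakes automation, e.g. slide layout
```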


Linked to this is making sure AI is accountable by ensuring it is auditable.


If AI is being used to make a decision with legal or similarly significant effects on a person, Article 22 of the General Data Protection Regulation (GDPR) restricts solely automated decision-making, and the organisation must be able to explain how an automated system arrived at its conclusion.


“You may have to defend the fact that you have rejected that loan and there has to be an explanation as to why,” says Harvie.


“So in that case you would have to choose a particular toolset within the AI environment that had that capability to provide that audit trail as to how it arrived at that conclusion.”
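In practice that often means logging, for every automated decision, the inputs, the model version and an explanation of why the outcome was reached. The snippet below sketches one hypothetical form such an audit record could take; the field names, and the idea of attaching per-feature contributions from an explainability tool, are illustrative assumptions rather than a format required by the GDPR.

```python
import json
import time

def log_decision(applicant_id, features, model_version, decision,
                 contributions, path="audit_log.jsonl"):
    """Append an auditable record of an automated decision.

    `contributions` maps each input feature to its estimated influence on
    the outcome (e.g. values produced by an explainability tool), so the
    firm can later explain why a loan application was rejected.
    """
    record = {
        "timestamp": time.time(),
        "applicant_id": applicant_id,
        "model_version": model_version,
        "inputs": features,
        "decision": decision,
        "explanation": contributions,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```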

Societal and environmental impact

Complex AI systems require a lot of computing power, which in turn requires a lot of energy. A recent study found that training one common type of AI model can emit as much carbon dioxide as five cars do over their average lifetimes in the US.
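The arithmetic behind such estimates is simple: multiply the hardware’s power draw by the training time and by the carbon intensity of the electricity that powers it. The sketch below shows that calculation with purely illustrative figures; they are assumptions for the example, not the numbers from the study above.

```python
def training_emissions_kg(num_gpus, gpu_power_kw, hours, pue,
                          grid_kg_co2_per_kwh):
    """Rough CO2 estimate for a single training run.

    energy (kWh)      = GPUs x power per GPU (kW) x hours x data-centre overhead (PUE)
    emissions (kg CO2) = energy x grid carbon intensity (kg CO2 per kWh)
    """
    energy_kwh = num_gpus * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative assumptions: 8 GPUs at 0.3 kW each, two weeks of training,
# a PUE of 1.5 and a grid intensity of 0.4 kg CO2/kWh.
print(training_emissions_kg(8, 0.3, 24 * 14, 1.5, 0.4))  # ~484 kg CO2
```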


The EU framework recommends that "AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly".


This raises an ethical question for businesses, which must ask themselves whether the carbon footprint of their AI is worth the end result.


“We should be continuing to do that kind of [environmental] research, and then with that in mind, ask what are the appropriate uses of these algorithms,” says Wynsberghe.


“Do we think it's appropriate to have that kind of carbon footprint if we're training an algorithm to read poetry, and to be able to emulate that poet? It doesn't feel like the risks outweigh the benefits from that algorithm.”


Creating AI that benefits “all human beings” is, perhaps at first glance, a subjective guideline. But Wynsberghe points out that when talking about human rights there’s “absolutely an objective truth”.

Tackling the diversity problem

Diversity in the technology sector has been a longstanding problem. According to PwC, just 5% of technology leadership roles are held by women. Meanwhile, Silicon Valley workers are still predominantly white and Asian.


The moral argument for being inclusive and non-discriminatory should be enough on its own. But there is also a clear economic case for promoting a diverse environment where different ideas can cross-pollinate, perhaps leading to the next commercial breakthrough.


And when an all-male team designs an AI that unintentionally creates a worse outcome for women, the company is alienating half of its customer base.


Wynsberghe says that there are “small” and “concrete” steps that academics and industry leaders can take to help improve diversity from the ground up. One is to refuse to appear on so-called ‘manels’ – panels consisting solely of men.


Corporations should also make a “vested effort” to hire people from a diverse range of backgrounds, whether in terms of culture, gender, race and so on.


“The idea is to make sure that the corporations have a policy in place where they're going to actively try to create a different kind of corporate climate,” she says.

Technical robustness

Trustworthy AI should also be robust and secure, with sufficient failsafes in place to step in when things go wrong.


The ultimate aim of this principle is to ensure that unintentional harm can be “minimised and prevented”.


Harvie gives the example of human pilots in planes to explain the principle.


“We've had autopilots in planes for a long time,” he explains. “They are AI, they are automation. And they've proven to be very effective. But we still have pilots on planes.


“Why? They're the control mechanism, they're the backstop. At the point at which the AI isn't able to cope, the human can step in. If you look at that analogy, then you need the same idea, the same concept, across the use of AI.


“I think we need to take that energy and apply that to the business environments in which we're applying AI.”

Do we need more regulation for ethical AI?

The business world – particularly the tech industry – is keen to remind everyone that too much regulation stifles innovation.


While there is some truth to that, regulation is also necessary to stop corporations from wielding too much power and curtailing consumer rights.


Current laws – such as the GDPR, anti-discrimination directives, consumer law and health and safety at work directives – already offer protections for citizens. But do we need new, AI-specific regulations?


“I don't think that AI really challenges that,” says Harvie. “Whether it's from basic advice or human advice, relating to a loan or relating to a pension or relating to a mortgage, or whatever it might be, those same rules apply that that customer has to be treated fairly.”

“Vulnerable demographics require extra protection.”

Wynsberghe agrees that current regulations, when enforced, are an adequate safeguard against irresponsible AI systems, but adds that some demographics require more protection.


“But what we do need to perhaps be explicit about is, if we agree that this is an experiment, and [ask] what are the conditions of that experiment? So vulnerable demographics require extra protection, meaning children, the elderly, people who might not have the same mental capacity," she says.


“Or vulnerable demographics are either not allowed to be a part of the experiment or companies have to have specific oversight boards in place to make sure that they are being protected.”


Wynsberghe would also like to see companies be more transparent about how they are using AI. That doesn’t mean handing over intellectual property, but providing a clearer picture of the issues that arise so that future regulations can be properly targeted.


“We don't know all the places that we need to regulate, because companies are not obliged to tell us what they're doing,” she says.

AI ethics in a globalised world

In a globalised, hyper-connected world, the flow of data knows no borders – at least not in the traditional sense. With AI’s dependence on lots of quality data, how do AI ethics apply at a global scale?


“If data is the new oil, then countries around the world are going to have to compete for the use of that resource to gain advantage for their economies,” says Harvie. “And the standards being applied globally to that issue are different.”


China, for example, is a world leader in AI largely because of the abundance of data generated by its 1.4 billion citizens. But it is not just a question of quantity: China’s approach to data privacy does not meet the standards of other parts of the world, with systems such as the social credit system collecting personal data that would be deemed intrusive in the West.

“There’s a lot of talk about an AI race. I don’t agree with that metaphor.”

Wynsberghe, however, is not keen on describing the situation as an ‘AI race’.


“There's a lot of talk about the fact that we're in this AI race,” she says. “I don't agree with that metaphor. I think it's important for countries to share similar ethics and work together and to create international cooperation. And by that I mean liberal democratic society.


“And if that is our qualifier, then that excludes the European Commission from creating a kind of global alliance with another country that does not have the same kind of upholding of basic human rights.”


The worry with a global AI ethics agreement is that it could mean liberal democratic countries “watering down their values” to comply with countries that may have lower ethical standards, she says.


“If we decide that we don't need to protect the European values, then I think we're in a really dangerous situation,” she adds.


“And I think right now we can see what's happening when money talks and when one country provides a lot of money to another country, then gets to dictate the terms of free speech. And we really need to be careful about that.”
