On Monday 16th April the UK’s House of Lords Select Committee on Artificial Intelligence published its hotly anticipated report on the future of AI in the country.

Encompassing all aspects of the technology, the report, AI in the UK: Ready, Willing and Able?, is the result of a multitude of hearings and careful consideration of AI's potential.

For businesses working with AI, or considering adopting the technology, however, the report can also serve as a roadmap for how to proceed. If embraced, it could ultimately bring harmonisation and respect to an industry that at present remains fractured, and at times is even viewed with active suspicion.

Rule Britannia: Can the UK be a leader in AI?

The report paints the UK as a strong figure in artificial intelligence, and not without reason.

“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths,” wrote Lord Clement-Jones, the chairman of the committee that penned the report.

The UK certainly is home to some AI giants, with DeepMind, arguably the leader in AI research, based in London.

However, while the UK may see itself as an AI leader, it is not without competition. The US remains the giant in this space, with China close behind, and in Europe France is aggressively positioning itself as an AI hub for the continent. In fact, in a Fortune list of the 100 most promising AI startups around the world published at the start of this year, just four were based in the UK.

But with the UK keen to shore up its technology industries ahead of its departure from the European Union, it does have the potential to cement itself as a key player in this space, and the recommendations put forward in the report could play a key role in making this happen.

Vital to this, though, will be for the UK to continue to grow its base of smaller AI-focused companies, in part by providing investment opportunities and business development support, but also by increasing the opportunities for AI contracts in the country.

The Lords report does support this, with recommendations for government agencies to contract AI companies to improve operations, but there is also space for companies to look at whether contracting third-party AI providers is a better option for them than building expertise in-house.

Ethics at the forefront: making AI work

It is perhaps no surprise that the report included extensive recommendations on ethics, stressing its importance as a core component of the technology's development and use.

“It is essential that ethics take centre stage in AI’s development and use,” said Clement-Jones. “AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.

“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

As part of this, the committee has recommended the establishment of a cross-sector AI code, which would be adopted nationally and, it hopes, internationally. This would form a framework to ensure best practice in AI, and would be based on five principles that the committee has put forward:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

While such a code could have the potential to stifle innovation, if companies embrace it, it could prove to be a valuable turning point in the development of artificial intelligence, not only improving public perception and acceptance, but also driving greater adoption and thus growing business opportunities.

It would be wise, then, for companies to play a strong and active role in the development of this code, something that Intel has already indicated a strong interest in doing.

“The Lords AI code is something we should contribute to. No one entity or company can make all the decisions. We have to collaborate with other players in the industry, government bodies, human rights organisations, and charities to make this work. It requires different opinions and perspectives on things that we might not have seen,” said Chris Feltham, industry technical specialist at Intel, following the report. 

“There is huge appetite for the rapid acceleration in AI development, but as we work to develop new methods for integrating AI capabilities into the fabric of society, a public policy conversation is essential more than ever. The industry must work together to regulate and send a clear message in committing to ethical deployment of AI.”

Shaping public perception of AI

While industry support for AI frameworks is clearly important to the technology's future development, there is also a need for public perceptions to improve. However, this is an immensely complex challenge, and achieving it will require companies to work tirelessly on several fronts.

One part of this is increasing the understanding of what AI is and what it isn’t, moving ideas about the technology away from science fiction and into reality. This can in part be achieved through clear communication about what particular AI technologies do rather than simply presenting them as mysterious, quasi-magical products here to solve all our problems.

“As AI now begins to appear in our day-to-day lives and work, there is an increasing desire to comprehend how autonomous technologies make decisions, act upon these decisions, and the subsequent impact they have on society and the individual. Whilst AI’s rationale for making individual decisions may be complex and opaque, it’s clear that AI is so much more than a technical topic, and has socio-political dimensions too,” said Matt Walmsley, EMEA director at AI cybersecurity company Vectra.

“AI is yet to become truly autonomous in the workplace, but its deployment is already increasingly common in areas like decision support. For example, AI is currently being used to combat cybersecurity adversaries by analysing digital communications in real time and spotting the hidden signals to identify nefarious behaviour. A task that is simply beyond humans alone.  AI augments the human capabilities and security analysts to quickly identify, understand and respond in the case of a data breach. Here, AI focuses on performing a particular set of tasks and is overseen by a human in the decision loop for many of the remedial interventions.

“Our tendency to anthropomorphise AI technology perhaps comes from the widespread influence of science fiction. AI in today’s workplace is more ‘Robocop’ than ‘Terminator’s SkyNet’. It augments human capabilities so that systems can operate at speeds and scales that humans alone cannot. In this context, moral risk is extremely low.”
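To make the human-in-the-loop pattern Walmsley describes more concrete, here is a minimal sketch of the general approach: an unsupervised anomaly detector scores network flows and escalates the most suspicious ones to an analyst, rather than acting autonomously. It uses scikit-learn's IsolationForest as a stand-in detector; the flow features and review queue below are illustrative assumptions, not Vectra's actual method.

```python
# Illustrative sketch only: anomaly detection with a human in the decision loop.
# IsolationForest stands in for a production detector; the flow features and
# review-queue size are hypothetical assumptions for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical network-flow features: bytes sent, duration (s), port entropy.
normal_traffic = rng.normal(loc=[500, 2.0, 3.0], scale=[100, 0.5, 0.4], size=(1000, 3))
suspicious = rng.normal(loc=[50000, 0.1, 0.1], scale=[5000, 0.05, 0.05], size=(5, 3))
flows = np.vstack([normal_traffic, suspicious])

# Train the detector on observed traffic (unsupervised).
detector = IsolationForest(contamination=0.01, random_state=0).fit(flows)

# Score each flow: lower scores are more anomalous.
scores = detector.decision_function(flows)

# Rather than acting autonomously, queue the most anomalous flows
# for a security analyst to review -- the "human in the decision loop".
review_queue = np.argsort(scores)[:10]
for idx in review_queue:
    print(f"flow {idx}: score={scores[idx]:.3f} -> escalate to analyst")
```

The key design point is the division of labour: the model ranks and flags at machine speed and scale, while the remedial decision remains with a person.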

It’s clear that transparency is essential to the effective communication of AI – something that was echoed in the Lords report, although the committee did acknowledge that complete transparency is not always practical.

Particularly important, however, is the care companies take over the handling of personal data. This is a central concern for many members of the public – particularly in light of the Cambridge Analytica scandal – and the mishandling of data by an AI company could prove especially damaging to the industry given the already heightened suspicions around the technology.

In some cases the solution may be to publish transparency reports on what data is being used in an AI context, but it will also require companies to think carefully about what types of data they are making use of, beyond the requirements of GDPR.

Plugging the AI skills gap for the long haul

One of the most immediate issues facing AI is its significant skills gap: simply put, demand for quality AI experts far outstrips the number who currently exist.

“We’re now talking to companies across all sectors about the impact of technology on their businesses – and it’s clear they need specialist skills in key areas like AI, machine learning and also digital ethics to drive and implement change,” said Mike Drew, global head of the technology practice at executive search firm Odgers Berndtson. “Seeing opportunities to digitally transform but not possessing the people to implement change will hold businesses back and create a competitive void.”

At the same time, there is deep concern about the jobs being lost in the wake of the AI revolution, something that the Lords report acknowledged as a certain reality, urging the government to invest in education and retraining to mitigate it.

“It is vital that there is further investment in education around AI and robotics, and the benefits this tech will bring such as increased employment opportunities,” said Guy Kirkwood, chief evangelist at UiPath, a company specialising in robotic process automation. “These jobs may look different than before and require other skillsets and knowledge. For example, individuals would be tasked with developing new automation technologies and managing the implementation of these technologies within our business environment.”

It is clear that far more work needs to be undertaken to ensure greater expertise in AI, perhaps even retraining those who have lost their jobs to AI as experts in the technology. Furthermore, as the Lords report recommended, AI training needs to be integrated into education at a far younger age, so that in the long term the industry remains supplied with strong candidates.

However, while many recommendations in this area have been directed at governments, it is important that businesses take an active and innovative approach to solving their AI skills shortage, perhaps in part through new training approaches that ensure their needs are truly met.

There is incredible potential for AI in the UK and beyond, and if businesses take an active role they can ensure that the technology benefits them in the long term. If not, they risk being carried along in directions that, while well meaning, may not best suit their needs.
