A Shift to Behavioural Data

First things first, data will still, one hundred percent, guide marketing decisions and remain central in driving customer journeys and experiences. One need only look at the number of data scientists currently being recruited and the ever-present need to quantify campaign success to be convinced. But the use of personal data will, and has to, change.

Under the new legislation, consumers have to proactively opt-in and have the right to be forgotten. There is no doubt that this will immediately reduce the marketer’s active audience universe and the longevity of the data that they hold.

As a result, there will be a bigger percentage of an existing or potential target audience for which an organisation does not hold personal data. But these consumers will still demand a great customer experience. So how will marketers look to meet this need?

In a February article on Big Data, Forbes tells us that 84% of marketing organisations are implementing or extending AI usage in 2018. Whilst AI remains a mystery to some, it is already playing a significant role in marketing and delivering experiences. Adobe Sensei is helping with anomaly detection in Adobe Analytics, ASOS is using deep learning to create a visual search that matches customers' pictures of clothes to similar products, chatbots are using AI to answer questions, and Amazon Alexa and Google Home use natural language processing algorithms to turn speech into intents and answer our questions.

Currently, personal data is predominantly used in targeting, segmentation, and personalising and tailoring the user experience. In the absence of personal data, marketers will look to real-time data that can be gleaned as consumers move through the customer journey. The focus will shift to behavioural data which, when quickly analysed and acted upon, gives experience makers replacement cues to segment, personalise and optimise. This analysis and activation is a key area of activity for machine learning: according to Forbes, 57% of enterprise executives believe the most significant growth benefit of AI and machine learning will be in improving customer experience and support.
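
To illustrate the kind of behavioural segmentation described above, the minimal sketch below clusters anonymous session signals into segments. The feature names, segment count and use of scikit-learn are illustrative assumptions, not any particular vendor's approach.

```python
# Minimal sketch: clustering anonymous session behaviour into segments.
# Feature names and the number of segments are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row is one anonymous session:
# [pages_viewed, seconds_on_site, items_added_to_basket, searches_run]
sessions = np.array([
    [3, 45, 0, 1],
    [12, 600, 2, 4],
    [1, 10, 0, 0],
    [8, 320, 1, 3],
    [15, 900, 3, 6],
    [2, 30, 0, 1],
])

# Standardise features so no single signal dominates the distance metric.
scaled = StandardScaler().fit_transform(sessions)

# Group sessions into behavioural segments (e.g. browsers vs. buyers).
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(segments)  # one segment label per session, e.g. [0 1 0 1 1 0]
```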

Sam Miller, Consultant, Cognifide

The End of the Data Wild West

The advent of the GDPR will certainly end the Wild West era of big data, but if organisations are rigorous in following compliance requirements, the development and deployment of artificial intelligence solutions should not be a problem.

Understandably, the GDPR gives rise to questions about collecting review feedback. AI applications are, for example, increasingly deployed to enhance advanced online review systems, providing users with more personalised and rapid access to what most interests them. From hundreds of reviews, consumers rapidly get to the nub of what they want, while businesses can spot trends and analyse results with great speed and precision.

Customers will not need to be asked permission before they can be sent a feedback request, since customer feedback is considered to be market research, which allows a business to contact a customer about a specific sale or service without consent as long as it is directly linked to that transaction. Being open and honest about how an individual’s data will be used is vital in the deployment of AI to provide hyper-personalised services or experiences. Consumers who trust that their data will be used to benefit them are more willing to share personal information.

Businesses relying on customer feedback will nonetheless still have to ensure that their data processors are compliant with GDPR. A reviews provider will need to maintain a record of their processing operation and disclose any breaches.

AI is, after all, essential to the future of business. According to recent Feefo research, two-thirds of senior IT decision-makers said that failure to adopt AI will have a catastrophic effect on competitiveness.

Yet despite the new regulation, businesses will still be able to use feedback to improve services and enhance the customer experience.

Neil Mcilroy, Head of Product, Feefo

The Cybersecurity Perspective

While GDPR is predominantly about process and people, our contribution is to reduce the incidence of data breaches caused by malicious software. The knock-on implications of not being able to stop a breach can, as we know, be very large, and the business impact could easily outweigh even the hefty fines GDPR can impose on organisations. Uniquely in the AV field, our AI/ML-based solutions can predictively prevent even zero-day malware, effectively closing a route to data exfiltration and also changing the investment balance organisations can take advantage of. By focusing on prevention, the investment profile in both products and services to pursue breaches into the network looks very different.

Dr Anton Grashion, Manager, Security Practice, Cylance

Challenges That Can Be Overcome

The real value of AI, especially in the B2C realm, is how it can help create a truly personalised experience for people. To do that, however, AI implementations generally depend on access to customers and their data. If these are not properly implemented, companies may run the risk of exposing potentially sensitive data through a variety of avenues.

As we look to the upcoming in-force deadline for the GDPR, it probably isn't much of a surprise that these regulations will present challenges to companies providing AI products and services. But these challenges are not insurmountable. Successful integration of AI and machine learning is based on two fundamental privacy principles: transparency and the ethically sound use of personal data. Data scientists and AI/ML implementers can avoid complex issues by anonymising or pseudonymising the data, separating personal data from the actual AI models in independent tables and indexes. By keeping the PII separate, data subject rights can be realised without affecting the integrity of the model.
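
A minimal sketch of that separation might look like the following, assuming simple in-memory dictionaries stand in for the independent tables and indexes; the field and function names are illustrative only.

```python
# Minimal sketch of pseudonymisation: PII kept in one table, model features in another,
# linked only by a random pseudonym. Names and fields are illustrative assumptions.
import uuid

pii_table = {}      # pseudonym -> personal data (held separately, access-controlled)
feature_table = {}  # pseudonym -> non-identifying features used by the model


def ingest(record: dict) -> str:
    """Split a raw record into PII and model features under a random pseudonym."""
    pseudonym = str(uuid.uuid4())
    pii_table[pseudonym] = {"name": record["name"], "email": record["email"]}
    feature_table[pseudonym] = {"segment": record["segment"], "visits": record["visits"]}
    return pseudonym


def forget(pseudonym: str) -> None:
    """Honour a data subject request by dropping the PII for that pseudonym."""
    pii_table.pop(pseudonym, None)
    # The remaining features are no longer attributable to a person,
    # so the integrity of the trained model is unaffected.


p = ingest({"name": "Jane Doe", "email": "jane@example.com", "segment": "A", "visits": 7})
forget(p)
```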

It becomes a little more complex in the case of profiling which, in general, is a commonly accepted practice for delivering personalisation in customer experience, but does have limits on what is acceptable. Under GDPR, an individual has the right not to be subject to a decision determined solely by automated processing when such a decision could have a legal or an otherwise significant impact on the individual. As a result, using automated profiling for activities such as setting insurance rates or determining credit card eligibility will generally be subject to certain mitigating factors and require a valid exception such as 'explicit consent'.

Ultimately, GDPR will affect AI in similar ways to other technologies. It's important for a vendor to know what data they can use and how they can use it at the outset of any AI project, both to ensure compliance with GDPR and to build long-standing trust.

Gerald Beuchelt, CISO, LogMeIn

Impacts Across Artificial Intelligence

We have seen an escalating trend toward brands developing retail, e-commerce and digital marketing strategies that tap AI and machine learning technologies to drive relevant, in-the-moment interactions and customer experiences.

While we don’t expect GDPR to completely eradicate this trend — the potential benefits of these technologies are simply too big to ignore for marketers — we do anticipate that the way these techniques are employed must and should change, for some important reasons.

Firstly, there are the GDPR's requirements that consumer consent be verifiably captured for each processing purpose, managed across the full lifecycle of each customer, and proven to auditors at any point upon request, and that customers are able to easily understand and control the processing and accuracy of their personal data.
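
A minimal sketch of what such a per-purpose consent record might look like, assuming a simple in-memory store; the field names and methods are illustrative, not a reference implementation.

```python
# Minimal sketch of a per-purpose consent record with an audit trail.
# Field names and structure are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    customer_id: str
    purpose: str                                  # one record per processing purpose
    granted: bool
    history: list = field(default_factory=list)   # audit trail, shown to auditors on request

    def update(self, granted: bool, source: str) -> None:
        """Record a grant or withdrawal of consent, with its source and timestamp."""
        self.granted = granted
        self.history.append({
            "granted": granted,
            "source": source,   # e.g. "web form", "call centre"
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })


record = ConsentRecord("cust-123", "personalised-recommendations", granted=False)
record.update(True, "web form")     # consent captured, verifiably and per purpose
record.update(False, "email link")  # consent withdrawn later in the customer lifecycle
print(record.history)
```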

These rules, of course, apply to consumers’ personally identifiable information (PII). However, the GDPR expands the scope of ‘personal data’ to include processing of any attributes that can be used to directly or indirectly identify an individual within the regulations’ jurisdiction. This most definitely applies to behavioural data-hungry personalisation and analytics programs running at blistering speeds and driven by sophisticated, constantly evolving algorithms. AI-based marketing, service, sales and product development initiatives that ignore or downplay this reality will soon be at risk of incurring damage in the form of massive penalties issued by the EU Commission as of May 25th.

Beyond punitive risks, though, is the reality that these technologies, to be effective, rely on massive amounts of data collected from consumers, who are losing trust in business, media and government organisations’ ability to secure their data and respect their privacy. If organisations are unable to establish trust with their audience, their datasets will shrink through attrition: a situation even the most sophisticated algorithms cannot overcome.

But organisations that have the ability to capture data from customers through transparent, value-for-information exchanges, where consent is explicitly granted and maintained, can, under GDPR, build a truly accurate view of customers that can fuel more effective personalisation. Whether a machine or a human is powering the customer experience, that experience should be built on a basis of trust.

Peter Trend, Head of UK&I, SAP Hybris

Significant Hurdles for Certain AI Applications

Whilst the obligations of GDPR fall similarly on most industries that use personally identifiable data, there is a specific impact on AI and machine learning technologies as a result of the legislation coming into effect. In fact, certain applications might find that they face considerable hurdles. The regulation states that individuals ‘shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’.

So, an AI solution that uses solely automated processing comes under scrutiny, but a solution with meaningful human oversight of the machine's output is acceptable. There are exceptions that allow automated decision-making where it is provided for by law, as in the case of fraud prevention or money-laundering checks; where it is necessary for the performance of, or for entering into, a contract; or, of course, in line with the general spirit of GDPR, where it is based on the individual's prior consent.

The situation is relatively clear-cut for the use of AI in marketing, for example, as customers will of course be required to grant educated and explicit consent for any data use and profiling by automation. A problem arises when automated decision systems are used to process special categories of personal data, such as health data. It may be difficult to allow users to withhold consent and still receive any service, for example in insurance.

To comply with the GDPR, businesses that provide AI products, services and applications must bear in mind that under the regulation, individuals must be allowed to secure human intervention or to object to an automated decision made using their data. In complex cases, the logic followed by the machine must be explainable and detailed enough to justify answers to complaints raised by individuals or regulatory bodies.

Ian Woolley, CRO, Ensighten

Contradiction Between AI and GDPR

There is an essential contradiction between GDPR and the greater use of AI. Whilst the purpose of the GDPR is to protect personal data and ensure it is being processed legally for specific use, AI autonomously learns about individuals by processing this same data.

Consent is not a simple question. Consent cannot be assumed. Consent is not a guarantee for GDPR compliance.

Banks will need to manage the validity of consent across an overwhelming number of customers. Pre-existing or generic consents may preclude the data from being used in future.

The GDPR will have an impact on automated decision-making, for example credit decisioning for a loan. AI works by automatically developing profiles from scratch, rather than by using pre-determined profile patterns. Part of the GDPR outlines that individuals should not be subject to a decision based purely on automated processing, and this includes profiling. This means that if a bank is using AI to form such profile assumptions without human intervention, it risks falling foul of the regulation.

Customers will also be able to say if they don't want automated decisions to be made about them, meaning that banks will have to find other means of making those decisions.

Fundamentally, it's difficult to measure the true impact these changes will have until the first breach cases go to court and clear legal parameters are put in place. Until then, questions still remain.

Dharmesh Mistry, Chief Digital Officer, Temenos

AI-Driven Personalisation Under Scrutiny

Under the GDPR, practices that rely heavily on AI, such as website personalisation, will be put under more scrutiny than before. The GDPR has not been drafted to stop companies from taking advantage of technology. In fact, the GDPR embraces technology, providing a much-needed update to the outdated Data Protection Directive. Companies need to be vigilant, but this narrative pitting GDPR against data-driven technologies is misleading.

The GDPR requires companies to determine and document which lawful basis for processing is appropriate for each processing activity they undertake. For many, consent will be the default choice, but the issue of online consent has always been contentious. There is seemingly a paradox between the (quite reasonable) requirement for consent to be specific, informed and freely given and the practical reality that none of us have time to read the privacy policies or cookie notices for every website we visit. Can it be possible to truly consent to something without knowing what you're consenting to? This issue of transparency is particularly challenging for processes that are reliant on AI, where the algorithms used are likely to be complex and difficult to understand.

For this reason, many companies may want to think about an alternative basis for processing. Legitimate interests is one option, and the GDPR clearly states that companies may have a legitimate interest in getting to know their customers' preferences, as this enables them to better personalise their offers and better meet the needs of their customers. However, having a legitimate interest is not enough. After identifying such an interest, a company must then assess whether their interests are overridden by the data subject's interests or fundamental rights and freedoms. This is not a straightforward process and will need to be examined carefully. But doing so is likely to be far easier than summarising complex AI algorithms for the everyday user of your products, services or applications.

Jack Carvel, General Counsel, Qubit

Right To Be Forgotten Poses Challenges for Some AI Applications

The introduction of the GDPR is very specifically targeted towards data privacy, the storage of personal information, and identity data within IT systems. This is an interesting focus, as this convergence of IT systems and personal information is also an area where dramatic accelerations in the field of AI are transforming the IT industry.

In very simple terms, new AI initiatives and capabilities enable the processing of very large data sets in order to achieve conclusions that previously could not be reached either by a computer or human. Although some very large data sets are person-independent—such as processing anonymous large scale medical data, or machine telemetry across monitoring tools—many of the applications coming from AI innovations are directly relevant to individual identity data.

The interesting thing is that the GDPR at its most basic level requires all individual user-identifiable data to be accounted for and removable on request, under what is known as the 'right to be forgotten'. Any names, email addresses, phone numbers or other personal data must be known within an organisation, and removable on request.

For some AI networks this will not be a problem, as some AI will help us reach conclusions on non-individual data, including patterns and trends. As long as this analysis does not hold individual account names and other identifiable data, this is not a GDPR issue.

But if that data either holds individual details, or can be re-identified to a specific individual, then the GDPR is applicable. Any interactions with Alexa, Siri, or even Google search results that are recorded for 'John Smith' need to be tracked, with all existence of John removable. Any AI conclusion that says John is likely to enjoy this movie, or that food, needs to track every recommendation along with every advert, and must be able to remove any record of John from that data.
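
A minimal sketch of that bookkeeping, assuming recommendations are logged against a user key so a 'right to be forgotten' request can remove every trace; the names and data shapes are illustrative only.

```python
# Minimal sketch: log every recommendation under a user key so it can all be deleted.
# The user identifier, items and reasons are illustrative assumptions.
from collections import defaultdict

recommendation_log = defaultdict(list)  # user_id -> list of recommendations served


def record_recommendation(user_id: str, item: str, reason: str) -> None:
    """Log each recommendation, and the signal behind it, under the user's key."""
    recommendation_log[user_id].append({"item": item, "reason": reason})


def forget_user(user_id: str) -> int:
    """Remove every recommendation record held about this user; return how many."""
    return len(recommendation_log.pop(user_id, []))


record_recommendation("john-smith", "film: The Martian", "watched similar sci-fi")
record_recommendation("john-smith", "thai takeaway advert", "past food orders")
print(forget_user("john-smith"))  # -> 2 records removed
```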

So, as far as the GDPR is concerned, if AI is analysing anonymous data then this is fine. Where any AI work is applied to individual, non-anonymous data, a whole new layer of data security best practice and complexity is legally required. But that's not a bad thing; that's the point of GDPR. It brings the reassurance that our personal data is not being abused and misused. This process still isn't perfect, but it is better than no regulation at all.

Mind you, finding and extracting personal data from AI data lakes and cloud parallel-processing algorithms is so complicated that there may now be a good case for an AI-centric GDPR system that can monitor AI and check that it is in line with the regulation.

Ian Aitchison, Senior Product Director, ITSM, Ivanti

GDPR Will Push Companies to Use AI

High-quality customer data is the fuel that will enable AI features to deliver the next generation of experiences. Brands that aspire to differentiate in this way must build trust with customers, and provide frictionless ways for them to give (and remove) consent.

If this can be achieved, the data collected can be used to create highly tailored and contextual experiences. These 'new nudges' will initially enhance recommendation engines before being used to craft fully rounded creative ideas that effect behaviour change.

Ensuring compliance with GDPR will catalyse many companies to use AI. Businesses are under huge pressure to maintain data across multiple disjointed IT systems. They are required to support customers who wish to exercise their right to withdraw consent, and the right to be forgotten. Many businesses simply won't have the human resources to ensure the successful management of consent and compliance.

Robotic process automation (RPA) and intelligent AI systems have the potential to remove much of this headache. Compared to resource-intensive manual processing, systems of this nature are highly efficient and cost effective. And because they have access to customer data, they will be well placed to support the front end applications using this data to enhance customer experiences.

Alastair Cole, Chief Innovation Officer, Partners Andrews Aldridge

Minimal Impact to Artificial Intelligence

I don't think the GDPR is going to have a massive impact on this at all. I think everybody's got very focused on keeping personal data and using personal data, but I think what will become clear as the GDPR comes into force is that really it is covering the use of data for marketing; the GDPR is fundamentally an anti-marketing initiative.

So yes, it covers the whole spectrum of data and yes it changes some of the parameters around holding data, but fundamentally if you are using personal data to deliver a service to your customers then the GDPR doesn't change anything at all.

When you look at where AI is being used, it's bang in that scope. Amazon, a big user of AI, gives customers a completely individualised homepage, and every product that you're shown, every recommendation, all of that is done by AI.

That's not marketing, because you've gone to Amazon because you want to buy stuff and they're improving your experience of the sales journey. They're not using it to send you emails, they're not using it to push for services that you haven't asked for. If you want them to stop processing your data it's dead simple: close your Amazon account.

The only area where the GDPR adds something to the current legislation is the right to opt out of automated decision making, one of the fundamental GDPR rights. Some people interpret it as the right to object to any kind of automated decision making, such as the right to say to Amazon I don't want automated product recommendations.

However, if you read the regulations, that is not actually true. You only have the right to object to automated decision-making where the decision has a serious and fundamental impact, such as mortgage and insurance decisions. The GDPR isn't saying that your data can't be used for that, because data has been used in those decisions for years. However, if a company makes a fully automated decision about whether you're going to get a mortgage or insurance policy and you disagree with the decision, you have the right to ask for a human to review it. That's fundamentally what GDPR says on AI.

Tom Martin, retail intelligence and GDPR expert, OmniCX

Opportunity to Build Customer Trust

Allowing organisations to access our data has been aiding us for years, and denying them access would also mean saying goodbye to a plethora of benefits, such as more relevant searches while shopping online.

What the Cambridge Analytica scandal has highlighted is that companies must make sure that information about individuals is held and utilised in a safe and responsible manner. In fact, if an organisation can demonstrate to its customers that their data is valued and important, and used in the right way, they can improve levels of trust and develop more meaningful relationships with their customers.

The GDPR will force organisations to be more open and considerate in their use of customer data, so it becomes vital that businesses employ over-arching technology that can ensure compliance with data regulations and transparency when AI is used to drive customer interaction.

As part of the GDPR there will be a lot of pressure on organisations to prove why providing information or making a product offer is in the best interests of the customer and is defensible.

John Everhard, Director, Pegasystems

AI Marketing Technology Rarely Interested in Individuals 

There is a wide range of products and services out there that use machine learning to help cluster or identify behaviours.

Whilst GDPR extends the definition of personal data to potentially include things like IP addresses, machine learning and AI marketing technology is rarely interested in identifying people individually.

In most cases, the aim of this technology is to identify behaviours or market segments in order to create and target more relevant advertisements or content for consumers.

As long as businesses are clear in describing how they are using people's data, and are getting the necessary consent to do so, AI and machine learning will be widely used in marketing and continue to transform the space.

Other technologies such as blockchain also offer businesses the opportunity to create transparent systems, which give consumers confidence about how their data is being used. In the future we will definitely see more businesses using technologies like blockchain as they look to build trust with these transparent approaches.

As long as there's a value exchange between businesses and consumers and evidence that businesses are handling personal data with care, consumers will continue to be happy with their data being used to offer them bespoke marketing content.

Jim Bowes, CEO and Founder, Manifesto