The concept of a machine that can understand what humans are feeling is a common feature of the sci-fi genre, but the ability to unlock how a customer or user feels towards a product or service is now within reach, thanks to advances in biometrics and artificial intelligence.

According to a study by CloudIQ, 69% of consumers want an individualised experience, and on paper, comprehending the emotional state of individuals at a specific point in time may be a way of achieving the highest level of personalisation. However, it brings with it a myriad of ethical considerations.

Emotional analytics

Emotional analytics, also known as affective computing, tries to understand and analyse human emotion to quantify how someone feels about a particular product or service, be it an advert, a new piece of tech or a type of food or drink.


According to Forbes, the affective computing market could grow to around $41bn by 2022. One firm in the space, Tacit, provides services to household names including Mars, Hasbro, Adidas, Unilever, Dove and Magnum.


Josipa Majic, CEO and founder of Tacit, explains how human emotions are quantified: “Affective computing is the field of identifying and analysing emotional behaviour, but through objective parameters such as biometric data.


“So not relying on self-reporting, because that was traditionally very challenging for us humans; this is where affective computing steps in and provides a data-driven approach in quantifying and analysing these things.”


Emotional analytics works by analysing how certain biological cues are linked to particular emotions such as fear, happiness or anger; machine learning algorithms interpret these cues and, over time, learn to identify trends in biometric data and match them to particular emotions.
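As a rough illustration of that machine learning step (and only an illustration; Tacit’s actual models and features are not described in detail here), the sketch below trains an off-the-shelf classifier to map a handful of hypothetical biometric features to emotion labels. The feature names, synthetic data and labels are all assumptions.

```python
# Illustrative sketch only: maps hypothetical biometric features to emotion
# labels with an off-the-shelf classifier. Real affective-computing systems
# use far richer signals and models; the feature names here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row of hypothetical features:
# [heart_rate_bpm, skin_conductance, facial_expression_intensity]
X = rng.normal(loc=[75.0, 2.0, 0.3], scale=[10.0, 0.5, 0.2], size=(600, 3))

# Synthetic labels standing in for annotated recording sessions.
y = rng.choice(["happiness", "fear", "anger"], size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn which biometric patterns tend to co-occur with which labelled emotions.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict(X_test[:3]))  # predicted emotion labels for unseen samples
```

In practice the features would come from labelled recording sessions rather than random numbers, but the shape of the problem is the same: biometric measurements in, emotion class out.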


Currently the technology is being used in market research. Companies such as Tacit gather biometric data by measuring a range of factors: electrical signals in the brain, electro-cardio signals and electro-thermal activity, as well as speech sentiment and facial expressions captured by a smartphone camera. This is all done in a controlled setting, with participants’ responses to a particular product monitored, providing insights that can be valuable to organisations.


“We collect biometric data in order to know exactly how you feel with millisecond precision but without asking you any question. And we crunch [the data] to get the three most important pieces of information,” Majic explains.


“One is emotional classification. So essentially, what is the emotion that you're feeling? The second is cognitive workload, meaning the level of mental effort you have to put in in order to comprehend what's being shown to you, whether that's an ad or solving anything like an algebra equation. And the third one is stress. So the level of discomfort that you feel when faced with that specific experience, or stimuli.”
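Those three measures suggest a simple record structure for each moment of a session. The dataclass below is purely a hypothetical representation for readability; the field names and 0–1 scales are assumptions, not Tacit’s real schema.

```python
# Hypothetical container for the three measures Majic describes; field names
# and scales are illustrative assumptions, not Tacit's actual data model.
from dataclasses import dataclass

@dataclass
class EmotionalReading:
    timestamp_ms: int          # millisecond-precision capture time
    emotion: str               # emotional classification, e.g. "happiness"
    cognitive_workload: float  # mental effort required, assumed 0.0-1.0 scale
    stress: float              # level of discomfort, assumed 0.0-1.0 scale

reading = EmotionalReading(timestamp_ms=1_024, emotion="frustration",
                           cognitive_workload=0.72, stress=0.41)
print(reading)
```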


Once biometric data has been collected, AI steps in for the purpose of prediction.


“Predicting consumer preferences – the perfect example for this is analysing the customer experience of mobile banking apps and after obtaining enough data and samples,” she says.


“You can successfully predict what would be the perfect product feature combination and how does it differentiate and evolve if you are trying to cater different user demographics. Detecting differences between what was explicitly said vs what was implicitly detected, for example detecting if there are significant discrepancies between what the person says and is feeling.”

“A negative perception”

The ability to accurately pinpoint the emotion evoked by something offers many insights: it can help companies identify and iron out particular areas of frustration when testing new products, enable more natural interactions between humans and voice assistants and chatbots, and has potential applications in psychological research.


Software company Affectiva has demonstrated how detecting user emotions can have beneficial, even life-saving, impacts, developing a product that detects from facial cues when a driver may be falling asleep at the wheel.


Another company, Brain Power, has developed a tool that identifies emotions in others, designed to help those who may have difficulties detecting emotion, such as those on the autistic spectrum.


However, the concept of artificial intelligence understanding human emotion may be unsettling for some, with the technology having scope for misuse. Capitalising on customer emotions in order to market particular products, or using the technology for the purpose of surveillance and profiling, have been raised as potential concerns as AI’s ability to recognise human emotion improves.


With the ongoing debate surrounding the deployment of facial recognition without the public’s consent, the collection of data related to emotion is often met with hostility. According to The Guardian, a survey conducted in 2015 found that 50.6% of UK citizens were “not okay” with their emotions being detected.


It is evident that the technology could easily become an avenue for companies to gather an even greater volume of data on users. According to Forbes, Facebook acquired FacioMetrics, a facial analysis software company, in 2016, and in the same year Apple also acquired Emotient, a company that uses AI to detect emotions.


Majic acknowledges that the idea of companies gaining even greater insight into customers’ thoughts and behaviours, especially as consumers wake up to the sheer volume of data the likes of Facebook and Google already hold on them, may be something not everyone is comfortable with.


“I think one of the common misconceptions that was present in the past is that we can essentially read your mind, and no company wants to be perceived as a company that is hacking the brains of consumers or employees or partners. That's quite a negative perception,” she says.


“We think that with further education and as time goes by, and there's more advancements and a more mature public discourse of what the tech is and what it can do, these types of prejudice will just disappear. But at the time being, there is a lack of understanding and because of that, certain kind of Black Mirror type use cases are usually the ones that surface to the mainstream press.”

“The most important thing is privacy”

Majic points to several principles that must be upheld to ensure that the analysis of emotions does not infringe on privacy. Unsurprisingly, consent is key.


“The most important thing from a data protection privacy perspective is for you to be aware, as a respondent, what is being collected. When does the collection start? When does it finish? And what is the single purpose of what the data will be used for?” she asks.


Majic points to Amazon’s Rekognition, its facial recognition software that has been trialled in some public places in the US, as an example of how the technology should not be used, with long-term trials that the public may not be aware of raising concerns.


“Companies like Amazon which run what is essentially an unlimited trial, an unlimited pilot project, and really a test where they, for example, scan anyone who enters an airport just to collect the data indefinitely, so in perpetuity,” she says.


“These types of projects tend to be detrimental for a range of reasons.”


With any form of customer data, consent should be a primary concern. Majic explains why this is particularly crucial when handling biometric data.


“From biometric data, we can often detect any health or mental health problems. And our outcomes are designed in a way so everything is anonymised, randomised, so we never share raw data,” she says.


“No one can ever mine the data in order to extract that type of information. So our algorithms are essentially blind to that. And we never, ever share raw data with any clients, just analysed and randomised metadata.”
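That “never share raw data” principle can be pictured with a minimal sketch: drop raw signals and identifiers, keep only the derived metrics, and shuffle the records before release. This is a generic illustration of the idea, not Tacit’s actual anonymisation pipeline.

```python
# Minimal illustration of the "never share raw data" principle: raw signals
# and participant identifiers are dropped, only derived metrics are kept, and
# record order is shuffled before release. Not Tacit's actual pipeline.
import random

def anonymise(sessions):
    """Return shareable metadata derived from raw biometric sessions."""
    shareable = [
        {
            "emotion": s["emotion"],
            "cognitive_workload": s["cognitive_workload"],
            "stress": s["stress"],
            # raw_signal and participant_id are deliberately omitted
        }
        for s in sessions
    ]
    random.shuffle(shareable)  # randomise ordering before sharing
    return shareable

raw_sessions = [
    {"participant_id": "p-001", "raw_signal": [0.1, 0.4, 0.3],
     "emotion": "happiness", "cognitive_workload": 0.35, "stress": 0.2},
    {"participant_id": "p-002", "raw_signal": [0.7, 0.2, 0.9],
     "emotion": "fear", "cognitive_workload": 0.81, "stress": 0.6},
]
print(anonymise(raw_sessions))
```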


Overall, she believes that greater awareness of both the benefits and scope for misuse is key to the ethical deployment of emotional AI.


“I think educating the market and educating the wider media in what this is, how this will most likely be deployed, both by tech companies, public institutions, health companies and anything in between [is important],” she says.


“What are the possibilities and what's the most likely path to getting the most out of this tech in an ethical and acceptable way.”
