
Are We Ready To Get Emotional With AI?

Forbes Technology Council
POST WRITTEN BY
Carlos Melendez

Robots with a heart and soul have been the stuff of science fiction for years, but as artificial intelligence (AI) and software robots become a part of our daily lives, the lines between science fiction and reality are beginning to blur.

Yet as we interact with virtual assistants or speak to call center chatbots, we understand the limitations of today’s AI.

AI can sift through lots of data to tell you George Washington’s date of birth or the status of your last mortgage payment, but when it comes to sensing how you feel, progress is still being made. As AI becomes smarter, it is beginning to move on from the bare facts to identifying subtle nuances in emotions, speech and expressions.

Emotion AI, also known as "affective computing," is really nothing new; it dates back to 1995, when MIT Media Lab professor Rosalind Picard coined the term, and to her subsequent book, “Affective Computing,” which explores the development of systems and devices that recognize, interpret, process and simulate human affects. But why the sudden interest in emotion AI?

As we get more comfortable with AI, we’re seeing all kinds of new possibilities. In fact, Gartner predicts that by 2022, personal devices will know more about an individual's emotional state than his or her own family. Once emotion AI truly takes hold, the possibilities are endless. Consider the following scenarios:

• In health care, it will be able to spot mental and physical ailments in patients based on how they look or sound.

• In customer service, it can flag customers that are getting annoyed and elevate the conversation to a human.

• In marketing, it can gauge consumers’ reactions to television commercials to determine their effectiveness.

• In insurance, it can be used to detect when a customer is not telling the truth or submitting a false claim.

Rapid advances in technology are enabling data scientists to program voice assistants with a better understanding of the emotions in someone’s voice and the behaviors associated with those emotions. The more we understand these nuances, the more agile and emotionally intelligent our AI systems will become. But despite its growing capabilities, we still have a long way to go before emotion AI becomes an everyday thing, and I suspect it will be long past 2022.

Chatbots Lead The Way In Emotion AI

Leading the way in emotion AI is detecting emotion in voice using chatbots and natural language processing (NLP). AI can listen to voice inflections and start to recognize when those inflections correlate with stress or anger (since emotion is less about what we say and more about how we say it). The challenge is in identifying the many ways words are used and training chatbots or virtual assistants to recognize them.

While detecting emotion in voice may be the forerunner of emotion AI, the challenge is that these AI-enabled devices must be trained with huge datasets (just think of the many ways you display your annoyance when you are in a call center phone loop), and that simply takes tons of time.
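To make that concrete, here is a minimal, purely illustrative sketch of how such a voice-emotion model might be trained. It assumes a pre-labeled collection of audio clips (the large, hard-to-collect dataset described above); the file paths, labels and feature choices are hypothetical, not any vendor's actual pipeline.

```python
# Illustrative sketch only: a toy voice-emotion classifier.
# Assumes a labeled dataset of short clips as (wav_path, emotion_label) pairs.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def clip_features(path):
    """Summarize a clip's vocal inflections as average MFCC statistics."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_emotion_model(labeled_clips):
    # labeled_clips: hypothetical list like [("clip1.wav", "angry"), ...]
    X = np.array([clip_features(path) for path, _ in labeled_clips])
    y = np.array([label for _, label in labeled_clips])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = RandomForestClassifier(n_estimators=200)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```

Even a toy setup like this makes the article's point: the features and the model are the easy part, while collecting enough labeled examples of real frustration, in all the ways people express it, is what takes the time.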

There are also a number of other obstacles that must be overcome before emotion AI becomes a way of life:

Immaturity of computer vision. Computer vision is accomplishing many things, helping AI spot specific items within images faster than humanly possible. Taking it to a more granular level, however, is a long way off. For example, today AI can detect men with blonde hair from thousands of images of people, yet it’s not possible for it to detect images of blonde-haired men with sad expressions. That would require thousands of examples of what a sad face looks like, and that can vary from man to man, culture to culture and region to region. Even with the datasets required, computer vision technology is simply not advanced enough to address it today (see the sketch after these obstacles).

Biased perception. It’s one thing when AI is used to simply relay facts, but when it’s tasked with identifying very subtle micro-expressions or intonations in speech, it’s very difficult to explain how decisions are made. Since it’s humans who ultimately train AI algorithms how to respond in a given situation, it’s up to humans to train them fairly — giving them diverse datasets that address a variety of cultures, races, countries, genders and other factors, while also avoiding stereotypes. This is no easy task, and simply requires tons of data aggregated over many years — there’s no rushing it.

Data privacy concerns. Emotion AI must overcome another hurdle before it goes mainstream — gaining the trust of people. In a Gartner survey, 52% of more than 4,000 respondents said they didn’t want their facial expressions to be analyzed by AI, and 63% did not want AI to take an "always-on" listening approach to get to know them better. In order to build smart emotion AI, gathering this data will be crucial, yet today there is clear mistrust.
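Returning to the computer vision obstacle above, here is a toy illustration, not a real vision system, of why compound attributes like "blonde-haired men with sad expressions" are so much harder than single attributes: each added judgment has to be right at the same time, so errors compound. The model objects below are hypothetical placeholders, not real library APIs.

```python
# Toy sketch: chaining several hypothetical attribute models to find
# "blonde-haired men with sad expressions" in a set of images.
def find_sad_blonde_men(images, hair_model, gender_model, expression_model):
    matches = []
    for img in images:
        if (hair_model.predict(img) == "blonde"
                and gender_model.predict(img) == "male"
                and expression_model.predict(img) == "sad"):
            matches.append(img)
    return matches

# If each model alone is right ~90% of the time, all three being right
# together happens only about 0.9 ** 3 of the time.
print(f"compound accuracy with three 90% models: {0.9 ** 3:.0%}")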

There’s no question that emotion AI will one day become an everyday reality, but despite claims to the contrary among some industry providers, we’re not quite there yet in any pragmatic way. There’s still a lot of work to be done for algorithms to accurately interpret data, for the technology to evolve and for misperceptions to fade, but once that’s accomplished, we’ll be on a whole new level of human-to-machine communication, and maybe even human-to-human understanding.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.