816 words, by John Thornhill

Rana el Kaliouby has spent her career tackling an increasingly important challenge: computers don’t understand humans. First as an academic at Cambridge University and the Massachusetts Institute of Technology, and now as co-founder and chief executive of a Boston-based AI start-up called Affectiva, Ms el Kaliouby has been working in the fast-evolving field of Human Robot Interaction (HRI) for more than 20 years.


“Technology today has a lot of cognitive intelligence, or IQ, but no emotional intelligence, or EQ,” she says in a telephone interview. “We are facing an empathy crisis. We need to redesign technology in a more human-centric way.”



That was not much of an issue when computers only performed “back office” functions, such as data processing. But it has become a bigger concern as computers are deployed in more “front office” roles, such as digital assistants and robot drivers. Increasingly, computers are interacting directly with random humans in many different environments.


This demand has led to the rapid emergence of Emotional AI, which aims to build trust in how computers work by improving how computers interact with humans. However, some researchers have already raised concerns that Emotional AI might have the opposite effect and further erode trust in technology, if it is misused to manipulate consumers.



In essence, Emotional AI attempts to classify and respond to human emotions by reading facial expressions, scanning eye movements, analysing voice levels and scouring sentiments expressed in emails. It is already being used across many industries, ranging from gaming to advertising to call centres to insurance.
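As a purely illustrative sketch, and not the pipeline of Affectiva or any other vendor, an Emotional AI system can be thought of as mapping multimodal signals (facial landmarks, voice measurements) to an emotion label with a confidence score. All names and thresholds below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical inputs; real systems would extract these from cameras and microphones.
@dataclass
class Signals:
    smile_intensity: float   # 0.0-1.0, from facial landmarks
    brow_furrow: float       # 0.0-1.0, from facial landmarks
    voice_pitch_z: float     # standardised deviation from the speaker's baseline pitch

def classify_emotion(s: Signals) -> tuple[str, float]:
    """Toy rule-based classifier: returns (label, confidence)."""
    if s.smile_intensity > 0.6:
        return ("happy", s.smile_intensity)
    if s.brow_furrow > 0.6 and s.voice_pitch_z > 1.0:
        # Furrowed brow plus a raised voice reads as anger in this toy model.
        return ("angry", s.brow_furrow)
    if s.brow_furrow > 0.6:
        return ("stressed", s.brow_furrow)
    return ("neutral", 1.0 - max(s.smile_intensity, s.brow_furrow))

print(classify_emotion(Signals(0.8, 0.1, -0.2)))  # ('happy', 0.8)
```

Real systems replace these hand-written rules with models trained on large labelled datasets, which is precisely where the validity concerns raised later in this article arise.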


Gartner, the technology consultancy, forecasts that 10 per cent of all personal devices will include some form of emotion recognition technology by 2022.



Amazon, which operates the Alexa digital assistant in millions of people’s homes, has filed patents for emotion-detecting technology that would recognise whether a user is happy, angry, sad, fearful or stressed. That could, say, help Alexa select what mood music to play or how to personalise a shopping offer.

Affectiva has developed an in-vehicle emotion recognition system that uses cameras and microphones to sense whether a driver is drowsy, distracted or angry, and can respond by tugging the seatbelt or lowering the temperature.
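A hypothetical control loop for such a driver-monitoring system (the state names and interventions below are illustrative, not Affectiva's actual API) might simply map each detected state to a list of in-cabin actions:

```python
def respond_to_driver_state(state: str) -> list[str]:
    """Map a detected driver state to illustrative in-cabin interventions."""
    actions = {
        "drowsy": ["tug_seatbelt", "lower_temperature"],
        "distracted": ["audible_alert"],
        "angry": ["play_calming_music"],
    }
    return actions.get(state, [])  # no intervention for unrecognised states

print(respond_to_driver_state("drowsy"))  # ['tug_seatbelt', 'lower_temperature']
```

The safety-critical part is the detection step feeding this loop; the response logic itself can stay deliberately simple and auditable.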



And Fujitsu, the Japanese IT conglomerate, is incorporating “line of sight” sensors in shop floor mannequins and sending push notifications to nearby sales staff suggesting how they can best personalise their service to customers.


A recent report from Accenture on such uses of Emotional AI suggested that the technology could help companies deepen their engagement with consumers. But it warned that the use of emotion data was inherently risky because it involved an extreme level of intimacy, felt intangible to many consumers, could be ambiguous and might lead to mistakes that were hard to rectify.




The AI Now Institute, a research centre based at New York University, has also highlighted the imperfections of much Emotional AI (or affect-recognition technology as it calls it), warning that it should not be used exclusively for decisions involving a high degree of human judgment, such as hiring, insurance pricing, school performance or pain assessment. “There remains little or no evidence that these new affect-recognition products have any scientific validity,” its report concluded.

In her recently published book, Girl Decoded, Ms el Kaliouby makes a powerful case that Emotional AI can be an important tool for humanising technology. Her own academic research focused on how facial recognition technology could help autistic children interpret feelings.



But she insists that the technology should only ever be used with the full knowledge and consent of the user, who should always retain the right to opt out. “That is why it is so essential for the public to be aware of what this technology is, how and where data is being collected, and to have a say in how it is to be used,” she writes.


The main dangers of Emotional AI are perhaps twofold: either it works badly, leading to harmful outcomes, or it works too well, opening the way for abuse. All those who deploy the technology, and those who regulate it, will have to ensure that it works just right for the user.


This article was published on FT中文网 on June 10; the original English title is "How AI is getting an emotionally intelligent reboot".