Can a robot read your emotions? Apple, Google, Facebook and other technology companies seem to think so. They are collectively spending billions of dollars to build emotion-reading devices that can interact meaningfully (and profitably) with humans using artificial intelligence.

These companies are banking on a belief about emotions that has held sway for more than 100 years: smiles, scowls and other facial movements are worldwide expressions of certain emotions, built in from birth. But is that belief correct? Scientists have tested it across the world. They use photographs of posed faces (pouts, smiles), each accompanied by a list of emotion words (sad, surprised, happy and so on) and ask people to pick the word that best matches the face. Sometimes they tell people a story about an emotion and ask them to choose between posed faces.
Westerners choose the expected word about 85 per cent of the time. The rate is lower in eastern cultures, but overall it is enough to claim that widened eyes, wrinkled noses and other facial movements are universal expressions of emotion. The studies have been so well replicated that universal emotions seem to be bulletproof scientific fact, like the law of gravity, which would be good news for robots and their creators.
But if you tweak these emotion-matching experiments slightly, the evidence for universal expressions dissolves. Simply remove the lists of emotion words, and let subjects label each photo or sound with any emotion word they know. In these experiments, US subjects identify the expected emotion in photos less than 50 per cent of the time. For subjects in remote cultures with little western contact, the results differ even more.
Overall, we found that these and other sorts of emotion-matching experiments, which have supplied the primary evidence for universal emotions, actually teach the expected answers to participants in a subtle way that escaped notice for decades, like an unintentional cheat sheet. In reality, you're not "reading" faces and voices. The surrounding situation, which provides subtle cues, and your experiences in similar situations, are what allow you to see faces and voices as emotional.
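The "unintentional cheat sheet" effect can be illustrated with a toy simulation. Nothing here is real data: the recognition rate, the emotion words and the synonym table are assumptions chosen only to show the mechanism, namely that a supplied word list steers uncertain subjects toward the expected answer, while open-ended labelling lets correct-but-unlisted synonyms score as misses.

```python
import random

random.seed(42)

EMOTIONS = ["happy", "sad", "angry", "surprised", "afraid", "disgusted"]
# Hypothetical near-synonyms a subject might volunteer when no word list
# is supplied; each is scored "wrong" against the single expected answer.
SYNONYMS = {
    "happy": "pleased", "sad": "gloomy", "angry": "annoyed",
    "surprised": "startled", "afraid": "worried", "disgusted": "repulsed",
}

RECOGNITION = 0.5  # assumed chance the face alone conveys the expected emotion


def forced_choice(expected: str) -> str:
    """With a cue list, uncertain subjects are steered to the expected word."""
    if random.random() < RECOGNITION:
        return expected
    # The list lets subjects eliminate implausible options and guess
    # between the expected word and one remaining distractor.
    distractor = random.choice([e for e in EMOTIONS if e != expected])
    return random.choice([expected, distractor])


def free_label(expected: str) -> str:
    """Without a list, subjects volunteer any word, often a near-synonym."""
    if random.random() < RECOGNITION:
        # Even a correct read may surface as a synonym, which scores as wrong.
        return random.choice([expected, SYNONYMS[expected]])
    return random.choice(list(SYNONYMS.values()) + EMOTIONS)


def accuracy(method, trials: int = 20_000) -> float:
    hits = 0
    for _ in range(trials):
        expected = random.choice(EMOTIONS)
        hits += method(expected) == expected
    return hits / trials


print(f"forced choice:  {accuracy(forced_choice):.0%}")
print(f"free labelling: {accuracy(free_label):.0%}")
```

Under these assumptions the forced-choice score lands far above the free-labelling score even though the subjects' actual face-reading ability is identical in both conditions, which is the point the studies missed.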
A knitted brow may mean someone is angry, but in other contexts it means they are thinking, or squinting in bright light. Your brain processes this so quickly that the other person's face and voice seem to speak for themselves. A hypothetical emotion-reading robot would need tremendous knowledge and context to guess someone's emotional experiences.

So where did the idea of universal emotions come from? Most scientists point to Charles Darwin's The Expression of the Emotions in Man and Animals (1872) for proof that facial expressions are universal products of natural selection. In fact, Darwin never made that claim. The myth was started in the 1920s by a psychologist, Floyd Allport, whose evolutionary spin job was attributed to Darwin, thus launching nearly a century of misguided beliefs.
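The context-dependence of facial cues described above can be sketched as a toy lookup. The context labels and the rule table are purely illustrative assumptions, not a real model; the point is only that the same facial movement yields different readings once situation is taken into account.

```python
# Illustrative sketch: the same facial movement ("knitted brow") gets a
# different reading depending on situational context.
def read_face(facial_movement: str, context: str) -> str:
    """Map a (movement, context) pair to an emotion label, or 'unknown'."""
    rules = {
        ("knitted brow", "argument"): "angry",
        ("knitted brow", "chess match"): "concentrating",
        ("knitted brow", "bright sunlight"): "squinting",
    }
    return rules.get((facial_movement, context), "unknown")


print(read_face("knitted brow", "argument"))         # angry
print(read_face("knitted brow", "bright sunlight"))  # squinting
```

A real system would need vastly more such knowledge, and would still have to infer the context itself, which is exactly the difficulty the article describes.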
Will robots become sophisticated enough to take away jobs that require knowledge of feelings, such as those of a salesperson or a nurse? I think it's unlikely any time soon. You can probably build a robot that could learn a person's facial movements in context over a long time. It is far more difficult to generalise across all people in all cultures, even for simple head movements. People in some cultures shake their head side to side to mean "yes" or nod to mean "no". Pity the robot that gets those movements backwards. Pity even more the human who depends on that robot.
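The head-movement problem amounts to saying that gesture meaning cannot be hard-coded globally. A minimal sketch, with culture keys and mappings that are illustrative assumptions rather than ethnographic fact:

```python
# The same head movement maps to opposite meanings in different cultures,
# so a robot needs a per-culture table rather than one universal rule.
GESTURE_MEANING = {
    "default": {"nod": "yes", "shake": "no"},
    "reversed": {"nod": "no", "shake": "yes"},  # the inverted convention
}


def interpret(gesture: str, culture: str = "default") -> str:
    """Look up a gesture in the culture-specific table."""
    table = GESTURE_MEANING.get(culture, GESTURE_MEANING["default"])
    return table[gesture]


print(interpret("shake", "reversed"))  # yes
print(interpret("nod"))                # yes
```

A robot shipped with only the "default" table would get every answer backwards in a culture using the inverted convention, which is the failure the article warns about.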
Nevertheless, tech companies are pursuing emotion-reading devices despite the dubious scientific basis. There is no universal expression of any emotion for a robot to detect. Instead, variety is the norm.