Can a Machine Know That We Know What It Knows?
机器能知道我们知道它知道什么吗?
Some researchers claim that chatbots have developed theory of mind. But is that just our own theory of mind gone wild?
一些研究人员声称,聊天机器人已经发展出了心智理论。但这会不会只是我们自己的心智理论失控了?
Mind reading is common among us humans. Not in the ways that psychics claim to do it, by gaining access to the warm streams of consciousness that fill every individual's experience, or in the ways that mentalists claim to do it, by pulling a thought out of your head at will.
读心术在我们人类中很常见。不是通灵者宣称的那种方式,即进入充满每个人体验的温暖意识流;也不是心灵魔术师宣称的那种方式,即随意从你的脑袋里抽出一个想法
Everyday mind reading is more subtle: We take in people's faces and movements, listen to their words and then decide or intuit what might be going on in their heads.
日常读心术更为微妙:我们观察人们的面部表情和动作,倾听他们的话语,然后决定或凭直觉感知他们的脑袋里可能在想些什么
Among psychologists, such intuitive psychology -- the ability to attribute to other people mental states different from our own -- is called theory of mind, and its absence or impairment has been linked to autism, schizophrenia and other developmental disorders.
在心理学家中,这种直觉心理学--理解他人与我们自己不同的心理状态的能力--被称为心智理论,缺失心智理论或心智理论受损与自闭症、精神分裂症和其他发育障碍有关
Theory of mind helps us communicate with and understand one another; it allows us to enjoy literature and movies, play games and make sense of our social surroundings. In many ways, the capacity is an essential part of being human.
心智理论帮助我们相互交流和理解,它让我们能够享受文学和电影、玩游戏、理解我们所处的社会环境。在许多方面,这种能力是人之为人的一个基本组成部分
What if a machine could read minds, too? Recently, Michal Kosinski, a psychologist at the Stanford Graduate School of Business, made just that argument: that large language models like OpenAI's ChatGPT and GPT-4 — next-word prediction machines trained on vast amounts of text from the internet — have developed theory of mind.
如果机器也会这种读心术,那会怎么样呢?最近,斯坦福大学商学院的心理学家米哈尔·科辛斯基就提出了这种观点:像OpenAI的ChatGPT和GPT-4这样的大型语言模型--接受互联网上大量文本的训练,并能预测下一个单词的机器 -- 已经发展出了心智理论
His studies have not been peer reviewed, but they prompted scrutiny and conversation among cognitive scientists, who have been trying to take the often asked question these days — Can ChatGPT do this? — and move it into the realm of more robust scientific inquiry.
他的研究还没有经过同行评审,但引发了认知科学家的审视和对话,认知科学家们最近一直在试图回答一个经常被问到的问题 -- ChatGPT能做到这一点吗?-- 并将这一问题置于更严谨可靠的科学探索领域
What capacities do these models have, and how might they change our understanding of our own minds?
这些模型有哪些能力?它们会如何改变我们对自己的心智的理解?
"Psychologists wouldn't accept any claim about the capacities of young children just based on anecdotes about your interactions with them, which is what seems to be happening with ChatGPT," said Alison Gopnik, a psychologist at the University of California, Berkeley and one of the first researchers to look into theory of mind in the 1980s. "You have to do quite careful and rigorous tests."
"心理学家不会只是根据你与幼儿互动时发生的奇闻轶事,就接受关于幼儿能力的任何说法,而ChatGPT现在似乎就是这种情况,"加州大学伯克利分校心理学家、上世纪80年代最早研究心智理论的学者之一艾莉森·戈普尼克说。"你必须进行相当细致和严格的测试。"
Dr. Kosinski's previous research showed that neural networks trained to analyze facial features like nose shape, head angle and emotional expression could predict people's political views and sexual orientation with a startling degree of accuracy (about 72 percent in the first case and about 80 percent in the second case).
科辛斯基博士之前的研究表明,接受了鼻子形状、头部角度和情绪表情等面部特征分析训练的神经网络,能以惊人的准确率预测人们的政治观点和性取向(前者的准确率约为72%,后者约为80%)