Dr. Ullman noted that machine learning researchers have struggled over the past couple of decades to capture the flexibility of human knowledge in computer models. This difficulty has been a "shadow finding," he said, hanging behind every exciting innovation.
Researchers have shown that language models will often give wrong or irrelevant answers when primed with unnecessary information before a question is posed; some chatbots were so thrown off by hypothetical discussions about talking birds that they eventually claimed that birds could speak.
Because their reasoning is sensitive to small changes in their inputs, scientists have called the knowledge of these machines "brittle."
Dr. Gopnik compared the theory of mind of large language models to her own understanding of general relativity.
"I have read enough to know what the words are," she said. "But if you asked me to make a new prediction or to say what Einstein's theory tells us about a new phenomenon, I'd be stumped because I don't really have the theory in my head."
By contrast, she said, human theory of mind is linked with other common-sense reasoning mechanisms; it holds up under scrutiny.
In general, Dr. Kosinski's work and the responses to it fit into the debate about whether the capacities of these machines can be compared to the capacities of humans -- a debate that divides researchers who work on natural language processing.
Are these machines stochastic parrots, or alien intelligences, or fraudulent tricksters?
A 2022 survey of the field found that, of the 480 researchers who responded, 51 percent believed that large language models could eventually "understand natural language in some nontrivial sense," and 49 percent believed that they could not.
Dr. Ullman doesn't discount the possibility of machine understanding or machine theory of mind, but he is wary of attributing human capacities to nonhuman things.
He noted a famous 1944 study by Fritz Heider and Marianne Simmel, in which participants were shown an animated movie of two triangles and a circle interacting. When the subjects were asked to write down what transpired in the movie, nearly all described the shapes as people.
"Lovers in the two-dimensional world, no doubt; little triangle number-two and sweet circle," one participant wrote. "Triangle-one (hereafter known as the villain) spies the young love. Ah!"
It's natural and often socially required to explain human behavior by talking about beliefs, desires, intentions and thoughts. This tendency is central to who we are -- so central that we sometimes try to read the minds of things that don't have minds, at least not minds like our own.