Anyone can sit down with an artificial intelligence (AI) program such as ChatGPT and ask it to write poetry, children’s stories, or scripts. The results can be uncanny: at first glance, they seem very “human.” But don’t expect much depth or sensory “richness,” researchers explain in a new study.
They found that the large language models (LLMs) powering today’s generative AI tools cannot represent the concept of a flower in the same way humans do.
In fact, the researchers suggest that LLMs struggle to represent any concept with a strong sensory or motor component.
“A large language model can’t smell a rose, touch the petals of a daisy, or walk through a field of wildflowers. Without those sensory and motor experiences, it can’t truly represent what a flower is the way human concepts do,” said the study’s lead author, Qihui Xu.
The study suggests the same holds for other concepts as rich in sensory detail as flowers.
“Because AI doesn’t have rich sensory experiences, it often produces things that satisfy a minimal definition of creativity, but it’s hollow and shallow,” says Mark Runco, a cognitive scientist at Southern Oregon University, US, who was not involved in the study.
The study was published in the journal Nature Human Behaviour on June 4, 2025.
Poor at representing sensory concepts
The more scientists investigate the inner workings of AI models, the more they find out how differently these models “think” compared with humans. Some say AIs are so different that they resemble a form of alien intelligence.
However, it is difficult to objectively test AI’s conceptual understanding. When computer scientists open up an LLM and look inside, they don’t necessarily understand what its millions of constantly changing numbers really mean.
Xu and her colleagues set out to test how well LLMs can “understand” things on the basis of sensory traits. They did this by testing how well LLMs represent words with complex sensory meanings, measuring factors such as how emotionally arousing a thing is, whether it can be mentally visualized, and how strongly it is associated with movement and action.
For example, they analyzed the extent to which humans experience flowers by smelling them, or through actions of the torso, such as reaching out to touch a petal. These ideas are easy for humans to grasp because we have noses and bodies.
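As a rough illustration of this kind of comparison, the sketch below correlates hypothetical human ratings with hypothetical LLM-elicited ratings for a few words along a single sensory dimension. The words, the numbers, and the 0–5 scale are all illustrative assumptions, not the study’s actual data or pipeline.

```python
from scipy.stats import spearmanr

# Hypothetical words rated on one sensory dimension: "How much do you
# experience this by smelling it?" (0 = not at all, 5 = very much).
words = ["flower", "rose", "grass", "brick", "idea"]

# Made-up human ratings for the question above.
human_smell = [4.6, 4.9, 3.8, 1.2, 0.1]

# Made-up ratings elicited from an LLM asked the same question.
llm_smell = [3.1, 3.4, 2.2, 1.8, 0.9]

for w, h, m in zip(words, human_smell, llm_smell):
    print(f"{w:>6}: human={h:.1f}  llm={m:.1f}")

# Rank correlation between human and model ratings on this dimension.
rho, p = spearmanr(human_smell, llm_smell)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

A weak correlation on a sensory dimension like smell, alongside a strong one on non-sensory dimensions, is the kind of gap the researchers describe.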
Overall, LLMs represent words well, but their representations lack any connection to the senses and motor actions we experience and feel as human beings. When it comes to words tied to things we see, taste, or interact with using our bodies, AI fails to convincingly capture human concepts.
Why AI art feels “hollow”
AI builds concepts and word representations by analyzing patterns in the dataset it was trained on. This idea is at the root of every algorithm and task, from writing a poem to predicting whether a facial image shows you or your neighbor.
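To see what that pattern-based idea means in miniature, consider the toy sketch below: each word is represented by how often it appears near a few context words, so words used in similar contexts end up with similar vectors. Every count here is invented for illustration; no real model is involved.

```python
import numpy as np

# Toy co-occurrence vectors: each column counts how often a word appears
# near one of these context words (all numbers invented for illustration).
contexts = ["garden", "smell", "red", "data", "compute"]
vectors = {
    "flower": np.array([9.0, 8.0, 6.0, 0.0, 0.0]),
    "rose":   np.array([8.0, 9.0, 7.0, 0.0, 0.0]),
    "server": np.array([0.0, 0.0, 1.0, 9.0, 8.0]),
}

def cosine(a, b):
    # Cosine similarity: near 1.0 means very similar usage, near 0.0 unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["flower"], vectors["rose"]))    # high: shared contexts
print(cosine(vectors["flower"], vectors["server"]))  # low: disjoint contexts
```

A text-only model’s notion of “flower” is, at bottom, this kind of statistical neighborhood; what it lacks is any dimension for what a flower actually smells or feels like.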
While most LLMs are trained on textual data scraped from the internet, some LLMs are additionally trained on visual data from images and videos.
Xu and colleagues found that LLMs with visual training produced representations more similar to humans’ in vision-related dimensions, outperforming LLMs trained only on text. However, the test was limited to visual input; other human senses, such as touch and hearing, were excluded.
This suggests that as AI models receive more sensory information in their training data, they can better represent the sensory aspects of concepts.
AI continues to learn and improve
The authors note that LLMs are constantly improving, and said it is likely AI will get better at capturing human concepts in the future.
Xu said that if future LLMs are augmented with sensor data and robotics, they may be able to make inferences about the physical world and act on it.
However, independent experts suggested that the future of sensory AI remains uncertain.
“An AI trained on multisensory information could potentially handle aspects of multimodal sensation without any problems,” said Mirco Musolesi, a computer scientist at University College London in the UK, who was not involved in the study.
However, Runco said that even with more advanced sensory capabilities, AI will still understand things like flowers in a way completely different from humans.
Our human experiences and memories are closely tied to our senses, an interplay of brain and body that extends beyond the moment itself. For example, the smell of a rose or the silky feel of its petals can trigger happy childhood memories or giddy excitement in adults.
AI programs have no body, no memories, and no “self.” They lack the ability to experience and interact with the world the way animals, including human animals, do. This is why “the creative output of AI remains hollow and shallow,” Runco said.
Editor: Zulfikar Abbany