Towards a New Generation of Human-Inspired Language Models

Researchers advocate for a fundamental revision of how artificial intelligence acquires and processes language

Can a computer learn a language the way a child does? A recent study published in the leading journal Computational Linguistics by professors Katrien Beuls (University of Namur) and Paul Van Eecke (AI Lab, Vrije Universiteit Brussel) sheds new light on this question. The researchers advocate for a fundamental revision of how artificial intelligence acquires and processes language.

"Children learn their native language by communicating with the people around them in their environment. As they play and experiment with language, they attempt to interpret the intentions of their conversation partners. In this way, they gradually learn to understand and use linguistic constructions. This process, in which language is acquired through interaction and meaningful context, is at the core of human language acquisition," says Katrien Beuls.

"The current generation of large language models (LLMs), such as ChatGPT, learns language in a very different way," adds Paul Van Eecke.

"By observing vast amounts of text and identifying which words frequently appear together, they generate texts that are often indistinguishable from human writing. This results in models that are extremely powerful in many forms of text generation -- such as summarizing, translating, or answering questions -- but that also exhibit inherent limitations. They are susceptible to hallucinations and biases, often struggle with human reasoning, and require enormous amounts of data and energy to build and operate."

The researchers propose an alternative model in which artificial agents learn language as humans do -- by engaging in meaningful communicative interactions within their environment.

Through a series of experiments, they demonstrate how these agents develop linguistic constructions that are directly linked to their surroundings and sensory perceptions.

This leads to language models that:

  • Are less prone to hallucinations and biases, as their language comprehension is grounded in direct interaction with the world.
  • Use data and energy more efficiently, resulting in a smaller ecological footprint.
  • Are more deeply rooted in meaning and intention, enabling them to understand language and context in a more human-like manner.
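
To give a rough feel for the interaction-driven learning loop behind these claims, here is a hypothetical, heavily simplified Python sketch of a "naming game", a classic setup in this research tradition. Two agents repeatedly attend to a shared object; the speaker names it, inventing a word if it has none, and the hearer adopts the word from context when communication fails. The study's agents learn full grammatical constructions grounded in sensory perception, not bare object labels, so treat this only as an illustration of the principle.

```python
import random

OBJECTS = ["cup", "ball", "box"]  # stand-ins for a shared sensory scene

class Agent:
    def __init__(self):
        self.lexicon = {}  # object -> the word this agent uses for it

    def speak(self, obj):
        # Invent a new word when the agent has none for this object.
        if obj not in self.lexicon:
            self.lexicon[obj] = f"w{random.randrange(10_000)}"
        return self.lexicon[obj]

    def interpret(self, word):
        # Look up which object, if any, this word names for the agent.
        for obj, w in self.lexicon.items():
            if w == word:
                return obj
        return None

def play_round(speaker, hearer):
    topic = random.choice(OBJECTS)   # both agents attend to the same situation
    word = speaker.speak(topic)
    if hearer.interpret(word) == topic:
        return True                  # communicative success: convention holds
    hearer.lexicon[topic] = word     # failure: hearer adopts the word in context
    return False

alice, bob = Agent(), Agent()
results = [play_round(*random.sample([alice, bob], 2)) for _ in range(200)]
print(f"success rate over the last 50 rounds: {sum(results[-50:]) / 50:.2f}")
```

After a few dozen rounds the agents converge on a shared vocabulary, and every convention they hold traces back to a situated interaction, which is the grounding property the points above describe.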

"Integrating communicative and situated interactions into AI models is a crucial step in developing the next generation of language models. This research offers a promising path toward language technologies that more closely resemble how humans understand and use language," the researchers conclude.

Note: This news release was originally published by Vrije Universiteit Brussel. As it has been republished, it may deviate from our style guide.
