ASTOUND

An EC-funded project aimed at improving the social competences of virtual agents through artificial consciousness based on the Attention Schema Theory

30 March 2025
by rdonepudi

Bias, Subjectivity and Norm in Large Language Models

At the Aequitas Workshop on Fairness and Bias in AI (October 2024, Santiago de Compostela, Spain), Thierry Poibeau, research partner in the ASTOUND consortium, presented a thought-provoking position paper on bias, subjectivity, and normative frameworks in Large Language Models (LLMs). The paper, titled “Bias, Subjectivity and Norm in Large Language Models,” challenges prevailing assumptions about bias mitigation in AI and proposes a more nuanced, transparent approach.

In current AI discourse, there is a tendency to treat bias as an error to be eliminated. However, this paper argues that biases in LLMs are not only inevitable but also reflective of broader societal values and norms. Attempts to retroactively “de-bias” models often overlook the deeper question: whose norms and whose fairness are being encoded?

Rather than striving for an elusive neutrality, the authors advocate for increased transparency in how models are filtered and aligned with use-case-specific expectations. This means recognizing that LLM outputs are shaped by multiple layers of subjective choices—from training data curation to deployment context—and that responsible AI should foreground these complexities.

This paper underscores ASTOUND’s broader mission: to develop contextually aware AI systems that are both socially intelligent and ethically grounded. By engaging in the ethics of model development, we move closer to creating AI that not only performs well, but does so in ways that are understandable, fair, and accountable.

Read the full article here: https://cnrs.hal.science/hal-04838836v1

14 March 2025
by rdonepudi

Keynote Talk: Do Large Language Models Have Any Relevance for Linguistic Theory?

On February 28, 2025, at the Embed-Days Colloquium: From Theory to Applications, held at École Normale Supérieure in Paris, ASTOUND’s research partner Thierry Poibeau delivered a compelling presentation titled “Do Large Language Models Have Any Relevance for Linguistic Theory?”

This talk tackled a central question in contemporary AI and linguistics: Can Large Language Models (LLMs), despite their impressive performance in natural language tasks, offer meaningful contributions to linguistic theory? Poibeau examined the intricate relationship between abstract, non-grounded representations in LLMs and traditional linguistic frameworks.

While LLMs often challenge the assumptions of classical linguistics, they also open new doors for empirical exploration, raising both opportunities and concerns for how we conceptualize language, meaning, and cognition. This ongoing dialogue between AI and linguistics is at the heart of the ASTOUND project’s interdisciplinary approach to advancing socially aware, cognitively inspired AI systems.

Learn more about the event here: https://embedded-days.bunka.ai/

30 January 2025
by rdonepudi

Myth and AI: Emergence as an Example of a Typical Myth in AI

On January 21, 2025, Thierry Poibeau (École Normale Supérieure), research partner of the ASTOUND project, participated in a thought-provoking event hosted by the Centre for Digital Humanities at the University of Cambridge: “Myth and AI: Emergence as an Example of a Typical Myth in AI.”

The presentation critically examined the popular yet misleading narrative of “emergence” in artificial intelligence — the idea that intelligent behavior arises spontaneously from complex computational systems. This notion, often rooted in science fiction, can obscure critical discussions about how AI systems actually learn and function, leading to confusion about their real capabilities and limitations.

Thierry Poibeau explored how such myths not only anthropomorphize AI, but also distract from pressing ethical and societal challenges — from opacity in decision-making to the impact of biased training data. By demystifying these concepts, the ASTOUND project continues its commitment to fostering transparent, context-aware, and socially responsible AI.

Explore the full event details here: https://www.cdh.cam.ac.uk/events/39431/

17 December 2024
by rdonepudi

Awarded an EIC Booster Grant – ASSIST

ASSIST (Aware Systems for Smart Interactive Museums and Art) builds on the advances of the EIC Pathfinder project ASTOUND to bring contextually aware AI chatbots to the tourism sector. The project has been awarded an EIC Booster Grant to support its path to market. By leveraging artificial awareness, these chatbots enhance visitor engagement in museums, providing real-time, personalized interactions that enrich the cultural experience. Work will focus on commercial validation, market analysis, and fine-tuning the chatbot technology to redefine the future of museum interactions.

29 May 2024
by fjmirandav

Testing Theory of Mind in Large Language Models and Humans

In a recent study published in Nature Human Behaviour, a team of researchers from the ASTOUND consortium explored the theory-of-mind capabilities of humans and large language models (LLMs) such as GPT-4 and LLaMA2. This study, central to the ASTOUND project (GA 101071191), examines how well these AI models can track and interpret human mental states, an ability central to social interaction and communication.

Our team, alongside other researchers, carried out a comprehensive examination of theory of mind in both humans and AI. The study used a series of tests measuring various aspects of theory of mind, including understanding false beliefs, interpreting indirect requests, and recognizing irony and faux pas.

We tested two families of LLMs (GPT-4 and LLaMA2) against a battery of theory-of-mind measures, comparing their performance with that of 1,907 human participants. This approach ensured a fair and systematic comparison between human and artificial intelligence.

The findings highlight that while AI models can mimic human-like reasoning in several theory of mind tasks, they also reveal distinct limitations and biases. For instance, GPT models often adopt a hyperconservative approach, hesitating to commit to conclusions without sufficient evidence, which contrasts with human tendencies to make more definitive judgments.

This study was a collaborative effort involving experts from various institutions, including our own team. Our involvement was crucial in designing and conducting the experiments, analyzing the data, and interpreting the results.

The insights gained from this research are invaluable for future developments in AI. Understanding the nuances of how AI models process social information can guide the creation of more sophisticated and human-like AI systems. It also opens avenues for further research into mitigating biases and improving the robustness of AI’s social reasoning abilities.

Read the full article here: https://www.nature.com/articles/s41562-024-01882-z
