ASTOUND

An EC-funded project aimed at improving the social competences of virtual agents through artificial consciousness based on the Attention Schema Theory

ETHICS AND HUMAN RIGHTS MATTER IN THE ASTOUND PROJECT



On May 15 and 16, 2023, a congress on Public Law and Algorithms was held at the University of Valencia. Professor Celia Fernández Aller and Professor Jesús Salgado Criado were invited to contribute a presentation: “An interdisciplinary conversation: how to integrate ethics and people’s rights into the ASTOUND project”.

https://esdeveniments.uv.es/97884/detail/public-law-and-algorithms-towards-a-legal-framework-for-the-fair-and-non-discriminatory-use-of-ai-w.html

The starting point was that a human rights law approach to algorithmic accountability is crucial. Ethics is relevant, and the rights approach is a complementary and essential framework.

Ethics is currently at the heart of the ASTOUND project. At this stage, the project has reviewed the available multimodal datasets for training and evaluation and has analysed current approaches to dataset curation for bias and toxicity. While designing our chatbot architecture, many discussions around ethical aspects have taken place during the monthly general meetings. In addition, a small ethics group has been organized, and the External Ethical Board has been selected.

The ASTOUND project is based on the assumption that ethics will not create future risks for the project’s success but will help avoid them, as it builds trust. There is also an opportunity to offer guidelines that can benefit future projects facing similar challenges.

The most pressing issue is selecting which risks to avoid. During the first phase of ethical analysis, a thorough list of potential issues was identified:

a) fairness: no discrimination against any group of persons;

b) dignity: no impersonation, making clear at all times that the person is chatting with a machine;

c) autonomy: potential influence or manipulation by the chatbot, such as exercising influence to persuade towards a specific position;

d) explainability/interpretability of chatbot responses;

e) observability, auditability and monitoring: how the system is planned to be audited and monitored in terms of performance, and which performance variables can be collected;

f) safety: controllability – how can the system be controlled in case of performance degradation?;

g) security/privacy: not using personal data without the knowledge of the data subject;

h) transparency; accountability/responsibility assigned (legal and moral) in case of malfunction, errors, inaccuracies or dangerous suggestions;

i) long-term impacts of the technology.

Some of the tools being used are the Assessment List for Trustworthy Artificial Intelligence (ALTAI), to develop procedures to detect, assess the level of, and address potential risks (https://futurium.ec.europa.eu/en/european-aialliance/pages/altai-assessment-list-trustworthy-artificial-intelligence), as well as the Ethics By Design and Ethics of Use Approaches for Artificial Intelligence (https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf).

As Nissenbaum says, “we cannot simply align the world with the values and principles we adhered to prior to the advent of technological challenges. Rather, we must grapple with the new demands that changes wrought by the presence and use of information technology have placed on values and moral principles.”1 We must bring attention to the values that are unconsciously built into technology.

Although the ethical framework is crucial, it has limitations: ethics is not compulsory, and its principles are neither universal nor consensual (hundreds of ethical codes are available). The human rights approach offers a complementary vision, based on common principles such as the universality of human rights, participation, transparency, accountability and non-discrimination. AI governance is needed, and legal instruments such as the EU Artificial Intelligence Act can be very useful. The regulator will make clear what the responsibilities are of each of the actors involved in the life of an AI system. Supervisory bodies will be able to remove from the loop those who seek to make irresponsible use of this technology. The AI Act is based on a system of risk analysis and provides for a series of requirements applicable to high-risk AI systems, in particular for system providers, such as the obligation to draw up an EU declaration of conformity and to affix the CE conformity marking.2 The human rights approach will help adapt the text and overcome its limitations.3

General-purpose artificial intelligence (AI) technologies are now included in the AI Act, so the ASTOUND project will have to analyse carefully the legal implications and the impacts on human rights of future conscious chatbots.

References:

1 C. Allen, W. Wallach and I. Smit, “Why Machine Ethics?,” in IEEE Intelligent Systems, vol. 21, no. 4, pp. 12-17, July-Aug. 2006, doi: 10.1109/MIS.2006.83.

2 Leonardo Cervera Navas, “Por qué hay que abordar la regulación de la inteligencia artificial como la de la aviación comercial” [“Why the regulation of artificial intelligence should be approached like that of commercial aviation”], El País, 25-05-2023.

3 J. Salgado-Criado and C. Fernández-Aller, “A Wide Human-Rights Approach to Artificial Intelligence Regulation in Europe,” in IEEE Technology and Society Magazine, vol. 40, no. 2, pp. 55-65, June 2021, doi: 10.1109/MTS.2021.3056284; V. Prabhakaran, M. Mitchell, T. Gebru and I. Gabriel, “A Human Rights-Based Approach to Responsible AI,” arXiv:2210.02667 [cs.AI].
