2025 starts with good news! We have been awarded an internal innovation project to educate engineering students in the field of AI Ethics using hands-on practical use cases.
CAMELIA (CAsos prácticos sobre el Marco Ético y Legal de la Inteligencia Artificial, which translates to Practical Cases on the Ethical and Legal Framework for Artificial Intelligence) will leverage challenge-based learning to create material for Bachelor's and Master's students. This initiative aims to help students develop a critical and responsible perspective on the use of AI in diverse contexts.
The materials will include practical use cases with context, datasets, code, and research questions, enabling students to build and use AI/ML models with an ethical and human-centric approach. Students will engage with real moral dilemmas that arise during AI system design, including issues related to privacy and personal data protection, bias, interpretability and algorithmic transparency, fairness, and the value of personal data.
This material will be incorporated into seven subjects and four study programs across four centers within UPM. It will be a pleasure working with five colleagues on this exciting topic. Not only will this initiative generate valuable teaching resources, but it will also strengthen multidisciplinary collaboration between departments at UPM and, we hope, inspire new research in the field and spark future joint initiatives. As a first step, we are creating a webpage and a public repository for the project.
Image generated by DALL·E based on the description of the CAMELIA project