Invited Symposium: Exploring Trustworthy Artificial Intelligence

Exploring Trustworthy AI. Invited Symposium at the 19th Annual Conference of the Italian Association of Cognitive Sciences (AISC2023).

 

Friday, 15 December 2023. 

14.00 - 16.00. 

Via Balbi 4, Genova 

AULA G


Technologies based on Artificial Intelligence (AI) are now ubiquitous in real-life applications, and their pervasiveness continues to increase with the growing industrialisation of AI systems. AI-based technologies have achieved remarkable success in numerous domains, including natural language processing, translation, image and video analysis, large-scale data collection and analysis, as well as recommender and decision systems. However, alongside these remarkable achievements, concerns have arisen regarding several aspects of the AI development pipeline, such as the potential opacity of algorithmic operations, the handling of sensitive data, and the lack of transparency in automated decision-making processes.
To address the challenges posed by AI-based systems, the endeavour to understand and develop Trustworthy AI (TAI) has become an urgent priority in both research and legislation. In a nutshell, TAI refers to the development and deployment of AI systems that prioritize ethical, responsible, interpretable, and transparent practices.
Descriptions of TAI often invoke delicate concepts such as fairness, interpretability, transparency, reliability, and accountability. These concepts need clarification in order to foster consensus among stakeholders, a crucial step in justifying the TAI project.
The goal of this symposium is to convene AI experts for a discussion on the foundational aspects of the TAI project, with a particular focus on how the main concepts of TAI are conceptualised and integrated into AI systems. 

 

Programme

 


 

14.00-14.05: Welcome by Daniele Porello and Marcello Frixione.

14.05-14.25: Roberta Calegari. Breaking the bias: Trustworthy AI towards an inclusive and equitable AI ecosystem.

Abstract

In a world inherently shaped by biases, the inevitability of human bias persists. However, the same biases need not be mirrored in AI systems. This talk delves into the imperative to take multifaceted action, ranging from fostering awareness and education to endorsing diversity across various dimensions such as gender, age, and religion. By re-evaluating the design and implementation of AI, we have the potential to exploit its power for the good of a global society. Through an examination of some examples, this discussion aims to illustrate the path forward, exploring concrete steps to ensure the development of an inclusive and equitable AI ecosystem.


14.25-14.45: Luca Oneto. Artificial Intelligence: Lights and Shadows of a Shortcut.

Abstract

Artificial Intelligence (AI) has emerged as a transformative force in various domains, promising remarkable advancements and efficiencies. However, as AI technologies continue to proliferate, it becomes imperative to explore both the bright prospects and the potential pitfalls associated with this powerful tool. This talk delves into the dual nature of AI, shedding light on its luminous accomplishments while also illuminating the shadows of ethical, social, and technical challenges that lurk beneath the surface. We navigate through the captivating achievements of AI, from its applications in various domains to its role in reshaping industries and enhancing human creativity. Simultaneously, we delve into the ethical dilemmas surrounding AI, including concerns related to bias, privacy, job displacement, and the potential for misuse.


14.45-15.05: Giuseppe Primiero. The role of verification in Trustworthy AI.

Abstract

While a variety of tools are being developed to make ML systems more transparent, logical methods are returning to play an increasingly important role: by their nature, they may help build and verify transparent models of computation. A major aim is therefore to develop formal methods that support the verification of these systems’ trustworthiness and fairness. In this lecture, I will present some recent work in this direction. I will give an overview of the methodology and sketch some principles of the logics developed within the BRIO (Bias, Risk and Opacity in AI) Project (sites.unimi.it/brio) to reason within and about programs with probabilistic outputs obtained under possibly opaque distributions. Under this approach, trustworthiness can be formally verified as an admissible distance from the behavior of a fair and transparent counterpart.


15.05-15.25: Armando Tacchella. Computer-Aided Verification of Neural Networks.

Abstract

The purpose of this talk is to give a quick overview of the current state of the art in the verification of neural networks by means of computer programs. Given the “black-box” nature of neural models, visual inspection to ascertain their correctness is infeasible. Computer-aided verification holds the promise of confirming or disconfirming that desired properties hold, beyond the reach of a human designer. Unfortunately, given the size and complexity of the neural models useful for practical applications, computer-aided verification also struggles to obtain significant results.

 

15.25-15.40: Discussion

Speakers

Roberta Calegari (University of Bologna)
https://apice.unibo.it/xwiki/bin/view/RobertaCalegari/WebHome

Roberta Calegari is an assistant professor at the Department of Computer Science and Engineering and at the Alma Mater Research Institute for Human-Centered Artificial Intelligence at the University of Bologna. Her research concerns trustworthy and explainable systems, distributed intelligent systems, software engineering, multi-paradigm languages, and AI & law. She coordinates a Horizon Europe project (G.A. 101070363) on the assessment and engineering of equitable, unbiased, impartial, and trustworthy AI systems, which aims to provide an experimentation playground for assessing and repairing bias in AI. She is also part of the EU Horizon 2020 project “PrePAI” (G.A. 101083674), working on the definition of requirements and mechanisms ensuring that all resources published on the future AIonDemand platform can be labelled as trustworthy and compliant with the future AI regulatory framework. Her research interests lie within the broad area of knowledge representation and reasoning for trustworthy and explainable AI, with a particular focus on symbolic AI, including computational logic, logic programming, argumentation, logic-based multi-agent systems, and non-monotonic/defeasible reasoning. She is a member of the Editorial Board of ACM Computing Surveys for the area of Artificial Intelligence and the author of more than 80 papers in peer-reviewed international conferences and journals. She leads several European, Italian, and regional projects and is responsible for collaborations with industry.


Luca Oneto (University of Genoa)
https://www.lucaoneto.com/

Luca Oneto is an Associate Professor at DIBRIS (Dipartimento di Informatica, Bioingegneria, Robotica, e Ingegneria dei Sistemi) of the University of Genoa. His first main research topic is Statistical Learning Theory, with particular focus on the theoretical aspects of Trustworthy AI, Model Selection, and Error Estimation. His second main research topic is Data Science, with particular reference to solving real-world problems by exploiting and improving the most recent learning algorithms and theoretical results in the fields of Machine Learning and Artificial Intelligence.

 

Giuseppe Primiero (University of Milan)
https://sites.unimi.it/gprimiero/

Giuseppe Primiero is an Associate Professor in the Logic, Uncertainty, Computation and Information Group (LUCI) in the Department of Philosophy of the University of Milan. His research focuses on formal models of dynamic rationality for intelligent mechanical and natural information systems, the semantics and proof theory of computational systems, and the foundations of computing.


Armando Tacchella (University of Genoa)
https://rubrica.unige.it/personale/UkNHW1tq

Armando Tacchella is a Full Professor at DIBRIS (Dipartimento di Informatica, Bioingegneria, Robotica, e Ingegneria dei Sistemi) of the University of Genoa. His research interests lie mainly in Automated Reasoning, Modeling and Verification of Cyber-Physical Systems (CPSs), and Machine Learning. He likes to solve problems at the crossroads of Automated Reasoning and Machine Learning, with applications to CPSs for the monitoring, control, and diagnosis of complex automation systems, including, but not limited to, robots and factory and home automation.

Organisation

Daniele Porello (University of Genoa)
Marcello Frixione (University of Genoa)

 

 

Contact: Daniele Porello -- daniele.porello@unige.it
 

Last update 7 December 2023