HLDM'23

The First Workshop on
Hybrid Human-Machine Learning and Decision Making

ECMLPKDD Workshop

September 22, 2023

Turin, Italy

Overview

Machine learning and decision-making have traditionally been treated as independent research areas. However, with the increasing emphasis on human-centered AI, there has been a growing interest in combining the two. Researchers have explored approaches that aim to complement human decision-making rather than replace it, as well as strategies that leverage machine predictions to improve overall decision-making performance.

Despite these advances, our understanding of this topic is still in its infancy, and there is much to be learned about the interplay between human and machine learning and decision making. To facilitate this exploration, there is a need for interdisciplinary events where researchers from multiple fields can come together to share their perspectives and insights.

The goal of this workshop is to bring together researchers with diverse backgrounds and expertise to explore effective hybrid machine learning and decision making. This will include approaches that explicitly consider the human-in-the-loop and the downstream goals of the human-machine system, as well as decision making strategies and HCI principles that promote rich and diverse interactions between humans and machines. Additionally, cognitive and legal aspects will be considered to identify potential pitfalls and ensure that trustworthy and ethical hybrid decision-making systems are developed.


Key Dates
  • Paper Submission Deadline: 23 June 2023 (extended from 12 June 2023)

  • Paper Author Notification: 12 July 2023

  • Workshop date: 22 September 2023

Program (PoliTo Room 2T)

Workshop opening                                                                                                                                                                                                                                                               

9:00 - 9:10 Welcome, General Overview, Supporting projects presentation: TANGO, TAILOR, XAI, PNRR-FAIR, PNRR-SoBigData.it

Keynote 1                                                                                                                                                                                                                                                              

9:10 - 9:50 Prof. Marco Zenati Title: AI Coach/AI for Decision Making in Surgery (slides available)
Abstract: There are 310 million surgical procedures performed worldwide every year, with almost 50 million complications (16%) and 1.4 million deaths (0.4%). A significant proportion of these adverse events are preventable. The science of surgery is one of the most complex, and one of the least transparent and least well understood. Digital surgery, encompassing the use of AI, robotics, computer vision, cloud-based solutions, etc., aims to bring a new level of scientific rigor and transparency by providing tools that augment the surgeon and team with better perception and judgement.

Keynote 2 (Online)                                                                                                                                                                                                                                                                 

9:50 - 10:30 Prof. Manuel Gomez Rodriguez Title: Improving Decision Making with Machine Learning, Provably (slides available)
Abstract: Decision support systems for classification tasks are predominantly designed to predict the value of the ground truth labels. However, these systems also need to help human experts understand when and how to use these predictions to update their own predictions. Unfortunately, this has proven challenging. In this talk, I will introduce an alternative type of decision support system that circumvents this challenge by design. Rather than providing a single label prediction, these systems provide a set of label prediction values, namely a prediction set, and forcefully ask experts to predict a label value from the prediction set. Moreover, I will discuss how to use conformal prediction, online learning and counterfactual inference to efficiently construct prediction sets that optimize experts’ performance, provably. Further, I will present the results of a large-scale human subject study, which show that, for decision support systems based on prediction sets, limiting experts’ level of agency leads to greater performance than allowing experts to always exercise their own agency.
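The prediction-set idea at the heart of this abstract can be illustrated with a minimal split conformal classification sketch. This is not the speaker's method, just the standard split conformal construction; the softmax scores, variable names, and the toy data below are illustrative assumptions:

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n, k) softmax scores on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, k) softmax scores for new inputs
    Returns one label set per test input; marginally, the set contains
    the true label with probability >= 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Prediction set: every label whose score falls below the threshold.
    return [set(np.where(1.0 - p <= q)[0]) for p in test_probs]

# Toy usage: three classes, synthetic calibration scores.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = cal_probs.argmax(axis=1)  # pretend the model is accurate here
sets = conformal_prediction_sets(cal_probs, cal_labels, cal_probs[:5], alpha=0.1)
```

An expert would then pick a label from each returned set rather than accept a single prediction, which is the interaction mode the talk studies.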

Paper Session 1                                                                                                                                                                                                                                                                 

10:30 - 10:45 Towards synergistic human-AI collaboration in hybrid decision-making systems by Clara Punzi, Mattia Setzu, Roberto Pellungrini, Fosca Giannotti, and Dino Pedreschi. pdf
10:45 - 11:00 On the Challenges and Practices of Reinforcement Learning from Real Human Feedback by Timo Kaufmann, Sarah Ball, Jacob Beck, Frauke Kreuter, and Eyke Hüllermeier. pdf

Coffee break (11:00 - 11:30)                                                                                                                                                                                                                                                                   

Keynote 3                                                                                                                                                                                                                                                                 

11:30 - 12:10 Prof. Sylvie Delacroix Title: Ensemble Contestability: Incentivizing not just individual but collective contestation. (slides available)
Abstract: This talk delves into concrete ways of incentivising the longer-term contestability of augmentation tools deployed in morally loaded contexts (such as justice, health and education). These techniques are designed to highlight the need to go beyond the current focus on contestability mechanisms that are only designed to help individual x at time t.

Paper Session 2                                                                                                                                                                                                                                                                 

12:10 - 12:25 AUC-based Selective Classification by Andrea Pugnana and Salvatore Ruggieri. pdf
12:25 - 12:40 Moral Responsibility in Complex Hybrid Intelligence Systems by David Lyreskog, Hazem Zohny, Edmond Awad, Ilina Singh and Julian Savulescu. pdf
12:40 - 12:55 TCuPGAN: A novel framework developed for optimizing human-machine interactions in citizen science by Ramanakumar Sankar, Kameswara Bharadwaj Mantha, Lucy Fortson, Helen Spiers, Myat Mo, Thomas Pengo, Trace Christensen, Douglas Mashek, Mark Sanders, Jeffrey Salisbury, Martin Jones, Lucy Collinson and Laura Trouille. pdf
12:55 - 13:10 Personalized Algorithmic Recourse with Preference Elicitation by Giovanni De Toni, Paolo Viappiani, Stefano Teso, Bruno Lepri and Andrea Passerini. pdf

Lunch break (13:10 - 14:30)                                                                                                                                                                                                                                                                   

Keynote 4                                                                                                                                                                                                                                                                 

14:30 - 15:10 Prof. Alessandro Bozzon Title: Contestability in AI-powered decision-making systems
Abstract: The General Data Protection Regulation (GDPR) establishes the right of decision subjects to contest decisions taken by automated (AI) systems. Contestability is a _due process_ provision that aims to increase AI systems' transparency by making them open and responsive to human intervention throughout their lifecycle. A contestable system could be perceived as more legitimate. But how does contestability relate to the perception of fairness in automated decision-making? Is contestability a necessary property for the successful uptake of AI-powered solutions? Where could contestability play a role in AI systems' design, implementation, and evolution? In this talk, I will address some of these questions through the results of three recent works developed in the _Knowledge and Intelligence Design_ group of the Delft University of Technology.

Poster session                                                                                                                                                                                                                                                                 

15:10 - 16:00 Towards a hybrid human-machine discovery of complex movement patterns by Natalia Andrienko, Gennady Andrienko, Alexander Artikis, Periklis Mantenoglou and Salvatore Rinzivillo. pdf
Trustworthy Hybrid Decision-Making by Krishna Sri Ipsit Mantri and Nevasini NA Sasikumar. pdf
Optimizing delegation between human and AI collaborative agents by Andrew S Fuchs, Andrea Passarella and Marco Conti. pdf
Rethinking and Recomputing the Value of Machine Learning Models by Burcu Sayin, Jie Yang, Xinyue Chen, Andrea Passerini and Fabio Casati. pdf
Exploring the Risks of General-Purpose AI: The Role of Nearsighted Goals and the Brain's Reward Mechanism in Processes of Decision-Makings by Deivide G.S. Oliveira. pdf
Conversational XAI: Formalizing its Basic Design Principles by Marco Garofalo, Alessia Fantini, Roberto Pellungrini, Giovanni Pilato, Massimo Villari, Fosca Giannotti. pdf
A Crossroads for Hybrid Human-Machine decision-making by Ben Wilson, Matt Roach, Kayal Lakshmanan, Alma Rahat and Alan Dix. pdf
Enhancing Fairness, Justice and Accuracy of hybrid human-AI decisions by shifting epistemological stances by Peter Daish, Matt Roach and Alan Dix. pdf
Interpreting Dynamic Causal Model Policies by John M Agosta, Robert Horton and Maryam Tavakoli. pdf
Learning to Guide Human Experts via Personalized Large Language Models by Debodeep Banerjee, Stefano Teso and Andrea Passerini. pdf
On the Challenges and Practices of Reinforcement Learning from Real Human Feedback by Timo Kaufmann, Sarah Ball, Jacob Beck, Frauke Kreuter, and Eyke Hüllermeier. pdf

Coffee break (16:00 - 16:20)                                                                                                                                                                                                                                                                   

Keynote 5 (Online)                                                                                                                                                                                                                                                                 

16:20 - 17:00 Prof. Ece Kamar Title: In the Pursuit of Responsible AI: Developing AI Systems for People with People
Abstract: Widespread adoption of AI systems in the real world has brought to light concerns around biases hidden in these systems, as well as reliability and safety risks. Addressing these concerns in real-world applications through tools, guidance and processes is of paramount importance. This endeavor also introduces new research directions for our community. In this talk, I'll discuss why taking a human-centric view of the development and deployment of AI systems can help overcome the shortcomings of AI systems and lead to better outcomes in the real world. I'll share several directions of research we are pursuing towards effective human-AI partnership through combining the complementary strengths of human and machine reasoning.

Panel on Challenges in Human-Machine Learning and Decision Making                                                                                                                                                                                                                                                                 

17:00 - 18:00 Andrea Passerini (Moderator), Rosanna Fanni, Luna Bianchi, Sylvie Delacroix, Alessandro Bozzon, Marco Zenati, and Dino Pedreschi.
Invited Speakers and Panelists
Prof. Marco Zenati

Chief of Cardiac Surgery and Director of the Medical Robotics & Computer-Assisted Surgery Lab at Harvard University. He pioneered the introduction of AI technology in surgical operating rooms. The research he leads has a direct impact on the clinical care he delivers on a daily basis.

See Marco's Webpage
Prof. Ece Kamar

Partner Research Area Manager at Microsoft Research and Affiliate Faculty at the University of Washington. She oversees the research area on human-centered AI, where she advances the state of the art in Responsible AI, human-AI collaboration, sensing, signal processing, productivity, future of work and mental well-being.

See Ece's Webpage
Prof. Manuel Gomez Rodriguez

Tenured faculty at the Max Planck Institute for Software Systems in Germany. He is conducting cutting-edge research on improving decision-making through machine learning, and developing large-scale data mining methods for the analysis and modeling of large real-world networks and processes taking place over them.

See Manuel's Webpage
Prof. Sylvie Delacroix

Professor in Law and Ethics at the Birmingham Law School and Fellow of the Alan Turing Institute. She combines research on philosophy, ethics, law and regulation to address fundamental questions on the acceptance and use of technology by individuals and society.

See Sylvie's Webpage
Prof. Alessandro Bozzon

Professor of Human-Centered Artificial Intelligence and Head of the Department of Sustainable Design Engineering at the Delft University of Technology. He is an expert on human-computer interaction, human computation, user modeling, and machine learning.

See Alessandro's Webpage
Luna Bianchi

CEO & co-founder of Immanence. She is also a member of the World Economic Forum Working Group for Metaverse Governance, in the role of Advocacy & Policy Officer.

See Luna's Webpage
Rosanna Fanni

Researcher and Trade and Technology Dialogue Coordinator in the Global Governance, Regulation, Innovation and the Digital Economy (GRID) unit at CEPS. She holds an MA in Digital Communication Leadership.

See Rosanna's Webpage
Prof. Dino Pedreschi

Professor of Computer Science at the University of Pisa, and a pioneering scientist in mobility data mining, social network mining and privacy-preserving data mining. He received a Google Research Award for his research on privacy-preserving data mining.

See Dino's Webpage
Call for Papers

The HLDM 2023 workshop aims to gather a diverse set of researchers addressing the different aspects that characterize effective hybrid decision-making. These range from machine learning approaches that explicitly account for the human-in-the-loop and the downstream goal of the human-machine system, to decision-making strategies and HCI principles encouraging a rich and diverse interaction between the human and the machine, to cognitive aspects pinpointing potential pitfalls, misunderstandings, and sub-optimal behavior, and to legal and regulatory aspects highlighting requirements and constraints that trustworthy and ethical hybrid decision-making systems should satisfy.

We invite submissions on a broad range of topics revolving around hybrid human-machine learning and decision-making.

The goal of the workshop is to foster discussion on the most promising research directions and the most relevant challenges revolving around hybrid human-machine learning and decision making. We thus accept the following types of submissions:

  1. Extended abstracts (4 pages + references) presenting work-in-progress, position papers, or open problems with clear and concise formulations of current challenges. Extended abstracts should be anonymized (double-blind review process) and formatted according to the ECMLPKDD 2023 guidelines. Accepted extended abstracts (no less than four pages overall) will be included in the Workshop proceedings of ECMLPKDD 2023, unless authors request otherwise.

  2. Research papers (14 pages + references) presenting novel original work not published elsewhere. Research papers should be anonymized (double-blind review process) and formatted according to the ECMLPKDD 2023 guidelines. Accepted research papers will be included in the Workshop proceedings of ECMLPKDD 2023 unless the authors explicitly check the opt-out option upon submission. Double submission of research papers is forbidden unless the opt-out option is checked; in the latter case, the submission is considered non-archival.

  3. Resubmission of already accepted papers. The camera-ready version of the paper should be submitted (including author information), enriched with a cover page reporting where the paper has been accepted and why it is of interest to the workshop. These submissions are non-archival.

We encourage all qualified candidates to submit a paper regardless of age, gender, sexual orientation, religion, country of origin, or ethnicity. All accepted papers will be presented as posters and linked to the workshop page. The best contributions will be allocated a 15-minute presentation during the workshop to maximize their visibility and impact. By submitting a paper to the workshop, authors commit that, if the paper is accepted, at least one author will present it at the workshop.

How to submit:

Go here and create a new submission for the “Towards Hybrid Human-Machine Learning and Decision Making (HLDM)” workshop.

Workshop Chairs
Andrea Passerini

Associate Professor at the Department of Information Engineering and Computer Science (DISI) of the University of Trento and Adjunct Professor at Aalborg University. He is director of the Structured Machine Learning Group and coordinator of the Research Program on Deep and Structured Machine Learning, both at DISI. His research interests include structured machine learning, neuro-symbolic integration, explainable and interactive machine learning, preference elicitation and learning with constraints.

See Andrea's Webpage
Fabio Casati

Principal Machine Learning Architect at ServiceNow, as well as technical lead for the AI Trust and Governance group in ServiceNow Research. Fabio focuses on designing, architecting and deploying AI-powered workflows for enterprise customers. He works on AI applied to workflows and on quality in AI. He is also Professor at the University of Trento, working on crowdsourcing and hybrid human-machine computations, focusing on applications that have a direct positive impact on society through tangible artefacts adopted by the community.

See Fabio's Webpage
Burcu Sayin

Postdoctoral Researcher at the Department of Information Engineering and Computer Science (DISI) of the University of Trento. Her research interests include hybrid intelligence, trustworthy AI, cost-sensitive machine learning, and active learning. Specifically, she works on cooperative human-machine intelligence.

See Burcu's Webpage
Anna Monreale

Associate Professor at the Department of Computer Science of the University of Pisa and Adjunct Professor at the Faculty of Computer Science of Dalhousie University. She is vice-coordinator of the National PhD in Artificial Intelligence for Society at the University of Pisa. Her research interests include Big Data Analytics, Artificial Intelligence, Privacy-by-Design in big data and AI, and Explainable AI.

See Anna's Webpage
Roberto Pellungrini

Assistant Professor at Scuola Normale Superiore, Classe di Scienze. His research interests include Big Data Analytics, Data Privacy, and Explainable AI; he is currently working on hybrid decision-making algorithms.

See Roberto's Webpage
Paula Gürtler

Research Assistant in the Global Governance, Regulation, Innovation and Digital Economy (GRID) unit at CEPS. She has a background and research interest in Applied Ethics. In particular, she works on the Ethics of AI and the challenges of regulating it effectively.

See Paula's Webpage
Program Committee
Charalampos Z. Patrikakis

University of West Attica

Agathe Balayn

Delft University of Technology

Yong Wang

Singapore Management University

Addison Lin Wang

Hong Kong University of Science and Technology

Stefano Teso

University of Trento

Soumya Banerjee

University of Cambridge

Roberto Dessi

Meta AI / Universitat Pompeu Fabra

Artur Bogucki

Centre for European Policy Studies

Giovanni De Toni

Fondazione Bruno Kessler, University of Trento

Carlo Metta

ISTI CNR

Andrea Bontempelli

University of Trento

Salvatore Rinzivillo

KDDLab - ISTI - CNR

Mattia Setzu

University of Pisa

Wolfgang Stammer

Technical University of Darmstadt

Matt Roach

Swansea University

Daniele Regoli

Intesa Sanpaolo

Tommaso Turchi

University of Pisa

Gaole He

Delft University of Technology

Gizem Gezici

Scuola Normale Superiore

Contact

For any information please contact hldm-2023@unitn.it

Acknowledgments