HLDM'24

The Second Workshop on
Hybrid Human-Machine Learning and Decision Making

ECMLPKDD Workshop

September 9, 2024

Vilnius, Lithuania

Overview

Machine learning and decision-making have traditionally been treated as independent research areas. However, with the increasing emphasis on human-centered AI, there has been growing interest in understanding how these two fields interact and can be jointly addressed to propose novel technological solutions that put humans at their center. Indeed, scholars have recently started exploring how to enable the synergistic cooperation of humans and machines by conceiving hybrid approaches that aim to complement human decision-making rather than replace it, as well as strategies that leverage machine predictions to improve overall decision-making performance.

Despite these advances, we believe that our understanding of this topic is still in its infancy and that there is much to be learned about the interplay between human and artificial intelligence. To facilitate this exploration, there is a need for interdisciplinary events where researchers from multiple fields can come together to share their workflows, perspectives and insights.

The goal of our workshop is to bring together researchers with diverse backgrounds and expertise to explore effective hybrid machine learning and decision making. This will include approaches that explicitly consider the human-in-the-loop and the downstream goals of the human-machine system, as well as decision making strategies and HCI principles that promote rich and diverse interactions between humans and machines. Additionally, cognitive and legal aspects will be considered to identify potential pitfalls and ensure that trustworthy and ethical hybrid decision-making systems are developed.


Key Dates
  • Paper Submission Deadline: 22 June 2024 (extended from 15 June 2024)

  • Paper Author Notification: 15 July 2024

  • Workshop date: 9 September 2024

Program

Workshop opening (09:00 - 09:10)

Keynote 1 (09:10 - 09:50)

Prof. Catholijn M. Jonker Title: Hybrid Intelligence in Negotiation Processes
Abstract: Hybrid Intelligence is the augmentation of human intelligence through collaboration with artificial intelligence. I will show how this paradigm can be used to improve the effectiveness of complex negotiations. The proposed combination of types of intelligence is that of human intelligence, symbolic machine learning, and large language models.

Paper Session 1 (09:50 - 10:50)

09:50 - 10:05 A Causal Framework for Evaluating Deferring Systems by Filippo Palomba, Andrea Pugnana, Jose M Alvarez and Salvatore Ruggieri (pdf)
10:05 - 10:20 Scenario-based Automatic Testing of a Machine Learning Solution with the Human in the Loop by Maxence Demougeot, Sylvie Trouilhet, Jean-Paul Arcangeli and Françoise Adreit (pdf)
10:20 - 10:35 Discussion
10:35 - 10:50 Interpretable and Efficient Counterfactual Generation with Disentangled Variational Autoencoders by Cesare Barbera and Andrea Passerini (pdf)

Coffee break (10:50 - 11:20)

Keynote 2 (11:20 - 12:00)

Prof. Nick Chater Title: Interactive explainability: Black boxes, mutual understanding and what it would really mean for AI systems to be as explainable as people
Abstract: Black box AI systems, often based on deep neural networks (DNNs), are being developed with astonishing speed. But in critical real-world contexts, such as providing legal, financial or medical advice or assistance, their deployment faces formidable practical and legal barriers. Users, and especially regulators, will demand explainability: that AI systems can provide justifications for their statements, recommendations, or actions. Typically, both regulators and AI researchers have adopted an internal view of explainability: the emerging field of X-AI aims to ‘open the black box’, designing systems whose workings are transparent to human understanding. We argue that, for DNNs and related methods, this vision is both unachievable and misconceived. Instead, we note that AI need only be as explainable as humans, and the human brain is itself effectively a black box in which a tangle of 10^11 neurons connected by 10^14 synapses carry out almost unknown computations. We propose a very different notion, InterActive Explainability (IAE): the ability of a system, whether human or AI, to coherently justify and defend its statements and actions to a human questioner. IAE requires local, contextually-specific responses built on mutual understanding between an AI system and the questioner (e.g., of commonly held background beliefs and assumptions). We outline what mutual understanding involves, and why current AI systems seem far from achieving such understanding. We propose that IAE should be a key criterion for regulators, and a central objective for AI research.

Paper Session 2 (12:00 - 12:45)

12:00 - 12:15 Can LLMs Correct Physicians, Yet? Investigating Effective Interaction Methods in the Medical Domain by Burcu Sayin, Pasquale Minervini, Jacopo Staiano and Andrea Passerini (pdf)
12:15 - 12:30 Developing Human-centric Machine Learning Models for Temporal Data by Bahavathy Kathirgamanathan, Eleonora Cappuccio, Salvatore Rinzivillo, Gennady Andrienko and Natalia Andrienko (pdf)
12:30 - 12:45 Improving Bias Correction Standards by Quantifying its Effects on Treatment Outcomes by Alexandre Abraham and Andrés Hoyos Idrobo (pdf)

Lunch break (12:45 - 14:00)

Keynote 3 (14:00 - 14:40)

Prof. Nuria Oliver Title: On humans, biases and algorithms

Poster Spotlights (14:40 - 14:50)

Disagreement-based Active Learning for Robustness Against Subpopulation Shifts by Yeat Jeng Ng, Viktoriia Sharmanska, Thomas Kehrenberg, Anastasia Pentina and Novi Quadrianto (pdf)
Human heuristic based Drop-out Mechanism for Active Learning by Sriram Ravichandran, Nandan Sudarsanam and Konstantinos Katsikopoulos (pdf)

Poster Session (14:50 - 15:50)

All contributions

Coffee break (15:50 - 16:20)

Keynote 4 (16:20 - 17:00)

Dr. Stephan Alaniz Title: Explainability in the Era of Multimodal Large Language Models
Abstract: Large Language Models (LLMs) have emerged as versatile tools with a rapidly expanding range of applications, particularly when augmented by multimodal extensions that enable reasoning about visual data. This development naturally raises the question of whether LLMs can provide effective explanations for computer vision tasks. In this talk, we will present our research on leveraging LLMs to generate natural language explanations for computer vision tasks and discuss strategies for aligning the language of LLMs with the specific needs and preferences of individual communication partners, which plays an important role when deploying LLMs to interact with a diverse population of humans.

Final Discussion (17:00 - 17:30)

Invited Speakers
Prof. Nick Chater

Full Professor of Behavioural Science at the Warwick Business School, co-founder of Decision Technology Ltd and author of The Mind is Flat. His research focuses on the cognitive and social foundations of rationality, with applications to business and public policy.

See Nick's Webpage
Prof. Nuria Oliver

Co-founder and vice-president of ELLIS, Chief Data Scientist at Data-Pop Alliance and Chief Scientific Advisor at the Vodafone Institute. Her research work focuses on the computational modelling of human behaviour using Artificial Intelligence techniques, human-computer interaction, mobile computing and Big Data analysis.

See Nuria's Webpage
Dr. Stephan Alaniz

Post-doctoral researcher in the Explainable Machine Learning group at Helmholtz Munich, led by Prof. Zeynep Akata. His research focuses on cutting-edge advancements in explainable AI and multimodal learning, particularly at the intersection of vision and language.

See Stephan's Webpage
Prof. Catholijn M. Jonker

Full Professor of Interactive Intelligence at the Delft University of Technology. She is an expert on negotiation, teamwork, and the dynamics of individual agents and organizations.

See Catholijn's Webpage
Call for Papers

Following the success of the first edition, the HLDM 2024 workshop aims to gather a diverse set of researchers addressing the different aspects that characterize effective hybrid decision making. These range from machine learning approaches that explicitly account for the human-in-the-loop and the downstream goal of the human-machine system, to decision-making strategies and HCI principles encouraging a rich and diverse interaction between the human and the machine, to cognitive aspects pinpointing potential pitfalls, misunderstandings and sub-optimal behaviour, to legal and regulatory aspects highlighting requirements and constraints that trustworthy and ethical hybrid decision-making systems should satisfy. The workshop will feature invited talks, a poster session, presentations of the best contributions and a final discussion.

We invite submissions on a broad range of topics revolving around hybrid human-machine learning and decision making. The goal of the workshop is to foster discussion on the most promising research directions and the most relevant challenges in this area. We thus accept the following types of submissions:

  1. Short papers (6 pages + references) presenting work-in-progress, position papers or open problems with clear and concise formulations of current challenges. Short papers should be anonymized (double-blind review process) and formatted according to the ECMLPKDD 2024 guidelines (see here). Accepted short papers will be included in the Springer Workshop proceedings of ECMLPKDD 2024.

  2. Regular papers (14 pages + references) presenting novel original work not published elsewhere. Regular papers should be anonymized (double-blind review process) and formatted according to the ECMLPKDD 2024 guidelines (see here). Accepted regular papers will be included in the Springer Workshop proceedings of ECMLPKDD 2024. Double-submission of research papers is forbidden.

  3. Non-archival submissions presenting relevant work recently accepted or currently under submission/review at other venues. The original work should be submitted in free format, together with a cover page explaining why the manuscript is of interest to the workshop. These submissions will not be included in the Springer Workshop proceedings. Non-archival submissions do not require anonymization, unless the authors choose to anonymize because the paper is currently under review at another venue.

We encourage all qualified candidates to submit a paper regardless of age, gender, sexual orientation, religion, country of origin, or ethnicity. All accepted papers will be presented as posters and linked to the workshop page. By submitting a paper to the workshop, the authors agree that, if the paper is accepted, at least one author will present it at the workshop. The best contributions will be allocated a 15-minute presentation during the workshop to maximize their visibility and impact.

Key Dates:

  • Paper Submission Deadline: 22 June 2024

  • Paper Author Notification: 15 July 2024

  • Workshop date: 9 September 2024

How to submit:

Go here and create a new submission for the “HLDM: Towards Hybrid Human-Machine Learning and Decision Making” track.

Workshop Chairs
Andrea Passerini

Associate Professor at the Department of Information Engineering and Computer Science (DISI) of the University of Trento and Adjunct Professor at Aalborg University. He is director of the Structured Machine Learning Group and coordinator of the Research Program on Deep and Structured Machine Learning, both at DISI. His research interests include structured machine learning, neuro-symbolic integration, explainable and interactive machine learning, preference elicitation and learning with constraints. He co-authored over 140 refereed papers, and he regularly publishes at top ML and AI conferences and journals like NeurIPS, ICLR, ECMLPKDD, IJCAI, AAAI, MLJ, AIJ and DAMI. He co-organized ECMLPKDD in 2016, AIxIA in 2018, PAIS in 2022 and several workshops and tutorials at top machine learning and AI conferences.

See Andrea's Webpage
Burcu Sayin

Postdoctoral Researcher at the Department of Information Engineering and Computer Science (DISI) of the University of Trento. Her research interests include hybrid human-machine intelligence, natural language processing, trustworthy AI, cost-sensitive machine learning, and active learning. She serves as a reviewer for top ML and AI conferences like ICML, AAAI, ACL, ECAI, and The WebConf. She has held organizational roles in international conferences and workshops, such as HCOMP 2023, CI 2023, and ECMLPKDD 2023, and co-organized the first edition of the HLDM workshop.

See Burcu's Webpage
Anna Monreale

Associate Professor at the Department of Computer Science of the University of Pisa and Adjunct Professor at the Faculty of Computer Science of Dalhousie University. She is vice-coordinator of the National PhD in Artificial Intelligence for Society at the University of Pisa. Her research interests include Big Data Analytics, Artificial Intelligence, Privacy-by-Design in big data and AI, and Explainable AI. She co-authored over 140 refereed papers published at top ML and AI conferences and journals like ECMLPKDD, SIGKDD, AAAI, DAMI, Artificial Intelligence, and Intelligent Systems. She co-organized several workshops and tutorials at top machine learning and AI conferences.

See Anna's Webpage
Giovanna Varni

Associate Professor at the University of Trento, where she is with the Department of Information Engineering and Computer Science (DISI). Previously, she was an Associate Professor at LTCI, Télécom Paris, Institut polytechnique de Paris, France. Her activities mainly cover Social Signal Processing (SSP), Affective Computing (AC), and Human-Computer Interaction (HCI). She was involved in several EU FP6 and FP7 projects and was PI of the French national project ANR JCJC GRACE (2019-2022). She regularly contributes to organizational roles in international conferences and workshops, for which she also serves as a Program Committee member.

See Giovanna's Webpage
Novi Quadrianto

Professor of Machine Learning at the University of Sussex, UK. He is also an Adjunct Professor (Data Science) at Monash University Indonesia, leads a BCAM Severo Ochoa Strategic Lab on Trustworthy Machine Learning in Spain, is a scholar in the ELLIS Human-centric Machine Learning programme, and is the recipient of two ERC grants. His research lies in the area of machine learning, with an emphasis on algorithmic fairness, transparency and robustness. He regularly contributes to organizational roles in top machine learning and AI conferences.

See Novi's Webpage
Artur Bogucki

Part of the Global Governance, Regulation and Innovation Unit at the Centre for European Policy Studies (CEPS) in Brussels, where he specializes in digital and technology law. He also holds the position of Assistant Professor in Law & Economics at the Warsaw School of Economics (SGH) and serves as a Lecturer in the European Master in Law and Economics program. Additionally, he coordinates the CIVICA Europe Revisited initiative at SGH. As a member of the SGH AI Lab and the Economic Theory department, Artur primarily focuses his research on behavioral regulation theory as it applies to AI and the digital economy, particularly emphasizing trustworthy AI and data governance. He has been involved in several Horizon innovation projects in the field of AI.

See Artur's Webpage
Contact

For any information please contact hldm-workshop@unitn.it