Program

Tentative Program

Thursday, 4th July

    Opening (09.00 - 09.10 CEST)
    Session Chair: Cataldo Musto

    Invited Talk (09.10 - 09.45 CEST)
    Session Chair: Cataldo Musto
    •    09.10 - 09.45     Prof. Cristina Conati - Towards Personalized Explainable AI

    Session 1 (09.45 - 10.33 CEST)
    Session Chair: Cataldo Musto
    •    09.45 - 09.57    Jovan Jeromela and Owen Conlan. Devising Scrutable User Models for Time Management Assistants
    •    09.57 - 10.09    Gaetano Dibenedetto, Marco Polignano, Pasquale Lops and Giovanni Semeraro. Human Pose Estimation for Explainable Corrective Feedbacks in Office Spaces
    •    10.09 - 10.21    Yuan Ma and Jürgen Ziegler. The Effect of Proactive Cues on the Use of Decision Aids in Conversational Recommender Systems
    •    10.21 - 10.33    Felix Scholz, Thomas Elmar Kolb and Julia Neidhardt. Classifying User Roles in Online News Forums: A Model for User Interaction and Behavior Analysis

    Break (10.35 - 11.00 CEST)

    Invited Talk (11.00 - 11.35 CEST)
    Session Chair: Marco Polignano
    •    11.00 - 11.35     Prof. Katrien Verbert - Human-centered Explainable AI

    Session 2 (11.35 - 12.23 CEST)
    Session Chair: Marco Polignano
    •    11.35 - 11.47    Ibrahim Al Hazwani, Tiantian Luo, Oana Inel, Francesco Ricci, Mennatallah El-Assady and Jürgen Bernard. ScrollyPOI: A Narrative-Driven Interactive Recommender System for Points-of-Interest Exploration and Explainability
    •    11.47 - 11.59    Rully Agus Hendrawan, Peter Brusilovsky, Arun Balajiee Lekshmi Narayanan and Jordan Barria-Pineda. Explanations in Open User Models for Personalized Information Exploration
    •    11.59 - 12.11    Alain D. Starke, Anders Sandvik Bremnes, Erik Knudsen, Damian Trilling and Christoph Trattner. Perception versus Reality: Evaluating User Awareness of Political Selective Exposure in News Recommender Systems
    •    12.11 - 12.23    Sebastian Lubos, Thi Ngoc Trang Tran, Alexander Felfernig and Seda Polat Erdeniz. LLM-generated Explanations for Recommender Systems

    Closing (12.23 - 12.30 CEST)
    Session Chair: Marco Polignano


    Invited Talks

    Towards Personalized Explainable AI

    Prof. Cristina Conati, Dept. of Computer Science, University of British Columbia, Vancouver, Canada


    Abstract
    The AI community is increasingly interested in investigating explainability to foster user acceptance and trust in AI systems. However, there is still limited understanding of the actual relationship between AI explainability, acceptance and trust, and which factors might impact this relationship. I argue that one such factor relates to user individual differences, including long-term traits (e.g., cognitive abilities, personality, preferences) and short-term states (e.g., cognitive load, confusion, emotions). Namely, given a specific AI application, different types and forms of explanations may work best for different users and even for the same user at different times, depending to some extent on their long-term traits and short-term states. As such, our long-term goal is to develop personalized XAI tools that adapt dynamically to the relevant user factors. In this talk, I will focus on research investigating the relevance of long-term traits in XAI personalization, with results showing the value of personalized explanations of AI-driven decisions in the context of Intelligent Tutoring Systems.

    Bio
    Dr. Conati is a Professor of Computer Science at the University of British Columbia, Vancouver, Canada. She received an M.Sc. in Computer Science from the University of Milan, as well as an M.Sc. and a Ph.D. in Intelligent Systems from the University of Pittsburgh. Cristina has been researching human-centered and AI-driven personalization for over 25 years, with contributions in the areas of Intelligent Tutoring Systems, User Modeling, Affective Computing, Information Visualization and Explainable AI. Cristina's research has received 10 Best Paper Awards from a variety of venues, as well as the Test of Time Award 2022 from the Educational Data Mining Society. She is a Fellow of AAAI (Association for the Advancement of Artificial Intelligence) and of AAIA (Asia-Pacific Artificial Intelligence Association), an ACM Distinguished Member, and an associate editor for UMUAI (Journal of User Modeling and User-Adapted Interaction), ACM Transactions on Interactive Intelligent Systems, and the Journal of Artificial Intelligence in Education. She served as President of AAAC (Association for the Advancement of Affective Computing), as well as Program or Conference Chair for several international conferences, including UMAP, IUI, and AI in Education.

    Human-centered Explainable AI

    Prof. Katrien Verbert, Dept. of Computer Science, KU Leuven, Belgium


    Abstract
    Despite the long history of work on explanations in the Machine Learning, AI and Recommender Systems literature, current efforts face unprecedented difficulties: contemporary models are more complex and less interpretable than ever. As such models are used in many day-to-day applications, justifying their decisions for non-expert users with little or no technical knowledge will only become more crucial. Although several explanation methods have been proposed, little work has been done to evaluate whether they indeed enhance human interpretability. Many existing methods also require significant expertise and are static. Several researchers have voiced the need for interaction with explanations as a core requirement to support understanding. In this talk, I will present our work on explanation methods tailored to the needs of non-expert users of AI. In addition, I will present the results of several user studies that investigate how such explanations interact with personal characteristics such as expertise and need for cognition.

    Bio
    Katrien Verbert is a professor in the Augment research group at KU Leuven. She obtained a doctoral degree in Computer Science in 2008 from KU Leuven, Belgium. She was a postdoctoral researcher of the Research Foundation – Flanders (FWO) at KU Leuven, and an Assistant Professor at TU Eindhoven, the Netherlands (2013–2014) and at Vrije Universiteit Brussel, Belgium (2014–2015). Her research interests include visualisation techniques, recommender systems, explainable AI, and visual analytics. She has been involved in several European and Flemish projects on these topics, including the EU ROLE, STELLAR, STELA, ABLE, LALA, PERSFO, Smart Tags and BigDataGrapes projects. She is also involved in the organisation of several conferences and workshops (program chair ACM RecSys 2024, general chair IUI 2021, program chair LAK 2020, general chair EC-TEL 2017, program chair EC-TEL 2016, workshop chair EDM 2015, program chair LAK 2013, program co-chair of the EdRecSys, VISLA and XLA workshop series, DC chair IUI 2017, and DC chair LAK 2019).