Motivation

Background and Topic Relevance

Our interactions with adaptive and personalized systems have become ubiquitous, as these systems leverage personal data to enhance various aspects of our lives. They have seamlessly integrated into our daily routines, acting as personal assistants that proactively support complex decision-making tasks, such as recommending music or movies that align with our preferences. Regrettably, most of these systems rely on black-box models whose internal mechanisms remain opaque to end-users. While users appreciate the personalized suggestions and decision-making support, they are often unaware of the rationale that guides the adaptation and personalization algorithms. Furthermore, the evaluation metrics commonly employed to assess algorithm effectiveness tend to favor opaque methodologies.

This issue is particularly significant in light of the EU General Data Protection Regulation (GDPR), which underscores the necessity of, and the right to, transparent and interpretable methodologies. The GDPR emphasizes users' need to understand the information these systems hold about them and the internal behavior of the personalization algorithms.

This pivotal need gives rise to several crucial research directions, including the construction of scrutable user models and transparent algorithms, the analysis of the impact of opaque algorithms on end-users, the exploration of explanation strategies, and the investigation of how to empower users with greater control over personalization and adaptation processes. In response to these challenges, the ExUM workshop seeks to provide a platform for in-depth discussion of the problems, challenges, and innovative research approaches in this area. By bringing together researchers and practitioners, the workshop aims to foster collaboration, knowledge exchange, and the development of novel solutions. Through this collective effort, we can advance the understanding and implementation of transparent and interpretable approaches in adaptive and personalized systems, including the integration of Large Language Models (LLMs) to enhance transparency and enable users to comprehend the internal mechanisms guiding these systems.