Regarding its relevance to UMAP, this topic opens several important research lines: building scrutable user models and transparent algorithms, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to give users more control over personalization and adaptation. Key research questions include:

1. How can we build transparent user models? Can we design transparent data extraction strategies?
2. Can we propose new strategies to develop explainable models?
3. What is the role of novel algorithms in light of transparent and explainable personalization pipelines?
4. Can we introduce explanation strategies in opaque models such as neural networks?
5. What kinds of novel metrics can go beyond accuracy?
6. Can we devise novel personalization paradigms (e.g., chatbots, conversational recommender systems, LLMs) that enable more transparent interaction?
7. What is the role of end-users in personalization and adaptation algorithms?
Our interactions with adaptive and personalized systems have become ubiquitous, leveraging personal data to enhance various aspects of our lives. These systems have seamlessly integrated into our daily routines, acting as personal assistants that proactively aid us in complex decision-making tasks, such as recommending music or movies that align with our preferences. Regrettably, most of these systems rely on black-box models, whose internal mechanisms remain opaque to end-users. While users appreciate the personalized suggestions and decision-making support, they are often unaware of the underlying rationale that guides the adaptation and personalization algorithms. Furthermore, the evaluation metrics commonly employed to assess algorithm effectiveness tend to favor opaque methodologies. This issue is particularly significant in light of the EU General Data Protection Regulation (GDPR), which underscores the necessity of, and the right to, transparent and interpretable methodologies. The GDPR emphasizes users' right to understand the information these systems hold about them and the internal behavior of the personalization algorithms. This pivotal need engenders several crucial research directions, including the construction of scrutable user models and transparent algorithms, analysis of the impact of opaque algorithms on end-users, exploration of explanation strategies, and investigation into empowering users with greater control in personalization and adaptation processes. In response to these challenges, the ExUM workshop seeks to provide a platform for in-depth discussion of these problems and challenges and of innovative research approaches to them. By bringing together researchers and practitioners, the workshop aims to foster collaboration, knowledge exchange, and the development of novel solutions.
Through this collective effort, we can advance the understanding and implementation of transparent and interpretable approaches in adaptive and personalized systems, including the integration of Large Language Models (LLMs) to enhance transparency and enable users to comprehend the internal mechanisms guiding these systems.