Workshop Details

By bringing together researchers and practitioners, the workshop aims to foster collaboration, knowledge exchange, and the development of novel solutions. Through this collective effort, we can advance the understanding and implementation of transparent and interpretable approaches in adaptive and personalized systems, including the integration of Large Language Models (LLMs) to enhance transparency and enable users to comprehend the internal mechanisms guiding these systems.


Adaptive and personalized systems have become pervasive technologies that play an increasingly important role in our daily lives. Indeed, we are now accustomed to interacting with algorithms that leverage the power of Large Language Models (LLMs) to assist us in various scenarios, from services suggesting music or movies to personal assistants proactively supporting us in complex decision-making tasks. As these technologies continue to shape our everyday experiences, it becomes imperative to ensure that the internal mechanisms guiding them are transparent and comprehensible. The EU General Data Protection Regulation (GDPR) recognizes users' right to an explanation when confronted with intelligent systems, highlighting the significance of this aspect. Regrettably, current research often prioritizes the effectiveness of personalization strategies, such as recommendation accuracy, at the expense of model explainability. To address this concern, the workshop aims to provide a platform for in-depth discussions of challenges, problems, and innovative research approaches in the field. The workshop will specifically focus on the role of transparency and explainability in recent methodologies for constructing user models and developing personalized and adaptive systems.



Topics of interest include but are not limited to:


Transparent and Explainable Personalization Strategies

o   Scrutable User Models

o   Transparent User Profiling and Personal Data Extraction

o   Explainable Personalization and Adaptation Methodologies

o   Novel Strategies (e.g., conversational recommender systems) for Building Transparent Algorithms

o   Transparent Personalization and Adaptation to Groups of Users


Transparent Personalization Based on Large Language Models


Designing Explanation Algorithms

o   Explanation Algorithms Based on Item Descriptions and Item Properties

o   Explanation Algorithms Based on User-Generated Content (e.g., reviews)

o   Explanation Algorithms Based on Collaborative Information

o   Building Explanation Algorithms for Opaque Personalization Techniques (e.g., neural networks, matrix factorization, deep learning approaches)

o   Explanation Algorithms Based on Methods to Build Group Models


Designing Transparent and Explainable User Interfaces

o   Transparent User Interfaces

o   Designing Transparent Interaction Methodologies

o   Novel Paradigms (e.g., chatbots, LLMs) for Building Transparent Models


Evaluating Transparency and Explainability

o   Evaluating Transparency in Interaction or Personalization

o   Evaluating Explainability of Algorithms

o   Designing User Studies for Evaluating Transparency and Explainability

o   Novel Metrics and Experimental Protocols


Open Issues in Transparent and Explainable User Models and Personalized Systems

o   Ethical Issues (fairness and biases) in User/Group Models and Personalized Systems

o   Privacy management of personal and social data

o   Discussing Recent Regulations (e.g., GDPR) and Future Directions