Regarding its relevance to UMAP, the question of how to reconcile effective personalization with transparency and explainability triggers several important research lines: building scrutable user models and transparent algorithms, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to give users more control over personalization and adaptation. In particular:

1. How can we build transparent user models? Can we design transparent data extraction strategies?
2. Can we propose new recommendation and personalization strategies that account for transparency and explainability?
3. What is the role of explanation algorithms in light of transparent and explainable personalization pipelines?
4. Can we introduce explanation strategies into opaque models such as neural networks and matrix factorization techniques? (A minimal sketch of one such strategy follows this list.)
5. What kind of novel metrics can go beyond accuracy and reward more transparent and explainable recommendations?
6. Can we devise novel personalization paradigms (e.g., chatbots, conversational recommender systems) that enable a more transparent interaction?
7. What is the role of end users in personalization and adaptation algorithms?
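To ground question 4, one common post-hoc strategy attaches a neighbor-style explanation to an otherwise opaque matrix-factorization recommender: the suggestion itself is produced by the latent factors, while the explanation points to the items in the user's profile that lie closest to the recommended item in the latent space ("recommended because you liked ..."). The Python sketch below is purely illustrative; the function and variable names (explain_recommendation, item_factors, etc.) are our own assumptions, not a reference implementation.

```python
import numpy as np

def explain_recommendation(rec_item, user_history, item_factors, item_titles, k=3):
    """Neighbor-style post-hoc explanation for a matrix-factorization model:
    justify `rec_item` via the k items in `user_history` whose latent vectors
    are most similar (cosine) to the recommended item's vector."""
    rec_vec = item_factors[rec_item]
    scored = []
    for item in user_history:
        v = item_factors[item]
        cos = v @ rec_vec / (np.linalg.norm(v) * np.linalg.norm(rec_vec) + 1e-12)
        scored.append((cos, item))
    top = sorted(scored, reverse=True)[:k]
    liked = ", ".join(item_titles[i] for _, i in top)
    return f"We recommended '{item_titles[rec_item]}' because you liked: {liked}"

# Toy usage: four items embedded in a 2-dimensional latent space.
item_factors = np.array([[1.0, 0.1], [0.9, 0.2], [0.0, 1.0], [1.0, 0.0]])
item_titles = ["Blade Runner", "Alien", "Notting Hill", "The Matrix"]
print(explain_recommendation(rec_item=3, user_history=[0, 1, 2],
                             item_factors=item_factors,
                             item_titles=item_titles, k=2))
```

The model stays untouched; only the presentation layer changes, which is why this family of strategies is attractive for retrofitting explainability onto accuracy-optimized recommenders.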
Nowadays, we interact with adaptive and personalized systems that exploit personal data to support us in various scenarios, such as suggesting music to listen to or movies to watch. These personalized and adaptive services are continuously evolving, becoming part of our everyday lives and increasingly acting as personal assistants that proactively help us in complex decision-making tasks. Unfortunately, most of these systems adopt black-box models whose internal mechanisms are opaque to end users. Users typically enjoy personalized suggestions and appreciate being supported in their decision-making tasks, yet they remain unaware of the rationale that guides the algorithms in the adaptation and personalization process. Moreover, the metrics usually adopted to evaluate the effectiveness of these algorithms reward opaque methodologies, such as matrix factorization and neural network-based techniques, that maximize the accuracy of the suggestions at the expense of the transparency and explainability of the model.

This issue is even more evident in light of the EU General Data Protection Regulation (GDPR), which further emphasized the need for, and the right to, scrutable and transparent methodologies that help users understand which information the systems hold about them and how the personalization algorithms behave internally. As a consequence, the primary motivation of the workshop is straightforward: how can we deal with the dichotomy between the need for effective adaptive systems and the right to transparency and interpretability?

The spread of adaptive and personalized systems has its roots in the recent growth of (personal) data, which led to two distinct phenomena. On the one hand, the uncontrolled growth of information emphasized the need for systems able to support users in sifting through this huge flow of data. On the other, the wealth of data points now available about each user (what they like, who their friends are, which places they often visit, etc.) enabled very precise and fine-grained user models, which in turn power very effective personalization and adaptation mechanisms.
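To make the notion of a scrutable user model concrete, one can contrast an opaque latent-factor vector with an explicitly readable profile that the user may inspect, correct, or delete, in the spirit of the GDPR rights discussed above. The following Python sketch is hypothetical (the class and method names are our own) and shows only the shape of such a model, not a reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ScrutableUserModel:
    """A deliberately simple, human-readable user model: a weighted set of
    interests the user can inspect, correct, or delete (in contrast with an
    opaque latent-factor vector)."""
    interests: dict = field(default_factory=dict)  # tag -> weight in [0, 1]

    def update(self, tag, weight):
        # Accumulate evidence for an interest, capping the weight at 1.0.
        self.interests[tag] = min(1.0, self.interests.get(tag, 0.0) + weight)

    def explain(self, k=5):
        # Surface the top-k interests so the user can see what the system
        # currently believes about them.
        top = sorted(self.interests.items(), key=lambda kv: -kv[1])[:k]
        return "; ".join(f"{tag} ({w:.2f})" for tag, w in top)

    def forget(self, tag):
        # User-initiated deletion of an inferred interest (GDPR-style control).
        self.interests.pop(tag, None)

# Toy usage: build, inspect, and edit the model.
m = ScrutableUserModel()
m.update("sci-fi movies", 0.7)
m.update("jazz", 0.4)
m.update("sci-fi movies", 0.5)  # capped at 1.0
print(m.explain())              # sci-fi movies (1.00); jazz (0.40)
m.forget("jazz")                # the user removes an inferred interest
```

Such a representation trades some modeling power for transparency: every inference the system makes is visible to, and revocable by, the user it describes.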