Abstract This talk addresses recommender systems in domains where users may seek behavioral change. Food recommender systems have become popular for helping users find foods to buy and eat. One issue is that the most popular recommendations tend to be unhealthy, making it difficult for users to be exposed to new types of food or to pursue healthier or more sustainable eating habits. Starke will describe recommender studies in which it is not necessarily the presented content that is changed, but rather how that content is explained, examining how users can be supported in making ‘better’ decisions through different types of explanatory nudges, such as food labels, health-based justifications and normative nudges.
Bio Alain Starke is a researcher on recommender systems and nudging, examining how decision-making interfaces can support changes in preferences and behavior, particularly in the food domain. He obtained his PhD in 2019 at Eindhoven University of Technology, Netherlands, on energy recommender systems. Starke has a dual affiliation. He is a postdoctoral researcher at the Marketing and Consumer Behaviour group, Wageningen, Netherlands, where he investigates consumer acceptance of personalized dietary advice. He is also an adjunct associate professor at the Department of Information Science and Media Studies, University of Bergen, Norway, where he performs user studies with recommender systems in the food and news domains.
Abstract Explainability has become an important topic in Data Science and AI in general, and in recommender systems in particular, as algorithms have become much less inherently explainable. However, explainability has different interpretations and goals in different fields. For example, interpretability and explainability tools in machine learning are predominantly developed for Data Scientists to understand and scrutinize their models. Current tools are therefore often quite technical and not very ‘user-friendly’. I will illustrate this with our recent work on improving the explainability of model-agnostic tools such as LIME and SHAP. Another stream of research on explainability, in the HCI and XAI fields, focuses more on users’ needs for explainability, such as contrastive and selective explanations and explanations that fit the mental models and beliefs of the user. However, how to satisfy those needs is still an open question. Based on recent work in interactive AI and machine learning, I will propose that explainability goes hand in hand with interactivity, and will illustrate this with examples from our own work on music genre exploration, which combines visualizations and interactive tools to help users understand and tune our exploration model.
Bio Martijn Willemsen (www.martijnwillemsen.nl) is an Associate Professor on human decision making in interactive systems in the Human Technology Interaction group at Eindhoven University of Technology (TU/e) and at the Jheronimus Academy of Data Science in Den Bosch (JADS). He researches the cognitive aspects of Human-Technology Interaction, with a strong focus on judgment and decision making in online environments. From a theoretical perspective, he has a special interest in process tracing technologies to capture and analyze the information processing of decision makers. His applied research focuses on how (online) decisions can be supported by recommender systems, and spans domains such as movies, music, health-related decisions (food, lifestyle, exercise) and energy-saving measures. His recent focus is on interactive recommender systems that help users move forward, developing new preferences and/or healthier behavior, rather than reinforcing their current behaviors. Such systems can support personalized behavioral change. Martijn also focuses on interactive and explainable AI, with recent work studying health and sport coaches interacting with prediction models.