I am a Full Professor in Explainable AI at Maastricht University. I lead and contribute to several projects in the field of human-computer interaction in artificial advice-giving systems, such as recommender systems; specifically, I develop the state of the art for automatically generated explanations (transparency) and explanation interfaces (recourse and control). These include an EU Marie-Curie ITN on Interactive Natural Language Technology for Explainable Artificial Intelligence. Currently, I represent Maastricht University as a Co-Investigator in the ROBUST consortium, carrying out long-term (10-year) research into trustworthy artificial intelligence. I am also a co-director of the TAIM lab on trustworthy media, in collaboration with UvA and RTL.
I regularly shape local and international scientific research programs (e.g., on steering committees of journals, or as program chair of conferences), and actively organize and contribute to high-level strategic workshops relating to responsible data science, both in the Netherlands and internationally. I am a senior member of the ACM.
(Cartoon by Erwin Suvaal from CVIII ontwerpers.)
As algorithmic decision-making becomes prevalent across many sectors, it is important to help users understand why certain decisions are being proposed. Explanations are needed when there is a large knowledge gap between humans and systems, or when joint understanding is only implicit. This type of joint understanding is becoming increasingly important, for example, when news providers and social media platforms, such as Twitter and Facebook, filter and rank the information that people see.
To link the mental models of both systems and people, our work develops ways to supply users with a level of transparency and control that is meaningful and useful to them. We develop methods for generating and interpreting rich meta-data that helps bridge the gap between computational and human reasoning (e.g., for understanding subjective concepts such as diversity and credibility). We also develop a theoretical framework for generating better explanations (as both text and interactive explanation interfaces) that adapts to a user and their context. To better understand the conditions for explanation effectiveness, we look at:
when to explain (e.g., surprising content, lean in/lean out, risk, complexity); and
what to adapt to (e.g., group dynamics, personal characteristics of a user).
Relevant keywords: explanations, natural language generation, human-computer interaction, personalization (recommender systems), intelligent user interfaces, diversity, filter bubbles, responsible data analytics.
News:
2023
7th of July: Report accepted to SIGIR Forum: Bauer, C., Carterette, B., Ferro, N., Fuhr, N., et al. (2023). Report on the Dagstuhl Seminar on Frontiers of Information Access Experimentation for Research and Education. SIGIR Forum.
2nd of June: Two full papers accepted at the 1st International XAI Conference:
Title: A co-design study for multi-stakeholder job recommender system explanations
Authors: Roan Schellingerhout, Francesco Barile and Nava Tintarev
Title: Explaining Search Result Stances to Opinionated People
Authors: Zhangyi Wu, Tim Draws, Federico Cau, Francesco Barile, Alisa Rieger and Nava Tintarev
2nd of June: Two journal papers to appear in the UMUAI special issue on group recommender systems:
Title: Evaluating Explainable Social Choice-based Aggregation Strategies for Group Recommendation.
Authors: Francesco Barile, Tim Draws, Oana Inel, Alisa Rieger, Shabnam Najafian, Amir Ebrahimi Fard, Rishav Hada, and Nava Tintarev.
Title: How do People Make Decisions in Disclosing Personal Information in Tourism Group Recommendations in Competitive versus Cooperative Conditions?
Authors: Shabnam Najafian, Geoff Musick, Bart Knijnenburg, and Nava Tintarev.
26th of April: Upcoming talks and events: Panelist on the opportunities and challenges of artificial intelligence in our society as part of the Philips Innovation Award on the 15th of May; Panelist on Open Science and AI at the UM Open Science Festival on the 25th of May; Invited talk at the SIAS research group and the Civic AI Lab on the 26th of May; Keynote at IS-EUD in June.
17th of February: Journal paper (TIIS) accepted: "Effects of AI and Logic-Style Explanations on Users' Decisions under Different Levels of Uncertainty". Authors: Federico Cau, Hanna Hauptmann, Davide Spano, and Nava Tintarev.
16th of January: The long-term program ROBUST ``Trustworthy AI systems for sustainable growth'' has been awarded 25 million Euros by NWO under the new Long Term Program! Ph.D. positions for 17 labs will be advertised here soon, including the TAIM lab (UM, UvA, RTL) on trustworthy media.
16th of January: Publications accepted to IUI and CHI!
Cau, F. M., Hauptmann, H., Spano, L. D., & Tintarev, N. (2023). Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations. IUI.
BEST PAPER AWARD: Yurrita, M., Draws, T., Balayn, A., Murray-Rust, D., Tintarev, N., & Bozzon, A. (2023). Disentangling Fairness Perceptions in Algorithmic Decision-Making: the Effects of Explanations, Human Oversight, and Contestability. CHI.
2022
16th of December: Our joint effort, "Viewpoint Diversity in Search Results", has been accepted for presentation at the ECIR 2023 conference. Full author list: Tim Draws, Nirmal Roy, Oana Inel, Alisa Rieger, Rishav Hada, Mehmet Orcun Yalcin, Benjamin Timmermans, and Nava Tintarev.
16th of December: Another paper acceptance to celebrate! This time at CHIIR: Explainable Cross-Topic Stance Detection for Search Results. Co-authors: Tim Draws, Karthikeyan Natesan Ramamurthy, Ioana Baldini Soares, Amit Dhurandhar, Inkit Padhi, Benjamin Timmermans, and Nava Tintarev.
18th of October: Submission accepted to WSDM 2023: Beyond Digital ``Echo Chambers'': The Role of Viewpoint Diversity in Political Discussion. With Rishav Hada, Amir Ebrahimi Fard, Sarah Shugars, Federico Bianchi, Patricia Rossini, Dirk Hovy, and Rebekah Tromble
10th of October: Enjoyed giving a talk and visiting the IRlab (Amsterdam)!
6th of October: Short paper accepted to EMNLP: ``It's Not Just Hate'': A Multi-Dimensional Perspective on Detecting Harmful Speech Online. With Federico Bianchi, Stefanie Hills, Patricia Rossini, Dirk Hovy, and Rebekah Tromble.
3rd of October: Enjoyed giving a talk for Bell Labs (Cambridge/Online).
6th of July: Looking forward to participating in the panel on Ethics and NLG at the International Natural Language Generation Conference (INLG) on the 20th of July.
3rd of June: The third edition of the Recommender Systems Handbook, including our book chapter ``Explaining Recommendations: Beyond single items'', has now been published.
3rd of June: Congratulations to PhD candidate Alisa Rieger on her acceptance to the UMAP doctoral consortium, and on her accepted paper at the Explainable User Modeling workshop titled ``Towards Healthy Online Debate: An Investigation of Debate Summaries and Personalized Persuasive Suggestions''!
31st of March: I'll be giving one of the keynotes at the Joint EurAI Advanced Course on AI, TAILOR Summer School 2022. This year's theme is Explainable AI.
17th of March: Our paper ``Comprehensive Viewpoint Representations for a Deeper Understanding of User Interactions With Debated Topics'' with Tim Draws, Oana Inel, Christian Baden, and Benjamin Timmermans has won best paper award at CHIIR'22! Third best paper award in a row!
23rd of November: Our paper ``The European Approach to AI from a Recommender System Perspective'' is now online, with Tommaso Di Noia, Panagiota Fatourou, and Markus Schedl. This is a Big Trend paper in the Communications of the ACM (CACM) Region Special Section Europe 2022.
2021
17th of November: Our paper ``A Checklist to Combat Cognitive Biases in Crowdsourcing'' with Tim Draws, Alisa Rieger, Oana Inel, and Ujwal Gadiraju has won the Amazon Best Paper Award at HCOMP 2021.
6th of October: Maastricht University is preparing to participate in a 10-year AI research programme. I will be a co-investigator and chair for the integration of humanities and social sciences in ROBUST, a consortium applying for an NWO grant with a total budget of 95M Euros (25M from NWO) to carry out long-term research into reliable artificial intelligence (AI).
UM press release,
ICAI lab press release,
NWO press release
27th of September: Our journal submission, ``Design Implications for Explanations: A Case Study on Supporting Reflective Assessment of Potentially Misleading Videos'', was published in Frontiers in Artificial Intelligence, section AI for Human Learning and Behavior Change. Authors: Oana Inel, Tomislav Duricic, Harmanpreet Kaur, Elisabeth Lex, and Nava Tintarev.
9th of September: Our submission ``This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias'' won the Douglas Engelbart Best Paper Award at HyperText'21! Authors: Alisa Rieger, Tim Draws, Mariet Theune, and Nava Tintarev.
26th of August: A group effort titled ``Toward Benchmarking Group Explanations: Evaluating the Effect of Aggregation Strategies versus Explanation'' was accepted to the Perspectives workshop at ACM RecSys. Submission led by Francesco Barile.
Another group effort was also accepted to HCOMP: A Checklist to Combat Cognitive Biases in Crowdsourcing, led by Tim Draws.
12th of July: Two full papers accepted to HyperText'21. 1) ``This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias''. Led by Alisa Rieger, and with Tim Draws and Mariet Theune. 2) ``Exploring User Concerns about Disclosing Location and Emotion Information in Group Recommendations''. Led by Shabnam Najafian, and co-authored with Tim Draws, Francesco Barile, Marko Tkalcic, and Jie Yang.
9th of June: Paper accepted at the workshop NLP for positive impact: ``Are we human, or are we users? The role of natural language processing in human-centric news recommenders that nudge users to diverse content.'' With: Myrthe Reuver, Nicolas Mattis, Marijn Sax, Suzan Verberne, Natali Helberger, Judith Moeller, Sanne Vrijenhoek, Antske Fokkens and Wouter van Atteveldt.
26th of May: ``How Do Biased Search Result Rankings Affect User Attitudes on Debated Topics?'' was accepted as a full paper to SIGIR'21, led by PhD candidate Tim Draws and with Ujwal Gadiraju, Alessandro Bozzon, and Ben Timmermans.
18th of March: Really enjoyed moderating the Webinar on Ethics in AI, co-organized by DKE, BISS, Brightlands, and IBM!
17th of March: Gave a talk at the Computational Communication Science group at the VU titled ``Toward Measuring Viewpoint Diversity in News Consumption''.
8th of March: Full paper accepted at UMAP: Factors Influencing Privacy Concern for Explanations of Group Recommendation (acceptance rate for full papers: 23.3%)! Paper led by Shabnam Najafian and with Amra Delic and Marko Tkalcic.
15th of January: Full paper accepted at Persuasive'21: ``Disparate Impact Diminishes Consumer Trust Even for Advantaged Users''. Led by Tim Draws and with Zoltan Szlavik, Benjamin Timmermans, Kush R. Varshney, and Michael Hind.
14th of January: New journal paper accepted to ACM TiiS: Humanized Recommender Systems: State-of-the-Art and Research Issues. Led by Trang Tran Ngoc, and with Alexander Felfernig.
4th of January: Recognized as a Senior Member of the ACM!
4th of January: Book chapter ``Explaining Recommendations: Beyond single items'' has been conditionally accepted for publication in the third edition of the Recommender Systems Handbook!
4th of January: New full paper accepted at the FAccT Conference: ``Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces''. Joint work with Mats Mulder, Oana Inel, and Jasper Oosterman. This is an outcome of Mats' Master's thesis and a great collaboration with Blendle Research.