Webinar: A Journey towards Explainable AI and its Societal Implications

23rd Nov 2020

Speaker: Debasis Ganguly (IBM Research Europe, Dublin, Ireland)

The world of AI has witnessed a significant paradigm shift from feature-driven learning to data-driven learning. While the working principles of feature-based models align well with human perception, the nature of the abstractions learned by data-driven models is very different from those perceived by humans. Consequently, to understand the working principle of a data-driven predictive model, it is imperative to understand the subtle relations between the abstract representations of the data and the prediction outputs.

In this tutorial-style lecture, I will review existing methodologies for explanation in AI, namely LIME, L2X, and Shapley-value-based algorithms. Following this, I will describe some of my own work on applying explanations to search systems. I will also share some of my thoughts on how explainable AI can play a crucial role in building next-generation AI systems that, in addition to being fair and trustworthy, are also likely to have a broad societal impact.
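As a point of reference for attendees unfamiliar with these methods, the sketch below (a minimal Python illustration, not material from the talk) shows the core Shapley-value idea that underlies one family of explanation algorithms: a feature's attribution is its average marginal contribution to the prediction over random orderings of the features. The toy model, baseline, and sample count here are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy "black-box" predictor: any function mapping a feature vector to a score.
    return 3.0 * x[0] + 1.5 * x[1] - 2.0 * x[2]

def shapley_values(model, x, baseline, n_samples=2000):
    """Monte Carlo estimate of each feature's Shapley value for instance x,
    measured relative to a baseline (reference) input."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)   # random order in which features "join"
        z = baseline.copy()
        prev = model(z)
        for j in perm:
            z[j] = x[j]             # add feature j to the coalition
            curr = model(z)
            phi[j] += curr - prev   # marginal contribution of feature j
            prev = curr
    return phi / n_samples

x = np.array([1.0, 2.0, 0.5])
baseline = np.zeros(3)
phi = shapley_values(model, x, baseline)
print(phi)                                     # approx. [3.0, 3.0, -1.0] for this linear model
print(phi.sum(), model(x) - model(baseline))   # attributions sum to the prediction gap

Methods such as LIME and L2X take different routes (local surrogate models and learned feature subsets, respectively), but share the same goal of attributing a black-box prediction to its input features.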

Short bio:

Debasis Ganguly is a research staff member at IBM Research Europe, Dublin, Ireland. His research interests broadly span topics such as semantic matching, trusted and fair AI, model interpretability, and AI for healthcare. He obtained his master's degree in computer science from the Indian Statistical Institute, Calcutta, and his PhD in Information Retrieval from Dublin City University. He maintains an active academic profile, publishing in top-tier conferences and serving as a PC member for conferences such as SIGIR, CIKM, WWW, NAACL, and AAAI. He also regularly serves on the reviewing committees of a number of high-impact journals, such as Pattern Recognition, IPM, and TOIS. In terms of workshop and shared-task organization, he has co-organized two editions of the 'Exploitation of Social Media for Emergency Relief and Preparedness (SMERP)' workshop, at ECIR 2017 and WWW 2018, and has co-organized the Personalized IR tracks at FIRE'11 and CLEF'17. He is currently organizing two shared tasks at FIRE-2020: Retrieval from Conversational Dialogues (RCD) and Causality-driven Adhoc Information Retrieval (CAIR).