Re: [AIRG] Model-Agnostic Explanations, 10/31


Date: Wed, 31 Oct 2018 16:40:44 +0000
From: Aubrey Barnard <barnard@xxxxxxxxxxx>
Subject: Re: [AIRG] Model-Agnostic Explanations, 10/31
AIRG,

Happy Halloween! I would like to remind you that this afternoon Yuriy will be presenting on explaining black-box models. Not only is this a topic of growing relevance as deep networks see ever wider use, but one of the related papers is a classic from our own Mark Craven.

4pm, CS 3310

I hope to see you there! (AIRG is not scary, even on Halloween.)

Aubrey

________________________________________
From: AIRG <airg-bounces@xxxxxxxxxxx> on behalf of Yuriy Sverchkov via AIRG <airg@xxxxxxxxxxx>
Sent: Friday, October 26, 2018 09:48
To: airg@xxxxxxxxxxx
Subject: [AIRG] Model-Agnostic Explanations, 10/31

Hi All,


At the next AIRG I’ll be presenting some work on getting explanations from black-box models, including deep neural networks for images and recurrent neural networks for text.


CS 3310, 4pm, October 31


The main paper I will focus on is:
Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. In AAAI Conference on Artificial Intelligence.
https://homes.cs.washington.edu/~marcotcr/aaai18.pdf


I will also go over some relevant background including:
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (a predecessor to the above by the same lab).
https://dl.acm.org/citation.cfm?id=2939778
Craven, M. W., & Shavlik, J. W. (1995). Extracting Tree-Structured Representations of Trained Networks. In Advances in Neural Information Processing Systems (NIPS).
http://papers.nips.cc/paper/1152-extracting-tree-structured-representations-of-trained-networks.pdf


See you all there,
Yuriy
