Date: Fri, 29 Oct 2021 12:03:49 +0000
From: Wiley Corning <wcorning@xxxxxxxx>
Subject: [madPL] Reading Group Today 10/29
Hi everyone,

The madPL reading group will meet today at the usual time and place: 1pm CT in the CS Department, Room 533 (and also on Zoom). This week, Yuhao and Anna will each present their recent work; their abstracts are below.

Deep neural networks for natural language processing are fragile in the face of adversarial examples -- small input perturbations, like synonym substitution or word duplication, which cause a neural network to change its prediction. We present an approach to certifying the robustness of LSTMs (and extensions of LSTMs) and to training models that can be efficiently certified. Our approach can certify robustness to intractably large perturbation spaces defined programmatically in a language of string transformations. Our evaluation shows that (1) our approach can train models that are more robust to combinations of string transformations than those produced using existing techniques, and (2) it achieves high certification accuracy on the resulting models.
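For anyone who wants a concrete feel for the setup before the talk, here is a minimal Python sketch of a programmable perturbation space, assuming a toy synonym table and two composable string transformations (SYNONYMS, synonym_subs, word_dups, and is_robust are illustrative names, not the paper's implementation). It checks robustness by brute-force enumeration, which is exactly what the paper's certification technique is designed to avoid:

SYNONYMS = {"good": ["fine", "great"], "movie": ["film"]}

def synonym_subs(words):
    # Yield word lists with at most one synonym substitution.
    yield words
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            yield words[:i] + [s] + words[i + 1:]

def word_dups(words):
    # Yield word lists with at most one duplicated word.
    yield words
    for i in range(len(words)):
        yield words[:i + 1] + [words[i]] + words[i + 1:]

def perturbation_space(sentence):
    # Compose the two transformations. Real perturbation spaces grow
    # exponentially with sentence length, making enumeration intractable.
    seen = set()
    for a in synonym_subs(sentence.split()):
        for b in word_dups(a):
            s = " ".join(b)
            if s not in seen:
                seen.add(s)
                yield s

def is_robust(model, sentence):
    # Brute-force pointwise check: every perturbed input must keep the
    # original prediction. The paper certifies this without enumerating.
    base = model(sentence)
    return all(model(p) == base for p in perturbation_space(sentence))

# Stand-in "model": predicts 1 if any positive word is present.
model = lambda s: int(any(w in s.split() for w in ("good", "fine", "great")))
print(is_robust(model, "a good movie"))  # True for this toy space

Composing even two small transformations already multiplies the space, which is why symbolic certification rather than enumeration is the interesting part of the talk.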

Datasets can be biased due to societal inequities, human biases, under-representation of minorities, etc. Our goal is to certify that models produced by a learning algorithm are pointwise-robust to potential dataset biases. This is a challenging problem: it entails learning models for a large, or even infinite, number of datasets and ensuring that they all produce the same prediction. We focus on decision-tree learning due to the interpretable nature of the models. Our approach allows programmatically specifying bias models across a variety of dimensions (e.g., missing data for minorities), composing types of bias, and targeting bias towards a specific group. To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that every dataset produces the same prediction for a specific test point. We evaluate our approach on datasets that are commonly used in the fairness literature, and demonstrate our approach's viability on a range of bias models.
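As a rough illustration only (this is not the paper's symbolic technique, and biased_datasets/certify_pointwise are hypothetical names), here is a minimal sketch of pointwise certification under one simple bias model, "at most k labels of rows in a targeted group may be wrong," using scikit-learn decision trees and brute-force enumeration:

from itertools import combinations
from sklearn.tree import DecisionTreeClassifier

def biased_datasets(y, group_mask, k=1):
    # Yield every label vector obtained by flipping at most k binary
    # labels of rows in the targeted group (one toy bias model).
    idxs = [i for i, in_group in enumerate(group_mask) if in_group]
    for r in range(k + 1):
        for flips in combinations(idxs, r):
            y_biased = list(y)
            for i in flips:
                y_biased[i] = 1 - y_biased[i]
            yield y_biased

def certify_pointwise(X, y, group_mask, x_test, k=1):
    # Pointwise-robust iff every biased dataset trains a tree with the
    # same prediction at x_test (fixed seed keeps learning deterministic).
    preds = set()
    for y_biased in biased_datasets(y, group_mask, k):
        tree = DecisionTreeClassifier(random_state=0).fit(X, y_biased)
        preds.add(int(tree.predict([x_test])[0]))
    return len(preds) == 1

# Tiny example: six rows, one feature; the last two rows form the group.
X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 1, 1, 1]
print(certify_pointwise(X, y, [0, 0, 0, 0, 1, 1], x_test=[0], k=1))

The number of biased datasets grows combinatorially in k and the group size, so this enumeration collapses quickly on real data; evaluating the learner symbolically on all of those datasets at once is the hard part the talk addresses.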
If you are planning to attend in person, please let me know in advance, either via email or Slack, so I can determine how much food to bring. I look forward to hearing today's talks!

Best,
Wiley

