Date: Tue, 22 May 2018 13:31:59 -0500
From: Shuchi Chawla <shuchi@xxxxxxxxxxx>
Subject: [theory students] Fwd: Quarterly Theory Workshop: Algorithmic Fairness (Friday, June 8)
FYI...

I'm planning to go to this and would be happy to carpool if others are interested.

Shuchi

---------- Forwarded message ---------
From: Aravindan Vijayaraghavan <aravindv@xxxxxxxxxxxxxxxx>
Date: Mon, May 21, 2018 at 10:10 PM
Subject: Quarterly Theory Workshop: Algorithmic Fairness (Friday, June 8)
To: Shuchi Chawla <shuchi@xxxxxxxxxxx>
Cc: Jason Hartline <hartline@xxxxxxxxxxxxxxxxxxxxx>, Anindya De <anindya@xxxxxxxxxxxxxxxxxxxxx>, Konstantin Makarychev <konstantin@xxxxxxxxxxxxxxxx>


Hi Shuchi,
We are having our quarterly theory workshop at Northwestern on Friday,
June 8th on the topic of algorithmic fairness.

The speakers for this workshop are Nicole Immorlica, Jon Kleinberg,
Omer Reingold, and Aaron Roth. The talks will start at around 9:00 am and run until 12:30 pm, followed by
lunch at the workshop venue and 1-on-1s in the afternoon. The details of the workshop are given below.

Please forward the announcement to people in your group (or
department) who might be interested. Registration is free; however, we
ask that you register at http://tinyurl.com/qtw-reg-fair if possible, so that we have a rough head count for planning.

Thanks,
Aravindan (on behalf of the Northwestern theory group)


===========================================
Northwestern Computer Science 
--- hosts ---
The Quarterly Theory Workshop
--- on ---
Algorithmic Fairness
--- with ---
Nicole Immorlica (MSR), Jon Kleinberg (Cornell), Omer Reingold (Stanford), and Aaron
Roth (U. of Pennsylvania).
--- https://theory.eecs.northwestern.edu/events/algorithmic-fairness/ ---
===========================================

ABOUT THIS SERIES

The Quarterly Theory Workshop brings in three or four theoretical computer science
experts to present their perspectives and research on a common theme.  Chicago-area
researchers with an interest in theoretical computer science are invited to attend.
The technical program is in the morning and includes coffee and lunch.  The
afternoon of the workshop will allow for continued discussion between attendees and
the speakers.


SYNOPSIS

As algorithmic systems have increasingly been deployed to make consequential
decisions, it is becoming urgent that we grapple with the problem of (un)fairness
and discrimination. These are delicate issues; to think rigorously about them, we
first need to figure out how to formally define what we want, and then reason about
how we might go about achieving our goals algorithmically, and what tradeoffs we
will have to manage. This workshop focuses on recent advances in the theory of
algorithmic fairness: both on foundational definitional questions, and on
algorithmic techniques. The speakers are Nicole Immorlica (MSR), Jon Kleinberg
(Cornell), Omer Reingold (Stanford), and Aaron Roth (U. of Pennsylvania).

The technical program of this workshop is organized by Aaron Roth and Jason Hartline.


LOGISTICS

Date: Friday, June 8, 2018.
Location: Kellogg Global Hub, Room 4101, Northwestern University, Evanston, IL
Parking: Validation for North Campus Parking Garage available at workshop.
Registration: Please see the webpage for info on registration and signing up for meetings, and bring your own name badge from a past conference.


SCHEDULE

8:30-9:00: Continental Breakfast
9:00-9:05: Opening Remarks
9:05-9:45: Jon Kleinberg: Algorithmic Fairness Criteria for Evaluating Individuals
and Sets
9:50-10:30: Aaron Roth: Preventing Fairness Gerrymandering in Machine Learning
10:35-11:05: Coffee Break
11:05-11:45: Omer Reingold: On Algorithmic Fairness Between Groups and Individuals
11:50-12:30: Nicole Immorlica: Fair Classification and Learning
12:35-1:30: Lunch


ABSTRACTS

Speaker: Jon Kleinberg
Title: Algorithmic Fairness Criteria for Evaluating Individuals and Sets

Abstract: Recent discussion in the public sphere about classification by algorithms
has involved tension between competing notions of what it means for such a
classification to be fair to different groups. We consider several of the key
fairness conditions that lie at the heart of these debates, and discuss recent
research establishing inherent trade-offs between these conditions. We also consider
a variety of methods for promoting fairness and related notions for classification
and selection problems that involve sets rather than just individuals.

This talk is based on joint work with Sendhil Mullainathan, Manish Raghavan, and
Maithra Raghu.
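
For concreteness, here is a minimal Python sketch (an illustration of the
general setting, not code from the talk) of two of the criteria in tension:
positive predictive value, a calibration-style condition, and false positive
rate, a balance condition. The data, threshold, and group structure below
are all hypothetical.

# Minimal sketch (illustrative only): per-group positive predictive value
# and false positive rate for a thresholded score, on synthetic data.
import numpy as np

def group_metrics(scores, labels, groups, threshold=0.5):
    """Per-group PPV and FPR for the predictions `scores >= threshold`."""
    out = {}
    for g in np.unique(groups):
        s, y = scores[groups == g], labels[groups == g]
        pred = s >= threshold
        out[g] = {
            "ppv": y[pred].mean() if pred.any() else float("nan"),
            "fpr": (pred & (y == 0)).sum() / max((y == 0).sum(), 1),
        }
    return out

# Two groups with different base rates (0.3 vs 0.6). With unequal base
# rates, no non-trivial scorer can equalize both statistics at once --
# the kind of inherent trade-off the talk discusses.
rng = np.random.default_rng(0)
groups = np.array([0] * 500 + [1] * 500)
labels = rng.binomial(1, np.where(groups == 0, 0.3, 0.6))
scores = np.clip(0.6 * labels + rng.normal(0.2, 0.2, size=1000), 0.0, 1.0)
print(group_metrics(scores, labels, groups))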


Speaker: Aaron Roth
Title: Preventing Fairness Gerrymandering in Machine Learning

Abstract: The most prevalent notions of fairness in machine learning are statistical
notions: they fix a small collection of high-level, pre-defined groups (such as race
or gender), and then ask for approximate parity of some statistic of the classifier
(like positive classification rate or false positive rate) across these groups.
Constraints of this form are susceptible to (intentional or inadvertent) fairness
gerrymandering, in which a classifier appears to be fair on each individual group,
but badly violates the fairness constraint on one or more structured subgroups
defined over the protected attributes (such as certain combinations of protected
attribute values). Individual notions of fairness avoid this problem, but at the
cost of needing to make often unrealistic assumptions about the setting. We propose
instead to demand statistical notions of fairness across exponentially (or
infinitely) many subgroups, defined by a structured class of functions over the
protected attributes. We will show how to achieve this in polynomial time, under the
assumption that we are given oracles for solving the unconstrained learning problems
(for both the set of classifiers to be learned, and the set of subgroups to be
protected). Our algorithm casts the problem as solving for an equilibrium in a zero-sum
game, and observes that learning oracles are enough to efficiently play
no-regret learning dynamics. We then demonstrate experimentally that our proposed
algorithms are practical, by investigating their performance on several real
datasets when instantiated with learning heuristics in place of oracles.

This talk is based on joint work with Michael Kearns, Seth Neel, and Steven Wu.
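
To see the phenomenon concretely, here is a small Python sketch (a toy
construction of ours, not the paper's): a classifier whose positive rate is
essentially identical across each marginal protected group, yet maximally
unequal on the four intersectional subgroups defined over the two
(hypothetical) protected attributes.

# Toy illustration of fairness gerrymandering: equal positive rates on
# each marginal group, 0%/100% positive rates on the intersections.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4000
a = rng.integers(0, 2, size=n)       # protected attribute 1
b = rng.integers(0, 2, size=n)       # protected attribute 2
pred = (a == b).astype(int)          # classify positive iff a and b agree

def positive_rate(mask):
    return pred[mask].mean()

for name, attr in (("a", a), ("b", b)):
    for v in (0, 1):
        print(f"P(positive | {name}={v}) = {positive_rate(attr == v):.2f}")
for va, vb in itertools.product((0, 1), repeat=2):
    rate = positive_rate((a == va) & (b == vb))
    print(f"P(positive | a={va}, b={vb}) = {rate:.2f}")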


Speaker: Omer Reingold
Title: On Algorithmic Fairness Between Groups and Individuals 

Abstract: As algorithms increasingly inform and influence decisions made about
individuals, it is increasingly important to address concerns that these algorithms
might be discriminatory. Historically, definitions of fairness fell into one of two
extremes: (1) broad-strokes statistical guarantees; (2) individual-level
protections. Statistical guarantees tend to be much easier to satisfy (information
and complexity theoretically), but tend to be much weaker in the protections they
provide. We will discuss two recent works towards bridging the gap between
statistical and individual protections. These works provide efficient learning
algorithms that also ensure every subpopulation within some rich class is protected
(according to some fairness notion).

One of the notions we will discuss is that of multicalibration, a new measure of
algorithmic fairness that aims to mitigate concerns about discrimination that is
introduced in the process of learning a predictor from data. The second notion
studies fair classification within the versatile framework of Dwork et al. [ITCS
2012], which assumes the existence of a metric that measures similarity between
pairs of individuals. Unlike previous works on metric-based fairness, we study the
setting where a learning algorithm does not know the entire metric, but can query
similarities between particular pairs of individuals a bounded number of times. We
note that we do not make any assumption on the metric and are still able to obtain
meaningful protection from systemic discrimination that we refer to as "metric
multifairness."

The talk will be focused on the various ways in which algorithmic fairness can be
defined but will also allude to some of the ways in which it can be obtained. It is
based on joint works with Úrsula Hébert-Johnson, Michael P. Kim, and Guy Rothblum.
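
As a rough illustration of the multicalibration notion (a simplified audit
of ours, not the algorithm from the paper), the sketch below checks whether
a predictor is approximately calibrated on every subpopulation in a supplied
class: it buckets the predictions and flags any (subpopulation, bucket) cell
where predicted and realized rates diverge by more than alpha. The
subpopulation class, bucket count, and cell-size cutoff are all hypothetical
choices.

# Simplified multicalibration audit on synthetic data.
import numpy as np

def multicalibration_violations(p, y, subpops, n_buckets=10, alpha=0.05):
    """Return (subpopulation, bucket, gap) triples where predictor p is
    miscalibrated by more than alpha; `subpops` maps names to boolean
    membership masks."""
    buckets = np.minimum((p * n_buckets).astype(int), n_buckets - 1)
    violations = []
    for name, mask in subpops.items():
        for v in range(n_buckets):
            cell = mask & (buckets == v)
            if cell.sum() < 30:          # skip statistically tiny cells
                continue
            gap = abs(y[cell].mean() - p[cell].mean())
            if gap > alpha:
                violations.append((name, v, round(gap, 3)))
    return violations

# A predictor that is calibrated on x <= 0.5 but overconfident on x > 0.5.
rng = np.random.default_rng(2)
x = rng.uniform(size=2000)
y = rng.binomial(1, x)
p = np.clip(x + 0.2 * (x > 0.5), 0.0, 1.0)
subpops = {"all": np.ones(2000, dtype=bool), "x > 0.5": x > 0.5}
print(multicalibration_violations(p, y, subpops))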


Speaker: Nicole Immorlica
Title: Fair Classification and Learning

Abstract:  Classification, learning, and optimization algorithms are increasingly
used to make decisions of social consequence. An automated resume reader classifies
applicants, deciding which are worth interviewing. A learning algorithm estimates
the impact of a drug on health outcomes. A facility location algorithm chooses where
to locate new warehouses. For these algorithms to be useful, practitioners strive to
maximize their accuracy and optimality. However, this can result in differential
treatment across population subgroups, producing a sense of inequity. Fairness
metrics are used to measure the degree of inequity. In this talk, we explore how to
jointly optimize fairness and accuracy using, as a black box, existing algorithms
that optimize accuracy or optimality. Our first solution is tailored to
classification tasks and proposes decoupling the classification task, using
different classifiers for each subgroup. Our second solution can optimize any smooth
and continuous function of accuracy and fairness in classification, learning, and
optimization settings. It further has the advantage that subgroup membership, while
used to train, is not used at run time. This additional power comes with the cost of
added computational complexity in training.
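
The first solution mentioned above, decoupling, is simple enough to sketch.
Below is a minimal Python version (our reading of the idea, not the authors'
code): one off-the-shelf classifier per subgroup, with each example routed
to its group's model. Note that this variant needs subgroup membership at
prediction time, which is exactly what the abstract's second solution
avoids.

# Minimal decoupled classifier: one independently trained model per
# subgroup. The base learner and routing scheme are illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

class DecoupledClassifier:
    def __init__(self, base=LogisticRegression):
        self.base = base
        self.models = {}

    def fit(self, X, y, groups):
        # Train one model per subgroup, independently.
        for g in np.unique(groups):
            model = self.base()
            model.fit(X[groups == g], y[groups == g])
            self.models[g] = model
        return self

    def predict(self, X, groups):
        # Route each example to the model for its subgroup.
        out = np.empty(len(X), dtype=int)
        for g, model in self.models.items():
            mask = groups == g
            if mask.any():
                out[mask] = model.predict(X[mask])
        return out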

