Re: [theory students] faculty candidate Lunjia Hu (April 30)


Date: Tue, 30 Apr 2024 16:40:45 +0000
From: ILIAS DIAKONIKOLAS <ilias@xxxxxxxxxxx>
Subject: Re: [theory students] faculty candidate Lunjia Hu (April 30)
Dear all,

This talk starts at noon in CS1240. Hope to see many of you there!

Ilias

From: Csfaculty <csfaculty-bounces@xxxxxxxxxxx> on behalf of ILIAS DIAKONIKOLAS <ilias@xxxxxxxxxxx>
Sent: Monday, April 29, 2024 3:01 PM
To: csfaculty@xxxxxxxxxxx <csfaculty@xxxxxxxxxxx>; faculty <faculty@xxxxxxxxxxx>; theory-students@xxxxxxxxxxx <theory-students@xxxxxxxxxxx>; graduates@xxxxxxxxxxx <graduates@xxxxxxxxxxx>
Subject: [Csfaculty] faculty candidate Lunjia Hu (April 30)
 
Dear all,

We will have a theory faculty candidate on April 30, Lunjia Hu (https://sites.google.com/stanford.edu/lunjia?pli=1) from Stanford. 

Lunjia has been doing great work in the foundations of trustworthy machine learning. 

His schedule is here:


Talk information follows.

Best,
Ilias

========

Title: Mathematical Foundations for Trustworthy Machine Learning

 

Abstract: 

Machine learning holds significant potential for positive societal impact. However, in critical applications involving people, such as healthcare, employment, and lending, machine learning raises serious concerns about fairness, robustness, and interpretability. Addressing these concerns is crucial for making machine learning more trustworthy. This talk will focus on three lines of my recent research establishing the mathematical foundations of trustworthy machine learning. First, I will introduce a theory that optimally characterizes the amount of data needed to achieve multicalibration, a recent fairness notion with many impactful applications. This result is an instance of a broader theory developed in my research that gives the first sample complexity characterizations for learning tasks with multiple interacting function classes (ALT'22 Best Student Paper, ITCS'23 Best Student Paper). Next, I will discuss my research on omniprediction, a new approach to robust learning that allows simultaneous optimization of different loss functions and fairness constraints (ITCS'23, ICML'23). Finally, I will present a principled theory of calibration for neural networks (STOC'23). This theory provides an essential tool for understanding uncertainty quantification and interpretability in deep learning, enabling rigorous explanations of interesting empirical phenomena (NeurIPS'23 spotlight, ITCS'24).

 

Bio: Lunjia Hu is a final-year Computer Science PhD student at Stanford University, advised by Moses Charikar and Omer Reingold. He works on advancing the theoretical foundations of trustworthy machine learning, addressing fundamental questions about interpretability, fairness, robustness, and uncertainty quantification. His work on algorithmic fairness and machine learning theory has received Best Student Paper awards at ALT 2022 and ITCS 2023.



