Hi everyone,
Details: Wed Nov 12 at 2:30-3:30pm, Morgridge Hall 3610
Title:
How fast can you find a good hypothesis?
Abstract:
In the hypothesis selection problem, we are given a finite set of candidate distributions (hypotheses), H_1, ..., H_n, and samples from an unknown distribution P. Our goal is to find a hypothesis H_i whose total variation distance to P is comparable to that of the nearest hypothesis in the set. Specifically, if the minimum distance is OPT, we aim to output an H_i such that, with probability at least 1 − δ, its total variation distance to P is at most C · OPT + ε. Key aspects of this problem remain unresolved, especially regarding the computational efficiency of statistically optimal hypothesis selection. In this talk, we will highlight recent progress on two questions: (a) Does there exist an algorithm that simultaneously runs in near-linear time, achieves the best possible approximation factor, and succeeds with high probability? Intriguingly, such algorithms exist if we relax any one of these criteria. (b) Can the approximation factor be improved in expectation (similarly, if we are allowed to output a mixture of the hypotheses)? This question is closely related to the challenge of achieving the optimal approximation for improper hypothesis selection in finite time on real-valued domains. This talk is based on joint work by Anders Aamand, Maryam Aliakbarpour, Justin Y. Chen, and Sandeep Silwal.
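
For anyone new to the problem, below is a minimal sketch of the classic Scheffé tournament baseline, not the algorithms from the talk: it achieves a factor-3 approximation using O(n^2) pairwise tests, far from the near-linear-time regime discussed in the abstract. The finite-domain representation and the function names are illustrative assumptions.

# A minimal, illustrative sketch of the classic Scheffe tournament for
# hypothesis selection (not the algorithms from the talk). Distributions
# are probability vectors over a finite domain {0, ..., d-1}; the names
# scheffe_winner and select_hypothesis are hypothetical.
import numpy as np

def scheffe_winner(hi, hj, empirical):
    """Scheffe test between hypotheses hi and hj.

    On the Scheffe set A = {x : hi(x) > hj(x)}, the hypothesis whose
    mass on A is closer to the empirical mass wins the comparison.
    """
    A = hi > hj
    p_hat = empirical[A].sum()
    return 0 if abs(hi[A].sum() - p_hat) <= abs(hj[A].sum() - p_hat) else 1

def select_hypothesis(hypotheses, samples, domain_size):
    """Round-robin tournament: return the index of the hypothesis with
    the most pairwise wins. Classically this guarantees total variation
    distance at most 3 * OPT + eps given enough samples."""
    empirical = np.bincount(samples, minlength=domain_size) / len(samples)
    n = len(hypotheses)
    wins = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            w = scheffe_winner(hypotheses[i], hypotheses[j], empirical)
            wins[i if w == 0 else j] += 1
    return int(np.argmax(wins))

# Example with made-up data: three candidate distributions over {0, 1};
# the true P = [0.45, 0.55] is closest to the second hypothesis.
rng = np.random.default_rng(0)
hypotheses = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
samples = rng.choice(2, size=5000, p=[0.45, 0.55])
print(select_hypothesis(hypotheses, samples, domain_size=2))  # likely 1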
See you there!
Sandeep