Date: Tue, 10 Apr 2018 14:10:32 -0500
From: Benjamin Miller <bmiller@xxxxxxxxxxx>
Subject: [theory students] Theory lunch and TCS+ tomorrow at noon in 2310 (Shay Moran)
Hi all,
We'll have theory lunch and the TCS+ talk tomorrow at noon in CS 2310. Shay Moran of IAS will be presenting "On the expressiveness of comparison queries" (abstract below). See you there!

Benjamin

Title: On the expressiveness of comparison queries

Abstract: Comparisons are a classical and well-studied algorithmic tool used in a variety of contexts and applications. We will discuss two manifestations of the expressiveness of these queries in machine learning and complexity theory (a more detailed overview is given below). Both manifestations are based on the notion of "inference dimension" that can be viewed as another instance of the fruitful link between machine learning and discrete mathematics - a link dating back to the discovery of the VC dimension.

1. Active classification with comparison queries [Kane, Lovett, Moran, Zhang '17]

Active learning is a model for semi-supervised learning that captures situations in which unlabeled data is abundant and manually labelling it is expensive. We consider an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label class. For example, in a recommendation system application (say, for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query), or which of two restaurants she liked more (a comparison query). We prove that the use of comparison queries leads to an exponential improvement in the query complexity of some well-studied problems. Specifically, for the class of half-spaces, we show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries.

2. Nearly optimal linear decision trees for k-SUM and related problems [Kane, Lovett, Moran '17]

We use the above framework to construct linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {-1,+1} coefficients. We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to poly-logarithmic factors.

Based on joint works with Daniel Kane, Shachar Lovett, and Jiapeng Zhang.
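
Since both parts of the abstract revolve around the comparison-query model, here is a small illustrative Python sketch of what such a query looks like in each setting. The function names and the simulated halfspace annotator are assumptions made for illustration only, not code from the papers.

# Illustrative sketch only (assumed names; not code from the papers):
# what a "comparison query" looks like in each of the two settings above.

import numpy as np

# Setting 1: active learning of half-spaces. A label query returns the
# class of one example; a comparison query reports which of two examples
# lies closer to the decision boundary. Here both are simulated with a
# known halfspace (w, b), which a real annotator would not expose.

def label_query(x, w, b):
    # Which side of the halfspace sign(<w, x> - b) is x on?
    return int(np.sign(np.dot(w, x) - b))

def comparison_query(x1, x2, w, b):
    # Is x1 at least as close to the boundary {x : <w, x> = b} as x2?
    # (The common factor 1/||w|| in the distances cancels, so it is omitted.)
    return abs(np.dot(w, x1) - b) <= abs(np.dot(w, x2) - b)

# Setting 2: k-SUM. A comparison query compares the sums of two k-subsets
# S and T; viewed as a linear query it is 2k-sparse, with nonzero
# coefficients in {-1, +1} (indices shared by S and T cancel to 0).

def ksum_comparison_query(x, S, T):
    # Returns sign(sum_{i in S} x_i - sum_{j in T} x_j).
    coeffs = np.zeros(len(x))
    coeffs[list(S)] += 1.0
    coeffs[list(T)] -= 1.0
    return int(np.sign(np.dot(coeffs, x)))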