This is the home page for NYU's Computer Science Theory seminar, organized by the NYU Theoretical Computer Science Group. We also host Discrete Mathematics talks organized by Jinyoung Park, so if you are looking for the Discrete Math seminar, you are in the right place!
Time and Location:
Thursday, 11AM
Warren Weaver Hall
251 Mercer Street
Room 317
You can sign up for the Theory Seminar mailing list here. For more information, or if you would like to invite a speaker or give a talk, please contact Christopher Musco.
Spring 2024
 Radu-Cristian Curticapean (IT University of Copenhagen)
 Counting (with) homomorphisms [+]

Abstract: In recent years, homomorphism counts between graphs were studied in different subfields within theoretical computer science: From a complexity-theoretic point of view, it was shown that various counting problems can be understood well by expressing them as linear combinations of homomorphism counts. Such linear combinations enjoy the exceptional and very useful "monotonicity property" of being exactly as hard to compute as their hardest terms. Since the complexity of computing the individual terms, i.e., of counting homomorphisms, is reasonably well understood, this strategy allowed the community to make significant progress in the study of the complexity of various counting problems arising in database theory and network sciences.
From a more abstract perspective, the numbers of homomorphisms between graphs encode valuable information. For example, a classical result by Lovász shows that homomorphism counts from all graphs into a given graph G identify G up to isomorphism. Restricting the class of graphs from which homomorphisms are counted gives rise to natural relaxations of graph isomorphism that are connected to the expressiveness of logics.
In this self-contained talk, we survey recent developments in the study of computational and abstract properties of homomorphism counts.
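As a toy illustration of the homomorphism counts at the heart of this talk, here is a brute-force counter in Python. The graph encoding and helper names are my own; this runs in exponential time and is only sensible for tiny graphs:

```python
from itertools import product

def hom(H, G):
    """Count graph homomorphisms from H to G by brute force.
    A graph is (num_vertices, edges); H's edges are pairs (u, v),
    G's edges are a set of frozensets {u, v}; vertices are 0..n-1."""
    nH, EH = H
    nG, EG = G
    count = 0
    for f in product(range(nG), repeat=nH):  # all vertex maps V(H) -> V(G)
        if all(frozenset({f[u], f[v]}) in EG for (u, v) in EH):
            count += 1
    return count

# K2 (a single edge) and K3 (a triangle), in the encoding above
K2 = (2, [(0, 1)])
K3 = (3, {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})})
```

For example, counting homomorphisms from a single edge into a triangle gives 6, one per ordered pair of distinct vertices.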
 Kunal Marwaha (University of Chicago)
 On the promise of quantum advantage for classical optimization [+]
 Abstract: The holy grail of quantum computing is a practical use for today's machines. A popular suggestion is that quantum computers can approximately solve optimization problems better than classical computers, despite little theoretical evidence. I take this claim seriously, analyzing and comparing average-case algorithms on CSPs of large (but fixed) clause density. It turns out that both algorithms and obstructions from spin glass theory naturally transfer to sparse CSPs, culminating in an optimal algorithm among all bounded-fanout quantum and classical circuits of depth up to \(\epsilon \cdot \log n\). This talk surveys several recent works, especially BFMVZ21, JMSS22, and CHM23.
 Yang P. Liu (Institute for Advanced Study)
 Sparsifying generalized linear models [+]
 Abstract: We consider the sparsification of sums \(F : \mathbb{R}^n \to \mathbb{R}_+\) where \(F(x) = f_1(\langle a_1,x \rangle) + ... + f_m(\langle a_m,x \rangle)\) for vectors \(a_1, ..., a_m \in \mathbb{R}^n\) and functions \(f_1, ... ,f_m : \mathbb{R} \to \mathbb{R}_+\). We show that \((1+\epsilon)\)-approximate sparsifiers of \(F\) with support size \(\frac{n}{\epsilon^2} (\log \frac{n}{\epsilon})^{O(1)}\) exist whenever the functions \(f_1, ... ,f_m\) are symmetric, monotone, and satisfy natural growth bounds. Additionally, we give efficient algorithms to compute such a sparsifier assuming each \(f_i\) can be evaluated efficiently. Our results generalize the classic case of \(\ell_p\) sparsification, where \(f_i(z) = z^p\), for \(p \in (0, 2]\), and give the first near-linear size sparsifiers in the well-studied setting of the Huber loss function and its generalizations, e.g., \(f_i(z) = \min\{z^p, z^2\}\) for \(0 < p \leq 2\). Our sparsification algorithm can be applied to give near-optimal reductions for optimizing a variety of generalized linear models including \(\ell_p\) regression for \(p \in (1, 2]\) to high accuracy, via solving \((\log n)^{O(1)}\) sparse regression instances with \(m \le n(\log n)^{O(1)}\), plus runtime proportional to the number of nonzero entries in the vectors \(a_1, \dots, a_m\). Based on joint work with Arun Jambulapati, James Lee, and Aaron Sidford.
 Ilias Zadik (Yale University)
 Sharp thresholds imply circuit lower bounds: From random 2-SAT to planted clique [+]
 Abstract: In this talk, we will present a recent result showing that if a Boolean function exhibits a sharp threshold at an arbitrary density, then this immediately implies bounded-depth circuit lower bounds for computing this function on average. By leveraging the rich literature on sharp thresholds, this method implies multiple new average-case circuit lower bounds. Time permitting, we will discuss new \(AC^0\) lower bounds for the random 2-SAT problem at the satisfiability threshold, the \(k\)-clique decision problem in random graphs, and also the planted clique model from the literature on statistical estimation. Joint work with David Gamarnik and Elchanan Mossel.
 Kevin Schewior (University of Southern Denmark)
 Prophet Inequalities: Samples, Semi-Online, and Trading [+]
 Abstract: Consider the following online problem: There is a gambler who receives a finite stream of independent draws from a probability distribution on the positive real numbers. The gambler can stop the stream at any point and receive the current value as final payoff. When the distribution is known, the gambler can achieve an expected payoff of at least roughly 0.745 times the expected maximum of the stream, the value of the “prophet”. In this talk, we review this result and discuss variants in which the distribution is not (or only partially) known, in which the gambler only receives limited information about the values, and in which the goal is not just to stop a single time.
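As a hands-on companion to the abstract above, here is a small simulation of the classical single-threshold stopping rule, which already guarantees half of the prophet's value. The function name, the empirical median threshold, and the "accept the last value as fallback" choice are illustrative assumptions of mine, not the talk's algorithms:

```python
import random

def prophet_simulation(dist_sample, T=5, trials=2000, seed=0):
    """Compare a single-threshold stopping rule to the prophet's payoff.
    The classic rule: accept the first value exceeding a threshold chosen
    so that the stream's maximum exceeds it about half the time."""
    rng = random.Random(seed)
    # Estimate the median of the maximum empirically.
    maxima = sorted(max(dist_sample(rng) for _ in range(T)) for _ in range(trials))
    threshold = maxima[trials // 2]
    gambler_total, prophet_total = 0.0, 0.0
    for _ in range(trials):
        stream = [dist_sample(rng) for _ in range(T)]
        prophet_total += max(stream)  # the prophet always takes the maximum
        # Gambler: stop at the first value above threshold, else take the last.
        accepted = next((v for v in stream if v >= threshold), stream[-1])
        gambler_total += accepted
    return gambler_total / trials, prophet_total / trials
```

Running it with, say, uniform draws (`lambda r: r.random()`) shows the gambler's average payoff landing between half of and all of the prophet's.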
 Ainesh Bakshi (Massachusetts Institute of Technology)
 Learning quantum Hamiltonians at any temperature in polynomial time [+]

Abstract: A quantum system of \(n\) interacting particles at thermal equilibrium can be described using a polynomial number of parameters, which correspond to the strengths of the forces between particles. A natural open question has been whether these parameters can be estimated efficiently by running experiments on the system. We resolve this question by giving the first polynomial-time algorithm for this task. This improves over prior work, which uses polynomially many samples from the system, but requires exponential time. In this talk, I will introduce the problem and describe some of the key ideas behind our algorithm. No background in quantum information is required.
Based on joint work with Allen Liu, Ankur Moitra and Ewin Tang.
 Abhinav Bhardwaj (Yale University)
 Matrix Perturbation: Davis-Kahan in the Infinity Norm [+]

Abstract: Perturbation theory is developed to analyze the impact of noise on data and has been an essential part of numerical analysis. Recently, it has played an important role in designing and analyzing matrix algorithms. One of the most useful tools in this subject, the Davis-Kahan sine theorem, provides an \(\ell_2\) error bound on the perturbation of the leading singular vectors (and spaces). We focus on the case when the signal matrix has low rank and the perturbation is random, which occurs often in practice. In an earlier paper, O'Rourke, Wang, and the second author showed that in this case, one can obtain an improved theorem. In particular, the noise-to-gap ratio condition in the original setting can be weakened considerably. In the current paper, we develop an infinity norm version of the O'Rourke-Vu-Wang result. The key ideas in the proof are a new bootstrapping argument and the so-called iterative leave-one-out method, which may be of independent interest. Applying the new bounds, we develop new, simple, and quick algorithms for several well-known problems, such as finding hidden partitions and matrix completion. The core of these new algorithms is the fact that one is now able to quickly approximate certain key objects in the infinity norm, which has critical advantages over approximations in the \(\ell_2\) norm, Frobenius norm, or spectral norm.
Joint work with Prof. Van Vu.
 Matthew Jenssen (King's College London)
 A new lower bound for sphere packing [+]

Abstract: The classical sphere packing problem asks: what is the densest possible arrangement of identical, nonoverlapping spheres in \(\mathbb{R}^d\)?
I will discuss a recent proof that there exists a sphere packing with density at least \( (1-o(1))\frac{d \log d}{2^{d+1}}. \) This improves upon previous bounds by a factor of order \(\log d\) and is the first improvement by more than a constant factor to Rogers' bound from 1947.
This is joint work with Marcelo Campos, Marcus Michelen and Julian Sahasrabudhe.
 Radoslav Fulek (New York University)
 Eliminating intersections of polygons by infinitesimal moves [+]

Abstract: We give a gentle introduction to research problems related to deciding whether the self-intersections of a given polygon can be eliminated by its infinitesimal perturbation. In computer science, the problem is known as the recognition of weakly simple polygons. Weakly simple polygons naturally generalize to the so-called ``weak embeddings of graphs''.
This scenario arises in applications in clustering, cartography, and graph visualization, where nearby vertices and edges are often bundled to the same point or overlapping arcs due to data compression, graph semantics, or low resolution.
The recent advances in the area led to the resolution of several seemingly unrelated long-standing open problems in computational geometry, such as the complexity status of the embeddability of simply connected polyhedra in \(\mathbb{R}^3\) and clustered planarity. We cover recent developments and open problems in the area, in particular the problem of deciding whether a pair of polygonal chains can be made disjoint by infinitesimal perturbations.
Based on joint work with Hugo Akitaya and Csaba Toth.
 No Meeting (Spring Break).
 Yuri Faenza (Columbia University)
 Von Neumann-Morgenstern Stability and Internal Closedness in Matching Theory [+]

Abstract: Gale and Shapley's family of stable matchings enjoys a rich mathematical structure that has been exploited over the years to construct efficient algorithms. In this talk, we investigate alternatives to stable matchings that rely on the concept of internal stability, a notion introduced for abstract games by von Neumann and Morgenstern and motivated by the need to find a set of mutually compatible solutions. The set of stable matchings is internally stable (IS). However, the class of IS sets is much richer, for an IS set of matchings may also include unstable matchings and/or exclude stable ones. In this paper, we focus on two families of IS sets of matchings: von Neumann-Morgenstern (vNM) stable and internally closed. We study structural and algorithmic questions around those concepts in both the marriage (i.e., bipartite) and roommate (i.e., non-bipartite) models. Our results imply that, in the marriage model, internally closed and vNM stable sets are an alternative to stable matchings that is as tractable as stable matchings themselves, a fairly rare occurrence in the area. En route to our results, we present classical algebraic structures associated with stable matchings, and some of their generalizations that we derive in our work.
Based on joint work with Cliff Stein, Jia Wan, and Xuan Zhang.
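For readers unfamiliar with the stable matchings that this talk generalizes, here is a minimal sketch of Gale and Shapley's deferred-acceptance algorithm; the data layout and names are my own:

```python
def gale_shapley(prop_prefs, recv_prefs):
    """Deferred acceptance: proposers propose in preference order;
    each receiver holds its best proposal so far. prop_prefs[p] and
    recv_prefs[r] are preference lists (most preferred first).
    Returns a dict mapping each proposer to its matched receiver."""
    n = len(prop_prefs)
    next_choice = [0] * n  # next index into each proposer's list
    engaged_to = {}        # receiver -> current proposer
    # rank[r][p] = position of proposer p in receiver r's list
    rank = [{p: i for i, p in enumerate(prefs)} for prefs in recv_prefs]
    free = list(range(n))
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged_to:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][engaged_to[r]]:
            free.append(engaged_to[r])  # bump the previous proposer
            engaged_to[r] = p
        else:
            free.append(p)              # rejected; p tries again
    return {p: r for r, p in engaged_to.items()}
```

The output is stable: no proposer-receiver pair prefers each other over their assigned partners.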
 Guy Moshkovitz (City University of New York)
 Recent progress in the study of tensor ranks [+]
 Abstract: While for matrices there is one notion of rank with many equivalent definitions, for tensors (higher-dimensional matrices) there are many inequivalent notions. The interplay between these notions plays a central role in the context of the structure-vs-randomness paradigm, and is closely related to important questions in additive combinatorics (e.g., Polynomial Freiman–Ruzsa) and computer science (e.g., Goldreich-Levin theorems), to name a few. In this talk I will discuss recent (exciting!) progress in the study of tensor ranks from the last few years.
 Dingding Dong (Harvard University)
 Nearly all k-SAT functions are unate [+]
 Abstract: We discuss recent work on the enumeration of k-SAT functions (Boolean functions that can be represented as k-SAT formulae). For any fixed constant k, we show that almost all k-SAT functions on n Boolean variables are unate, i.e., monotone after first negating some variables. This resolves a conjecture by Bollobás, Brightwell, and Leader from 2003. Joint work with József Balogh, Bernard Lidický, Nitya Mani and Yufei Zhao.
 Venkatesan Guruswami (UC Berkeley/Simons Institute)
 Parameterized Inapproximability Hypothesis under ETH [+]
 Abstract: The Parameterized Inapproximability Hypothesis (PIH) asserts that no fixed-parameter tractable (FPT) algorithm can distinguish a satisfiable CSP instance, parameterized by the number of variables, from one where every assignment fails to satisfy 1% of the constraints. PIH plays the role of the PCP theorem in parameterized complexity. In this talk, we will sketch how to prove PIH under the Exponential Time Hypothesis (ETH). The approach involves two broad steps. We identify a highly structured ETH-hard CSP whose variables take vector values, and whose constraints are either parallel or linear. Both kinds of constraints are then checked with constant soundness via a "parallel PCP of proximity" based on the Hadamard code. Together these steps establish that ETH implies PIH. In a recent follow-up, we show that Reed-Muller code based PCPs can be devised to even get near-optimal runtime lower bounds, e.g., showing that constant-factor approximations to \(k\)-clique on \(n\)-vertex graphs require \(n^{k^{1-o(1)}}\) time under ETH. Based on joint works with Bingkai Lin, Xuandi Ren, Yican Sun, Kewen Wu.
 Michal Koucký (Charles University)
 Locally consistent decomposition of strings with applications to edit distance sketching [+]

Abstract: In this talk I will present a new locally consistent decomposition of strings with applications to sketching of edit distance and approximate pattern matching. Edit distance is a string similarity measure that counts how many characters have to be deleted, inserted, or substituted in one string to get another one. Our ultimate goal is to be able to compute for each string a short sketch such that from the sketches of two strings we can estimate their edit distance. Our new string decomposition allows us to build such sketches of a certain length.
For a parameter k, each string x is decomposed into blocks that can be described by grammars of size O(k) (using some amount of randomness). If we take two strings x and y of edit distance at most k, then their block decompositions use the same number of grammars, and the i-th grammar of x is the same as the i-th grammar of y except for at most k indices i. The edit distance of x and y equals the sum of edit distances of the pairs of blocks where x and y differ. This decomposition can be used to build edit distance sketches and streaming approximate pattern matching algorithms.
Joint work with Sudatta Bhattacharya.
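For reference, edit distance itself is computed by the classical quadratic-time dynamic program below; the sketching results above aim to avoid exactly this pairwise computation:

```python
def edit_distance(x, y):
    """Classic dynamic program: dp[i][j] is the edit distance between
    the prefixes x[:i] and y[:j], via insert/delete/substitute."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of x[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete
                           dp[i][j - 1] + 1,          # insert
                           dp[i - 1][j - 1] + cost)   # substitute/match
    return dp[m][n]
```

For example, `edit_distance("kitten", "sitting")` is 3 (two substitutions and one insertion).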
 Van Vu (Yale University)
 Matrix computation with random noise [+]
 Abstract: Let \(A\) be a large matrix and \(E\) a random matrix. The aim is to compare \(f(A)\) and \(f(A+E)\), where \(f\) is a matrix functional. (For instance, one can take \(f(A) = A_k\), the best rank-\(k\) approximation of \(A\).) We study the case when \(A\) has a rapidly decaying spectrum (a special case is when \(A\) has low rank), which is quite common in practice, and obtain, for various functionals \(f\), new bounds which significantly improve those obtained by traditional methods. We will discuss these results with applications, including the multiple hidden cliques problem, PCA for spiked sample covariance matrices, and private matrix approximation.
 Rotem Oshman (Tel Aviv University, Princeton)
 Computationally Sound Proofs of Network Properties [+]

Abstract: In distributed certification, our goal is to certify that a network has a certain desired property, e.g., the network is connected, or the internal states of its nodes encode a valid spanning tree of the network. To this end, a prover generates certificates that are stored at each network node, and the nodes can then interact with one another in order to check their certificates and verify that the property holds. This can be viewed as a distributed analog of NP. However, like most theoretical models of distributed computing, the traditional model for distributed certification is information-theoretic: the prover and the network nodes are computationally unbounded, and the only efficiency measures we consider are the length of the certificates and the communication required to verify them. The information-theoretic nature of the model sets the area of distributed certification apart from classical complexity theory, and also makes it subject to strong lower bounds from the world of nondeterministic communication complexity.
In this talk I will describe a computationally sound version of distributed certification, where the prover and the network nodes are required to run in polynomial time in the size of the network. Introducing computational restrictions allows us to leverage cryptography, and we show that any network property in P can be certified using certificates of polylogarithmic length by a global prover that sees the entire network. Ideally, however, we would like the prover to also be distributed, and we show that this can be done as well: any polynomial-time distributed algorithm can efficiently certify its own execution with low overhead in terms of communication and rounds. The main challenge in the distributed setting is that no single network node can see the entire execution of the algorithm.
This is joint work with Eden Aldema Tshuva, Elette Boyle, Ran Cohen and Tal Moran. No background in cryptography is assumed for the talk.
 Sorrachai Yingchareonthawornchai (Hebrew University of Jerusalem)
 Deterministic k-Vertex Connectivity in k Max-flows [+]

Abstract: The vertex connectivity \(\kappa\) of an \(n\)-vertex \(m\)-edge undirected graph is the minimum number of vertices whose removal disconnects the graph. After more than half a century of intensive research, a randomized algorithm for computing vertex connectivity in near-optimal \(m^{1+o(1)}\) time was recently shown [LNPSY, STOC'21] via a (randomized) reduction to the max-flow problem. Deterministic algorithms, unfortunately, have remained much slower even though the max-flow problem can be solved deterministically in almost linear time [BCGKLPSS, FOCS'23]. When \(\kappa \le 2\), linear-time algorithms were known [Tarjan'72; Hopcroft-Tarjan'73]. For \(\kappa \geq 3\), the state-of-the-art deterministic algorithms run either in \(m^{1+o(1)} \cdot 2^{O(\kappa^2)}\) time [Saranurak, Yingchareonthawornchai, FOCS'22] or in time proportional to \(\tilde O(n + \kappa\cdot \min\{\kappa, \sqrt{n}\})\) [Even'75; Gabow, FOCS'00] max-flow calls, where we use \(\tilde O(\cdot)\) to hide a polylogarithmic factor.
In this talk, I will present a deterministic algorithm for computing vertex connectivity in time proportional to \(\kappa \cdot n^{o(1)}\) max-flow calls. For all values of \(\kappa\), our algorithm subsumes (up to a subpolynomial factor) all known state-of-the-art algorithms [Tarjan'72; Hopcroft-Tarjan'73; Even'75; Gabow, FOCS'00; Saranurak, Yingchareonthawornchai, FOCS'22]. In particular, our algorithm runs in \(m^{1+o(1)}\cdot n\) time in the worst case, answering the open question asked by [Gabow, JACM'06].
To obtain our result, we introduce a novel technique called common-neighborhood clustering and crucially use it both for reducing the number of max-flow calls and for reducing the size of the max-flow instances per call. Furthermore, we show how to exploit Ramanujan expanders more efficiently than the previous technique of Gabow [FOCS'00] and introduce a pseudorandom object called a selector into the context of vertex connectivity.
Joint work with Yonggang Jiang, Chaitanya Nalam, and Thatchaphol Saranurak.
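To make the quantity \(\kappa\) concrete, here is a brute-force computation of vertex connectivity by trying all candidate cut sets (exponential time, my own encoding; the talk's algorithms achieve this with roughly \(\kappa\) max-flow calls instead):

```python
from itertools import combinations

def vertex_connectivity(n, edges):
    """Brute-force kappa: size of the smallest vertex set whose removal
    disconnects the graph; returns n-1 for complete graphs (convention).
    Only feasible for tiny graphs."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(vertices):
        vertices = set(vertices)
        if len(vertices) <= 1:
            return True
        start = next(iter(vertices))
        seen, stack = {start}, [start]
        while stack:  # DFS restricted to the surviving vertices
            u = stack.pop()
            for w in adj[u] & vertices:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == vertices

    for k in range(n - 1):
        for cut in combinations(range(n), k):
            rest = set(range(n)) - set(cut)
            if len(rest) >= 2 and not connected(rest):
                return k
    return n - 1
```

On a path, removing one interior vertex disconnects it (\(\kappa = 1\)); a cycle needs two removals (\(\kappa = 2\)).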
Room 517, WWH
 Nishant Mehta (University of Victoria)
 Near-optimal per-action regret bounds for adaptive sleeping bandits and beyond [+]
 Abstract: The sleeping bandits problem is a variant of the classical multi-armed bandit problem. In each of a sequence of rounds: an adversary selects a set of arms (actions) which are available (unavailable arms are "asleep"), the learning algorithm (Learner) then pulls an arm, and finally the adversary sets the loss of each available arm. Learner suffers the loss of the selected arm and observes only that arm's loss. A standard performance measure for this type of problem with changing action sets is the per-action regret: the learning algorithm wishes, for all arms simultaneously, to have cumulative loss not much larger than the cumulative loss of the arm when considering only those rounds in which the arm was available. For this problem, I'll present the first algorithms which enjoy low per-action regret against reactive (i.e., adaptive) adversaries; moreover, the regret guarantees are near-optimal, unlike previous results which were far from optimal even for oblivious adversaries. Along the way, we will review the simpler, full-information version of the problem, known as sleeping experts. I will also mention various extensions of our results, including (i) new results for bandits with side information from sleeping experts; (ii) guarantees for the adaptive regret and tracking regret for standard (non-sleeping) bandits. The results described so far are joint work with my PhD student Quan Nguyen (who worked out most of the results). Time permitting, I may start the talk by presenting an elegant lower bound for a related setting (dying experts, where experts only disappear over time) when using a different, much harder notion of regret known as the ranking regret.
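As background, the full-information sleeping-experts setting mentioned above can be sketched with a multiplicative-weights rule restricted to the awake set. This is a simplified illustration of the setting (my own simplification, not the near-optimal algorithm from the talk):

```python
import math

def sleeping_mw(losses, awake, eta=0.5):
    """Multiplicative-weights sketch for full-information sleeping experts:
    each round, predict with weights normalized over the awake experts only,
    and update only awake experts' weights.
    losses[t][i] in [0, 1]; awake[t] is the set of available experts.
    Returns the algorithm's total expected loss."""
    n = len(losses[0])
    w = [1.0] * n
    total = 0.0
    for lt, at in zip(losses, awake):
        z = sum(w[i] for i in at)
        # expected loss of the randomized prediction over awake experts
        total += sum(w[i] / z * lt[i] for i in at)
        for i in at:
            w[i] *= math.exp(-eta * lt[i])  # exponential down-weighting
    return total
```

Per-action regret would then compare `total`, restricted to an arm's awake rounds, against that arm's cumulative loss on those rounds.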
Fall 2023
 Zihan Tan (Rutgers University)
 On \((1+\epsilon)\)-Approximate Flow Sparsifiers [+]

Abstract: Given a large graph \(G\) with a subset \(T\) of \(k\) of its vertices called terminals, a quality-\(q\) flow sparsifier is a small graph \(G'\) that contains \(T\) and preserves all multicommodity flows that can be routed between terminals in \(T\), to within factor \(q\). The problem of constructing flow sparsifiers with good (small) quality and (small) size has been a central problem in graph compression for decades.
A natural approach to constructing \(O(1)\)-quality flow sparsifiers, which was adopted in most previous constructions, is contraction. Andoni, Krauthgamer, and Gupta constructed a sketch of size \(f(k,\epsilon)\) that stores all feasible multicommodity flows up to factor \((1+\epsilon)\), raised the question of constructing quality-\((1+\epsilon)\) flow sparsifiers whose size only depends on \(k\) and \(\epsilon\) (but not on the number of vertices in the input graph \(G\)), and proposed a contraction-based framework towards it using their sketch result. In this paper, we settle their question for contraction-based flow sparsifiers, by showing that quality-\((1+\epsilon)\) contraction-based flow sparsifiers of size \(f(\epsilon)\) exist for all 5-terminal graphs, but not for all 6-terminal graphs. Our hardness result on 6-terminal graphs improves upon a recent hardness result by Krauthgamer and Mosenzon on exact (quality-1) flow sparsifiers, for contraction-based constructions. Our construction and proof utilize the notion of a tight span in metric geometry.
This talk is based on joint work with Yu Chen.
 Tal Yankovich (Tel Aviv University)
 Codes with locality [+]

Abstract: Codes with locality play a major role in modern coding theory. Examples of such codes are locally decodable codes (LDC), locally correctable codes (LCC), locally testable codes (LTC), relaxed LDC, and relaxed LCC. In this talk we will present a few results on codes with locality, for example, asymptotically good relaxed locally correctable codes with query complexity \((\log n)^{2+o(1)}\).
Based on joint works with Gil Cohen.
 Henry Yuen (Columbia University)
 A Complexity Theory for the Quantum Age? [+]

Abstract: How hard is it to compress quantum information? To produce a counterfeit quantum money state? To unscramble the Hawking radiation of a black hole? Traditional complexity theory, which is centered around decision problems and tasks with classical inputs and outputs, appears inadequate for reasoning about the complexity of such tasks involving quantum inputs and outputs.
In this talk I will discuss why we need a "fully quantum" complexity theory, and will describe some facets of such a theory. As a key illustration, I will explain how a "fully quantum" task called the Uhlmann Transformation Problem characterizes the complexity of seemingly unrelated problems, such as decoding noisy quantum channels, performing the Harlow-Hayden black hole radiation decoding task, and breaking the security of quantum bit commitment schemes. Along the way I will describe many open problems and directions to explore in the world of fully quantum complexity theory.
No prior quantum background will be assumed. Joint work with John Bostanci, Yuval Efron, Tony Metger, Alexander Poremba, and Luowen Qian (https://arxiv.org/abs/2306.13073).
 Hanna Komlós (Rutgers University)
 Online List Labeling [+]
 Abstract: The online list-labeling problem is a classical combinatorial problem with a large literature of upper bounds, lower bounds, and applications. The goal is to store a dynamically changing set of n items in an array of m slots, while maintaining the invariant that the items appear in sorted order, and while minimizing the relabeling cost, defined to be the number of items that are moved per insertion/deletion. There has long existed a gap between the lower and upper bounds in the most algorithmically interesting part of the problem's parameter space. We present our results, which narrow this gap for the first time in nearly four decades. Additionally, we will describe some recent results on composing multiple list-labeling algorithms in order to obtain the best combination of their respective cost guarantees.
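The relabeling cost being minimized is easiest to see in the naive baseline, which keeps items densely packed in sorted order and shifts on every insertion (an illustration of mine; real list-labeling algorithms spread items over \(m > n\) slots to amortize these moves):

```python
import bisect

def insert_sorted(arr, item):
    """Naive list-labeling baseline: keep items sorted in a dense array
    and count how many stored items shift on each insertion."""
    pos = bisect.bisect_left(arr, item)
    moves = len(arr) - pos  # items shifted one slot right
    arr.insert(pos, item)
    return moves
```

In the worst case (inserting at the front every time) this baseline moves all n items per insertion, which is what cleverer labeling schemes avoid.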
 Cosmin Pohoata (Emory University)
 A new upper bound for the Heilbronn triangle problem [+]
 Abstract: We discuss a new upper bound for the Heilbronn triangle problem, showing that for sufficiently large \(n\), in every configuration of \(n\) points chosen inside a unit square there exists a triangle of area less than \(n^{-8/7-1/2000}\). This is joint work with Alex Cohen and Dmitrii Zakharov.
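The quantity being bounded is easy to compute for any concrete point set; a small brute-force sketch using the shoelace formula (helper names are my own):

```python
from itertools import combinations

def min_triangle_area(points):
    """Smallest triangle area over all triples of 2D points,
    via the shoelace (cross product) formula."""
    def area(p, q, r):
        return abs((q[0] - p[0]) * (r[1] - p[1])
                   - (r[0] - p[0]) * (q[1] - p[1])) / 2
    return min(area(p, q, r) for p, q, r in combinations(points, 3))
```

For the four corners of the unit square, every triple forms a right triangle of area 1/2; the Heilbronn problem asks how large this minimum can stay as \(n\) grows.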
 Liana Yepremyan (Emory University)
 Ramsey graphs and additive combinatorics without addition [+]
 Abstract: A graph is Ramsey if its largest clique or independent set is of size logarithmic in the number of vertices. While almost all graphs are Ramsey, there is still no known explicit construction of Ramsey graphs. We discuss recent work on some questions in this area. Along the way, we study some fundamental problems in additive combinatorics, and discover that group structure is superfluous for these problems. Joint work with David Conlon, Jacob Fox and Huy Tuan Pham.
 Tamalika Mukherjee (Columbia University)
 How to Make Your Approximation Algorithm Private: A Black-Box Differentially-Private Transformation for Tunable Approximation Algorithms of Functions with Low Sensitivity [+]

Abstract: We develop a framework for efficiently transforming certain approximation algorithms into differentially private variants, in a black-box manner. Specifically, our results focus on algorithms \(A\) that output an approximation to a function \(f\) of the form \((1-a)f(x)-k \leq A(x) \leq (1+a)f(x)+k\), where \(k \in \mathbb{R}_{\geq 0}\) denotes additive error and \(a \in [0,1)\) denotes multiplicative error, and where both can be "tuned" to small enough values while incurring only a polynomial blowup in the running time/space. We show that such algorithms can be made DP without sacrificing accuracy, as long as the function \(f\) has small global sensitivity. We achieve these results by applying the smooth sensitivity framework developed by Nissim, Raskhodnikova, and Smith (STOC 2007).
Our framework naturally applies to transform non-private FPRAS and FPTAS algorithms into \(\epsilon\)-DP approximation algorithms, where the former case requires an additional postprocessing step. We apply our framework in the context of sublinear-time and sublinear-space algorithms, while preserving the nature of the algorithm in meaningful ranges of the parameters. Our results include the first (to the best of our knowledge) \(\epsilon\)-edge-DP sublinear-time algorithm for estimating the number of triangles, the number of connected components, and the weight of a minimum spanning tree of a graph. In the area of streaming algorithms, our results include \(\epsilon\)-DP algorithms for estimating \(L_p\)-norms, distinct elements, and weighted minimum spanning tree for both insertion-only and turnstile streams. Our transformation also provides a private version of the smooth histogram framework, which is commonly used for converting streaming algorithms into sliding window variants, and achieves a multiplicative approximation for many problems, such as estimating \(L_p\)-norms, distinct elements, and the length of the longest increasing subsequence.
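As a minimal example of the kind of primitive underlying such transformations, here is the basic Laplace mechanism for a query with known global sensitivity (a standard DP building block; the smooth-sensitivity framework used in the talk is a refinement of this idea):

```python
import random

def laplace_mechanism(value, sensitivity, eps, rng=random):
    """Return value + Laplace(sensitivity/eps) noise, the basic epsilon-DP
    release of a numeric query with the given global sensitivity.
    Laplace noise is sampled as a difference of two exponentials."""
    b = sensitivity / eps
    noise = rng.expovariate(1 / b) - rng.expovariate(1 / b)
    return value + noise
```

For example, privately releasing a triangle count would call this with the count as `value` and the count's global sensitivity; larger `eps` (weaker privacy) means less noise.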
 Yotam Gafni (Technion)
 Blockchain Transaction Fee Mechanisms: an Axiomatic and Non-Myopic Analysis [+]
 Abstract: Decentralized cryptocurrencies rely on aligning the incentives of users and miners to operate correctly and offer a high QoS to users. Recent literature studies the mechanism design problem of the auction serving as a cryptocurrency's Transaction Fee Mechanism (TFM). We find that the non-myopic modeling of miners falls close to the well-known problem of online buffer management for packet switching. The main difference is that unlike packets, which are of a fixed size throughout their lifetime, in a financial environment user preferences (and therefore revenue extraction) may be time-dependent. We study the competitive ratio guarantees given a certain discount rate and show how existing methods from packet scheduling perform suboptimally in the more general discounted setting. We find a novel, simple, memoryless, and competitively optimal deterministic algorithm for the semi-myopic case, as well as a randomized algorithm that achieves better performance than the best possible deterministic algorithm, for any discount rate.
 Edinah Gnang (Johns Hopkins University)
 A proof of the Kotzig–Ringel–Rosa conjecture and the Graham-Sloane conjecture [+]
 Abstract: We describe proofs of the long-standing Kotzig–Ringel–Rosa conjecture (1964) as well as the slightly more recent Graham-Sloane conjecture (1980), respectively known as the graceful tree labeling conjecture and the harmonious tree labeling conjecture. Both proofs stem from a functional reformulation of these conjectures and a marriage of Noga Alon's Combinatorial Nullstellensatz with a new composition lemma. If time permits, we will also discuss algorithmic aspects of the composition lemma.
 Michael Simkin (Massachusetts Institute of Technology)
 The number of \(n\)-queens configurations [+]
 Abstract: The \(n\)-queens problem is to determine \(Q(n)\), the number of ways to place \(n\) mutually non-threatening queens on an \(n \times n\) chess board. We show that there exists a constant \(1.94 < a < 1.9449\) such that \(Q(n) = ((1 + o(1)) n e^{-a})^n\). The constant \(a\) is characterized as the solution to a convex optimization problem in \(\mathcal{P}([-1/2,1/2]^2)\), the space of Borel probability measures on the square. The chief innovation is the introduction of limit objects for \(n\)-queens configurations, which we call "queenons". These form a convex set in \(\mathcal{P}([-1/2,1/2]^2)\). We define an entropy function that counts the number of \(n\)-queens configurations approximating a given queenon. The upper bound uses the entropy method of Radhakrishnan and Linial-Luria. For the lower bound we describe a randomized algorithm that constructs a configuration near a prespecified queenon and whose entropy matches that found in the upper bound. The enumeration of \(n\)-queens configurations is then obtained by maximizing the (concave) entropy function over the space of queenons.
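For small boards, \(Q(n)\) can be computed directly; here is a brute-force sketch over permutations (feasible only for small \(n\), in contrast to the asymptotics above):

```python
from itertools import permutations

def count_queens(n):
    """Brute-force Q(n): a placement is a permutation p (the queen in
    row i sits in column p[i]), which already rules out shared rows and
    columns; we only need to check that no two queens share a diagonal."""
    return sum(
        all(abs(p[i] - p[j]) != j - i
            for i in range(n) for j in range(i + 1, n))
        for p in permutations(range(n))
    )
```

This reproduces the classical small values \(Q(4) = 2\), \(Q(5) = 10\), \(Q(6) = 4\).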
 Greg Rosenthal (University of Warwick, University of Cambridge)
 Efficient Quantum State Synthesis with One Query [+]

Abstract: We present a polynomial-time quantum algorithm making a single query (in superposition) to a classical oracle, such that for every state \(|\psi\rangle\), there exists a choice of oracle that makes the algorithm construct an exponentially close approximation of \(|\psi\rangle\). Previous algorithms for this problem either used a linear number of queries and polynomial time, or a constant number of queries and polynomially many ancillae but no non-trivial bound on the runtime. As corollaries we do the following:
 We simplify the proof that \(statePSPACE \subseteq stateQIP\) (a quantum state analogue of \(PSPACE \subseteq IP\)) and show that a constant number of rounds of interaction suffices.
 We show that \(\mathsf{QAC}^0_f\) lower bounds for constructing explicit states would imply breakthrough circuit lower bounds for computing explicit Boolean functions.
 We prove that every \(n\)-qubit state can be constructed to within 0.01 error by an \(O(2^n/n)\)-size circuit over an appropriate finite gate set. More generally we give a size-error tradeoff which, by a counting argument, is optimal for any finite gate set.
 No Meeting (Thanksgiving).
 Victor Reis (Institute for Advanced Study)
 Optimal Online Discrepancy Minimization [+]
 Abstract: We prove that there exists an online algorithm that for any sequence of vectors \(v_1, \ldots, v_T\) in \(\mathbb{R}^n\) of Euclidean norm at most 1, arriving one at a time, decides random signs \(x_1, \ldots, x_T \in \{-1,1\}\) so that for every \(t \leq T\), their signed prefix sum is 10-subgaussian. This improves over the work of Alweiss, Liu and Sawhney, who kept prefix sums \(O(\sqrt{\log(nT)})\)-subgaussian, and gives an \(O(\sqrt{\log T})\) bound on the discrepancy \(\max_{t \le T} \|\sum_{i=1}^t x_i v_i\|_\infty\). Our proof combines a generalization of Banaszczyk's prefix balancing result to trees with a cloning argument to find distributions rather than single colorings. We also show a matching \(\Omega(\sqrt{\log T})\) strategy for an oblivious adversary.
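For comparison, the simplest online signing rule is the greedy one that picks each sign to oppose the current prefix sum; the sketch below is only this naive baseline, not the subgaussian algorithm from the talk:

```python
import random

def greedy_signs(vectors):
    """Naive online signing: choose x_t in {-1, +1} opposing the inner
    product of v_t with the current signed prefix sum (random on ties).
    A baseline heuristic only, not the algorithm from the talk."""
    prefix = [0.0] * len(vectors[0])
    signs = []
    for v in vectors:
        dot = sum(p * c for p, c in zip(prefix, v))
        x = -1 if dot > 0 else 1 if dot < 0 else random.choice((-1, 1))
        signs.append(x)
        prefix = [p + x * c for p, c in zip(prefix, v)]
    return signs, prefix

# repeated identical vectors get alternating signs, so the prefix sum stays bounded
signs, prefix = greedy_signs([[1.0, 0.0]] * 10)
assert max(abs(c) for c in prefix) <= 1.0
```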
 Kevin Pratt (New York University)
 Cap sets, corners, and fast matrix multiplication [+]
 Abstract: In this talk I will give a brief overview of the area of fast matrix multiplication, with an emphasis on connections to additive combinatorics. After discussing connections between the current main approach to fast matrix multiplication and the resolution of the cap-set problem, I will raise the following question: suppose that \(S \subseteq [n]^2\) contains no three points of the form \((x,y), (x,y+\delta), (x+\delta,y')\), where \(\delta \neq 0\). How big can \(S\) be? Trivially, \(n \le |S| \le n^2\). Slight improvements on these bounds are obtained from Shkredov's upper bound for the corners problem, which shows that \(|S| \le O(n^2/(\log \log n)^c)\) for some small \(c > 0\), and a construction due to Petrov, which shows that \(|S| \ge \Omega(n \log n/\sqrt{\log \log n})\). Could it be that for all \(\varepsilon > 0\), \(|S| \le O(n^{1+\varepsilon})\)? I will sketch that if so, this would rule out obtaining \(\omega = 2\) via an approach of Cohn, Umans, Kleinberg, and Szegedy.
 Matija Bucic (Princeton University)
 Robust sublinear expanders [+]

Abstract: Expander graphs are perhaps one of the most widely useful classes of graphs ever considered. In this talk, we will focus on a fairly weak notion of expanders called sublinear expanders, first introduced by Komlós and Szemerédi around 30 years ago. They have found many remarkable applications ever since. In particular, we will focus on certain robustness conditions one may impose on sublinear expanders and some applications of this idea, which include:
 recent progress on the classical Erdős–Gallai cycle decomposition conjecture
 an essentially tight answer to the classical Erdős unit distance problem for "most" real normed spaces and
 an asymptotic solution to the rainbow Turán problem for cycles, raised by Keevash, Mubayi, Sudakov and Verstraëte, with an interesting corollary in additive number theory.
Spring 2023
 Roie Levin (Tel Aviv University)
 Online Covering: Secretaries, Prophets and Universal Maps [+]

Abstract: We give a polynomial-time algorithm for online covering IPs with a competitive ratio of \(O(\log mn)\) when the constraints are revealed in random order, essentially matching the best possible offline bound of \(O(\log n)\) and circumventing the \(\Omega(\log m \log n)\) lower bound known in adversarial order. We then use this result to give an \(O(\log mn)\)-competitive algorithm for the prophet version of this problem, where constraints are sampled from a sequence of known distributions (in fact, our algorithm works even when only a single sample from each of the distributions is given). Since our algorithm is universal, as a byproduct we establish that only \(O(n)\) samples are necessary to build a universal map for online covering IPs with competitive ratio \(O(\log mn)\) on input sequences of length \(n\).
This talk is based on joint work with Anupam Gupta and Gregory Kehne, the first half of which appeared at FOCS 2021.
 Nati Linial (Hebrew University of Jerusalem)
 Geodesic Geometry on Graphs [+]

Abstract: The underlying idea of this lecture is that all important phenomena in geometry have interesting graph-theoretic counterparts. A path system in an \(n\)-vertex graph \(G=(V,E)\) is a collection of \({n \choose 2}\) paths \(P_{uv}=P_{vu}\), one per each pair of vertices \(u,v\), where \(P_{uv}\) connects the vertices \(u\) and \(v\). We say that \(P\) is consistent if it is closed under taking subpaths. Namely, for any vertex \(x\) that resides on \(P_{uv}\), it holds that \(P_{uv}\) is the concatenation of \(P_{ux}\) and \(P_{xv}\). There is a very simple way to generate consistent path systems: pick a positive weight function \(w\) on the edge set \(E\), and let \(P_{uv}\) be a \(w\)-shortest \(uv\) path. A path system that can be attained this way is said to be metric. Question: Are all consistent path systems metric? Answer: A very emphatic NO.
We call \(G\) metrizable if every consistent path system in \(G\) is metric. Our main discoveries are:
 1. Almost all graphs (in a very strong sense) are non-metrizable.
 2. Yet, all outerplanar graphs are metrizable.
 3. Metrizability is polynomial-time decidable.
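The weight-based construction is easy to make concrete: compute all-pairs shortest paths under chosen weights and check the subpath-closure condition directly. A small sketch (brute-force consistency check; the helper names and the example weights are ours):

```python
import itertools

def shortest_path_system(n, weights):
    """weights: dict frozenset({u, v}) -> positive weight.
    Returns P[(u, v)], a shortest u-v path, via Floyd-Warshall."""
    INF = float("inf")
    dist = [[0 if u == v else weights.get(frozenset({u, v}), INF)
             for v in range(n)] for u in range(n)]
    nxt = [[v if dist[u][v] < INF else None for v in range(n)] for u in range(n)]
    for k in range(n):
        for u in range(n):
            for v in range(n):
                if dist[u][k] + dist[k][v] < dist[u][v]:
                    dist[u][v] = dist[u][k] + dist[k][v]
                    nxt[u][v] = nxt[u][k]
    def path(u, v):
        p = [u]
        while u != v:
            u = nxt[u][v]
            p.append(u)
        return p
    return {(u, v): path(u, v) for u, v in itertools.permutations(range(n), 2)}

def is_consistent(P):
    """Closure under subpaths: P[(u,v)] restricted between any two of its
    vertices x, y must equal P[(x,y)]."""
    for p in P.values():
        for i, j in itertools.combinations(range(len(p)), 2):
            if P[(p[i], p[j])] != p[i:j + 1]:
                return False
    return True

# 4-cycle with unit weights plus one heavier chord
w = {frozenset({0, 1}): 1.0, frozenset({1, 2}): 1.0,
     frozenset({2, 3}): 1.0, frozenset({3, 0}): 1.0,
     frozenset({0, 2}): 1.9}
P = shortest_path_system(4, w)
assert is_consistent(P)   # metric path systems are always consistent
```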
 Prantar Ghosh (Rutgers University)
 Adversarially Robust Coloring for Graph Streams [+]

Abstract: A streaming algorithm is called adversarially robust if it provides correct outputs even when the stream updates are chosen by an adaptive adversary based on past outputs given by the algorithm. There has been a surge of interest in such algorithms over the last couple of years. I shall begin this talk by describing this model and then address the natural question of exhibiting a separation between classical and robust streaming. We shall see how this question leads to the problem of streaming graph coloring, where we need to maintain a proper vertex-coloring of a streaming graph using as few colors as possible. In the classical streaming model, Assadi, Chen, and Khanna showed that an n-vertex graph with maximum degree Δ can be colored with Δ+1 colors in O(n polylog(n)) space, i.e., semi-streaming space. We shall show that an adversarially robust algorithm running under a similar space bound must use Ω(Δ^2) colors, and that robust O(Δ)-coloring requires a linear amount of space, namely Ω(nΔ). These lower bounds not only establish the first separation between adversarially robust algorithms and ordinary randomized algorithms for a natural problem on insertion-only streams, but also the first separation between randomized and deterministic coloring algorithms for graph streams: this is because deterministic algorithms are automatically robust. I shall also go over some complementary upper bounds: in particular, there are robust coloring algorithms using O(Δ^2.5) colors in semi-streaming space and O(Δ^2) colors in O(n√Δ) space.
This is based on a couple of joint works: one with Amit Chakrabarti and Manuel Stoeckl, and another with Sepehr Assadi, Amit, and Manuel.
 Spencer Peters (Cornell University)
 Revisiting TimeSpace Tradeoffs for Function Inversion [+]
 Abstract: Function inversion is a fundamental problem in cryptography and in theoretical computer science more broadly. Given a function f: [N] → [N] from a finite domain to itself, the goal is to construct a small data structure, so that for any point y in the image of f, you can recover an inverse of y using few evaluations of f. First, I will describe a modification to Fiat and Naor's classic function inversion algorithm [STOC '91] that improves the time-space tradeoff in the regime where the number of evaluations T exceeds the data structure bit-length S. Then I'll present the first (barely) non-trivial non-adaptive algorithm for function inversion (a non-adaptive algorithm chooses all the points it will evaluate f on before seeing any of the results). This algorithm resolves a question posed by Corrigan-Gibbs and Kogan [TCC '19], and I'll show that the tradeoff it achieves is tight for a natural class of non-adaptive algorithms. Both results are joint work with Alexander Golovnev, Siyao Guo, and Noah Stephens-Davidowitz.
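The time-space tradeoff flavor is easy to illustrate in the special case where f is a permutation (this toy cycle-walking scheme is ours; it is not the Fiat–Naor algorithm, which handles arbitrary functions): store one "anchor" per T steps of each cycle, then invert by walking forward at most about 2T steps.

```python
def build_inversion_table(f, N, T):
    """For a permutation f on {0,...,N-1}, store roughly N/T anchor pairs:
    every T-th point along each cycle, keyed by the point T steps ahead of it."""
    table = {}
    seen = [False] * N
    for start in range(N):
        if seen[start]:
            continue
        cycle, x = [], start
        while not seen[x]:            # walk the cycle containing `start`
            seen[x] = True
            cycle.append(x)
            x = f(x)
        L = len(cycle)
        for i in range(0, L, T):
            table[cycle[(i + T) % L]] = cycle[i]   # f^T(cycle[i]) is this key
    return table

def invert(f, table, T, y):
    """Return x with f(x) == y, using at most about 2T evaluations of f."""
    z, k = y, 0
    while z not in table:             # reaches an anchor within T-1 steps
        z, k = f(z), k + 1
    x = table[z]                      # f^T(x) == z == f^k(y)
    for _ in range(T - k - 1):        # so f^(T-k-1)(x) is a preimage of y
        x = f(x)
    return x

# demo: a toy permutation on {0,...,63}; gcd(5, 64) = 1 makes it a bijection
f = lambda x: (5 * x + 3) % 64
table = build_inversion_table(f, 64, 8)
assert all(f(invert(f, table, 8, y)) == y for y in range(64))
```

Here storage is about N/T entries against roughly 2T evaluations per query, a crude S·T ≈ N tradeoff; the algorithms in the talk do substantially better for general functions.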
 Mark Braverman (Princeton University)
 Optimizationfriendly generic mechanisms without money [+]
 Abstract: Our goal is to develop a generic framework for converting modern gradient-descent based optimization algorithms into mechanisms where inputs come from self-interested agents. We focus on aggregating preferences from n players in a context without money. Special cases of this setting include voting, allocation of items by lottery, and matching. Our key technical contribution is a new meta-algorithm we call APEX (Adaptive Pricing Equalizing Externalities). The framework is sufficiently general to be combined with any optimization algorithm that is based on local search. In the talk I'll outline the algorithm, and open problems/research directions that it raises. As an application of the framework, I will discuss a special case: applying the framework to the problem of one-sided allocation with lotteries. In this case, we obtain a strengthening of the 1979 result by Hylland and Zeckhauser on allocation via a competitive equilibrium from equal incomes (CEEI). The [HZ79] result posits that there is a (fractional) allocation and a set of item prices such that the allocation is a competitive equilibrium given the prices. We further show that there is always a reweighting of the players' utility values such that running the standard unit-demand VCG with reweighted utilities leads to HZ-equilibrium prices. Interestingly, not all HZ competitive equilibria come from VCG prices.
 No Meeting
 Karthik C.S. (Rutgers University)
 (In)Approximability of Steiner Tree in \(L_p\) metrics [+]

Abstract: In the Discrete Steiner Tree problem (DST), we are given as input two sets of points in a metric space, called terminals and facilities respectively, and the goal is to find the minimum-cost tree connecting the terminals, by possibly introducing new points (called Steiner points) from the set of facilities, as nodes in the solution. In the Continuous Steiner Tree problem (CST) we are only given as input a set of terminals and are free to introduce Steiner points in the solution tree from anywhere in the metric space.
Determining the efficiency of (even approximately) computing Steiner trees has received a lot of attention from the theoretical computer science community. Despite this focus, our understanding of the approximability of Steiner trees remains poor, and our understanding of the hardness of approximation of these tasks is even worse. In this talk, I will detail some recent progress on the hardness of approximating DST and CST when the input points are in \(L_p\) metric spaces.
The talk is based on joint work with Henry Fleischmann and Surya Teja Gavva.
 Or Zamir (Institute for Advanced Study)
 Algorithmic Applications of Hypergraph and Partition Containers [+]
 Abstract: We present a general method to convert algorithms into faster algorithms for almost-regular input instances. Informally, an almost-regular input is an input in which the maximum degree is larger than the average degree by at most a constant factor. This family of inputs vastly generalizes several families of inputs for which we commonly have improved algorithms, including bounded-degree inputs and random inputs. It also generalizes families of inputs for which we don't usually have faster algorithms, including regular inputs of arbitrarily high degree and all very dense inputs. We apply our method to achieve breakthroughs in exact algorithms for several central NP-complete problems including \(k\)-SAT, Graph Coloring, and Maximum Independent Set. Our main tool is the first algorithmic application of the relatively new Hypergraph Container Method (Saxton and Thomason 2015; Balogh, Morris and Samotij 2015). This recent breakthrough, which generalizes an earlier version for graphs (Kleitman and Winston 1982, Sapozhenko 2001), has been used extensively in recent years in extremal combinatorics. An important component of our work is a new generalization of (hyper)graph containers to Partition Containers.
 Gil Cohen (Tel Aviv University)
 Random walks on rotating expanders [+]

Abstract: Random walks on expanders are extremely useful in theoretical computer science, and beyond. Unfortunately, though, they have an inherent cost: e.g., the spectral expansion of a Ramanujan graph deteriorates exponentially with the length of the walk. In this talk we will see how this exponential cost can be reduced to linear by applying a permutation after each random step. These permutations are tailor-made to the graph at hand, requiring no randomness to generate.
Our proof is established using the powerful framework of finite free probability and interlacing families that was introduced by Marcus, Spielman, and Srivastava in their seminal sequence of works on bipartite Ramanujan graphs.
Joint work with Gal Maor. Link to paper: https://eccc.weizmann.ac.il/report/2022/163/.
 Misha Khodak (Carnegie Mellon University)
 New Directions in Algorithms with Predictions: Learning and Privacy [+]
 Abstract: A burgeoning paradigm in algorithm design is learning-augmented algorithms, or algorithms with predictions, where methods can take advantage of a (possibly imperfect) prediction about their instance. While past work has focused on using predictions to improve competitive ratios and runtime, this talk addresses a different, salient question: how do we learn the predictions themselves? We introduce an approach for co-designing learning-augmented algorithms with their own custom learning algorithms, with the crucial step being to optimize nice surrogate losses bounding the algorithms' costs. This leads to improved sample complexity bounds for several learning-augmented graph algorithms and the first learning-theoretic guarantees for page migration with predictions, among other contributions. We also instantiate these ideas on the new direction of learning-augmented private algorithms, where the goal is to reduce utility loss due to privacy rather than runtime. Our approach drives numerous insights on how to robustly incorporate external information to release better statistics of sensitive datasets, which we verify empirically on the task of multiple quantile release.
 Rachel Cummings (Columbia University)
 Maintaining Privacy of the Minority without Ignoring It: Differentially Private Oversampling for Imbalanced Learning [+]

Abstract: The problem of learning from imbalanced datasets, where the label classes are not equally represented, arises often in practice. A widely used approach to address this problem is to rely on preprocessing techniques to amplify the minority class, e.g., by reusing or resampling the existing minority instances. However, when the training data are sensitive and privacy is a concern, this will also amplify privacy leakage from members of the minority class. Therefore, oversampling preprocessing steps must be designed with downstream privacy in mind. Our work provides insights about the implications of preprocessing before the use of DP algorithms, which we hope will be of broad interest.
In this paper, we first quantify the privacy degradation from running a commonly used Synthetic Minority Oversampling TEchnique (SMOTE) as a preprocessing step before applying any differentially private algorithm. We show that this degradation increases exponentially in the dimensionality of the data, which harms the accuracy of downstream private learning at a corresponding rate. We then present a differentially private variant of this algorithm (DP-SMOTE) that guarantees formal privacy for the minority class, even during oversampling. Finally, we show empirically that, when applied as a preprocessing step to highly imbalanced data before private regression, DP-SMOTE outperformed SMOTE and other baseline methods, due primarily to the privacy degradation in the non-private preprocessing methods.
Joint work with Yuliia Lut, Ethan Turok, and Marco Avella-Medina.
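For reference, the non-private SMOTE step itself is just interpolation: each synthetic point is a random convex combination of a minority point and one of its nearest minority neighbors. A plain sketch (ordinary SMOTE only, with no privacy guarantees; not the DP-SMOTE algorithm from the paper):

```python
import random

def smote(minority, n_synthetic, k=5):
    """SMOTE-style oversampling: each synthetic point interpolates between
    a random minority point and one of its k nearest minority neighbors."""
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for _ in range(n_synthetic):
        p = random.choice(minority)
        neighbors = sorted((q for q in minority if q != p),
                           key=lambda q: dist2(p, q))[:k]
        q = random.choice(neighbors)
        lam = random.random()   # interpolation coefficient in [0, 1)
        out.append(tuple(a + lam * (b - a) for a, b in zip(p, q)))
    return out
```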
 No Meeting
 Eric Balkanski (Columbia University)
 LearningAugmented Mechanism Design [+]

Abstract: We introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms.
We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problems of strategic facility location and strategic scheduling. We propose new strategyproof mechanisms that leverage predictions to guarantee an optimal tradeoff between consistency and robustness guarantees. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well even when the predictions are not fully accurate.
Joint work with Priyank Agrawal, Vasilis Gkatzelis, Tingting Ou, and Xizhi Tan
 Grigory Yaroslavtsev (George Mason University)
 Learning from Tuples [+]

Abstract: How many labeled tuples are required for learning a good representation of the data? This question arises in similarity learning via contrastive sampling, in algorithms for learning tree representations, etc. I will present tight bounds on the number of samples required for these problems, including learning general metric spaces, Euclidean and \(\ell_p\) spaces, tree metrics and other tree representations.
Joint work with Dmitrii Avdiukhin, Sauman Das, Orr Fischer, Faraz Mirza and Danny Vainstein.
Fall 2022
 Vladimir Podolskii (NYU)
 ConstantDepth Sorting Networks [+]

Abstract: We consider sorting networks that are constructed from comparators of arity \(k > 2\). That is, in our setting the arity of the comparators — or, in other words, the number of inputs that can be sorted at unit cost — is a parameter. We study its relationship with two other parameters — \(n\), the number of inputs, and \(d\), the depth. This model has received considerable attention. Partly, its motivation is to better understand the structure of sorting networks. In particular, sorting networks with large arity are related to recursive constructions of ordinary sorting networks. Additionally, studies of this model have a natural correspondence with a recent line of work on constructing circuits for majority functions from majority gates of lower fan-in.
We obtain the first lower bounds on the arity of constant-depth sorting networks. More precisely, we consider sorting networks of depth \(d\) up to 4, and determine the minimal \(k\) for which there is such a network with comparators of arity \(k\). For depths \(d=1, 2\) we observe that \(k=n\). For \(d=3\) we show that \(k=n/2\). For \(d=4\) the minimal arity becomes sublinear: \(k=\Theta(n^{2/3})\). This contrasts with the case of majority circuits, in which \(k=O(n^{2/3})\) is achievable already for depth \(d=3\).
Joint work with Natalia DobrokhotovaMaikova and Alexander Kozachinskiy: https://eccc.weizmann.ac.il/report/2022/116/
Bio: Vladimir Podolskii defended his PhD thesis in 2009 at Moscow State University. His research areas are computational complexity, tropical algebra, and applications of complexity theory to databases augmented with logical theories. Until recently he worked at the Steklov Mathematical Institute (Moscow) and HSE University (Moscow).
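Small networks in this model are easy to experiment with: by the classical 0-1 principle, a comparator network sorts all inputs iff it sorts all binary inputs. A sketch where a \(k\)-ary comparator sorts the values at \(k\) (increasing) positions; the two example networks are ours, chosen to be consistent with the \(k=n\) bound for depth \(d \le 2\) above:

```python
from itertools import product

def apply_network(network, values):
    """network: list of layers; each layer is a list of disjoint, increasing
    index tuples. A k-ary comparator sorts the values at its k positions."""
    vals = list(values)
    for layer in network:
        for comparator in layer:
            for i, v in zip(comparator, sorted(vals[i] for i in comparator)):
                vals[i] = v
    return vals

def sorts_everything(network, n):
    """0-1 principle: checking all 2^n binary inputs suffices."""
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product((0, 1), repeat=n))

# one arity-4 comparator sorts n = 4 inputs at depth 1 (k = n suffices for d = 1)
assert sorts_everything([[(0, 1, 2, 3)]], 4)
# this depth-2 network with arity-3 comparators fails on 4 inputs,
# e.g. on the binary input (1, 1, 1, 0)
assert not sorts_everything([[(0, 1, 2)], [(1, 2, 3)]], 4)
```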
 Sepehr Assadi (Rutgers)
 Deterministic Graph Coloring in the Streaming Model [+]

Abstract: Recent breakthroughs in graph streaming have led to the design of single-pass semi-streaming algorithms for various graph coloring problems such as (Δ+1)-coloring, Δ-coloring, degeneracy-coloring, coloring triangle-free graphs, and others. These algorithms are all randomized in crucial ways, and whether or not there is any deterministic analogue of them has remained an important open question in this line of work.
In this talk, we will discuss our recent result that fully resolves this question: there is no deterministic single-pass semi-streaming algorithm that, given a graph G with maximum degree Δ, can output a proper coloring of G using any number of colors which is subexponential in Δ. The proof is based on analyzing the multiparty communication complexity of a related communication game using elementary random graph theory arguments.
Based on joint work with Andrew Chen (Cornell) and Glenn Sun (UCLA): https://arxiv.org/abs/2109.14891.
 Peng Zhang (Rutgers)
 Hardness Results for Weaver's Discrepancy Problem [+]
 Abstract: Marcus, Spielman, and Srivastava (Annals of Mathematics, 2015) solved the Kadison–Singer Problem by proving a strong form of Weaver's discrepancy conjecture. They showed that for all \(\alpha > 0\) and all lists of vectors of norm at most \(\sqrt{\alpha}\) whose outer products sum to the identity, there exists a signed sum of those outer products with operator norm at most \(\sqrt{8 \alpha} + 2 \alpha\). Besides its relation to the Kadison–Singer problem, Weaver's discrepancy problem has applications in graph sparsification and randomized experimental design. In this talk, we will prove that it is NP-hard to distinguish such a list of vectors for which there is a signed sum that equals the zero matrix from those in which every signed sum has operator norm at least \(k \sqrt{\alpha}\), for some absolute constant \(k > 0\). Thus, it is NP-hard to construct a signing that is a constant factor better than that guaranteed to exist. This result is joint work with Daniel Spielman.
 Dominik Kempa (Stony Brook)
 LZEnd Parsing: Upper Bounds and Algorithmic Techniques [+]

Abstract: Lempel–Ziv (LZ77) compression is one of the most commonly used lossless compression algorithms. The basic idea is to greedily break the input string into blocks (called phrases), every time forming as a phrase the longest prefix of the unprocessed part that has an earlier occurrence. In 2010, Kreft and Navarro introduced a variant of LZ77 called LZEnd, which requires the previous occurrence of each phrase to end at the boundary of an already existing phrase. They conjectured that it achieves a compression that is always close to LZ77. In this talk, we: (1) present the first proof of this conjecture; (2) discuss the first data structure implementing fast random access to the original string using space linear in the size of LZEnd parsing. We will also give a broad overview of the increasingly popular field of compressed data structures/algorithms.
This is joint work with Barna Saha (UC San Diego) and was presented at SODA 2022.
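The greedy parsing rule described above is short to state in code; a quadratic-time sketch of the (self-referential) LZ77 parse, kept simple rather than efficient:

```python
def lz77_parse(s):
    """Greedy LZ77 parse: repeatedly take the longest prefix of the unparsed
    suffix that also occurs starting at an earlier position (the earlier
    occurrence may overlap the current one); emit a fresh single character
    when no such prefix exists."""
    phrases, i = [], 0
    while i < len(s):
        l = 1
        # grow the candidate while s[i:i+l] still occurs starting before i
        while i + l <= len(s) and s.find(s[i:i + l], 0, i + l - 1) != -1:
            l += 1
        step = max(l - 1, 1)
        phrases.append(s[i:i + step])
        i += step
    return phrases

print(lz77_parse("abababa"))  # ['a', 'b', 'ababa']
```

LZEnd additionally requires each phrase's earlier occurrence to end at an existing phrase boundary, which constrains the parse but (per the conjecture proved in the talk) costs little in compression.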
(This is a Tuesday!)
Room 202.
 Yuval Rabani (Hebrew University of Jerusalem)
 New Results in Online Computing [+]

Abstract: We prove new bounds, mostly lower bounds, on the competitive ratio of a few online problems, including the \(k\)-server problem and other related problems. In particular:
1. The randomized competitive ratio of the \(k\)-server problem is at least \(\Omega(\log^2 k)\) in some metric spaces. This refutes the longstanding randomized \(k\)-server conjecture that the competitive ratio is \(O(\log k)\) in all metric spaces.
2. Consequently, there is a lower bound of \(\Omega(\log^2 n)\) on the competitive ratio of metrical task systems in some \(n\)-point metric spaces, refuting an analogous conjecture that in all \(n\)-point metric spaces the competitive ratio is \(O(\log n)\). This lower bound matches asymptotically the best previously known universal upper bound.
3. The randomized competitive ratio of traversing width-\(w\) layered graphs is \(\Theta(w^2)\). The lower bound improves slightly on the previously best lower bound. The upper bound improves substantially on the previously best upper bound.
4. The \(k\)-server lower bounds imply improved lower bounds on the competitive ratio of distributed paging and metric allocation.
5. The universal lower bound on the randomized competitive ratio of the \(k\)-server problem is \(\Omega(\log k)\). Consequently, the universal lower bound for \(n\)-point metrical task systems is \(\Omega(\log n)\). These bounds improve the previously known universal lower bounds, and they match asymptotically existential upper bounds.
The talk is based on two papers, both joint work with Sebastien Bubeck and Christian Coester: "Shortest paths without a map, but with an entropic regularizer" (the upper bound in 3, to appear in FOCS 2022) and "The randomized \(k\)-server conjecture is false!" (all the lower bounds, manuscript).
 Jessica Sorrell (University of Pennsylvania)
 Reproducibility in Learning [+]

Abstract: Reproducibility is vital to ensuring scientific conclusions are reliable, but failures of reproducibility have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the reproducibility crisis is the explosion of methods for data generation, screening, testing, and analysis, where, crucially, only the combinations producing the most significant results are reported. Such practices (also known as p-hacking, data dredging, and researcher degrees of freedom) can lead to erroneous findings that appear to be significant, but that don't hold up when other researchers attempt to replicate them.
In this talk, we introduce a new notion of reproducibility for randomized algorithms. This notion ensures that with high probability, an algorithm returns exactly the same output when run with two samples from the same distribution. Despite the exceedingly strong demand of reproducibility, there are efficient reproducible algorithms for several fundamental problems in statistics and learning, including statistical queries, approximate heavy-hitters, medians, and halfspaces. We also discuss connections to other well-studied notions of algorithmic stability, such as differential privacy.
This talk is based on prior and ongoing work with Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, and Satchit Sivakumar.
 Riko Jacob (IT University of Copenhagen)
 Atomic Power in Forks: A Super-Logarithmic Lower Bound for Implementing Butterfly Networks in the Non-Atomic Binary Fork-Join Model [+]

Authors: Michael T. Goodrich, Riko Jacob, and Nodari Sitchinava
Abstract: We prove an \(\Omega(\log n \log \log n)\) lower bound for the span of implementing the \(n\)-input, \(\log n\)-depth FFT circuit (also known as the butterfly network) in the non-atomic binary fork-join model. In this model, memory-access synchronizations occur only through fork operations, which spawn two child threads, and join operations, which resume a parent thread when its child threads terminate. Our bound is asymptotically tight for the non-atomic binary fork-join model, which has been of interest of late due to its conceptual elegance and ability to capture asynchrony. Our bound implies a super-logarithmic lower bound in the non-atomic binary fork-join model for implementing the butterfly merging networks used, e.g., in Batcher's bitonic and odd-even mergesort networks. This lower bound also implies an asymptotic separation result for the atomic and non-atomic versions of the fork-join model, since, as we point out, FFT circuits can be implemented in the atomic binary fork-join model with span equal to their circuit depth.
 Huacheng Yu (Princeton University)
 Strong XOR Lemma for Communication with Bounded Rounds [+]
 Abstract: In this talk, we show a strong XOR lemma for bounded-round two-player randomized communication. For a function \(f:X\times Y\rightarrow\{0,1\}\), the \(n\)-fold XOR function \(f^{\oplus n}:X^n\times Y^n \rightarrow\{0,1\}\) maps \(n\) input pairs \((x_1,\ldots,x_n), (y_1,\ldots,y_n)\) to the XOR of the \(n\) output bits \(f(x_1,y_1)\oplus \cdots \oplus f(x_n, y_n)\). We prove that if every \(r\)-round communication protocol that computes \(f\) with probability 2/3 uses at least \(C\) bits of communication, then any \(r\)-round protocol that computes \(f^{\oplus n}\) with probability \(1/2+\exp(-O(n))\) must use \(n(r^{-O(r)}C-1)\) bits. When \(r\) is a constant and \(C\) is sufficiently large, this is \(\Omega(nC)\) bits. It matches the communication cost and the success probability of the trivial protocol that computes the \(n\) bits \(f(x_i,y_i)\) independently and outputs their XOR, up to a constant factor in \(n\). A similar XOR lemma has been proved for \(f\) whose communication lower bound can be obtained via bounding the discrepancy [Shaltiel03]. By the equivalence between the discrepancy and the correlation with 2-bit communication protocols, our new XOR lemma implies the previous result.
 Sophie Huiberts (Columbia University)
 Smoothed analysis of the simplex method [+]
 Abstract: Explaining why the simplex method is fast in practice, despite it taking exponential time in the theoretical worst case, continues to be a challenge. Smoothed analysis is a paradigm for addressing this question. During my talk I will present an improved upper bound on the smoothed complexity of the simplex method, as well as prove the first nontrivial lower bound on the smoothed complexity. This is joint work with Yin Tat Lee and Xinzhi Zhang.
 Roei Tell (Institute for Advanced Study/DIMACS)
 A lunch that looks free: Eliminating randomness from proof systems with no time overhead [+]

Abstract: In the first half of the talk I'll set up some background, by describing two recent directions in the study of derandomization: a non-black-box algorithmic framework, which replaces the classical PRG-based paradigm; and "free lunch" results, which eliminate randomness with essentially no runtime overhead.
In the second half we'll see one result along these directions: under hardness assumptions, every doubly efficient proof system with constantly many rounds of interaction can be simulated by a deterministic NP-type verifier, with essentially no runtime overhead, such that no efficient adversary can mislead the deterministic verifier. Consequences include an NP-type verifier of this type for #SAT, running in time \(2^{\epsilon n}\) for an arbitrarily small constant \(\epsilon > 0\); and a complexity-theoretic analysis of the Fiat–Shamir heuristic in cryptography.
The talk is based on a joint work with Lijie Chen (UC Berkeley).
 Michael Chapman (NYU)
 Property testing in Group theory [+]

Abstract: A property is testable if there exists a probabilistic test that distinguishes between objects satisfying this property and objects that are far away from satisfying the property. In other words, if an object passes the test with high probability, it is close to an object that satisfies the test with probability 1 (namely, an object with the desired property). Property testing results are useful for many TCS applications, mainly in error correction, probabilistically checkable proofs and hardness of approximation.
In this talk we are going to discuss two property testing problems that arise naturally in group theory. (All relevant group theoretic notions will be defined during the talk).
1. The first is due to Ulam, who in 1940 asked: given an almost homomorphism between two groups, is it close to an actual homomorphism between the groups? The answer depends on the choice of groups, as well as on what we mean by "almost" and "close". We will present some classical and recent results in TCS in this framework, specifically the BLR test and quantum soundness of 2-player games. We will discuss some recent developments and open problems in this field.
2. The second problem is the following: Is being a proper subgroup a testable property? We will discuss partial results and a very nice open problem in this direction.
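To make the testing framework above concrete, here is a small Python sketch of the BLR linearity test mentioned in item 1, for functions f: {0,1}^n → {0,1}. Homomorphisms of (Z/2Z)^n (parities) pass the test f(x) ⊕ f(y) = f(x ⊕ y) with probability 1, while corrupting even a single value already makes the test reject with positive probability. The function names and the tiny parameters are ours, purely for illustration; we enumerate all pairs exhaustively instead of sampling.

```python
def blr_rejection_rate(f, n):
    """Exact probability that the BLR test rejects, i.e. that
    f(x) XOR f(y) != f(x XOR y) for uniform x, y in {0,1}^n
    (enumerated exhaustively here instead of sampled)."""
    pts = range(2 ** n)
    bad = sum(1 for x in pts for y in pts if f(x) ^ f(y) != f(x ^ y))
    return bad / (2 ** n) ** 2

n = 3
# A genuine homomorphism: parity of the first and third bits.
linear = lambda x: bin(x & 0b101).count("1") % 2
# The same function with a single corrupted point.
table = {x: linear(x) for x in range(2 ** n)}
table[0b111] ^= 1
corrupted = table.__getitem__

print(blr_rejection_rate(linear, n))     # 0.0: homomorphisms always pass
print(blr_rejection_rate(corrupted, n))  # 0.28125 (= 18/64)
```

Note that the corrupted function is still close to linear (it differs at one of eight points), matching the soundness guarantee: a function passing with high probability is close to some homomorphism.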
 Aaron Sidford (Stanford)
 Efficiently Minimizing the Maximum Loss [+]
 Abstract: In this talk I will discuss recent advances in the fundamental robust optimization problem of minimizing the maximum of a finite number of convex loss functions. In particular, I will show how to develop stochastic methods for approximately solving this problem with a near-optimal number of gradient queries. Along the way, I will cover several broader tools in the design and analysis of efficient optimization algorithms, including accelerated methods for using ball-optimization oracles and stochastic bias-reduced gradient methods. This talk will include joint work with Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin, in particular https://arxiv.org/abs/2105.01778 and https://arxiv.org/abs/2106.09481.
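The accelerated and stochastic methods of the talk are well beyond a short snippet, but the underlying problem is easy to state in code. The following is a minimal Python sketch of plain subgradient descent on F(x) = max_i f_i(x), a slow baseline and not the talk's method; all names and the toy instance are our own illustration.

```python
import numpy as np

def minimize_max(fs, grads, x0, steps=2000):
    """Basic subgradient descent on F(x) = max_i f_i(x): at each step, follow
    a (sub)gradient of the currently-largest loss, with step size 1/sqrt(t).
    A toy baseline, not the accelerated/stochastic methods from the talk."""
    x = np.array(x0, dtype=float)
    best_x, best_val = x.copy(), max(f(x) for f in fs)
    for t in range(1, steps + 1):
        i = max(range(len(fs)), key=lambda j: fs[j](x))  # currently-worst loss
        x -= (1.0 / np.sqrt(t)) * grads[i](x)
        val = max(f(x) for f in fs)
        if val < best_val:                               # track the best iterate
            best_val, best_x = val, x.copy()
    return best_x, best_val

# Toy instance: minimize max((x-1)^2, (x+1)^2); the optimum is x = 0, value 1.
fs = [lambda x: float((x[0] - 1) ** 2), lambda x: float((x[0] + 1) ** 2)]
grads = [lambda x: np.array([2 * (x[0] - 1)]), lambda x: np.array([2 * (x[0] + 1)])]
x, val = minimize_max(fs, grads, [3.0])
print(x, val)  # near x = 0 with value near 1
```

The iterates oscillate around the minimizer with shrinking amplitude; the gradient-query efficiency of this baseline is exactly what the methods in the talk improve.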
Spring 2019
 Jeroen Zuiddam (IAS)
 The asymptotic spectrum of graphs: duality for Shannon capacity [+]
 Abstract: We give a dual description of the Shannon capacity of graphs. The Shannon capacity of a graph is the rate of growth of the independence number under taking the strong power, or in different language, it is the maximum rate at which information can be transmitted over a noisy communication channel. Our dual description gives Shannon capacity as a minimization over the "asymptotic spectrum of graphs", which as a consequence unifies previous results and naturally gives rise to new questions. Besides a gentle introduction to the asymptotic spectrum of graphs we will discuss, if time permits, Strassen's general theory of "asymptotic spectra" and the asymptotic spectrum of tensors.
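As a concrete instance of the objects in the abstract, here is a small Python check (our own illustrative code, not from the talk): the 5-cycle C5 has independence number 2, yet its strong square C5 ⊠ C5 contains an independent set of size 5, which witnesses Lovász's classical result that the Shannon capacity of C5 is √5 > 2.

```python
from itertools import combinations

def c5_adjacent(a, b):
    """Adjacency in the 5-cycle C5."""
    return (a - b) % 5 in (1, 4)

def strong_adjacent(u, v):
    """Adjacency in the strong product C5 x C5: distinct vertices whose
    coordinates are each equal or adjacent."""
    (a, b), (c, d) = u, v
    if u == v:
        return False
    return (a == c or c5_adjacent(a, c)) and (b == d or c5_adjacent(b, d))

# Independence number of C5 itself, by brute force: 2.
alpha_c5 = max(len(s) for r in range(6) for s in combinations(range(5), r)
               if all(not c5_adjacent(a, b) for a, b in combinations(s, 2)))

# The classical size-5 independent set in the strong square: {(i, 2i mod 5)}.
S = [(i, 2 * i % 5) for i in range(5)]
independent = all(not strong_adjacent(u, v) for u, v in combinations(S, 2))

print(alpha_c5, independent)  # 2 True
```

Since the independence number is supermultiplicative under the strong product, this single witness already gives capacity at least √5 per channel use; the hard direction (the matching upper bound, and the duality of the talk) is where the asymptotic spectrum enters.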
 Omri Weinstein (Columbia University)
 Data Structure Lower Bounds imply Rigidity [+]

Abstract: I will talk about new connections between arithmetic data structures, circuit lower bounds and pseudorandomness. As the main result, we show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t = ω(log^2 n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+ε)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (t = n^δ) data structure lower bounds against near-maximal space would imply super-linear circuit lower bounds for log-depth linear circuits (which would close a four-decade open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the inner and outer dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average-case reduction for rigidity, which is of independent interest.
Based mostly on joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard).
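To make the central notion concrete: the rigidity R_A(r) of a matrix A is the minimum number of entries one must change to bring its rank down to at most r. The Python sketch below (our own illustration, over F_2 rather than the group/linear model of the talk, and only feasible for tiny matrices) computes it by brute force over all flip patterns.

```python
from itertools import product

def rank_f2(rows, ncols):
    """Rank over GF(2); each row is an int bitmask of its entries."""
    rows, rank = list(rows), 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]  # eliminate this column elsewhere
        rank += 1
    return rank

def rigidity(rows, ncols, r):
    """R_A(r) over F_2: fewest entry flips making rank(A) <= r (brute force)."""
    best = len(rows) * ncols
    for pattern in product(range(2 ** ncols), repeat=len(rows)):
        flips = sum(bin(p).count("1") for p in pattern)
        if flips < best and rank_f2([row ^ p for row, p in zip(rows, pattern)], ncols) <= r:
            best = flips
    return best

I3 = [0b001, 0b010, 0b100]   # 3x3 identity over F_2 (rank 3)
print(rigidity(I3, 3, 2))     # 1: zeroing one diagonal entry drops the rank to 2
print(rigidity(I3, 3, 1))     # 2: two rows must be edited to reach rank 1
```

The explicit-construction problem in the abstract asks for matrices where no small number of changes suffices even for large rank targets, something this exhaustive search obviously cannot scale to.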
 Noah Stephens-Davidowitz (MIT)
 SETH-hardness of coding problems [+]

Abstract: We show that, assuming a common conjecture in complexity theory, there are "no nontrivial algorithms" for the two most important problems in coding theory: the Nearest Codeword Problem (NCP) and the Minimum Distance Problem (MDP). Specifically, for any constant \eps > 0, there is no N^{1-\eps}-time algorithm for codes with N codewords. In fact, the NCP result even holds for a family of codes with a single code of each cardinality, and our hardness result therefore also applies to the preprocessing variant of the problem.
These results are inspired by earlier work showing similar results for the analogous lattice problems (joint works with three other NYU alums: Huck Bennett and Sasha Golovnev, and with Divesh Aggarwal), but the proofs for coding problems are far simpler. As in those works, we also prove weaker hardness results for approximate versions of these problems (showing that there is no N^{o(1)}-time algorithm in this case).
Based on joint work with Vinod Vaikuntanathan.
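For context, the trivial algorithm that the abstract says essentially cannot be beaten is just a linear scan over the N codewords. A minimal Python sketch, with a toy repetition code of our own choosing:

```python
def hamming(u, v):
    """Hamming distance between two equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(code, target):
    """Trivial O(N * n) scan over all N codewords -- the hardness result above
    says that, in general, nothing substantially faster exists."""
    return min(code, key=lambda c: hamming(c, target))

# Toy example: the length-6 repetition code {000000, 111111}.
code = [(0,) * 6, (1,) * 6]
received = (1, 1, 0, 1, 0, 1)
print(nearest_codeword(code, received))  # (1, 1, 1, 1, 1, 1), at distance 2
```

For structured codes (e.g. Reed-Solomon) much faster decoders exist, which is why the hardness statement is about worst-case codes given as an explicit list.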
 Li-Yang Tan (Stanford University)
 A high-dimensional Littlewood-Offord inequality [+]
 Abstract: We prove a new Littlewood-Offord-type anti-concentration inequality for m-facet polytopes, a high-dimensional generalization of the classic Littlewood-Offord theorem. Our proof draws on and extends techniques from Kane's bound on the Boolean average sensitivity of m-facet polytopes. Joint work with Ryan O'Donnell and Rocco Servedio; manuscript available at https://arxiv.org/abs/1808.04035.
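The classical one-dimensional statement being generalized is easy to verify exactly: for nonzero coefficients a_1, ..., a_n and uniform random signs ε_i, no single value is hit by Σ ε_i a_i with probability exceeding C(n, n/2)/2^n = O(1/√n), with equality when all coefficients coincide. A small exact computation in Python (our own illustration):

```python
from collections import Counter
from math import comb

def max_atom(coeffs):
    """Largest point mass of sum(eps_i * a_i) over uniform signs eps_i = +/-1,
    computed exactly by convolving one coefficient at a time."""
    dist = Counter({0: 1.0})
    for a in coeffs:
        nxt = Counter()
        for s, p in dist.items():
            nxt[s + a] += p / 2
            nxt[s - a] += p / 2
        dist = nxt
    return max(dist.values())

n = 12
print(max_atom([1] * n), comb(n, n // 2) / 2 ** n)  # equal: the extremal case
print(max_atom(range(1, n + 1)))                     # distinct coefficients spread out more
```

The talk's result replaces the single value (a point, i.e. the intersection of halfspace boundaries in 1-D) with an m-facet polytope in high dimension.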
 Sahil Singla (Princeton)
 The Byzantine Secretary Problem [+]
 Abstract: In the classical secretary problem, a sequence of n elements arrives in a uniformly random order. The goal is to maximize the probability of selecting the largest element (or to maximize the expected value of the selected item). This model captures applications like online auctions, where we want to select the highest bidder. In many such applications, however, one may expect a few outlier arrivals that are adversarially placed in the arrival sequence. Can we still select a large element with good probability? Dynkin's popular 1/e-secretary algorithm is sensitive to even a single adversarial arrival: if the adversary gives one large bid at the beginning of the stream, the algorithm does not select any element at all. In this work we introduce the Byzantine Secretary problem, where we have two kinds of elements: green (good) and red (bad). The green elements arrive uniformly at random. The reds arrive adversarially. The goal is to find a large green element. It is easy to see that selecting the largest green element is not possible even when a small fraction of the arrivals are red, i.e., we cannot do much better than random guessing. Hence we introduce the second-max benchmark, where the goal is to select the second-largest green element or better. This dramatically improves our results. We study both the probability-maximization and the value-maximization settings. For probability-maximization, we show the existence of a good randomized algorithm, using the minimax principle. Specifically, we give an algorithm for the known-distribution case, based on trying to guess the second-max in hindsight and using this estimate as a good guess for the future. For value-maximization, we give an explicit poly(log^* n)-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of the second-max as we see more elements.
For the multiple secretary problem, where we can pick up to r secretaries, we show constant competitiveness as long as r is large enough. For this, we give an adaptive thresholding algorithm that raises and lowers thresholds depending on the quality and the quantity of recently selected elements.
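A quick simulation of the two facts quoted at the start of the abstract, in Python (our own illustrative code): Dynkin's classical rule wins with probability about 1/e on a uniformly random stream, while a single huge adversarial bid placed at the very start makes it select nothing at all.

```python
import random
from math import e

def dynkin(stream):
    """Classical secretary rule: observe the first ~n/e items, then accept the
    first later item beating everything seen so far (possibly accepting nothing)."""
    cutoff = max(1, int(len(stream) / e))
    threshold = max(stream[:cutoff])
    for x in stream[cutoff:]:
        if x > threshold:
            return x
    return None

random.seed(0)
n, trials, wins = 100, 20000, 0
for _ in range(trials):
    s = random.sample(range(10 * n), n)
    wins += dynkin(s) == max(s)
print(wins / trials)  # close to 1/e ~ 0.368

# One adversarial huge bid at the very start poisons the threshold:
print(dynkin([10 ** 9] + random.sample(range(n), n)))  # None: nothing is selected
```

The Byzantine Secretary model of the talk is exactly about restoring robust guarantees (against the second-max benchmark) in the presence of such adversarial arrivals.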
 Mert Saglam (University of Washington)
 Near log-convexity of heat and the k-Hamming distance problem [+]
 Abstract: We answer a 1982 conjecture of Erdős and Simonovits about the growth of the number of k-walks in a graph, which incidentally was studied even earlier by Blakley and Dixon in 1966. We prove this conjecture in a more general setup than the earlier treatment; furthermore, through a refinement and strengthening of this inequality, we resolve two related open questions in complexity theory: the communication complexity of the k-Hamming distance problem is Ω(k log k), and consequently any property tester for k-linearity requires Ω(k log k) queries.
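The basic quantities here are concrete: the number of k-walks in a graph with adjacency matrix A is 1ᵀA^k 1. The Python sketch below (our own illustration) computes these counts and checks an easy log-convexity instance, w_{2k+1}² ≤ w_{2k}·w_{2k+2}, which follows from Cauchy-Schwarz applied to A^k 1 and A^{k+1} 1; the inequality of Blakley-Dixon and Erdős-Simonovits that the talk resolves concerns the harder index patterns.

```python
def walk_counts(adj, kmax):
    """w_k = 1^T A^k 1 = total number of k-step walks in the graph,
    computed with exact integer arithmetic via repeated matrix-vector products."""
    n = len(adj)
    v = [1] * n
    counts = []
    for _ in range(kmax + 1):
        counts.append(sum(v))
        v = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
    return counts

# Path graph P4: 1 - 2 - 3 - 4
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
w = walk_counts(adj, 6)
print(w)  # [4, 6, 10, 16, 26, 42, 68]
# Cauchy-Schwarz instances: w_{2k+1}^2 <= w_{2k} * w_{2k+2}
print(all(w[2 * k + 1] ** 2 <= w[2 * k] * w[2 * k + 2] for k in range(3)))  # True
```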
 Mika Göös (IAS)
 Adventures in Monotone Complexity and TFNP [+]
 Abstract: *Separations:* We introduce a monotone variant of XOR-SAT and show it has exponential monotone circuit complexity. Since XOR-SAT is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity. *Characterizations:* We show that the communication (resp. query) analogue of PPA (a subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer-Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995). Joint work with Pritish Kamath, Robert Robere, and Dmitry Sokolov.