This is the home page for NYU's Computer Science Theory seminar, hosted jointly by the Courant Theoretical Computer Science Group and the Tandon Algorithms and Foundations Group.
Time and Location:
Thursday, 11AM
Warren Weaver Hall
251 Mercer Street
Room 317
You can sign up for the Theory Seminar mailing list here. For more information, or if you would like to invite a speaker or give a talk, please contact Christopher Musco.
Spring 2023
Feb. 2
 Roie Levin (Tel Aviv University)
 Online Covering: Secretaries, Prophets and Universal Maps [+]

Abstract: We give a polynomial-time algorithm for online covering IPs with a competitive ratio of \(O(\log mn)\) when the constraints are revealed in random order, essentially matching the best possible offline bound of \(O(\log n)\) and circumventing the \(\Omega(\log m \log n)\) lower bound known in adversarial order. We then use this result to give an \(O(\log mn)\)-competitive algorithm for the prophet version of this problem, where constraints are sampled from a sequence of known distributions (in fact, our algorithm works even when only a single sample from each of the distributions is given). Since our algorithm is universal, as a byproduct we establish that only \(O(n)\) samples are necessary to build a universal map for online covering IPs with competitive ratio \(O(\log mn)\) on input sequences of length \(n\).
This talk is based on joint work with Anupam Gupta and Gregory Kehne, the first half of which appeared at FOCS 2021.
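As background for the online covering setting, here is a minimal Python sketch of the classical adversarial-order fractional covering scheme (in the spirit of Buchbinder and Naor), shown only to illustrate the problem; it is not the random-order algorithm of the talk, and the unit-cost update rule is an illustrative simplification.

```python
def online_fractional_cover(n, constraints):
    """Constraints arrive online; each is a set S of indices demanding
    sum_{j in S} x_j >= 1. The fractional solution x may only increase."""
    x = [0.0] * n
    for S in constraints:
        while sum(x[j] for j in S) < 1:
            for j in S:
                x[j] = 2 * x[j] + 1.0 / len(S)   # multiplicative + additive bump
    return x

x = online_fractional_cover(4, [{0, 1}, {1, 2}, {2, 3}])
assert all(sum(x[j] for j in S) >= 1 for S in [{0, 1}, {1, 2}, {2, 3}])
```

Each arriving unsatisfied constraint triggers doubling of its variables, which is what yields logarithmic competitiveness in the adversarial-order analysis.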
Feb. 9
 Nati Linial (Hebrew University of Jerusalem)
 Geodesic Geometry on Graphs [+]

Abstract: The underlying idea of this lecture is that all important phenomena in geometry have interesting graph-theoretic counterparts. A path system in an \(n\)-vertex graph \(G=(V,E)\) is a collection of \({n \choose 2}\) paths \(P_{uv}=P_{vu}\), one per pair of vertices \(u,v\), where \(P_{uv}\) connects the vertices \(u\) and \(v\). We say that \(P\) is consistent if it is closed under taking subpaths. Namely, for any vertex \(x\) that resides on \(P_{uv}\), it holds that \(P_{uv}\) is the concatenation of \(P_{ux}\) and \(P_{xv}\). There is a very simple way to generate consistent path systems: pick a positive weight function \(w\) on the edge set \(E\), and let \(P_{uv}\) be a \(w\)-shortest \(uv\) path. A path system that can be attained this way is said to be metric. Question: Are all consistent path systems metric? Answer: A very emphatic NO.
We call \(G\) metrizable if every consistent path system in \(G\) is metric. Our main discoveries are:
 1. Almost all graphs (in a very strong sense) are non-metrizable.
 2. Yet, all outerplanar graphs are metrizable.
 3. Metrizability is polynomial-time decidable.
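To make these definitions concrete, here is a small Python sketch (illustrative, not from the talk): it builds a metric path system from a positive weight function via Floyd-Warshall shortest paths, then checks the consistency condition directly. The example graph and weights are arbitrary choices; ties between shortest paths would require consistent tie-breaking in general.

```python
import itertools

def shortest_path_system(n, edges):
    """edges: {(u, v): weight} with u < v; assumes a connected graph."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = dist[v][u] = w
        nxt[u][v], nxt[v][u] = v, u
    for k, i, j in itertools.product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:      # strict: keep first-found paths
            dist[i][j] = dist[i][k] + dist[k][j]
            nxt[i][j] = nxt[i][k]
    def path(u, v):                                   # reconstruct P_uv as a vertex list
        p = [u]
        while u != v:
            u = nxt[u][v]
            p.append(u)
        return p
    return path

def is_consistent(n, path):
    # P_uv must equal the concatenation of P_ux and P_xv for every x on P_uv
    for u, v in itertools.combinations(range(n), 2):
        p = path(u, v)
        for x in p:
            if p != path(u, x)[:-1] + path(x, v):
                return False
    return True

# a 4-cycle with unit weights plus a weighted chord
path = shortest_path_system(4, {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0,
                                (0, 3): 1.0, (0, 2): 1.5})
assert is_consistent(4, path)
```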
Feb. 16
 Prantar Ghosh (Rutgers University)
 Adversarially Robust Coloring for Graph Streams [+]

Abstract: A streaming algorithm is called adversarially robust if it provides correct outputs even when the stream updates are chosen by an adaptive adversary based on past outputs given by the algorithm. There has been a surge in interest in such algorithms over the last couple of years. I shall begin this talk by describing this model and then address the natural question of exhibiting a separation between classical and robust streaming. We shall see how this question leads to the problem of streaming graph coloring, where we need to maintain a proper vertex-coloring of a streaming graph using as few colors as possible. In the classical streaming model, Assadi, Chen, and Khanna showed that an n-vertex graph with maximum degree Δ can be colored with Δ+1 colors in O(n polylog(n)) space, i.e., semi-streaming space. We shall show that an adversarially robust algorithm running under a similar space bound must spend almost Ω(Δ^2) colors, and that robust O(Δ)-coloring requires a linear amount of space, namely Ω(nΔ). These lower bounds not only establish the first separation between adversarially robust algorithms and ordinary randomized algorithms for a natural problem on insertion-only streams, but also the first separation between randomized and deterministic coloring algorithms for graph streams: this is because deterministic algorithms are automatically robust. I shall also go over some complementary upper bounds: in particular, there are robust coloring algorithms using O(Δ^2.5) colors in semi-streaming space and O(Δ^2) colors in O(n√Δ) space.
This is based on a couple of joint works: one with Amit Chakrabarti and Manuel Stoeckl, and another with Sepehr Assadi, Amit, and Manuel.
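For context, the baseline fact behind the Δ+1 bound is the classical offline observation that greedy coloring never needs more than Δ+1 colors: each vertex has at most Δ neighbors, so some color in {0, ..., Δ} is always free. A minimal offline sketch (the streaming algorithms achieve this guarantee in low space, which this sketch does not attempt):

```python
def greedy_coloring(adj):
    """adj: dict vertex -> set of neighbors; returns a proper coloring
    using colors in {0, ..., max_degree}, i.e. at most Δ+1 colors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                 # at most Δ neighbors, so some c <= Δ is free
            c += 1
        color[v] = c
    return color

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}   # Δ = 3
col = greedy_coloring(adj)
assert all(col[u] != col[v] for u in adj for v in adj[u])
assert max(col.values()) + 1 <= 3 + 1                # at most Δ+1 colors
```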
Feb. 23
 Spencer Peters (Cornell University)
 Revisiting TimeSpace Tradeoffs for Function Inversion [+]
 Abstract: Function inversion is a fundamental problem in cryptography and in theoretical computer science more broadly. Given a function f: [N] → [N] from a finite domain to itself, the goal is to construct a small data structure, so that for any point y in the image of f, you can recover an inverse of y using few evaluations of f. First, I will describe a modification to Fiat and Naor's classic function inversion algorithm [STOC '91] that improves the time-space tradeoff in the regime where the number of evaluations T exceeds the data structure bit-length S. Then I'll present the first (barely) non-trivial non-adaptive algorithm for function inversion (a non-adaptive algorithm chooses all the points it will evaluate f on before seeing any of the results). This algorithm resolves a question posed by Corrigan-Gibbs and Kogan [TCC '19], and I'll show that the tradeoff it achieves is tight for a natural class of non-adaptive algorithms. Both results are joint work with Alexander Golovnev, Siyao Guo, and Noah Stephens-Davidowitz.
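The flavor of such time-space tradeoffs can be illustrated with a Hellman-style chain table (a simplification for intuition only; the Fiat-Naor algorithm discussed in the talk is considerably more involved). All parameters and the toy function below are illustrative choices:

```python
import random

def build_table(f, N, num_chains, chain_len):
    """Store endpoint -> start for chains x, f(x), ..., f^chain_len(x)."""
    rng = random.Random(0)
    table = {}
    for _ in range(num_chains):
        start = x = rng.randrange(N)
        for _ in range(chain_len):
            x = f(x)
        table[x] = start
    return table

def invert(f, table, chain_len, y):
    """Search for x with f(x) = y using O(chain_len^2) evaluations of f."""
    z = y
    for _ in range(chain_len + 1):
        if z in table:                   # y's preimage may lie on this chain
            x = table[z]
            for _ in range(chain_len):
                if f(x) == y:
                    return x
                x = f(x)
        z = f(z)
    return None                          # y not covered by any stored chain

f = lambda x: (7 * x + 3) % 101          # a toy permutation of [0, 101)
table = build_table(f, 101, num_chains=30, chain_len=10)
x = invert(f, table, 10, f(42))
assert x is None or f(x) == f(42)        # any answer returned is a true inverse
```

The table stores only chain endpoints (small space S), while inversion re-walks chains (more time T), which is the tradeoff the talk quantifies.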
Mar. 2
HOLD
Mar. 9 (Chris away)
Mar. 23
 Karthik C.S. (Rutgers University)
 TBA [+]
 Abstract: TBA
Mar. 30
HOLD
Apr. 6
 Or Zamir (Institute for Advanced Study)
 TBA [+]
 Abstract: TBA
Apr. 13
Apr. 19, 3:00pm (this is a Wednesday)
 Rachel Cummings (Columbia University)
 TBA [+]
 Abstract: TBA
Apr. 27
May 4
Fall 2022
Sept. 22
 Vladimir Podolskii (NYU)
 ConstantDepth Sorting Networks [+]

Abstract: We consider sorting networks that are constructed from comparators of arity k > 2. That is, in our setting the arity of the comparators — or, in other words, the number of inputs that can be sorted at unit cost — is a parameter. We study its relationship with two other parameters — n, the number of inputs, and d, the depth. This model received considerable attention. Partly, its motivation is to better understand the structure of sorting networks. In particular, sorting networks with large arity are related to recursive constructions of ordinary sorting networks. Additionally, studies of this model have a natural correspondence with a recent line of work on constructing circuits for majority functions from majority gates of lower fan-in.
We obtain the first lower bounds on the arity of constant-depth sorting networks. More precisely, we consider sorting networks of depth \(d\) up to 4, and determine the minimal \(k\) for which there is such a network with comparators of arity \(k\). For depths \(d=1,2\) we observe that \(k=n\). For \(d=3\) we show that \(k=n/2\). For \(d=4\) the minimal arity becomes sublinear: \(k=\Theta(n^{2/3})\). This contrasts with the case of majority circuits, in which \(k=O(n^{2/3})\) is achievable already for depth \(d=3\).
Joint work with Natalia DobrokhotovaMaikova and Alexander Kozachinskiy: https://eccc.weizmann.ac.il/report/2022/116/
Bio: Vladimir Podolskii defended his PhD thesis in 2009 at Moscow State University. His research areas are computational complexity, tropical algebra, and applications of complexity theory to databases augmented with logical theories. Until recently he worked at the Steklov Mathematical Institute (Moscow) and HSE University (Moscow).
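A hedged sketch of the model only (none of the paper's constructions): a depth-\(d\) network is \(d\) layers of disjoint arity-\(k\) comparators, each of which sorts the values on its wires, and the 0-1 principle, which also holds for \(k\)-ary comparators, lets us verify small networks exhaustively.

```python
from itertools import product

def apply_layer(values, layer):
    """layer: disjoint tuples of wire indices in increasing order;
    each comparator sorts the values on its wires in place."""
    out = list(values)
    for comp in layer:
        for i, v in zip(comp, sorted(out[i] for i in comp)):
            out[i] = v
    return out

def run_network(values, layers):
    for layer in layers:
        values = apply_layer(values, layer)
    return values

def sorts_everything(n, layers):
    # 0-1 principle: a network sorts all inputs iff it sorts all 0/1 inputs
    return all(run_network(list(bits), layers) == sorted(bits)
               for bits in product([0, 1], repeat=n))

assert sorts_everything(4, [[(0, 1, 2, 3)]])                          # d = 1, k = n
assert not sorts_everything(4, [[(0, 1), (2, 3)], [(0, 2), (1, 3)]])  # d = 2, k = 2 fails
```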
Sept. 29
 Sepehr Assadi (Rutgers)
 Deterministic Graph Coloring in the Streaming Model [+]

Abstract: Recent breakthroughs in graph streaming have led to the design of single-pass semi-streaming algorithms for various graph coloring problems such as (Δ+1)-coloring, Δ-coloring, degeneracy-coloring, coloring triangle-free graphs, and others. These algorithms are all randomized in crucial ways, and whether or not there is any deterministic analogue of them has remained an important open question in this line of work.
In this talk, we will discuss our recent result that fully resolves this question: there is no deterministic single-pass semi-streaming algorithm that, given a graph G with maximum degree Δ, can output a proper coloring of G using any number of colors which is subexponential in Δ. The proof is based on analyzing the multi-party communication complexity of a related communication game using elementary random graph theory arguments.
Based on joint work with Andrew Chen (Cornell) and Glenn Sun (UCLA): https://arxiv.org/abs/2109.14891
Oct. 6
 Peng Zhang (Rutgers)
 Hardness Results for Weaver's Discrepancy Problem [+]
 Abstract: Marcus, Spielman, and Srivastava (Annals of Mathematics, 2015) solved the Kadison-Singer Problem by proving a strong form of Weaver’s discrepancy conjecture. They showed that for all \(\alpha > 0\) and all lists of vectors of norm at most \(\sqrt{\alpha}\) whose outer products sum to the identity, there exists a signed sum of those outer products with operator norm at most \(\sqrt{8 \alpha} + 2 \alpha\). Besides its relation to the Kadison-Singer problem, Weaver’s discrepancy problem has applications in graph sparsification and randomized experimental design. In this talk, we will prove that it is NP-hard to distinguish such a list of vectors for which there is a signed sum that equals the zero matrix from those in which every signed sum has operator norm at least \(k \sqrt{\alpha}\), for some absolute constant \(k > 0\). Thus, it is NP-hard to construct a signing that is a constant factor better than that guaranteed to exist. This result is joint work with Daniel Spielman.
Oct. 13
 Dominik Kempa (Stony Brook)
 LZEnd Parsing: Upper Bounds and Algorithmic Techniques [+]

Abstract: Lempel-Ziv (LZ77) compression is one of the most commonly used lossless compression algorithms. The basic idea is to greedily break the input string into blocks (called phrases), every time forming as a phrase the longest prefix of the unprocessed part that has an earlier occurrence. In 2010, Kreft and Navarro introduced a variant of LZ77 called LZEnd, which requires the previous occurrence of each phrase to end at the boundary of an already existing phrase. They conjectured that it achieves a compression that is always close to LZ77. In this talk, we: (1) present the first proof of this conjecture; (2) discuss the first data structure implementing fast random access to the original string using space linear in the size of LZEnd parsing. We will also give a broad overview of the increasingly popular field of compressed data structures/algorithms.
This is joint work with Barna Saha (UC San Diego) and was presented at SODA 2022.
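To illustrate the definition, here is a naive cubic-time LZEnd parser and decoder (an illustrative sketch; practical parsers and the data structures in the talk are far more efficient). Each phrase copies a string whose earlier occurrence ends at a previous phrase boundary, then appends one literal character:

```python
def lzend_parse(text):
    n, i = len(text), 0
    ends, phrases = [], []                   # ends: index just past each phrase
    while i < n:
        best_l, best_e = 0, None
        for e in ends:                       # copied part must end at a boundary
            for l in range(1, min(e, n - 1 - i) + 1):
                if l > best_l and text[e - l:e] == text[i:i + l]:
                    best_l, best_e = l, e
        phrases.append((best_e, best_l, text[i + best_l]))
        ends.append(i + best_l + 1)
        i += best_l + 1
    return phrases

def lzend_decode(phrases):
    out = ""
    for e, l, c in phrases:                  # copy the l chars ending at e, add literal
        out += (out[e - l:e] if l else "") + c
    return out

for s in ["banana", "aaaaaa", "abracadabra"]:
    assert lzend_decode(lzend_parse(s)) == s
```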
Oct. 18
(this is a Tuesday!)
Room 202.
 Yuval Rabani (Hebrew University of Jerusalem)
 New Results in Online Computing [+]

Abstract: We prove new bounds, mostly lower bounds, on the competitive ratio of a few online problems, including the \(k\)-server problem and other related problems. In particular:
1. The randomized competitive ratio of the \(k\)-server problem is at least \(\Omega(\log^2 k)\) in some metric spaces. This refutes the long-standing randomized \(k\)-server conjecture that the competitive ratio is \(O(\log k)\) in all metric spaces.
2. Consequently, there is a lower bound of \(\Omega(\log^2 n)\) on the competitive ratio of metrical task systems in some \(n\)point metric spaces, refuting an analogous conjecture that in all \(n\)point metric spaces the competitive ratio is \(O(\log n)\). This lower bound matches asymptotically the best previously known universal upper bound.
3. The randomized competitive ratio of traversing width\(w\) layered graphs is \(\Theta(w^2)\). The lower bound improves slightly the previously best lower bound. The upper bound improves substantially the previously best upper bound.
4. The \(k\)-server lower bounds imply improved lower bounds on the competitive ratio of distributed paging and metric allocation.
5. The universal lower bound on the randomized competitive ratio of the \(k\)-server problem is \(\Omega(\log k)\). Consequently, the universal lower bound for \(n\)-point metrical task systems is \(\Omega(\log n)\). These bounds improve the previously known universal lower bounds, and they match asymptotically existential upper bounds.
The talk is based on two papers, both joint work with Sebastien Bubeck and Christian Coester: "Shortest paths without a map, but with an entropic regularizer" (the upper bound in 3, to appear in FOCS 2022) and "The randomized \(k\)-server conjecture is false!" (all the lower bounds, manuscript).
Oct. 27
 Jessica Sorrell (University of Pennsylvania)
 Reproducibility in Learning [+]

Abstract: Reproducibility is vital to ensuring scientific conclusions are reliable, but failures of reproducibility have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the reproducibility crisis is the explosion of methods for data generation, screening, testing, and analysis, where, crucially, only the combinations producing the most significant results are reported. Such practices (also known as p-hacking, data dredging, and researcher degrees of freedom) can lead to erroneous findings that appear to be significant, but that don’t hold up when other researchers attempt to replicate them.
In this talk, we introduce a new notion of reproducibility for randomized algorithms. This notion ensures that with high probability, an algorithm returns exactly the same output when run with two samples from the same distribution. Despite the exceedingly strong demand of reproducibility, there are efficient reproducible algorithms for several fundamental problems in statistics and learning, including statistical queries, approximate heavy-hitters, medians, and halfspaces. We also discuss connections to other well-studied notions of algorithmic stability, such as differential privacy.
This talk is based on prior and ongoing work with Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, and Satchit Sivakumar.
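One standard trick in this area, shown here as a hedged illustration and not necessarily the construction from the paper, is to round an empirical statistic to a randomly shifted grid, with the shift drawn from shared randomness reused across runs; two samples from the same distribution then produce the exact same output with high probability:

```python
import random

def reproducible_mean(sample, shared_seed, grid=0.5):
    # the grid offset is the shared randomness, identical across runs
    offset = random.Random(shared_seed).uniform(0, grid)
    m = sum(sample) / len(sample)
    return offset + grid * round((m - offset) / grid)   # snap to shifted grid

data = random.Random(1)
s1 = [data.uniform(0, 1) for _ in range(5000)]   # two independent samples
s2 = [data.uniform(0, 1) for _ in range(5000)]   # from the same distribution
r1 = reproducible_mean(s1, shared_seed=7)
r2 = reproducible_mean(s2, shared_seed=7)
# With high probability over the shared seed, r1 == r2 exactly, because both
# empirical means land in the same cell of the randomly shifted grid.
```

The grid spacing trades accuracy against reproducibility: a coarser grid makes boundary-crossing (and hence disagreement) less likely.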
Nov. 3
 Riko Jacob (IT University of Copenhagen)
 Atomic Power in Forks: A Super-Logarithmic Lower Bound for Implementing Butterfly Networks in the Non-atomic Binary Fork-Join Model [+]

Authors: Michael T. Goodrich, Riko Jacob, and Nodari Sitchinava
Abstract: We prove an \(\Omega(\log n \log \log n)\) lower bound for the span of implementing the \(n\)-input, \(\log n\)-depth FFT circuit (also known as the butterfly network) in the non-atomic binary fork-join model. In this model, memory-access synchronizations occur only through fork operations, which spawn two child threads, and join operations, which resume a parent thread when its child threads terminate. Our bound is asymptotically tight for the non-atomic binary fork-join model, which has been of interest of late, due to its conceptual elegance and ability to capture asynchrony. Our bound implies a super-logarithmic lower bound in the non-atomic binary fork-join model for implementing the butterfly merging networks used, e.g., in Batcher's bitonic and odd-even mergesort networks. This lower bound also implies an asymptotic separation result for the atomic and non-atomic versions of the fork-join model, since, as we point out, FFT circuits can be implemented in the atomic binary fork-join model with span equal to their circuit depth.
Nov. 10
 Huacheng Yu (Princeton University)
 Strong XOR Lemma for Communication with Bounded Rounds [+]
 Abstract: In this talk, we show a strong XOR lemma for bounded-round two-player randomized communication. For a function \(f:X\times Y\rightarrow\{0,1\}\), the \(n\)-fold XOR function \(f^{\oplus n}:X^n\times Y^n \rightarrow\{0,1\}\) maps \(n\) input pairs \((x_1,...,x_n), (y_1,...,y_n)\) to the XOR of the \(n\) output bits \(f(x_1,y_1)\oplus \ldots \oplus f(x_n, y_n)\). We prove that if every \(r\)-round communication protocol that computes \(f\) with probability 2/3 uses at least \(C\) bits of communication, then any \(r\)-round protocol that computes \(f^{\oplus n}\) with probability \(1/2+\exp(-O(n))\) must use \(n(r^{-O(r)}C-1)\) bits. When \(r\) is a constant and \(C\) is sufficiently large, this is \(\Omega(nC)\) bits. It matches the communication cost and the success probability of the trivial protocol that computes the \(n\) bits \(f(x_i,y_i)\) independently and outputs their XOR, up to a constant factor in \(n\). A similar XOR lemma has been proved for \(f\) whose communication lower bound can be obtained via bounding the discrepancy [Shaltiel03]. By the equivalence between the discrepancy and the correlation with 2-bit communication protocols, our new XOR lemma implies the previous result.
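The \(f^{\oplus n}\) construction is easy to state in code; the choice of \(f\) below (inner product mod 2 on 3-bit inputs) is an arbitrary illustration:

```python
from functools import reduce

def ip(x, y):                        # f : X x Y -> {0,1}, inner product mod 2
    return sum(a & b for a, b in zip(x, y)) % 2

def f_xor(xs, ys):                   # the n-fold XOR function f^{(+)n}
    return reduce(lambda a, b: a ^ b, (ip(x, y) for x, y in zip(xs, ys)))

xs = [(1, 0, 1), (1, 1, 0)]          # n = 2 input pairs on each side
ys = [(1, 1, 1), (0, 1, 0)]
assert f_xor(xs, ys) == ip(xs[0], ys[0]) ^ ip(xs[1], ys[1])  # by definition
```

The trivial protocol mentioned in the abstract simply runs a protocol for each `ip(x, y)` independently and XORs the resulting bits.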
Nov. 17
 Sophie Huiberts (Columbia University)
 Smoothed analysis of the simplex method [+]
 Abstract: Explaining why the simplex method is fast in practice, despite it taking exponential time in the theoretical worst case, continues to be a challenge. Smoothed analysis is a paradigm for addressing this question. During my talk I will present an improved upper bound on the smoothed complexity of the simplex method, as well as prove the first nontrivial lower bound on the smoothed complexity. This is joint work with Yin Tat Lee and Xinzhi Zhang.
Dec. 1
 Roei Tell (Institute for Advanced Study/DIMACS)
 A lunch that looks free: Eliminating randomness from proof systems with no time overhead [+]

Abstract: In the first half of the talk I'll set up some background, by describing two recent directions in the study of derandomization: a non-black-box algorithmic framework, which replaces the classical PRG-based paradigm; and "free lunch" results, which eliminate randomness with essentially no runtime overhead.
In the second half we'll see one result along these directions: under hardness assumptions, every doubly efficient proof system with constantly many rounds of interaction can be simulated by a deterministic NP-type verifier, with essentially no runtime overhead, such that no efficient adversary can mislead the deterministic verifier. Consequences include an NP-type verifier of this type for #SAT, running in time \(2^{\epsilon n}\) for an arbitrarily small constant \(\epsilon > 0\); and a complexity-theoretic analysis of the Fiat-Shamir heuristic in cryptography.
The talk is based on a joint work with Lijie Chen (UC Berkeley).
Dec. 8
 Michael Chapman (NYU)
 Property testing in Group theory [+]

Abstract: A property is testable if there exists a probabilistic test that distinguishes between objects satisfying this property and objects that are far away from satisfying the property. In other words, if an object passes the test with high probability, it is close to an object that satisfies the test with probability 1 (namely, an object with the desired property). Property testing results are useful for many TCS applications, mainly in error correction, probabilistically checkable proofs and hardness of approximation.
In this talk we are going to discuss two property testing problems that arise naturally in group theory. (All relevant group theoretic notions will be defined during the talk).
 1. The first is due to Ulam, who in 1940 asked: Given an almost homomorphism between two groups, is it close to an actual homomorphism between the groups? The answer depends on the choice of groups, as well as what we mean by "almost" and "close". We will present some classical and recent results in TCS in this framework, specifically the BLR test and quantum soundness of 2-player games. We will discuss some recent developments and open problems in this field.
2. The second problem is the following: Is being a proper subgroup a testable property? We will discuss partial results and a very nice open problem in this direction.
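For reference, the BLR test mentioned in the first problem is a three-query test over F_2^n: query f at x, y, and x+y, and accept iff f(x) + f(y) = f(x+y). Linear functions always pass, and functions far from linear are rejected with high probability. A sketch with illustrative example functions:

```python
import random

def blr_test(f, n, trials=100, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = tuple(rng.randrange(2) for _ in range(n))
        y = tuple(rng.randrange(2) for _ in range(n))
        s = tuple(a ^ b for a, b in zip(x, y))         # x + y over F_2^n
        if f(x) ^ f(y) != f(s):
            return False                               # far from linear (w.h.p.)
    return True

linear = lambda x: x[0] ^ x[2]                 # a parity function: always passes
not_linear = lambda x: (x[0] & x[1]) ^ x[2]    # AND term: rejected w.h.p.
assert blr_test(linear, 3)
assert not blr_test(not_linear, 3)
```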
Dec. 15
 Aaron Sidford (Stanford)
 Efficiently Minimizing the Maximum Loss [+]
 Abstract: In this talk I will discuss recent advances in the fundamental robust optimization problem of minimizing the maximum of a finite number of convex loss functions. In particular I will show how to develop stochastic methods for approximately solving this problem with a near-optimal number of gradient queries. Along the way, I will cover several broader tools in the design and analysis of efficient optimization algorithms, including accelerated methods for using ball-optimization oracles and stochastic bias-reduced gradient methods. This talk will include joint work with Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin, including https://arxiv.org/abs/2105.01778 and https://arxiv.org/abs/2106.09481.
Spring 2019
May 21
 Jeroen Zuiddam (IAS)
 The asymptotic spectrum of graphs: duality for Shannon capacity [+]
 Abstract: We give a dual description of the Shannon capacity of graphs. The Shannon capacity of a graph is the rate of growth of the independence number under taking the strong power, or in different language, it is the maximum rate at which information can be transmitted over a noisy communication channel. Our dual description gives Shannon capacity as a minimization over the "asymptotic spectrum of graphs", which as a consequence unifies previous results and naturally gives rise to new questions. Besides a gentle introduction to the asymptotic spectrum of graphs we will discuss, if time permits, Strassen's general theory of "asymptotic spectra" and the asymptotic spectrum of tensors.
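The definitions can be checked by brute force on the 5-cycle, the classical example: \(\alpha(C_5) = 2\), yet the strong square has an independent set of size 5, so the Shannon capacity of \(C_5\) is at least \(\sqrt{5}\) (by Lovász's theorem, exactly \(\sqrt{5}\)). A small illustrative script:

```python
from itertools import combinations, product

def c5_adjacent(u, v):                      # adjacency in the 5-cycle C5
    return (u - v) % 5 in (1, 4)

def strong_adjacent(a, b):                  # adjacency in the strong product C5 x C5
    (u1, u2), (v1, v2) = a, b
    if a == b:
        return False
    return ((u1 == v1 or c5_adjacent(u1, v1)) and
            (u2 == v2 or c5_adjacent(u2, v2)))

def has_independent_set(vertices, adjacent, k):
    return any(all(not adjacent(a, b) for a, b in combinations(S, 2))
               for S in combinations(vertices, k))

assert has_independent_set(range(5), c5_adjacent, 2)        # alpha(C5) >= 2
assert not has_independent_set(range(5), c5_adjacent, 3)    # alpha(C5) = 2
V = list(product(range(5), repeat=2))
assert has_independent_set(V, strong_adjacent, 5)           # e.g. {(i, 2i mod 5)}
assert not has_independent_set(V, strong_adjacent, 6)       # so capacity >= sqrt(5)
```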
May 16
 Omri Weinstein (Columbia University)
 Data Structure Lower Bounds imply Rigidity [+]

Abstract: I will talk about new connections between arithmetic data structures, circuit lower bounds and pseudorandomness. As the main result, we show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of \(t \geq \omega(\log^2 n)\) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space \(s = (1+\varepsilon)n\), would already imply a semi-explicit (\(P^{NP}\)) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (\(t \geq n^{\delta}\)) data structure lower bounds against near-maximal space would imply superlinear circuit lower bounds for log-depth linear circuits (which would close a four-decade open question). In the succinct space regime (\(s = n+o(n)\)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the inner and outer dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average-case reduction for rigidity, which is of independent interest.
Based mostly on joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard).
May 2
 Noah StephensDavidowitz (MIT)
 SETH-hardness of coding problems [+]

Abstract: We show that, assuming a common conjecture in complexity theory, there are "no nontrivial algorithms" for the two most important problems in coding theory: the Nearest Codeword Problem (NCP) and the Minimum Distance Problem (MDP). Specifically, for any constant \(\varepsilon > 0\), there is no \(N^{1-\varepsilon}\)-time algorithm for codes with \(N\) codewords. In fact, the NCP result even holds for a family of codes with a single code of each cardinality, and our hardness result therefore also applies to the preprocessing variant of the problem.
These results are inspired by earlier work showing similar results for the analogous lattice problems (joint works with three other NYU alums: Huck Bennett and Sasha Golovnev, and with Divesh Aggarwal), but the proofs for coding problems are far simpler. As in those works, we also prove weaker hardness results for approximate versions of these problems (showing that there is no \(N^{o(1)}\)-time algorithm in this case).
Based on joint work with Vinod Vaikuntanathan.
Apr. 11
 LiYang Tan (Stanford University)
 A high-dimensional Littlewood-Offord inequality [+]
 Abstract: We prove a new Littlewood-Offord-type anti-concentration inequality for m-facet polytopes, a high-dimensional generalization of the classic Littlewood-Offord theorem. Our proof draws on and extends techniques from Kane's bound on the boolean average sensitivity of m-facet polytopes. Joint work with Ryan O'Donnell and Rocco Servedio; manuscript available at https://arxiv.org/abs/1808.04035.
Mar. 21
 Sahil Singla (Princeton)
 The Byzantine Secretary Problem [+]
 Abstract: In the classical secretary problem, a sequence of n elements arrive in a uniformly random order. The goal is to maximize the probability of selecting the largest element (or to maximize the expected value of the selected item). This model captures applications like online auctions, where we want to select the highest bidder. In many such applications, however, one may expect a few outlier arrivals that are adversarially placed in the arrival sequence. Can we still select a large element with good probability? Dynkin’s popular 1/e-secretary algorithm is sensitive to even a single adversarial arrival: if the adversary gives one large bid at the beginning of the stream, the algorithm does not select any element at all. In this work we introduce the Byzantine Secretary problem, where we have two kinds of elements: green (good) and red (bad). The green elements arrive uniformly at random. The reds arrive adversarially. The goal is to find a large green element. It is easy to see that selecting the largest green element is not possible even when a small fraction of the arrivals is red, i.e., we cannot do much better than random guessing. Hence we introduce the second-max benchmark, where the goal is to select the second-largest green element or better. This dramatically improves our results. We study both the probability-maximization and the value-maximization settings. For probability-maximization, we show the existence of a good randomized algorithm, using the minimax principle. Specifically, we give an algorithm for the known-distribution case, based on trying to guess the second-max in hindsight, and using this estimate as a good guess for the future. For value-maximization, we give an explicit poly(log* n)-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of the second-max as we see more elements.
For the multiple secretary problem, where we can pick up to r secretaries, we show constant competitiveness as long as r is large enough. For this, we give an adaptive thresholding algorithm that raises and lowers thresholds depending on the quality and the quantity of recently selected elements.
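For reference, Dynkin's 1/e rule mentioned above, together with its fragility to a single adversarial arrival, is easy to simulate (illustrative sketch):

```python
import math

def dynkin(stream):
    """Observe the first n/e elements, then accept the first one beating them all."""
    n = len(stream)
    cutoff = int(n / math.e)
    best_seen = max(stream[:cutoff], default=float("-inf"))
    for x in stream[cutoff:]:
        if x > best_seen:
            return x
    return None                          # the algorithm selects nothing

assert dynkin([1, 5, 2, 9, 3, 4, 6, 7]) == 9       # picks the maximum here
assert dynkin([100, 5, 2, 9, 3, 4, 6, 7]) is None  # one huge adversarial bid up front
```

A single large red element in the observation phase raises the threshold so high that no later element is ever accepted, which is exactly the sensitivity the abstract describes.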
Mar. 14
 Mert Saglam (University of Washington)
 Near log-convexity of heat and the k-Hamming distance problem [+]
 Abstract: We answer a 1982 conjecture of Erdős and Simonovits about the growth of the number of k-walks in a graph, which incidentally was studied even earlier by Blakley and Dixon in 1966. We prove this conjecture in a more general setup than the earlier treatment; furthermore, through a refinement and strengthening of this inequality, we resolve two related open questions in complexity theory: the communication complexity of the k-Hamming distance is \(\Omega(k \log k)\), and consequently any property tester for k-linearity requires \(\Omega(k \log k)\) queries.
Jan. 24
 Mika Göös (IAS)
 Adventures in Monotone Complexity and TFNP [+]
 Abstract: *Separations:* We introduce a monotone variant of XOR-SAT and show it has exponential monotone circuit complexity. Since XOR-SAT is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity. *Characterizations:* We show that the communication (resp. query) analogue of PPA (a subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer-Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995). Joint work with Pritish Kamath, Robert Robere, and Dmitry Sokolov.