


LATIN 2024


Pablo Barceló (Universidad Católica de Chile), TBA
Pierre Fraigniaud (Université Paris Cité and CNRS), TBA
TBA
Penny Haxell (University of Waterloo), TBA
TBA
Eunjung Kim (CNRS and LAMSADE lab at Université Paris Dauphine), TBA
Jon Kleinberg (Cornell University), TBA





LATIN 2022


Jeffrey D. Ullman (Stanford University, USA), Abstractions in Computer Science Theory

The creative use of abstractions is central to computer science. But not all abstractions address the same kinds of problems. We identify four different reasons abstractions appear in computer-science theory, and focus on the "declarative abstractions," whose purpose is to raise the level at which we program. Important declarative abstractions appear in the theory of compiling and the theory of databases. We shall touch on the most important elements in those two fields.

David Eppstein (University of California, Irvine, USA), The Complexity of Iterated Reversible Computation

Reversible computation has been studied for over 60 years as a way to evade fundamental physical limits on the power needed for irreversible computational steps, and because quantum computing circuits are necessarily reversible. We study a class of problems based on computing the iterated values of a reversible function. The story leads through Thomason's lollipop algorithm in graph theory, circuit complexity, and reversible cellular automata, to card shuffling, the reflections of light in jewels, and curves on topological surfaces, and involves both PSPACE-hard problems and problems with unexpected polynomial-time algorithms.

Merav Parter (Weizmann Institute of Science, Rehovot, Israel), A Graph Theoretic Approach for Resilient Distributed Algorithms
Following the immense recent advances in distributed networks, the explosive growth of the Internet, and our increased dependency on these infrastructures, guaranteeing the uninterrupted operation of communication networks has become a major objective in network algorithms. The modern instantiations of distributed networks, such as the Bitcoin network and cloud computing, introduce new security challenges that deserve urgent attention in both theory and practice. In this talk, I will present a unified framework for obtaining fast, resilient and secure distributed algorithms for fundamental graph problems. Our approach is based on a graph-theoretic perspective in which common notions of resilience requirements are translated into suitably tailored combinatorial graph structures. We will discuss recent developments along the following two lines of research:
  • Designing distributed algorithms that can handle various adversarial settings, such as node crashes and Byzantine attacks. We will mainly provide general compilation schemes that are based on exploiting the high connectivity of the graph. Our key focus will be on the efficiency of the resilient algorithms in terms of the number of communication rounds.
  • Initiating and establishing the theoretical exploration of security in distributed graph algorithms. Such a notion has been addressed before mainly in the context of secure multi-party computation (MPC). The heart of our approach is to develop new graph theoretical infrastructures to provide graphical secure channels between nodes in a communication network of an arbitrary topology.
Finally, I will highlight future directions that call for strengthening the connections between the areas of fault tolerant network design, distributed graph algorithms and information theoretic security.





LATIN 2020


Maria-Florina Balcan (Carnegie Mellon University), Data-driven algorithm design

Data-driven algorithm design for combinatorial problems is an important aspect of modern data science. Rather than using off-the-shelf algorithms that only have worst-case performance guarantees, practitioners typically optimize over large families of parameterized algorithms and tune the parameters of these algorithms using a training set of problem instances from their domain to determine a configuration with high expected performance over future instances. However, most of this work comes with no performance guarantees. The challenge is that for many combinatorial problems, including partitioning and subset selection problems, a small tweak to the parameters can cause a cascade of changes in the algorithm's behavior, so the algorithm's performance is a discontinuous function of its parameters.

In this talk, I will present new work that helps put data-driven combinatorial algorithm selection on firm foundations. This includes strong computational and statistical performance guarantees, both for the batch and online scenarios where a collection of typical problem instances from the given application are presented either all at once or in an online fashion, respectively. I will describe both specific examples (for clustering, partitioning, and subset selection problems) and general principles that emerge in this context (including general techniques for sample complexity guarantees in the batch setting and no-regret guarantees in the online settings).
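
As a rough illustration of the batch setting described above (a hedged sketch; the parameterized clustering heuristic, the ad hoc scoring function and the instance distribution below are all hypothetical, not from the talk), tuning amounts to picking the parameter value with the best average performance over the training instances:

    import random

    # Hypothetical parameterized algorithm: single-linkage agglomeration on 1-D
    # points that stops merging once the closest pair of clusters is farther
    # apart than `threshold`.
    def cluster_cost(points, threshold):
        clusters = [[p] for p in points]
        while len(clusters) > 1:
            gap, i, j = min(
                (min(abs(a - b) for a in clusters[i] for b in clusters[j]), i, j)
                for i in range(len(clusters)) for j in range(i + 1, len(clusters))
            )
            if gap > threshold:
                break
            clusters[i] += clusters.pop(j)
        # ad hoc score: number of clusters plus average within-cluster spread
        return len(clusters) + sum(max(c) - min(c) for c in clusters) / len(clusters)

    def tune(training_instances, candidate_thresholds):
        # data-driven selection: best average performance on the training set
        return min(candidate_thresholds,
                   key=lambda t: sum(cluster_cost(inst, t) for inst in training_instances))

    random.seed(0)
    train = [[random.gauss(c, 0.2) for c in (0, 5, 9) for _ in range(5)] for _ in range(20)]
    print("chosen threshold:", tune(train, [0.5, 1.0, 2.0, 4.0]))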

Nikhil Bansal (CWI and Eindhoven University of Technology), Discrepancy, Rounding and Approximation

Discrepancy theory deals with the following question: Given a set system on some universe of elements, color the elements red and blue so that each set in the system is colored as evenly as possible. I will give an overview of discrepancy and describe some of its applications. Then we focus on some results and techniques in discrepancy, and in particular show how they can be used to design new general rounding techniques leading to improved approximation guarantees for various algorithmic problems.
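
For concreteness (a minimal sketch, not part of the talk): under a \(\pm 1\) coloring, the discrepancy of a set system is the largest imbalance over its sets, and even a uniformly random coloring keeps it at \(O(\sqrt{n \log m})\) with high probability by a Chernoff bound.

    import random

    def discrepancy(sets, coloring):
        # coloring: one value in {+1, -1} per element of the universe;
        # the discrepancy is the worst imbalance over all sets in the system
        return max(abs(sum(coloring[e] for e in s)) for s in sets)

    random.seed(1)
    n, m = 1000, 200
    system = [random.sample(range(n), 100) for _ in range(m)]
    coloring = [random.choice((+1, -1)) for _ in range(n)]
    print("discrepancy of a random coloring:", discrepancy(system, coloring))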

Maria Chudnovsky (Princeton University), Induced subgraphs and tree decompositions

Tree decompositions are a powerful tool in structural graph theory that is traditionally used in the context of forbidden graph minors. Connecting tree decompositions and forbidden induced subgraphs has so far remained out of reach. Recently we obtained several results in this direction; the talk will be a survey of these results.

Nicole Immorlica (Microsoft Research), Incentivizing Exploration with Selective Data Disclosure

We study the design of rating systems that incentivize efficient social learning. Agents arrive sequentially and choose actions, each of which yields a reward drawn from an unknown distribution. A policy maps the rewards of previously-chosen actions to messages for arriving agents. The regret of a policy is the difference, over all rounds, between the expected reward of the best action and the reward induced by the policy. Prior work proposes policies that recommend a single action to each agent, obtaining optimal regret under standard rationality assumptions. We instead assume a frequentist behavioral model and, accordingly, restrict attention to disclosure policies that use messages consisting of the actions and rewards from a subsequence of past agents, chosen ex ante. We design a policy with optimal regret in the worst case over reward distributions. Our research suggests three components of effective policies: independent focus groups, group aggregators, and interlaced information structures.

Joint work with Jieming Mao, Alex Slivkins and Steven Wu.
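
Restated as a formula (consistent with the definition above, with \(\mu_a\) the mean reward of action \(a\), \(\mu^{*} = \max_a \mu_a\), and \(a_t\) the action induced by the policy in round \(t\)):

\[
\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T}\bigl(\mu^{*} - \mathbb{E}[\mu_{a_t}]\bigr).
\]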

Eduardo Sany Laber (Pontifical Catholic University of Rio de Janeiro), On the Price of Explainability for some Clustering Problems

Machine learning models and algorithms have been used in a number of systems that take decisions that affect our lives. Thus, explainable methods are desirable so that people are able to have a better understanding of their behaviour. However, we may be forced to lose quality and/or efficiency in order to achieve explainability. In this talk we investigate, from a theoretical perspective, the price of explainability for some clustering problems.

Alexander Razborov (The University of Chicago), Theons and Quasi-Randomness

There are two known approaches to the theory of limits of discrete combinatorial objects: geometric (graph limits) and algebraic (flag algebras). In the first part of the talk we present a general framework intending to combine useful features of both theories and compare it with previous attempts of this kind. Our main objects are \(T\)-ons, for a universal relational first-order theory \(T\); they generalize all previously considered partial cases, some of them (like permutons) in a rather non-trivial way.

In the second part we apply this framework to offer a new perspective on quasi-randomness for combinatorial objects more complicated than ordinary graphs. Our quasi-randomness properties are natural in the sense that they do not use ad hoc densities and they are preserved under the operation of defining combinatorial structures of one kind from structures of a different kind. One key concept in this theory is that of unique coupleability, roughly meaning that any alignment of two objects on the same ground set should “look random”.

Based on two joint papers with Leonardo Coregliano: Russian Mathematical Surveys (2020, 75(4)) and arXiv:2012.11773.

Luca Trevisan (Bocconi University), Graph and Hypergraph Sparsification
A graph \(H\) is a sparsifier of a graph \(G\) if \(H\) has many fewer edges than \(G\) and, in an appropriate technical sense, \(H\) “approximates” \(G\). Sparsifiers are useful as compressed representations of graphs and to speed up certain graph algorithms. In a “cut sparsifier,” the notion of approximation is that every cut is crossed by approximately the same number of edges in \(G\) as in \(H\). In a “spectral sparsifier” a stronger, linear-algebraic, notion of approximation holds. Similar definitions can be given for hypergraphs.
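
Spelled out (a standard formulation consistent with the description above, not a quote from the talk), a weighted subgraph \(H\) is a \((1 \pm \varepsilon)\) cut sparsifier of \(G = (V, E)\) if

\[
(1-\varepsilon)\, w_G(S, V \setminus S) \;\le\; w_H(S, V \setminus S) \;\le\; (1+\varepsilon)\, w_G(S, V \setminus S) \qquad \text{for every } S \subseteq V,
\]

where \(w_G(S, V \setminus S)\) denotes the total weight of the edges crossing the cut; the spectral version asks for the analogous inequalities between the Laplacian quadratic forms \(x^{\top} L_H x\) and \(x^{\top} L_G x\) for every \(x \in \mathbb{R}^{V}\).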

We discuss recent progress on constructions and lower bounds for graph and hypergraph sparsification, and we point out some challenging open problems.

Bianca Zadrozny (IBM Research Brazil), Evaluating classifier learning methods under covariate shift and spatial correlation

Classifier learning methods commonly assume that the training data consist of randomly drawn examples from the same distribution as the test examples about which the learned model is expected to make predictions. In the real world, the joint distribution of inputs to the model and outputs of the model differs between training and test data, a problem known as sample selection bias or dataset shift. In this talk, I will review existing methods for dealing with this problem, in particular the special case known as covariate shift where only the input distribution changes and the conditional distribution of the output for a given input is assumed to remain fixed. I will then introduce the problem of covariate shift in geospatial data and illustrate the challenges of learning from geospatial data by assessing existing methods for evaluating the accuracy of classifier learning methods under covariate shift and spatial correlation.
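
One classical tool in this family, shown here as a hedged sketch rather than the talk's evaluation methodology, is importance weighting: reweighting training examples by the density ratio \(p_{\text{test}}(x)/p_{\text{train}}(x)\) so that averages over the training sample estimate error under the test distribution.

    import random

    def importance_weighted_error(xs, ys, weights, predict):
        # weighted 0/1 error; unbiased for the test distribution when the
        # weights are the density ratios p_test(x) / p_train(x)
        return sum(w * (predict(x) != y) for x, y, w in zip(xs, ys, weights)) / sum(weights)

    # toy 1-D covariate shift: train ~ Uniform[0, 1], test ~ Uniform[0.5, 1]
    def p_train(x): return 1.0 if 0.0 <= x <= 1.0 else 0.0
    def p_test(x):  return 2.0 if 0.5 <= x <= 1.0 else 0.0

    random.seed(2)
    xs = [random.random() for _ in range(100_000)]     # drawn from p_train
    ys = [1 if x > 0.7 else 0 for x in xs]             # labels: fixed concept
    ws = [p_test(x) / p_train(x) for x in xs]          # density-ratio weights
    predict = lambda x: 1 if x > 0.6 else 0            # some fixed classifier

    print("error on the training distribution:",
          sum(predict(x) != y for x, y in zip(xs, ys)) / len(xs))
    print("importance-weighted estimate of test error:",
          importance_weighted_error(xs, ys, ws, predict))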





LATIN 2018


Flavia Bonomo (Universidad de Buenos Aires), On the thinness and proper thinness of a graph
Leslie Goldberg (University of Oxford),
Andrea Richa (Arizona State University), Algorithmic Foundations of Programmable Matter
Santosh Vempala (Georgia Institute of Technology),





LATIN 2016


Jin Akiyama (Tokyo University of Science), Reversible Figures and Solids

An example of reversible (or hinge inside-out transformable) figures is Dudeney's Haberdasher's puzzle in which an equilateral triangle is dissected into four pieces, hinged like a chain, and then is transformed into a square by rotating the hinged pieces. Furthermore, the entire boundary of each figure goes into the inside of the other figure and becomes the dissection lines of the figure. Many intriguing results on reversibilities of figures have been found in the preceding research, but most of them are results on polygons. We generalize those results to general connected figures. It is shown that two nets obtained by cutting the surface of an arbitrary convex polyhedron along non-intersecting dissection trees are reversible. Moreover, we generalize reversibility for 2D-figures to one for 3D-solids.

Allan Borodin (University of Toronto), Simplicity Is in Vogue (again)

Throughout history there has been an appreciation of the importance of simplicity in the arts and sciences. In the context of algorithm design, and in particular in approximation algorithms and algorithmic game theory, the importance of simplicity is currently very much in vogue. I will present some examples of the current interest in the design of “simple algorithms”. And what is a simple algorithm? Is it just “you'll know it when you see it”, or can we benefit from some precise models in various contexts?

José Correa (Universidad de Chile), Subgame Perfect Equilibrium: Computation and Efficiency
The concept of Subgame Perfect Equilibrium (SPE) naturally arises in games which are played sequentially. In a simultaneous game the natural solution concept is that of a Nash equilibrium, in which no player has an incentive to unilaterally deviate from her current strategy. However, if the game is played sequentially, i.e., there is a prescribed order in which the players make their moves, an SPE is a situation in which all players anticipate the full strategy of all other players contingent on the decisions of previous players. Although most research in algorithmic game theory has been devoted to understanding properties of Nash equilibria, including their computation and the so-called price of anarchy, in recent years there has been an interest in understanding the computational properties of SPE and its corresponding efficiency measure, the sequential price of anarchy.

In this talk we will review some of these recent results, putting particular emphasis on a very basic game, namely that of atomic selfish routing in a network. In particular we will discuss some hardness results, such as the PSPACE-completeness of computing an SPE and its NP-hardness even when the number of players is fixed to two. We will also see that for interesting classes of games SPE avoid worst-case Nash equilibria, resulting in substantial improvements for the price of anarchy. However, for atomic network routing games with linear latencies, where the price of anarchy has long been known to be equal to \(5/2\), we prove that the sequential price of anarchy is not bounded by any constant and can be as large as \(\Omega(\sqrt{n})\), with \(n\) being the number of players.
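
For intuition only (a generic sketch of backward induction on a tiny perfect-information game tree, not the routing games of the talk), an SPE prescribes an optimal action in every subgame, even those off the equilibrium path:

    # A leaf is ("leaf", payoff_vector); an internal node is (player, {action: child}).
    def backward_induction(node):
        if node[0] == "leaf":
            return node[1]                                   # payoffs at a leaf
        player, children = node
        best = None
        for action, child in children.items():
            payoffs = backward_induction(child)
            if best is None or payoffs[player] > best[1][player]:
                best = (action, payoffs)
        print(f"player {player} plays {best[0]!r} in this subgame")
        return best[1]

    # player 0 moves first, then player 1; payoff vectors are (p0, p1)
    game = (0, {
        "L": (1, {"l": ("leaf", (2, 1)), "r": ("leaf", (0, 0))}),
        "R": (1, {"l": ("leaf", (3, 0)), "r": ("leaf", (1, 2))}),
    })
    print("subgame perfect payoffs:", backward_induction(game))

Here player 0 forgoes the tempting payoff 3 in the right subtree because player 1 would rationally answer with "r" there; anticipating the full strategy of the follower is exactly what distinguishes an SPE from an arbitrary Nash equilibrium.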

Alan Frieze (Carnegie Mellon University), Buying Stuff Online
Suppose there is a collection \(x_1, x_2, \ldots, x_N\) of independent uniform \([0, 1]\) random variables, and a hypergraph \(F\) of target structures on the vertex set \(\{1, \ldots, N\}\). We would like to buy a target structure at small cost, but we do not know all the costs \(x_i\) ahead of time. Instead, we inspect the random variables \(x_i\) one at a time, and after each inspection, choose to either keep the vertex \(i\) at cost \(x_i\), or reject vertex \(i\) forever.

In the present paper, we consider the case where \(\{1, \ldots, N\}\) is the edge-set of some graph, and the target structures are the spanning trees of a graph; the spanning arborescences of a digraph; the Hamilton cycles of a graph; the perfect matchings of a graph; the paths between a fixed pair of vertices; or the cliques of some fixed size.

Joint work with Wesley Pegden (CMU).

Héctor García-Molina (Stanford University), Data Crowdsourcing: Is It for Real?
Crowdsourcing refers to performing a task using human workers that solve sub-problems that arise in the task. In this talk I will give an overview of crowdsourcing, focusing on how crowdsourcing can help traditional data processing and analysis tasks. I will also give a brief overview of some of the crowdsourcing research we have done at the Stanford University InfoLab.





LATIN 2014


Ronitt Rubinfeld (MIT), Something for Almost Nothing: Advances in Sub-linear Time Algorithms
Linear-time algorithms have long been considered the gold standard of computational efficiency. Indeed, it is hard to imagine doing better than that, since for a nontrivial problem, any algorithm must consider all of the input in order to make a decision. However, as extremely large data sets are pervasive, it is natural to wonder what one can do in sub-linear time. Over the past two decades, several surprising advances have been made on designing such algorithms. We will give a non-exhaustive survey of this emerging area, highlighting recent progress and directions for further research.
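
A toy illustration of the sub-linear phenomenon (a generic sampling example, not one of the results surveyed in the talk): the fraction of array entries satisfying a property can be estimated to additive error \(\varepsilon\), with probability \(1-\delta\), from \(O(\varepsilon^{-2}\log(1/\delta))\) random probes, independently of the array length.

    import math
    import random

    def estimate_fraction(data, predicate, eps=0.05, delta=0.01):
        # Hoeffding bound: this many random probes give additive error <= eps
        # with probability >= 1 - delta, no matter how long `data` is
        k = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        return sum(predicate(random.choice(data)) for _ in range(k)) / k

    random.seed(3)
    data = [random.randint(0, 9) for _ in range(1_000_000)]
    print("estimated fraction of zeros:", estimate_fraction(data, lambda x: x == 0))
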
Gilles Barthe (IMDEA Software Institute), Computer-Aided Cryptographic Proofs
EasyCrypt [6] is a computer-assisted framework for reasoning about the security of cryptographic constructions, using the methods and tools of provable security, and more specifically of the game-based techniques. The core of EasyCrypt is a relational program logic for a core probabilistic programming language with sequential composition, conditionals, loops, procedure calls, assignments and sampling from discrete distributions. The relational program logic is key to capturing reductionist arguments that arise in cryptographic proofs. It is complemented by a (standard, non-relational) program logic that allows one to reason about the probability of events in the execution of probabilistic programs; this program logic allows one, for instance, to upper bound the probability of failure events, which are pervasive in game-based cryptographic proofs. In combination, these logics capture general reasoning principles in cryptography and have been used to verify the security of emblematic constructions, including the Full-Domain Hash signature [8], the Optimal Asymmetric Encryption Padding (OAEP) [7], hash function designs [3] and zero-knowledge protocols [5, 1]. Yet, these logics can only capture instances of general principles, and lack mechanisms for stating and proving these general principles once and for all, and then instantiating them as needed. To overcome this limitation, we have recently extended EasyCrypt with programming language mechanisms such as modules and type classes. Modules provide support for composition of cryptographic proofs, and for formalizing hybrid arguments, whereas type classes are convenient for modeling and reasoning about algebraic structures. Together, these extensions significantly expand the class of examples that can be addressed with EasyCrypt. For instance, we have used the latest version of EasyCrypt to verify the security of a class of authenticated key exchange protocols, and of a secure function evaluation protocol based on garbled circuits and oblivious transfer.

Our current work explores two complementary directions. On the one hand, we are extending the EasyCrypt infrastructure in order to derive security guarantees about implementations of cryptographic constructions. Indeed, practical attacks often target specific implementations and exploit some characteristics that are not considered in typical provable security proofs; as a consequence, several widely used implementations of provably secure schemes are vulnerable to attacks. In order to narrow the gap between provable security and implementations, we are extending EasyCrypt with support for reasoning about C-like implementations, and use the CompCert verified C compiler (http://compcert.inria.fr) to carry the security guarantees down to executable implementations [2]. On the other hand, we are developing specialized formalisms to reason about the security of particular classes of constructions. For instance, we have recently developed the ZooCrypt framework [4], which supports automated analysis of chosen-plaintext and chosen-ciphertext security for public-key encryption schemes built from (partial-domain) one-way trapdoor permutations and random oracles. Using ZooCrypt, we have analyzed over a million (automatically generated) schemes, including many schemes from the literature. For chosen-plaintext security, ZooCrypt is able to report, in nearly 99% of the cases, either a proof of security with a concrete security bound or an attack. We are currently extending our approach to reason about encryption schemes based on Diffie-Hellman groups and bilinear pairings, both in the random oracle and in the standard models. More information about the project is available from the project web page http://www.easycrypt.info.

References

1. Almeida, J.B., Barbosa, M., Bangerter, E., Barthe, G., Krenn, S., Zanella-Béguelin, S.: Full proof cryptography: verifiable compilation of efficient zero-knowledge protocols. In: 19th ACM Conference on Computer and Communications Security. ACM (2012)

2. Almeida, J.B., Barbosa, M., Barthe, G., Dupressoir, F.: Certified computer-aided cryptography: efficient provably secure machine code from high-level implementations. In: ACM Conference on Computer and Communications Security. ACM (2013)

3. Backes, M., Barthe, G., Berg, M., Grégoire, B., Skoruppa, M., Zanella-Béguelin, S.: Verified security of Merkle-Damgård. In: IEEE Computer Security Foundations. ACM (2012)

4. Barthe, G., Crespo, J.M., Grégoire, B., Kunz, C., Lakhnech, Y., Schmidt, B., Zanella-Béguelin, S.: Automated analysis and synthesis of padding-based encryption schemes. In: ACM Conference on Computer and Communications Security. ACM (2013)

5. Barthe, G., Grégoire, B., Hedin, D., Heraud, S., Zanella-Béguelin, S.: A Machine-Checked Formalization of Sigma-Protocols. In: IEEE Computer Security Foundations. ACM (2010)

6. Barthe, G., Grégoire, B., Heraud, S., Zanella-Béguelin, S.: Computer-aided security proofs for the working cryptographer. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 71–90. Springer, Heidelberg (2011)

7. Barthe, G., Grégoire, B., Lakhnech, Y., Zanella-Béguelin, S.: Beyond Provable Security: Verifiable IND-CCA Security of OAEP. In: Kiayias, A. (ed.) CT-RSA 2011. LNCS, vol. 6558, pp. 180–196. Springer, Heidelberg (2011)

8. Zanella-Béguelin, S., Barthe, G., Grégoire, B., Olmedo, F.: Formally certifying the security of digital signature schemes. In: IEEE Symposium on Security and Privacy. IEEE Computer Society (2009)

Robert Sedgewick (Princeton), “If You Can Specify It, You Can Analyze It” — The Lasting Legacy of Philippe Flajolet
The “Flajolet School” of the analysis of algorithms and combinatorial structures is centered on an effective calculus, known as analytic combinatorics, for the development of mathematical models that are sufficiently accurate and precise that they can be validated through scientific experimentation. It is based on the generating function as the central object of study, first as a formal object that can translate a specification into mathematical equations, then as an analytic object whose properties as a function in the complex plane yield the desired quantitative results. Universal laws of sweeping generality can be proven within the framework, and easily applied. Standing on the shoulders of Cauchy, Polya, de Bruijn, Knuth, and many others, Philippe Flajolet and scores of collaborators developed this theory and demonstrated its effectiveness in a broad range of scientific applications. Flajolet's legacy is a vibrant field of research that holds the key not just to understanding the properties of algorithms and data structures, but also to understanding the properties of discrete structures that arise as models in all fields of science. This talk will survey Flajolet's story and its implications for future research.

“A man … endowed with an exuberance of imagination which puts it in his power to establish and populate a universe of his own creation”.
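
A one-line instance of the specification-to-equations-to-asymptotics pipeline described above (a textbook example, not taken from the talk): the specification “a binary tree is either empty or a root with two binary subtrees” translates directly into an equation for the generating function \(B(z) = \sum_{n} B_n z^n\) counting binary trees with \(n\) internal nodes,

\[
B(z) = 1 + z\,B(z)^2 \;\Longrightarrow\; B(z) = \frac{1 - \sqrt{1 - 4z}}{2z} \;\Longrightarrow\; B_n = \frac{1}{n+1}\binom{2n}{n} \sim \frac{4^{n}}{\sqrt{\pi}\, n^{3/2}},
\]

where the singularity of \(B(z)\) at \(z = 1/4\), viewed as a function in the complex plane, dictates both the exponential growth rate and the polynomial correction.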

Gonzalo Navarro (University of Chile), Encoding Data Structures
Classical data structures can be regarded as additional information that is stored on top of the raw data in order to speed up some kind of queries. Some examples are the suffix tree to support pattern matching in a text, the extra structures to support lowest common ancestor queries on a tree, or precomputed shortest path information on a graph.

Some data structures, however, can operate without accessing the raw data. These are called encodings. Encodings are relevant when they do not contain enough information to reproduce the raw data, but just what is necessary to answer the desired queries (otherwise, any data structure could be seen as an encoding, by storing a copy of the raw data inside the structure).

Encodings are interesting because they can occupy much less space than the raw data. In some cases the data itself is not interesting, only the answers to the queries on it, and thus we can simply discard the raw data and retain the encoding. In other cases, the data is used only sporadically and can be maintained in secondary storage, while the encoding is maintained in main memory, thus speeding up the most relevant queries.

When the raw data is available, any computable query on it can be answered with sufficient time. With encodings, instead, one faces a novel fundamental question: what is the effective entropy of the data with respect to a set of queries? That is, what is the minimum size of an encoding that can answer those queries without accessing the data? This question is related to Information Theory, but in a way inextricably associated with the data structure: the point is not how much information the data contains, but how much information is conveyed by the queries. In addition, as usual, there is the issue of how efficiently the queries can be answered depending on how much space is used.

In this talk I will survey some classical and new encodings, generally about preprocessing arrays \(A[1, n]\) so as to answer queries on array intervals \([i, j]\) given at query time. I will start with the classical range minimum queries (which is the minimum value in \(A[i,j]\)?) which has a long history that culminated a few years ago in an asymptotically space-optimal encoding of \(2n+o(n)\) bits answering queries in constant time. Then I will describe more recent (and partly open) problems such as finding the second minimum in \(A[i, j]\), the \(k\) smallest values in \(A[i, j]\), the \(k\)-th smallest value in \(A[i, j]\), the elements that appear more than a fraction τ of the times in \(A[i, j]\), etc. All these queries appear recurrently within other algorithmic problems, and they have also direct application in data mining.
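
For contrast with the \(2n + o(n)\)-bit encoding mentioned above (which answers range minimum queries without storing \(A\) at all), here is a hedged sketch of the classical non-encoding solution: a sparse table of minima over dyadic intervals, answering queries in constant time but occupying \(O(n \log n)\) words on top of the array.

    class SparseTableRMQ:
        # table[k][i] holds the position of the minimum of A[i .. i + 2^k - 1]
        def __init__(self, A):
            self.A = A
            n = len(A)
            self.table = [list(range(n))]
            k = 1
            while (1 << k) <= n:
                prev, half = self.table[k - 1], 1 << (k - 1)
                self.table.append([
                    prev[i] if A[prev[i]] <= A[prev[i + half]] else prev[i + half]
                    for i in range(n - (1 << k) + 1)
                ])
                k += 1

        def query(self, i, j):
            # position of a minimum of A[i..j] (inclusive), found by covering
            # [i, j] with two overlapping dyadic intervals
            k = (j - i + 1).bit_length() - 1
            a, b = self.table[k][i], self.table[k][j - (1 << k) + 1]
            return a if self.A[a] <= self.A[b] else b

    A = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
    rmq = SparseTableRMQ(A)
    print(rmq.query(2, 7), A[rmq.query(2, 7)])   # prints: 3 1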

J. Ian Munro (University of Waterloo), Succinct Data Structures ... Not Just for Graphs
Succinct data structures are data representations that use (nearly) the information-theoretic minimum space for the combinatorial object they represent, while performing the necessary query operations in constant (or nearly constant) time. So, for example, we can represent a binary tree on \(n\) nodes in \(2n + o(n)\) bits, rather than the “obvious” \(5n\) or so words, i.e. \(5n \lg(n)\) bits. Such a difference in memory requirements can easily translate to major differences in runtime as a consequence of the level of memory in which most of the data resides. The field developed to a large extent because of applications in text indexing, so there has been a major emphasis on trees and a secondary emphasis on graphs in general; but in this talk we will draw attention to a much broader collection of combinatorial structures for which succinct structures have been developed. These will include sets, permutations, functions, partial orders and groups, and yes, a bit on graphs.
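
A hedged sketch of the flavor of such representations (one simple scheme, without the constant-time navigation that real succinct structures provide): visiting the \(n\) nodes of a binary tree in preorder and recording, for each, whether it has a left and a right child uses exactly \(2n\) bits and still determines the tree shape.

    # A node is a pair (left, right); a missing child is None.
    def encode(root):
        bits = []
        def visit(node):
            if node is None:
                return
            left, right = node
            bits.append(1 if left is not None else 0)
            bits.append(1 if right is not None else 0)
            visit(left)
            visit(right)
        visit(root)
        return bits                          # exactly 2 bits per node

    def decode(bits):
        it = iter(bits)
        def build():
            has_left, has_right = next(it), next(it)
            left = build() if has_left else None
            right = build() if has_right else None
            return (left, right)
        return build() if bits else None

    tree = ((None, (None, None)), ((None, None), None))    # 5 nodes
    code = encode(tree)
    print(len(code), "bits:", code)
    print("round trip ok:", decode(code) == tree)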





LATIN 2012


Martin Davis (Courant Institute, NYU, USA), Universality is Ubiquitous
Scott Aaronson (Massachusetts Institute of Technology, USA), Turing Year Lecture
Kirk Pruhs (University of Pittsburgh, USA), Green Computing Algorithmics: Managing Power Heterogeneity
Marcos Kiwi (Universidad de Chile, Chile), Combinatorial and Algorithmic Problems Involving (Semi-)Random Sequences





LATIN 2010


Piotr Indyk (Massachusetts Institute of Technology, USA), Sparse Recovery Using Sparse Random Matrices
Cristopher Moore (University of New Mexico and Santa Fe Institute, USA), Continuous and Discrete Methods in Computer Science
Sergio Rajsbaum (Universidad Nacional Autónoma de México, Mexico), Iterated Shared Memory Models
Leslie Valiant (Harvard University, USA), Some Observations on Holographic Algorithms
Ricardo Baeza-Yates, John Brzozowski, Volker Diekert, and Jacques Sakarovitch (Yahoo Research (Spain), University of Waterloo (Canada), Universität Stuttgart (Germany), and Ecole Nationale Supérieure des Télécommunications (France)), Vignettes on the work of Imre Simon





LATIN 2008


Cláudio Lucchesi (Unicamp, Brazil), Pfaffian Bipartite Graphs: The Elusive Heawood Graph
Moni Naor (Weizmann Institute, Israel), Games, Exchanging Information and Extracting Randomness
Wojciech Szpankowski (Purdue University, USA), Tries
Éva Tardos (Cornell U., USA), Games in Networks
Robert Tarjan (Princeton U., USA), Graph Algorithms





LATIN 2006


Ricardo Baeza-Yates (Universidad de Chile & Universitat Pompeu Fabra), Algorithmic Challenges in Web Search Engines
In this paper we present the main algorithmic challenges that large Web search engines face today. These challenges are present in all the modules of a Web retrieval system, ranging from the gathering of the data to be indexed (crawling) to the selection and ordering of the answers to a query (searching and ranking). Most of the challenges are ultimately related to the quality of the answer or the efficiency in obtaining it, although some are relevant even to the existence of search engines: context-based advertising.
Anne Condon (U. British Columbia), RNA molecules: glimpses through an algorithmic lens
Dubbed the “architects of eukaryotic complexity”, RNA molecules are increasingly in the spotlight, in recognition of the important catalytic and regulatory roles they play in our cells and their promise in therapeutics. Our goal is to describe the ways in which algorithms can help shape our understanding of RNA structure and function.
Ferran Hurtado (Universitat Politècnica de Catalunya), Squares
In this talk we present several results and open problems having squares, the basic geometric entity, as a common thread. These results have been gathered from various papers; coauthors and precise references are given in the descriptions that follow.
R. Ravi (Carnegie Mellon University), Matching Based Augmentations for Approximating Connectivity Problems
We describe a very simple idea for designing approximation algorithms for connectivity problems: For a spanning tree problem, the idea is to start with the empty set of edges, and add matching paths between pairs of components in the current graph that have desirable properties in terms of the objective function of the spanning tree problem being solved. Such matchings augment the solution by reducing the number of connected components to roughly half their original number, resulting in a logarithmic number of such matching iterations. A logarithmic performance ratio results for the problem by appropriately bounding the contribution of each matching to the objective function by that of an optimal solution.

In this survey, we trace these ideas from their initial application to traveling salesperson problems, via a simple tree pairing observation, to more sophisticated applications for buy-at-bulk type network design problems.
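
A hedged sketch of the component-halving idea in its simplest incarnation (Borůvka-style merging along cheapest edges for the minimum spanning tree; the applications in the talk substitute matching paths chosen against the relevant objective): every round merges components in pairs or larger groups, so only \(O(\log n)\) rounds are needed.

    def halving_spanning_tree(n, edges):
        # edges: (cost, u, v) triples of a connected graph on vertices 0..n-1
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        tree, rounds = [], 0
        while len({find(v) for v in range(n)}) > 1:
            rounds += 1
            # cheapest edge leaving each current component
            cheapest = {}
            for cost, u, v in edges:
                cu, cv = find(u), find(v)
                if cu == cv:
                    continue
                for c in (cu, cv):
                    if c not in cheapest or cost < cheapest[c][0]:
                        cheapest[c] = (cost, u, v)
            # adding the selected edges merges components in groups,
            # so the number of components at least halves each round
            for cost, u, v in cheapest.values():
                if find(u) != find(v):
                    tree.append((cost, u, v))
                    parent[find(u)] = find(v)
        return tree, rounds

    edges = [(4, 0, 1), (1, 1, 2), (3, 2, 3), (2, 3, 4), (7, 4, 5), (5, 5, 0), (6, 1, 4)]
    tree, rounds = halving_spanning_tree(6, edges)
    print("rounds:", rounds, "tree cost:", sum(c for c, _, _ in tree))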

Madhu Sudan (MIT), Modelling Errors and Recovery for Communication
The theory of error-correction has had two divergent schools of thought, going back to the works of Shannon and Hamming. In the Shannon school, error is presumed to have been effected probabilistically. In the Hamming school, the error is modeled as effected by an all-powerful adversary. The two schools lead to drastically different limits. In the Shannon model, a binary channel with error-rate close to, but less than, \(50\%\) is useable for effective communication. In the Hamming model, a binary channel with an error-rate of more than \(25\%\) prohibits unique recovery of the message.
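
The \(25\%\) figure can be made precise (a standard back-of-the-envelope argument, not a quote from the talk): a code of minimum distance \(d\) allows unique decoding of at most \(\lfloor (d-1)/2 \rfloor\) errors, and by the Plotkin bound a binary code of positive rate has relative distance below \(1/2\), so the correctable fraction of errors satisfies

\[
\frac{1}{n}\left\lfloor \frac{d-1}{2} \right\rfloor \;\le\; \frac{d}{2n} \;<\; \frac{1}{2}\cdot\frac{1}{2} \;=\; \frac{1}{4}.
\]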

In this talk, we describe the notion of list-decoding, as a bridge between the Hamming and Shannon models. This model relaxes the notion of recovery to allow for a “list of candidates”. We describe results in this model, and then show how these results can be applied to get unique recovery under “computational restrictions” on the channel's ability, a model initiated by R. Lipton in 1994.

Based on joint works with Venkatesan Guruswami (U. Washington), and with Silvio Micali (MIT), Chris Peikert (MIT) and David Wilson (MIT).

Sergio Verdú (Princeton), Lossless Data Compression via Error Correction
This plenary talk gives an overview of recent joint work with G. Caire and S. Shamai on the use of linear error correcting codes for lossless data compression, joint source/channel coding and interactive data exchange.
Avi Wigderson (IAS), The power and weakness of randomness in computation
Humanity has grappled with the meaning and utility of randomness for centuries. Research in the Theory of Computation in the last thirty years has enriched this study considerably. We describe two main aspects of this research on randomness, demonstrating its power and weakness respectively.





LATIN 2004


Cynthia Dwork (Microsoft Research), Fighting Spam: The Science
Mike Paterson (U. of Warwick), Analysis of Scheduling Algorithms for Proportionate Fairness
We consider a multiprocessor operating system in which each current job is guaranteed a given proportion over time of the total processor capacity. A scheduling algorithm allocates units of processor time to appropriate jobs at each time step. We measure the goodness of such a scheduler by the maximum amount by which the cumulative processor time for any job ever falls below the “fair” proportion guaranteed in the long term.

In particular we focus our attention on very simple schedulers which impose minimal computational overheads on the operating system. For several such schedulers we obtain upper and lower bounds on their deviations from fairness. The scheduling quality which is achieved depends quite considerably on the relative processor proportions required by each job.
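
A hedged single-processor simulation of the deviation measure (one natural greedy scheduler, not necessarily among those analyzed in the talk): at each step the processor goes to a job of largest lag, where the lag of a job is its guaranteed share of the elapsed time minus the time it has actually received; the printed value is the largest lag ever observed.

    from fractions import Fraction

    def max_deviation(shares, steps):
        # shares: guaranteed proportions of a single processor, summing to 1
        received = [0] * len(shares)
        worst = Fraction(0)
        for t in range(1, steps + 1):
            # lag = entitlement up to time t minus processor time received
            lags = [s * t - r for s, r in zip(shares, received)]
            received[max(range(len(shares)), key=lambda i: lags[i])] += 1
            worst = max(worst, max(s * t - r for s, r in zip(shares, received)))
        return worst

    shares = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
    print("maximum deviation from fairness:", max_deviation(shares, 600))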

We will outline the proofs of some of the upper and lower bounds, both for the unrestricted problem and for restricted versions where constraints are imposed on the processor proportions. Many problems remain to be investigated and we will give the results of some exploratory simulations.

This is joint research with Micah Adler, Petra Berenbrink, Tom Friedetzky, Leslie Ann Goldberg and Paul Goldberg.

Yoshiharu Kohayakawa (U. São Paulo), Advances in the Regularity Method
Jean-Eric Pin (CNRS/U. Paris VII), The consequences of Imre Simon's work in the theory of automata, languages and semigroups
In this lecture, I will show how influential the work of Imre has been in the theory of automata, languages and semigroups. I will mainly focus on two celebrated problems, the restricted star-height problem (solved) and the decidability of the dot-depth hierarchy (still open). These two problems have led to surprising developments and are currently the topic of very active research.

I will present the prominent results of Imre on both topics, and demonstrate how these results have been the driving force of research in this area for the last thirty years.

Dexter Kozen (Cornell U.), Kleene Algebra with Tests and the Static Analysis of Programs
I will propose a general framework for the static analysis of programs based on Kleene algebra with tests (KAT). I will show how KAT can be used to statically verify compliance with safety policies specified by security automata. The method is sound and complete over relational interpretations. I will illustrate the method on an example involving the correctness of a device driver.
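
For orientation (standard KAT encodings of control flow, included here as background rather than as part of the talk): with tests \(b\) and programs \(p, q\), sequencing is multiplication, branching is a guarded sum, and loops use the Kleene star,

\[
\mathbf{if}\ b\ \mathbf{then}\ p\ \mathbf{else}\ q \;=\; b\,p + \bar{b}\,q,
\qquad
\mathbf{while}\ b\ \mathbf{do}\ p \;=\; (b\,p)^{*}\,\bar{b},
\]

so that program equivalences and compliance with a safety policy become equations provable from the KAT axioms.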





LATIN 2002


Jennifer T. Chayes (Microsoft Research), Phase Transitions in Computer Science
Phase transitions are familiar phenomena in physical systems. But they also occur in many probabilistic and combinatorial models, including random versions of some classic problems in theoretical computer science.

In this talk, I will discuss phase transitions in several systems, including the random graph — a simple probabilistic model which undergoes a critical transition from a disordered to an ordered phase; k-satisfiability — a canonical model in theoretical computer science which undergoes a transition from solvability to insolvability; and optimum partitioning — a fundamental problem in combinatorial optimization, which undergoes a transition from existence to non-existence of a perfect optimizer.
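
A hedged simulation sketch of the first example, the emergence of a giant component in the random graph \(G(n, p)\) as the average degree \(c = np\) crosses 1 (the constants below are illustrative only):

    import random

    def largest_component_fraction(n, c, seed=0):
        # Erdős–Rényi G(n, p) with p = c / n, components tracked by union-find
        rng = random.Random(seed)
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        p = c / n
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    parent[find(u)] = find(v)
        sizes = {}
        for v in range(n):
            sizes[find(v)] = sizes.get(find(v), 0) + 1
        return max(sizes.values()) / n

    for c in (0.5, 1.0, 1.5, 2.0):
        share = largest_component_fraction(2000, c)
        print(f"average degree {c}: largest component spans {share:.1%} of the vertices")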

Technically, phase transitions only occur in infinite systems. However, real systems and the systems we simulate are large, but finite. Hence the question of finite-size scaling: Namely, how does the transition behavior emerge from the behavior of large, finite systems? Results on the random graph, satisfiability and optimum partitioning locate the critical windows of these transitions and establish interesting features within these windows.

No knowledge of phase transitions will be assumed in this talk.

Christos H. Papadimitriou (U. California, Berkeley), The Internet, the Web, and Algorithms
The Internet and the worldwide web, unlike all other computational artifacts, were not deliberately designed by a single entity, but emerged from the complex interactions of many. As a result, they must be approached very much the same way that cells, galaxies or markets are studied in other sciences: by speculative (and falsifiable) theories trying to explain how selfish algorithmic actions could have led to what we observe. I present several instances of recent work on this theme, with several collaborators.
Joel Spencer (Courant Institute), Erdős Magic
The Probabilistic Method is a lasting legacy of the late Paul Erdős. We give two examples — both problems first formulated by Erdős in the 1960s with new results in the last few years and both with substantial open questions. Further, in both examples we take a Computer Science vantage point, designing a probabilistic algorithm to create the object (coloring, packing respectively) and showing that with positive probability the created object has the desired properties.

Given \(m\) sets each of size \(n\) (with an arbitrary intersection pattern) we want to color the underlying vertices Red and Blue so that no set is monochromatic. Erdős showed this may always be done if \(m < 2^{n-1}\); we give a recent argument of Srinivasan and Radhakrishnan that extends this to \(m < c\,2^{n}\sqrt{n/\ln n}\). One first colors randomly and then recolors the blemishes with a clever random sequential algorithm.
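
The first calculation is a one-line union bound (filling in the step behind the \(2^{n-1}\) threshold): color each vertex Red or Blue independently and uniformly at random; a fixed set of size \(n\) is monochromatic with probability \(2 \cdot 2^{-n}\), so

\[
\Pr[\text{some set is monochromatic}] \;\le\; m \cdot 2^{1-n} \;<\; 1 \qquad \text{whenever } m < 2^{n-1},
\]

and a two-coloring with no monochromatic set exists.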

In a universe of size \(N\) we have a family of sets, each of size \(k\), such that each vertex is in \(D\) sets and any two vertices have only \(o(D)\) common sets. Asymptotics are for fixed \(k\) with \(N, D \to \infty\). We want an asymptotic packing, a subfamily of \(N/k\) disjoint sets. Erdős and Hanani conjectured such a packing exists (in an important special case of asymptotic designs) and this conjecture was shown by Rödl. We give a simple proof, due to the speaker, that analyzes the random greedy algorithm.

Jorge Urrutia (UNAM), Open Problems in Computational Geometry
In this paper we present a collection of problems which have defied solution for some time. We hope that this paper will stimulate renewed interest in these problems, leading to solutions to at least some of them.
Umesh V. Vazirani (U. California, Berkeley), Quantum Algorithms
Quantum computers are the only model of computing to credibly violate the modified Church-Turing thesis, which states that any reasonable model of computation can be simulated by a probabilistic Turing Machine with at most polynomial factor simulation overhead. This is dramatically demonstrated by Shor’s polynomial time algorithms for factorization and discrete logarithms [13]. Shor’s algorithm, as well as the earlier algorithm due to Simon [12], can both be cast into the general framework of the hidden subgroup problem (see for example [10]). Two recent papers [11], [9] study how well this framework extends to solving the hidden subgroup problem for non-abelian groups (which includes the graph isomorphism problem).

Grigni, M., Schulman, S., Vazirani, M., Vazirani, U.: Quantum Mechanical Algorithms for the Nonabelian Hidden Subgroup Problem. In: Proceedings of the Thirty-third Annual ACM Symposium on the Theory of Computing, Crete, Greece, 2001.

Hales, L., Hallgren, S.: Quantum Fourier Sampling Simplified. In: Proceedings of the Thirty-first Annual ACM Symposium on the Theory of Computing, pages 330–338, Atlanta, Georgia, 1–4 May 1999.

Hallgren, S., Russell, A., Ta-Shma, A.: Normal subgroup reconstruction and quantum computation using group representations. In: Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, pages 627–635, 2000.

Simon, D.: On the power of quantum computation. In: Proc. 35th Symposium on Foundations of Computer Science (FOCS), 1994.

Shor, P. W.: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comp., 26(5), pp. 1484–1509, October 1997.

Mihalis Yannakakis (Avaya Laboratories), Testing and Checking of Finite State Systems

Finite state machines have been used to model a wide variety of systems, including sequential circuits, communication protocols, and other types of reactive systems, i.e., systems that interact with their environment. In testing problems we are given a system, which we may test by providing inputs and observing the outputs produced.

The goal is to design test sequences so that we can deduce desired information about the given system under test, such as whether it implements correctly a given specification machine (conformance testing), or whether it satisfies given requirement properties (black-box checking).

In this talk we will review some of the theory and algorithms on basic testing problems for systems modeled by different types of finite state machines. Conformance testing of deterministic machines has been investigated for a long time; we will discuss various efficient methods. Testing of nondeterministic and probabilistic machines is related to games with incomplete information and to partially observable Markov decision processes.
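
A minimal sketch of the black-box setting (a generic illustration, not one of the methods discussed in the talk): a deterministic Mealy machine maps input sequences to output sequences, and a conformance test simply drives the specification and the implementation under test with the same inputs and compares what they emit.

    class Mealy:
        # deterministic Mealy machine: delta[state][input] = (next_state, output)
        def __init__(self, start, delta):
            self.start, self.delta = start, delta
        def run(self, inputs):
            state, outputs = self.start, []
            for a in inputs:
                state, out = self.delta[state][a]
                outputs.append(out)
            return outputs

    # specification: output 1 exactly when the input symbol repeats the previous one
    spec = Mealy("init", {
        "init":   {"a": ("seen_a", 0), "b": ("seen_b", 0)},
        "seen_a": {"a": ("seen_a", 1), "b": ("seen_b", 0)},
        "seen_b": {"a": ("seen_a", 0), "b": ("seen_b", 1)},
    })
    # faulty implementation: one transition emits the wrong output
    impl = Mealy("init", {
        "init":   {"a": ("seen_a", 0), "b": ("seen_b", 0)},
        "seen_a": {"a": ("seen_a", 1), "b": ("seen_b", 0)},
        "seen_b": {"a": ("seen_a", 0), "b": ("seen_b", 0)},   # should emit 1
    })

    test_sequence = list("abbaab")
    print("conforms:", spec.run(test_sequence) == impl.run(test_sequence))   # False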

The verification of properties for finite state systems with a known structure (i.e., “white box” checking) is known as the model-checking problem, and has been thoroughly studied for many years, yielding a rich theory, algorithms and tools. Black-box checking, i.e., the verification of properties for systems with an unknown structure, combines elements from model checking, conformance testing and learning.





LATIN 2000


Allan Borodin (U. Toronto), On the Competitive Theory and Practice of Portfolio Selection
Philippe Flajolet (INRIA),
Joachim von zur Gathen (U. Paderborn), Subresultants Revisited
Yoshiharu Kohayakawa (U. São Paulo), Algorithmic Aspects of Regularity
Andrew Odlyzko (AT&T Labs), Integer Factorization and Discrete Logarithms
Prabhakar Raghavan (IBM Almaden), Graph Structure of the Web: A Survey





LATIN 1998


Noga Alon (Tel Aviv Univ.), Spectral Techniques in Graph Algorithms
Richard Beigel (Lehigh Univ.), The Geometry of Browsing
Gilles Brassard (Univ. of Montréal), Quantum Cryptanalysis of Hash and Claw-Free Functions
Herbert Edelsbrunner (Univ. of Illinois, Urbana-Champaign), Shape Reconstruction with Delaunay Complex
Juan A. Garay (IBM, Yorktown Heights), Batch Verification with Applications to Cryptography and Checking





LATIN 1995


Josep Diaz, NC Approximations
Isaac Scherson, Load Balancing in Distributed Systems
J. Ian Munro, Fast, Space Efficient Data Structures
Alberto Apostolico, The String Statistics Problem
Mike Waterman, Probability Distributions for Sequence Alignment Scores





LATIN 1992


Jean-Paul Allouche (CNRS), \(q\)-Regular Sequences and other Generalizations of \(q\)-Automatic Sequences
Manuel Blum (U. California, Berkeley), Universal Statistical Tests
Kosaburo Hashiguchi (Toyohashi U. of Technology), The Double Reconstruction Conjectures about Colored Hypergraphs and Colored Directed Graphs
Erich Kaltofen (Rensselaer Polytechnic Institute), Polynomial Factorization 1987-1991
Arjen K. Lenstra (Bellcore), Massively Parallel Computing and Factoring
Gene Myers (U. of Arizona), Approximate Matching of Network Expressions with Spacers
Jean-Eric Pin (Bull), On Reversible Automata
Vaughan Pratt (Stanford U.), Arithmetic + Logic + Geometry = Concurrency
Daniel D. Sleator (Carnegie Mellon U.), Data Structures and Terminating Petri Nets
Michel Cosnard (Ecole Normale Supérieure de Lyon), Complexity Issues in Neural Network Computations

