ADGA 2015

ADGA is a one-day workshop that focuses on the theoretical foundations of distributed graph algorithms. The program consists of several invited talks, which are aimed at the general DISC audience.

ADGA 2015 will be held on Tuesday, 6 October 2015 in Tokyo, Japan, and will be co-located with DISC 2015. The workshop will be chaired by Keren Censor-Hillel.

For information about venue and registration, please go to the DISC 2015 website.

For information on past and future ADGA workshops, see the web site of the ADGA workshop series.

Schedule:

(Takes place in the HOUHOU room on the 4th floor at the Arcadia Ichigaya Hotel)

09:00-10:00 Merav Parter:
Fault Resilient Graph Structures slides

10:00-10:15 Coffee Break
10:15-11:15 Seth Gilbert:
Bob the Builder vs. Fix-it-Felix: maintaining overlays in dynamic graphs slides

11:15-11:30 Coffee Break
11:30-12:30 Stefan Schmid:
The Zen of Consistent Distributed Network Updates slides

12:30-14:00 Lunch

14:00-15:00 Dana Ron:
On Centralized Sublinear Algorithms and Some Relations to Distributed Computing slides

15:00-15:15 Coffee Break
15:15-16:15 Sriram Pemmaraju:
Designing Algorithms for the Congested Clique slides

16:15-16:30 Coffee Break
16:30-17:30 Peter Robinson:
Distributed Computation of Large-scale Graph Problems slides

18:00-21:00 DISC reception (Arcadia Ichigaya Hotel)

Invited Speakers and Abstracts:

Seth Gilbert
National University of Singapore

Seth Gilbert is a Dean’s Chair Assistant Professor at the National University of Singapore. He received his PhD from MIT. His research focuses on algorithmic issues of robustness and scalability, wherever they may arise.

Bob the Builder vs. Fix-it-Felix: maintaining overlays in dynamic graphs

Over the last decade, there has been much effort devoted to the problem of building a good overlay network. [“Can we build it? Yes we can!”] At the same time, we want our overlay to tolerate significant churn in the underlying system, while continually maintaining or restoring a set of desirable properties, e.g., small diameter, small degree, and good expansion. [“We can fix it!”] In this talk, we will examine several approaches for achieving robust overlays (along with some new results), and discuss some key remaining open questions.

 

Merav Parter
MIT

Merav Parter is a Postdoctoral Fellow at MIT, hosted by Prof. Nancy Lynch. She received her Ph.D. in Computer Science from the Weizmann Institute of Science under the guidance of Prof. David Peleg. Her thesis, “The Topology of Wireless Communication and Applications”, won the first-place Feder Prize for the best student work in communication technology. Parter is a Rothschild Fellow and was a Google European Fellow in Distributed Computing in 2012. Her research interests focus on two aspects of reliable communication: fault-tolerant graph structures and wireless communication. She is particularly interested in bridging the gap between Electrical Engineering and Theoretical Computer Science.



Fault Resilient Graph Structures

A fault-tolerant (FT) structure for a network is required to continue functioning following the failure of some of the network’s edges or vertices. Fault resilience can be introduced into the network in several different ways. This talk will focus on a notion of fault tolerance whereby the structure at hand is augmented (by adding various components to it) so that, subsequent to the failure of some of the network’s vertices or edges, the surviving part of the structure is still operational. As this augmentation carries certain costs, it is desirable to minimize the number of added components. We will review several constructions of sparse fault-tolerant structures, such as FT spanners and FT shortest-path trees. I will also present a new model for fault-resilient network structures that mixes two orthogonal protection mechanisms: (a) backup, namely, augmenting the structure with many (redundant) low-cost and fault-prone components, and (b) reinforcement, namely, acquiring high-cost but fault-resistant components. A trade-off between these two mechanisms will be presented in the concrete setting of shortest-path trees.
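As a rough illustration of what such structures are for (not a construction from the talk), the sketch below shows the naive baseline that an FT shortest-path structure is designed to avoid: for every possible single edge failure in a toy graph, it simply recomputes all shortest-path distances from the source. The graph, edge weights, and function names are illustrative assumptions.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest-path distances in a weighted graph.

    adj: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of distances; unreachable nodes are simply absent.
    """
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def distances_under_single_edge_faults(edges, src):
    """Naive baseline: for every edge, delete it and recompute all distances.

    A fault-tolerant shortest-path structure answers the same queries from a
    single sparse augmented subgraph instead of |E| full recomputations.
    """
    results = {}
    for failed in edges:
        adj = {}
        for (u, v, w) in edges:
            if (u, v, w) == failed:
                continue  # simulate the failure of this edge
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))  # undirected graph
        results[failed] = dijkstra(adj, src)
    return results

if __name__ == "__main__":
    # Toy weighted graph (illustrative only).
    edges = [("s", "a", 1), ("a", "b", 1), ("s", "b", 3), ("b", "t", 1), ("a", "t", 4)]
    for failed_edge, dist in distances_under_single_edge_faults(edges, "s").items():
        print("edge", failed_edge, "fails ->", dist)
```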



Sriram V. Pemmaraju
The University of Iowa

Sriram Pemmaraju is a professor in the Department of Computer Science at The University of Iowa. He received his PhD from Virginia Tech and taught in the Department of Mathematics at IIT Bombay prior to coming to Iowa. His primary research interests are distributed graph algorithms, especially the role of randomization and approximation in such algorithms. He is also interested in applications of the probabilistic method in combinatorics. More recently, he has also become interested in problems of modeling infectious disease spread, especially in hospital settings.

 

Designing Algorithms for the Congested Clique

In the last five years, we have seen a flurry of research on the design of algorithms in a simple distributed model of computing that has now come to be known as the congested clique model. As the name suggests, the underlying communication network is a clique, with the restriction that in each round, each communication link can carry a single message of size O(log n) bits (in each direction). Many classical graph problems such as minimum spanning tree, shortest paths, and graph coloring have been considered in the congested clique setting. Typically, the input to these problems is a subgraph of the underlying clique such that each node initially only knows the edges incident on it. In this setting, even though n^2 messages can be exchanged over the network in a round, each node can only receive n messages, making it impossible for any node to quickly gain access to the entire input.

In this talk, I will use example problems to sketch some techniques for designing fast algorithms in the congested clique model. I will then make connections between the congested clique and well-known models of parallel and distributed computing such as MapReduce and the BSP model, as well as the more recently introduced k-machine model. For these connections to be fruitful, one needs to design congested clique algorithms that are fast and have small message complexity. I will end the talk by describing recent efforts at designing fast congested clique algorithms with low message complexity.
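For readers who want to experiment with the model described above, here is a minimal sketch of a congested clique round simulator: n nodes, all-to-all links, and at most one O(log n)-bit message per directed link per round. The API, the constant hidden in the O(log n) budget, and the toy "announce your ID" algorithm are all assumptions made for illustration, not material from the talk.

```python
import math

def run_congested_clique(n, rounds, compute_messages, deliver):
    """Toy simulator of the congested clique: n nodes, all-to-all links,
    at most one small (O(log n)-bit) message per directed link per round.

    compute_messages(u, state, r) -> dict {v: int_message}
    deliver(u, state, inbox, r)   -> mutates node u's local state
    """
    states = [dict() for _ in range(n)]
    bit_budget = 4 * max(1, math.ceil(math.log2(max(n, 2))))  # the constant 4 is arbitrary
    for r in range(rounds):
        outboxes = [compute_messages(u, states[u], r) for u in range(n)]
        inboxes = [dict() for _ in range(n)]
        for u in range(n):
            for v, msg in outboxes[u].items():
                assert v != u and 0 <= v < n           # clique links only
                assert msg.bit_length() <= bit_budget  # bandwidth restriction per link
                inboxes[v][u] = msg                    # at most one message per link
        for u in range(n):
            deliver(u, states[u], inboxes[u], r)
    return states

if __name__ == "__main__":
    # Toy one-round algorithm: every node announces its ID to everyone,
    # so each node receives exactly n - 1 messages and learns all IDs.
    n = 5

    def send_id(u, state, r):
        return {v: u for v in range(n) if v != u}

    def collect_ids(u, state, inbox, r):
        state.setdefault("ids", {u}).update(inbox.values())

    states = run_congested_clique(n, rounds=1,
                                  compute_messages=send_id, deliver=collect_ids)
    print(states[0]["ids"])  # {0, 1, 2, 3, 4}
```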

 

Peter Robinson
Queen’s University Belfast

Peter Robinson is an Assistant Professor in the High Performance and Distributed Computing Research Cluster at Queen’s University Belfast, United Kingdom. He received his PhD from the Vienna University of Technology and was a postdoctoral fellow at Nanyang Technological University (Singapore) and at the National University of Singapore. His current research interests include distributed processing of large-scale data and fault tolerance in dynamic communication networks such as peer-to-peer networks.




Distributed Computation of Large-scale Graph Problems

Motivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph, biological networks, and various social networks, we study a number of fundamental graph problems in the distributed message-passing model, where we have k machines that jointly perform a computation on an arbitrary n-node input graph (typically, n >> k). The graph is assumed to be randomly partitioned among the k machines, which is a common implementation in many real-world systems. We present several complexity lower bounds that quantify the fundamental limitations of solving graph problems in a distributed system. To complement our lower bounds, we present algorithms for problems such as verifying graph connectivity, computing PageRank, and triangle enumeration. We also discuss how our model relates to other distributed systems for large-scale graph processing.
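As a small aid to intuition, the sketch below spells out one natural reading of the random-partition assumption above: each vertex is assigned to one of the k machines uniformly at random, and a machine stores the edges incident on its own vertices. The names and the convention that both endpoints' machines see a crossing edge are assumptions, not details fixed by the abstract.

```python
import random
from collections import defaultdict

def random_vertex_partition(nodes, edges, k, seed=0):
    """Assign each vertex to one of k machines uniformly at random
    (the 'random vertex partition' reading of the model); each machine
    then stores the edges incident on its own vertices.
    """
    rng = random.Random(seed)
    home = {v: rng.randrange(k) for v in nodes}   # vertex -> machine
    local_edges = defaultdict(list)               # machine -> incident edges
    for (u, v) in edges:
        local_edges[home[u]].append((u, v))
        if home[v] != home[u]:
            local_edges[home[v]].append((u, v))   # crossing edge is seen by both sides
    return home, local_edges

if __name__ == "__main__":
    # Toy 6-node cycle split across k = 3 machines.
    nodes = list(range(6))
    edges = [(i, (i + 1) % 6) for i in range(6)]
    home, local = random_vertex_partition(nodes, edges, k=3)
    for m in range(3):
        print("machine", m,
              "vertices", [v for v in nodes if home[v] == m],
              "edges", local[m])
```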



Dana Ron
Tel-Aviv University

Dana Ron received her Ph.D. from the Hebrew University in 1995. Between 1995 and 1997 she was an NSF Postdoctoral Fellow at MIT. During the academic year 1997-8 she was a Bunting science scholar at Radcliffe and MIT. Since 1998 she has been a faculty member at the Faculty of Engineering, Tel Aviv University.
During the academic year 2003-4 she was a fellow at the Radcliffe Institute for Advanced Study, Harvard University.
Her research focuses on sublinear approximation algorithms, and in particular on property testing.


On Centralized Sublinear Algorithms and Some Relations to Distributed Computing

In this talk I will survey research on sublinear approximation algorithms for graphs in a centralized setting. Such algorithms do not read the entire graph, but rather sample random parts of the graph and compute approximations of various parameters of interest. In particular, I will discuss algorithms for estimating the following parameters:
(1) The average degree and the number of triangles
(2) The weight of a minimum spanning tree
(3) The size of a minimum vertex cover and a maximum matching
(4) The number of edges that should be added/removed in order to obtain various properties

The design of some of these algorithms is inspired by and/or related to distributed network algorithms.
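To convey the flavor of the query model behind item (1), here is a deliberately naive sketch: estimate the average degree by sampling vertices uniformly and issuing only degree queries, never reading the whole graph. The actual sublinear-time algorithms surveyed in the talk are considerably more refined (plain vertex sampling needs care when a few high-degree vertices carry most of the edges); all names in the sketch are illustrative assumptions.

```python
import random

def naive_average_degree_estimate(degree_query, n, samples, seed=0):
    """Naive sublinear estimator: sample vertices uniformly and average
    their degrees, using only degree queries (never reading the whole graph).

    NOTE: with a sublinear number of samples this plain estimator can badly
    underestimate the true average degree when a few high-degree vertices
    carry most of the edges; it only illustrates the query model, not the
    refined algorithms surveyed in the talk.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        v = rng.randrange(n)       # uniform random vertex id in [0, n)
        total += degree_query(v)   # one query to the graph oracle
    return total / samples

if __name__ == "__main__":
    # Toy graph given as an adjacency list; the "oracle" answers degree queries.
    adjacency = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: [5], 5: [4]}
    n = len(adjacency)
    estimate = naive_average_degree_estimate(lambda v: len(adjacency[v]), n, samples=1000)
    print("estimated average degree:", estimate)   # true value is 8/6 ≈ 1.33
```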

 

Stefan Schmid
TU Berlin & T-Labs, Berlin, Germany

Stefan Schmid is a Senior Research Scientist at T-Labs Berlin. He received his PhD from the Distributed Computing Group at ETH Zurich (advised by Roger Wattenhofer) and was a postdoc with Christian Scheideler at TU Munich and the University of Paderborn. In 2014, he was also a visiting professor at CNRS-LAAS in Toulouse, France, as well as at Université catholique de Louvain, Belgium. Stefan works at the interface of Distributed Systems and Algorithms, currently with a focus on fundamental problems in Software-Defined Networking and Cloud Data Stores. For more information, visit http://net.t-labs.tu-berlin.de/~stefan/.



The Zen of Consistent Distributed Network Updates

It is frequently claimed that software-defined networks simplify the network management, by providing a centralized view of the network state to a software controller. While a centralized network management may be nice to have in practice, this trend is somewhat disappointing from the PODC/DISC perspective, as it seems to render many algorithmic problems trivial. In this talk, I will try to convince the audience that this impression is wrong. By focusing on the fundamental problem of consistent and secure network updates, I will first show that even a setting with a single controller and a single switch, needs to be understood as a distributed system, which poses non-trivial challenges. Indeed, publicly available controller software ignoring the asynchronous setting contained fundamental bugs in the past. I will then discuss consistent network update algorithms for larger networks, provide an overview of different algorithms and tradeoffs, and highlight some algorithmic challenges. Finally, I will make the case for designing distributed control planes, arguing that to provide availability and robustness, logically centralized control networks should actually be physically distributed. Again, I will discuss algorithmic challenges resulting from such a distributed architecture.The talk will only require background in algorithms, but not in computer networking.