







There will be two tutorials in the morning and two tutorials in the afternoon of Sunday, June 28, 2009. The tutorials will take place in two rooms: Grand Ballroom 101 and Grand Ballroom 102. 




Presenter: Emmanuel J. Candes, California Institute of Technology 


(09:00AM-12:30PM / Grand Ballroom 101) 



In many important problems, only partial information about an object of interest is available. In signal processing, for example, we may have far fewer measurements of an image we wish to reconstruct than the number of pixels in the image, as in common biomedical applications. In signal acquisition, we may only be able to sample an ultrawideband signal at a speed well below the Nyquist rate. In data analysis, we may be interested in knowing users' preferences for a collection of items, but observe their preferences for only a few (this is the famous Netflix Prize problem). In control, we would like to recover the full dynamics from just a few points on a trajectory. Other examples abound.
Perhaps unexpectedly, it is possible to recover objects of interest from a surprisingly small number of measurements in a very concrete and practical fashion. By now, there are both well-established and less well-known modern theories showing that one can recover compressible signals or compressible information tables from fewer samples or data points than were thought necessary. In this tutorial, we will survey some of these theories and trace some of their origins back to early work done in the 1950s. We will emphasize their practicality and the role played by convex optimization as a kind of exact interpolation principle for recovering the samples of the signal we have not seen, or the entries of the data matrix that were hidden from us. A good portion of the tutorial will explain that, surprisingly, some special convex programs, which underlie our reconstruction methods, have the same solutions as combinatorially hard problems. Because these theories are broadly applicable, the tutorial will move through several application areas that may be impacted, such as signal processing, biomedical imaging, control, and machine learning. Finally, we will discuss how these theories and methods have far-reaching implications for sensor design and other types of design. 
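The "fewer measurements than unknowns" theme of the abstract can be illustrated with a toy numerical experiment. The tutorial's reconstruction methods are convex programs (basis pursuit); as a self-contained stand-in we use the greedy orthogonal matching pursuit, a different but related algorithm that succeeds in the same sparse regime. All dimensions below are illustrative.

```python
# Recover a sparse length-100 signal from only 50 random measurements.
# (A sketch of the compressed-sensing setting; the tutorial's methods are
# convex programs, while this uses greedy orthogonal matching pursuit.)
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 3                        # signal length, measurements, sparsity

x_true = np.zeros(n)
support_true = rng.choice(n, size=k, replace=False)
x_true[support_true] = rng.normal(size=k)   # a k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x_true                              # m << n linear measurements

def omp(A, y, k):
    """Greedily add the column most correlated with the current residual."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With these dimensions the 3-sparse signal is recovered essentially exactly from half as many measurements as unknowns, even though the linear system is underdetermined.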





Emmanuel J. Candes received his B.Sc. degree from Ecole Polytechnique, Paris, in 1993, and the Ph.D. degree in statistics from Stanford University in 1998. He is the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. Prior to joining Caltech, he was an Assistant Professor of Statistics at Stanford University from 1998 to 2000. His research interests are in computational harmonic analysis, multiscale analysis, approximation theory, and statistical estimation and detection, with applications to the imaging sciences, signal processing, scientific computing, and inverse problems. Other topics of interest include theoretical computer science, mathematical optimization, and information theory. 



Dr. Candes received the Third Popov Prize in Approximation Theory in 2001, and the DOE Young Investigator Award in 2002. He was selected as an Alfred P. Sloan Research Fellow in 2001. He coauthored a paper that won the Best Paper Award of the European Association for Signal, Speech and Image Processing (EURASIP) in 2003.
He has also given plenary and keynote addresses at major international conferences including ICIAM 2007 and ICIP 2007. In 2005, he was awarded the James H. Wilkinson Prize in Numerical Analysis and Scientific Computing by the Society for Industrial and Applied Mathematics (SIAM). In 2006, he won the US National Science Foundation's highest honor: the Alan T. Waterman Award. He shared the 2008 IEEE Information Theory Society Paper Award for the paper "Near-Optimal Signal Recovery from Random Projections: Universal Encoding Strategies?" 






Presenter: Amin Shokrollahi, EPFL 


(09:00AM-12:30PM / Grand Ballroom 102) 



Digital media have become an integral part of modern life. Whether surfing the web, making a wireless phone call, watching satellite TV, or listening to digital music, a large part of our professional and leisure time is filled with all things digital. The replacement of analog media by their digital counterparts and the explosion of Internet use have had a perhaps unintended consequence. Whereas analog media were previously replaced by digital media largely to preserve quality, the existence of high-speed computer networks makes digital media available to potentially anyone, anywhere, at any time. This possibility is the basis for modern scientific and economic developments centered around the distribution of digital data to a worldwide audience. The success of web sites like Apple's iTunes store or YouTube is rooted in the marriage of digital data and the Internet.
Reliable transport of digital media to heterogeneous clients thus becomes a central and at times critical issue. Receivers can be anywhere, and they may be connected to networks with widely differing fidelities.
In this tutorial we will give an introduction to a new method for solving the data distribution problem. We take four fundamental data transmission problems as examples: delivery of data from one sender to one receiver over a long distance, delivery of data from one sender to multiple receivers, delivery of the same data from multiple senders to one receiver, and finally, delivery of data from many senders to many receivers. Examples of such data transmission scenarios are abundant: the first is encountered whenever a large piece of data is downloaded from a distant location; satellite data distribution or distribution of data to mobile receivers is a prime example of the second. The application space for the third example is emerging, and includes scenarios like disaster recovery, where data is replicated across multiple servers and accessed simultaneously from these servers. A prime example of the fourth scenario is popular peer-to-peer data distribution.
We argue that current data transmission protocols are not adequate to solve these data distribution problems, and hence lack the ability to solve some of today's and many of tomorrow's data delivery problems. This is because these transmission protocols were designed at a time when the Internet was still in its infancy, and the problem of bulk data distribution was not high on the agenda.
We then introduce fountain codes and show how they can be used to solve all of these data transmission problems at the same time. For a given piece of content, a fountain code produces a potentially limitless stream of data such that any subset of this data of size essentially equal to the original content is sufficient to recover the original data. Just as when filling a glass of water under a fountain it does not matter which particular drops fill the glass, with a fountain code it does not matter which particular pieces of output data are received, as long as their cumulative size is right. We introduce a very simple, but inefficient, fountain code and refine it to LT codes, the first class of efficient fountain codes, and then to Raptor codes, the state of the art in this field. We discuss tools that allow us to design these fountain codes and analyze their performance. We also briefly discuss Raptor codes that have been standardized for various data transmission scenarios. Towards the end of the tutorial, we mention how Raptor codes can be generalized to arbitrary symmetric memoryless channels (binary or not). 
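The fountain principle described above can be shown in a few lines of code. This is the "very simple, but inefficient" end of the spectrum: each output symbol is the XOR of a uniformly random subset of the source symbols, and the receiver decodes by Gaussian elimination over GF(2), not the fast peeling decoder that makes LT and Raptor codes practical. All sizes and names are illustrative.

```python
# Toy fountain code: outputs are random XOR combinations of k source bytes.
# Any ~k received symbols whose combination vectors are linearly independent
# over GF(2) suffice to decode, regardless of WHICH symbols arrived.
# (Real LT/Raptor codes use tuned degree distributions and peeling decoders;
# plain elimination as below costs O(k^3) and is shown only for clarity.)
import random

random.seed(1)
k = 8
source = [random.randrange(256) for _ in range(k)]   # k one-byte source symbols

def encode_symbol():
    """One fountain output: (bitmask of combined indices, XOR of those bytes)."""
    mask = 0
    while mask == 0:
        mask = random.getrandbits(k)                 # random nonempty subset
    value = 0
    for i in range(k):
        if mask >> i & 1:
            value ^= source[i]
    return [mask, value]

received = [encode_symbol() for _ in range(24)]      # a bit more than k symbols

def decode(rows, k):
    """Gauss-Jordan elimination over GF(2) on (mask, value) pairs."""
    pivots = [None] * k
    for p in range(k):
        r = next((row for row in rows
                  if row[0] >> p & 1 and all(row is not pv for pv in pivots)),
                 None)
        if r is None:
            raise ValueError("not enough independent symbols received")
        pivots[p] = r
        for other in rows:                           # clear bit p everywhere else
            if other is not r and other[0] >> p & 1:
                other[0] ^= r[0]
                other[1] ^= r[1]
    return [pv[1] for pv in pivots]                  # pivot p now holds source[p]

decoded = decode(received, k)
print(decoded == source)
```

Dropping any few of the 24 received symbols and decoding from the rest still succeeds as long as the remaining combination vectors span all k source symbols, which is the glass-under-the-fountain property.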





Amin Shokrollahi is a professor of Mathematics and Computer Science at the Ecole Polytechnique Federale de Lausanne (EPFL) in Switzerland. He has worked on a variety of topics, including coding theory, computational number theory and algebra, and computational/algebraic complexity theory. He is best known for his work on iterative decoding algorithms for graph-based codes, an area in which he holds a number of granted and pending patents. He is the co-inventor of Tornado codes and the inventor of Raptor codes. His codes have been standardized and successfully deployed in practical settings dealing with data transmission over lossy networks.




Prior to joining EPFL, Amin Shokrollahi held positions as chief scientist of Digital Fountain, member of the technical staff at Bell Laboratories, senior researcher at the International Computer Science Institute in Berkeley, and assistant professor in the Department of Computer Science at the University of Bonn. He is a Fellow of the IEEE. He was awarded the Best Paper Award of the IEEE Information Theory Society in 2002 for his work on iterative decoding of LDPC codes, the IEEE Eric Sumner Award in 2007 for the development of fountain codes, and the joint Communications Society/Information Theory Society Best Paper Award of 2007 for his paper on Raptor codes. 






Presenter: Balaji Prabhakar, Stanford University 


(2:00PM-5:30PM / Grand Ballroom 101) 



Networking has provided a rich set of research questions over the past five decades. The pursuit of these questions has led to the discovery of fundamental mathematical models and algorithms. Throughout, there has been a constant and fruitful interchange between theory and practice: new network technologies have required new models and algorithms and spurred theory; in turn, theoretical models have vastly improved our conceptual grasp of complex networks, and novel algorithms have been rapidly deployed in practice.
After the frenetic pace of growth of data networks in the past decade, research has entered a "clean slate" mode. This has allowed: (i) the rethinking of classical networking paradigms and the search for approaches that are more "fundamentally correct," and (ii) the trying out of these approaches in new networking contexts such as Data Centers and in testbeds of possible future Internets.
The tutorial aims to convey the excitement and the spirit of the "clean slate" approach by considering examples from recent research. Specifically, we shall look at: (i) network measurement, where message-passing algorithms over sparse random graphs allow the construction of compact counters, (ii) Data Center congestion management, where a new control-theoretic idea underlies a congestion control algorithm developed for the IEEE 802.1 Data Center standard, and (iii) flow- and packet-level models, which use ideas from statistical physics for modeling load-balancing systems and more general data networks. 
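The measurement example in (i) is about tracking counts for very many flows in far less memory than one counter per flow. The talk's construction uses message passing over sparse random graphs; as a simpler, hedged illustration of the same compact-counting goal, here is a count-min sketch, a different but standard structure with one-sided error. Names and parameters below are illustrative, not from the talk.

```python
# Compact per-flow counting: a count-min sketch. Memory is width*depth cells
# regardless of the number of distinct flows; queries never undercount but
# may overcount when flows collide in every row.
# (Stand-in for the talk's message-passing counters, not the same scheme.)
import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        # a per-row hash; blake2b used here just as a convenient stdlib hash
        h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def query(self, key):
        # min over rows: the estimate least inflated by collisions
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))

cms = CountMinSketch()
for pkt in ["flowA"] * 50 + ["flowB"] * 7:
    cms.add(pkt)
print(cms.query("flowA"), cms.query("flowB"))   # 50 7 (no collisions here)
```

The design trade-off mirrors the one in the abstract: exact per-flow counters need memory linear in the number of flows, while sketch-style structures trade a small, bounded overcount for memory that depends only on the accuracy target.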





Balaji Prabhakar is an Associate Professor of Electrical Engineering and Computer Science at Stanford University. His research interests are in network algorithms, scalable methods for network performance monitoring and simulation, and stochastic network theory. He has recently been working on Societal Networks, developing incentive mechanisms for problems such as road traffic congestion and recycling.
Prof. Prabhakar has been a Terman Fellow at Stanford University and a Fellow of the Alfred P. Sloan Foundation. He has received the CAREER award from the National Science Foundation, the Erlang Prize from the INFORMS Applied Probability Society, and the Rollo Davidson Prize, and has given the Lunteren Lectures. He has been a co-recipient of best paper awards at Hot Interconnects 2002, Infocom 2004, and Sigmetrics 2008. 






Presenter: David Tse, UC Berkeley 


(2:00PM-5:30PM / Grand Ballroom 102) 



The past decade and a half has seen many advances in wireless communication theory and practice, such as MIMO communications, space-time codes, and opportunistic communications. All of these techniques pertain to improving point-to-point, uplink (many-to-one), or downlink (one-to-many) spectral efficiency. In practice, however, the performance of most wireless communication systems is limited by interference between multiple communicating links. Unlike the point-to-point, uplink, and downlink cases, our information-theoretic understanding of interference channels has been very limited. For example, the capacity of the most basic two-user Gaussian interference channel has been open for 30 years. In the past three or four years, however, significant progress has been made on our fundamental understanding in this direction. The goal of this tutorial is to explain some of these new ideas. Specific questions we hope to answer are:
- How do we quantify the resource that multiple links are sharing via interference?
- What is the optimal way of sharing this resource?
- What is the role of feedback and cooperation in maximizing the utilization of this resource?
- What happens when there are multiple interferers?
We will give a few examples of potential applications of these ideas to wireless systems.
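A back-of-the-envelope computation shows why "how to share the resource" is a real question. For a symmetric two-user Gaussian interference channel, compare the per-user rate when each receiver simply treats the other link's signal as noise against orthogonal time-sharing, where the users split the channel and each transmits at double power in its half. The SNR/INR numbers are purely illustrative, and these two baselines are far from the capacity-approaching schemes the tutorial covers.

```python
# Two simple per-user rates (bits per channel use) for a symmetric two-user
# Gaussian interference channel. Illustrative numbers only; neither strategy
# is optimal in general.
from math import log2

def rate_treat_as_noise(snr, inr):
    # each receiver decodes its own signal, lumping interference with noise
    return log2(1 + snr / (1 + inr))

def rate_orthogonal(snr):
    # users alternate in time; each uses half the slots at twice the power
    return 0.5 * log2(1 + 2 * snr)

snr, inr = 100.0, 50.0            # strong interference regime
print(rate_treat_as_noise(snr, inr))   # ~1.57 bits/use
print(rate_orthogonal(snr))            # ~3.83 bits/use: orthogonalizing wins here
```

At low INR the ordering reverses (with inr near 0, treating interference as noise approaches the full single-user rate log2(1 + snr) without paying the time-sharing factor of 1/2), which is exactly the kind of regime dependence the tutorial's questions formalize.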






David Tse received the B.A.Sc. degree in systems design engineering from the University of Waterloo, Canada, in 1989, and the M.S. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology in 1991 and 1994, respectively. From 1994 to 1995, he was a postdoctoral member of technical staff at AT&T Bell Laboratories. Since 1995, he has been with the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, where he is currently a Professor. He received a 1967 NSERC four-year graduate fellowship from the government of Canada in 1989, an NSF CAREER award in 1998, Best Paper Awards at the Infocom 1998 and Infocom 2001 conferences, the Erlang Prize in 2000 from the INFORMS Applied Probability Society, the IEEE Communications and Information Theory Society Joint Paper Award in 2001, the Information Theory Society Paper Award in 2003, and the 2009 Frederick Emmons Terman Award from the American Society for Engineering Education.





He has given plenary talks at international conferences such as ICASSP in 2006, MobiCom in 2007, CISS in 2008, and ISIT in 2009. He was the Technical Program co-chair of the International Symposium on Information Theory in 2004, and was an Associate Editor of the IEEE Transactions on Information Theory from 2001 to 2003. He is a co-author, with Pramod Viswanath, of the text "Fundamentals of Wireless Communication", which has been used in over 60 institutions around the world. 

