Tutorial Program
Tuesday, Oct 2
9:00 AM - 10:30 AM Tutorial Track 1A   Location: Severin
Quantitative System Design
Mary K. Vernon (University of Wisconsin, USA)
Tutorial Track 2A   Location: Martin & Augustin
Stochastic Model Checking for Performance and Dependability Evaluation
Boudewijn R. Haverkort (University of Twente, The Netherlands)
11:00 AM - 12:30 PM Tutorial Track 1B   Location: Severin
Quantitative System Design
Mary K. Vernon (University of Wisconsin, USA)
Tutorial Track 2B   Location: Martin & Augustin
Routing in Wireless Networks
Don Towsley (University of Massachusetts, Amherst, USA)
2:00 PM - 5:30 PM Tutorial Track 3   Location: Severin
Algorithmic Self Assembly and Molecular Computing: New Challenges in Performance Evaluation
Ed Coffman (Columbia University, USA)
Tutorial Track 4   Location: Martin & Augustin
Security and Cooperation in Wireless Networks
Levente Buttyan (BME, Hungary) and Jean-Pierre Hubaux (EPFL, Switzerland)

Tutorial Track 1A+B 
Quantitative System Design
Mary K. Vernon (University of Wisconsin, USA)
Abstract:
Over the past couple of decades, relatively simple customized analytic models have been used to design a wide range of commercially relevant architectures and systems with complex behavior. These systems include (1) optimized high performance parallel system memory architectures, protocols, interconnection networks, and semaphores, (2) new near-optimal Internet streaming and transport protocols, (3) high performance storage systems, (4) optimized large-scale Grid applications, (5) modern Internet enterprise servers, and (6) flexible biotech and other manufacturing systems. In each case, the customized quantitative design models have four distinct advantages. First, the models are relatively easy to develop, typically requiring on the order of 10% of the detailed simulator development time. Second, the models are highly accurate - often more accurate than detailed system simulations, and sometimes more accurate than the actual system implementation. Third, the customized models provide unique insight into system bottlenecks, implementation (or simulation) errors that need to be eliminated, the maximum feasible performance improvement, and/or the new system designs that have the highest cost/performance pay-offs. Finally, the models have provided important new system functionality. In other words, customized models are a key tool for competitive systems design and engineering.

Track 1A:
Using a variety of important system and network design questions, Tutorial 1A will cover
  1. examples of the unique insights and impact of quantitative design,
  2. how relatively simple models are customized to capture complex system behavior, and
  3. how the quantitative models can be used during system operation.
Attendees with no background in analytic modeling will learn the basic customized analysis techniques for a range of modern systems with shared resource contention, asynchronous forking, highly bursty request or packet arrivals, non-exponential service times, pipelined messages with blocking across multiple consecutive switches with finite buffers, customer or packet loss due to server overload or buffer overflow, and resources that are configured for nearly 100% (e.g., 0.9999) probability of providing immediate service to each arriving client. Attendees with a background in customized quantitative modeling will learn new techniques to add to their repertoire. The examples also illustrate a number of key principles that greatly facilitate quantitative systems design.
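The last technique in the paragraph above - provisioning resources so that an arriving client receives immediate service with probability 0.9999 - can be illustrated with the classic Erlang-C formula for the M/M/c queue. This is a hedged sketch for orientation only, not material from the tutorial itself, and the function names are invented for the example:

```python
import math

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability that an arriving customer must wait in an M/M/c queue.

    offered_load = arrival rate / per-server service rate; it must be
    smaller than `servers` for the queue to be stable.
    """
    a = offered_load
    # Partial sum of the Erlang series over 0 .. servers-1 busy servers.
    head = sum(a**k / math.factorial(k) for k in range(servers))
    tail = a**servers / math.factorial(servers) * servers / (servers - a)
    return tail / (head + tail)

def min_servers(offered_load: float, wait_prob: float) -> int:
    """Smallest server count c with P(wait) <= wait_prob."""
    c = math.ceil(offered_load) + 1  # need c > offered_load for stability
    while erlang_c(c, offered_load) > wait_prob:
        c += 1
    return c
```

For example, `min_servers(20.0, 1e-4)` sizes a 20-Erlang workload so that at most one arrival in 10,000 has to wait, i.e., immediate service with probability 0.9999.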

Track 1B:
Tutorial 1B will cover the related issue of high-fidelity system workload characterization and modeling. High-fidelity workload characterizations yield unique insights as well, while incomplete and/or inaccurate workload models are one of the pitfalls that can lead to inaccurate quantitative modeling as well as inaccurate simulation results. Using actual traces for four very different types of workloads - namely production parallel computing workloads at NCSA, production Internet media server workloads at Berkeley and Wisconsin, Gnutella peer-to-peer system workloads with clients from multiple continents, and Internet backbone and access link workloads - this tutorial will cover the insights as well as the following principles that facilitate high-fidelity workload models:
  1. Identify the workload parameters that have primary impact on the observed system performance.
  2. Avoid characterizing a mixture or aggregation of samples from different distributions that occur at different points in time or for different customer classes. For example, don't aggregate the measured job interarrival times from different periods of the day that have different average arrival rates.
  3. Use curve-fitting tailored to the parameter and the system design questions to determine the simplest accurate model distribution for each distinct parameter.
  4. Use conditional distributions to identify the key correlations among the parameters, and to capture the correlations in a form that (a) yields insight, and (b) can be used to generate representative synthetic workloads with the observed correlations.
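Principles 2 and 3 can be sketched in a few lines. The data, period names, and function names below are hypothetical, invented purely for illustration; the point is that fitting per period recovers the distinct rates that the aggregated mixture hides:

```python
import random
import statistics

def fit_exponential(samples):
    """MLE for an exponential distribution: rate = 1 / sample mean."""
    return 1.0 / statistics.mean(samples)

def fit_per_period(interarrivals_by_period):
    """Principle 2: fit each period of the day separately instead of
    aggregating samples drawn from different arrival-rate regimes."""
    return {period: fit_exponential(xs)
            for period, xs in interarrivals_by_period.items()}

random.seed(0)
# Hypothetical trace: busy daytime (rate 10/s) vs. quiet night (rate 1/s).
trace = {
    "day": [random.expovariate(10.0) for _ in range(5000)],
    "night": [random.expovariate(1.0) for _ in range(5000)],
}
rates = fit_per_period(trace)
```

Fitting a single exponential to the pooled day-plus-night samples would report a rate near neither regime, which is exactly the aggregation pitfall that principle 2 warns against.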
Tutorials 1A and 1B will cover all concepts from a system operation and measurement perspective. A few mathematical results that are needed will be defined in the tutorial. Thus, this tutorial is appropriate for all attendees with an interest in optimized system and network design.
Biography:
Mary K. Vernon received a B.S. degree with Departmental Honors in chemistry and the Ph.D. degree in computer science from the University of California at Los Angeles. In 1983 she joined the Computer Science Department at the University of Wisconsin-Madison, where she is currently Professor of Computer Science and Industrial Engineering. She has held visiting positions at the University of Washington and at Institut Eurecom.
Her research interests include quantitative analysis techniques for high performance network and systems design, optimized parallel (CMP) hardware/software co-design, storage system design, network protocol design, network traffic analysis, and workload characterization. She has co-authored over 80 technical papers including seven award papers - most recently one of three "fast track to ToN" papers at Infocom 2004, and the Best Paper Award at the 2005 USENIX Security Symposium.
Prof. Vernon has served as Program Chair of the 1990 ACM SIGMETRICS Conference, Program Co-Chair of Performance '96, and as Chair of ACM SIGMETRICS. She has also served on the editorial board of the IEEE Transactions on Parallel and Distributed Systems, the 1993 NSF Blue Ribbon Panel for High Performance Computing, the NSF CISE Advisory Board, and the Board of Directors of the NCSA.
She received the NSF Presidential Young Investigator Award in 1985, the ACM Fellow award in 1996, the UW-Madison Vilas Award in 2000, and the UW-Madison Kellett Award in 2006. She is a member of the IFIP WG 7.3 on Information Processing System Modeling, Measurement and Evaluation.

Tutorial Track 2A 
Stochastic Model Checking for Performance and Dependability Evaluation
Boudewijn R. Haverkort (University of Twente, The Netherlands)
Abstract:
In this tutorial I will present recently developed techniques and algorithms for model checking Markovian models. These techniques bring together elements from the world of verification (model checking) and elements of classical performance and reliability analysis. In doing so, a unified and versatile framework for the quantitative evaluation of complex systems arises. The presented techniques have been developed over the last 8 years and have received widespread adoption and application. In addition to outlining the main methodology, I will also present some example case studies.
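As a rough orientation to the kind of computation underlying such techniques (a sketch of one standard building block, not the tutorial's actual algorithms or tools): checking a time-bounded property of a continuous-time Markov chain reduces to transient analysis, commonly computed by uniformization:

```python
import math

def transient_probs(Q, p0, t, terms=100):
    """Transient distribution of a CTMC via uniformization.

    Q:  generator matrix (row sums zero), as a list of lists.
    p0: initial distribution.
    Returns p(t) = p0 * exp(Q t), computed as a Poisson-weighted sum
    over powers of the uniformized DTMC P = I + Q / q.
    """
    n = len(Q)
    q = max(-Q[i][i] for i in range(n)) or 1.0  # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    p = list(p0)                 # holds p0 * P^k, updated in place
    result = [0.0] * n
    weight = math.exp(-q * t)    # Poisson(k; q*t) weight, starting at k = 0
    for k in range(terms):
        for i in range(n):
            result[i] += weight * p[i]
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
        weight *= q * t / (k + 1)
    return result
```

Making the goal states absorbing before calling `transient_probs` turns this into the probability of reaching those states within time t, the quantity behind time-bounded reachability properties.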
Biography:
Boudewijn Haverkort (1964) obtained an engineering degree and a Ph.D. degree in computer science, both from the University of Twente, in 1986 and 1991 respectively. Since 2003, he has been the chairholder for Design and Analysis of Communication Systems at the University of Twente, the Netherlands.
Prior to that, he was, among other positions, a professor of performance evaluation and distributed systems at the RWTH Aachen, Germany, for seven years, a lecturer in computer science at the University of Twente for five years, and a visiting researcher in the Teletraffic Research Centre at the University of Adelaide. His research interests encompass the design and the performance and dependability evaluation of computer-communication systems, model checking, parallel and distributed computing, and fault-tolerant computer systems.
He has published close to 100 papers in international journals and conference proceedings, edited several books and conference proceedings, and wrote a monograph on model-based performance evaluation of computer and communication systems. Since 2005, he has served on the editorial board of Performance Evaluation. He is a member of the German GI and the ACM, and was recently appointed Fellow of the IEEE for contributions to the performance and dependability evaluation of computer and communication systems.

Tutorial Track 2B 
Routing in Wireless Networks
Don Towsley (University of Massachusetts, Amherst, USA)
Abstract:
In this tutorial we survey routing algorithms that have been developed for wireless networks. Specifically we survey four different classes of algorithms. These are:
  1. proactive and reactive routing algorithms, the latter of which include DSR and AODV
  2. backpressure routing algorithms
  3. data driven routing algorithms for wireless sensor networks such as diffusion routing
  4. epidemic-based routing for disruption tolerant networks
The first class of algorithms is in the process of being standardized. The backpressure algorithm, unlike the previous algorithms, exhibits provable optimality properties that make it highly desirable for future wireless networks. Sensor networks, in turn, present unique characteristics because of their data-centric nature; we will assess how this has affected the development of routing algorithms appropriate to that environment. Last, we will survey the use of epidemic-style routing algorithms appropriate for disconnected networks.
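As a rough sketch of the backpressure idea mentioned above (a minimal illustration of the general principle, not an algorithm taken from the tutorial): in each scheduling slot, each link routes the commodity with the largest positive backlog differential across it, and idles when no differential is positive.

```python
def backpressure_decisions(queues, links):
    """One scheduling slot of backpressure routing.

    queues: {node: {commodity: backlog}}
    links:  {(i, j): capacity}
    For each link (i, j), pick the commodity c maximizing the backlog
    differential Q_i[c] - Q_j[c]; route up to the link capacity if that
    differential is positive, otherwise leave the link idle.
    """
    decisions = {}
    for (i, j), cap in links.items():
        best_c, best_w = None, 0
        for c, backlog in queues[i].items():
            w = backlog - queues[j].get(c, 0)
            if w > best_w:
                best_c, best_w = c, w
        if best_c is not None:
            decisions[(i, j)] = (best_c, min(cap, queues[i][best_c]))
    return decisions
```

A full backpressure scheduler would additionally solve a max-weight problem over sets of non-interfering links; this sketch shows only the per-link differential rule that gives the algorithm its name.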
This tutorial is appropriate for researchers interested in obtaining a view of the state of routing algorithms in wireless networks in preparation for doing research in wireless ad hoc networks.
Biography:
Don Towsley holds a B.A. in Physics (1971) and a Ph.D. in Computer Science (1975) from the University of Texas. From 1976 to 1985 he was a member of the faculty of the Department of Electrical and Computer Engineering at the University of Massachusetts, Amherst. He is currently a Distinguished Professor at the University of Massachusetts in the Department of Computer Science. He has held visiting positions at IBM T.J. Watson Research Center, Yorktown Heights, NY; Laboratoire MASI, Paris, France; INRIA, Sophia-Antipolis, France; AT&T Labs - Research, Florham Park, NJ; and Microsoft Research Lab, Cambridge, UK. His research interests include networks and performance evaluation.
He currently serves as Editor-in-Chief of IEEE/ACM Transactions on Networking and on the editorial boards of the Journal of the ACM and the IEEE Journal on Selected Areas in Communications, and has previously served on numerous other editorial boards. He was Program Co-chair of the joint ACM SIGMETRICS and PERFORMANCE '92 conference and of the Performance 2002 conference. He is a member of ACM and ORSA, and Chair of IFIP Working Group 7.3.
He has received the 2007 IEEE Koji Kobayashi Award, 2007 ACM SIGMETRICS Achievement Award, 1998 IEEE Communications Society William Bennett Best Paper Award, and numerous best conference/workshop paper awards. Last, he has been elected Fellow of both the ACM and IEEE.

Tutorial Track 3 
Algorithmic Self Assembly and Molecular Computing: New Challenges in Performance Evaluation
Ed Coffman (Columbia University, USA)
Abstract:
Many claim that a glimpse of the future would show a major effort in evaluating the performance of computing paradigms in which myriad particles function autonomously in a process known as algorithmic self assembly. Currently, the most successful such process, originating in the famous experiments of Adleman, is a biochemical one in that molecules fashioned from DNA are the particles of choice, since they are manipulable and abundant in nature. The material covered by this tutorial draws from biology, chemistry, computer science, and mathematics, but the presentation will be pitched at a low level, with an undergraduate degree in any of the disciplines served by the Working Group being sufficient background. The mathematics will be lightweight, with proofs rarely needed; when one does appear, it is brief and intuitive. A dominant theme of performance analysis is the formulation of fluid limits.
Topics along with brief synopses are as follows:
Part I - Molecular computation by self assembly
  1. Synthesis of DNA molecules that perform elementary logic; the double crossover (DX) molecule as an example; the encodings of its four sticky ends (short DNA strand segments) determine the operation.
  2. Modeling the DX molecule by a Wang-like tile, and hence computation as tilings of geometric figures.
  3. Programs as sets of tile types. Illustration of elementary computations: the XOR molecule, parity check (mod-2 sum) of a bit string, and counting; implementing general logic circuits; Turing universality.
  4. Issues of program-size and time complexity. Abstraction to tiling rectangles, extension of fixed borders to growing borders.
  5. Evaluating performance in terms of computing (e.g., rectangle tiling) times, the TASEP correspondence to tiling, fluid-limit results, explicit measures of parallelism.
  6. Dealing with errors, a pulsing method applied to rectangle tiling, associating computing times with maximal-length monotone subsequences.
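As a toy illustration of items 2-3 above (an invented sketch, not Coffman's actual construction): a parity check by tiling can be read as a row of tiles, each tile matching the running parity on its west side and an input bit on its north side, and exposing the updated parity on its east side.

```python
# Hypothetical tile set for the mod-2 sum (parity) of a bit string.
# Each tile type is keyed by (west_glue, input_bit); its east glue
# carries the running parity.  A toy version of the XOR assemblies
# described in the tutorial.
TILES = {
    (p, b): p ^ b          # east glue = west parity XOR input bit
    for p in (0, 1) for b in (0, 1)
}

def assemble_parity(bits):
    """Grow a row of tiles left to right; the final east glue is the parity."""
    east = 0               # the seed tile exposes parity 0
    row = []
    for b in bits:
        east = TILES[(east, b)]
        row.append(east)
    return east, row
```

The "program" here is just the four tile types; the computation is the deterministic growth of the row, which is the sense in which tile self-assembly performs logic.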
Part II - Self assembly and chemical kinetics
  1. The linear-polymerization model of Adleman et al.
  2. Reaction rate equations (nonlinear, first-order ODEs) applied to self assembly of target polymers from an all-monomer initial state. Notions of yield and waste.
  3. A discrete event simulation model and its properties.
  4. Triangle self-assembly example, explicit results for self assembly times.
  5. The existence of phase transitions where waste in the absorbing state vanishes, sufficient conditions.
  6. Extension to reversible self assembly, controls to minimize self assembly times.
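Item 2 of Part II can be sketched numerically. The code below is an invented illustration under simplifying assumptions (irreversible aggregation with a constant reaction-rate kernel, forward-Euler integration); the model treated in the tutorial may differ:

```python
def polymer_yields(n_max=20, rate=1.0, dt=1e-3, steps=5000):
    """Euler integration of irreversible linear polymerization
    (constant-kernel Smoluchowski equations) from an all-monomer start.

    x[k] = concentration of length-k polymers, k = 1 .. n_max.
    dx_k/dt = (rate/2) * sum_{i+j=k} x_i x_j  -  rate * x_k * sum_j x_j
    """
    x = [0.0] * (n_max + 1)
    x[1] = 1.0  # all-monomer initial state
    for _ in range(steps):
        total = sum(x[1:])
        dx = [0.0] * (n_max + 1)
        for k in range(1, n_max + 1):
            gain = 0.5 * sum(x[i] * x[k - i] for i in range(1, k))
            dx[k] = rate * (gain - x[k] * total)
        for k in range(1, n_max + 1):
            x[k] += dt * dx[k]
    return x
```

The yield of a target length k is read off as x[k]; total mass (sum of k * x[k]) is conserved except for the small leak past the n_max truncation, which is one simple way to quantify "waste".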

Tutorial Track 4 
Security and Cooperation in Wireless Networks
Levente Buttyan (BME, Hungary) and Jean-Pierre Hubaux (EPFL, Switzerland)
Abstract:
Enter the era of wireless networks. The number of wireless phones surpasses the number of wired ones; millions of nomadic users connect routinely to wireless Local Area Networks (LANs); wireless devices are commonplace in private houses, factories and hospitals; ubiquitous computing is envisioned, with myriads of sensing and actuating devices which communicate wirelessly and enable applications that change our living and working environment.
At the same time, a new networking paradigm emerges. So far, wireless networks, such as cellular networks, interconnected devices of no or limited programmability in a highly centralized manner. Nowadays, wireless networks comprise powerful and versatile devices with an increasingly active role in the network operation. Often, such user devices become the wireless network, as is the case for self-organizing multi-hop ad hoc networks and, for example, mesh or vehicular networks.
Unfortunately, this evolution creates new vulnerabilities. Security weaknesses are discovered even in existing wireless networks, e.g., wireless LANs, with some of them painstakingly addressed a posteriori. As solutions devised for wired networks cannot be used as such to protect wireless networks, we believe their protection requires additional attention and a more systematic approach. In this tutorial, we explain how to redesign security and safeguard wireless networks against malicious attacks, and then how to thwart selfish user behavior and stimulate cooperation in wireless networks.
Tutorial content:
  1. Wireless Networks and New Challenges
    In the first part, we explain what is changing in wireless networks and why security must be redesigned accordingly. The evolution from centralized to self-organized operation and the programmability of end user devices open the door to sophisticated and hard-to-prevent attacks, and render greedy behavior a serious threat. Communication across multiple wireless links (hops) requires cooperative route discovery and packet forwarding. Embedded systems (e.g., sensors or cars) imply that human beings are not necessarily involved in communication anymore, while miniaturization leads to limited resources (computing power, energy, and bandwidth) that are too valuable to expend on sophisticated security mechanisms. Finally, the proliferation of wireless-enabled devices and the pervasiveness of these emerging technologies raise major privacy concerns. We motivate the material presented in Parts 2 and 3 by discussing all these challenges and the crucial role of trustworthiness for the deployment of such systems; we present mesh, vehicular, and sensor networks, as well Radio Frequency Identification (RFID) tags as examples.
  2. Thwarting Malicious Behavior
    In the second part, we focus on mechanisms thwarting malicious attacks. We present basic concepts and illustrate them with examples taken from concrete proposals in the literature. In particular, we concentrate on fundamental security issues, such as the establishment of secure associations among nodes, the secure discovery of communication paths in the network, including the security of neighbor and route discovery, the security of data communication, and the protection of the end-user privacy.
  3. Thwarting Selfish Behavior
    In this last part, we focus on the danger of greedy user behavior. We provide the appropriate theoretical background to model this problem, and we illustrate this topic by two examples: the first at the network layer, and the second at the MAC layer.
Biography:
Levente Buttyan was born in 1970 in Salgotarjan, Hungary. He received the M.Sc. degree in Computer Science from the Budapest University of Technology and Economics (BME) in 1995, and earned the Ph.D. degree from the Swiss Federal Institute of Technology -- Lausanne (EPFL) in 2002. From 1997 to 2002, Levente Buttyan worked in the group of Prof. Jean-Pierre Hubaux in the Laboratory of Computer Communications and Applications at EPFL. In 2003, he joined the Department of Telecommunications at BME, where he currently holds a position as an Associate Professor and works in the Laboratory of Cryptography and Systems Security (CrySyS). His research interests are in the design and analysis of security protocols for wired and wireless networks, including wireless sensor networks (WSN), vehicular networks (VANET), and delay tolerant networks (DTN).

Jean-Pierre Hubaux joined the faculty of EPFL in 1990. His research activity is focused on wireless networks, with a special interest in security and cooperation issues. In 1991, he designed the new curriculum in communication systems. He was promoted to full professor in 1996. In 1999, he defined some of the main ideas of the National Competence Center in Research named "Mobile Information and Communication Systems" (NCCR/MICS); this center (still very active nowadays) is often nicknamed "the Terminodes project". In 2003, he identified the security of vehicular networks as one of the main research challenges for real-world mobile ad hoc networks. In 2007, he completed a graduate textbook entitled "Security and Cooperation in Wireless Networks", with Levente Buttyan. He is the chairman of the steering committees of ACM Mobihoc and of ACM WiSec. He is a member of the Federal Communications Commission (ComCom), the "Swiss FCC". He held visiting positions at the IBM T.J. Watson Research Center and at the University of California at Berkeley.