PaperPicker

Online Journal Club for Networking Researchers

Jim Kurose gets us thinking

Posted by Michael Gellman on December 4, 2006

The morning started off with a fantastic talk by Jim Kurose from the University of Massachusetts. He is really a terrific speaker, with a very dynamic style.

Update! I spoke to Jim and he is fine with me posting the audio from his talk. The only caveat is that if you want to re-post this anywhere, you have to get his permission first.
[odeo http://studio.odeo.com/audio/3672363]

It’s the users, stupid!

I really like how he emphasizes the users in his work. In his most recent Sensor Nets project, he is very candid and forthcoming that, on the first go, the system they delivered was not adequate for its intended users, but that they went back, re-engineered it, and finally delivered a system that the users could really use. This focus on practical problems and usable systems is one of the things that got me interested in research, and Jim Kurose really embodies these aspects in his work.

Stovepipes vs Layers

He has a lot of interesting thoughts on larger issues for Sensor Networks, including a converged protocol stack, which I think is very compelling. Current approaches are what he calls “stovepipe networks”, where every solution is custom-engineered to meet its specific requirements. This could be improved upon if common functionality were extracted into layers that multiple implementations could leverage, much in the same way IP is a great solution to a number of networking problems.

By working on two very different physical implementations with similar goals, I believe he has gained a great deal of experience in what would be required in a layered architecture, and makes a compelling argument for one.

Other Topics

In addition to his own work, Jim also drew attention to a body of work he referred to as x-ities, which seems to be a way of evaluating seemingly qualitative properties such as “robustness” in a quantitative way. I need to look into this a bit more after the conference. He also briefly touched on his thoughts on IP showing its age: he has a set of slides on his website depicting IP as an aging man who gets “love handles” and grows ever more “fat” as he reaches middle age.

Jim Kurose is a great speaker, and if anyone has the opportunity to hear him speak, I recommend it highly. I can understand why he has been awarded the outstanding teacher award so many times!

I have an (unedited) audio recording of the talk. It’s not the best quality as I didn’t get the microphone that close, but it is surely audible. After I get permission from Jim, I’ll post it here.



Posted in CoNEXT, Conference | 1 Comment »

CoNEXT is underway!

Posted by Michael Gellman on December 4, 2006

The student workshop at CoNEXT 2006 is underway in Lisbon! I’ve been taking notes, and it’s been a very interesting day. I’ve got a series of posts planned which I’ll put up as I can.

Unfortunately, my digital camera seems to have crashed, so no photos. However, we got free MP3 players in our goodie bags, and I’ve recorded all the talks today. If the quality is any good, I’ll see if I can post them!


Posted in CoNEXT, Conference | Leave a Comment »

Controlling routing oscillations in adaptive routing protocols

Posted by Michael Gellman on November 20, 2006

I’ve been doing some work on ways to eliminate routing oscillations in an adaptive routing protocol. There’s a very good analysis using simulations in [4], but the current question I am investigating is whether there exists some analytical method for showing that the protocol will not oscillate and, if so, how long convergence takes. One (very) well-developed field that has the potential to answer this question is Game Theory.

Price of Anarchy

One of the best introductions to this topic that I found was the research monograph based on Tim Roughgarden’s Ph.D. thesis, Selfish Routing and the Price of Anarchy. The pace is good for setting the stage of how game theory, and specifically notions of Nash equilibrium, can be used to analyze networks. However, one of the assumptions the analysis makes is that the routing system will tend to equilibrium — which is exactly what we are questioning. Thus, I continued my search into some more recent work.

Convergence to Wardrop Equilibrium

It is here that I came across the work of Simon Fischer et al., which uses notions going back to the 1950s in traffic networks to demonstrate convergence of online routing algorithms. At STOC 2006, they published a paper [1] analyzing an adaptive routing protocol, giving an upper bound on the number of rounds it takes to reach equilibrium "that is independent of the size and the structure of the network". The bound depends principally on a parameter of the latency function which they call its relative slope. They then extend this theoretical work with a set of simulations [2] to be presented at this year’s CoNEXT conference. Using bursty traffic on both artificial and real topologies, they show that an implementation of their protocol in a traffic engineering context can lead to good online performance without oscillations (for suitable parameters).
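To make the flavour of such convergence results concrete, here is a toy sketch of Wardrop-style dynamics on two parallel links. This is my own illustration, not the actual protocol of [1]: traffic migrates from the slower link to the faster one at a rate proportional to the relative latency gain, and the split settles where the two latencies equalise.

```python
# Toy Wardrop dynamics on two parallel links. The latency functions and
# the update rule are illustrative choices, not those of Fischer et al.

def latency(x1):
    """Latencies for a split of one unit of flow: x1 on link 1, 1 - x1 on link 2."""
    return 2.0 * x1, (1.0 - x1) + 0.5

def simulate(steps=2000, rate=0.1):
    x1 = 0.9  # start with most of the traffic on link 1
    for _ in range(steps):
        l1, l2 = latency(x1)
        if l1 > l2:   # link 1 is slower: migrate some of its flow to link 2
            x1 -= rate * x1 * (l1 - l2) / l1
        else:         # link 2 is slower: migrate flow to link 1
            x1 += rate * (1.0 - x1) * (l2 - l1) / l2
    return x1

x1 = simulate()
print(round(x1, 3), [round(l, 3) for l in latency(x1)])  # -> 0.5 [1.0, 1.0]
```

The relative slope in [1] governs how aggressively such migration can proceed before oscillation sets in; here the small migration rate plays that role.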

No-regret Routing

In a similar vein of analysis, Blum et al. show in [3] that no-regret online routing algorithms approach a Nash equilibrium "at a rate that depends polynomially on the players’ regret bounds and the maximum slope of any latency function".
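A minimal multiplicative-weights ("Hedge") rule illustrates what "no regret" means; this is a much simpler sketch than the algorithms analysed in [3], and the latency numbers are made up. Each path's weight decays exponentially with the latency it exhibits, so the cumulative expected latency tracks that of the best fixed path in hindsight.

```python
import math

def hedge_regret(latency_rounds, eta=0.1):
    """latency_rounds: per-round pairs (latency of path 0, latency of path 1)."""
    w0 = w1 = 1.0
    expected_total = 0.0
    cum0 = cum1 = 0.0
    for l0, l1 in latency_rounds:
        p0 = w0 / (w0 + w1)                  # probability of picking path 0
        expected_total += p0 * l0 + (1 - p0) * l1
        cum0 += l0
        cum1 += l1
        w0 *= math.exp(-eta * l0)            # penalise each path by its latency
        w1 *= math.exp(-eta * l1)
    return expected_total - min(cum0, cum1)  # regret vs. the best fixed path

# Path 1 is consistently faster; the learner's regret stays small and
# grows sublinearly with the number of rounds.
rounds = [(1.0, 0.2)] * 100
print(round(hedge_regret(rounds), 2))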

Closing Thoughts

The question of whether adaptive routing protocols converge to an equilibrium or whether they oscillate ad infinitum is an important research question that is being addressed in different ways (e.g. experiments, simulation [4], and mathematical analysis [1-3]). One traditional argument against adaptive routing protocols (going back to ARPANet days [5]) is that they are unstable due to their selfish nature. This new body of evidence is gathering support for truly adaptive routing protocols which give good QoS to traffic in a completely distributed manner, all the while having good convergence properties.

References

  • [1] – Fischer, S.; Räcke, H. & Vöcking, B. "Fast Convergence to Wardrop Equilibria by Adaptive Sampling Methods" Proc. 38th Ann. ACM. Symp. on Theory of Comput. (STOC), ACM, 2006, 653-662
  • [2] – Fischer, S.; Kammenhuber, N. & Feldmann, A. "REPLEX — Dynamic Traffic Engineering Based on Wardrop Routing Policies" Proc. 2nd Conference on Future Networking Technologies (CoNext), 2006
  • [3] – Blum, A.; Even-Dar, E. & Ligett, K. "Routing without regret: on convergence to Nash equilibria of regret-minimizing algorithms in routing games" PODC ’06: Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, ACM Press, 2006, 45-52
  • [4] – Gao, R.; Dovrolis, C. & Zegura, E. "Avoiding Oscillations due to Intelligent Route Control Systems" IEEE Infocom, 2006
  • [5] – McQuillan, J.; Richer, I. & Rosen, E. "The New Routing Algorithm for the ARPANET" IEEE Transactions on Communications, 1980, 28, 711-719

Please comment to this post if you know of other references that are of interest, or additional points about the references I selected!

Posted in Equilibrium, Game Theory, Oscillations, Paper Review, research, Routing | 1 Comment »

CoNEXT 2006

Posted by Michael Gellman on November 19, 2006

CoNEXT 2006 Logo

I’m going to be attending CoNEXT 2006 in Portugal this year. CoNEXT describes itself as follows (from the Call for Papers):

CoNext 2006 will be a major forum in the area of future networking technologies. CoNext emphasizes synergies between various international and technical communities. The conference will feature a single-track, high quality technical program with significant opportunities for technical and social interaction among a closeknit community of participants. CoNext aims to be open and accommodating to multiple viewpoints and is committed to fairness in the review process and to returning deep and sound technical feedback to authors of submitted paper.

The program seems really compelling, with talks by a lot of researchers at the forefront of networking research. I’m really looking forward to the talk by Jon Crowcroft, and also to the paper by Simon Fischer of RWTH Aachen, which I’ve read and will be posting some thoughts on soon.

I’m going to be presenting a student poster on my Ph.D. work dealing with QoS routing in Peer-to-Peer overlay networks. If you’re attending this conference, be sure to post a comment — it would be great to chat about it in advance!

There’s also a great new Web 2.0-ish site dedicated to conferences, called ConFabb, where they have a dedicated entry for CoNEXT 2006.

Posted in Conference, QoS, research | 1 Comment »

Pursuing Research in Communications Networks

Posted by David Mayer on November 14, 2006

Introduction

Research in communications networks is a challenging enterprise that many find a worthy endeavour. Suppose we have answered the question of “why” to do networking research in the first place; let us move on to the second most important question, that of “how”.

Communications research can be seen as having attributes of both science and engineering. I think this also corresponds to the main reasons of “why”: it is interesting in its own right (the science bit, just as it is interesting to research the age of the universe or the structure of our genes), and it may bring a practical benefit to society (the engineering bit, “I’m doing something useful”). Some networking research is very theoretical (i.e. unlikely to benefit any implementation) and yet interesting, while the benefit to society stems best from practically-oriented research.

In this article, I would like to present some ideas for constructing a worthy research topic and research work in the field of communications networks. In particular, we will use network Quality of Service (QoS) research as an example.

3 Strategic Considerations

We have identified the following strategic considerations:

1. Timeliness

As noted by J. Crowcroft et al. in [1], the time required to research, engineer and retrofit a specific solution is typically longer than the remaining life-span of the congested core. If research commences only when there is an immediate need for it, it is too late. The ability to envisage future needs is essential here. We will present three drivers of new networks which help in predicting the future.

2. Deployability

The lack of attention to non-research issues was the main reason behind the failure of network QoS research. Deployability encompasses various operational (practical) and economic issues.

3. Functionality

Given the need for a new mechanism, what is the best thing for it to do? To illustrate the problem, consider that there is general consensus on the need for QoS, but much less agreement on its specific functionality, as seen from the existence of proposals as different as IntServ and DiffServ. We will try to show that functionality is tightly connected to deployability, and recommend that a proposed mechanism should maximise benefits subject to the constraints given by deployability considerations.

1 – Timeliness

Research must commence a long time before there is a need for it [1]. Obviously, this requires us to predict the future. We shall now look at some examples of how this prediction could be done. The key question is: What causes a need for an improvement? The answer enables us to start relevant research at the right time. In principle, the driving forces are, according to K. Cambron, president and CEO of AT&T Labs [2]:

A) Changing demand

B) Decreasing entropy

C) Changing technology

A – Changing demand

This includes the volume of traffic, but also its type and patterns, and the emergence of new applications. As an example, let us take a look at the Google Tech Talk by Van Jacobson [3]. Jacobson reviews the history of telecommunication networks and the success (disaster) of TCP/IP, and finally argues that our current networking protocols are inadequate because they were designed for a conversational network, where two people or machines talk to each other, while today over 99% of network traffic comes from a machine acquiring named chunks of data (web pages, emails). The main point is that we are doing dissemination over a conversational network, which has numerous inefficiencies. That was an example of a driving force behind new networks: a change in demand.

B – Decreasing entropy

This term is used in [2] to describe the ability of a network to deal with new problems. Cambron argues that in order to meet new demands, changes to the network are made and “every design choice removes a degree of freedom, solving an immediate problem, but eliminating potential solutions to other problems that lie in the future”. At some point the network becomes incapable of dealing with new demands and must be replaced by a new network.

C – Changing technology

Obviously, the increasing bandwidth of core and access links has a fundamental impact on when and what QoS is needed.

2 – Deployability

The failure of QoS research to deliver a practical solution is ascribed to researchers’ lack of attention to practical issues. Above all, it is the lack of feedback from the operational network environment (or the unwillingness of academia to interact). An excellent example of some practical problems is [4]. Here we list some key points.

Operational considerations

  • Rift between networking operations and protocol design.
    There seems to be a range of difficulties in the operational network that researchers have never heard of and therefore will not consider in their design. The list includes complexity from the configuration point of view, proneness to operator error, debugging tools, incremental deployability and so on.
  • Complexity. In [4], G. Bell argues that IP multicast defines the limit-case for deployable complexity. Quoting [4],

    “Eventually, we concluded that implementing IP multicast required a substantial commitment of engineering resources, and that it would inevitably impair the stability of unicast routing due to the need for frequent OS upgrades and intrusive testing. Many enterprises do not have the mandate, or see a justification, for making this commitment.”

    A good example of operational issues arising with the deployment of QoS is Davie’s 2003 paper “Deployment experience with differentiated services” [5].
    (Network administrators will not risk destabilising their network.)

Economic considerations

Quoting [6],

“Solutions require path paved with incentives to motivate diverse agents to adopt it, implement it, use it, interface with it, or just tolerate it. Without that path you’ve wasted your time”.

The bottom line that justifies QoS research from the economic point of view is that QoS provides network operators with a means to extract value from their network by charging for different levels of service [1]. However, this fact alone does not guarantee adoption by operators, and QoS research must account for all practical issues to become successful.

For an in-depth analysis of different political and socio-economic forces in the Internet, see [7] by D. Clark et al. and [8].

Note on deployability

The ease with which one can deploy an idea depends very much on the network layer in question. It is easy to deploy a novelty at the application layer, but very difficult at Layer 3, and nearly impossible at the layers below that. When choosing a research topic while aiming at practical impact, one should take this dependency into consideration. (There are probably many people of the calibre of the Google guys Larry Page and Sergey Brin working on the lower layers, but it takes a Cisco-sized company to make a difference there.)

3 – Functionality

Researching and engineering particular mechanisms can be seen as finding the best trade-off between functionality and operational feasibility. For example, it is fairly well established that the most desirable feature in the current Internet would be some notion of QoS on an end-to-end basis (i.e. from the source to the destination, across domains). However, if the research community were asked what that QoS should be, there would be no definitive answer. Guaranteed performance metrics? Differentiated service? How differentiated? Simple congestion feedback?

Perhaps the answer should be: The most beneficial solution subject to time and deployability constraints.

Summary

To propose and conduct research in networking requires a profound and wide understanding of the theoretical and practical constraints involved.

We have identified three critical considerations: timeliness, deployability and functionality, presenting a few points that can help in assessing them. We argued that a design should maximise benefit subject to the practical constraints stemming from deployability issues.

In the case of QoS research, functionality on its own is only a small part of the recipe for success. Only proposals that fully respect real-world issues have a chance of making it into the operational network.

For those not yet in a position to make judgements about a field’s future development or its operational and economic aspects, yet who must invent a research topic (such as a Ph.D. student with the freedom, and requirement, to come up with one), a safer alternative approach is possible:

  1. Choose a research area according to your interest, opportunities to contribute or instinct.
  2. Spend several months reading and writing simulations to obtain a profound understanding of the problem in question.
  3. Find a point of potential improvement and propose the improvement. From there on you can slowly start getting the whole picture, while working and contributing.

I hope this article provides some rough guidelines as to what to look out for when commencing a research project in the area of communications networks.

References

[1] Jon Crowcroft, Steven Hand, Richard Mortier, Timothy Roscoe, and Andrew Warfield. Qos’s downfall: at the bottom, or not at all! In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 109–114, New York, NY, USA, 2003. ACM Press.

[2] G. K. Cambron. The next generation network and why we’ll never see it. Communications Magazine, IEEE, 44(10):8–10, 2006.

[3] Van Jacobson. A new way to look at networking, 2006. Google Tech Talks. http://video.google.com/videoplay?docid=-6972678839686672840&q=van+jacobson&pr=goog-sl.

[4] Gregory Bell. Failure to thrive: QoS and the culture of operational networking. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 115–120, New York, NY, USA, 2003. ACM Press.

[5] Bruce Davie. Deployment experience with differentiated services. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 131–136, New York, NY, USA, 2003. ACM Press.

[6] K. C. Claffy. Top problems of the Internet and how to help solve them, 2005. http://www.caida.org/publications/presentations/2005/topproblemsnet/topproblemsnet.pdf.

[7] David D. Clark, John Wroclawski, Karen R. Sollins, and Robert Braden. Tussle in cyberspace: defining tomorrow’s internet. In SIGCOMM ’02: Proceedings of the 2002 conference on Applications, technologies, architectures, and protocols for computer communications, pages 347–356, New York, NY, USA, 2002. ACM Press.

[8] L. Burgstahler, K. Dolzer, C. Hauser, J. Jahnert, S. Junghans, C. Macian, and W. Payer. Beyond technology: the missing pieces for qos success. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 121–130, New York, NY, USA, 2003. ACM Press.

Posted in deployability, QoS, research | 18 Comments »

Packet Reordering

Posted by Michael Gellman on November 7, 2006

I’ve been doing some literature surveys recently on the topic of measuring Packet Reordering, and, for what should be a simple thing to quantify, it has attracted a surprisingly large body of research. In this post, I’ll talk about packet reordering, why it’s important to consider, and approaches to quantify it.

What is Packet Reordering?

When a sender generates a traffic stream (this could be anything, from TCP traffic to real-time voice or video), it generates an in-order sequence of data packets. Due to any of a number of causes, which we discuss below, the order in which packets are received at the destination may differ from the order in which the sender generated them. We refer to this phenomenon as reordering.

In their IMW 2002 paper [1], Jaiswal et al. cite three main causes for packets being received in a different order than that in which they were generated by the source:

  • Retransmission. In this case a sender infers that a packet has been lost and retransmits the packet. The retransmitted packet will have a sequence number that is smaller than previously observed packets at the measurement point and hence will be deemed “out-of-sequence.”
  • Network duplication. In this case, a non-sender-retransmitted copy of a packet is observed. This can occur when the measurement point is within a routing loop (and hence the same packet is observed more than once), or if the network itself creates a duplicate copy of the packet.
  • In-network reordering. In this case, the network inverts the order of two packets in a connection (for example, because of parallelism within a router, or a route change).

In the full version of their paper, delivered at INFOCOM 2003 [2], Jaiswal et al. present the results of a large measurement study into the extent of reordering in the Internet. Their work makes a number of methodological contributions to classifying TCP packets observed in the network backbone. From their study, they found that many of the out-of-order packets observed in the network were due to retransmissions, and only a small fraction of packets (on the order of 1%, on average) were reordered by the network.

Why is it a problem?

Bennett, et al [3] enumerate five negative consequences of reordering on TCP traffic:

  1. Reordering causes unnecessary retransmissions
  2. Reordering makes it difficult or impossible to grow TCP’s congestion window
  3. Reordering can cause actual packet losses to be obscured, so that TCP retransmits lost packets later than it normally would
  4. Reordering may cause the round-trip estimator to poorly estimate the round-trip time
  5. Reordering may reduce the efficiency of the receiving TCP

How can we measure it?

A straightforward method for measuring the number of out-of-order packets in the network is to count any packet whose sequence number is smaller than that of a previously observed packet in the same connection [1]. This is also known as the Type-P-Reordered metric in [4].

This approach has the advantage that it is simple to compute and results in a single metric. However, this simplicity leads to subtle problems that have been highlighted by various researchers. One particular drawback is that a single out-of-order packet with a high sequence number will cause many subsequent packets to be classified as reordered, skewing the results. This has led some researchers to call for a more sophisticated metric that takes the degree of reordering into account.
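The sequence-number test just described fits in a few lines; this is a sketch of the idea, not a reference implementation of [4]. The second example shows the skew problem mentioned above:

```python
def count_reordered(seq_numbers):
    """Count packets whose sequence number is below the highest seen so far."""
    highest = None
    reordered = 0
    for s in seq_numbers:
        if highest is not None and s < highest:
            reordered += 1       # arrived after a higher-numbered packet
        else:
            highest = s          # in sequence: advance the running maximum
    return reordered

print(count_reordered([1, 2, 5, 3, 4]))  # packets 3 and 4 arrive late -> 2
print(count_reordered([5, 1, 2, 3, 4]))  # one early packet flags all four -> 4
```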

One approach, proposed by Piratla et al. in [5], is known as reorder density: instead of a single value for desequencing, it produces a histogram of reordering in which the degree to which each packet arrived out of place is taken into account. They compare their method with many other proposed reordering metrics in [6].
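A much-simplified sketch of the reorder-density idea, assuming no packet loss or duplication (cases the full metric in [5] handles carefully): histogram each packet's displacement, i.e. its arrival position minus the position it would have had in order.

```python
from collections import Counter

def reorder_density(arrival_order):
    """arrival_order: the 0-based sequence numbers in their order of arrival."""
    displacements = Counter()
    for position, seq in enumerate(arrival_order):
        displacements[position - seq] += 1  # 0 = in place, +d = late, -d = early
    n = len(arrival_order)
    return {d: count / n for d, count in sorted(displacements.items())}

# Packet 3 arrives one slot early and packet 2 one slot late:
print(reorder_density([0, 1, 3, 2, 4]))  # -> {-1: 0.2, 0: 0.6, 1: 0.2}
```

Unlike the single-value metric, the early packet and the late packet each contribute only their own displacement, so one stray packet no longer skews the whole measurement.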

Conclusion

I hope this has given you an overview of what Packet Reordering is, why it is harmful to network flows, and some of the ways that it can be measured.

For further references on this topic, a great resource has been created by Dr. Piratla.

References

  • [1] – Jaiswal, S.; Iannaccone, G.; Diot, C.; Kurose, J. & Towsley, D. “Measurement and classification of out-of-sequence packets in a tier-1 IP backbone” IMW ’02: Proceedings of the 2nd ACM SIGCOMM Workshop on Internet measurment, ACM Press, 2002, 113-114
  • [2] – Jaiswal, S.; Iannaccone, G.; Diot, C.; Kurose, J. & Towsley, D. “Measurement and classification of out-of-sequence packets in a tier-1 IP backbone” INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies. IEEE, 2003, 2, 1199-1209
  • [3] – Bennett, J. C. R.; Partridge, C. & Shectman, N. “Packet reordering is not pathological network behavior” IEEE/ACM Trans. Netw., IEEE Press, 1999, 7, 789-798
  • [4] – Morton, A.; Ciavattone, L.; Ramachandran, G.; Shalunov, S. & Perser, J. “Packet Reordering Metric for IPPM” 2006
  • [5] – Piratla, N. M.; Jayasumana, A. P. & Bare, A. A. “Reorder Density (RD): A Formal, Comprehensive Metric for Packet Reordering.” Proc. IFIP Networking Conference, 2005, 78-89
  • [6] – Piratla, N. M.; Jayasumana, A. P. & Bare, A. “A Comparative Analysis of Packet Reordering Metrics” COMSWARE, 2006

Posted in Paper Review | 3 Comments »

PaperPicker Launches!

Posted by paperpicker on October 18, 2006

Welcome to our contribution to the Blogosphere, PaperPicker!

What is this?

We envision this blog as an online Journal Club aimed at networking and telecommunications researchers. This is a tool used in science to keep up-to-date with new and exciting developments in the field, and we’d like to see how well it works for us.

We’ll pick interesting articles that appeal to us for one reason or another, and write a review of why we liked it, and what we feel its overall contribution is with respect to other works that we’re aware of.

Who are we?

We’ll introduce who we are as we go along, but a brief bio is that we are two Ph.D. students involved in different areas of networking who enjoy reading academic articles, and keeping abreast of what others are doing.

What Now?

We’ve picked a few recent papers that we like to start off the blog, and we’ll be posting some reviews in the coming days. If you come across any papers you think we would find interesting, please just drop us a note at paperpicker@googlemail.com. Also, we welcome any and all comments you have so feel free to leave them on any of the reviews!

Cheers!

Mike and David

Posted in News | Leave a Comment »