Online Journal Club for Networking Researchers

Archive for the ‘research’ Category

Vint Cerf on Current Internet Research Problems

Posted by David Mayer on October 1, 2008

In September this year the British Computer Society organised an international academic conference, chaired by Prof. Erol Gelenbe, featuring talks and papers under the theme of “Visions of Computer Science”. Among the keynote speakers was Vint Cerf, one of the fathers of the Internet’s underlying mechanics, currently holding the position of “Internet Evangelist” at Google. Vint Cerf gave a talk on the history of the Internet, its current issues and the concept of an Interplanetary Internet. In this article we would like to provide a handy list of what Vint Cerf considers the most important research problems concerning the current Internet.

List of research problems concerning the Internet

  • Security at all levels
  • Internet Erlang Formula
    Erlang formulas were used in telephony, whereby the call blocking probability could be calculated from the call arrival rate, call duration and the number of lines. By “Internet Erlang formula” Vint means some tool which could relate network parameters to the network performance perceived by users or applications. Vint identifies the problem in the lack of up-to-date models: the never-ending innovation in applications quickly invalidates any model assumptions, unlike in telephone networks.
  • Mobility is something that we have not handled well in the Internet
    Vint said that he had made a mistake in the design of TCP/IP by binding the TCP end-point identifier too tightly to the physical location (the IP address). Currently, a TCP user moving elsewhere destroys the TCP connection.
  • We are not exploiting the possibilities of
    • Multihoming – how do we take advantage of being connected to the Internet via multiple service providers at the same time (hence given several IP addresses simultaneously)
    • Multipath routing, whereby multiple routes would be used simultaneously, rather than switching to an alternative route only after the current one breaks
    • Broadcasting, especially in wireless networks where broadcast is a natural feature of the medium
  • Semantic web
    Vint asserts that there is a vast potential in creating machine-readable semantic relationships rather than mere hyperlinks. A searched expression may exist in several different contexts, and Internet users can help increase the semantic clarity of Internet content.
  • The problem of rotten bits
    If we do not retain the software that was used to create content in the past, that content will become invisible and meaningless in the times to come. Think of proprietary formats which are no longer supported by the company that developed them, or simply of older formats that are no longer supported.
  • Energy consumption
    One of Google’s data centers consumes 128 megawatts of power. Engineers increasingly take energy consumption into account as one of the design criteria when developing algorithms and computer architectures.
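The classical Erlang B formula alluded to above is easy to compute; here is a minimal Python sketch using the standard stable recursion (function and parameter names are my own):

```python
def erlang_b(lines: int, offered_load: float) -> float:
    """Call blocking probability for `lines` telephone lines offered
    `offered_load` erlangs (arrival rate x mean call duration),
    computed with the standard numerically stable recursion."""
    b = 1.0
    for m in range(1, lines + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

# With one line and one erlang of offered traffic, half the calls block.
print(erlang_b(1, 1.0))  # 0.5
```

The lack of any analogous closed form relating Internet parameters to user-perceived performance is exactly the gap Vint points at.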

A note on IPTV

Most people associate video with streaming video. However, only 15% of the video we watch constitutes real-time video. Vint asserts that the rest can and should be thought of as a mere file transfer, since the source is not real-time. This consideration lightens the otherwise stringent demands a real-time video transfer imposes on networks. Vint predicts that streaming will be only a minor aspect of video on the Internet.

A note on intellectual property

Vint notes that the Internet works by copying content, hence touching on the issue at the heart of intellectual property protection. Vint suggests that copyright infringement in the Internet context will need to be defined differently from mere copying.

A note on running out of capacity

Vint asserts that the backbone links composed of optical fibres are far from running out of capacity, but that the problems are real at the network edges. He sees this as a regulatory and economic issue.

A closing note

During the talk Vint presented a slide with a picture of a 1977 network demonstration he co-designed. The network managed to carry a real-time telephone call across three completely different networks – radio, landline and satellite – networks differing in modulation, bit-rate and loss-rate (a kind of VoIP in 1977!). How much has the Internet really changed since then? Apart from its unprecedented growth, little seems to have changed in its underlying foundations. This article presented a list of current research challenges as seen by Vint Cerf, a visionary who co-fathered these foundations, a visionary with a most amazing track record.


Posted in research, Uncategorized | Leave a Comment »

Network Congestion Control, M. Welzl, Wiley 2005. (A Book Review)

Posted by David Mayer on August 31, 2007

Network Congestion Control: Managing Internet Traffic — Michael Welzl (Wiley Series on Communications Networking & Distributed Systems 2005).

The control of Internet congestion is a field entangling numerous research, engineering and economic topics, each of which, on its own, offers only a limited perspective on the problem. Some books on this topic treat it as part of a broader field of network communications; others provide a narrow, formal mathematical exposition. Michael Welzl’s book is an exception. It exposes congestion control in several facets, always succinctly explaining the principles and anticipating the reader’s questions. Aimed mainly at starting PhD students and interested networking people outside the research community, it avoids formal mathematics and builds up the material through common sense. But it is far from trivial: the author provides a profound overview of the principles, problems and solutions and manages to cover a vast number of diverse topics, focusing on underlying principles, practicality and drawbacks. The author co-chairs the Internet Congestion Control Research Group and works at the University of Innsbruck in Austria.

An essential introduction to the subject is developed in chapter 2, in which the less informed reader becomes familiar with basic concepts such as control feedback, stability, queue management, scalability, incentives and fairness. Present technology is the topic of chapter 3, 70% of which is taken up by a fairly detailed description of TCP. The exposition is incremental and problem-motivated. The rest of this chapter briefly describes SCTP, RED and ATM’s Available Bit Rate service.

I see the main value of the book in its latter part. Chapter 4 is an excellent exposition of very recent experimental enhancements to congestion control. Most of this chapter is dedicated to TCP enhancements and active queue management enhancements. About ten recent queue management techniques are presented, including such specialities as RED with Preferential Dropping and Stochastic Fair BLUE. A view from the Internet provider’s side is briefly treated in chapter 5. The topics here share one property: they operate on a much larger timescale than those of the other chapters. They include traffic engineering, MPLS and QoS architectures. Although the level of detail here is much lower than in other chapters, this chapter puts the others into perspective. The last chapter makes for an exciting read: it presents a mixture of open problems waiting for the hungry PhD mind. It also contains the author’s personal view on the subject, which could be characterised as the common-sense reflections of a well-informed pragmatist.

Two appendices conclude the book. Appendix A shows some practical teaching techniques, Appendix B introduces the workings of the IETF.

As great as the book seems, I spotted a few over-simplifications:
Section 2.17.3 describes the concept of proportional fairness. The author states here that the maximisation of total utility maximises financial gain. I think this is quite misleading, because the term financial gain remains undefined and it is not clear from the text where the revenue comes from. Even if the reader knew that it is meant to come from congestion prices, this would still be problematic: it is true that revenue is maximised if the prices charged equal the congestion prices corresponding to the utility maximisation, but congestion prices are typically not meant to generate revenue, as they serve as a control mechanism.
Further, the author states that the maximisation of utility functions can be seen as an “optimisation (linear programming)”. But only the constraints are linear; the objective function is non-linear (concave), hence the optimisation problem is non-linear.
In section 5.1 the author explains the long-range dependence of Internet traffic. It is stated here that the Poisson process’s distribution flattens out as the timescale grows. Surely what is meant is that the autocorrelation function flattens out (as opposed to that of self-similar Internet traffic). [Update: See comments below by the book’s author.]
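To make the point about the non-linear objective concrete: in Kelly-style utility maximisation the objective is concave, and on a single shared link the weighted proportionally fair allocation even has a closed form. A small illustrative sketch (my own example, not from the book):

```python
def prop_fair_single_link(weights, capacity):
    """Maximise sum_i w_i * log(x_i) subject to sum_i x_i <= capacity.
    The log objective is concave (not linear), and the KKT conditions
    give the closed form x_i = w_i * capacity / sum(w)."""
    total = sum(weights)
    return [w * capacity / total for w in weights]

rates = prop_fair_single_link([1, 1, 2], 8.0)
print(rates)  # [2.0, 2.0, 4.0]
```

Note that at this allocation each user’s marginal utility w_i/x_i is the same, which is exactly the condition a linear programme could not express.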

A word of warning for PhD students
This book is a very good one, but I think one should be cautious about using some of the open problems as the basis of one’s PhD. The problems border on engineering rather than research, and in some colleges solving these problems cannot satisfy PhD requirements by definition. Ideally, solutions to these problems will come about as a byproduct of a more fundamental framework called the thesis. Perhaps these problems can motivate and trigger a PhD topic, but they should not constitute it.

Any alternative books out there?
A book treating congestion control in detail is The Mathematics of Internet Congestion Control by R. Srikant, 2004, Birkhäuser, but be aware: the exposition is quite dense and very formal. Another candidate is High Performance TCP/IP Networking by M. Hassan and R. Jain, 2003, Prentice Hall, which is limited to TCP and focuses on performance evaluation.

In summary, the material in this unique book is excellently exposed, a good balance between depth and clarity is kept throughout and the book should be keenly received by its audience.

Posted in Book review, congestion, Of Interest, QoS, research | 2 Comments »

Jon Crowcroft on Opportunistic Networks and Human Mobility

Posted by David Mayer on February 22, 2007

Jon Crowcroft is a computer science researcher at the University of Cambridge. You can get a fairly good idea about his virtues and attitudes by reading through his webpage and papers, notably “QoS’s downfall: at the bottom, or not at all!” [1] and “Report from the Clean Slate Network Research post-SIGCOMM 2006 Workshop” [2]. Jon gave a talk last week at Imperial College, London, about a daring vision of a ubiquitous network formed by humans, shown feasible by extensive, breathtaking experiments. In this article we will try to convey the key points of the talk.

All Together We Can Be One Big Mobile Ad-Hoc Wireless Network

The project Jon and his colleagues are working on concerns the delivery of data in a network with absolutely no infrastructure by making use of human mobility and local forwarding between wireless devices carried by people.

As Jon says:

[It’s about a] network that doesn’t have an end-to-end connectivity at any given moment, but you can construct a path by storing the data and then waiting a while, until things change and then forwarding them.

This can also be expressed as a STORE – CARRY – FORWARD paradigm with the following meaning:

  • “STORE” – store the data in a portable device we carry around (e.g. a mobile phone)
  • “CARRY” – carry the data in your device while you walk and travel around
  • “FORWARD” – send the data to another device over a short distance when appropriate, using some wireless protocol (e.g. Bluetooth) and a forwarding protocol

Note that this idea falls into the area of “delay-tolerant networking” and “opportunistic communications”.
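The simplest instance of store-carry-forward is epidemic spreading, where every contact copies the message. A toy sketch over a recorded contact trace (the trace format and function name are my own invention, not from the talk):

```python
def epidemic_delivery(contacts, src, dst):
    """Store-carry-forward in its crudest form: whenever a message carrier
    meets a non-carrier, the message is copied. `contacts` is a time-ordered
    list of (node_a, node_b) meetings; returns the index of the contact at
    which the destination first receives the message, or None."""
    carriers = {src}
    for i, (a, b) in enumerate(contacts):
        if a in carriers or b in carriers:
            carriers.update((a, b))
            if dst in carriers:
                return i
    return None

# A two-hop delivery: node 0 hands the message to node 1, who later meets node 9.
print(epidemic_delivery([(0, 1), (2, 3), (1, 9)], src=0, dst=9))  # 2
```

Real protocols, including those discussed later in the talk, replace “copy to everyone” with a selective next-hop decision to save storage and battery.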

The rest of the talk aims at two fundamental questions:

  • How can such a network ever work?
  • How do we design a suitable forwarding protocol?

To answer the former question, we have to understand human mobility. Put simply: how many people do we meet, and whom do those people meet in turn? These parameters allow us to estimate the chances that a message is delivered to its destination in a given time.

Adding Social Aspects to Human Mobility Modelling

As Jon notes, wireless networks have a dual nature – a physical one and a social one. Every person has multiple affiliations: a few family members, colleagues at work, friends you meet occasionally and, importantly, strangers who just happen to be in your proximity. It turns out that these social structures are vital to the feasibility of the network.

The iMote

(Picture taken from “Pocket Switched Networks: Real-world mobility and its consequences for opportunistic forwarding” by Chaintreau, A., et al. [3])

To gain some confidence in how the network described above would perform, an amazing experiment was carried out (see [3] for details). A large group of people were given a small Bluetooth-enabled device called an iMote (in the picture above), which they carried around for several days. All the iMote does is record contacts with other iMotes (time and ID). By processing the data from the iMotes at the end of the experiment, the authors could map them to the personal data of the participants (with their informed consent), which included their affiliations, social connections and other elements pertaining to their lifestyle.

It is a little hard to describe the results of the experiments concisely. But one thing is for sure: we are all connected pretty well. It is perfectly realistic to expect that a message from any sender reaches any destination; the property to optimise is the delivery time. Note that these results are conservative (i.e. they are likely to be better in the real world), as they stem from experiments with a much smaller number of participants than the real world, where everyone carries a wireless device.

The reader may be interested to learn that the social structure of the network can be characterised using the concepts of community cliques and centrality, and that the parameter crucial to the functionality of the opportunistic network is the inter-contact time, defined as the time interval between two consecutive contacts of a given pair of nodes. The experiments show that the inter-contact time distribution is heavy-tailed and approximately follows a power law. Quoting [3]: “This is contrary to the exponential decay of many mobility models put forward in the literature. As a result, opportunistic networking algorithms which have been designed around exponential models must be re-evaluated in the light of our observations.”
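The practical consequence of the heavy tail can be seen with a quick numerical experiment (my own illustration, not from [3]): Pareto-distributed inter-contact times with the same mean as an exponential put far more probability mass on long gaps.

```python
import random

def tail_fraction(samples, threshold):
    """Fraction of samples exceeding the threshold."""
    return sum(s > threshold for s in samples) / len(samples)

random.seed(42)
n = 100_000
# Exponential inter-contact times with mean 1.
expo = [random.expovariate(1.0) for _ in range(n)]
# Pareto(alpha=2, scale=0.5) via inverse-CDF sampling: also mean 1, power-law tail.
pareto = [0.5 / (1.0 - random.random()) ** 0.5 for _ in range(n)]

print(tail_fraction(expo, 10.0), tail_fraction(pareto, 10.0))
```

With these parameters the exponential essentially never produces a gap above 10 (probability e^-10), while the Pareto does so with probability (0.05)^2 = 0.25%, orders of magnitude more often. This is why algorithms tuned to exponential models can badly misjudge delivery delay.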

Having obtained solid confidence in the feasibility of such a network from the iMote experiments, the problem to tackle now is the design of a suitable forwarding protocol.

Designing a Forwarding Protocol

Imagine you are standing in a crowd, your next-generation phone storing some data to be sent. A forwarding protocol will decide which device in your proximity will be used to propagate your data further. Note that there is no holy grail in the search for a forwarding protocol, and the authors do not claim the proposed protocol is optimal.

When routing a packet in a wired network, we choose the next hop according to some parameter describing the next hop (e.g. its distance from the destination). What are the key parameters describing a node (a person) in terms of its suitability as the next hop?

  • Node’s popularity — how many other nodes it encounters
  • Node’s affiliations — what social groups it encounters

The forwarding protocol can be based on these characteristics in a number of ways. Jon takes us through simple protocols such as the RANK protocol, which forwards data to highly central nodes (the guy on the corner selling peanuts would be an absolutely brilliant forwarding node), or the LABEL protocol, which uses nodes with the same affiliation as the destination node, showing the probabilities of successful delivery as a function of delivery time. The BUBBLE protocol uses popularity to “bubble up” through different social groups until it finds a node with a good connection to the target group; it then travels between social groups using nodes’ affiliation labels.

In all this Jon emphasises the need to model heterogeneity at multiple levels. For example, a postman is a highly central node in the sense that he meets a lot of people, but no particular social group, whereas a conference chair may be an asocial loner with high centrality in geeky circles. The BUBBLE protocol reflects this.
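A toy version of the BUBBLE decision can be sketched as follows; the node fields (`community`, `global_rank`, `local_rank`) are illustrative stand-ins for the centrality and affiliation labels discussed in the talk, not the published protocol:

```python
def bubble_next_hop(current, neighbours, dest_community):
    """Toy BUBBLE-style forwarding decision: if any neighbour already belongs
    to the destination's community, hand the message to the locally most
    popular of them; otherwise bubble up to a globally more popular node.
    Returns None when no neighbour improves on the current carrier."""
    in_group = [n for n in neighbours if n["community"] == dest_community]
    if in_group:
        return max(in_group, key=lambda n: n["local_rank"])
    better = [n for n in neighbours if n["global_rank"] > current["global_rank"]]
    return max(better, key=lambda n: n["global_rank"]) if better else None

postman = {"community": "town", "global_rank": 9, "local_rank": 1}
geek = {"community": "conference", "global_rank": 3, "local_rank": 8}
me = {"community": "town", "global_rank": 2, "local_rank": 2}
# A message destined for the conference community goes to the geek, not the postman.
print(bubble_next_hop(me, [postman, geek], "conference") is geek)  # True
```

The example captures the postman-versus-conference-chair point above: global popularity only helps until a member of the target community is within reach.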

Issues, Bits & Pieces and a Conclusion

User privacy is a major concern here. Even though the forwarding protocol does not need concrete information about your affiliations, the idea that others might know about your social structure is disturbing. Another open issue is how people would be motivated to participate in the network. A highly popular node will surely drain its battery sooner, for example, and needs to be compensated for this. How do we include incentives in the network? And, naturally, how do we deal with malicious use?

Issues aside, we saw an exciting proposal for an opportunistic network shown feasible by extensive emulations. Reading through the paper “Network Architecture Research Considerations Or The Internet Conspiracy” in the online edition of the Computer Communication Review, you can notice that this project is well in line with the philosophy of no-network-architecture-at-all.

But most importantly, this work presents, in my opinion, a great example of bold, visionary and yet pragmatic research. If you are after fresh research ideas in the post-Internet era, keep your eyes on Crowcroft.

The presentation slides for the talk are now available here.


[1] Jon Crowcroft, Steven Hand, Richard Mortier, Timothy Roscoe and Andrew Warfield. “QoS’s downfall: at the bottom, or not at all!” In RIPQoS ’03: Proceedings of the ACM SIGCOMM Workshop on Revisiting IP QoS, pages 109–114, New York, NY, USA, 2003. ACM Press.

[2] Crowcroft, J. and Key, P. “Report from the Clean Slate Network Research post-SIGCOMM 2006 Workshop”. SIGCOMM Comput. Commun. Rev. 37, 1 (Jan. 2007), 75–78.

[3] Chaintreau, A., Hui, P., Crowcroft, J., Diot, Ch., Gass, R., Scott, J. “Pocket Switched Networks: Real-world mobility and its consequences for opportunistic forwarding”. Technical Report, Number 617. ISSN 1476-2986. University of Cambridge Computer Laboratory, February 2005.

Posted in Crowcroft, human mobility, lecture, opportunistic network, research | 4 Comments »

“How Fair Is Your Queue?” – Notes on talk by Prof. Levy

Posted by David Mayer on January 7, 2007

The task of inventing a good fairness metric for queues is a tough and important one. Such a metric would aid the design of a wide range of systems, policies and applications, including web servers, P2P applications, call centers and others. Even in 2006, though, there still is not a single widely accepted criterion.

In December 2006, Prof. Hanoch Levy from Tel Aviv University, Israel, gave a gripping talk on the topic of fairness in queues. The talk took place at Imperial College, London. In this article, based on Prof. Levy’s talk, we would like to draw the attention of networking researchers and system designers to this useful and practical topic.

What is queue fairness?

Before exposing the centrepiece of Prof. Levy’s talk, we should make clear what “queue fairness” actually is. Most researchers will be familiar with “flow fairness”, which attempts to make flows (streams of packets) fair to each other. As an example, take “flow rate fairness”, which makes flows fair to each other in terms of their rates, or “cost fairness”, which makes flows fair with regard to the cost they incur. By contrast, “queue fairness” asks: How fair is the service received by a job (or a customer) in one queue?

As an example, consider the following scenario: a large job arrives at a queue first and a very short job arrives a bit later. Is it fairer to serve the large job or the short job first? That obviously depends on the type of service, context and culture, but in any case it would be useful to have a metric quantifying the fairness of the policy.

Why bother with queue fairness at all?

  • The question is in fact redundant, because the very purpose of queues is fairness! Queues, as opposed to crowds, appeal to our notion of fairness; fairness is therefore an inherent part of queues.
  • People do care about fairness. A company can get sued if a customer loses money due to being treated unfairly.
  • In order to design a policy, we need a metric to evaluate it. However, there is no widely accepted metric for queue fairness as of yet.

The problem is that there is no established fairness criterion for jobs in a queue, and it is difficult to devise one.

Resource Fairness: Solving the Seniority vs. Size Dilemma

At the heart of the difficulty lies the dilemma between seniority and size. Seniority refers to the time a customer has spent waiting in the queue. Size refers to the service requirement, which is proportional to the size of the job. Previous proposals were based on either seniority [2] or size [3], which results in limited applicability and non-intuitive outcomes. For example, a seniority-based criterion ranks the FCFS (First-Come-First-Served) policy as the most fair and LCFS (Last-Come-First-Served) as the most unfair, while a size-based criterion gives exactly the opposite ranking.

In Prof. Levy’s talk, it was argued that a fairness measure should account for both. To this end, the authors Raz, Levy and Avi-Itzhak [1] focus on the server resources and how they are allocated to the jobs. Specifically, the fairness measure is based on the belief that “at every epoch all jobs present in the system deserve an equal share of the server’s attention” [1]. The accumulated deviations from this fair share over time define discrimination, and discrimination is then taken as the measure of unfairness.

Thanks to this definition, we can promptly obtain

  • an individual measure of discrimination, defined as the expected discrimination experienced by a customer, conditioned on the number of customers the customer finds upon arrival (this metric can take positive and negative values, corresponding to positive and negative discrimination, respectively), and
  • a system measure of discrimination, defined as the variance of individual discrimination (zero variance signifies a fair system, as everyone is discriminated against equally).
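The definition lends itself to code. The following is a simplified discrete-time caricature of the idea (unit service rate, integer job sizes, FCFS); it illustrates the accumulated-deviation mechanism, not the exact measure analysed in [1]:

```python
def fcfs_discrimination(jobs):
    """Per-job discrimination in a discrete-time FCFS queue.
    jobs: list of (arrival_time, size), sorted by arrival.
    Each time step, every job present 'deserves' 1/n of the server,
    while the job in service actually receives 1. Discrimination is the
    accumulated difference (received - deserved); it always sums to 0."""
    n = len(jobs)
    disc = [0.0] * n
    remaining = [size for _, size in jobs]
    t = 0
    while any(r > 0 for r in remaining):
        present = [i for i in range(n) if jobs[i][0] <= t and remaining[i] > 0]
        if not present:
            t += 1
            continue
        served = min(present)  # FCFS: earliest arrival still in the system
        share = 1.0 / len(present)
        for i in present:
            disc[i] += (1.0 if i == served else 0.0) - share
        remaining[served] -= 1
        t += 1
    return disc

# Two equal jobs arriving together: the first is positively discriminated.
print(fcfs_discrimination([(0, 2), (0, 2)]))  # [1.0, -1.0]
```

The system measure would then be the variance of these per-job values over many arrivals.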

Case analysis: Individual Discrimination in an M/M/1/K queue with FCFS policy

It is interesting to see how this measure quantifies different extreme situations in an M/M/1/K queue running the FCFS discipline.
Consider the following four cases:

Case  Customer finds                           Criterion outcome
a)    Long queue in a lightly loaded system    Negative discrimination
b)    Long queue in a heavily loaded system    Zero discrimination
c)    Short queue in a lightly loaded system   Zero discrimination
d)    Short queue in a heavily loaded system   Positive discrimination

We see that these results are in agreement with the feeling we would have in the given situations. (Imagine queueing in a supermarket at a busy/dead hour.)

What about system unfairness? It is shown that the proposed fairness metric consistently ranks the unfairness of policies as follows (in an M/M/1/K queue):

Processor Sharing is always fairer than FCFS,
FCFS is always fairer than Random Order of Service,
Random Order of Service is always fairer than LCFS.

This orderliness is actually remarkable compared to criteria based on a single metric.

The proposed fairness criterion is named the “Resource Allocation Queuing Fairness Measure” (RAQFM), because it is based on the amount of resources the queue grants to customers.

When to use which criteria?

Seniority-based fairness [2] should be used in cases where getting the job done is far more important than the waiting time. An example is queuing for a scarce service, such as a limited number of bargain tickets: there, being pushed back by one place makes all the difference to the customer.

Fairness based on service time [3] is suitable where the resources consumed by the customers do not “run out” (e.g. a web server or a supermarket). An example is the Shortest Job First policy.

Whenever there is a concern about both seniority and job size, resource-based fairness serves as a suitable measure.

Ideas for future work

Although the problem of queue fairness has been with us as long as queueing theory itself, the attention it has received from the research community has been quite modest. This leaves many interesting problems to explore. We list here a few:

  • Suppose a system policy is designed that satisfies certain fairness criteria involving job size. How will this affect customers’ behaviour? Will they modify their job sizes and arrivals to improve their treatment?
  • Would it be possible to say that simple policies such as FCFS have the advantage of offering fewer opportunities for customers to exploit the policy?
  • Customers with larger jobs can generate more revenue and could be entitled to priority treatment. How could revenue management be incorporated into the framework of queue fairness? (As heard in the after-presentation discussion.)


[1] Raz, D., Levy, H., Avi-Itzhak, B. “A Resource-Allocation Queueing Fairness Measure”. In ACM SIGMETRICS/Performance’04, June 2004.

[2] Avi-Itzhak, B., Levy, H. “On measuring fairness in queues”. Advances of Applied Probability, 36:3 (2004), pp. 919-936, 2004.

[3] Wierman, A.,Harchol-Balter, A. “Classifying scheduling policies with respect to unfairness in an M/GI/1”. In Proceedings of ACM Sigmetrics 2003 Conference on Measurement and Modeling of Computer Systems, San Diego, CA, June 2003.

PowerPoint presentation

This is the PowerPoint presentation Prof. Levy used for his lecture at Imperial College:

PowerPoint presentation – Prof. Levy – Imperial College lecture

Posted in fairness, research | 1 Comment »

Dismantling a Religion: Is TCP wrong?

Posted by David Mayer on December 21, 2006

Fairness in communication networks is a topic of great interest. It has been treated in hundreds of papers and has led to the design of some of the most widely used protocols (e.g. TCP). But Bob Briscoe, in his Internet Draft from October 2006 [1], argues that the fairness criterion used by the networking world today is fundamentally wrong.

Specifically, Briscoe

  • Rejects flow rate fairness (widely accepted and used) as completely inappropriate, wrong and ignorant. This also applies to the TCP algorithm and to fair queueing algorithms.
  • Suggests that all proposals using flow rate fairness be discarded, or that someone should write a defence of flow rate fairness.
  • Argues that weighted proportional fairness is the correct fairness criterion. He also notes that the paper introducing weighted proportional fairness rejected flow rate fairness, but the community did not notice.
  • Proposes that weighted proportional fairness be implemented and deployed.
  • Accuses the research community of creating a hegemony in this case and of ignoring voices from outside.

To give a short introduction, let us say that fairness is a measure characterising the division of resources among users. Once a fairness criterion is defined, a policy can be evaluated with respect to that criterion, and policies can be designed which achieve fairness in terms of it.

The most widely used notion of fairness is based on the bit rates of individual flows. It is called flow rate fairness, and it attempts to make flows fair to each other in terms of their bit rates. An example of flow rate fairness is max-min fairness, defined as an allocation of rates in which no rate can be increased without decreasing some other, already lower, rate.
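Max-min fairness on a single bottleneck can be computed with the classic progressive-filling procedure; a short sketch:

```python
def max_min_fair(capacity, demands):
    """Progressive filling: raise all unsatisfied flows' rates equally;
    a flow stops growing once its demand is met. The result is max-min
    fair: no rate can be raised without lowering an already-lower rate."""
    alloc = [0.0] * len(demands)
    remaining = capacity
    active = [i for i, d in enumerate(demands) if d > 0]
    while active and remaining > 1e-12:
        share = remaining / len(active)
        still_active = []
        for i in active:
            grant = min(share, demands[i] - alloc[i])
            alloc[i] += grant
            remaining -= grant
            if alloc[i] < demands[i] - 1e-12:
                still_active.append(i)
        active = still_active
    return alloc

# Three flows demanding 2, 4 and 8 units share a capacity of 10.
print([round(x, 6) for x in max_min_fair(10.0, [2.0, 4.0, 8.0])])  # [2.0, 4.0, 4.0]
```

Note how the small flow gets everything it asked for while the two larger flows split the rest equally, which is exactly the allocation Briscoe objects to: nothing in it reflects who causes what cost.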

Flow Rate vs. Weighted Proportional Fairness

Most importantly, Briscoe says that flow rate fairness allocates the wrong measure among the wrong entities, namely rate among flows, and that the right way is to allocate cost among economic entities, as is done in the Weighted Proportional Fairness proposal by Kelly [2].

Kelly’s paper [2] “struck an intellectual death blow” to the idea of flow rate fairness, but the dominant belief system did not notice, since the cost fairness proposal stated its case in terms of that dominant belief system.

Dismantling the flow rate fairness religion involves analysing two basic questions: what should be allocated (rates vs. cost), and among what entities (flows vs. economic entities) should it be allocated? What follows is a compilation of Briscoe’s arguments against flow rate fairness and for cost fairness.

1. What should be allocated? (Fairness of what?)
Regarding this question, Briscoe rejects fairness between flows and justifies cost fairness by arguing that:

  • Arguments against fairness between flows:
    • Flow is not a good measure of either benefit or cost.
    • Equalising rates does not equalise benefits or cost.
    • “Fair allocation of rates isn’t based on any respected definition of fairness from philosophy or the social sciences.”
  • Arguments for cost fairness:
    • Kelly showed that, using cost, each user will want their congestion algorithm to adjust its rate to maximise their net utility—benefit minus cost. Users will allocate themselves rates in proportion to the share of the cost that they cause – Weighted Proportional Fairness.
    • Cost fairness maximises total utility across the network.
    • “Cost in any part of the world has an exchange value with cost in any other part, because, wherever there’s an Internet attachment, there’s a connection with the global economy.”

Further, it is argued that it is not realistic to define fairness within the system (the Internet) without reference to fairness outside the system (society and the market). But cost fairness naturally relates to monetary cost, and monetary cost allows us to proceed with the concept of “you get what you pay for”: a fairness concept we live by.
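Kelly's framework can be sketched as a decentralised price/rate iteration. The following is a simplified single-link caricature (the function name, step size and starting price are my own choices): the link raises a congestion charge until demand meets capacity, and each user independently maximises w_i·log(x_i) minus the cost it causes.

```python
def kelly_single_link(weights, capacity, iters=5000, step=0.01):
    """Dual-style sketch of weighted proportional fairness on one link.
    Each user's selfish best response to a per-unit price p is x_i = w_i / p
    (maximising w_i*log(x_i) - p*x_i); the link nudges p towards the level
    where total demand equals capacity. At equilibrium each user's payment
    x_i * p equals its willingness to pay w_i."""
    price = 1.0
    for _ in range(iters):
        rates = [w / price for w in weights]
        price = max(1e-9, price + step * (sum(rates) - capacity))
    return [w / price for w in weights]

# Three users, the second willing to pay twice as much as the others.
print([round(x, 3) for x in kelly_single_link([1.0, 2.0, 1.0], 8.0)])  # [2.0, 4.0, 2.0]
```

Here rates end up in proportion to the share of the cost each user causes, which is the allocation Briscoe argues should replace flow rate fairness.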

2. What entities should it be allocated among? (Fairness among what?)
Briscoe argues that fairness should be enforced among economic entities and not among flows, because:

  • “There is no evidence that fairness between flows relates in any way to fairness between any real-world entities that one would expect to treat fairly, such as people or organisations.”
  • Flow rate fairness is trivial to cheat by creating an arbitrary number of smaller flows.
  • “Fairness can only be meaningful between entities with real-world identities—humans, organisations, institutions, businesses.”
  • Allocating resources among economic entities reflects the commercial reality and prevents multiple flows cheating.


I think Briscoe’s draft brings up a few extremely important points.

The accusation that the research community has created a hegemony ignoring voices from outside is a serious one, and I would encourage readers to express their views about this.

The lesson for beginning researchers is that one should question everything, even the seemingly fundamental truths in our field. If they are truths indeed, one gains a deeper understanding; if not, one gains a huge potential to publish.

I find the statement that cost fairness would achieve outcomes “hopelessly unlike” those resulting from flow rate fairness [1, p.3] a bit vague. Although the intellectual superiority of cost fairness matters a great deal, maybe the practical difference would be negligible due to some large-scale effects, and thus not worth deployment. It would be nice to see a simulation illustrating the difference, but that does not have to be Briscoe’s job, of course; it is the networking research community who should react.

Unfortunately, the page dedicated to FAQs about this draft is empty as of this writing. A suitable place for a timely response could be the online Computer Communication Review or this very journal (PaperPicker).

It is noteworthy that the draft was written with help from networking luminaries such as Scott Shenker and Jon Crowcroft.


[1] B. Briscoe. Flow Rate Fairness: Dismantling a Religion. (Internet-Draft. Expires: April 18, 2007), October 2006.

[2] F. Kelly. Charging and Rate Control for Elastic Traffic. European Transactions on Telecommunications, 8(1):33-37, 1997.

Posted in fairness, Internet Draft, research, Uncategorized | 3 Comments »

Controlling routing oscillations in adaptive routing protocols

Posted by Michael Gellman on November 20, 2006

I’ve been doing some work on ways to eliminate routing oscillations in an adaptive routing protocol. There’s a very good analysis using simulations in [4], but the question I am currently investigating is whether there exists some analytical method for showing that the protocol will not oscillate and, if so, how long convergence takes. One (very) well-developed field with the potential to answer this question is Game Theory.

Price of Anarchy

One of the best introductions to this topic that I found was the research monograph based on Tim Roughgarden’s Ph.D. thesis, Selfish Routing and the Price of Anarchy. The pace is good for setting the stage for how game theory, and specifically notions of Nash equilibrium, can be used to analyse networks. However, one of the assumptions the analysis makes is that the routing system will tend to equilibrium, which is exactly what we are questioning. Thus, I continued my search into some more recent work.
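For readers new to the topic, the standard warm-up in this area is the two-link Pigou network, whose price of anarchy is easy to reproduce. The sketch below is my own illustration, not code from the monograph: one unit of traffic chooses between a constant-latency link and a load-dependent one, and the ratio of the selfish cost to the optimal cost comes out as the well-known 4/3.

```python
# Pigou's example: two parallel links carrying one unit of traffic, with
# latencies l1(x) = 1 (constant) and l2(x) = x (load-dependent). Selfish
# users all take link 2 (its latency never exceeds 1), while the social
# optimum splits the traffic. Their cost ratio is the price of anarchy.

def avg_latency(f):
    """Average latency when a fraction f of the traffic uses link 2."""
    return (1.0 - f) * 1.0 + f * f   # (1-f)*l1 + f*l2(f)

selfish = avg_latency(1.0)                                   # Nash flow: cost 1.0
optimal = min(avg_latency(i / 1000.0) for i in range(1001))  # grid-searched optimum
print(selfish / optimal)  # ~1.333, i.e. the price of anarchy 4/3
```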

Convergence to Wardrop Equilibrium

It is here that I came across the work of Simon Fischer et al., which uses notions going back to the 1950s in traffic networks to demonstrate convergence of on-line routing algorithms. At STOC 2006, they published a paper [1] analysing an adaptive routing protocol, giving an upper bound on the number of rounds it takes to reach equilibrium "that is independent of the size and the structure of the network". The bound depends principally on a parameter of the latency function which they call its relative slope. They then extend this theoretical work with a set of simulations [2] to be presented at this year’s CoNEXT conference. Using bursty traffic on both artificial and real topologies, they show that an implementation of their protocol in a traffic-engineering context can deliver good on-line performance without oscillations (for suitable parameters).
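To make the idea of convergence to a Wardrop equilibrium concrete, here is a toy round-based migration dynamic; it is my own sketch, not Fischer et al.’s sampling algorithm, and the two linear latency functions are assumptions. Each round, traffic on the costlier path migrates to the cheaper one at a rate proportional to the latency gap, and the split settles where the two latencies equalise.

```python
# Two parallel paths with assumed linear latencies; x is the fraction of
# traffic on path 1. Traffic migrates toward the cheaper path until
# l1(x) = l2(1-x), the Wardrop condition. Here that gives x = 2/3.

def simulate(rounds=200, step=0.1):
    l1 = lambda x: 1.0 + 2.0 * x        # latency of path 1 under load x
    l2 = lambda x: 2.0 + 1.0 * x        # latency of path 2 under load x
    x = 0.9                             # start with most traffic on path 1
    for _ in range(rounds):
        gap = l1(x) - l2(1.0 - x)       # positive -> path 1 is costlier
        movers = x if gap > 0 else 1.0 - x
        x -= step * movers * gap        # migrate proportionally to the gap
    return x

print(simulate())  # ~0.6667: both paths then see latency 7/3
```

With a small step the toy dynamic converges smoothly; crank `step` up and the same system overshoots and oscillates, which is precisely the failure mode that convergence bounds of this kind are meant to rule out for suitable parameters.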

No-regret Routing

In a similar vein of analysis, Blum et al. show in [3] that no-regret online routing algorithms approach a Nash equilibrium "at a rate that depends polynomially on the players’ regret bounds and the maximum slope of any latency function".
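As a concrete (and heavily simplified) illustration of the no-regret idea, the sketch below runs multiplicative weights, one standard regret-minimising algorithm, on a toy two-route instance. Blum et al. analyse the general class of no-regret algorithms, not this particular code, and the latency functions here are assumptions of mine.

```python
import math

# Multiplicative weights (Hedge) over two routes. Each round the traffic
# split follows the current weights, each route's cost is its latency under
# that split, and weights are damped exponentially in cost. The split
# approaches the point where both routes cost the same (x = 2/3 here).

def hedge_routing(rounds=500, eta=0.1):
    w = [1.0, 1.0]                          # weights on route 1 and route 2
    for _ in range(rounds):
        x = w[0] / (w[0] + w[1])            # fraction of traffic on route 1
        costs = [1.0 + 2.0 * x,             # latency of route 1 under load x
                 2.0 + (1.0 - x)]           # latency of route 2 under load 1-x
        w = [wi * math.exp(-eta * c) for wi, c in zip(w, costs)]
    return w[0] / (w[0] + w[1])

print(hedge_routing())  # ~0.6667, the split at which both routes cost 7/3
```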

Closing Thoughts

The question of whether adaptive routing protocols converge to an equilibrium or oscillate ad infinitum is an important research question that is being addressed in different ways (e.g. experiments, simulation [4], and mathematical analysis [1-3]). One traditional argument against adaptive routing protocols (going back to ARPANET days [5]) is that they are unstable due to their selfish nature. This new body of evidence lends support to truly adaptive routing protocols which give good QoS to traffic in a completely distributed manner, all the while having good convergence properties.


  • [1] – Fischer, S.; Räcke, H. & Vöcking, B. "Fast Convergence to Wardrop Equilibria by Adaptive Sampling Methods" Proc. 38th Ann. ACM. Symp. on Theory of Comput. (STOC), ACM, 2006, 653-662
  • [2] – Fischer, S.; Kammenhuber, N. & Feldmann, A. "REPLEX — Dynamic Traffic Engineering Based on Wardrop Routing Policies" Proc. 2nd Conference on Future Networking Technologies (CoNext), 2006
  • [3] – Blum, A.; Even-Dar, E. & Ligett, K. "Routing Without Regret: On Convergence to Nash Equilibria of Regret-Minimizing Algorithms in Routing Games" PODC ’06: Proceedings of the Twenty-Fifth Annual ACM Symposium on Principles of Distributed Computing, ACM Press, 2006, 45-52
  • [4] – Gao, R.; Dovrolis, C. & Zegura, E. "Avoiding Oscillations due to Intelligent Route Control Systems" IEEE Infocom, 2006
  • [5] – McQuillan, J.; Richer, I. & Rosen, E. "The New Routing Algorithm for the ARPANET" IEEE Transactions on Communications, 1980, 28, 711-719

Please comment to this post if you know of other references that are of interest, or additional points about the references I selected!

Posted in Equilibrium, Game Theory, Oscillations, Paper Review, research, Routing | 1 Comment »

CoNEXT 2006

Posted by Michael Gellman on November 19, 2006

CoNEXT 2006 Logo

I’m going to be attending CoNEXT 2006 in Portugal this year. CoNEXT describes itself as follows (from the Call for Papers):

CoNext 2006 will be a major forum in the area of future networking technologies. CoNext emphasizes synergies between various international and technical communities. The conference will feature a single-track, high quality technical program with significant opportunities for technical and social interaction among a closeknit community of participants. CoNext aims to be open and accommodating to multiple viewpoints and is committed to fairness in the review process and to returning deep and sound technical feedback to authors of submitted paper.

The program seems really compelling, with talks by many researchers at the forefront of networking research. I’m really looking forward to the talk by Jon Crowcroft and also to the paper by Simon Fischer of RWTH Aachen, which I’ve read and will be posting some thoughts on soon.

I’m going to be presenting a student poster on my Ph.D. work dealing with QoS routing in peer-to-peer overlay networks. If you’re attending this conference, be sure to post a comment — it would be great to chat about it in advance!

There’s also a great new Web 2.0-ish site which is dedicated to conferences called ConFabb where they have a dedicated entry for CoNEXT 2006.

Posted in Conference, QoS, research | 1 Comment »

Pursuing Research in Communications Networks

Posted by David Mayer on November 14, 2006


Research in communications networks is a challenging enterprise that many find a worthy endeavour. Suppose we have answered “why” we do networking research in the first place; let us move on to the second most important question, that of “how”.

Communications research can be seen as having attributes of both science and engineering. I think this also corresponds to the main reasons for “why”: on the one hand, some networking research is very theoretical (i.e. unlikely to benefit any implementation) and yet interesting on its own, just as it is interesting to research the age of the universe or the structure of our genes (the science bit). On the other hand, there may be a practical benefit for society (the engineering bit, “I’m doing something useful”), and that benefit best stems from practically oriented research.

In this article, I would like to present some ideas for constructing a worthy research topic and research work in the field of communications networks. In particular, we will use network Quality of Service (QoS) research as an example.

3 Strategic Considerations

We have identified the following strategic considerations:

1. Timeliness

As noted by J. Crowcroft et al. in [1], the time required to research, engineer and retrofit a specific solution is typically longer than the remaining life-span of the congested core. If research commences only when there is an immediate need for it, it is too late. The ability to envisage future needs is essential here. We will present three drivers of new networks which help in predicting the future.

2. Deployability

The lack of attention to non-research issues was the main reason behind the failure of network QoS research. Deployability encompasses various operational (practical) and economic issues.

3. Functionality

Given the need for a new mechanism, what is the best thing for it to do? To illustrate the problem, consider that there is general consensus on the need for QoS, but much less agreement on its specific functionality, as seen from the existence of proposals as different as IntServ and DiffServ. We will try to show that functionality is tightly connected to deployability, and recommend that a proposed mechanism should maximise benefits subject to the constraints given by deployability considerations.

1 – Timeliness

Research must commence a long time before there is a need for it [1]. Obviously, this requires us to predict the future. We shall now look at some examples of how this prediction could be done. The key question is: What causes a need for an improvement? The answer enables us to start relevant research at the right time. In principle, the driving forces are, according to K. Cambron, president and CEO of AT&T Labs [2]:

A) Changing demand

B) Decreasing entropy

C) Changing technology

A – Changing demand

This includes the volume of traffic, but also its type, its patterns, and the emergence of new applications. As an example, let us take a look at the Google Tech Talk by Van Jacobson [3]. Jacobson reviews the history of telecommunication networks and the success (and disaster) of TCP/IP, and finally argues that our current networking protocols are inadequate because they were designed for a conversational network, where two people or machines talk to each other, while today over 99% of network traffic comes from machines acquiring named chunks of data (web pages, emails). The main point is that we are doing dissemination over a conversational network, which creates numerous inefficiencies. That was an example of a driving force behind new networks: a change in demand.

B – Decreasing entropy

This term is used in [2] to describe the ability of a network to deal with new problems. Cambron argues that in order to meet new demands, changes to the network are made and “every design choice removes a degree of freedom, solving an immediate problem, but eliminating potential solutions to other problems that lie in the future”. At some point the network becomes incapable of dealing with new demands and must be replaced by a new network.

C – Changing technology

Obviously, the increasing bandwidth of core and access links has a fundamental impact on when and what QoS is needed.

2 – Deployability

The failure of QoS research to deliver a practical solution is ascribed to researchers’ lack of attention to practical issues: most of all, the lack of feedback from operational network environments (or the unwillingness of academia to interact). An excellent example of some practical problems is [4]. Here we list some key points.

Operational considerations

  • Rift between networking operations and protocol design.
    There seems to be a range of difficulties in the operational network that researchers have never heard of and therefore will not consider in their design. The list includes complexity from the configuration point of view, proneness to operator error, debugging tools, incremental deployability and so on.
  • Complexity. In [4], G. Bell argues that IP multicast defines the limit-case for deployable complexity. Quoting [4],

    “Eventually, we concluded that implementing IP multicast required a substantial commitment of engineering resources, and that it would inevitably impair the stability of unicast routing due to the need for frequent OS upgrades and intrusive testing. Many enterprises do not have the mandate, or see a justification, for making this commitment.”

    A good example of operational issues arising with the deployment of QoS is Davie’s 2003 paper “Deployment experience with differentiated services” [5].
    (Network administrators will not risk destabilising their networks.)

Economic considerations

Quoting [6],

“Solutions require path paved with incentives to motivate diverse agents to adopt it, implement it, use it, interface with it, or just tolerate it. Without that path you’ve wasted your time”.

The bottom line justifying QoS research from the economic point of view is that QoS provides network operators with a means to extract value from their network by charging for different levels of service [1]. However, this fact alone does not guarantee adoption by operators, and QoS research must account for all practical issues to be successful.

For an in-depth analysis of different political and socio-economic forces in the Internet, see [7] by D. Clark et al. and [8].

Note on deployability

The ease with which one can deploy an idea depends very much on the network layer in question. It is easy to deploy a novelty at the application layer, but very difficult at Layer 3, and nearly impossible at the layers below. When choosing a research topic with an eye to its practical impact, one should take this dependency into consideration. (There are probably many people of the calibre of the Google founders Larry Page and Sergey Brin working on the lower layers, but it takes a Cisco-sized company to make a difference there.)

3 – Functionality

Researching and engineering particular mechanisms can be seen as finding the best trade-off between functionality and operational feasibility. For example, it is fairly well established that the most desirable feature in the current Internet would be some notion of QoS on an end-to-end basis (i.e. from source to destination, across domains). However, if the research community were asked what that QoS should be, there would be no definitive answer. Guaranteed performance metrics? Differentiated service? How differentiated? Simple congestion feedback?

Perhaps the answer should be: The most beneficial solution subject to time and deployability constraints.


To propose and conduct research in networking requires a profound and wide understanding of the theoretical and practical constraints involved.

We have identified 3 critical considerations — timeliness, deployability and functionality — and presented a few points that can help in assessing them. We argued that a design should maximise benefit subject to the practical constraints stemming from deployability issues.

In the case of QoS research, functionality on its own is only a small part of the recipe for success. Only proposals fully respecting real-world issues have a chance to make it into operational networks.

For those not yet in a position to make judgements about a field’s future development and its operational and economic aspects, yet who must invent a research topic (such as a PhD student with the freedom, or requirement, to come up with one), an alternative, safer approach is possible:

  1. Choose a research area according to your interest, opportunities to contribute or instinct.
  2. Spend several months reading and writing simulations to obtain a profound understanding of the problem in question.
  3. Find a point of potential improvement and propose the improvement. From there on you can slowly start getting the whole picture, while working and contributing.

I hope this article provides some rough guidelines as to what to look out for when commencing a research project in the area of communications networks.


[1] Jon Crowcroft, Steven Hand, Richard Mortier, Timothy Roscoe, and Andrew Warfield. QoS’s downfall: at the bottom, or not at all! In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 109–114, New York, NY, USA, 2003. ACM Press.

[2] G. K. Cambron. The next generation network and why we’ll never see it. Communications Magazine, IEEE, 44(10):8–10, 2006.

[3] Van Jacobson. A new way to look at networking, 2006. Google Tech Talks.

[4] Gregory Bell. Failure to thrive: QoS and the culture of operational networking. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 115–120, New York, NY, USA, 2003. ACM Press.

[5] Bruce Davie. Deployment experience with differentiated services. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 131–136, New York, NY, USA, 2003. ACM Press.

[6] K. C. Claffy. Top problems of the Internet and how to help solve them, 2005.

[7] David D. Clark, John Wroclawski, Karen R. Sollins, and Robert Braden. Tussle in cyberspace: defining tomorrow’s internet. In SIGCOMM ’02: Proceedings of the 2002 conference on Applications, technologies, architectures, and protocols for computer communications, pages 347–356, New York, NY, USA, 2002. ACM Press.

[8] L. Burgstahler, K. Dolzer, C. Hauser, J. Jahnert, S. Junghans, C. Macian, and W. Payer. Beyond technology: the missing pieces for QoS success. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 121–130, New York, NY, USA, 2003. ACM Press.

Posted in deployability, QoS, research | 18 Comments »