PaperPicker

Online Journal Club for Networking Researchers


The Role of Accountability in Home Broadband Provisioning

Posted by David Mayer on August 31, 2008

As of July 2008, some 20 years after the first Internet Service Providers (ISPs) were formed and some 50 years after packet switching was invented, the home broadband customer still has no guarantee of any quality parameter, no way of holding their ISP accountable for interruptions and low speeds, no choice of quality levels, and no way to share resources fairly with other customers in times of congestion.

Any home broadband customer will be familiar with the situation when something is wrong with their connection, be it unavailability, high data loss or failed name servers. Yet upon contacting the provider, one is taken through a bemusing routine of resetting the PC, router, modem, and whatever else has buttons to press; rarely does the ISP say: “It is our fault, we’ll refund you.”

What exactly determines the quality of a broadband connection? All performance characteristics will depend on

  • Network dimensioning and management
  • Behaviour of other customers
  • Resource sharing mechanism in place
  • The rest of the Internet.

While the rest of the Internet is beyond the operator’s control, and the behaviour of other customers only partly within it, network dimensioning and resource sharing certainly are. Let us now briefly describe what these issues encompass and how today’s operators implement them.

(The lack of) Resource sharing mechanisms

Resource sharing is a mechanism determining the division of network resources between the operator’s customers, especially in times of high demand and congestion. Some resource sharing is facilitated by medium access protocols. For example, the DOCSIS protocol used in cable networks has extensive means of controlling each user’s total resource usage at any time. The transport layer also offers a resource sharing mechanism of sorts: TCP controls the rate of every connection, but not the total rate of a user. Moreover, TCP concerns itself with resources along the entire end-to-end path rather than only those in the operator’s network.
The medium access and even transport layer protocols are fully within the operator’s control, and the operator could use them to facilitate resource sharing.
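As an illustration, a per-customer token bucket is one simple mechanism an operator could apply at the access link to bound each user’s total rate. This is a hypothetical sketch of the general technique, not any specific ISP’s implementation:

```python
class TokenBucket:
    """Hypothetical per-customer rate limiter.

    Tokens accrue at `rate` units per second up to a ceiling of
    `burst`; a packet is forwarded only if enough tokens remain.
    """
    def __init__(self, rate, burst):
        self.rate = rate      # sustained rate (e.g. bytes/s)
        self.burst = burst    # maximum burst size (bytes)
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # time of the last update

    def allow(self, size, now):
        # top up the bucket for the time elapsed since the last call
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True       # packet conforms: forward it
        return False          # over the limit: drop or queue it
```

Unlike an after-the-fact usage penalty, such a mechanism constrains each user at the very moment resources are contended.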

So how do some of the UK’s major broadband providers deal with fair sharing? Virgin Media’s cable broadband penalizes heavy users after the fact: the ISP reduces the speed of the user’s connection for five hours once the customer exceeds a limit on the amount of data transmitted or received in a given time interval.
This means that

  • The punishment becomes ineffective if the user is not interested in using the connection after the incident.
  • The punishment does not protect others at the time of congestion.

British Telecom employs a “Fair Usage Policy” according to which a customer’s bit rate will be “restricted” when the customer becomes a “heavy user”.

Please note that the cable standard DOCSIS offers excellent facilities when it comes to QoS. It has 6 quality classes and 8 priority levels. It has total control over upstream (hence can regulate TCP connections by regulating the protocol acknowledgments) and it can also schedule the downstream. The DOCSIS standard is well documented and a number of QoS-related research papers have been written on the topic, most of them focusing on real-world implementation.

Network dimensioning and management

It is the number of customers that largely determines the operator’s income, while the infrastructure acquired to realise the sold service reduces profit. It is only natural that an operator tries to increase the former and reduce the latter, given that service quality is hardly monitored at all and that it is extremely difficult for a customer to claim a refund for bad service. A network operator thus has strong incentives not to support any initiative aimed at objectively measuring the quality of the offered service.

It is not practical to infer an operator’s network management practices from the outside, because the ISP is not obliged to disclose this information, and even if it did, its effect on quality is non-trivial. What is left, then, is the pursuit of a mechanism that would hold the operator accountable for the service it provides, hence possibly motivating the provider to change their network management.

The rest of this article complains about the lack of accountability and presents two hypothetical strategies to increase it.

Operator’s accountability

The mere availability of a data connection is the core of the sold service, but it is the quality of broadband provisioning that determines the utility customers derive from it. Yet it is extremely difficult to prove that a service has been inadequate at some point in time. It is practically impossible for a common user to show that there was, for example, a packet loss rate of 40% for 50 minutes last Saturday afternoon, and that it was not the customer’s fault.

Why is it that we can take a broken watch back to the shop and have our money refunded, we can even raise legally sound complaints about complex services such as consulting or insurance but it is almost impossible for a non-techy broadband user to show that something was wrong with their broadband service? What is it about Internet service which makes it so hard to complain about?

For one thing, Internet provisioning is a multi-dimensional service. It spans several time-scales and spans space, it has complex statistical properties, and it is perceived subjectively (unlike the quality of electricity or gas). But take the complexity of water quality standards, for example: they comprise over 50 quantitative parameters that every litre of drinking water must satisfy. Yet utility companies do satisfy these standards, and the state usually takes care of monitoring them.

One can argue that an additional set of requirements on home broadband provisioning would increase the cost and in turn the price, having a negative effect on the service proliferation. But surely one can use the same argument in the case of water quality, car safety, and many other areas in which more stringent legal requirements clearly brought benefits far outweighing the cost.

Proving unsatisfactory quality

The part of consumer law relevant to the provision of broadband is the sale of goods and services legislation. Under the Sale of Goods Act 1979 and the Supply of Goods and Services Act 1982, providers must supply goods and services that are, among other things,

  • of satisfactory quality and
  • delivered as described.

Of course, satisfactory quality remains undefined and the service is described only vaguely. Furthermore, even if some well-defined quality requirement were breached, there would be no easy way for a customer to prove it.

Let’s now design a simple hypothetical accountability strategy for today’s home broadband service. The aim here is to create a piece of evidence immune to the operator’s suggestion that the fault was perhaps on the customer’s side.

A) Massively deployed application-layer metering

There are two ways we can think of, both solid enough for a courtroom. Since judges operate on the basis of reasonable belief, we’ll employ some statistics: if a large number of customers experience, measure and document a bad connection at the same time, it is unlikely that all of them misconfigured their firewalls or had their cables cut simultaneously. Naturally, in several cases it could have been the customer’s fault, but it is beyond reasonable belief that a large group of one operator’s customers erred at the same time. Hence an application running on the customer’s computer can be used to generate evidence about the service when its data are pooled with those of many other customers.
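The “reasonable belief” argument can be made quantitative. Assuming, purely for illustration, that each customer independently suffers a self-inflicted fault with probability p, the chance that a large group errs simultaneously is the tail of a binomial distribution:

```python
from math import comb

def prob_at_least_k_faults(n, k, p):
    """Probability that at least k of n independent customers,
    each with self-inflicted fault probability p, err at once."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Even with a generous p = 0.05, the chance that 50 of 200
# customers in one area all err simultaneously is astronomically
# small, so a shared cause (the ISP) is the reasonable conclusion.
print(prob_at_least_k_faults(200, 50, 0.05))
```

The numbers 200, 50 and 0.05 are arbitrary choices for the sketch; the point is how fast the tail vanishes as the group grows.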

A basic version of this concept is already in place. There are Internet forums where users post their measurements, often obtained from the excellent Speedtest.net. Now if these measurements were made automatically and the results processed to extract geographical and temporal commonalities, that would be a big step towards holding the ISP accountable.

B) Certified hardware monitors attached to customer modems

This method would rely on a device connected between the modem and the user’s equipment. This device, let’s call it a monitor, would carry a certification from a regulatory body such as the UK’s Ofcom. Operators would be legally obliged to allow this device to be connected to the modem they provide. Another party – a governmental organisation or a regulated private company – would collect data from these monitors and make them available to the public. Since the monitors are standardized and cannot be modified by the customer, there need not be a large number of them in order to rule out the customer’s fault.

Similarly to the way TV and radio ratings are collected from several customers, so measurement data will be collected from several monitors in a given locality, as illustrated in the figure below.

Both strategies require a start-up investment and some modest long-term funding. Most customers will also need incentives to participate.

Conclusion

Broadband speeds continue to grow together with the necessity of Internet access and the demands of new applications. Advances on the physical and medium access layer are being adopted by the industry. Innovative Internet applications are booming. Yet the experience of any more-than-slightly advanced Internet users is marred by poor network management, failures, oversubscription and unfair usage. At the same time, the progress on the intermediate layers – IP, transport, management – seems to stagnate. Thousands of research papers on quality provisioning and resource allocation seem to have been written in vain.

Be it a lobby group, a customer initiative or the government itself that pushes the idea of accountability into practice, we think it is the key to better home Internet provisioning.


Posted in congestion, fairness, QoS | 2 Comments »

“How Fair Is Your Queue?” – Notes on talk by Prof. Levy

Posted by David Mayer on January 7, 2007

The task of inventing a good fairness metric for queues is a tough and important one. Such a metric would aid the design of a wide range of systems, policies and applications, including web servers, P2P applications and call centres. As of 2006, though, there was still no single, widely accepted criterion.

In December 2006, Prof. Hanoch Levy from Tel Aviv University, Israel, gave a gripping talk on the topic of fairness in queues. The talk took place at Imperial College, London. In this article, based on Prof. Levy’s talk, we would like to draw the attention of networking researchers and system designers to this useful and practical topic.

What is queue fairness?

Before presenting the centrepiece of Prof. Levy’s talk, we should make clear what “queue fairness” actually is. Most researchers will be familiar with “flow fairness”, which attempts to make flows (streams of packets) fair to each other. As an example, take “flow rate fairness”, which makes flows fair to each other in terms of their rates, or “cost fairness”, which makes flows fair with regard to the cost they incur. By contrast, “queue fairness” asks: how fair is the service received by a job (or a customer) in a single queue?

As an example, consider the following scenario. A large job arrives at a queue first and a very short job arrives at a queue a bit later. Is it more fair to serve the first job or the short job first? That obviously depends on the type of service, context, culture, but in any case it would be useful to have a metric quantifying the fairness of the policy.
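A small worked version of this dilemma, with sizes chosen arbitrarily for illustration: a long job L (size 10, arriving at t = 0) and a short job S (size 1, arriving at t = 1) share a single server of unit rate.

```python
# FCFS honours seniority: L runs over [0, 10], S waits and runs over [10, 11]
fcfs_wait = {"L": 0, "S": 10 - 1}   # S arrives at t = 1 but starts at t = 10

# SRPT (preemptive, shortest-remaining-first) honours size:
# S preempts L at t = 1 and runs over [1, 2]; L resumes and finishes at t = 11
srpt_wait = {"L": 1, "S": 0}        # L is paused for only 1 time unit

# FCFS makes the short job wait 9 units; SRPT delays the senior job
# by only 1 unit. Which outcome is "fairer" depends on the criterion.
print(fcfs_wait, srpt_wait)
```

Neither answer is obviously right, which is exactly why a quantitative fairness metric is needed.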

Why bother with queue fairness at all?

  • The question is in fact redundant, because the very purpose of queues is fairness! Queues, as opposed to crowds, appeal to our notion of fairness; fairness is therefore an inherent part of queues.
  • People do care about fairness. A company can get sued if a customer loses money due to being treated unfairly.
  • In order to design a policy, we need a metric to evaluate it. However, there is no widely accepted metric for queue fairness as of yet.

The problem is that there is no established fairness criterion for jobs in a queue, and it is difficult to devise one.

Resource Fairness: Solving the Seniority vs. Size Dilemma

At the heart of the difficulty lies the dilemma between seniority and size. Seniority refers to the time a customer has spent waiting in the queue. Size refers to the service requirement, which is proportional to the size of the job. Previous proposals were based on either seniority [2] or size [3], which results in limited applicability and non-intuitive outcomes. For example, a seniority-based criterion ranks the FCFS (First-Come-First-Served) policy as the most fair and LCFS (Last-Come-First-Served) as the most unfair, while a size-based criterion gives exactly the opposite ranking.

In Prof. Levy’s talk, it was argued that a fairness measure should account for both. To this end, authors Raz, Levy and Avi-Itzhak in [1] focus on the server resources and how they are allocated to the jobs. Specifically, the fairness measure is based on the belief that “at every epoch all jobs present in the system deserve an equal share of the server’s attention” [1]. The accumulated deviations from the fair share over time define discrimination. And discrimination is then taken as the measure of unfairness.

Thanks to this definition, we can promptly obtain

  • an individual measure of discrimination, defined as the expected discrimination experienced by a customer and conditioned on the number of customers the customer finds upon its arrival (This metric can take positive and negative values, corresponding to positive and negative discrimination, respectively), and
  • a system measure of discrimination, defined as the variance of individual discrimination (Zero variance will signify a fair system as everyone is discriminated the same.)
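To make the definition concrete, here is a small deterministic sketch of the RAQFM idea from [1] (our own illustrative code, not the authors’): a single FCFS server of unit rate, where at every instant each of the n(t) jobs present “deserves” a 1/n(t) share of the server, and a job’s discrimination is the service it actually received minus its accumulated fair share.

```python
def raqfm_discrimination_fcfs(jobs):
    """Discrimination of each job under non-preemptive FCFS.

    jobs: list of (arrival, size) tuples sorted by arrival; unit-rate server.
    Returns d[i] = (service received) - (accumulated fair share).
    """
    # compute FCFS start and finish times
    starts, finishes, t = [], [], 0.0
    for arrival, size in jobs:
        start = max(t, arrival)
        starts.append(start)
        finishes.append(start + size)
        t = start + size
    # integrate between consecutive events (arrivals and departures)
    events = sorted({a for a, _ in jobs} | set(finishes))
    d = [0.0] * len(jobs)
    for t0, t1 in zip(events, events[1:]):
        present = [i for i, (a, _) in enumerate(jobs)
                   if a <= t0 and finishes[i] > t0]
        if not present:
            continue
        for i in present:                  # each deserves a 1/n(t) share
            d[i] -= (t1 - t0) / len(present)
        d[min(present)] += t1 - t0         # FCFS serves the earliest arrival
    return d

# Job A (arrives 0, size 3) and job B (arrives 1, size 1):
# B is discriminated against while A monopolises the server.
print(raqfm_discrimination_fcfs([(0, 3), (1, 1)]))  # -> [1.0, -1.0]
```

In this two-job example the discriminations sum to zero, and their variance would serve as the system measure of unfairness.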

Case analysis: Individual Discrimination in an M/M/1/K queue with FCFS policy

It is interesting to see how this measure quantifies different extreme situations in an M/M/1/K queue running the FCFS discipline.
Consider the following 4 cases:

  Case  Customer finds                               Fairness criterion outcome
  a)    Long queue in a lightly loaded system        Negative discrimination
  b)    Long queue in a heavily loaded system        Zero discrimination
  c)    Short queue in a lightly loaded system       Zero discrimination
  d)    Short queue in a heavily loaded system       Positive discrimination

We see that these results agree with the intuition we would have in the corresponding situations. (Imagine queueing in a supermarket at a busy hour versus a dead hour.)

What about system unfairness? It is shown that the proposed fairness metric consistently ranks the unfairness of policies as follows (in an M/M/1/K queue):

Processor Sharing is always fairer than FCFS,
FCFS is always fairer than Random Order of Service,
Random Order of Service is always fairer than LCFS.

This consistent ordering is actually remarkable compared to criteria based on a single factor.

The proposed fairness criterion is named “Resource Allocation Queuing Fairness Measure (RAQFM)”. That is because it is based on the amount of resources the queue grants to customers.

When to use which criterion?

Seniority-based fairness [2] should be used when getting the job done is far more important than the waiting time. An example is queuing for a scarce resource such as a limited number of bargain tickets: being pushed back by one place there makes all the difference to the customer.

Fairness based on service time [3] is suitable where resources consumed by the customers do not “run out” (e.g. a web server or a supermarket). An example is the Shortest Job First Policy.

Whenever there is a concern about both seniority and job size, resource-based fairness serves as a suitable measure.

Ideas for future work

Although the problem of queue fairness has been with us as long as queueing theory itself, the attention it has received from the research community has been quite modest. This leaves many interesting problems to explore. We list here a few:

  • Suppose a system policy is designed that satisfies certain fairness criteria involving job size. How will this affect customers’ behaviour? Will they modify their job sizes and arrival times to improve their treatment?
  • Would it be possible to say that simple policies such as FCFS have the advantage of offering less opportunities for customers to exploit the policy?
  • Customers with larger jobs can generate more revenue and could be entitled to a priority treatment. How could revenue management be incorporated to the framework of queue fairness? (As heard in the after-presentation discussion.)

Bibliography

[1] Raz, D., Levy, H., Avi-Itzhak, B. “A Resource-Allocation Queueing Fairness Measure”. In ACM SIGMETRICS/Performance’04, June 2004.

[2] Avi-Itzhak, B., Levy, H. “On measuring fairness in queues”. Advances in Applied Probability, 36:3, pp. 919-936, 2004.

[3] Wierman, A., Harchol-Balter, M. “Classifying scheduling policies with respect to unfairness in an M/GI/1”. In Proceedings of ACM SIGMETRICS 2003 Conference on Measurement and Modeling of Computer Systems, San Diego, CA, June 2003.

PowerPoint presentation

This is the PowerPoint presentation Prof. Levy used for his lecture at Imperial College:

PowerPoint presentation – Prof. Levy – Imperial College lecture

Posted in fairness, research | 1 Comment »

Dismantling a Religion: Is TCP wrong?

Posted by David Mayer on December 21, 2006


Fairness in communication networks is a topic of great interest. It has been treated in hundreds of papers and has led to the design of some of the most widely used protocols (e.g., TCP). But Bob Briscoe, in his Internet Draft from October 2006 [1], argues that the fairness criterion used by the networking world today is fundamentally wrong.

Specifically, Briscoe

  • Rejects flow rate fairness (widely accepted and used) as completely inappropriate, wrong and ignorant. This also applies to the TCP algorithm and to fair queueing algorithms.
  • Suggests that all proposals using flow rate fairness be discarded, or that someone write a defence of flow rate fairness.
  • Argues that weighted proportional fairness is the correct fairness criterion. He also notes that the paper introducing weighted proportional fairness rejected flow rate fairness, but the community did not notice.
  • Proposes that weighted proportional fairness be implemented and deployed.
  • Accuses the research community of creating a hegemony in this case and of ignoring voices from outside.

To give a short introduction: fairness is a measure characterising the division of resources among users. Once a fairness criterion is defined, a policy can be evaluated with respect to it, and policies can be designed that achieve fairness in its terms.

The most widely used notion of fairness is based on the bit rates of individual flows. It is called flow rate fairness, and it attempts to make flows fair to each other in terms of their bit rates. An example of flow rate fairness is max-min fairness, defined as an allocation of rates in which no rate can be increased without decreasing some other rate that is already lower.
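On a single shared link, the max-min fair allocation can be computed by the classic water-filling procedure. The following is a standard textbook construction, sketched here for illustration:

```python
def max_min_fair(capacity, demands):
    """Water-filling max-min fair allocation of one shared resource.

    Repeatedly offer every unsatisfied user an equal share of the
    remaining capacity; users demanding less than the share keep
    their demand, which frees capacity for the rest.
    """
    alloc = {}
    remaining = dict(enumerate(demands))
    while remaining:
        share = capacity / len(remaining)
        bounded = {i: d for i, d in remaining.items() if d <= share}
        if not bounded:            # nobody is limited by demand:
            for i in remaining:    # split what is left equally
                alloc[i] = share
            break
        for i, d in bounded.items():
            alloc[i] = d
            capacity -= d
            del remaining[i]
    return [alloc[i] for i in range(len(demands))]

# A link of capacity 10 shared by demands of 2, 8 and 10:
print(max_min_fair(10, [2, 8, 10]))  # -> [2, 4.0, 4.0]
```

In the example, neither of the two larger users can be given more without taking from the other’s equal (and not larger) rate, which is exactly the max-min property.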

Flow Rate vs. Weighted Proportional Fairness

Most importantly, Briscoe says that flow rate fairness allocates the wrong measure among the wrong entities, namely rate among flows, and that the right way is to allocate cost among economic entities, as is done in the Weighted Proportional Fairness proposal by Kelly [2].

Kelly’s paper [2] “struck an intellectual death blow” to the idea of flow rate fairness, but the dominant belief system did not notice, since the cost fairness proposal stated its case in the terms of that dominant belief system.

Dismantling the flow rate fairness religion involves analysing two basic questions: what should be allocated (rates vs. cost), and among what entities (flows vs. economic entities) should it be allocated? What follows is a compilation of Briscoe’s arguments against flow rate fairness and for cost fairness.

1. What should be allocated? (Fairness of what?)
Regarding this question, Briscoe rejects fairness between flows and justifies cost fairness as follows:

  • Arguments against fairness between flows:
    • Flow is not a good measure of either benefit or cost.
    • Equalising rates does not equalise benefits or cost.
    • “Fair allocation of rates isn’t based on any respected definition of fairness from philosophy or the social sciences.”
  • Arguments for cost fairness:
    • Kelly showed that, using cost, each user will want their congestion algorithm to adjust its rate to maximise their net utility—benefit minus cost. Users will allocate themselves rates in proportion to the share of the cost that they cause – Weighted Proportional Fairness.
    • Cost fairness maximises total utility across the network.
    • “Cost in any part of the world has an exchange value with cost in any other part, because, wherever there’s an Internet attachment, there’s a connection with the global economy.”

Further, it is argued that it is not realistic to define fairness within the system (the Internet) without reference to fairness outside the system (society and the market). Cost fairness naturally relates to monetary cost, and monetary cost allows one to proceed with the concept of “you get what you pay for”, a fairness concept we live by.
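On a single bottleneck link, the weighted proportionally fair allocation of [2] has a simple closed form: maximising the sum of w_i * log(x_i) subject to the rates x_i summing to at most the capacity C gives x_i = C * w_i / (sum of all weights), i.e. rates in proportion to the cost share (weight) each user pays. A minimal sketch:

```python
def weighted_proportional_fair(capacity, weights):
    """Closed-form weighted proportionally fair rates on one link.

    Maximises sum_i w_i * log(x_i) subject to sum_i x_i <= capacity;
    the Lagrangian condition w_i / x_i = constant yields rates in
    proportion to the weights.
    """
    total = sum(weights)
    return [capacity * w / total for w in weights]

# A user paying twice the weight receives twice the rate:
print(weighted_proportional_fair(10, [1, 1, 2]))  # -> [2.5, 2.5, 5.0]
```

The single-link case is shown only to make the “you get what you pay for” intuition concrete; Kelly’s framework covers general networks.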

2. What entities should it be allocated among? (Fairness among what?)
Briscoe argues that fairness should be enforced among economic entities and not flows, as

  • “There is no evidence that fairness between flows relates in any way to fairness between any real-world entities that one would expect to treat fairly, such as people or organisations.”
  • Flow rate fairness is trivial to cheat by creating an arbitrary number of smaller flows.
  • “Fairness can only be meaningful between entities with real-world identities—humans, organisations, institutions, businesses.”
  • Allocating resources among economic entities reflects the commercial reality and prevents multiple flows cheating.

Summary

I think Briscoe’s draft brings up a few extremely important points.

The accusation that the research community has created a hegemony that ignores voices from outside is a serious one, and I would encourage readers to express their views about it.

The lesson for beginning researchers is that one should question everything, even the seemingly fundamental truths in our field. If these are truths indeed, one gains a deeper understanding; if not, one gains a huge potential to publish.

I find the statement that cost fairness would achieve outcomes “hopelessly unlike” those resulting from flow rate fairness [1, p.3] a bit vague. Although the intellectual superiority of cost fairness matters a great deal, perhaps the practical difference would be negligible due to some large-scale effects, and thus not worth deployment. It would be nice to see a simulation illustrating the difference, but that need not be Briscoe’s job, of course. It is the networking research community that should react.

Unfortunately, the page dedicated to FAQs about this draft is empty as of this writing. A suitable place for a timely response could be the online Computer Communication Review or this very journal (PaperPicker).

It is noteworthy that the draft was written with help from such networking luminaries as Scott Shenker and Jon Crowcroft.

Bibliography

[1] B. Briscoe. Flow Rate Fairness: Dismantling a Religion. (Internet-Draft. Expires: April 18, 2007), October 2006. http://www.ietf.org/internet-drafts/draft-briscoe-tsvarea-fair-00.pdf.

[2] F. Kelly. Charging and Rate Control for Elastic Traffic. European Transactions on Telecommunications, 8(1):33-37, 1997. http://www.statslab.cam.ac.uk/~frank/elastic.html.

Posted in fairness, Internet Draft, research, Uncategorized | 3 Comments »