PaperPicker

Online Journal Club for Networking Researchers

Archive for the ‘QoS’ Category

Independent Measurement of Broadband Provisioning Quality by SamKnows: A Step Towards Providers’ Accountability?

Posted by David Mayer on January 2, 2009

Some time ago on this blog we wrote about the lack of accountability in home broadband provisioning. We noted how difficult it is for a typical broadband customer to give evidence about poor service. One of the solutions we mentioned for generating legally sound evidence was to deploy hardware monitors attached to customer modems. This is exactly what SamKnows Limited have done, although the reasons behind their project are probably different. The project was partly backed by Ofcom.

A report describing the project has been published by Sam Crawford here. In this article, we briefly characterise the measurement scheme, pick out some interesting results and finally comment on whether this scheme could be used to gather evidence about the quality of broadband provisioning for individual users.

One of the objectives of Sam’s report is to dispel the popular myth that speed is by far the most important property of a broadband service. The results show that speed is only one of many properties that affect the quality of the service, and that other properties can have a larger effect on the quality as perceived by the user.

We at PaperPicker look at this project as an example of how Internet providers could be made accountable. A typical user can never find out, let alone quantify, what is wrong with their connection. An ISP can always blame the user’s equipment for the fault, and it is extremely difficult for users to produce evidence of poor service. A measurement scheme such as this one is a real-world example of generating statistically and legally sound evidence.

What was done and how

At the heart of the scheme lie hardware monitoring units, installed in the homes of volunteers and equipped with measurement routines. The units were deployed all over the UK with the help of Ofcom. Over the course of six weeks in 2008 they generated multiple types of traffic, recorded measurements and sent the results to SamKnows servers. The trial included 223 devices and covered 12 UK-based Internet service providers (ISPs).

What makes this project unique is the fact that the collected measurements are both independent of ISPs and statistically sound. The measurements are independent of ISPs because they do not require any cooperation from ISPs. Statistical confidence comes from the number of units deployed and the number of measurements carried out.

However, since SamKnows do not quite disclose their reasons for running the project, nor do they specify how it is financed, one can have doubts about its independence. On the other hand, the project is backed by Ofcom, the UK telecoms regulator.

So how does it work?

A monitoring unit (a Linksys WRT54GL running embedded Linux) is connected between the user’s modem or router and the user’s computers, as shown in the figure below. The unit generates traffic only when the user’s equipment does not. A pre-programmed set of tests runs according to a given schedule and the results are collected at SamKnows servers.

 

[Figure: the monitoring unit sits between the user’s modem or router and the user’s home computers]
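To make this concrete, here is a minimal sketch (in Python) of the kind of scheduled test loop such a unit might run. The test server, collector URL, idle check and schedule are hypothetical placeholders of our own, not SamKnows’ actual implementation; a real unit would run a whole battery of tests and buffer results when the collector is unreachable.

    import json
    import socket
    import time
    import urllib.request

    TEST_HOST = "test-server.example.net"                 # hypothetical test server
    REPORT_URL = "https://collector.example.net/report"   # hypothetical collector
    INTERVAL_S = 3600                                     # run the tests hourly

    def tcp_connect_latency_ms(host: str, port: int = 80, samples: int = 5) -> float:
        """Rough latency estimate from repeated TCP connection set-up times."""
        times = []
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=5):
                pass
            times.append((time.monotonic() - start) * 1000.0)
        return sum(times) / len(times)

    def line_is_idle() -> bool:
        """Placeholder: the real unit measures only when user traffic is absent."""
        return True

    def report(result: dict) -> None:
        """Post one measurement record to the collector as JSON."""
        req = urllib.request.Request(REPORT_URL, data=json.dumps(result).encode(),
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=10)

    while True:
        if line_is_idle():
            try:
                report({"ts": time.time(),
                        "latency_ms": tcp_connect_latency_ms(TEST_HOST)})
            except OSError:
                pass                                      # a real unit would log this
        time.sleep(INTERVAL_S)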

 

Metrics and results

Sam’s report presents and analyses a number of different measurements; we comment on a few selected ones below.

Metric: Latency
Result: A provider average of about 40 ms.
Comment: Virgin.net did particularly badly: at peak hours its latency rises by 180% compared with quiet hours, while the cross-provider average increases by only 25% at peak hours.

Metric: Loss
Result: Very small across providers (under 0.6%).

Metric: Response time of DNS queries
Result: Very good across providers (an average of 46 ms), with the exception of Be Unlimited, which occasionally exceeded four times the average response time.
Comment: The report shows how DNS query times affect web browsing even when latency is low.

Metric: DNS failures
Result: Very low, with an average failure rate of 0.81%, apart from Be Unlimited at 2.82%.

Metric: Perceived quality of VoIP
Result: Most providers achieved a score close to the theoretical maximum, with the exception of Virgin Media, despite its very low latency.
Comment: The measurements show how jitter (the variance of delay) lowers the perceived quality of VoIP. Virgin Media has excellent latency results but suffers low VoIP quality due to high jitter.

Metric: Speed as a percentage of implied line speed
Result: Most providers achieve 75%, but a more detailed graph shows that both the ADSL and the cable arms of Virgin drop significantly in speed during peak hours.
Comment: Implied line speed is defined as the maximum throughput achieved across all speed tests within a two-day period.
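As an aside, the implied-line-speed metric translates directly into code. The sketch below (Python) reflects one plausible reading of the definition, namely the maximum throughput within a two-day window around each test; the sample values are made up purely for illustration.

    from datetime import datetime, timedelta

    def implied_line_speed(tests, when, window=timedelta(days=2)):
        """Maximum measured throughput within the window around `when`."""
        nearby = [t["mbps"] for t in tests if abs(t["time"] - when) <= window]
        return max(nearby)

    def percent_of_implied(tests):
        """Each test expressed as a percentage of the implied line speed."""
        for t in tests:
            yield t["time"], 100.0 * t["mbps"] / implied_line_speed(tests, t["time"])

    # Made-up sample values, for illustration only.
    tests = [
        {"time": datetime(2008, 10, 1, 3), "mbps": 7.8},    # quiet hours
        {"time": datetime(2008, 10, 1, 20), "mbps": 3.9},   # peak hours
        {"time": datetime(2008, 10, 2, 20), "mbps": 4.1},
    ]
    for when, pct in percent_of_implied(tests):
        print(when, f"{pct:.0f}% of implied line speed")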

 

Another interesting finding is evidence of ISPs performing traffic shaping on ports other than 80. This is presumably because ISPs are trying to cap peer-to-peer traffic, and those protocols typically use non-80 ports.
 
A small objection against the time averaging of results: take the 75% line-speed figure. As such, it may not reflect the user’s satisfaction very well. For example, if the speed is high for most of the day while most users are not using the connection (they are not at home), and the evening brings a significant drop when many users are at home, the result may still come out at 75% simply because the speed is high for a large part of the day.

So, for example, Virgin.net exhibits a 50% drop in implied line speed at peak hours. For a user who connects only in the evening (after work), this 50% drop would certainly translate into a larger drop in satisfaction than the time-averaged figure of 75% suggests.
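A tiny worked example, with made-up numbers, of how the time average can hide the evening experience:

    # 18 quiet hours at 90% of implied line speed and 6 peak hours at 45%
    # still average out to roughly 79%, even though an evening-only user
    # never sees more than 45%.
    quiet_hours, quiet_pct = 18, 90.0
    peak_hours, peak_pct = 6, 45.0

    time_average = (quiet_hours * quiet_pct + peak_hours * peak_pct) / 24
    print(f"time-averaged figure: {time_average:.0f}%")       # ~79%
    print(f"evening-only user experiences: {peak_pct:.0f}%")  # 45%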

Would this scheme suffice to provide evidence of poor quality?

Although this measurement scheme was probably not developed with the aim of gathering evidence against ISPs, one could imagine it being used in such a way. Of course, only large-scale metrics could be reported, such as averages across a large number of locations, metrics of commonly accessed DNS servers and the like. More granular evidence would require several monitoring devices per location (e.g., a street), which might be financially unfeasible. Another issue is that the monitoring devices measure only when users are idle, while many faults occur precisely when many users are using the service. So while this is the first independent and statistically solid measurement scheme there is, its use as a monitor of quality for geographically clustered or individual users is limited.

The report certainly fulfils its premise by showing that connection speed has only a limited effect on the user’s perception of quality and that the Internet experience is affected by many other factors which providers are in control of. Could this report serve as a hint to regulators that ISPs should enrich their contracts with properties other than mere speed?


Posted in accountability, congestion, Of Interest, QoS | 9 Comments »

The Role of Accountability in Home Broadband Provisioning

Posted by David Mayer on August 31, 2008

As of July 2008, some 20 years after the first Internet Service Providers (ISPs) were formed and some 50 years after packet switching was invented, the home broadband customer still has no guarantee of any quality parameter, no way of holding their ISP accountable for interruptions and low speed, no choice of quality and no way to share resources fairly with other customers in times of congestion.

Any home broadband customer will be familiar with the situation where something is wrong with their connection, be it unavailability, high data loss or failed name servers. Yet upon contacting the provider one is taken through a bemusing routine of resetting the PC, the router, the modem and whatever else has buttons to press, with the ISP rarely saying: “It is our fault, we’ll refund you.”

What exactly determines the quality of a broadband connection? All performance characteristics will depend on

  • Network dimensioning and management
  • Behaviour of other customers
  • Resource sharing mechanism in place
  • The rest of the Internet.

While the rest of the Internet is not within the operator’s power, and the behaviour of other customers only partly so, network dimensioning and the facilitation of resource sharing certainly are. Let us now briefly describe what these issues encompass and how today’s operators handle them.

(The lack of) Resource sharing mechanisms

Resource sharing is a mechanism determining the division of network resources between the operator’s customers, especially in times of high demand and congestion. Some resource sharing is facilitated by medium access protocols; for example, the DOCSIS protocol used in cable networks has extensive means of controlling each user’s total usage of resources at any time. The transport layer also offers a resource sharing mechanism of sorts: TCP controls the rate of every connection, but not the total rate of a user. Moreover, it cares about resources along the end-to-end path rather than only the resources in the operator’s network.
The medium access protocols, and even the transport layer protocols, are fully controllable by the operator, who could use them to facilitate resource sharing.
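An idealised illustration of the point about TCP: assuming long-lived flows with equal round-trip times sharing a single bottleneck, TCP’s fairness is per flow, so a user who opens many parallel connections obtains a proportionally larger share of the link. (The figures below are hypothetical.)

    # Per-flow fairness is not per-user fairness: with roughly equal shares
    # per TCP flow, the user with more parallel flows gets more of the link.
    bottleneck_mbps = 100.0
    flows_per_user = {"user_a": 1, "user_b": 10}   # hypothetical users

    total_flows = sum(flows_per_user.values())
    for user, flows in flows_per_user.items():
        share = bottleneck_mbps * flows / total_flows
        print(f"{user}: {flows} flow(s) -> about {share:.0f} Mbit/s")
    # user_a: 1 flow(s)  -> about 9 Mbit/s
    # user_b: 10 flow(s) -> about 91 Mbit/s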

So how do some of the UK’s major broadband providers deal with fair sharing? Virgin Media’s cable broadband penalises heavy customers retrospectively, after they have carried out a high-usage activity: the ISP reduces the speed of the user’s connection for a duration of 5 hours once the customer exceeds a limit on the amount of data transmitted or received in a given time interval (a sketch of such a policy appears after the list below).
This means that

  • The punishment becomes ineffective if the user is not interested in using the connection after the incident.
  • The punishment does not protect others at the time of congestion.
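Here is a minimal sketch of such a retrospective policy as we understand it from the description above. The cap, measurement window and penalty duration are hypothetical figures of our own, not Virgin Media’s actual parameters.

    from datetime import datetime, timedelta

    CAP_BYTES = 1500 * 1024 * 1024      # hypothetical usage cap per window
    WINDOW = timedelta(hours=3)         # hypothetical measurement window
    PENALTY = timedelta(hours=5)        # speed reduced for the next 5 hours

    class UsagePolicy:
        """Retrospective throttling: exceed the cap, get slowed down afterwards."""

        def __init__(self):
            self.samples = []           # (timestamp, bytes transferred)
            self.penalty_until = None

        def record(self, now: datetime, nbytes: int) -> None:
            self.samples.append((now, nbytes))
            recent = sum(b for t, b in self.samples if now - t <= WINDOW)
            if recent > CAP_BYTES:
                self.penalty_until = now + PENALTY

        def throttled(self, now: datetime) -> bool:
            # The penalty applies only after the heavy usage, which is exactly
            # why it neither deters an uninterested user nor protects others
            # during the congestion itself.
            return self.penalty_until is not None and now < self.penalty_until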

British Telecom employs a “Fair Usage Policy” according to which a customer’s bit rate will be “restricted” when the customer becomes a “heavy user”.

Please note that the cable standard DOCSIS offers excellent QoS facilities. It has 6 quality classes and 8 priority levels. It has total control over the upstream (and can therefore regulate TCP connections by regulating the protocol’s acknowledgements), and it can also schedule the downstream. The DOCSIS standard is well documented and a number of QoS-related research papers have been written on the topic, most of them focusing on real-world implementation.

Network dimensioning and management

It is the number of customers that largely determines the operator’s income, while the infrastructure acquired to deliver the sold service reduces profit. It is only natural that an operator tries to increase the former and reduce the latter, given that service quality is hardly monitored at all and that it is extremely difficult for a customer to claim a refund for bad service. It is obvious that a network operator has a strong incentive not to support any initiative aimed at objectively measuring the quality of the offered service.

It is not practical to infer an operator’s network management practices from the outside, because the ISP is not obliged to disclose this information, and even if it did, its effect on quality is non-trivial. What is left, then, is the pursuit of a mechanism which would hold the operator accountable for the service it provides, and so possibly motivate the provider to change its network management.

The rest of this article complains about the lack of accountability and presents two hypothetical strategies for increasing it.

Operator’s accountability

The mere availability of a data connection is the core of the sold service, but it is the quality of broadband provisioning that determines the utility customers derive from it. Yet it is extremely difficult to prove that a service has been inadequate at some point in time. It is practically impossible for a common user to show that there was, for example, a packet loss rate of 40% for a duration of 50 minutes last Saturday afternoon, and that it was not the customer’s fault.

Why is it that we can take a broken watch back to the shop and have our money refunded, we can even raise legally sound complaints about complex services such as consulting or insurance but it is almost impossible for a non-techy broadband user to show that something was wrong with their broadband service? What is it about Internet service which makes it so hard to complain about?

For one thing, Internet provisioning is a multi-dimensional service. It spans several time-scales and locations, has complex statistical properties and is perceived subjectively (unlike the quality of electricity or gas). But take the complexity of water quality standards, for example: they comprise over 50 quantitative parameters that every litre of drinking water must satisfy. Yet utility companies do satisfy these standards, and the state usually takes care of monitoring them.

One can argue that an additional set of requirements on home broadband provisioning would increase the cost, and in turn the price, having a negative effect on the proliferation of the service. But surely the same argument applies to water quality, car safety and many other areas in which more stringent legal requirements have clearly brought benefits far outweighing the costs.

Proving unsatisfactory quality

Part of the consumer law related to the provision of broadband is the sale of goods and services legislation. Under the Sale of Goods Act 1979 and the Supply of Goods and Services Act 1982, service providers must supply goods and services that are, among other things,

  • of satisfactory quality and
  • delivered as described.

Of course, “satisfactory quality” remains undefined and the service is described only vaguely. Furthermore, even if some well-defined quality requirement were breached, there would be no easy way for a customer to prove it.

Let’s now design a simple hypothetical accountability strategy for today’s home broadband service. The aim is to create a piece of evidence that is immune to the operator suggesting that perhaps there was a fault on the customer’s side.

A) Massively deployed application-layer metering

There are two ways we can think of, both solid enough for a court room. Since judges operate on the basis of reasonable belief, we’ll employ some statistics: if a large number of customers experience, measure and document a bad connection at the same time, it is unlikely that this is due to all of them having misconfigured their firewalls or having had their cables cut simultaneously. Naturally, in several cases it could have been the customer’s fault, but it is beyond reasonable belief that a large group of customers of a particular operator erred at the same time. Hence an application running on the customer’s computer can be used to generate evidence about the service when its measurements are put together with those of a large number of other customers.
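A rough sketch of this statistical argument: if each customer independently has only a small probability of a purely local fault, the probability that many of them report a fault at the same time for local reasons alone becomes negligible, so the common cause is almost certainly on the provider’s side. The numbers below are illustrative assumptions, not measurements.

    from math import comb

    def prob_at_least_k_local_faults(n: int, k: int, p: float) -> float:
        """Binomial tail: P(at least k of n independent local faults)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # e.g. 200 monitored customers, 50 simultaneous complaints, and a 5%
    # chance of a purely local fault per customer at any given time:
    print(prob_at_least_k_local_faults(200, 50, 0.05))   # astronomically small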

A basic version of this concept is already in place: there are Internet forums where users post their measurements, often obtained from the excellent Speedtest.net. If these measurements were made automatically and the results processed to extract geographical and temporal commonalities, that would be a big step towards holding ISPs accountable.

B) Certified hardware monitors attached to customer modems

This method would rely on a device connected between the modem and the user’s equipment. This device, let’s call it a monitor, would carry a certification from a regulatory body such as Ofcom in the UK. Operators would be legally obliged to allow this device to be connected to the modem they provide. Another party, a governmental organisation or a regulated private company, would collect data from these monitors and make them available to the public. Since the monitors are standardised and cannot be modified by the customer, there need not be a large number of them in order to rule out a fault on the customer’s side.

Similarly to the way TV and radio ratings are collected from a sample of customers, measurement data would be collected from several monitors in a given locality, as illustrated in the figure below.

Both strategies require a start-up investment and some modest long-term funding. Most customers will also need incentives to participate.

Conclusion

Broadband speeds continue to grow, together with the necessity of Internet access and the demands of new applications. Advances at the physical and medium access layers are being adopted by the industry. Innovative Internet applications are booming. Yet the experience of any more-than-slightly advanced Internet user is marred by poor network management, failures, oversubscription and unfair usage. At the same time, progress on the intermediate layers – IP, transport, management – seems to stagnate. Thousands of research papers on quality provisioning and resource allocation seem to have been written in vain.

Be it a lobby group, a customer initiative or the government itself that pushes the idea of accountability into practice, we think it is the key to better home Internet provisioning.

Posted in congestion, fairness, QoS | 2 Comments »

Network Congestion Control, M. Welzl, Wiley 2005. (A Book Review)

Posted by David Mayer on August 31, 2007

Network Congestion Control: Managing Internet Traffic — Michael Welzl (Wiley Series on Communications Networking & Distributed Systems 2005).

The control of Internet congestion is a field entangling numerous research, engineering and economic topics, each of which, on its own, offers only a limited perspective on the problem. Some books on this topic treat it as part of a broader field of network communications; others provide a narrow, formal mathematical exposition. Michael Welzl’s book is an exception. It exposes congestion control in several facets, always succinctly explaining the principles and anticipating the reader’s questions. Aimed mainly at starting PhD students and interested networking people outside the research community, it avoids formal mathematics and builds up the material through common sense. But it is far from trivial: the author provides a profound overview of the principles, problems and solutions, and manages to cover a vast number of diverse topics, focusing on underlying principles, practicality and drawbacks. The author co-chairs the Internet Congestion Control Research Group and works at the University of Innsbruck in Austria.

An essential introduction to the problem area is developed in chapter 2, in which the less informed reader becomes familiar with basic concepts such as control feedback, stability, queue management, scalability, incentives and fairness. Present technology is the topic of chapter 3, 70% of which is taken up by a fairly detailed description of TCP. The exposition is incremental and problem-motivated. The rest of the chapter briefly describes SCTP, RED and ATM’s Available Bit Rate service.

I see the main value of the book in its latter part. Chapter 4 is an excellent exposition of very recent experimental enhancements to congestion control. Most of this chapter is dedicated to TCP enhancements and active queue management enhancements. About 10 recent queue management techniques are presented, including such specialities as RED with Preferential Dropping and Stochastic Fair BLUE. A view from the Internet provider’s side is briefly treated in chapter 5. The topics here share one property: they operate on a much larger timescale than those of the other chapters. They include traffic engineering, MPLS and QoS architectures. Although the level of detail here is much lower than in the other chapters, this chapter puts the others into perspective. The last chapter makes for an exciting read: it presents a mixture of open problems waiting for the hungry PhD mind. It also contains the author’s personal view on the subject, which could be characterised as the common-sense reflections of a well-informed pragmatist.

Two appendices conclude the book: Appendix A shows some practical teaching techniques, and Appendix B introduces the workings of the IETF.

As great as the book seems, I spotted a few over-simplifications:
Section 2.17.3 describes the concept of proportional fairness. The author states here that the maximisation of total utility maximises financial gain. I think this is quite misleading, because the term financial gain remains undefined and it is not clear from the text where the revenue comes from. Even if the reader knew that it is meant to come from congestion prices, this would still be problematic. It is true that revenue is maximised if the prices charged equal the congestion prices corresponding to the utility maximisation, but congestion prices are typically not meant to generate revenue, as they serve as a control mechanism.
Further, the author states that the maximisation of utility functions can be seen as an “optimisation (linear programming)”. But only the constraints are linear; the objective function is non-linear (concave), hence the optimisation problem is a non-linear one.
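For reference, the proportional-fairness formulation in question is usually written as the following network utility maximisation problem (a standard statement following Kelly’s work), where x_r is the rate on route r, w_r its weight and c_l the capacity of link l; the objective is concave while only the constraints are linear, so the problem is a concave programme rather than a linear one:

    \begin{align*}
    \text{maximise} \quad & \sum_{r} w_r \log x_r \\
    \text{subject to} \quad & \sum_{r:\, l \in r} x_r \le c_l \quad \text{for every link } l, \\
    & x_r \ge 0 \quad \text{for every route } r.
    \end{align*}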
In section 5.1 the author explains the long-range dependence of Internet traffic. It is stated there that the Poisson process’s distribution flattens out as the timescale grows. Surely what is meant is that the autocorrelation function flattens out (as opposed to that of self-similar Internet traffic). [Update: see the comments below by the book’s author.]

A word of warning for PhD students
This book is a very good one, but I think one should be cautious about using some of the open problems as the basis of one’s PhD. The problems border on engineering rather than research, and in some colleges solving them cannot satisfy the PhD requirements by definition. Ideally, solutions to these problems will come about as a by-product of a more fundamental framework called the thesis. Perhaps these problems can motivate and trigger a PhD topic, but they should not constitute it.

Any alternative books out there?
A book treating congestion control in detail is The Mathematics of Internet Congestion Control by R. Srikant (Birkhäuser, 2004), but be aware: the exposition is quite straightforward and very formal. Another candidate is High Performance TCP/IP Networking by M. Hassan and R. Jain (Prentice Hall, 2003), which is limited to TCP and focuses on performance evaluation.

In summary, the material in this unique book is excellently presented, a good balance between depth and clarity is kept throughout, and the book should be keenly received by its audience.

Posted in Book review, congestion, Of Interest, QoS, research | 3 Comments »

CoNEXT 2006

Posted by Michael Gellman on November 19, 2006


I’m going to be attending CoNEXT 2006 in Portugal this year. CoNEXT describes itself as follows (from the Call for Papers):

CoNext 2006 will be a major forum in the area of future networking technologies. CoNext emphasizes synergies between various international and technical communities. The conference will feature a single-track, high quality technical program with significant opportunities for technical and social interaction among a closeknit community of participants. CoNext aims to be open and accommodating to multiple viewpoints and is committed to fairness in the review process and to returning deep and sound technical feedback to authors of submitted paper.

The program seems really compelling, with talks by a lot of researchers at the forefront of networking research. I’m really looking forward to the talk by Jon Crowcroft and also to the paper by Simon Fischer of RWTH Aachen, which I’ve read and will be posting some thoughts on soon.

I’m going to be presenting a student poster on my Ph.D. work dealing with QoS routing in peer-to-peer overlay networks. If you’re attending this conference, be sure to post a comment — it would be great to chat about it in advance!

There’s also a great new Web 2.0-ish site which is dedicated to conferences called ConFabb where they have a dedicated entry for CoNEXT 2006.

Posted in Conference, QoS, research | 1 Comment »

Pursuing Research in Communications Networks

Posted by David Mayer on November 14, 2006

Introduction

Research in communications networks is a challenging enterprise that many find a worthy endeavour. Suppose we have answered the question of why to do networking research in the first place; let us move to the second most important question, that of how.

Communications research can be seen as having attributes of both science and engineering. I think this also corresponds to the main reasons behind the “why”: the work is interesting in its own right (the science bit, just as it is interesting to research the age of the universe and the structure of our genes), and there may be a practical benefit to society (the engineering bit, “I’m doing something useful”). Some networking research is very theoretical, i.e. unlikely to benefit any implementation, and yet interesting; the benefit to society, however, stems mostly from practically oriented research.

In this article, I would like to present some ideas for constructing a worthy research topic and research work in the field of communications networks. In particular, we will use network Quality of Service (QoS) research as an example.

3 Strategic Considerations

We have identified the following strategic considerations:

1. Timeliness

As noted by J. Crowcroft et al. in [1], the time required to research, engineer and retrofit a specific solution is typically longer than the remaining life-span of the congested core. If research commences only when there is an immediate need for it, it is too late. The ability to envisage future needs is essential here. We will present three drivers of new networks which help in predicting the future.

2. Deployability

The lack of attention to non-research issues was the main reason behind the failure of network QoS research to deliver a practical solution. Deployability encompasses various operational (practical) and economic issues.

3. Functionality

Given the need for a new mechanism, what is the best thing for it to do? To illustrate the problem, consider that there is a general consensus on the need for QoS, but much less agreement on the specific functionality, as seen from the existence of proposals as different as IntServ and DiffServ. We will try to show that functionality is tightly connected to deployability and recommend that a proposed mechanism should maximise benefits subject to the constraints given by deployability considerations.

1 – Timeliness

Research must commence a long time before there is a need for it [1]. Obviously, this requires us to predict the future. We shall now look at some examples of how this prediction could be done. The key question is: What causes a need for an improvement? The answer enables us to start relevant research at the right time. In principle, the driving forces are, according to K. Cambron, president and CEO of AT&T Labs [2]:

A) Changing demand

B) Decreasing entropy

C) Changing technology

A – Changing demand

This includes the volume of traffic, but also its type and patterns, and the emergence of new applications. As an example, let us take a look at the Google Tech Talk by Van Jacobson [3]. Jacobson reviews the history of telecommunication networks and the success (disaster) of TCP/IP, and finally argues that our current networking protocols are inadequate because they were designed for a conversational network, where two people or machines talk to each other, while today over 99% of network traffic comes from a machine acquiring named chunks of data (web pages, emails). The main point is that we are doing dissemination over a conversational network, which has numerous inefficiencies. That is an example of a driving force behind new networks – a change in demand.

B – Decreasing entropy

This term is used in [2] to describe the ability of a network to deal with new problems. Cambron argues that in order to meet new demands, changes to the network are made and “every design choice removes a degree of freedom, solving an immediate problem, but eliminating potential solutions to other problems that lie in the future”. At some point the network becomes incapable of dealing with new demands and must be replaced by a new network.

C – Changing technology

Obviously, the increasing bandwidth of core and access links has a fundamental impact on when and what QoS is needed.

2 – Deployability

The failure of QoS research to deliver a practical solution is ascribed to researchers’ lack of attention to practical issues, above all the lack of feedback from the operational network environment (or the unwillingness of academia to interact with it). An excellent example of some of the practical problems is [4]. Here we list some key points.

Operational considerations

  • Rift between networking operations and protocol design.
    There seems to be a range of difficulties in the operational network that researchers have never heard of and therefore will not consider in their design. The list includes complexity from the configuration point of view, proneness to operator error, debugging tools, incremental deployability and so on.
  • Complexity. In [4], G. Bell argues that IP multicast defines the limit-case for deployable complexity. Quoting [4],

    “Eventually, we concluded that implementing IP multicast required a substantial commitment of engineering resources, and that it would inevitably impair the stability of unicast routing due to the need for frequent OS upgrades and intrusive testing. Many enterprises do not have the mandate, or see a justification, for making this commitment.”

    A good example of the operational issues arising with the deployment of QoS is Davie’s 2003 paper “Deployment experience with differentiated services” [5].
    (Network administrators will not risk destabilising their networks.)

Economic considerations

Quoting [6],

“Solutions require path paved with incentives to motivate diverse agents to adopt it, implement it, use it, interface with it, or just tolerate it. Without that path you’ve wasted your time”.

The bottom line that justifies QoS research from the economic point of view is that QoS provides network operators with a means to extract value from their network by charging for different levels of service [1]. However, this fact alone does not guarantee adoption by operators, and QoS research must account for all practical issues to become successful.

For an in-depth analysis of different political and socio-economic forces in the Internet, see [7] by D. Clark et al. and [8].

Note on deployability

The ease with which one can deploy an idea depends very much on the network layer in question. It is easy to deploy a novelty at the application layer, but very difficult at layer 3, and nearly impossible at the layers below that. When choosing a research topic with an eye on its practical impact, one should take this dependency into consideration. (There are probably many people of the calibre of the Google guys Larry Page and Sergey Brin working on the lower layers, but it takes a Cisco-sized company to make a difference there.)

3 – Functionality

Researching and engineering particular mechanisms can be seen as finding the best trade-off between functionality and operational feasibility. For example, it is fairly well established that the most desirable feature in the current Internet would be some notion of QoS on an end-to-end basis (i.e. from the source to the destination, across domains). However, if the research community were asked what that QoS should be, there would be no definite answer. Guaranteed performance metrics? Differentiated service? How differentiated? Simple congestion feedback?

Perhaps the answer should be: The most beneficial solution subject to time and deployability constraints.

Summary

To propose and conduct research in networking requires profound and wide understanding of the theoretical and practical constraints involved therein.

We have identified three critical considerations, timeliness, functionality and deployability, and presented a few points that can help in assessing them. We argued that a design should maximise benefit subject to the practical constraints stemming from deployability issues.

In the case of QoS research, functionality on its own is only a small part of the recipe for success. Only proposals that fully respect real-world issues have a chance of making it into operational networks.

For those who are not yet in a position to make judgements about future developments and the operational and economic aspects, but who are forced to invent a research topic (such as a PhD student with the freedom, or the requirement, to come up with one), an alternative, safer approach is possible:

  1. Choose a research area according to your interest, opportunities to contribute or instinct.
  2. Spend several months reading and writing simulations to obtain a profound understanding of the problem in question.
  3. Find a point of potential improvement and propose the improvement. From there on you can slowly start getting the whole picture, while working and contributing.

I hope this article provides some rough guidelines as to what to look out for when commencing a research project in the area of communications networks.

References

[1] Jon Crowcroft, Steven Hand, Richard Mortier, Timothy Roscoe, and Andrew Warfield. QoS’s downfall: at the bottom, or not at all! In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 109–114, New York, NY, USA, 2003. ACM Press.

[2] G. K. Cambron. The next generation network and why we’ll never see it. Communications Magazine, IEEE, 44(10):8–10, 2006.

[3] Van Jacobson. A new way to look at networking, 2006. Google Tech Talks. http://video.google.com/videoplay?docid=-6972678839686672840&q=van+jacobson&pr=goog-sl.

[4] Gregory Bell. Failure to thrive: QoS and the culture of operational networking. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 115–120, New York, NY, USA, 2003. ACM Press.

[5] Bruce Davie. Deployment experience with differentiated services. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 131–136, New York, NY, USA, 2003. ACM Press.

[6] K. C. Claffy. Top problems of the Internet and how to help solve them, 2005. http://www.caida.org/publications/presentations/2005/topproblemsnet/topproblemsnet.pdf.

[7] David D. Clark, John Wroclawski, Karen R. Sollins, and Robert Braden. Tussle in cyberspace: defining tomorrow’s internet. In SIGCOMM ’02: Proceedings of the 2002 conference on Applications, technologies, architectures, and protocols for computer communications, pages 347–356, New York, NY, USA, 2002. ACM Press.

[8] L. Burgstahler, K. Dolzer, C. Hauser, J. Jahnert, S. Junghans, C. Macian, and W. Payer. Beyond technology: the missing pieces for QoS success. In RIPQoS ’03: Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS, pages 121–130, New York, NY, USA, 2003. ACM Press.

Posted in deployability, QoS, research | 19 Comments »