PaperPicker

Online Journal Club for Networking Researchers


Independent Measurement of Broadband Provisioning Quality by SamKnows: A Step Towards Providers’ Accountability?

Posted by David Mayer on January 2, 2009

Some time ago on this blog we wrote about the lack of accountability in home broadband provisioning. We noted how difficult it is for a typical broadband customer to produce evidence of poor service. One solution we mentioned for generating legally sound evidence was to deploy hardware monitors attached to customers' modems. This is exactly what SamKnows Limited have done, although the reasons behind their project are probably different. The project was partly backed by Ofcom.

A report describing the project has been published by Sam Crawford here. In this article, we briefly characterise the measurement scheme, pick out some interesting results and finally comment on whether this scheme could be used to gather evidence about the quality of broadband provisioning for individual users.

One of the objectives of Sam's report is to dispel the myth that speed is by far the most important property of a broadband service. The results show that speed is only one of many properties which affect the quality of the service, and that other properties can have a larger effect on the quality as perceived by the user.

We at PaperPicker look at this project as an example of how Internet providers could be made accountable. A typical user can never find out, let alone quantify, what is wrong with their connection. An ISP can always blame the user's equipment for a fault, and it is extremely difficult for users to produce evidence of poor service. A measurement scheme such as this one is a real-world example of generating statistically and legally sound evidence.

What was done and how

At the heart of the scheme lie hardware monitoring units, installed in the homes of volunteers and equipped with measurement routines. The monitoring units were deployed all over the UK with the help of Ofcom. Over the course of six weeks in 2008 these units generated multiple types of traffic, recorded measurements and sent the results to SamKnows servers. The test included 223 devices and covered 12 UK-based Internet service providers (ISPs).

What makes this project unique is the fact that the collected measurements are both independent of ISPs and statistically sound. The measurements are independent of ISPs because they do not require any cooperation from ISPs. Statistical confidence comes from the number of units deployed and the number of measurements carried out.
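To make "statistically sound" a little more concrete: with hundreds of units each reporting many samples, one can attach a confidence interval to any averaged metric. Below is a minimal Python sketch using invented latency figures, not the report's data:

    import math

    # Invented per-unit mean latencies in milliseconds (not the report's data).
    latencies_ms = [38.2, 41.5, 39.9, 44.1, 37.0, 40.3, 42.8, 39.5]

    n = len(latencies_ms)
    mean = sum(latencies_ms) / n
    # Sample standard deviation (Bessel's correction), then standard error.
    std_err = math.sqrt(sum((x - mean) ** 2 for x in latencies_ms) / (n - 1) / n)

    # 95% confidence interval via the normal approximation (z = 1.96);
    # with 223 units, as in the study, this approximation is reasonable.
    print(f"mean latency: {mean:.1f} ms +/- {1.96 * std_err:.1f} ms (95% CI)")

More units and more measurements per unit shrink the interval, which is precisely where the scheme's statistical confidence comes from.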

However, since SamKnows do not quite disclose their reasons for running the project, nor do they specify how it is financed, one may have doubts about its independence. On the other hand, the project is backed by Ofcom, the UK telecoms regulator.

So how does it work?

A monitoring unit (a Linksys WRT54GL running embedded Linux) is connected between the user's modem or router and the user's computers, as shown in the figure below. The unit generates traffic only when the user's equipment does not. A pre-programmed set of tests runs according to a given schedule and the results are collected at SamKnows servers.

[Figure: the monitoring unit sits between the user's modem/router and the user's computers]
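The report does not publish the units' firmware, but the behaviour just described can be sketched as a simple loop. Everything below (the idle threshold, the schedule and the function bodies) is our own illustrative assumption, not SamKnows' code:

    import time

    IDLE_THRESHOLD_BPS = 8_000  # assumed cut-off below which the line counts as idle
    TEST_INTERVAL_S = 3600      # assumed schedule: attempt a test run every hour

    def current_user_traffic_bps():
        """Hypothetical hook returning the rate of user traffic crossing the unit.

        A real unit would read interface byte counters (e.g. /proc/net/dev on
        the WRT54GL's embedded Linux) and derive a rate; we return 0 here so
        the sketch is runnable.
        """
        return 0

    def run_latency_test():
        """Hypothetical measurement routine standing in for the real test suite."""
        print("running latency test...")

    while True:
        # Generate measurement traffic only when the user's equipment is quiet,
        # so tests do not compete with (or get skewed by) real usage.
        if current_user_traffic_bps() < IDLE_THRESHOLD_BPS:
            run_latency_test()
            # ...results would be uploaded to the collection servers here
        time.sleep(TEST_INTERVAL_S)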

Metrics and results

Sam's report presents and analyses a number of different measurements; we comment on a few selected ones in the following table.

Metric: Latency
Result: Provider average of about 40 ms.
Comment: Virgin.net did particularly badly; at peak hours its latency rises by 180% compared to quiet hours, while the cross-provider average rises by only 25% at peak hours.

Metric: Loss
Result: Very small across providers (under 0.6%).
Comment: –

Metric: Response time of DNS queries
Result: Very good across providers (average of 46 ms), with the exception of Be Unlimited, which occasionally exceeded four times the average response time.
Comment: The report shows how DNS query times affect web browsing even when latency is low.

Metric: DNS failures
Result: Very low, with an average failure rate of 0.81%, apart from Be Unlimited at 2.82%.
Comment: –

Metric: Perceived quality of VoIP
Result: Most providers achieved a score close to the theoretical maximum, with the exception of Virgin Media, despite its very small latency.
Comment: The measurements show how jitter (the variance of delay) lowers the perceived quality of VoIP. Virgin Media has excellent latency results but suffers low VoIP quality due to high jitter.

Metric: Speed as a percentage of implied line speed
Result: Most providers achieve about 75%, but a more detailed graph shows that both Virgin ADSL and Virgin Cable drop significantly in speed during peak hours.
Comment: Implied line speed is defined as the maximum throughput achieved across all speed tests within a two-day period.
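To illustrate the last metric: under the report's definition, speed as a percentage of implied line speed can be computed from raw speed-test samples roughly as in the following sketch (the samples and the helper function are our own invention):

    from datetime import datetime, timedelta

    # Invented speed-test samples: (timestamp, throughput in Mbit/s).
    samples = [
        (datetime(2008, 7, 1, 3, 0), 7.9),
        (datetime(2008, 7, 1, 20, 0), 4.1),  # peak-hour sample
        (datetime(2008, 7, 2, 11, 0), 7.6),
    ]

    def implied_line_speed(samples, at, window=timedelta(days=2)):
        """Maximum throughput seen in the two-day window ending at `at`."""
        return max(v for t, v in samples if at - window <= t <= at)

    implied = implied_line_speed(samples, datetime(2008, 7, 2, 11, 0))
    for t, v in samples:
        print(f"{t}: {v:.1f} Mbit/s = {100 * v / implied:.0f}% of implied line speed")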

 

Another interesting finding is evidence of ISPs performing traffic shaping on ports other than 80. This is presumably because ISPs try to cap peer-to-peer traffic, which typically uses non-80 ports.
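A simple way to probe for such shaping, assuming you control a test server reachable on several ports, is to transfer the same payload over port 80 and over a non-standard port and compare throughput. The endpoints below are placeholders, and this is only a rough sketch of the idea, not the methodology used in the report:

    import time
    import urllib.request

    # Placeholder endpoints: the same test file served by a server you control,
    # once on port 80 and once on a non-standard port.
    URLS = [
        "http://test-server.example.com:80/payload-10MB.bin",
        "http://test-server.example.com:8081/payload-10MB.bin",
    ]

    for url in URLS:
        start = time.monotonic()
        data = urllib.request.urlopen(url).read()
        elapsed = time.monotonic() - start
        print(f"{url}: {8 * len(data) / elapsed / 1e6:.1f} Mbit/s")

    # A consistently and markedly lower rate on the non-80 port (after enough
    # repetitions to average out noise) would hint at port-based shaping.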
 
A small objection against the time-averaging of results: take the 75% line-speed result. As such, it may not reflect users' satisfaction very well. For example, if for most of the day the speed is high but most users are not using the connection (they are not at home), and the evening then brings a significant drop just when many users are at home, the result may still be 75% simply because the speed is high for a large part of the day.

So, for example, Virgin.net exhibits a 50% drop in implied line speed at peak hours. For a user who connects only in the evening (after work), this 50% drop would certainly translate into a far larger drop in satisfaction than the 75% time-averaged figure suggests.
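A toy calculation makes the objection concrete (the 24-hour profile is invented):

    # Invented 24-hour profile: fraction of implied line speed achieved each hour
    # (full speed off-peak, 50% between 18:00 and 24:00).
    hourly_fraction = [1.0] * 18 + [0.5] * 6

    time_average = sum(hourly_fraction) / 24
    evening_average = sum(hourly_fraction[18:]) / 6

    print(f"time-averaged:  {100 * time_average:.1f}% of implied line speed")
    print(f"evening-only:   {100 * evening_average:.1f}%")

The headline time average (87.5% here) looks healthy, while the user who is only online in the evening experiences 50%.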

Would this scheme suffice to provide evidence of poor quality?

Although this measurement scheme was probably not developed with the aim of gathering evidence against ISPs, one could imagine it being used in such a way. Of course, only large-scale metrics could be reported, such as averages across a large number of locations, metrics of commonly accessed DNS servers, and so on. More granular evidence would require several monitoring devices per location (e.g., a street), which might be financially unfeasible. Another issue is that the monitoring devices measure only when users are idle, while many faults occur precisely when many users are using the service. And so, while this is the first independent and statistically solid measurement scheme there is, its use as a monitor of quality for geographically clustered or individual users is limited.

The report certainly fulfils its premise by showing that connection speed has only a limited effect on users' perception of quality, and that the Internet experience is affected by many other factors which providers are in control of. Could this report serve as a hint to regulators that ISPs should enrich their contracts with properties other than mere speed?


Posted in accountability, congestion, Of Interest, QoS | 8 Comments »

Vint Cerf on Current Internet Research Problems

Posted by David Mayer on October 1, 2008

In September this year the British Computer Society organised an international academic conference, chaired by Prof. Erol Gelenbe, featuring talks and papers under the theme of "Visions of Computer Science". Among the key speakers was Vint Cerf, one of the founders of the Internet's underlying mechanics, who currently holds the position of "Internet Evangelist" at Google. Vint Cerf gave a speech on the history of the Internet, its current issues and the concept of an Interplanetary Internet. In this article we would like to provide a handy list of what Vint Cerf considers the most important research problems concerning the current Internet.

List of research problems concerning the Internet

  • Security at all levels
  • Internet Erlang Formula
    Erlang formulas were used in telephony to calculate the call blocking probability from the call arrival rate, the call duration and the number of lines. By an "Internet Erlang formula" Vint means a tool which could relate network parameters to the network performance perceived by users or applications. Vint identifies the problem as a lack of up-to-date models: the never-ending innovation in applications quickly invalidates any model assumptions, unlike in telephone networks. (A sketch of the classical Erlang B formula appears after this list.)
  • Mobility is something that we have not handled well in the Internet
    Vint said that he had made a mistake in the design of TCP/IP by binding the TCP end-point identifier too tightly to the physical location, i.e. the IP address. Currently, a TCP user who moves elsewhere destroys the TCP connection.
  • We are not exploiting the possibilities of
    • Multihoming – how do we take advantage of being connected to the Internet via multiple service providers at the same time (and hence being given several IP addresses simultaneously)?
    • Multipath routing, whereby multiple routes would be used simultaneously, rather than switching to another route only after one breaks
    • Broadcasting, especially in wireless networks where broadcast is a natural feature of the medium
  • Semantic web
    Vint asserts that there is vast potential in creating machine-readable semantic relationships rather than mere hyperlinks. A searched expression may exist in several different contexts, and Internet users can help increase the semantic clarity of Internet content.
  • The problem of rotten bits
    If we do not retain the software that was used to create content in the past, that content will become invisible and meaningless in times to come. Think of proprietary formats which are no longer supported by the company that developed them, or simply of older formats that are no longer supported.
  • Energy consumption
    One of Google's data centres consumes 128 megawatts of power. Engineers increasingly treat energy consumption as one of the design criteria when developing algorithms and computer architectures.
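As promised above, here is the classical Erlang B formula from telephony, the kind of closed-form tool for which Vint would like an Internet analogue. This is textbook material, not something presented in the talk:

    def erlang_b(lines: int, offered_erlangs: float) -> float:
        """Call blocking probability for `lines` circuits under `offered_erlangs`
        of offered traffic, computed via the numerically stable recursion
        B(0) = 1, B(m) = E*B(m-1) / (m + E*B(m-1))."""
        b = 1.0
        for m in range(1, lines + 1):
            b = offered_erlangs * b / (m + offered_erlangs * b)
        return b

    # Example: 10 lines offered 7 Erlangs of traffic.
    print(f"blocking probability: {erlang_b(10, 7.0):.3f}")

An "Internet Erlang formula" would similarly map traffic and network parameters to user-perceived performance; the difficulty, as Vint notes, is that Internet traffic models become outdated far more quickly than telephone-call models ever did.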

A note on IPTV

Most people associate video with streaming video. However, only 15% of the video we watch is real-time video. Vint asserts that the rest can and should be thought of as a mere file transfer, since the source is not real-time. This consideration lightens the otherwise stringent demands that real-time video transfer imposes on networks. Vint predicts that streaming will be only a minor aspect of video on the Internet.

A note on intellectual property

Vint notes that the Internet works by copying content, thereby touching on the issue at the heart of intellectual property protection. Vint suggests that copyright infringement in the Internet context will need to be defined differently from mere copying.

A note on running out of capacity

Vint asserts that the backbone links composed of optical fibres are far from running out of capacity, but that problems are real at the network edges. He sees this as a regulatory and economic issue.

A closing note

During the talk Vint presented a slide with a picture of a 1977 network demonstration he co-designed. The network managed to carry a real-time telephone call across three completely different networks – radio, landline and satellite – networks differing in modulation, bit-rate and loss-rate (a kind of VoIP in 1977!). How much has the Internet really changed since then? Apart from its unprecedented growth, little seems to have changed in its underlying foundations. This article presented a list of current research challenges as seen by Vint Cerf, a visionary who co-fathered these foundations and has the most amazing track record.

Posted in research, Uncategorized | Leave a Comment »