News

  • Journal Paper
    IEEE/IFIP Network Operations and Management Symposium

    This paper is a development of the earlier ideas in PREFLEX -- http://www.richardclegg.org/node/18

    The focus in this case is resilience within a data centre, in particular resilience at the network layer. If several paths are available to a destination, the system, known as INFLEX, can fail over between them seamlessly using OpenFlow. The system is tested using Open vSwitch.
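
    INFLEX itself is described in the paper; purely as an illustration of the underlying OpenFlow mechanism, the sketch below (Python driving ovs-ofctl) installs a fast-failover group on an Open vSwitch bridge so that traffic shifts to a backup port when the primary port goes down. The bridge name, port numbers and destination prefix are assumptions for the example, not values from the paper.

      import subprocess

      BRIDGE = "br0"               # assumed bridge name
      PRIMARY_PORT = 1             # assumed OpenFlow port numbers
      BACKUP_PORT = 2
      DST_PREFIX = "10.0.0.0/24"   # assumed destination prefix

      def run(cmd):
          # Run an ovs-ofctl command, raising if it fails.
          subprocess.run(cmd, check=True)

      # Fast-failover group: each bucket watches a port and is only used while
      # that port is up, so traffic moves to the backup port automatically.
      run(["ovs-ofctl", "-O", "OpenFlow13", "add-group", BRIDGE,
           "group_id=1,type=ff,"
           f"bucket=watch_port:{PRIMARY_PORT},output:{PRIMARY_PORT},"
           f"bucket=watch_port:{BACKUP_PORT},output:{BACKUP_PORT}"])

      # Send traffic for the destination prefix through the group.
      run(["ovs-ofctl", "-O", "OpenFlow13", "add-flow", BRIDGE,
           f"ip,nw_dst={DST_PREFIX},actions=group:1"])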

  • Invited talk
    Location: Queen Mary University of London
    Date: 2014-01-22

    An updated version of this talk was given at Cambridge and can be seen here.

    The key message of this talk is that TCP/IP does not work in the real world the way it is generally taught. The textbook picture of a connection, in which one side sends data as fast as possible, controlled only by loss, to fill the pipe, is not what happens in practice.

    This work brings together two papers:
    A longitudinal analysis of Internet rate limitations (INFOCOM 2014)
    and
    On the relationship between fundamental measurements in TCP flows (ICC 2013)

    The talk analyses passive traces with the aim of explaining the root causes of the bandwidth achieved on a connection. Theoretical results show that, in equilibrium, an unconstrained TCP flow has a bandwidth proportional to 1/RTT and to 1/sqrt(p), where p is the probability of packet loss. The experimental results here tell a different story, however. While the relationship with RTT is upheld, the relationship with loss is not found. Instead, a strong relationship with the length of the flow is found: longer flows achieve higher throughput, in proportion to sqrt(L), where L is the length of the flow in packets.
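
    For reference, the equilibrium result referred to above is the well-known square-root formula, throughput ~ C * MSS / (RTT * sqrt(p)) with C a constant of order one. The small calculation below shows the scaling the formula predicts; the MSS, RTT and loss values are made up purely for illustration.

      from math import sqrt

      def sqrt_formula_throughput(mss_bytes, rtt_s, loss_prob, c=1.22):
          # Classic square-root formula: throughput ~ C * MSS / (RTT * sqrt(p)).
          return c * mss_bytes / (rtt_s * sqrt(loss_prob))

      # Illustrative numbers only: 1460-byte segments, 50 ms round-trip time.
      for p in (0.0001, 0.001, 0.01):
          bps = 8 * sqrt_formula_throughput(1460, 0.05, p)
          print(f"loss={p:.4f}  predicted throughput ~ {bps / 1e6:.2f} Mbit/s")

    Doubling the loss rate should, on this model, cut throughput by roughly a factor of 1.4; it is exactly this dependence on loss that the measured traces do not show.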

    A follow-up analysis looks at the causes of limited throughput. It finds that fewer than half of the flows are governed by loss. Flow bandwidth is very often governed by the application -- for example, YouTube deliberately throttles traffic so that users do not download too far ahead. Some flows are governed by operating system restrictions which do not scale window sizes. Some flows are governed by middleboxes which manipulate the advertised window size. In the network studied, it is these restrictions, rather than loss, which are the primary mechanism limiting bandwidth on connections.

  • Conference paper
    Proceedings of INFOCOM

    This paper looks at when TCP is "not" TCP, by analysing five years of data from a Japanese data set. That is to say, when TCP throughput is limited by mechanisms other than traditional TCP rate control (loss or delay in the network feedback causing a reduction in window size).

    Other mechanisms are important:
    1) Application limiting -- where the sender "dribbles" out data more slowly than the network allows, for example in the way that YouTube does, to limit its bandwidth use.
    2) Window size limitations -- where hosts have a built-in operating system limit on how large the TCP window can grow.
    3) Middlebox/receiver window tweaking -- where the receiver or (more likely) a middlebox tweaks the advertised window size to reduce throughput.

    It is found that in the traces studied these three mechanisms account for more than half of the packets. The traces include data from well-known sites such as YouTube, so it seems likely that the findings generalise beyond this particular trace set.

    In general this paper finds that TCP in the wild does not behave in the way it is traditionally taught: through a variety of mechanisms, much of the traffic is neither "filling a pipe" nor "controlled by loss", and mechanisms beyond traditional TCP congestion control are at play.
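
    As a purely illustrative sketch (not the methodology of the paper), one way to separate these regimes from per-flow summary statistics is to compare achieved throughput against the receiver-window bound and a loss-based estimate; all thresholds and input values below are assumptions made up for the example.

      from math import sqrt

      def classify_flow(throughput_bps, rtt_s, loss_prob, rwnd_bytes, mss_bytes=1460):
          # Upper bound imposed by the advertised/receiver window.
          rwnd_bound = 8 * rwnd_bytes / rtt_s
          # Loss-based (square-root formula) estimate; unbounded if no loss seen.
          loss_bound = (8 * 1.22 * mss_bytes / (rtt_s * sqrt(loss_prob))
                        if loss_prob > 0 else float("inf"))
          if throughput_bps >= 0.8 * rwnd_bound:
              return "window-limited (OS or middlebox)"
          if throughput_bps >= 0.8 * loss_bound:
              return "loss/congestion-limited"
          # Well below both bounds: the sender itself is pacing the data.
          return "application-limited"

      # Example: 2 Mbit/s flow, 50 ms RTT, no observed loss, 64 KiB window.
      print(classify_flow(2e6, 0.05, 0.0, 65535))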

  • Invited talk
    Location: UCL Statistics
    Date: 2014-01-15

    This talk is the latest of my talks about FETA, the framework for evolving topology analysis, and uses updated notation. The core of the work is a likelihood-based model which can assess how likely it is that observations of the evolution of a graph arise from a particular probabilistic model, for example the Barabási-Albert preferential attachment model. The analysis is applied to data from Facebook and from Enron, as well as to artificial models.
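
    As a toy illustration of the likelihood idea (not FETA's actual estimator), the sketch below scores an observed sequence of new-edge attachments under a preferential attachment model versus a uniform random attachment model; the small graph and the observed attachments are invented for the example.

      from math import log

      def log_likelihood(attachments, degrees, model):
          # attachments: ordered list of node ids each new edge attached to.
          # degrees: current degree of every existing node (updated as we go).
          ll = 0.0
          for node in attachments:
              total = sum(degrees.values())
              if model == "preferential":       # P(node) proportional to degree(node)
                  ll += log(degrees[node] / total)
              else:                             # uniform: P(node) = 1/N
                  ll += log(1.0 / len(degrees))
              degrees[node] += 1                # the chosen node gains an edge
          return ll

      # Invented example: a small graph and three observed attachments.
      observed = ["a", "a", "b"]
      print("preferential:", log_likelihood(observed, {"a": 3, "b": 1, "c": 1}, "preferential"))
      print("uniform:     ", log_likelihood(observed, {"a": 3, "b": 1, "c": 1}, "uniform"))

    The candidate model with the higher log-likelihood is the better explanation of the observed evolution, which is the spirit of the comparison the framework performs.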

  • Conference paper
    International Conference on Computer Communications and Networks

    This paper analyses a large number of measurements of round trip times collected from DNS servers and looks at how the measurements vary across continents.

  • Blog post
    A new website for a not-that-new millennium

    Well, I thought it was time that this website was dragged into the 21st century. I have not redesigned it properly since the turn of the century. The site originally ran on an SGI Indigo graphics workstation hosted in the physics department at the University of York and was created, I think, some time in 1994.

  • Journal Paper
    Performance Evaluation 67(5)

    This paper looks at a Markov-chain-based model and uses queueing theory to analyse its performance. The system is a D-BMAP/D/1 queue and a closed-form solution is found.
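
    Not the closed form from the paper, but as a rough picture of the kind of system involved: the sketch below simulates a discrete-time queue with deterministic service of one customer per slot and batch arrivals modulated by a two-state Markov chain. All transition probabilities and batch sizes are invented for the example.

      import random

      # Two-state modulating Markov chain: a "quiet" state and a "bursty" state.
      P = {"quiet": {"quiet": 0.9, "bursty": 0.1},
           "bursty": {"quiet": 0.3, "bursty": 0.7}}
      BATCH = {"quiet": [0, 1], "bursty": [0, 1, 2, 3]}   # possible batch sizes

      def simulate(slots=100_000, seed=1):
          random.seed(seed)
          state, queue, total = "quiet", 0, 0
          for _ in range(slots):
              queue += random.choice(BATCH[state])        # batch arrivals this slot
              if queue > 0:
                  queue -= 1                              # deterministic service: 1 per slot
              total += queue
              state = random.choices(list(P[state]),
                                     weights=list(P[state].values()))[0]
          return total / slots                            # time-averaged queue length

      print("mean queue length ~", round(simulate(), 2))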

  • Journal Paper
    Computer Communications, 33(3)

    The aim of this paper is to provide a summary and a critique of power-law modelling in the Internet. Long-range dependence and self-similarity are considered, as well as scale-free topology analysis.

  • Conference paper
    Proceedings of IEEE/ICC Conference

    This paper looks at a mechanism related to Explicit Congestion Notification. It uses a single bit in the IP header to communicate the congestion at each hop on the path. Statistical estimators are used to work out how accurate the resulting congestion estimate is.
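
    As a rough illustration of the statistical flavour (not the estimators in the paper), the sketch below treats the per-packet bit as a Bernoulli sample of the path's marking probability and reports the sample estimate with a normal-approximation confidence interval; the marking probability and sample size are invented.

      import random
      from math import sqrt

      def estimate_marking(bits):
          # Maximum-likelihood estimate of the marking probability from 0/1 bits,
          # with a 95% normal-approximation confidence interval half-width.
          n = len(bits)
          p_hat = sum(bits) / n
          half_width = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
          return p_hat, half_width

      random.seed(0)
      true_p = 0.05                                        # invented marking probability
      bits = [1 if random.random() < true_p else 0 for _ in range(10_000)]
      p_hat, hw = estimate_marking(bits)
      print(f"estimated marking probability {p_hat:.3f} +/- {hw:.3f}")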

  • Journal Paper
    Journal of Computer and System Sciences, 77(5)

    This paper looks at the phenomenon of long-range dependence. It shows that certain long-range dependent models give answers which contain infinities, and that this behaviour will not be detected by a naive modelling approach. The work extends an earlier PMECT paper.
