• Criticisms of modelling packet traffic using long-range dependence (extended version)

    Journal Paper
    Journal of Computer and System Sciences, 77(5)

    This paper looks at the phenomenon of long-range dependence. It shows that certain long-range dependent models give answers which contain infinities and also that this behaviour will not be detected by a naive modelling approach. The work is an extension of an earlier published PMECT paper.

  • Content Based Traffic Engineering in Software Defined Information Centric Networks

    Abhishek Chanda, Cedric Westphal and Dipankar Raychaudhuri

    This paper describes running Information Centric Networking (ICN) over Software Defined Networking (SDN). Applications are

    1. Traffic engineering in ICN

    2. Content based firewall

    3. Network wide cache management

    Per-flow granularity is not sufficient for content-based routing, but some routing can be done based on content length. Metadata is extracted from ICN interest and data packets. This can be via

    1. Network layer extraction of content length – classified into mouse/elephant flows.

  • The Controller Placement Problem

    Brandon Heller, Rob Sherwood and Nick McKeown

    Question: Given SDN topology,

    1. How many controllers are needed and

    2. where do they go?

    Problems arise especially in WANs with long propagation delay – this affects convergence and availability, and informs the decision as to whether control is “real time” or pushed out to forwarding elements.

    Motivating examples:

    • Internet2, placing SDN controllers in existing network

    • Aster*x distributed load balancer

    • FlowVisor centralised network slicing tool

    Placement metrics:

  • Deep Diving into BitTorrent Locality

    Ruben Cuevas -- Univ Carlos III de Madrid, Nikolaos Laoutaris, Xiaoyuan Yang, Georgos Siganos and Pablo Rodriguez

    This paper looks at P2P traffic over BitTorrent from a large database of torrents. The paper considers the effects of localising BitTorrent traffic on performance and on ISP cost savings.

    Data: The data set is one of the impressive things about this paper: 100K torrents, of which 40K are active. Demographics from 3.9M concurrent users and 21M total users over a day, across 11K ISPs. Speed-test results from Ookla and iPlane.

  • Home is where the (fast) Internet is: Flat-rate compatible incentives for reducing peak load

    Parminder Chhabra, Nikolaos Laoutaris and Pablo Rodriguez

    This paper looks at a model of reducing peak-rate load by incentivising users to move from peak rate slots to off-peak time periods. It has its roots in their HotNets 2008 paper “Good things come to those who (can) wait”. (Users are granted bandwidth in the off-peak for good behaviour in the on-peak.)

  • On the predictability of large transfer TCP throughput

    Qi He, Constantine Dovrolis and Mostafa Ammar

    This paper looks at ways of predicting the TCP throughput of a connection. The assumption is that some information is available about the connection. A comparison is made between “formula based” (FB) prediction, that is using round-trip time and loss versus time series analysis prediction (referred to here as history based (HB)), that is using previous measurements on the same connection. Both approaches require some measurements from the connection already.

  • Outsourcing the Routing Control Logic: Better Internet Routing Based on SDN Principles

    Vasileios Kotronis, Xenofontas Dimitropoulos and Bernhard Ager

    The claim is for an outsourced control plane for routing which is backward compatible with BGP. The idea is that any “significant sized” network outsources routing to a contractor specialising in this. The outsourcing party exports the following to the contractor:

    • Policies, either directly or derived from SLAs.

    • Topologies and measurement data (jitter, load etc)

    • eBGP sessions – connections to other networks

  • Improving content delivery using provider-aided distance information

    Ingmar Poese, Benjamin Frank, Bernhard Ager, Georgios Smaragdakis and Anja Feldmann

    This paper looks at CDN networks and, in particular, suggests the Provider-aided Distance Information System (PaDIS), which is a mechanism to rank client-host pairs based upon information such as RTT, bandwidth or number of hops. Headline figure: 70% of HTTP traffic from a major European ISP can be accessed via multiple different locations. “Hyper giants” are defined as the large content providers, such as Google, Yahoo and CDN providers, which effectively build their own networks and have content in multiple places.

  • The power of prediction -- Cloud bandwidth and cost reduction

    Eyal Zohar, Israel Cidon and Osnat Mokryn

    This paper deals with reducing costs for cloud computing users. Cloud customers use “Traffic Redundancy Elimination” (TRE) to reduce bandwidth costs: redundant data chunks are detected and removed – cloud providers will not implement middleboxes for this as they have no incentive. The paper gives a TRE solution which does not require the server to maintain client state. The system, known as PACK (“Predictive ACKnowledgements”), is receiver driven.

  • Towards predictable datacenter networks

    Hitesh Ballani, Paolo Costa, Thomas Karagiannis and Ant Rowstron

    This paper looks at the issue of reducing variability in performance in data centre networks. Variable network performance can lead to unreliable application performance in networked applications – this can be a particular problem for cloud apps. Virtual networks are proposed as a solution to isolate the “tenant” performance from the physical network infrastructure. The system presented is known as Oktopus. The system provides a tradeoff between guarantees to tenants, costs to tenants and profits to providers by mapping a virtual network to the physical network.