Ruben Cuevas -- Univ. Carlos III de Madrid, Nikolaos Laoutaris, Xiaoyuan Yang, Georgos Siganos and Pablo Rodriguez
This paper looks at P2P traffic over BitTorrent, drawing on a large
database of torrents. It considers the effects of localising BitTorrent
traffic on performance and ISP cost savings.
Data: the data set is one of the impressive things about this paper:
100K torrents, of which 40K are active; demographics from 3.9M
concurrent users and 21M total users over a day, spanning 11K ISPs;
plus speed-test results from Ookla and iPlane.
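As a toy illustration of the kind of locality analysis such a dataset enables
(not the paper's actual methodology), the sketch below takes hypothetical
per-torrent peer counts per ISP and computes, for each ISP, an upper bound on
the fraction of its peers that have at least one other local peer for the same
torrent, and so could in principle be served without crossing the ISP boundary.

    from collections import defaultdict

    # Hypothetical demographics: torrent id -> {ISP name: concurrent peers}.
    # Real input would come from the tracker-crawl dataset described above.
    demographics = {
        "t1": {"ISP-A": 120, "ISP-B": 3, "ISP-C": 1},
        "t2": {"ISP-A": 1, "ISP-B": 40},
        "t3": {"ISP-C": 2},
    }

    def locality_upper_bound(demographics):
        """Per-ISP upper bound on locality: fraction of peers with at least
        one other peer of the same torrent inside the same ISP."""
        local, total = defaultdict(int), defaultdict(int)
        for isp_counts in demographics.values():
            for isp, peers in isp_counts.items():
                total[isp] += peers
                if peers >= 2:          # at least one local companion exists
                    local[isp] += peers
        return {isp: local[isp] / total[isp] for isp in total}

    if __name__ == "__main__":
        for isp, frac in sorted(locality_upper_bound(demographics).items()):
            print(f"{isp}: at most {frac:.0%} of peers could be served locally")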
Parminder Chhabra, Nikolaos Laoutaris and Pablo Rodriguez
This paper looks at a model for reducing peak-hour load by incentivising users
to shift traffic from peak slots to off-peak periods. It has its roots in
their HotNets 2008 paper “Good things come to those who (can) wait”.
(Users are granted extra bandwidth in the off-peak for good behaviour in the on-peak.)
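A minimal sketch of the kind of credit scheme hinted at above (hypothetical
peak window and reward ratio, not the authors' actual mechanism): traffic a
user voluntarily defers out of the peak period earns off-peak bandwidth credit.

    PEAK_HOURS = set(range(18, 23))          # assumed peak window, 18:00-22:59
    CREDIT_PER_DEFERRED_MB = 1.5             # hypothetical reward ratio

    class UserAccount:
        """Tracks off-peak credit earned by deferring peak-hour traffic."""
        def __init__(self):
            self.offpeak_credit_mb = 0.0

        def defer(self, mb, hour):
            """User agrees to postpone `mb` megabytes requested during `hour`."""
            if hour in PEAK_HOURS:
                self.offpeak_credit_mb += mb * CREDIT_PER_DEFERRED_MB

        def allowance(self, base_mb, hour):
            """Off-peak allowance: base quota plus any earned credit."""
            if hour in PEAK_HOURS:
                return base_mb
            bonus, self.offpeak_credit_mb = self.offpeak_credit_mb, 0.0
            return base_mb + bonus

    if __name__ == "__main__":
        u = UserAccount()
        u.defer(200, hour=20)                       # postpone 200 MB at 8 pm
        print(u.allowance(base_mb=500, hour=2))     # 500 + 300 = 800 MB off-peak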
This paper looks at ways of predicting the TCP throughput of a connection,
assuming that some information about the connection is already available.
A comparison is made between “formula-based” (FB) prediction, which uses
round-trip time and loss rate, and time-series prediction from previous
measurements on the same connection (referred to here as history-based
(HB)). Both approaches require some measurements from the connection
already.
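A rough sketch of the two predictor families (my own simplification, not the
paper's exact models): the formula-based predictor uses the standard
square-root TCP throughput formula from RTT and loss rate, while the
history-based one is just an exponentially weighted moving average over
earlier throughput samples on the same path.

    import math

    def fb_predict(mss_bytes, rtt_s, loss_rate):
        """Formula-based (FB) prediction using the classic Mathis et al.
        approximation: throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(p),
        in bytes per second."""
        return (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_rate)

    def hb_predict(samples, alpha=0.3):
        """History-based (HB) prediction: EWMA over previous throughput samples."""
        estimate = samples[0]
        for s in samples[1:]:
            estimate = alpha * s + (1 - alpha) * estimate
        return estimate

    if __name__ == "__main__":
        print(fb_predict(mss_bytes=1460, rtt_s=0.1, loss_rate=0.01) / 1e6, "MB/s (FB)")
        print(hb_predict([5.2e6, 4.8e6, 5.5e6]) / 1e6, "MB/s (HB)")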
Vasileios Kotronis, Xenofontas Dimitropoulos and Bernhard Ager
The claim is for an outsourced routing control plane which is backwards
compatible with BGP. The idea is that networks of “significant size”
outsource routing to a contractor specialising in this.
The outsourcing party exports the following to the contractor:
Policies, either directly or derived from SLAs.
Topologies and measurement data (jitter, load, etc.).
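A hypothetical sketch of what such an export bundle might look like (the
field names and schema are my own; the paper does not define a concrete
format): policies, topology and link measurements packaged for the contractor.

    from dataclasses import dataclass, field

    @dataclass
    class LinkMeasurement:
        """Per-link measurement data the outsourcing network would share."""
        link: tuple          # (router_a, router_b)
        jitter_ms: float
        load_fraction: float

    @dataclass
    class RoutingExport:
        """Bundle exported by the outsourcing network to the routing contractor."""
        policies: list = field(default_factory=list)      # e.g. derived from SLAs
        topology: list = field(default_factory=list)      # list of (router_a, router_b) links
        measurements: list = field(default_factory=list)  # LinkMeasurement entries

    export = RoutingExport(
        policies=["prefer peer X for prefix 10.0.0.0/8", "avoid transit Y for VoIP"],
        topology=[("r1", "r2"), ("r2", "r3")],
        measurements=[LinkMeasurement(("r1", "r2"), jitter_ms=2.1, load_fraction=0.6)],
    )
    print(export)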
Ingmar Poese, Benjamin Frank, Bernhard Ager, Georgios Smaragdakis and Anja Feldmann
This paper looks at CDNs and, in particular, proposes the Provider-aided
Distance Information System (PaDIS), a mechanism to rank client-host
pairs based upon information such as RTT, bandwidth or number of hops.
Headline figure: 70% of HTTP traffic from a major European ISP can be
served from multiple different locations. “Hyper giants” are defined as
the large content providers, such as Google, Yahoo and the CDN providers,
which effectively build their own networks and hold content in multiple
places.
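A toy version of the kind of ranking PaDIS performs (hypothetical weights and
metrics, not the system's actual ranking function): candidate server locations
for a client are scored from RTT, available bandwidth and hop count, and
returned best first.

    def rank_candidates(candidates, w_rtt=2.0, w_bw=1.0, w_hops=0.5):
        """Rank candidate hosts for a client: lower RTT and hop count and
        higher available bandwidth are better. `candidates` maps host ->
        metrics dict. Weights are hypothetical tuning knobs."""
        def score(metrics):
            return (w_bw * metrics["bandwidth_mbps"]
                    - w_rtt * metrics["rtt_ms"]
                    - w_hops * metrics["hops"])
        return sorted(candidates, key=lambda host: score(candidates[host]), reverse=True)

    candidates = {
        "cache-local":  {"rtt_ms": 5,  "bandwidth_mbps": 900, "hops": 2},
        "cache-remote": {"rtt_ms": 40, "bandwidth_mbps": 950, "hops": 9},
    }
    print(rank_candidates(candidates))   # ['cache-local', 'cache-remote']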
This paper deals with reducing costs for cloud-computing users. Cloud
customers use “Traffic Redundancy Elimination” (TRE) to reduce bandwidth
costs: redundant data chunks are detected and removed. Cloud providers
will not deploy middleboxes for this as they have no incentive to.
The paper gives a TRE solution which does not require the server to
maintain per-client state. The system, known as PACK (“Predictive
ACKnowledgements”), is receiver driven.
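A very loose sketch of receiver-driven redundancy elimination in this style
(simplified fixed-size chunks and plain SHA-1 signatures; the real system uses
content-defined chunking and chains of predictions): the receiver remembers
signatures of chunks it already holds, predicts that a future chunk will match
a stored one, and the sender only confirms the match rather than resending
the data.

    import hashlib

    CHUNK = 8 * 1024   # simplified fixed-size chunks for illustration

    def signature(chunk):
        return hashlib.sha1(chunk).hexdigest()

    class Receiver:
        """Keeps signatures of chunks already received and issues predictions
        so the sender can acknowledge them instead of retransmitting bytes."""
        def __init__(self):
            self.store = {}                      # signature -> chunk

        def receive(self, chunk):
            self.store[signature(chunk)] = chunk

        def predict(self, expected_chunk):
            """Send a prediction (here, just the signature) for the next chunk."""
            return signature(expected_chunk)

    class Sender:
        def transmit(self, chunk, predicted_sig=None):
            """If the receiver's prediction matches, send only a confirmation."""
            if predicted_sig == signature(chunk):
                return ("PRED-ACK", predicted_sig)   # redundant bytes eliminated
            return ("DATA", chunk)

    if __name__ == "__main__":
        rx, tx = Receiver(), Sender()
        first = b"A" * CHUNK
        rx.receive(first)                            # chunk seen once already
        msg = tx.transmit(b"A" * CHUNK, rx.predict(first))
        print(msg[0])                                # PRED-ACK: no payload resent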
Hitesh Ballani, Paolo Costa, Thomas Karagiannis and Ant Rowstron
This paper looks at the issue of reducing performance variability in
data centre networks. Variable network performance can lead to
unpredictable application performance, which is a particular problem
for cloud applications. Virtual networks are proposed as a way to
isolate “tenant” performance from the physical network infrastructure.
The system presented, known as Oktopus, provides a tradeoff between
guarantees to tenants, costs to tenants and profits to providers by
mapping virtual networks onto the physical network.
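The paper's virtual cluster abstraction gives a tenant N VMs, each connected
to a virtual switch by a link of bandwidth B; a physical link that splits the
tenant's VMs into groups of m and N-m only needs min(m, N-m) * B reserved for
that tenant, since cross-link traffic is limited by the smaller side. A
minimal sketch of that admission check (single-link view, my own simplification
of the allocation algorithm):

    def bandwidth_needed(n_vms_total, n_vms_in_subtree, per_vm_bandwidth):
        """Bandwidth a virtual cluster <N, B> needs on a physical link that
        separates `n_vms_in_subtree` of the tenant's VMs from the rest:
        min(m, N - m) * B."""
        m = n_vms_in_subtree
        return min(m, n_vms_total - m) * per_vm_bandwidth

    def admit(n_vms_total, n_vms_in_subtree, per_vm_bandwidth, residual_capacity):
        """Admit the placement only if the link has enough spare capacity."""
        return bandwidth_needed(n_vms_total, n_vms_in_subtree,
                                per_vm_bandwidth) <= residual_capacity

    # Example: a <N=20, B=100 Mbps> virtual cluster with 6 VMs below this link
    print(bandwidth_needed(20, 6, 100))                    # 600 Mbps reserved
    print(admit(20, 6, 100, residual_capacity=1000))       # True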
Monia Ghobadi, Soheil Hassas Yeganeh and Yashar Ganjali
The central suggestion is that centrally controlled SDN networks are
in a position to choose which version of TCP will work best. OpenTCP
tunes TCP to the prevailing traffic and network conditions, since
different TCP flavours perform better at different times or under
different conditions, e.g. data centre TCP (DCTCP).
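A toy illustration of the idea (my own simplistic policy, not OpenTCP's actual
control logic): a controller with a global view of switch statistics picks a
congestion-control flavour and pushes it to end hosts.

    def choose_tcp_flavour(avg_link_utilisation, ecn_capable, avg_rtt_ms):
        """Toy controller policy: pick a TCP flavour from coarse network state.
        Thresholds and flavour names are illustrative, not from the paper."""
        if ecn_capable and avg_rtt_ms < 1.0:
            return "DCTCP"              # low-RTT, ECN-capable data centre fabric
        if avg_link_utilisation < 0.3:
            return "CUBIC-aggressive"   # hypothetical tuning for a lightly loaded network
        return "CUBIC"

    def push_to_hosts(hosts, flavour):
        """Stand-in for distributing the chosen configuration to end hosts."""
        return {h: flavour for h in hosts}

    if __name__ == "__main__":
        flavour = choose_tcp_flavour(avg_link_utilisation=0.7,
                                     ecn_capable=True, avg_rtt_ms=0.4)
        print(push_to_hosts(["host-1", "host-2"], flavour))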