Networking Papers – SDN

This section contains summaries of papers about software defined networks. The following papers are summarised so far:

Content Based Traffic Engineering in Software Defined Information Centric Networks

Abhishek Chanda, Cedric Westphal and Dipankar Raychaudhuri

Full paper available from arXiv (I cannot find details of where the paper was published) http://arxiv.org/abs/1301.7517

This paper describes Information Centric Networks (ICN) over Software Defined Networks. The applications are:

  1. Traffic engineering in ICN

  2. Content based firewall

  3. Network wide cache management

Per-flow granularity is not sufficient for content-based routing, but some routing decisions can be made based on content length. Meta-data is extracted from ICN interest and data packets. This can be done via:

  1. Network layer – extraction of content length, with flows classified into mouse/elephant.

  2. Application layer – reading HTTP headers gives the content length. Knowing only the length still allows the traffic engineering applications described above, albeit based on length alone.

The engineering problem described involves working out where to place a flow of length F given the background traffic and the capacities of all links (a toy version is sketched below).
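To make this concrete, here is a minimal sketch (my own illustration, not the paper's algorithm) of length-based classification and greedy flow placement: the flow is classified as mouse or elephant from its content length and then placed on the candidate path with the most spare capacity. The threshold, data structures and greedy rule are all assumptions.

    # Toy illustration of length-aware classification and flow placement.
    # The threshold and the greedy "most spare capacity" rule are my own
    # assumptions, not the scheme from the paper.
    ELEPHANT_THRESHOLD = 1_000_000  # bytes; assumed mouse/elephant cut-off

    def classify(content_length):
        """Classify a flow as 'mouse' or 'elephant' from its content length."""
        return "elephant" if content_length >= ELEPHANT_THRESHOLD else "mouse"

    def place_flow(content_length, paths, background_load):
        """Place a flow of length F on the candidate path with most spare capacity.

        paths: dict of path name -> list of (link, capacity) pairs.
        background_load: dict of link -> current load on that link.
        """
        def spare(path):
            return min(cap - background_load.get(link, 0) for link, cap in path)

        best = max(paths, key=lambda name: spare(paths[name]))
        return classify(content_length), best

    # Example: two candidate paths, the second more lightly loaded.
    paths = {"p1": [("a-b", 10), ("b-c", 10)], "p2": [("a-d", 10), ("d-c", 10)]}
    load = {"a-b": 8, "b-c": 2, "a-d": 1, "d-c": 3}
    print(place_flow(5_000_000, paths, load))  # ('elephant', 'p2')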


The Controller Placement Problem – HotSDN 2012

Brandon Heller (Stanford), Rob Sherwood (Bigswitch), Nick McKeown (Stanford)

Full paper at author website http://www.stanford.edu/~brandonh/papers/hot21-heller.pdf

Question: Given SDN topology,

  1. How many controllers are needed and

  2. where do they go?

Problems arise especially in WANs with long propagation delays – this affects convergence and availability and informs the decision as to whether control is “real time” or pushed out to the forwarding elements.

Motivating examples:

  • Internet2, placing SDN controllers in existing network

  • Aster*x distributed load balancer

  • FlowVisor centralised network slicing tool

Placement metrics:

  • Average-case latency between each node and the nearest of the k controllers (the minimum k-median problem)

  • Worst-case latency from any node to its nearest of the k controllers (the minimum k-centre problem)

  • Maximise the number of nodes within a latency bound (the maximum cover problem)

Exhaustive search is only feasible for small networks since the problem is “exponential in k” (a brute-force sketch follows); the authors refer to the literature for algorithms suited to larger instances.
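As a concrete illustration of why exhaustive search is exponential in k, the sketch below (my own, not the paper's code) enumerates all k-subsets of nodes and evaluates the average-case (k-median) and worst-case (k-centre) metrics from a precomputed all-pairs latency matrix.

    # Brute-force controller placement: evaluate every k-subset of nodes.
    # Latencies are assumed given as an all-pairs shortest-path matrix.
    from itertools import combinations

    def evaluate(nodes, latency, placement):
        """(average, worst-case) latency from each node to its nearest controller."""
        nearest = [min(latency[v][c] for c in placement) for v in nodes]
        return sum(nearest) / len(nearest), max(nearest)

    def best_placement(nodes, latency, k, metric="average"):
        """Exhaustive search; metric 'average' ~ k-median, 'worst' ~ k-centre."""
        index = 0 if metric == "average" else 1
        return min(combinations(nodes, k),
                   key=lambda p: evaluate(nodes, latency, p)[index])

    # Toy 4-node line topology with hop-count latencies.
    nodes = [0, 1, 2, 3]
    latency = [[abs(i - j) for j in nodes] for i in nodes]
    print(best_placement(nodes, latency, k=1, metric="worst"))    # (1,)
    print(best_placement(nodes, latency, k=2, metric="average"))  # (0, 2)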

Most results are shown initially for the Internet2 topology. Trade-offs exist between the metrics – e.g. optimising the worst case trades off against the average case.

Figure 3 shows results for “random” placement as a ratio to the optimum – it is not clear whether the random results are a single instantiation or an average over many.

Later results draw more widely from the Topology Zoo.

Discussion points include:

  • Optimising for robustness (distributed versus centralised control reliability)

  • State distribution problem (more than one controller must share state).


Logically centralized?: state distribution trade-offs in software defined networks – HotSDN 2012

Dan Levin (TU Berlin), Andreas Wundsam (UC Berkeley) , Brandon Heller (Stanford), Nikhil Handigol (Stanford) and Anja Feldmann (TU Berlin)

Full paper at HotSDN web site http://conferences.sigcomm.org/sigcomm/2012/paper/hotsdn/p1.pdf

The assumption behind the paper is that the control plane is physically decentralised but logically centralised. Physically centralised control is dismissed as impossible due to:

  1. responsiveness

  2. reliability

  3. scalability

Controller component choices:

  • Strongly consistent – controller components always operate on the same world view. Imposes delay and overhead.

  • Eventually consistent – controller components incorporate information as it becomes available but may make decisions on different world views.

Problem formulation – physical layer P contains FIBs. State management layer S contains NIB (Network Information Base) – each controller in S has its own view of NIB.

Tradeoffs:

  1. Uncoordinated changes could give routing loops and sub-optimal balancing; coordinated changes mean more coordination messages and slower responsiveness.

  2. Application logic is simpler if it is unaware of the potential “staleness” of information, but an application that is aware of such inconsistency may make better decisions.

The example application is a load balancer, of which two types are considered:

  1. Link Balance Controller (LBC) – a global network view of paths and utilisations is presented to the controllers. Simple software assigns new flows to the path with the lowest maximum link utilisation (sketched after this list).

  2. Separate State Link Balancer Controller (SSLBC) – fresh intra-domain network state is kept separate from inter-domain information. The logic ensures convergence of the load balancing: only new flows, not existing flows, are shifted to minimise the maximum link utilisation.
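A minimal sketch of the LBC decision rule above (my own illustration; the data structures and names are assumed, and the possibly stale utilisation view is just a dict):

    # LBC rule: assign each new flow to the candidate path whose busiest
    # link is least utilised, based on the controller's (possibly stale) view.
    def max_link_utilisation(path, utilisation):
        """Utilisation of the most heavily used link on a path."""
        return max(utilisation[link] for link in path)

    def assign_flow(candidate_paths, utilisation):
        """Pick the path with the lowest maximum link utilisation."""
        return min(candidate_paths,
                   key=lambda p: max_link_utilisation(p, utilisation))

    utilisation = {"a-b": 0.7, "b-c": 0.2, "a-d": 0.4, "d-c": 0.5}
    paths = [["a-b", "b-c"], ["a-d", "d-c"]]
    print(assign_flow(paths, utilisation))  # ['a-d', 'd-c']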

The simulation is released as an open-source tool. Initial experiments use a simple topology with two domains and two controllers. The metric measured is the RMSE of the maximum link utilisation over all paths – 0 if all paths have the same maximum link utilisation. Flow arrivals are governed by a sine function, with different workloads used to cover different utilisations. The most “realistic” workload has exponential flow inter-arrival times, with the rate governed by the sine function, and Weibull flow durations. The simulation proceeds in timesteps (64 timesteps to one sine-wave cycle) and the “staleness” of controller information is varied. LBC performance decreases as information becomes more stale, but SSLBC suffers less (a sketch of this style of workload and metric follows).
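The sketch below illustrates the style of workload and metric described above; the parameter values are my own assumptions, not those from the paper.

    # Illustrative workload generator and balance metric; parameters assumed.
    import math
    import random

    def arrival_rate(t, base=10.0, amplitude=5.0, period=64):
        """Flow arrival rate at timestep t, following a sine wave
        (64 timesteps per cycle, as in the paper's simulation)."""
        return base + amplitude * math.sin(2 * math.pi * t / period)

    def next_interarrival(t):
        """Exponential inter-arrival time with the current sine-driven rate."""
        return random.expovariate(arrival_rate(t))

    def flow_duration(scale=5.0, shape=1.5):
        """Weibull-distributed flow duration (scale and shape assumed)."""
        return random.weibullvariate(scale, shape)

    def rmse_of_max_utilisation(max_util_per_path):
        """0 when every path has the same maximum link utilisation."""
        mean = sum(max_util_per_path) / len(max_util_per_path)
        return math.sqrt(sum((u - mean) ** 2 for u in max_util_per_path)
                         / len(max_util_per_path))

    print(rmse_of_max_utilisation([0.6, 0.6]))            # 0.0 (balanced)
    print(round(rmse_of_max_utilisation([0.8, 0.4]), 2))  # 0.2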

Key tradeoffs identified are:

  1. Staleness versus optimality – fresher information gives decisions closer to optimal but requires more resources to keep up to date.

  2. Application complexity versus robustness to inconsistency – more complex application logic is needed to cope with inconsistent state.


OpenRadio: a programmable wireless dataplane – HotSDN 2012

Manu Bansal, Jeffrey Mehlman, Sachin Katti and Philip Levis (Stanford)

Full Paper at HotSDN website http://conferences.sigcomm.org/sigcomm/2012/paper/hotsdn/p109.pdf

OpenRadio is a design for a programmable wireless dataplane. Wireless protocols evolve quickly and the wireless infrastructure must support this: protocol changes are continuous, yet many base stations are already deployed. OpenRadio exposes an interface to program the PHY and MAC layers. Key contributions:

  1. Decouple protocol definition from hardware

  2. An abstraction layer for protocols that decouples processing from decision logic

Design goals:

  • 20 MHz processing for protocols of OFDM complexity

  • 25 μs response time for ACKs

Multicore DSP architectures will be used to achieve the necessary processing speed.
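Purely as an illustration of the processing/decision split mentioned in the contributions above, the sketch below separates placeholder signal-processing pipelines from the rules that decide which pipeline handles a frame. The block names and rule form are invented; this is not OpenRadio's actual programming interface.

    # Invented illustration of a processing/decision split; not OpenRadio's API.
    # Processing plane: reusable pipelines (placeholders for real DSP blocks).
    def decode_frame(samples):
        # stands in for an FFT / demap / decode pipeline
        return {"type": "data", "dest": "aa:bb:cc:dd:ee:ff", "payload": samples}

    def build_ack(frame):
        # stands in for an ACK-construction pipeline
        return {"type": "ack", "dest": frame["dest"]}

    # Decision plane: rules choosing which pipeline runs on which frames.
    RULES = [
        (lambda f: f["type"] == "data", build_ack),  # data frame -> send an ACK
    ]

    def handle(samples):
        frame = decode_frame(samples)
        for predicate, action in RULES:
            if predicate(frame):
                return action(frame)
        return None

    print(handle([0.1, -0.2, 0.3]))  # {'type': 'ack', 'dest': 'aa:bb:cc:dd:ee:ff'}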


Outsourcing the Routing Control Logic: Better Internet Routing Based on SDN Principles – HotNets 2012

Vasileios Kotronis and Xenofontas Dimitropoulos and Bernhard Ager (ETH Zurich)

Full paper link at HotNets website http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final78.pdf

The claim is for an outsourced control plane for routing which is backward compatible with BGP. The idea is that networks of “significant size” outsource their routing to a contractor specialising in it. The outsourcing party exports the following to the contractor (a sketch of this exported state follows the list):

  • Policies, either directly or derived from SLAs.

  • Topologies and measurement data (jitter, load etc)

  • eBGP sessions – connections to other networks
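A minimal sketch of the kind of state an outsourcing client might hand over, based on the list above; the field names and structure are my own assumptions since the paper does not define a concrete interface.

    # Assumed shape of the exported state; purely illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class ExportedState:
        policies: dict = field(default_factory=dict)       # routing policies / SLAs
        topology: dict = field(default_factory=dict)       # router-level graph
        measurements: dict = field(default_factory=dict)   # jitter, load, loss, ...
        ebgp_sessions: list = field(default_factory=list)  # sessions to other networks

    client = ExportedState(
        policies={"prefer_exit": "AS65001", "avoid": ["AS65010"]},
        topology={"r1": ["r2", "r3"], "r2": ["r1"], "r3": ["r1"]},
        measurements={"r1-r2": {"load": 0.4, "jitter_ms": 2.0}},
        ebgp_sessions=[{"peer_as": 65001, "router": "r1"}],
    )
    print(client.ebgp_sessions[0]["peer_as"])  # 65001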

Centralisation gives advantages as the controller can see traffic from many ASes (referred to as a cluster) and hence can:

  1. Optimise interdomain routing (this benefits multiple ISPs, even those not part of the cluster).

  2. Evolve interdomain routing – quickly adopt and try new interdomain policies.

  3. Collaborative security and troubleshooting – helps track source of failure within cluster.

Legacy API allows interaction with existing BGP and eBGP. Legacy control includes direct access to router FIB and RIB.

Economically, outsourcing reduces OPEX. The contractor can detect (and mediate?) tussles between clients.

Problems:

  • Failover – what if the contractor cannot contact the client?

  • Interfaces – not designed in this paper

  • Security/privacy

  • Evolved BGP – how can routing progress once centralised?

  • Market – what is the structure of the new market?

  • Legal – what legal problems arise?


Rethinking End-to-End Congestion Control in Software-Defined Networks – HotNets 2012

Monia Ghobadi, Soheil Hassas Yeganeh, Yashar Ganjali

Full paper link at author's homepage http://www.cs.utoronto.ca/~monia/hotnets12-final85.pdf

The central suggestion is that centrally controlled SDN networks are in a position to choose which version of TCP will work best. OpenTCP tunes TCP for the current traffic and network conditions, since different TCP flavours perform better at different times or in different conditions, e.g. data centre TCP (DCTCP).

Congestion Update Epistles are sent to end hosts summarising network conditions. The Congestion Control Agent at each end host is a kernel module which selects an appropriate TCP version and parameters. Congestion control policies allow the network operator to specify the conditions for tuning TCP to the network (a toy policy is sketched below). A time step T slower than the RTT is needed for updates to the policy. Stability conditions based on previous work (Pang et al.) are given.
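As a toy illustration of the policy idea (the thresholds, names and choice of variants are my own assumptions, not OpenTCP's policy language), a congestion control agent might map the controller's summary of conditions to a TCP variant and parameters:

    # Toy policy mapping network conditions to TCP settings; assumptions only.
    def choose_tcp(conditions):
        """Pick TCP settings from a summary of network conditions
        (as might arrive in a Congestion Update Epistle)."""
        if conditions["avg_link_utilisation"] < 0.5 and conditions["loss_rate"] < 1e-3:
            # Lightly loaded network: be aggressive at flow start.
            return {"variant": "dctcp", "init_cwnd": 16, "rto_ms": 10}
        # Congested or lossy network: conservative defaults.
        return {"variant": "cubic", "init_cwnd": 4, "rto_ms": 200}

    print(choose_tcp({"avg_link_utilisation": 0.3, "loss_rate": 1e-4}))
    print(choose_tcp({"avg_link_utilisation": 0.9, "loss_rate": 1e-2}))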

OpenTCP was implemented on half the hosts in a 4,000-host data centre (SciNet), giving a 59% reduction in flow completion times simply by adapting init_cwnd and RTO. The network is characterised by link utilisation < 50% around 80% of the time (claimed to be typical for a data centre), with high utilisation and packet loss the rest of the time. Overheads are measured and found to be low in terms of both CPU and network utilisation.


Software-defined internet architecture: decoupling architecture from infrastructure – HotNets 2012

Barath Raghavan (ICSI), Martin Casado (Nicira), Teemu Koponen (Nicira), Sylvia Ratnasamy (UC Berkeley), Ali Ghodsi (UC Berkeley) and Scott Shenker (ICSI, UC Berkeley)

Full paper http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final76.pdf at HotNets website.

The paper advocates decoupling network architecture from network infrastructure – a Software Defined Internet Architecture (SDIA). The problem is to allow new architectures without falling back on a “clean slate” deployment.

Definitions:

  • Architecture: “the current IP protocol or any… convention on how packets are handled”.

  • Infrastructure: “physical equipment… (routers, switches, fibre, cables etc)”

Solutions like OpenFlow do not work as they are limited in what they can match in packets.

Note deviations from “standard internet design”:

  • MPLS – routers forward on labels rather than looking at the IP header.

  • SDN – separates the control plane from the data plane.

  • Middleboxes – as numerous as routers in enterprise networks.

  • Software forwarding – normal in middleboxes and hypervisors.

Claim: these deviations, applied systematically, decouple architecture from infrastructure. The design has a core using an internal addressing scheme and an edge which uses software forwarding to map from internal to external addressing.

Connectivity from host X in domain A to host Y in domain B involves the following tasks:

  • Interdomain – carry packets from A to B possibly between multiple domains.

  • Intradomain transit – carry packets from domain ingress to domain egress.

  • Intradomain delivery – carry packet from X to edge of A and from edge of B to Y (also between two hosts in same domain).

The proposal is that interdomain routing is carried out using domain IDs without reference to the addresses of host X or Y. Interdomain changes then only involve changing software in the edge routers (though this still seems a big ask!). Software forwarding allows quite general matching against addresses and, if properly written, could allow any address structure using a general “match” function (a toy sketch follows). A change from (say) IPv4 to IPv6 could then be done domain by domain (although this might require that domain to buy new infrastructure).
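A toy sketch of the general “match” idea (my own illustration; the address formats, names and mapping tables are assumptions): the software edge maps whatever external address format a packet carries onto an internal domain ID, so the core never needs to understand end-host addresses.

    # Invented illustration of edge software forwarding with a general match.
    DOMAIN_OF_PREFIX = {"10.0.": "domain-A", "10.1.": "domain-B"}  # IPv4-style
    DOMAIN_OF_NAME = {"video.example": "domain-B"}                 # ICN-style name

    def match(packet):
        """Map a packet's external address onto the core's internal domain ID."""
        if "dst_ip" in packet:                       # match on an IP destination
            for prefix, domain in DOMAIN_OF_PREFIX.items():
                if packet["dst_ip"].startswith(prefix):
                    return domain
        if "content_name" in packet:                 # or match on a content name
            return DOMAIN_OF_NAME.get(packet["content_name"])
        return None

    print(match({"dst_ip": "10.1.2.3"}))              # domain-B
    print(match({"content_name": "video.example"}))   # domain-B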

Intradomain routing is proposed using the edge/core architecture described above. Only the edge needs to understand interdomain protocols, so the design is modular. The claim is that software forwarding at domain edges is feasible using parallel PCs and (for example) Valiant load balancing. The “Interdomain Service Model” (ISM) is the delivery service agreed between all domains. Deployment of a new ISM requires:

  • Distributed algorithm among edge controllers.

  • Forwarding actions which can be sent to edge routers from edge controllers.

  • Measures to cope with partial deployment

Three examples are given to show how deployment might work:

  1. “Pathlets” – (unicast best effort packet service)

  2. ICN – content routing implemented interdomain supporting any naming scheme.

  3. Middlebox services – references Sherry et al “ Netcalls: End Host Function Calls to Network Traffic Processing Services”

Claim: in SDIA, edge routers are software, allowing flexible protocol deployment. This means that intradomain infrastructure need only be bought to work with new protocols once those protocols achieve wide adoption.



For corrections or queries contact: Richard G. Clegg (richard@richardclegg.org)