Software Defined Networking (SDN) has been proposed as an open standard to specify network services. SDN separates the control plane from the network equipment by defining an open protocol for communication between the control plane and the data plane. This provides an open software platform that facilitates innovation in computer networking.
The networking group explores hardware- as well as software-based solutions to optimize the SDN data plane with respect to latency, throughput, and power efficiency. Our focus is to investigate novel algorithms, data structures, and architectures that exploit state-of-the-art technologies, including heterogeneous multi-processor system-on-chip (MPSoC) architectures, multi/many-core processors, and Field-Programmable Gate Arrays (FPGAs), to realize flexible designs for data plane kernels.
We are exploring novel solutions for SDN data plane kernels (e.g., large-scale IP lookup, multi-field packet classification) to achieve high performance. We also develop new techniques for network virtualization and data aggregation using hybrid trees and virtual engines to achieve high performance on various platforms.
Packet classification is performed by routers to match network packets against a rule set. In SDN, routers are required to perform packet classification against at least 12 header fields and to dynamically update the rule set.
We are looking into a 2-dimensional pipelined architecture for packet classification on FPGA; this architecture achieves high throughput while supporting dynamic updates. In this architecture, modular Processing Elements (PEs) are arranged in a 2-dimensional array. Each PE accesses its designated memory locally and supports prefix match and exact match efficiently. The entire array is both horizontally and vertically pipelined. We exploit striding, clustering, dual-port memory, and power gating techniques to further improve the performance of our architecture. Our architecture sustains a high clock rate even if we scale up (1) the length of each packet header and/or (2) the number of rules in the rule set. The performance of the entire architecture does not depend on rule set features such as the number of unique values in each field. The PEs are also self-reconfigurable; they support dynamic updates of the rule set at run-time with very little throughput degradation. Experimental results show that, for a 1 K 15-tuple rule set, a state-of-the-art FPGA can sustain a throughput of 650 Million Packets Per Second (MPPS) with 1 million updates/second. Compared to TCAM, our architecture demonstrates at least 4-fold energy efficiency while achieving 2-fold throughput.
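The matching logic of the PE array can be sketched in software. In the sketch below (an illustrative model only, not the hardware design), rows of PEs correspond to rules and columns to header fields; each PE performs either a prefix match or an exact match on one field, and a packet matches a rule when every PE in that row matches. The rules shown are hypothetical examples.

```python
# Software model of a 2-D array of Processing Elements (PEs):
# rows = rules, columns = header fields. Each PE does a prefix match
# or an exact match on one field. The hardware pipelines this both
# horizontally and vertically; here we evaluate it sequentially.

def prefix_match(value, prefix, prefix_len, width=32):
    """True if the top `prefix_len` bits of `value` equal those of `prefix`."""
    if prefix_len == 0:                 # zero-length prefix: wildcard
        return True
    shift = width - prefix_len
    return (value >> shift) == (prefix >> shift)

def exact_match(value, pattern):
    return value == pattern

# One rule = one PE per field: ('P', prefix, prefix_len) or ('E', pattern, None).
rules = [
    # rule 0: src IP in 10.0.0.0/8 AND dst port == 80
    [('P', 0x0A000000, 8), ('E', 80, None)],
    # rule 1: any src IP AND dst port == 443
    [('P', 0, 0), ('E', 443, None)],
]

def classify(packet):
    """Return the index of the highest-priority matching rule, or None."""
    for rule_id, rule in enumerate(rules):              # rows of the array
        matched = True
        for (kind, a, b), field in zip(rule, packet):   # columns of the array
            if kind == 'P':
                matched = matched and prefix_match(field, a, b)
            else:
                matched = matched and exact_match(field, a)
        if matched:
            return rule_id
    return None

print(classify([0x0A010203, 80]))   # 10.1.2.3, port 80  -> rule 0
print(classify([0xC0A80001, 443]))  # 192.168.0.1, port 443 -> rule 1
```

Because each PE inspects only its own field against its own locally stored value, the per-PE work is independent of rule set features such as the number of unique values per field, which is what lets the hardware version scale the array in either dimension.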
Internet traffic classification is an important network management task that requires high throughput. Virtualization is a technique for sharing the same hardware among multiple users. We are working on a high-throughput, virtualized architecture for online traffic classification. To exploit massive parallelism, we convert a decision tree into a compact rule set table. Further, we employ modular processing elements and map the table onto a 2-dimensional pipelined architecture. To support hardware virtualization, we designed a novel dynamic update mechanism, which requires only small resource overhead and has little impact on the overall throughput. We implemented our online traffic classification engine on a state-of-the-art FPGA. Post place-and-route results show that our classification engine achieves 5-fold throughput compared with state-of-the-art dynamically updatable online traffic classification engines.
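The decision-tree-to-rule-table conversion can be illustrated in software. The sketch below (hypothetical tree, thresholds, and class labels; the actual encoding in our design may differ) flattens each root-to-leaf path into one rule of per-field ranges, producing a flat table that maps naturally onto an array of modular processing elements:

```python
# Sketch: flatten a decision tree into a compact rule set table.
# Each root-to-leaf path becomes one rule: a half-open range (lo, hi]
# per field plus the leaf's class label.

INF = float('inf')
NUM_FIELDS = 2  # hypothetical: field 0 = packet size, field 1 = flow duration

# A node is ('leaf', label) or ('split', field, threshold, left, right);
# a split sends values <= threshold left and values > threshold right.
tree = ('split', 0, 1500,
            ('leaf', 'VoIP'),
            ('split', 1, 10,
                ('leaf', 'Web'),
                ('leaf', 'Bulk')))

def tree_to_rules(node, ranges=None):
    if ranges is None:
        ranges = [(-INF, INF)] * NUM_FIELDS
    if node[0] == 'leaf':
        return [(list(ranges), node[1])]
    _, f, thr, left, right = node
    lo, hi = ranges[f]
    lrng, rrng = list(ranges), list(ranges)
    lrng[f] = (lo, min(hi, thr))   # left child narrows the upper bound
    rrng[f] = (max(lo, thr), hi)   # right child narrows the lower bound
    return tree_to_rules(left, lrng) + tree_to_rules(right, rrng)

def lookup(features, rules):
    """Match a feature vector against the flat table (the PE array
    evaluates all rules in parallel; here we scan in priority order)."""
    for ranges, label in rules:
        if all(lo < v <= hi for v, (lo, hi) in zip(features, ranges)):
            return label
    return None

rules = tree_to_rules(tree)
print(lookup([800, 5], rules))    # small packet -> 'VoIP'
print(lookup([2000, 50], rules))  # large packet, long flow -> 'Bulk'
```

The key property is that the flattened rules are independent of one another, so, unlike a tree traversal, every rule can be evaluated concurrently in a pipelined array.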
A heavy hitter refers to an entity that accounts for more than a specified proportion of the total activity of all entities. In computer networking, an entity can be a flow, a connection, an IP domain, etc. Heavy hitter detection is the basis for many network management and security applications.
We designed an FPGA-based online heavy hitter detector that achieves extremely high throughput in network processing. We use a packet flow as an entity and its bandwidth consumption as the activity measurement. Specifically, we considered the following two problems. First, in each packet stream, detect all the heavy hitters for a given threshold. Second, in each packet stream, report the top K heavy hitters, where K is specified by the user at design time. We implemented our heavy hitter detector on a Virtex-6 FPGA. It sustains 100+ Gbps throughput while supporting various hierarchy sizes, stream sizes, and accuracy requirements.
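The two detection problems can be stated precisely with a short sketch. Below, exact per-flow counting is used for clarity (a hardware detector would use approximate, memory-bounded data structures instead); the stream contents are made-up examples.

```python
# Sketch of the two heavy-hitter problems on a packet stream.
# Each packet is (flow_id, size_in_bytes); the activity measurement
# is a flow's total bandwidth consumption.

import heapq
from collections import defaultdict

def heavy_hitters(packets, threshold):
    """Problem 1: all flows whose byte count exceeds `threshold`
    (a fraction, e.g. 0.25) of the total traffic in the stream."""
    usage, total = defaultdict(int), 0
    for flow, size in packets:
        usage[flow] += size
        total += size
    return {f for f, b in usage.items() if b > threshold * total}

def top_k(packets, k):
    """Problem 2: the K flows with the highest byte counts."""
    usage = defaultdict(int)
    for flow, size in packets:
        usage[flow] += size
    return heapq.nlargest(k, usage, key=usage.get)

stream = [('A', 900), ('B', 50), ('A', 600), ('C', 300), ('B', 150)]
print(heavy_hitters(stream, 0.25))  # flows using > 25% of the 2000 bytes
print(top_k(stream, 2))
```

In the hardware design the per-flow state must fit in on-chip memory and be updated at line rate for every packet, which is why the threshold variant and the fixed-K variant are treated as distinct problems.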
Measuring statistical flow features is the basis of many network management and security applications. Statistics of network flows are also essential inputs to machine learning based traffic classification algorithms.
We have developed a dynamically configurable online statistical flow feature extractor on FPGA, which can compute a set of widely used flow features on-the-fly, such as sum, mean, variance, maximum, and minimum. To meet the requirements of various applications, the window size for feature extraction is dynamically configurable. The post place-and-route evaluation of our architecture on a Virtex-6 FPGA shows that our design can achieve a throughput of 96 Gbps while supporting 64 K concurrent flows.
The following papers may have copyright restrictions. Downloads will have to adhere to these restrictions. They may not be reposted without explicit permission from the copyright holder.