Algorithm-Architecture
Trade-offs in Network Processor Design
Matthias Gries
Ph.D. thesis, ETH Zurich, Switzerland, ETH Diss. No. 14191, May 2001
The increasing use of computer networks for all kinds of information
exchange between autonomous computing resources is associated with a
number of side-effects. In the Internet, where computers all over the
globe are interconnected, the traffic volume grows faster than the
infrastructure improves, leading to congestion of networking routes. In
the application domain of embedded systems, networks can be used to
couple complex sensor systems with a computing core. The provision of
raw bandwidth may not be sufficient in such systems to allow control
with real-time constraints. The underlying requirement in both cases is
a network service with a defined quality, for instance, in terms of
traffic loss ratio and worst-case communication delay. The provision of
suitable communication services, however, requires a noticeable overhead
in terms of computing load. Therefore, application-specific hardware
accelerators - so-called network processors - have been introduced to
speed up, or even enable, the maintenance of certain network services.
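The two service-quality parameters named above can be made concrete with textbook formulas. The following sketch is illustrative only and is not taken from the thesis: the loss ratio is the fraction of offered packets that are dropped, and the delay bound uses the classic network-calculus result that a token-bucket-constrained flow with burst size b, served at a fixed rate R at least equal to its sustained rate, waits at most b/R.

```python
# Illustrative sketch of the two service-quality metrics mentioned above.
# These are standard textbook formulas, not definitions from the thesis.

def traffic_loss_ratio(packets_offered: int, packets_delivered: int) -> float:
    """Fraction of offered packets that were lost (dropped)."""
    if packets_offered == 0:
        return 0.0
    return (packets_offered - packets_delivered) / packets_offered

def worst_case_delay(burst_bytes: float, service_rate_bps: float) -> float:
    """Network-calculus style bound: a flow shaped by a token bucket with
    burst size b bytes, served at a fixed rate R bit/s (R >= sustained
    rate), is delayed at most b*8 / R seconds."""
    return burst_bytes * 8 / service_rate_bps

# Example: 100 drops out of one million packets, and a 1500-byte burst
# served at 10 Mbit/s.
print(traffic_loss_ratio(1_000_000, 999_900))  # -> 0.0001
print(worst_case_delay(1500, 10e6))            # -> 0.0012 (seconds)
```

Such bounds only hold if every processing stage in the router preserves them, which is exactly the task-interaction question the thesis examines.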
The following issues have not yet been addressed:
- Although there are network processors
for high-speed networks, no processor is available that considers the
requirements of the interface between networks of a service provider
and a customer.
- While each individual task of a
network processor is well understood, it is unclear how different
tasks that potentially exhibit interfering properties should cooperate
to preserve the service quality.
These issues are
addressed in this thesis. The major contributions in the research
area of algorithms and architectures for network processors are:
- A service scheme is defined that
addresses the requirements at the interface between the networks of a
service provider and a customer.
- Various combinations of network
processing tasks are explored for this service scheme by exhaustive
simulation. The exploration focuses on the preservation of service
quality parameters.
- The exploration of processing tasks
is combined with an evaluation of suitable building blocks for
architectures of network processors so that the interaction of
algorithm behavior and timing of hardware resources can be examined by
co-simulation of both aspects.
- Due to its impact on the overall
performance, refinements of a particular building block - the memory
controller - are evaluated. The influence of an application-specific
memory controller is explored by extensive simulation of various
benchmarks, dynamic RAMs, and memory access schemes incorporated into a
mature CPU simulator.
Keywords:
network processors, packet processing (queuing, scheduling),
system-level design, memory controller/DRAM functionality
- M. Gries: Algorithm-Architecture
Trade-offs in Network Processor Design,
Ph.D. thesis, Computer Engineering and Networks Laboratory (TIK), ETH
Zurich, Switzerland,
ETH Diss. No. 14191, examination date: May 21, 2001.
Shaker Verlag, Aachen, Germany, ISBN 3-8265-9044-9
pdf: diss_gries.pdf, pdf with marks and links: diss_gries_hyper.pdf
Keywords:
network processing, policing, queue management, scheduling, design space exploration, system level design, triple play, customer premises equipment, quality of service
- M. Gries: Joint Evaluation of Architecture and Behavior for Network Processing Systems,
unpublished technical report, Computer Engineering and Networks Laboratory (TIK), ETH Zurich, Switzerland, June 2006
pdf: TR-network_processing_evaluation-mgries-2006.pdf (put online July 2018; the report summarizes and updates the case study and applied design flow described in my PhD thesis)