How communications networks in high-performance data centers are evolving
- Ethernet speeds surge
- Optics and DSP enable power-efficient scaling
- AI and HPC drive dense, low-latency fabrics
Bandwidth and data rates in high-performance data centers have surged from 1–10 GbE server links and 40/100 GbE fabrics a decade ago to 25–100 GbE at the access layer and 400–800 GbE at the core today. This shift is reshaping everything from network interface cards (NICs) and switch ASICs to optics, connectors and packaging as designers aim to move more data with less power, space and cost per bit.
Server connectivity
Around 2015, many enterprise data centers still connected most servers at 1 GbE, with 10 GbE reserved for high-traffic applications or used as part of a refresh cycle. Aggregation switches commonly offered 10 GbE downlinks and 40 GbE uplinks, while only a handful of large cloud operators experimented with 100 GbE in their spine layers. That architecture was adequate for virtualized workloads, databases and web front-ends, but it began to creak under the strain of large-scale analytics, real-time streaming and, later, AI training clusters that generated intense east–west traffic inside the facility.
A key step in the evolution was the move from 10 to 25 Gb/s per electrical lane on Ethernet PHYs, enabling a new class of 25 GbE NICs and switch ports that deliver 2.5 times the bandwidth of 10 GbE without radically changing the connector or board-level design. Using four of these 25 Gb/s lanes in parallel created 100 GbE links, which quickly became the preferred uplink speed from top-of-rack (ToR) switches into the leaf or aggregation layer. For operators, the attraction was straightforward: they could deploy 25 GbE to servers and 100 GbE to the fabric with relatively modest increases in power and cost compared with a pure 10/40 GbE design, while dramatically increasing aggregate throughput.
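To make the lane arithmetic concrete, here is a minimal Python sketch (illustrative only, not drawn from any standard) that tabulates how the port speeds discussed in this article decompose into parallel electrical lanes:

```python
# Illustrative mapping of Ethernet port speeds to electrical lane counts.
# These are the lane configurations discussed in this article, not an
# exhaustive list of standardized variants.
ETHERNET_PORTS = {
    # port speed (GbE): (lanes, per-lane rate in Gb/s)
    10:  (1, 10),
    25:  (1, 25),
    50:  (2, 25),
    100: (4, 25),
    400: (8, 50),
    800: (8, 100),
}

for speed, (lanes, lane_rate) in ETHERNET_PORTS.items():
    assert lanes * lane_rate == speed   # sanity check: lanes multiply up
    print(f"{speed} GbE = {lanes} lane(s) x {lane_rate} Gb/s")
```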
Today, 25 GbE remains a sweet-spot speed for many workloads, but it rarely stands alone. Dual-port 25 GbE NICs, 50 GbE variants built from two 25 Gb/s lanes, and 100 GbE ports on GPU and storage nodes are increasingly common in modern builds, especially where consolidation and virtualization density are high. For operators planning new facilities, the starting point for access networking is often at least 25 GbE, with a clear roadmap to 50 or 100 GbE as server and accelerator performance climbs.
[Figure: Data center Ethernet speed evolution]
Data center fabric evolution
Inside the fabric, the growth in bandwidth has been even more dramatic. Once 100 GbE became practical for large cloud providers, the industry quickly pivoted to defining 200 and 400 GbE standards, built from 50 or 100 Gb/s electrical and optical lanes. These 400 GbE links are now widely adopted in spine and core roles, particularly in hyperscale and colocation facilities where oversubscription must be kept low to support multi-tenant, high-traffic workloads.
The latest generation of high-end switch ASICs delivers 51.2 Tb/s of aggregate throughput, enough to support 128 ports of 400 GbE or 64 ports of 800 GbE in a single device, depending on how the lanes are configured. Designers can break those ports into multiple lower-speed connections, such as 4×100 GbE, to flexibly match switch radix, cabling constraints and rack-level architecture. This degree of flexibility allows operators to phase in higher port speeds as needed, often starting with 100 or 400 GbE and enabling 800 GbE on the same hardware later in the lifecycle.
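As a worked example of that radix arithmetic, the sketch below divides a 51.2 Tb/s switching budget across the port speeds mentioned above. The numbers are simple division, not a description of any particular product:

```python
# Radix arithmetic for a hypothetical 51.2 Tb/s switch ASIC.
ASIC_CAPACITY_GBPS = 51_200  # 51.2 Tb/s aggregate throughput

for port_speed in (800, 400, 100):
    ports = ASIC_CAPACITY_GBPS // port_speed
    print(f"{ports} ports of {port_speed} GbE")

# A single 400 GbE port can also be broken out into lower-speed links,
# e.g. 4 x 100 GbE, multiplying the usable switch radix by four.
print(f"One 400 GbE port -> {400 // 100} x 100 GbE breakout links")
```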
AI and high-performance computing (HPC) clusters are at the leading edge of this adoption curve. Training large models or running tightly coupled simulations demands enormous bisection bandwidth and predictably low tail latency. As a result, such environments are moving toward 400 GbE leaf-to-spine links, and 800 GbE fabrics are beginning to appear in production deployments. Over the next few years, Ethernet roadmaps point to 1.6 Tb/s interfaces and switch ASICs exceeding 100 Tb/s, which will further compress network tiers and allow even larger GPU and CPU clusters to communicate at near line rate.
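To give a rough sense of what bisection bandwidth means at these speeds, the sketch below computes it for a hypothetical non-blocking leaf–spine fabric; every topology parameter here is an assumption chosen for illustration:

```python
# Back-of-envelope bisection bandwidth for a two-tier leaf-spine fabric.
# All topology parameters are illustrative assumptions.
leaves = 32               # number of leaf switches
uplinks_per_leaf = 8      # uplinks from each leaf into the spine layer
uplink_speed_gbps = 400   # 400 GbE leaf-to-spine links

# In a non-blocking Clos fabric, bisection bandwidth is bounded by the
# leaf-to-spine capacity: when the fabric is cut in half, traffic between
# the halves traverses the uplinks of one half.
total_uplink_gbps = leaves * uplinks_per_leaf * uplink_speed_gbps
bisection_tbps = total_uplink_gbps / 2 / 1000
print(f"Bisection bandwidth: {bisection_tbps:.1f} Tb/s")  # 51.2 Tb/s
```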
[Figure: Simplified Ethernet leaf-spine fabric for GPU aggregation]
High-speed optical interconnects
None of this would be feasible using copper alone. As line rates push above 25 Gb/s per lane, the attenuation, reflections and crosstalk inherent in long copper traces and passive cables become increasingly difficult to manage without burning power in equalization and re-timing. Consequently, data center networks have steadily become more optical, first at longer reaches and now at shorter ones within the rack and between adjacent rows.
Pluggable optical transceivers have been the workhorse technology for this transition. Form factors such as SFP+, QSFP28, QSFP-DD and OSFP encapsulate the optical engine, laser drivers, modulators and photodiodes in a compact module that plugs into a front-panel cage. This allows switch and router vendors to offer the same underlying hardware with a wide choice of reaches—from short-reach multimode modules covering a few tens of metres to single-mode and coarse wavelength division multiplexing (CWDM) options that span hundreds of metres or a few kilometres. Operators can then mix passive copper cables, active copper, active optical cables and pluggable optics in the same chassis to match the physical layout of their facility.
As speeds rise to 400 and 800 GbE, optical design becomes more complex. Engineers turn to advanced modulation schemes such as four-level pulse amplitude modulation (PAM4) to fit more bits into each symbol and deploy sophisticated digital signal processing (DSP) in the modules to equalize the channel and correct errors. The upside is vastly higher bandwidth per fiber and per front-panel port, but the trade-off is more heat and tighter power budgets, which drive further change in mechanical design, thermal management and rack-level layout.
Beyond the walls of the data center, optics are equally important. Data center interconnect (DCI) links across a campus or metro area increasingly rely on coherent optical modules that pack 400 or 800 Gb/s into a single wavelength, using high-order modulation and powerful DSP to ride over existing fiber pairs. This allows operators to scale capacity between sites for disaster recovery, cloud on-ramps or multi-region AI clusters without continually digging new ducts or lighting fresh dark fiber.
SerDes advances and co-packaged optics
At the silicon level, much of the story is about serializer/deserializer (SerDes) technology. Modern switch ASICs and NICs have progressed from 10 and 25 Gb/s NRZ lanes to 50 and 100 Gb/s PAM4 lanes and are now moving toward 200 Gb/s per lane, where the underlying SerDes runs at symbol rates up to about 112 Gbaud, with Nyquist frequencies ranging from roughly 13 to 56 gigahertz. These very high-speed electrical interfaces rely on increasingly sophisticated equalization, clock-and-data recovery and forward error correction to preserve a usable eye opening. Each jump in lane rate reduces the number of lanes required for a given port speed, saving pins, package area and board real estate, while simultaneously tightening the requirements on the channel that carries those signals.
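The signaling arithmetic behind those figures is easy to sketch, assuming PAM4 (2 bits per symbol) and the roughly 6.25 percent combined FEC and line-coding overhead carried by current Ethernet lanes:

```python
# PAM4 lane signaling math under an assumed ~6.25% FEC/coding overhead
# (e.g., a 50 Gb/s payload travels as 53.125 Gb/s on the wire).
OVERHEAD = 1.0625

def pam4_lane(payload_gbps: float) -> tuple[float, float]:
    """Return (symbol rate in Gbaud, Nyquist frequency in GHz)."""
    line_rate = payload_gbps * OVERHEAD  # bits per second on the wire
    baud = line_rate / 2                 # PAM4 carries 2 bits per symbol
    nyquist = baud / 2                   # minimum channel bandwidth
    return baud, nyquist

for payload in (50, 100, 200):
    baud, nyq = pam4_lane(payload)
    print(f"{payload} Gb/s lane: ~{baud:.2f} Gbaud, Nyquist ~{nyq:.1f} GHz")
```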
This is where new packaging and integration approaches come in. To cope with the loss at 100 or 200 Gb/s, designers shorten the electrical paths between the switch ASIC and the optics, choose low-loss PCB materials, and refine connectors and cages to minimize reflections. Co-packaged optics (CPO) take the concept further by placing optical engines on the same substrate as the switch ASIC, or on closely coupled chiplets, so that only very short electrical connections are needed before the signal transitions to light in fiber. By eliminating the long traces from ASIC to front panel, CPO can deliver lower power per bit and enable high-radix 800 GbE or 1.6 Tb/s switches within a manageable thermal envelope. Where flyover and mezzanine cabling extend the reach of copper links inside the chassis, CPO eliminates most of that copper altogether by converting to optics close to the switch.
CPO is not a universal answer, because it complicates serviceability and module replacement compared with traditional pluggables. However, for extremely dense AI and HPC fabrics, where bandwidth per rack and power efficiency are paramount, it is emerging as a strong candidate for next-generation designs. In parallel, designers are exploring linear-drive optics, which move some of the DSP complexity from the module into the switch ASIC as another way to reduce overall power and cost while maintaining high data rates.
[Figure: Integrated optics]
Leaf-spine network topologies, low-latency Ethernet protocols
Raising link speeds is only one part of the puzzle. Architects also refine topology and traffic management to make effective use of that bandwidth. The leaf–spine model has largely replaced older three-tier designs in high-performance data centers, providing predictable hop counts and more uniform latency between any two endpoints. By deploying high-speed links and carefully managing oversubscription—often targeting 3:1 or lower for demanding workloads—operators can keep queueing delays under control even as east–west traffic volumes rise.
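As a simple illustration of how oversubscription is calculated, consider a hypothetical leaf switch; the port counts and speeds below are assumptions for the example, not a reference design:

```python
# Oversubscription ratio for a hypothetical leaf (ToR) switch:
# total server-facing bandwidth divided by total spine-facing bandwidth.
servers = 48
server_link_gbps = 25     # 25 GbE access links
uplinks = 8
uplink_gbps = 100         # 100 GbE leaf-to-spine uplinks

downlink = servers * server_link_gbps   # 1,200 Gb/s toward servers
uplink = uplinks * uplink_gbps          # 800 Gb/s toward the spine

print(f"Oversubscription: {downlink / uplink:.1f}:1")  # 1.5:1, under a 3:1 target
```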
On top of these physical and logical designs, transport protocols are evolving. For conventional cloud applications, TCP over Ethernet remains dominant, often enhanced with Explicit Congestion Notification (ECN) and larger switch buffers to smooth out microbursts. For AI and storage clusters that are extremely sensitive to packet loss, operators increasingly adopt RDMA over Converged Ethernet (RoCE) or alternative fabrics, using priority flow control, finely tuned buffers and strict traffic-class separation to achieve low tail latency.
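The marking mechanism behind ECN is straightforward to sketch: many switches mark packets with a probability that ramps between two queue-depth thresholds, in the style of weighted random early detection (WRED). The thresholds below are illustrative, not vendor defaults:

```python
import random

# Illustrative WRED-style ECN marking thresholds (not vendor defaults).
K_MIN_KB = 150    # below this queue depth, never mark
K_MAX_KB = 1500   # above this queue depth, always mark
P_MAX = 0.2       # marking probability as the queue approaches K_MAX

def should_mark_ecn(queue_depth_kb: float) -> bool:
    """Probabilistically mark a packet based on current queue depth."""
    if queue_depth_kb <= K_MIN_KB:
        return False
    if queue_depth_kb >= K_MAX_KB:
        return True
    # Marking probability ramps linearly between the two thresholds.
    p = P_MAX * (queue_depth_kb - K_MIN_KB) / (K_MAX_KB - K_MIN_KB)
    return random.random() < p

print(should_mark_ecn(100), should_mark_ecn(800), should_mark_ecn(2000))
```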
Instrumentation has become just as important as raw throughput. Modern switches and NICs incorporate line-rate telemetry, in-band network monitoring and streaming counters that feed analytics platforms in near real time. This visibility allows operators to detect hot spots, cable issues and misconfigured links quickly, then adjust routing, rebalance workloads or schedule maintenance before problems escalate into outages.
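At its simplest, that telemetry pipeline reduces to sampling interface byte counters and converting the deltas into utilization. The sketch below shows the core calculation; the counter values and sampling interval are invented for the example:

```python
def link_utilization(octets_t0: int, octets_t1: int,
                     interval_s: float, link_speed_gbps: float) -> float:
    """Percent utilization from two samples of an interface byte counter."""
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * link_speed_gbps * 1e9)

# Two samples of a 400 GbE port's byte counter, taken 10 seconds apart
# (values invented for illustration):
util = link_utilization(1_000_000_000_000, 1_350_000_000_000,
                        interval_s=10, link_speed_gbps=400)
print(f"Utilization: {util:.1f}%")  # 70.0%
```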
Future of data center networking
Looking ahead, the trajectory is toward higher per-lane speeds, deeper integration between optics and switch silicon, and smarter fabrics that blend high bandwidth with fine-grained observability. As AI, machine learning, real-time analytics and immersive media drive up east–west traffic, the data center network is evolving from a background utility into a strategic differentiator. For designers and operators, the challenge is to harness these new components—25–100 GbE access, 400–800 GbE fabrics, next-generation optics and advanced packaging—while keeping power, cost and complexity under control.
Handled well, the payoff is substantial. A network that can move vast volumes of data quickly and reliably supports the next decade of high-performance computing and cloud innovation.