Amphenol High Speed I/O Connectors

Today’s Innovations in High-Speed Interconnects Meet Tomorrow’s Data Center Communications Demands

Jim David – Portfolio Director, Amphenol ICC


Since the earliest days of the computing industry, data centers have evolved to meet ever-growing data management demands, culminating in today's large-scale facilities. These facilities have also grown more complex in design and configuration, with expanding requirements for higher signal speeds, greater bandwidth, and denser connections. High-speed interconnect technology has progressed steadily to keep pace with these requirements from the earliest stages of data center expansion.

As early data centers grew, new generations of server racks created a clear need for a common connector interface that could link network components (switches, routers, storage devices, etc.) within each rack, from rack to rack along a row or aisle, and between rows of servers across the data center. Given the multiplicity of possible interfaces, suppliers, and hardware manufacturers, standardizing these interfaces was immensely challenging, but essential to the advancement of data center design and data management capability.

The Small Form Factor (SFF) committee, which broadened its objectives in the early 1990s to include standardization within the storage industry, was one of the first organizations to formulate and release industry standards defining connector and cable interfaces, along with interoperability specifications and electrical signaling protocols for network hardware and components. The SFF committee established multi-source agreements (MSAs) that enabled different connector manufacturers and network component suppliers to design and produce interoperable devices.

Small form-factor pluggable (SFP) devices were among the first high-speed interfaces to result from an MSA. The standard was developed to allow both copper and fiber optic cabling and connections between what were typically switch and server components, with modules primarily identified by their rated data transmission speed (e.g., 1Gb/s SFP). As per-lane rates climbed to 10Gb/s, the standard was upgraded to SFP+, which also added an EEPROM that lets the hardware automatically identify the cable configuration, data transmission speed, and other pertinent information once the cable is plugged in. This upgrade, however, did not address the limitations of a primarily single-lane interface.
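The EEPROM identification mechanism mentioned above can be sketched in code. The byte offsets below (module identifier at byte 0, nominal signaling rate at byte 12, vendor name at bytes 20–35) follow the SFF-8472 convention for the module's A0h identification page; the sample data is synthetic, not read from real hardware, and the decoder covers only a few of the many defined fields.

```python
# Minimal sketch of decoding a few identification fields from an SFP/SFP+
# module EEPROM page, following the SFF-8472 A0h layout convention.

def decode_sfp_eeprom(page: bytes) -> dict:
    """Decode a handful of module identification fields."""
    return {
        # Byte 0: module identifier (0x03 indicates SFP/SFP+).
        "identifier": "SFP/SFP+" if page[0] == 0x03 else f"other (0x{page[0]:02x})",
        # Byte 12: nominal signaling rate in units of 100 Mb/s.
        "nominal_rate_mbps": page[12] * 100,
        # Bytes 20-35: vendor name, ASCII, space-padded.
        "vendor": page[20:36].decode("ascii").rstrip(),
    }

# Synthetic 64-byte page: SFP+ identifier, 10.3Gb/s line rate, made-up vendor.
page = bytearray(64)
page[0] = 0x03
page[12] = 103                      # 103 x 100 Mb/s = 10.3 Gb/s
page[20:36] = b"ACME OPTICS     "   # 16 bytes, space-padded
info = decode_sfp_eeprom(bytes(page))
print(info)
```

In a live system these bytes would be read over the module's two-wire (I2C) management interface rather than built by hand.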

QSFP lays the foundation for increasing Gb/s

The upgraded SFP+ interface achieved tremendous success and widespread implementation in data center connectivity. As data transmission rates climbed toward 25Gb/s, increasing bandwidth capacity became a top priority. In response, the four-lane Quad Small Form-factor Pluggable (QSFP) interface was developed, with each lane running at 10Gb/s for 40Gb/s of aggregate bandwidth. This solution was effective for several years, but the need for still greater bandwidth led to the development of 25Gb/s lane signal speeds. By combining four 25Gb/s lanes, the QSFP interface enabled hardware designers to add I/O ports with 100Gb/s of aggregate bandwidth.
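The lane-aggregation arithmetic above reduces to a simple product, which a short sketch makes explicit (the lane counts and per-lane rates are those quoted in the text):

```python
# Aggregate port bandwidth = lane count x per-lane signaling rate.

def aggregate_gbps(lanes: int, gbps_per_lane: int) -> int:
    return lanes * gbps_per_lane

print(aggregate_gbps(1, 10))   # SFP+: one 10Gb/s lane    -> 10
print(aggregate_gbps(4, 10))   # first-generation QSFP    -> 40
print(aggregate_gbps(4, 25))   # 25Gb/s-per-lane QSFP     -> 100
```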

Connector and cable suppliers faced a number of challenges in supporting the QSFP interface's leap in bandwidth, density, and capability. Special measures were needed to accommodate increased heat dissipation demands while maintaining high-performance signal integrity, PCB footprint compatibility, EMI shielding, and compatibility with both copper- and optical-based solutions.

Integrating enhanced, floating heat sinks into the module housing increases airflow through and around each port, improving heat dissipation. High-speed signal integrity was further improved through the strategic use of advanced plastic and metal materials within the connector design; on the cable assembly side, highly engineered PCB, wire stripping, wire management, and wire termination techniques also contributed.

OSFP evolves and expands speed & density capabilities

The QSFP's innovative four-lane interface laid the groundwork for future requirements in connector bandwidth, data transfer speed, and capability. Following the successful implementation of the QSFP interface, which supported 100Gb/s per port across a wide range of applications, the next step was to double the aggregate bandwidth to 200Gb/s. To achieve this milestone, the eight-lane Octal Small Form-factor Pluggable (OSFP) interface was developed, continuing to support 25Gb/s per lane. The OSFP connector design incorporates 60 contacts per port on a 0.6mm pitch, with two rows of 16 high-speed differential pairs and 10 power/control contacts.

In addition to doubling the number of data lanes, the OSFP interface supports higher data transmission speeds by shifting from non-return-to-zero (NRZ) binary modulation to four-level pulse amplitude modulation (PAM-4), which essentially doubles the connection bandwidth through multi-level signaling. PAM-4 adds complexity to the encoding hardware at either end of the connection, but designers may find that cost outweighed by the benefits of the OSFP interface: expanded data bandwidth and reduced cabling costs, with each of the eight 25Gb/s lanes effectively doubled to 50Gb/s for an aggregate data transmission rate of 400Gb/s.
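The NRZ-to-PAM-4 shift can be illustrated with a small sketch: NRZ carries one bit per symbol using two levels, while PAM-4 carries two bits per symbol using four levels, so the same bit stream needs half as many symbols at the same symbol rate. The amplitude values and Gray-coded mapping below are illustrative, not electrical specifications.

```python
# NRZ: 1 bit per symbol (2 levels); PAM-4: 2 bits per symbol (4 levels).
NRZ_LEVELS = {0: -1, 1: +1}
# Gray-coded mapping of bit pairs to four illustrative amplitude levels.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def nrz_encode(bits):
    return [NRZ_LEVELS[b] for b in bits]

def pam4_encode(bits):
    # Consume bits two at a time: half as many symbols for the same payload.
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
print(len(nrz_encode(bits)))   # 8 symbols
print(len(pam4_encode(bits)))  # 4 symbols: same bits in half the symbols
```

Holding the symbol rate fixed, halving the symbols per bit doubles the data rate, which is how each 25Gb/s lane becomes an effective 50Gb/s lane.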

OSFP connectors further extend heat dissipation capability (up to 15W of power per port), allowing copper-based cabling solutions as well as both short- and long-reach optical applications. This clarifies the port's heat dissipation properties for designers and frees them from constraints on which optical cable or transceiver solution can be plugged into the port.

The OSFP standard is supported by an MSA development group consisting of 49 member companies. Initial OSFP interface connectors and cables in single (1 x 1) and eight port (2 x 4) configurations are expected to be available in mid-2017. Other custom configurations to meet application-specific needs are likely to follow the deployment of those standard configurations.

DD-QSFP furthers interconnect advancement with backward compatibility


The 400Gb/s-per-port milestone can alternatively be reached by doubling the density of the standard 100Gb/s QSFP module. The double-density QSFP (DD-QSFP) interface is designed with 76 contacts on a 0.8mm pitch, with 16 high-speed differential pairs and 13 power/control contacts arranged in two rows. In a 2 x 1 stacked integrated cage/connector module, it can employ either NRZ modulation to deliver eight lanes of 25Gb/s (200Gb/s aggregate) or PAM-4 modulation to deliver eight lanes of 50Gb/s (400Gb/s aggregate).

While the DD-QSFP interface may be restricted in some optical transmission applications because it cannot match the heat dissipation capability of the OSFP system, it offers the advantage of backward pluggable compatibility with existing QSFP hardware, achieving double the bandwidth in the same space.

DD-QSFP interface connection devices are scheduled for implementation in early 2018 for single (1 x 1) and double (2 x 1) configurations. Following the deployment of those standard configurations, a surface mount version is anticipated later in 2018.

RCx advances intra-rack connections

An emerging MSA between several manufacturers, the RCx interface offers a high-density, low-cost, passive copper connector and cabling system specifically designed to provide short-run, intra-rack connectivity up to 3m for 25Gb/s, 50Gb/s, and 100Gb/s Ethernet applications. By streamlining the electrical design of switches and adapters, this cost-effective, simple, flexible, passive copper interconnect system eliminates the need for active electrical components (e.g., EEPROMs, fiber optic transceivers, and re-timers).

RCx connectors support multiple cable assembly and splitter/breakout configurations, with each lane carrying 25Gb/s. The single-lane RCx1 cable design features 10 contacts per 25Gb/s lane, including two differential pairs, four surrounding ground contacts, and two ID pin contacts on a 0.8mm pitch. The dual-lane RCx2 and quad-lane RCx4 configurations are multiples of the RCx1 design, featuring 20 and 40 contacts, respectively. A mechanical keying slot fabricated into the center of the bottom of the cable connector shield prevents mis-mated connector orientation and improper lane-shifting errors.
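The per-lane contact budget described above scales linearly across the RCx family, which a brief sketch makes concrete (the contact roles and counts are those quoted in the text):

```python
# Each 25Gb/s RCx lane uses 10 contacts:
# 2 differential pairs (4 signal contacts) + 4 grounds + 2 ID pins.
CONTACTS_PER_LANE = 4 + 4 + 2

def rcx_contacts(lanes: int) -> int:
    return lanes * CONTACTS_PER_LANE

for name, lanes in [("RCx1", 1), ("RCx2", 2), ("RCx4", 4)]:
    print(f"{name}: {rcx_contacts(lanes)} contacts, {lanes * 25}Gb/s")
```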

Additional RCx receptacle configurations are also available in 1 x 2 and 1 x 8 SMT versions and 2 x 8 press-fit versions. Multi-row connection configurations can be achieved by combining stacked 1 x 8 and 2 x 8 receptacles depending on the specific application requirements.

The RCx1 interface provides several advantages over legacy SFP28 interfaces used for intra-rack connections, including significantly reduced board space and linear real estate behind the faceplate, reduced active and passive component count, and higher port density. Compared to alternative QSFP28 interfaces, the RCx interface provides higher density, requires less board real estate behind the faceplate, and doesn’t require breakout cables to go from 100Gb/s ports to 25Gb/s or 50Gb/s network adapter cards.

The RCx interface connectors and cable assemblies are currently available in sample and pre-production volumes, with full production capacities anticipated in 2017.


Amphenol High Speed I/O

With our design creativity, simulation and testing capability, and cost effectiveness, AHSI® leads the way in interconnect development for internet equipment, infrastructure, enterprise networks, and appliances.