Data usage is increasing every year, and the communications industry is working diligently to support the growing demand. This article discusses why we need faster data transfer, what data center physical layer architecture changes are needed to support higher data rates, and how Amphenol is well positioned to support higher data rate systems.

Why do we need faster data transfer?

Internet usage is up

Many people around the world are working and learning from home due to the coronavirus pandemic, and this remote paradigm has increased internet usage. Specifically, the data goes to video meetings, remote access to servers, large file transfers, online gaming, and social media. Today there are approximately 2.7 billion Facebook users and 2.3 billion YouTube users, and in 2021 humanity will collectively spend 420 million years on these social media platforms. Cell phones releasing this year add new capabilities such as 8K and 360° video, and this large data content will be shared on social media platforms and streamed live. In 2020, the average household used 350 GB of data per month, and many were at or above 1 TB, the data cap for most internet providers; data usage will only increase moving forward.

Internet-of-Things will thrive from 5G

The rise of 5G brings many new technologies to life. Precision agriculture uses 5G-connected sensors, drones, and automated hardware to waste less and produce more. Autonomous vehicles driving at highway speeds communicate updates to data centers via 5G every two feet. Drones on the 5G network are being used to make deliveries; UPS has already teamed up with Verizon to receive certification for delivering vital healthcare supplies via drone, and says 5G makes this possible. Finally, augmented reality over 5G will enable us to shop from home like never before. To fully enable 5G and all its glorious by-products, we need an upgraded infrastructure, and this infrastructure will include 112 Gb/s transmission per differential pair. For data centers, the change will be to implement the IEEE 400GBASE-KR4 and 400GBASE-CR4 protocols in their servers and switches, respectively.

How do we meet the data demand?

Data centers and edge data centers need to transition to higher-speed architectures to support the services discussed above. The incumbent generation in most data center servers is IEEE 100GBASE-CR4 and 100GBASE-KR4, described in IEEE 802.3 Clauses 92 and 93, respectively. These protocols, released in 2014, use a signaling rate of 25.78125 Gbaud with NRZ modulation. The next move, to 200GBASE-KR4, is happening today. This protocol operates at 26.5625 Gbaud with PAM4 modulation. The symbol (baud) rate has not changed dramatically, but each symbol now carries two bits instead of one. That translates to less signal available for each bit, and with less signal, the system signal-to-noise ratio decreases.
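To make the rate arithmetic concrete, here is a minimal sketch using the standard signaling parameters quoted above. (The raw line rates come out slightly above the nominal 100/200/400 Gb/s because they include FEC and encoding overhead.)

```python
# Sanity check of the arithmetic behind each Ethernet generation.
# Baud rates and modulation are the standard values; lane counts
# come from the -KR4 suffix (4 lanes).

GENERATIONS = {
    #  name              baud (GBd)  bits/symbol  lanes
    "100GBASE-KR4": (25.78125, 1, 4),  # NRZ
    "200GBASE-KR4": (26.5625,  2, 4),  # PAM4
    "400GBASE-KR4": (53.125,   2, 4),  # PAM4
}

for name, (baud, bits, lanes) in GENERATIONS.items():
    raw_rate = baud * bits * lanes  # raw line rate in Gb/s, incl. overhead
    nyquist = baud / 2              # Nyquist frequency in GHz
    print(f"{name}: {raw_rate:.2f} Gb/s raw, Nyquist = {nyquist:.2f} GHz")
```

Running this reproduces the key frequencies discussed below: a 12.89 GHz Nyquist for the 100G generation and 26.56 GHz for 400GBASE-KR4.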

To illustrate the difference, let’s consider a 100GBASE-KR4 backplane.

This backplane has about 25 dB of insertion loss at the 25.78125 Gbaud Nyquist frequency (12.89 GHz) and about 25 to 35 dB of signal-to-noise ratio, depending on the wiring pattern. If we plot the equalized eye diagram of the channel alone, without crosstalk, at 25.78125 Gbaud with NRZ modulation, we see a wide-open eye with an eye height of approximately 40 millivolts and an eye width spanning almost the entire unit interval. If we do the same thing at 26.5625 Gbaud with PAM4 modulation, the situation is much worse: the eye height is approximately 13 millivolts and the eye width is only about 50% of the unit interval.
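That drop in eye height is not incidental. PAM4 stacks three eyes in the amplitude range where NRZ has one, so each eye sees roughly a third of the signal, an amplitude penalty of about 9.5 dB. A quick back-of-the-envelope check, using the approximate 40 mV NRZ eye height above:

```python
import math

nrz_eye_height_mv = 40.0  # equalized NRZ eye height from the example above

# PAM4 packs 4 levels into the same voltage swing, stacking 3 eyes where
# NRZ has one, so each PAM4 eye gets roughly 1/3 of the NRZ amplitude.
pam4_eye_height_mv = nrz_eye_height_mv / 3
penalty_db = 20 * math.log10(3)

print(f"Expected PAM4 eye height: ~{pam4_eye_height_mv:.1f} mV")  # ~13.3 mV
print(f"Amplitude penalty vs NRZ: ~{penalty_db:.1f} dB")          # ~9.5 dB
```

The predicted ~13.3 mV lines up almost exactly with the simulated 13 mV PAM4 eye.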

Even though the signal level at 200GBASE-KR4 is significantly worse than at 100GBASE-KR4, it is clear that doubling the data rate is still possible with the same interconnect system. That is great news for integrators and data center owners who are looking for an easy upgrade path. Let's see what happens when we consider 400GBASE-KR4, the next-generation protocol for high-speed data centers, which operates at 53.125 Gbaud (26.56 GHz Nyquist frequency). This protocol is aligned with the OIF CEI-112G standards.

The statistical eye has completely collapsed, meaning the current hardware does not work at 400GBASE-KR4. Another way to look at it is through the industry-standard metric for a working channel: Channel Operating Margin, better known as COM. COM combines the electrical performance of the channel with the impairments of the IC into one number representing a signal-to-noise ratio in voltage decibels. In most cases, a COM greater than 3 dB passes the interoperability requirement.
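The full COM procedure (IEEE 802.3 Annex 93A) derives the signal and noise amplitudes from the channel s-parameters and a reference receiver; the sketch below shows only the simple pass/fail step it ultimately reduces to:

```python
import math

def com_db(signal_amplitude, noise_amplitude):
    """Channel Operating Margin expressed as a voltage SNR in dB.

    Simplified illustration only: the real COM algorithm computes these
    amplitudes from s-parameters, equalization, and jitter/noise models.
    """
    return 20 * math.log10(signal_amplitude / noise_amplitude)

def passes_com(signal_amplitude, noise_amplitude, limit_db=3.0):
    return com_db(signal_amplitude, noise_amplitude) >= limit_db

# A 3 dB limit corresponds to a voltage ratio of 10**(3/20) ≈ 1.41,
# i.e. the available signal must be at least ~1.41x the total noise.
print(passes_com(signal_amplitude=1.5, noise_amplitude=1.0))  # True
print(passes_com(signal_amplitude=1.0, noise_amplitude=1.2))  # False
```

That 1.41 voltage ratio behind the 3 dB limit will come up again when we diagnose the failing channel below.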

Just as the eye diagrams suggest, the backplane passes the 100GBASE-KR4 electrical requirement easily, passes the 200GBASE-KR4 requirement with less margin, and fails the 400GBASE-KR4 requirement by a wide margin. It is time for an upgrade, but what do we need to do?

What are the technical challenges with 112G?

Signal Integrity: Insertion loss, reflections, and crosstalk

The first obvious issue is the high frequency required for 400GBASE-KR4. The protocol is designed to accommodate 28 dB channels at 26.56 GHz, and the current channel has around 52 dB of loss at that frequency. Clearly, the backplane architecture needs to change. That can be done by making channels shorter, using better printed circuit board materials, or replacing traditional backplanes with cabled solutions. Amphenol is prepared to support cabled backplanes with its ExaMAX®2, ExtremePort Swift, Paladin®, and micro-LinkOVER connector systems.
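To see why, consider a rough loss budget. The numbers below are illustrative assumptions, not data for any particular material or connector, but they show how quickly 28 dB is consumed:

```python
# Rough length budget for a 400GBASE-KR4 channel. Per-inch loss and
# connector/via allocations are illustrative assumptions only.

il_budget_db = 28.0           # channel insertion-loss budget at 26.56 GHz
connector_loss_db = 2 * 1.5   # two connectors, assumed ~1.5 dB each
via_and_breakout_db = 2.0     # assumed allocation for vias and breakout routing
trace_loss_db_per_inch = 1.0  # assumed ultra-low-loss PCB material at 26.56 GHz

remaining_db = il_budget_db - connector_loss_db - via_and_breakout_db
max_trace_inches = remaining_db / trace_loss_db_per_inch
print(f"Max total trace length: ~{max_trace_inches:.0f} inches")  # ~23 in
```

Under these assumptions, roughly two feet of even the best PCB routing exhausts the budget, which is why cabled solutions, with far lower per-inch loss than PCB traces, become attractive.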

Let's start by simply giving the backplane less loss. This is accomplished by making some concessions on trace length and using the best PCB material available. The loss is now 21 dB at 26.56 GHz, within the 28 dB limit of the 400GBASE-KR4 specification. If we analyze this backplane with COM, it still fails, but why? Digging deeper, we find there is simply too much noise in the system. To pass COM, the signal-to-noise ratio needs to be higher than 1.41, and the new backplane has more noise than signal! Looking deeper still, we see reflections and crosstalk cause roughly equal concern; however, the crosstalk comes primarily from NEXT, with FEXT making a rather small contribution.
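The bookkeeping behind that conclusion looks something like the sketch below. The individual amplitudes are made-up placeholders; only the structure, with independent noise terms combining root-sum-square against the available signal, reflects the analysis:

```python
import math

# Illustrative noise bookkeeping for the failing backplane described above.
# All amplitudes are hypothetical placeholders, chosen to mirror the
# qualitative findings: reflections and NEXT dominate, FEXT is small.

signal_mv = 9.0
noise_terms_mv = {
    "reflections (ISI)": 7.0,
    "NEXT": 6.5,   # dominant crosstalk term in this example
    "FEXT": 1.5,   # comparatively small contribution
}

# Independent noise sources combine root-sum-square.
total_noise_mv = math.sqrt(sum(v**2 for v in noise_terms_mv.values()))
snr = signal_mv / total_noise_mv
print(f"Total noise: {total_noise_mv:.1f} mV, SNR = {snr:.2f}")
print("PASS" if snr >= 1.41 else "FAIL")  # needs > ~1.41 (COM >= 3 dB)
```

With these placeholder numbers the total noise (~9.7 mV) exceeds the signal (9 mV), exactly the "more noise than signal" situation the COM analysis reveals.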

If we change the connector system to one designed for 400GBASE-KR4 transmission, the noise is reduced and the signal is slightly higher because radiation is removed. The result is a working 400GBASE-KR4 channel! A working system at these frequencies needs insertion loss under 28 dB at 26.56 GHz and interconnect solutions with low reflections and low near-end crosstalk.

Conclusion

Amphenol recognizes the need for higher-speed connectors and understands how to build connector solutions that enable these speeds. We have the tools and expertise to help our customers get there, electrically and mechanically. We also have every type of connector needed for 112G integration.