How 400G Has Transformed Data Centers

With the rapid pace of technology adoption across industries worldwide, data centers are adapting on the fly to keep up with rising client expectations. Their history also points to an evolution characterized by ever-increasing fiber density, bandwidth, and lane speeds.

Data centers are shifting from 100G to 400G technologies in a bid to create more powerful networks that offer enhanced experiences to clients. Some of the factors pushing for 400G deployments include recent advancements in disruptive technologies such as AI, 5G, and cloud computing.

Today, forward-looking data centers that want to minimize cost while ensuring high-end compatibility and convenience have made 400G Ethernet a priority. Below, we discuss the evolution of data centers, the popular 400G form factors, and what to expect in the data center switching market as technology continues to improve.

Evolution of Data Centers

The concept of data centers dates back to the 1940s, when the world’s first programmable computer, the Electronic Numerical Integrator and Computer (ENIAC), was the apex of computational technology. It was used primarily by the US Army to compute artillery fire during the Second World War, was complex to maintain and operate, and could only be run in a carefully controlled environment.

This led to the development of the first data centers, built around intelligence and secrecy: typically a single door, no windows, and, beyond the hundreds of feet of wiring and vacuum tubes, huge vents and fans for cooling. Refer to our data center evolution infographic to learn more about the rise of modern data centers and how technology has shaped the end-user experience.

The Limits of Ordinary Data Centers

Some of the notable players driving the data center evolution are CPU design companies such as Intel and AMD. The two have been advancing processor technologies, and both boast exceptional features that can support almost any workload.

And while most of these data center processors are reliable and optimized for a broad range of applications, they aren’t engineered for emerging specialized workloads such as big data analytics, machine learning, and artificial intelligence.

How 400G Has Transformed Data Centers

The move to 400 Gbps drastically transforms how data centers and data center interconnect (DCI) networks are engineered and built. The shift to 400G connections is a still-evolving, highly dynamic interplay between the client side and the network side.

Currently, two multi-source agreements compete for the top spot as the form factor of choice in the rapidly evolving 400G market: the QSFP-DD and OSFP pluggable optical transceivers.

OSFP vs. QSFP-DD

QSFP-DD is the preferred 400G optical form factor on the client side, thanks to the various reach options available. On the network side, the two driving factors are the emergence of the Optical Internetworking Forum’s 400ZR and the trend toward combining switching and transmission in one box; there, the choice of form factor comes down to power and mechanics.

The OSFP, being a larger module, provides plenty of useful space for DWDM components and can dissipate up to 15 W of power. When packing coherent capabilities into a small form factor, power is critical, which gives OSFP a competitive advantage on the network side.

Despite the OSFP’s power, space, and enhanced signal-integrity performance, it is not compatible with QSFP28 plugs. In addition, there is no 100Gbps OSFP variant, so it cannot offer a smooth transition from legacy modules, which is another reason it has not been widely adopted on the client side.

The QSFP-DD, by contrast, is backward compatible with QSFP28 and QSFP plugs and has seen broad support in the market. Its main limitation is power dissipation, often capped at around 12 W, which makes it challenging to run a coherent ASIC (application-specific integrated circuit) and keep it cool for extended periods.
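To see why the gap between a 12 W and a 15 W envelope matters, the short sketch below checks a hypothetical coherent 400G module power budget against both limits. The 12 W and 15 W ceilings come from the discussion above; the component figures (DSP, optics, control) are illustrative assumptions only, not measured values.

```python
# Rough, illustrative comparison of a hypothetical coherent 400G module
# power budget against the QSFP-DD and OSFP thermal envelopes.
# All component figures below are assumptions for illustration only.

FORM_FACTOR_LIMIT_W = {"QSFP-DD": 12.0, "OSFP": 15.0}

# Hypothetical breakdown of a coherent 400G module power budget
coherent_budget_w = {
    "coherent_dsp": 7.5,       # assumed DSP ASIC power
    "optics_and_laser": 4.0,   # assumed TOSA/ROSA + tunable laser
    "control_and_misc": 1.5,   # assumed microcontroller, power conversion, etc.
}

total_w = sum(coherent_budget_w.values())
print(f"Assumed coherent module power: {total_w:.1f} W")

for name, limit in FORM_FACTOR_LIMIT_W.items():
    headroom = limit - total_w
    verdict = "fits" if headroom >= 0 else "exceeds the envelope"
    print(f"{name}: {limit:.0f} W limit -> {verdict} ({headroom:+.1f} W headroom)")
```

Under these assumed numbers the module clears the OSFP limit with headroom but overshoots the QSFP-DD cap, which mirrors why OSFP tends to win on the network side while QSFP-DD dominates the client side.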

The switch to 400GE data centers is also fueled by servers adopting 25GE/50GE interfaces to meet the ever-growing demand for high-speed storage access and large-scale data processing.

The Future of 400G Data Center Switches

Cloud service providers such as Amazon, Facebook, and Microsoft are still deploying 100G to reduce costs. According to a report by Dell’Oro Group, 100G is expected to peak in the next two years. But despite 100G dominating the market today, 400G shipments are expected to surpass 15 million switch ports by 2023.

In 2018, the first batch of 400G switch systems based on 12.8 Tbps chips was released. Google was among the earliest cloud service providers to enter the market. Since then, other cloud service providers have followed, helping fuel the transformation even further. Today, cloud companies make up the bulk of 400G customers, and service providers are expected to be next in line.

Choosing a Data Center Switch

Data center switches are available in a range of form factors, designs, and switching capacities. Depending on your use cases, you want a reliable switch that provides flexibility and is built for the environment in which it is deployed. Critical factors to consider during selection include infrastructure scalability and ease of programmability. A good data center switch is power efficient, has reliable cooling, and allows easy customization and integration with automation tools and systems. Here is an article about Data Center Switch Wiki, Usage and Buying Tips.
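One simple way to make such a comparison concrete is a weighted scorecard over the criteria listed above. The sketch below is a generic illustration: the candidate names, scores, and weights are made up and should be replaced with your own evaluation data.

```python
# Minimal weighted-scorecard sketch for comparing candidate data center
# switches. Candidates, scores (1-5), and weights are made up for illustration.

weights = {
    "scalability": 0.30,
    "programmability": 0.25,
    "power_efficiency": 0.25,
    "automation_integration": 0.20,
}

candidates = {
    "Switch A": {"scalability": 5, "programmability": 3,
                 "power_efficiency": 4, "automation_integration": 4},
    "Switch B": {"scalability": 4, "programmability": 5,
                 "power_efficiency": 3, "automation_integration": 5},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores using the weights defined above."""
    return sum(weights[criterion] * value for criterion, value in scores.items())

# Rank candidates from best to worst weighted score
for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```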

Article Source: How 400G Has Transformed Data Centers

Related Articles:

What’s the Current and Future Trend of 400G Ethernet?

400ZR: Enable 400G for Next-Generation DCI

400G Optics in Hyperscale Data Centers

Since their advent, data centers have been striving to keep up with rising bandwidth requirements. The stats show that roughly 3.04 exabytes of data are generated every day. For a hyperscale data center, the bandwidth requirements are massive, and the relevant applications call for a preemptive approach because of their scalable nature. As 400G data centers have taken data transfer speeds to a whole new level, they bring significant convenience in addressing various areas of concern. In this article, we dig a little deeper and try to answer the following questions:

  • What are the driving factors of 400G development?
  • What are the reasons behind the use of 400G optics in hyperscale data centers?
  • What are the trends in 400G devices in large-scale data centers?

What Are the Driving Factors For 400G Development?

The driving factors for 400G development can be grouped into video streaming services and video conferencing services. Both require very high data transfer speeds to function smoothly across the globe.

Video Streaming Services

Video streaming services were already straining bandwidth requirements when the COVID-19 pandemic forced a large share of the population to stay and work from home, which further increased the use of streaming platforms. The stats show that a medium-quality stream on Netflix consumes 0.8 GB per hour; multiply that across more than 209 million subscribers. As travel costs came down, the savings went into higher-quality streams on Netflix such as HD and 4K, and what stood at 0.8 GB per hour rose to 3 GB per hour for HD and 7 GB per hour for 4K. This fueled the need for 400G development.

Video Conferencing Services

As COVID-19 made working from home the new norm, video conferencing services also saw a major boost. As of 2021, an estimated 20.56 million people were working from home in the US alone. With video conferencing taking center stage, Zoom, which consumes about 500 MB per hour, saw a huge increase in its user base, putting further pressure on data transfer needs.
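To put these figures in perspective, the back-of-the-envelope sketch below multiplies the per-hour consumption and user counts cited above by assumed daily usage hours. The per-hour rates and user counts come from this article; the hours of streaming and meetings per day are illustrative assumptions, not reported data.

```python
# Back-of-the-envelope traffic estimate from the figures cited above.
# Per-hour consumption and user counts come from the article text;
# hours of use per day are illustrative assumptions.

GB_PER_HOUR = {"netflix_sd": 0.8, "netflix_hd": 3.0, "netflix_4k": 7.0, "zoom": 0.5}

netflix_subscribers = 209_000_000   # cited subscriber count
remote_workers_us = 20_560_000      # cited US work-from-home figure

assumed_streaming_hours_per_day = 2  # assumption
assumed_meeting_hours_per_day = 3    # assumption

# Daily traffic if every subscriber streamed in HD (rough upper-bound estimate)
netflix_daily_pb = (netflix_subscribers * assumed_streaming_hours_per_day
                    * GB_PER_HOUR["netflix_hd"]) / 1_000_000  # GB -> PB

zoom_daily_pb = (remote_workers_us * assumed_meeting_hours_per_day
                 * GB_PER_HOUR["zoom"]) / 1_000_000

print(f"Netflix (HD, assumed 2 h/day): ~{netflix_daily_pb:,.0f} PB/day")
print(f"Zoom (assumed 3 h/day, US only): ~{zoom_daily_pb:,.0f} PB/day")
```

Even these crude estimates land in the petabytes-per-day range, which is the kind of aggregate demand pushing operators toward 400G.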

What Makes 400G Optics the Ideal Choice For Hyperscale Data Centers?

Significant Decrease in Energy and Carbon Footprint

To put it simply, 400G quadruples the data transfer speed. Compared with using four 100G ports in breakout to deliver 400GbE, a single 400G port reduces cost; a single node at the output also minimizes the risk of failure and lowers the energy requirement. This brings down the ESG footprint, which has become a key performance indicator for organizations going forward.

Reduced Operational Cost

As mentioned earlier, a 400G solution requires a single 400G port, whereas meeting the same requirement with a 100G solution requires four 100G ports. On a router, four ports cost considerably more than a single port capable of the same aggregate throughput, and the same holds for power. Taken together, these two factors bring the operational cost down considerably.
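As a rough sketch of that comparison, the snippet below contrasts four 100G breakout ports with a single 400G port. The per-port power and cost figures are made up purely to show the arithmetic; real values depend heavily on the platform and optics used.

```python
# Illustrative comparison of serving 400GbE via 4 x 100G breakout ports
# versus a single 400G port. Per-port power and cost figures are made up
# solely to show the arithmetic; real values vary by platform and optics.

options = {
    "4 x 100G breakout": {"ports": 4, "watts_per_port": 4.5, "cost_per_port": 400},
    "1 x 400G port":     {"ports": 1, "watts_per_port": 12.0, "cost_per_port": 900},
}

for name, o in options.items():
    total_w = o["ports"] * o["watts_per_port"]
    total_cost = o["ports"] * o["cost_per_port"]
    print(f"{name}: {o['ports']} port(s), {total_w:.1f} W, "
          f"${total_cost:.0f} total (${total_cost / 400:.2f} per Gbps)")
```

With these assumed numbers, the single 400G port wins on both total power and cost per Gbps, which is the essence of the operational-cost argument above.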

Trends of 400G Optics in Large-Scale Data Centers—Quick Adoption

The introduction of 400G solutions in large-scale data centers has reshaped the entire sector, owing to the enormous increase in data transfer speeds. According to research, 400G is expected to replace 100G and 200G deployments far faster than its predecessors did. Since its introduction, more and more vendors have been upgrading to network devices that support 400G, and the adoption rate reflects this.

Challenges Ahead

Lack of Advancement in the 400G Optical Transceiver Sector

Although the shift toward such network devices is rapid, there are a number of implementation challenges, because it is not only the devices that need upgrading but also the infrastructure. Vendors are trying to stay ahead of the curve, but the development and maturity of 400G optical transceivers have not yet reached the expected benchmark, and the same is true of their cost and reliability. Since optical transceivers are a critical element, this is a major challenge in the deployment of 400G solutions.

Latency Measurement

In addition, the introduction of this solution has made network testing and monitoring more important than ever. Latency measurement has always been a key indicator when evaluating performance, and data throughput, jitter, and frame loss are also major concerns in this regard.
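As a minimal illustration of latency and jitter measurement, the sketch below times repeated TCP connection setups to an assumed test endpoint and reports mean latency and jitter. The target host and port are placeholders; in practice, 400G links are validated with dedicated test gear that also measures throughput and frame loss at line rate.

```python
# Minimal latency/jitter sketch: time repeated TCP connection setups to a
# test endpoint and report mean round-trip latency and jitter. The target
# host/port are assumptions; real 400G validation uses dedicated test gear.

import socket
import statistics
import time

TARGET = ("example.com", 443)  # assumed test endpoint
SAMPLES = 10

rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection(TARGET, timeout=2):
        pass  # connection established; close immediately
    rtts_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(0.2)

mean_ms = statistics.mean(rtts_ms)
jitter_ms = statistics.pstdev(rtts_ms)  # jitter approximated as std deviation
print(f"mean latency: {mean_ms:.2f} ms, jitter: {jitter_ms:.2f} ms over {SAMPLES} samples")
```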

Investment in Network Layers

Lastly, creating a plug-and-play environment for this solution needs to become more realistic, which will require greater investment in the physical, higher-level, and network-IP component layers.

Conclusion

Rapid technological advancements have led to concepts like the Internet of Things, which require greater data transfer speeds. That, combined with the global shift to remote work, has increased traffic exponentially. Hyperscale data centers were already feeling the pressure, and the introduction of 400G data centers is a step in the right direction: a preemptive approach to serving a growing global population and an increasing number of internet users.

Article Source: 400G Optics in Hyperscale Data Centers

Related Articles:

How Many 400G Transceiver Types Are in the Market?

Global Optical Transceiver Market: Striding to High-Speed 400G Transceivers