What Is Network Edge?

The concept of the network edge has gained prominence with the rise of edge computing, which involves processing data closer to the source of data generation rather than relying solely on centralized cloud servers. This approach can reduce latency, improve efficiency, and enhance the overall performance of applications and services. In this article, we’ll introduce what the network edge is, explore how it differs from edge computing, and describe the benefits that network edge brings to enterprise data environments.

What is Network Edge?

At its essence, the network edge represents the outer periphery of a network. It's the gateway where end-user devices, local networks, and peripheral devices connect to the broader infrastructure, such as the internet. It's the point at which a user or device accesses the network, or the point where data leaves the network to reach its destination. In short, the network edge is the boundary between a local network and the broader network infrastructure, and it plays a crucial role in data transmission and connectivity, especially in the context of emerging technologies like edge computing.

What is Edge Computing and How Does It Differ from Network Edge?

The terms “network edge” and “edge computing” are related concepts, but they refer to different aspects of the technology landscape.

What is Edge Computing?

Edge computing is a distributed computing paradigm that involves processing data near the source of data generation rather than relying on a centralized cloud-based system. In traditional computing architectures, data is typically sent to a centralized data center or cloud for processing and analysis. However, with edge computing, the processing is performed closer to the “edge” of the network, where the data is generated. Edge computing complements traditional cloud computing by extending computational capabilities to the edge of the network, offering a more distributed and responsive infrastructure.


What is the Difference Between Edge Computing and Network Edge?

While the network edge and edge computing share a proximity in their focus on the periphery of the network, they address distinct aspects of the technological landscape. The network edge is primarily concerned with connectivity and access, and it doesn’t specifically imply data processing or computation. Edge computing often leverages the network edge to achieve distributed computing, low-latency processing and efficient utilization of resources for tasks such as data analysis, decision-making, and real-time response.


Network Edge vs. Network Core: What’s the Difference?

Another common source of confusion is discerning the difference between the network edge and the network core.

What is Network Core?

The network core, also known as the backbone network, is the central part of a telecommunications network that provides the primary pathway for data traffic. It serves as the main infrastructure for transmitting data between different network segments, such as from one city to another or between major data centers. The network core is responsible for long-distance, high-capacity data transport, ensuring that information can flow efficiently across the entire network.

What is the Difference between the Network Edge and the Network Core?

While the network edge is where end users and local networks connect to the broader infrastructure, and edge computing involves processing data closer to the source, the network core is the backbone that facilitates the long-distance transmission of data between different edges, locations, or network segments. It is a critical component in the architecture of large-scale telecommunications and internet systems.

Advantages of Network Edge in Enterprise Data Environments

Let’s turn our attention to the practical implications of edge networking in enterprise data environments.

Efficient IoT Deployments

In the realm of the Internet of Things (IoT), where devices generate copious amounts of data, edge networking shines. It optimizes the processing of IoT data locally, reducing the load on central servers and improving overall efficiency.
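To make this concrete, below is a minimal sketch (hypothetical names and values throughout) of an edge gateway that summarizes a window of raw sensor readings locally and forwards only the compact result to the cloud, instead of shipping every sample upstream.

```python
from statistics import mean

# Hypothetical edge-gateway sketch: aggregate raw IoT readings locally
# and send only a small summary upstream, instead of every sample.

def summarize(readings: list[float]) -> dict:
    """Reduce a window of raw sensor samples to a compact summary."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

def process_window(readings: list[float], send_to_cloud) -> None:
    # The heavy lifting happens at the edge; one small record leaves the site.
    send_to_cloud(summarize(readings))

# One hour of 1 Hz temperature samples (3,600 raw values) becomes one record.
samples = [20.0 + (i % 10) * 0.1 for i in range(3600)]
process_window(samples, send_to_cloud=print)
```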

Improved Application Performance

Edge networking enhances the performance of applications by processing data closer to the point of use. This results in faster application response times, contributing to improved user satisfaction and productivity.

Enhanced Reliability

Edge networks are designed for resilience. Even if connectivity to the central cloud is lost, local processing and communication at the edge can continue to operate independently, ensuring continuous availability of critical services.

Reduced Network Costs

Local processing in edge networks diminishes the need for transmitting large volumes of data over the network. This not only optimizes bandwidth usage but also contributes to cost savings in network infrastructure.

Privacy and Security

Some sensitive data can be processed locally at the edge, addressing privacy and security concerns by minimizing the transmission of sensitive information over the network. This improves data privacy and security compliance, especially in industries with stringent regulations.

In this era of digital transformation, the network edge stands as a gateway to a more connected, efficient, and responsive future.

Related Articles:

How Does Edge Switch Make an Importance in Edge Network?

How 400G Ethernet Influences Enterprise Networks?

Since the IEEE approved the relevant 802.3bs standard in 2017, 400G Ethernet (400GbE) has become the talk of the town. The main reason is this technology's ability to beat existing solutions by a mile: its implementation gives current data transfer speeds a fourfold increase. Cloud service providers and network infrastructure vendors are making vigorous efforts to speed up deployment. However, a number of challenges can hamper its effective implementation and, hence, its adoption.

In this article, we will take a detailed look at the opportunities and challenges linked to the successful implementation of 400G Ethernet enterprise networks. This will provide a clear picture of the impact this technology will have on large-scale organizations.

Opportunities for 400G Ethernet Enterprise Networks

  • Better management of the traffic over video streaming services
  • Facilitates IoT device requirements
  • Improved data transmission density

How can 400G Ethernet assist enterprise networks in handling growing traffic demands?

Rise of 5G connectivity

Rising traffic and bandwidth demands are compelling CSPs to rapidly adopt 5G at both the business and the customer end. A successful implementation requires a massive increase in bandwidth to cater for 5G backhaul. In addition, 400G can provide CSPs with greater density in small-cell deployments. 5G also requires cloud data centers to be brought closer to users and devices, which streamlines the edge computing side (handling time-sensitive data), another game-changer in this area.

Data Centers Handling Video Streaming Services Traffic

The introduction of 400GbE has brought a great opportunity for the data centers working behind video streaming services such as content delivery networks (CDNs), because the growing demand for bandwidth is getting out of hand with current technology. As the number of users has increased, the introduction of better-quality streams like HD and 4K has put additional pressure on data consumption. The successful implementation of 400GbE would therefore come as a sigh of relief for these data centers. Apart from faster data transfer, issues like jitter will be reduced, and transferring large amounts of data over a single wavelength will also bring down maintenance costs.
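Some rough arithmetic illustrates the pressure on port speeds. Assuming average bitrates of about 5 Mbps for an HD stream and 25 Mbps for a 4K stream (illustrative figures, not measurements), the concurrent-stream headroom of a single port works out as follows:

```python
# Rough headroom math for a streaming/CDN port (assumed average bitrates):
# how many concurrent streams fit through one 100G vs. one 400G port.

bitrate_mbps = {"HD": 5, "4K": 25}  # assumed per-stream averages

for port_gbps in (100, 400):
    streams = {q: port_gbps * 1000 // b for q, b in bitrate_mbps.items()}
    print(f"{port_gbps}G port: {streams['HD']:,} HD or {streams['4K']:,} 4K streams")
```

A fourfold jump in port speed translates directly into four times the concurrent audience per port, before any congestion or protocol overhead is considered.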

High-Performance Computing (HPC)

High-performance computing is applied in every industry sub-vertical, whether healthcare, retail, oil & gas, or weather forecasting. Each of these fields requires real-time analysis of data, and that demand is going to be a driver for 400G growth. The combined power of HPC and 400G will bring out every bit of performance from the infrastructure, leading to financial and operational efficiency.

Addressing the Internet of Things (IoT) Traffic Demands

Another opportunity that resides in this solution is for data centers to manage IoT needs. Data generated by individual IoT devices is not large; it is the aggregation of connections that actually hurts. Working together, these devices open new pathways over internet and Ethernet networks, which leads to an exponential increase in traffic. A fourfold increase in data transfer speed will make it considerably easier for the relevant data centers to gain the upper hand in this race.

Greater Density for Hyperscale Data Centers

To meet increasing data needs, the number of data centers is also seeing a considerable increase. A look at the relevant stats reveals that 111 new hyperscale data centers were set up during the last two years, 52 of which were initiated during peak COVID times, when logistical issues were also seeing an unprecedented increase. In view of this, every data center coming to the fore is looking to set up 400GbE. The greater density in fiber, racks, and switches provided by 400GbE would help them incorporate huge and complex computing and networking requirements while minimizing their ESG footprint.

Easier Said Than Done: What Are the Challenges in 400G Ethernet Technology?

Below are some of the challenges enterprise data centers are facing in 400G implementation.

Cost and Power Consumption

Today’s ecosystem of 400G transceivers and DSPs is power-intensive. Currently, some transceivers don’t support the latest MSA; they are developed uniquely by different vendors using proprietary technology.

Overall, the aim is to reduce $/gigabit and watts/gigabit.

The Need for Real-World Networking Plugfests

Despite the standard being approved by the IEEE, a number of modifications still need to be made in areas like specifications, manufacturing, and design. Although the tests conducted so far have shown promising results, interoperability needs to be proven in real-world networking environments. This would show how the technology will actually perform in enterprise networks and highlight any issues faced at any layer of the network.

Transceiver Reliability

Transceiver reliability also comes as a major challenge. Currently, manufacturers are finding it hard to meet the device power budget, mainly because of the relatively old QSFP transceiver form-factor design, which was originally conceived for 40GbE. Problems in meeting the device power budget lead to issues like heating, optical distortion, and packet loss.

The Transition from NRZ to PAM-4

Furthermore, the shift from binary non-return-to-zero (NRZ) signaling to 4-level pulse amplitude modulation (PAM-4) with the introduction of 400GbE also poses a challenge for encoding and decoding. NRZ was a familiar optical coding scheme, whereas PAM-4 requires extensive hardware and an enhanced level of sophistication. Mastering this form of coding will take time, even for a single manufacturer.
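To see the difference concretely, here is a minimal sketch of the two line codes: NRZ maps one bit per symbol onto two levels, while PAM-4 maps two bits per symbol onto four levels, so the same bit rate needs only half the symbol rate.

```python
# Minimal line-coding sketch: NRZ carries 1 bit/symbol on two levels;
# PAM-4 carries 2 bits/symbol on four levels (Gray-coded here).

def nrz_encode(bits: str) -> list[int]:
    return [1 if b == "1" else -1 for b in bits]

PAM4_LEVELS = {"00": -3, "01": -1, "11": 1, "10": 3}  # Gray mapping

def pam4_encode(bits: str) -> list[int]:
    pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    return [PAM4_LEVELS[p] for p in pairs]

bits = "11010010"
print(nrz_encode(bits))   # 8 symbols: [1, 1, -1, 1, -1, -1, 1, -1]
print(pam4_encode(bits))  # 4 symbols: [1, -1, -3, 3]
```

The price of the tighter level spacing is a worse signal-to-noise ratio per symbol, which is exactly why PAM-4 hardware is more demanding.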

Greater Risk of Link Flaps

Enterprise use of 400GbE also increases the risk of link flaps, the phenomenon in which an optical connection rapidly and repeatedly drops. Whenever such a scenario occurs, auto-negotiation and link training are performed before data is allowed to flow again. With 400GbE, link flaps can occur for a number of additional reasons, such as problems with the switch, design problems with the transceiver, or heat.
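Link flaps are commonly tamed with some form of dampening. The toy sketch below (a generic illustration, not any vendor's implementation) suppresses an interface that bounces more than a set number of times within a sliding window:

```python
# Toy link-flap dampening: suppress an interface that bounces more than
# `max_flaps` times within a sliding `window` of seconds.

class FlapDamper:
    def __init__(self, max_flaps: int = 5, window: float = 10.0):
        self.max_flaps = max_flaps
        self.window = window
        self.events: list[float] = []

    def record_flap(self, now: float) -> bool:
        """Record one down/up transition; return True if the link should be suppressed."""
        self.events.append(now)
        # Keep only the flaps that fall inside the sliding window.
        self.events = [t for t in self.events if now - t <= self.window]
        return len(self.events) > self.max_flaps

damper = FlapDamper()
for t in range(7):  # seven flaps in seven seconds
    suppressed = damper.record_flap(now=float(t))
print("suppressed:", suppressed)  # True: the link is held down to stabilize
```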

Conclusion

The deployment of 400GbE enterprise networks is undoubtedly going to ease management for cloud service providers and networking vendors, but it is still a bumpy road. With modernization and rapid advances in technology, scalability will become much easier for data centers, yet we remain a long way from a fully successful implementation. Even with higher data transfer rates easing traffic management, risks around fiber alignment and packet loss still need to be tackled.

Article Source: How 400G Ethernet Influences Enterprise Networks?

Related Articles:

PAM4 in 400G Ethernet application and solutions

400G OTN Technologies: Single-Carrier, Dual-Carrier and Quad-Carrier

Coherent Optics and 400G Applications

In today’s high-tech, data-driven environment, network operators face increasing demand to support ever-rising data traffic while keeping capital and operating expenditures in check. Incremental advances in bandwidth component technology, coherent detection, and optical networking have seen the rise of coherent interfaces that allow for efficient control as well as lower cost, power, and footprint.

Below, we have discussed more about 400G, coherent optics, and how the two are transforming data communication and network infrastructures in a way that’s beneficial for clients and network service providers.

What is 400G?

400G is the latest generation of cloud infrastructure, representing a fourfold increase in maximum data-transfer speed over the previous standard of 100G. Besides being faster, 400G has more fiber lanes, which allows for better throughput (the quantity of data handled at a time). Data centers are therefore shifting to 400G infrastructure to deliver new user experiences with innovative services such as augmented reality, virtual gaming, and VR.

Simply put, data centers are like an expressway interchange that receives and directs information to various destinations, and 400G is an advancement to the interchange that adds more lanes and a higher speed limit. This not only makes 400G the go-to cloud infrastructure but also the next big thing in optical networks.


What is Coherent Optics?

Coherent optical transmission, or coherent optics, is a technique that modulates both the amplitude and the phase of light, transmitting across two polarizations, to transport significantly more information through a fiber-optic cable. Coherent optics also provides faster bit rates, greater flexibility, simpler photonic line systems, and advanced optical performance.

This technology forms the basis of the industry’s drive to embrace network transfer speeds of 100G and beyond while delivering terabits of data across one fiber pair. When appropriately implemented, coherent optics solves the capacity issues that network providers are experiencing and allows for increased scalability from 100G to 400G and beyond per signal carrier, delivering more data throughput at a relatively lower cost per bit.
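Some back-of-the-envelope arithmetic shows how modulating amplitude and phase across two polarizations multiplies capacity. The figures below are illustrative, assuming a 400ZR-class dual-polarization 16-QAM carrier at roughly 60 Gbaud:

```python
import math

# Rough capacity arithmetic for a coherent DP-16QAM carrier
# (illustrative figures, roughly 400ZR-class).

symbol_rate_gbaud = 60       # assumed baud rate
constellation_points = 16    # 16-QAM: combined amplitude + phase states
polarizations = 2            # dual polarization doubles throughput

bits_per_symbol = math.log2(constellation_points)  # 4 bits per symbol
raw_rate = symbol_rate_gbaud * bits_per_symbol * polarizations
print(f"raw line rate: {raw_rate:.0f} Gb/s")              # 480 Gb/s
print(f"client share: {400 / raw_rate:.1%} of raw rate")  # rest is FEC headroom
```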


Fundamentals of Coherent Optics Communication

Before we look at the main properties of coherent optics communication, let’s first review the brief development of this data transmission technique. Fiber-optic systems came to market in the mid-1970s, and enormous progress has been realized since then. Subsequent technologies sought to solve some of the major communication problems witnessed at the time, such as dispersion issues and high optical fiber losses.

And though coherent optical communication using heterodyne detection was proposed as early as 1970, it did not become popular at first because the intensity modulation/direct detection (IM-DD) scheme dominated optical fiber communication systems. Fast-forward to the early 2000s, when fifth-generation optical systems entered the market with one major focus: making WDM systems spectrally efficient. Further advances through 2005 brought digital-coherent technology and space-division multiplexing to light.

Now that you know a bit about the development of coherent optical technology, here are some of the critical attributes of this data transmission technology.

  • High-gain soft-decision FEC (forward error correction): This enables signals to traverse longer distances without the need for several subsequent regenerator points. The results are more margin, less equipment, simpler photonic lines, and reduced costs.
  • Strong mitigation of dispersion: Coherent processors account for dispersion effects once the signals have been transmitted across the fiber. The advanced digital signal processors also help avoid the headaches of planning dispersion maps and budgeting for polarization mode dispersion (PMD).
  • Programmability: The technology can be adjusted to suit a wide range of networks and applications. One card can support different baud rates or multiple modulation formats, allowing operators to choose from various line rates.

The Rise of High-Performance 400G Coherent Pluggables

With 400G applications, two streams of pluggable coherent optics are emerging. The first is a CFP2-based solution with 1000+ km reach capability, while the second is a QSFP-DD ZR solution for Ethernet and DCI applications. Both streams come with measurement and test challenges in meeting rigorous technical specifications and guaranteeing painless integration and deployment in an open network ecosystem.

When testing these 400G coherent optical transceivers and their sub-components, there’s a need for test equipment capable of producing clean signals and analyzing them, with a measurement bandwidth of more than 40 GHz. For dual-polarization in-phase and quadrature (IQ) signals, the stimulus and analysis sides need varying pulse shapes and modulation schemes on the four synchronized channels. This is achieved using instruments based on high-speed DACs (digital-to-analog converters) and ADCs (analog-to-digital converters). Increasing test efficiency requires modern tools that provide a comprehensive set of procedures, including interfaces that can work with automated algorithms.

Coherent Optics Interfaces and 400G Architectures

Supporting transport optics in form factors similar to client optics is crucial for network operators because it allows for simpler and cost-effective architectures. The recent industry trends toward open line systems also mean these transport optics can be plugged directly into the router without requiring an external transmission system.

Some network operators are also adopting 400G architectures, and with standardized, interoperable coherent interfaces, more deployments and use cases are coming to light. Beyond DCI, several application standards, such as Open ROADM and OpenZR+, now offer network operators increased performance and functionality without sacrificing interoperability between modules.

Article Source:Coherent Optics and 400G Applications

Related Articles:
Typical Scenarios for 400G Network: A Detailed Look into the Application Scenarios
How 400G Ethernet Influences Enterprise Networks?
ROADM for 400G WDM Transmission

400G Multimode Fiber: 400G SR4.2 vs 400G SR8

Cloud and AI applications are driving demand for data rates beyond 100 Gb/s, moving to high-speed and low-power 400 Gb/s interconnects. The optical fiber industry is responding by developing two IEEE 400G Ethernet standards, namely 400GBASE-SR4.2 and 400GBASE-SR8, to support the short-reach application space inside the data center. This article will elaborate on the two standards and their comparison.

400GBASE-SR4.2

400GBASE-SR4.2, also called 400GBASE-BD4.2, is a 4-pair, 2-wavelength multimode solution that supports reaches of 70m (OM3), 100m (OM4), and 150m (OM5). It is not only the first instance of an IEEE 802.3 solution that employs both multiple pairs of fibers and multiple wavelengths, but also the first Ethernet standard to use two short wavelengths to double multimode fiber capacity from 50 Gb/s to 100 Gb/s per fiber.

400GBASE-SR4.2 operates over the same type of cabling used to support 40GBASE-SR4, 100GBASE-SR4, and 200GBASE-SR4. It uses bidirectional transmission on each fiber, with the two wavelengths traveling in opposite directions. As such, each active position at the transceiver is both a transmitter and a receiver, which means 400GBASE-SR4.2 has eight optical transmitters and eight optical receivers in a bidirectional optical configuration.

The optical lane arrangement is shown as follows. The leftmost four positions labeled TR transmit wavelength λ1 (850nm) and receive wavelength λ2 (910nm). Conversely, the rightmost four positions labeled RT receive wavelength λ1 and transmit wavelength λ2.

400GBASE-SR4.2 fiber interface
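The lane table can be restated programmatically. The sketch below simply encodes the arrangement described above (an illustration, not an API):

```python
# Restating the 400GBASE-SR4.2 lane arrangement: four "TR" positions
# transmit λ1 (850 nm) and receive λ2 (910 nm); four "RT" positions do
# the reverse. Every fiber thus carries both wavelengths, one per direction.

LAMBDA1_NM, LAMBDA2_NM = 850, 910

lanes = [{"position": f"TR{i}", "tx_nm": LAMBDA1_NM, "rx_nm": LAMBDA2_NM}
         for i in range(1, 5)]
lanes += [{"position": f"RT{i}", "tx_nm": LAMBDA2_NM, "rx_nm": LAMBDA1_NM}
          for i in range(1, 5)]

per_fiber_gbps = 2 * 50  # two 50G wavelengths per fiber, one each way
print(f"{len(lanes)} bidirectional positions, {per_fiber_gbps} Gb/s per fiber")
print(f"aggregate: {len(lanes) * 50} Gb/s in each direction")
```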

400GBASE-SR8

400GBASE-SR8 is an 8-pair, 1-wavelength multimode solution that supports reaches of 70m (OM3) and 100m (OM4 & OM5). It is the first IEEE fiber interface to use eight pairs of fibers. Unlike 400GBASE-SR4.2, it operates over a single wavelength (850nm), with each pair supporting 50 Gb/s transmission. In addition, it has two variants of optical lane arrangement: one uses the 24-fiber MPO, configured as two rows of 12 fibers, and the other uses a single-row MPO-16.

400GBASE-SR8 fiber interface variant 1
400GBASE-SR8 fiber interface variant 2

400GBASE-SR8 offers the flexibility of fiber shuffling with 50G/100G/200G configurations. It also supports breakout at different speeds for various applications such as compute, storage, flash, GPU, and TPU. 400G-SR8 QSFP-DD/OSFP transceivers can be used as 400GBASE-SR8, 2x200GBASE-SR4, 4x100GBASE-SR2, or 8x50GBASE-SR.

400G SR4.2 vs. 400G SR8

As multimode solutions for 400G Ethernet, 400GBASE-SR4.2 and 400GBASE-SR8 share some features, but they also differ in a number of ways as discussed in the previous section.

The following table shows a clear picture of how they compare to each other.

                           400GBASE-SR4.2                   400GBASE-SR8
Standard                   IEEE 802.3cm                     IEEE 802.3cm (breakout: 802.3cd)
Max reach                  150m over OM5                    100m over OM4/OM5
Fibers                     8 fibers                         16 fibers (ribbon patch cord)
Wavelengths                2 (850nm and 910nm)              1 (850nm)
BiDi technology            Supported                        Not supported
Signal modulation format   PAM4 signaling                   PAM4 signaling
Laser                      VCSEL                            VCSEL
Form factor                QSFP-DD, OSFP                    QSFP-DD, OSFP

400GBASE-SR8 is technically simple but requires a ribbon patch cord with 16 fibers. It is usually built with 8 VCSEL lasers and doesn’t include any gearbox, so the overall cost of modules and fibers remains low. By contrast, 400GBASE-SR4.2 is technically more complex so the overall cost of related fibers or modules is higher, but it can support a longer reach.

In addition, 400GBASE-SR8 offers both flexibility and higher density. It supports fiber shuffling with 50G/100G/200G configurations and fanout at different I/O speeds for various applications. A 400G-SR8 QSFP-DD transceiver can be used as 400GBASE-SR8, 2x200GBASE-SR4, 4x100GBASE-SR2, or 8x50GBASE-SR.

400G SR4.2 & 400G SR8: Boosting Higher Speed Ethernet

As multimode fiber continues to evolve to serve growing demands for speed and capacity, both 400GBASE-SR4.2 and 400GBASE-SR8 help boost 400G Ethernet and scale up multimode fiber links to ensure the viability of optical solutions for various demanding applications.

The two IEEE 802.3cm standards provide a smooth evolution path for Ethernet, boosting cloud-based services and applications. Future advances point toward the ability to support even higher data rates as they are upgraded to the next level. The data center industry will take advantage of the latest multimode fiber technology, such as OM5 fiber, and use multiple wavelengths to transmit 100 Gb/s and 400 Gb/s over short reaches of 150 meters or more.

Beyond the 2021-2022 timeframe, once an 800 Gb/s Ethernet standard is finalized, more advanced two-wavelength operation could create an 800 Gb/s four-pair link, while a single wavelength could support an 800 Gb/s eight-pair link. In this sense, 400GBASE-SR4.2 and 400GBASE-SR8 are setting the pace for a promising future.

Article Source: 400G Multimode Fiber: 400G SR4.2 vs 400G SR8

Related Articles:

400G Modules: Comparing 400GBASE-LR8 and 400GBASE-LR4
400G Optics in Hyperscale Data Centers
How 400G Has Transformed Data Centers

Importance of FEC for 400G


The rapid adoption of 400G technologies has seen a spike in bandwidth demands and a low tolerance for errors and latency in data transmission. Data centers are now rethinking the design of data communication systems to expand the available bandwidth while improving transmission quality.

Meeting this goal can be quite challenging, considering that improving one aspect of data transmission consequently hurts another. However, one solution seems to stand out from the rest as far as enabling reliable, efficient, and high-quality data transmission is concerned. We’ve discussed more on Forward Error Correction (FEC) and 400G technology in the sections below, including the FEC considerations for 400Gbps Ethernet.

What Is FEC?

Forward Error Correction is an error rectification method used in digital signals to improve data reliability. The technique is used to detect and correct errors in data being transmitted without retransmitting the data.

FEC introduces redundant data, the error-correcting code, before data transmission takes place. The redundant bits are complex functions of the original information and travel alongside it, since an error can appear anywhere in the transmitted samples. The receiver then corrects errors without requesting retransmission, reconstructing the original data from the parts that arrive without apparent errors.

FEC codes can also generate bit-error-rate signals used as feedback to fine-tune the analog receiving electronics. The FEC code design determines the number of corrupted bits that can be corrected. Block codes and convolutional codes are the two widely used FEC categories: convolutional codes handle arbitrary-length data and are typically decoded with the Viterbi algorithm, while block codes handle fixed-size data packets and are decoded in time polynomial in the block length.
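To make the block-code idea concrete, here is a toy example: a Hamming(7,4) encoder/decoder that corrects any single-bit error without retransmission. Real 400G links use a much stronger Reed-Solomon code, but the principle is identical.

```python
# Toy forward error correction: Hamming(7,4) adds 3 parity bits to every
# 4 data bits and corrects any single-bit error at the receiver.

def encode(d: list[int]) -> list[int]:
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # positions: p1 p2 d1 p3 d2 d3 d4

def decode(c: list[int]) -> list[int]:
    # Recompute parity checks; the syndrome is the 1-based error position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:                       # non-zero syndrome: flip the corrupted bit
        c = c.copy()
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # extract d1..d4

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                      # simulate a single bit error in transit
assert decode(sent) == word       # corrected without any retransmission
print("corrected:", decode(sent))
```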


What Is 400G?

This is the next generation of cloud infrastructure, widely used by high-traffic-volume data centers, telecommunication service providers, and other large enterprises with relentless data transmission needs. Rapidly increasing network traffic has seen network carriers continually face bandwidth challenges. This exponential growth in traffic is driven by increased deployments of machine learning, cloud computing, artificial intelligence (AI), and IoT devices.

Compared to the previous 100G solution, 400G, also known as 400GbE or 400 Gb/s, is four times faster: it transmits data at 400 billion bits per second, which is why it’s finding application in high-speed, high-performance deployments.

The 400G technology also delivers the power, data density, and efficiency required for cutting-edge technologies such as virtual reality (VR), augmented reality (AR), 5G, and 4K video streaming. Besides consuming less power, the speeds also support scale-out and scale-up architectures by providing high density, low-cost-per-bit, and reliable throughput.

Why 400G Requires FEC

Several data centers are adopting 400 Gigabit Ethernet, thanks to the faster network speeds and expanded use cases that allow for new business opportunities. This 400GE data transmission standard uses the PAM4 technology, which offers twice the transmission speed of NRZ technology used for 100GE.

The increased speed and convenience of PAM4 also come with challenges. The PAM4 transmission speed is twice that of NRZ, but its signal levels are packed closer together (each PAM4 eye has roughly one-third the amplitude of an NRZ eye). This degrades the signal-to-noise ratio (SNR), so 400G transmissions are more susceptible to distortion.

Therefore, forward error correction (FEC) is used to solve the waveform distortion challenge common in 400GE transmission. That said, the actual transmission rate of a 400G Ethernet link is 425 Gbps, with the additional 25 Gbps used to carry the FEC overhead. 400GE elements, such as DR4 and FR4 optics, exhibit transmission errors, which FEC helps rectify.
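That 425 Gbps figure can be reproduced with simple arithmetic, assuming the 256b/257b transcoding and RS(544,514) Reed-Solomon FEC defined for 400G Ethernet:

```python
# Where 425 Gbps comes from: 400G of client data is transcoded 256b/257b,
# then wrapped in RS(544,514) forward error correction; each step slightly
# inflates the rate that must travel on the wire.

client_rate = 400e9                       # bits/s of Ethernet payload
after_transcode = client_rate * 257 / 256
line_rate = after_transcode * 544 / 514   # RS adds 30 parity symbols per 514

print(f"line rate: {line_rate / 1e9:.1f} Gbps")              # 425.0 Gbps
print(f"total overhead: {line_rate / client_rate - 1:.2%}")  # 6.25%
```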

FEC Considerations for 400Gbps Ethernet

Under the 802.3bj standards, FEC-related latency is typically targeted at 100ns or less. Receiving an FEC frame takes approximately 50ns, with the rest of the time budget used for decoding. This FEC latency target is practical and achievable.

Using a similar or the same FEC code for 400GbE transmission makes it possible to achieve lower latency. But when a higher-coding-gain FEC is required, e.g., at the PMD level, one can trade off FEC latency for the desired coding gain. It’s therefore recommended to keep a similar latency target (preferably 100ns) while pushing for a higher FEC coding gain.

Given that PAM4 modulation is used, FEC’s target coding gain (CG) could be over 8dB. And since soft-decision FEC comes with excessive power consumption, it’s not often preferred for 400GE deployments. Similarly, conventional block codes with their limited latency need a higher overclocking ratio to achieve the target.

Assuming that a transcoding scheme similar to that used in 802.3bj is included, the overclocking ratio should be less than 10%. This helps minimize the line rate increase while ensuring sufficient coding gain with limited latency.

So, under a 100ns latency target and a less-than-10% overclocking ratio, FEC codes with about 8.5dB coding gain are realizable for 400GE transmission. Similarly, you can employ M (where M>1) independent encoders for M-interleaved block codes, instead of using parallel encoders, to achieve 400G throughput.

Conclusion

400GE transmission offers several benefits to data centers and large enterprises that rely on high-speed data transmission for efficient operation. And while this 400G technology is highly reliable, it introduces some transmission errors that can be solved effectively using forward error correction techniques. There are also some FEC considerations for 400G Ethernet, most of which rely on your unique data transmission and network needs.



Article Source: Importance of FEC for 400G

Related Articles:
How 400G Ethernet Influences Enterprise Networks?
How Is 5G Pushing the 400G Network Transformation?
400G Transceiver, DAC, or AOC: How to Choose?

ROADM for 400G WDM Transmission

As global optical networks advance, there is an increasing necessity for new technologies such as 400G that meet the demands of network operators. Video streaming, surging data volumes, 5G network, remote working, and ever-growing business necessities create extreme bandwidth demands.

Network operators and data centers are also embracing WDM transmission to boost data transfer speeds, increase bandwidth, and deliver a better user experience. And to solve some of the common 400G WDM transmission problems, such as reduced transmission reach, ROADMs are being deployed. Below, we discuss ROADM for 400G WDM transmission in more detail.

Reconfigurable Optical Add-drop Multiplexer (ROADM) Technology

A ROADM is a device with access to all wavelengths on a fiber line. Introduced in the early 2000s, ROADMs allow for the remote configuration and reconfiguration of A-Z lightpaths. The technology makes it possible to block, add, redirect, or pass modulated light in the fiber-optic network depending on the particular wavelength.

ROADMs are employed in systems that utilize wavelength division multiplexing (WDM), and they support more than two directions at a site for optical mesh-based networking. Unlike its predecessor, the fixed OADM, a ROADM can adjust the add/drop vs. pass-through configuration whenever traffic patterns change.

As a result, operations are simplified by automating connections through an intermediate site. It’s unnecessary to deploy technicians to perform manual patches in response to a new wavelength or to alter a wavelength’s path. The result is optimized network traffic, where bandwidth demands are met without incurring extra costs.
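Conceptually, a ROADM node's per-wavelength behavior is just a remotely editable table. The sketch below is a toy model meant to illustrate remote reconfiguration, not any vendor's API:

```python
from enum import Enum

# Toy ROADM model: each DWDM channel maps to an action that can be
# changed remotely, with no manual re-patching on site.

class Action(Enum):
    PASS = "pass-through"
    ADD = "add"
    DROP = "drop"
    BLOCK = "block"

class RoadmNode:
    def __init__(self, name: str):
        self.name = name
        self.config: dict[int, Action] = {}  # channel number -> action

    def configure(self, channel: int, action: Action) -> None:
        """Remote reconfiguration: no technician or truck roll required."""
        self.config[channel] = action

    def handle(self, channel: int) -> Action:
        return self.config.get(channel, Action.PASS)  # default: express path

node = RoadmNode("metro-site-7")
node.configure(channel=32, action=Action.DROP)   # hand channel 32 to a client
node.configure(channel=40, action=Action.BLOCK)  # block channel 40 entirely
print(node.handle(32).value, node.handle(40).value, node.handle(10).value)
```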


Overview of Open ROADM

Open ROADM is a 400G pluggable solution that champions cross-vendor interoperability for optical equipment, including ROADMs, transponders, and pluggable optics. It defines optical interoperability requirements for ROADM and comprises the hardware devices that manage and route traffic over fiber-optic lines.

Initially, Open ROADM was designed to address the rise in data traffic on wireless networks experienced between 2007 and 2015. The major components of Open ROADM – ROADM switch, pluggable optics, and transponder – are controllable via an open standards-based API accessible through an SDN Controller.

One of the main objectives of Open ROADM is to ensure network operators and vendors devise a universal approach to designing networks that are flexible, scalable, and cost-effective. It also offers a standard model to streamline the management of multi-vendor optical network infrastructure.

400G and WDM Transmission

WDM transmission is a multiplexing technique that carries several optical signals through a single optical fiber by assigning each to a different wavelength of laser light. This technology allows different data streams to travel in both directions over a fiber network, increasing bandwidth and reducing the number of fibers used in the primary network or transmission line.

With 400G technology seeing widespread adoption in various industries, there’s a need for optical fiber networking systems to adapt and support the increasing data speeds and capacity. WDM transmission technique offers this convenience and is considered a technology of choice for transmitting larger amounts of data across networks/sites. WDM-based networks can also hold various data traffic at different speeds over an optical channel, allowing for increased flexibility.
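The capacity argument behind WDM is simple multiplication, as the short calculation below shows (illustrative figures: a C-band system with 64 channels on a 75 GHz grid, each carrying one 400G coherent carrier):

```python
# Why WDM multiplies fiber capacity (illustrative figures): many carriers,
# each on its own wavelength, share a single fiber pair.

channels = 64                  # assumed channel count on a 75 GHz DWDM grid
rate_per_channel_gbps = 400    # one 400G coherent carrier per wavelength

fiber_capacity_tbps = channels * rate_per_channel_gbps / 1000
print(f"{fiber_capacity_tbps:.1f} Tb/s per fiber pair")  # 25.6 Tb/s
```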

400G WDM still faces a number of challenges. For instance, the high symbol rate stresses the DAC/ADC in terms of bandwidth, while high-order quadrature amplitude modulation (QAM) stresses the DAC/ADC in terms of its ENOB (effective number of bits).

As far as transmission performance is concerned, the high-order QAM requires more optical signal-to-noise ratio (OSNR) at the receiver side, which reduces the transmission reach. Additionally, it’s more sensitive to the accumulation of linear and non-linear phase noise. Most of these constraints can be solved with the use of ROADM architectures. We’ve discussed more below.


Open ROADM MSA and the ROADM Architecture for 400G WDM

The Open ROADM MSA defines interoperability specifications for ROADM switches, pluggable optics, and transponders. Most ROADMs on the market are proprietary devices built by specific suppliers, which makes interoperability challenging. The Open ROADM MSA therefore seeks to provide the technical foundation for deploying networks with increased flexibility.

In other words, Open ROADM aims at disaggregating the data network by allowing for the coexistence of multiple transponders and ROADM vendors with a few restrictions. This can be quite helpful for 400G WDM systems, especially when lead-time and inventory issues arise, as the ability to mix & match can help eliminate delays.

By leveraging WDM for fiber gain as well as optical line systems with ROADMs, network operators can design virtual fiber paths between two points over some complex fiber topologies. That is, ROADMs introduce a logical transport underlay of single-hop router connections that can be optimized to suit the IP traffic topology. These aspects play a critical role in enhancing 400G adoption that offers the much-needed capacity-reach, flexibility, and efficiency for network operators.

That said, ROADMs have evolved over the years to support flexible-grid WSS technology. The most basic ROADM architecture uses fixed filters for add/drop, while other architectures offer flexibility in wavelength assignment (color) or the option to freely route wavelengths in any direction with little to no restriction. This means you can implement multi-degree networking with multiple fiber paths for every node connecting to different sites. The benefit is that you can move traffic along another path if one fiber path isn’t working.

Conclusion

As data centers and network operators work on minimizing overall IP-optical network cost, there’s a push to implement robust, flexible, and optimized IP topologies. So by utilizing 400GbE client interfaces, ROADMs for 400G can satisfy the ever-growing volume requirements of DCI and cloud operators. Similarly, deploying pluggable modules and tapping into the WDM transmission technique increases network capacity and significantly reduces power consumption while simplifying maintenance and support.

Article Source: ROADM for 400G WDM Transmission
Related Articles:

400G ZR vs. Open ROADM vs. ZR+
FS 200G/400G CFP2-DCO Transceivers Overview

FS 400G Product Family Introduction

400G ZR vs. Open ROADM vs. ZR+


As global optical networks evolve, there’s an increasing need to innovate new solutions that meet the requirements of network operators. Some of these requirements include the push to maximize fiber utilization while reducing the cost of data transmission. Over the last decade, coherent optical transmission has played a critical role in meeting these requirements, and it’s expected to progressively improve for the next stages of tech and network evolution.

Today, we have coherent pluggable solutions supporting data rates from 100G to 400G. These performance-optimized systems are designed for small spaces and are low power, making them highly attractive to data center operators. We’ve discussed the 400G ZR, Open ROADM, and ZR+ optical networking standards below.

Understanding 400G ZR vs. Open ROADM vs. ZR+

Depending on the network setups and the unique data transmission requirements, data centers can choose to deploy any of the coherent pluggable solutions. We’ve highlighted key facts about these solutions below, from definitions to differences and applications.

What Is 400G ZR?

400G ZR defines an economical and interoperable standard for transferring 400 Gigabit Ethernet over a single optical wavelength using dense wavelength division multiplexing (DWDM) and higher-order modulation such as 16-QAM. The Optical Internetworking Forum (OIF) developed this low-cost standard as one of the first to define an interoperable 400G interface.

400G ZR leverages modern coherent optical technology and supports high-capacity point-to-point data transport over DCI links of 80 to 120km. The performance of 400ZR modules is deliberately constrained to keep them cost-effective and physically small, so that the power consumption fits within compact modules such as the Quad Small Form-Factor Pluggable Double-Density (QSFP-DD) and Octal Small Form-Factor Pluggable (OSFP). This allows the use of inexpensive, modest-performance components within the modules.


What Is Open ROADM?

This is one of the 400G pluggable solutions that define interoperability specifications for Reconfigurable Optical Add/Drop Multiplexers (ROADM). The latter comprises hardware devices that manage and route data traffic transported over high-capacity fiber-optic lines. Open ROADM was first designed to combat the surge in traffic on the wireless network experienced between the years 2007 and 2015.

The key components of Open ROADM include ROADM switch, transponders, and pluggable optics – all controllable via open standards-based API accessed via an SDN Controller utilizing the NETCONF protocol. Launched in 2016, the Open ROADM initiative’s main objective was to bring together multiple vendors and network operators so they could devise an agreed approach to design networks that are scalable, cost-effective, and flexible.

This multi-source agreement (MSA) aims to shift from a traditionally closed ROADM optical transport network toward a disaggregated open transport network while allowing for centralized software control. Some of the ways to disaggregate ROADM systems include hardware disaggregation (e.g., defining a common shelf) and functional disaggregation (less about hardware, more about function).

The Open ROADM MSA went for the functional disaggregation first because of the complexity of common shelves. The team intended to focus on simplicity, concentrating on lower-performance metro systems at the time of its first release. Open ROADM handles 100-400GbE and 100-400G OTN client traffic within a typical deployment paradigm of 500km.


What Is ZR+?

ZR+ represents a series of coherent pluggable solutions with line capacities up to 400 Gb/s and reaches stretching well past the 120km specification of 400ZR. OpenZR+ maintains the classic Ethernet-only host interface of 400ZR while adding features such as an extended point-to-point reach of up to around 500km and support for multiplexing lower-rate client signals.

The recently issued MSA provides interoperable 100G, 200G, 300G, and 400G line rates over regional, metro, and long-haul distances, utilizing OpenFEC forward error correction and 100-400G optical line specifications. There’s also broad coverage for ZR+ pluggables, and these products can be deployed across routers, switches, and optical transport equipment.


400G ZR, Open ROADM, and ZR+ Differences

Target Application

400ZR and OpenZR+ were designed to satisfy the growing volume requirements of DCI and cloud operators using 100GbE/400GbE client interfaces, while Open ROADM provides a good alternative for carriers that need to transport OTN client signals (OTU4).

In other words, the 400ZR efforts concentrate on one modulation type and line rate (400G) for metro point-to-point applications. On the other hand, the OpenZR+ and Open ROADM groups concentrate on high-efficiency optical specifications capable of adjustable 100G-400G line rates and lengthier optical reaches.

400G Reach: Deployment Paradigm

400ZR modules support high-capacity data transport over DCI links of 80 to 120km. On the other hand, OpenZR+ and Open ROADM, under ideal network assumptions, can reach up to 480km in 400G mode.

Power Targets

The power consumption targets of these coherent pluggables also vary. For instance, 400ZR has a target power consumption of 15W, while Open ROADM and ZR+ have power consumption targets of no more than 25W.

Applications for 400G ZR, Open ROADM and ZR+

Each of these coherent pluggable solutions finds use cases in various settings. Below is a quick summary of the three data transfer standards and their major applications.

  • 400G ZR – frequently used for point-to-point DCI (up to 80km), simplifying the task of interconnecting data centers.
  • Open ROADM – This architecture can be deployed with equipment from different vendors coexisting in the same network, giving operators the option to use transponders from various vendors at each end of a circuit.
  • ZR+ – It provides a comprehensive, open, and flexible coherent solution in a relatively small form factor pluggable module. This standard addresses hyperscale data center applications for high-intensity edge and regional interconnects.

A Look into the Future

As digital transformation takes shape across industries, there’s an increasing demand for scalable solutions and architectures for transmitting and accessing data. The industry is also moving towards real-world deployments of 400G networks, and the three coherent pluggable solutions above are seeing wider adoption.

400ZR and the OpenZR+ specifications were developed to meet the network demands of DCI and cloud operators using 100GbE and 400GbE interfaces. On the other hand, Open ROADM offers a better alternative for carriers that want to transport OTN client signals. Currently, OpenZR+ and Open ROADM provide more benefits to data center operators than 400G ZR, and the technology is only getting better. Moving into the future, optical networking standards will continue to improve in both design and performance.

Article Source: 400G ZR vs. Open ROADM vs. ZR+
Related Articles:

ROADM for 400G WDM Transmission

400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications

FS 400G Cabling Solutions: DAC, AOC, and Fiber Cabling

How Is 5G Pushing the 400G Network Transformation?

With the rapid technological disruption and the wholesale shift to digital, several organizations are now adopting 5G networks, thanks to the fast data transfer speeds and improved network reliability. The improved connectivity also means businesses can expand on their service delivery and even enhance user experiences, increasing market competitiveness and revenue generated.

Before we look at how 5G is driving the adoption of 400G transformation, let’s first understand what 5G and 400G are and how the two are related.

What is 5G?

5G is the latest wireless technology that delivers multi-Gbps peak data speeds and ultra-low latency. This technology marks a massive shift in communication with the potential to greatly transform how data is received and transferred. The increased reliability and a more consistent user experience also enable an array of new applications and use cases extending beyond network computing to include distributed computing.

And while the future of 5G is still being written, it’s already creating a wealth of opportunities for growth and innovation across industries. The fact that tech is constantly evolving and no one knows exactly what will happen next is perhaps the most fascinating aspect of 5G and its use cases. Whatever the future holds, one thing is certain: 5G will provide far more than just a speedier internet connection. It has the potential to disrupt businesses and change how customers engage and interact with products and services.

What is 400G?

400G, or 400G Ethernet, is the next generation of cloud infrastructure, offering a fourfold jump in maximum data-transfer speed from the previous standard maximum of 100G. This technology addresses the tremendous bandwidth demands on network infrastructure providers, driven partly by the massive adoption of digital transformation initiatives.

Additionally, exponential data traffic growth driven by cloud storage, AI, and Machine Learning use cases has seen 400G become a key competitive advantage in the networking and communication world. Major data centers are also shifting to quicker, more scalable infrastructures to keep up with the ever-growing number of users, devices, and applications. Hence high-capacity connection is becoming quite critical.

How are 5G and 400G Related?

The 5G wireless technology, by default, offers greater speeds, reduced latencies, and increased data connection density. This makes it an attractive option for highly-demanding applications such as industrial IoT, smart cities, autonomous vehicles, VR, and AR. And while the 5G standard is theoretically powerful, its real-world use cases are only as good as the network architecture this wireless technology relies on.

The low-latency connections required between devices, data centers, and the cloud demand a reliable and scalable implementation of edge-computing paradigms. This extends further to demand greater fiber densification at the edge and substantially higher data rates on existing fiber networks. Luckily, 400G fills these networking gaps, allowing carriers, multiple-system operators (MSOs), and data center operators to streamline their operations to meet most 5G demands.

5G Use Cases Accelerating 400G Transformation

As the demand for data-intensive services increases, organizations are beginning to see some business sense in investing in 5G and 400G technologies. Here are some of the major 5G applications driving 400G transformation.

High-Speed Video Streaming

The rapid adoption of 5G technology is expected to take the over-the-top viewing experience to a whole new level as demand for buffer-free video streaming and high-quality content grows. Because video consumes the majority of mobile internet capacity today, the improved connectivity will create new opportunities for digital streaming companies. Video-on-demand (VOD) enthusiasts will also bid farewell to video buffering, thanks to the 5G network’s ultra-fast download speeds and super-low latency. Still, 400G Ethernet is required to provide the power, efficiency, and density needed to support these applications.

Virtual Gaming

5G promises a more captivating future for gamers. The network’s speed enhances high-definition live streaming, and thanks to ultra-low latency, 5G gaming won’t be limited to high-end devices with a lot of processing power. In other words, high-graphics games can be displayed and controlled on a mobile device while processing, retrieval, and storage are all done in the cloud.

Use cases such as low-latency Virtual Reality (VR) apps, which rely on fast feedback and near-real-time response to deliver a realistic experience, also benefit greatly from 5G. And as this wireless network becomes the standard, the quantity and sophistication of these applications are expected to soar. That is where 400G data centers and capabilities will play a critical role.

The Internet of Things (IoT)

Over the years, IoT has grown and become widely adopted across industries, from manufacturing and production to security and smart home deployments. Today, 5G and IoT are poised to allow applications that would have been unthinkable a few years ago. And while this ultra-fast wireless technology promises low latency and high network capacity to overcome the most significant barriers to IoT proliferation, the network infrastructure these applications rely on is a key determining factor. Taking 5G and IoT to the next level means solving the massive bandwidth demands while delivering high-end flexibility that gives devices near real-time ability to sense and respond.


400G Ethernet as a Gateway to High-end Optical Networks

Continuous technological improvements and the increasing amount of data generated call for solid network infrastructures that support fast, reliable, and efficient data transfer and communication. Not long ago, 100G and 200G were considered sophisticated network upgrades, and things are getting even better.

Today, operators and service providers that were among the first to deploy 400G are already reaping big from their investments. Perhaps one of the most compelling features of 400G isn’t what it offers at the moment but rather its ability to accommodate further upgrades to 800G and beyond. What’s your take on 5G and 400G, or your progress in deploying these novel technologies?

Article Source: How Is 5G Pushing the 400G Network Transformation?

Related Articles:

Typical Scenarios for 400G Network: A Detailed Look into the Application Scenarios

What’s the Current and Future Trend of 400G Ethernet?

How 400G Has Transformed Data Centers

With the rapid technological adoption witnessed in various industries across the world, data centers are adapting on the fly to keep up with the rising client expectations. History is also pointing to a data center evolution characterized by an ever-increasing change in fiber density, bandwidth, and lane speeds.

Data centers are shifting from 100G to 400G technologies in a bid to create more powerful networks that offer enhanced experiences to clients. Some of the factors pushing for 400G deployments include recent advancements in disruptive technologies such as AI, 5G, and cloud computing.

Today, forward-looking data centers that want to optimize costs while ensuring high-end compatibility and convenience have made 400G Ethernet a priority. Below, we discuss the evolution of data centers, the popular 400G form factors, and what to expect in the data center switching market as technology continues to improve.

Evolution of Data Centers

The concept of data centers dates back to the 1940s, when the world’s first programmable computer, the Electronic Numerical Integrator and Computer (ENIAC), was the apex of computational technology. It was primarily used by the US Army to compute artillery fire during the Second World War, was complex to maintain and operate, and could only run in a carefully controlled environment.

This saw the development of the first data centers, centered on intelligence and secrecy. A typical data center would have a single door and no windows, and besides the hundreds of feet of wiring and vacuum tubes, huge vents and fans were required for cooling. Refer to our data center evolution infographic to learn more about the rise of modern data centers and how technology has shaped the end-user experience.

The Limits of Ordinary Data Centers

Some of the notable players driving the data center evolution are CPU design companies like Intel and AMD. The two have been advancing processor technologies, and both boast exceptional features that can support almost any workload.

And while most of these data center processors are reliable and optimized for a broad range of applications, they aren’t engineered for emerging specialized workloads like big data analytics, machine learning, and artificial intelligence.

How 400G Has Transformed Data Centers

The move to 400 Gbps drastically transforms how data centers and data center interconnect (DCI) networks are engineered and built. The shift to 400G connections is a speculative, highly dynamic game between the client side and the network side.

Currently, two multi-source agreements compete for the top spot as the form factor of choice in the rapidly evolving 400G market: the QSFP-DD and OSFP optical/pluggable transceivers.

OSFP vs. QSFP-DD

QSFP-DD is the preferred 400G optical form factor on the client side, thanks to the various reach options available. The emergence of the Optical Internetworking Forum’s 400ZR and the trend toward combining switching and transmission in one box are the two factors driving the network side. Here, the choice of form factor narrows down to power and mechanics.

The OSFP, being a bigger module, provides plenty of useful space for DWDM components, and it can dissipate up to 15W of power. When putting coherent capabilities into a small form factor, power is critical, which gives OSFP a competitive advantage on the network side.

And despite the OSFP’s power, space, and enhanced signal-integrity performance, it’s not compatible with QSFP28 plugs. Additionally, there is no 100Gbps version of the technology, so it cannot provide an efficient transition from legacy modules. This is another reason it has not been widely adopted on the client side.

The QSFP-DD, however, is compatible with QSFP28 and QSFP plugs and has seen a lot of support in the market. The only challenge is its lower power dissipation, often capped at 12W, which makes it difficult to run a coherent ASIC (application-specific integrated circuit) efficiently and keep it cool for extended periods.

The switch to 400GE data centers is also fueled by servers’ adoption of 25GE/50GE interfaces to meet the ever-growing demand for high-speed storage access and large-scale data processing.

The Future of 400G Data Center Switches

Cloud service providers such as Amazon, Facebook, and Microsoft are still deploying 100G to reduce costs. According to a report by Dell’Oro Group, 100G is expected to peak in the next two years. But despite 100G dominating the market now, 400G shipments are expected to surpass 15 million switch ports by 2023.

In 2018, the first batch of 400G switch systems based on 12.8 Tbps chips was released. Google, then the only cloud service provider in this market, was among the earliest companies to adopt them. Fast-forward to today: other cloud service providers have entered the market, fueling the transformation even further. Cloud service companies currently make up a big chunk of 400G customers, but service providers are expected to be next in line.

Choosing a Data Center Switch

Data center switches are available in a range of form factors, designs, and switching capabilities. Depending on your use cases, you want a reliable data center switch that provides high-end flexibility and is built for the environment in which it is deployed. Critical factors to consider during the selection process are infrastructure scalability and ease of programmability. A good data center switch is power-efficient with reliable cooling and should allow for easy customization and integration with automated tools and systems. Here is an article about Data Center Switch Wiki, Usage and Buying Tips.

Article Source: How 400G Has Transformed Data Centers

Related Articles:

What’s the Current and Future Trend of 400G Ethernet?

400ZR: Enable 400G for Next-Generation DCI

400G Data Center Deployment Challenges and Solutions

As technology advances, industry applications such as video streaming, AI, and data analytics are pushing for ever-higher data speeds and massive bandwidth. 400G technology, with its next-gen optical transceivers, enables a new user experience and innovative services that process more data, faster.

Large data centers and enterprises struggling with data traffic issues embrace 400G solutions to improve operational workflows and ensure better economics. Below is a quick overview of the rise of 400G, the challenges of deploying this technology, and the possible solutions.

The Rise of 400G Data Centers

The rapid transition to 400G in data centers is changing how networks are designed and built. Key drivers of this next-gen technology are cloud computing, video streaming, AI, and 5G, all of which demand high-speed, high-bandwidth, and highly scalable solutions. The large amounts of data generated by smart devices, the Internet of Things, social media, and various as-a-Service models are also accelerating the 400G transformation.

The major benefits of upgrading to a 400G data center are the increased data capacity and network capabilities required for high-end deployments. This technology also delivers more power, efficiency, speed, and cost savings. A single 400G port is considerably cheaper than four individual 100G ports. Similarly, the increased data speeds allow for convenient scale-up and scale-out by providing high-density, reliable, and low-cost-per-bit deployments.

How 400G Works

Before we look at the deployment challenges and solutions, let's first understand how 400G works. First, the actual line rate or data transmission speed of a 400G Ethernet link is 425 Gbps. The extra 25 Gbps carries the overhead of forward error correction (FEC), which detects and corrects transmission errors.

400G adopts 4-level pulse amplitude modulation (PAM4), which carries two bits per symbol and thus doubles the data rate of today's Non-Return-to-Zero (NRZ) signaling at a given baud rate; combined with higher lane rates, this yields a four-fold increase overall. With PAM4, operators can implement four lanes of 100G or eight lanes of 50G for different form factors (i.e., OSFP and QSFP-DD). This optical transceiver architecture supports transmission of up to 400 Gbit/s over either parallel fibers or multiple wavelengths.

PAM4
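To make the rate budget above concrete, here is a minimal Python sketch. The numbers are taken from the paragraphs above; this is illustrative arithmetic, not vendor code.

```python
# Illustrative rate budget for 400GbE, following the figures above.
# 425 Gbps on the wire = 400 Gbps payload + FEC overhead.

PAYLOAD_GBPS = 400
LINE_RATE_GBPS = 425
FEC_OVERHEAD_GBPS = LINE_RATE_GBPS - PAYLOAD_GBPS  # 25 Gbps of FEC overhead

# PAM4 carries 2 bits per symbol, so each lane's baud rate is half its bit rate.
for lanes, gbps_per_lane in [(8, 50), (4, 100)]:
    baud_gbd = gbps_per_lane / 2
    print(f"{lanes} x {gbps_per_lane}G PAM4 -> {lanes * gbps_per_lane}G total, "
          f"{baud_gbd} GBd per lane")
print(f"FEC overhead: {FEC_OVERHEAD_GBPS} Gbps")
```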

Deployment Challenges & Solutions

Interoperability Between Devices

The PAM4 signaling introduced with 400G deployments creates interoperability issues between the 400G ports and legacy networking gear. That is, the existing NRZ switch ports and transceivers aren’t interoperable with PAM4. This challenge is widely experienced when deploying network breakout connections between servers, storage, and other appliances in the network.

A 400G transceiver transmits and receives over 4 lanes of 100G or 8 lanes of 50G with PAM4 signaling on both the electrical and optical interfaces. Legacy 100G transceivers, however, are built on 4 lanes of 25G NRZ signaling on both sides. The two are simply not interoperable, which calls for a transceiver-based solution.

One such solution is a 100G transceiver that supports 100G PAM4 on the optical side and 4x 25G NRZ on the electrical side, performing the re-timing between NRZ and PAM4 modulation in its internal gearbox. Examples include the QSFP28 DR and FR, which are fully interoperable with legacy 100G network gear, and the QSFP-DD DR4 & DR4+ breakout transceivers. The latter are parallel modules that accept an MPO-12 connector with breakouts to LC connectors to interface with FR or DR transceivers.

NRZ & PAM4 Interoperability Between Devices

Excessive Link Flaps

Link flaps are faults that occur during data transmission due to a series of errors or failures on the optical connection. When a flap occurs, both transceivers must perform auto-negotiation and link training (AN-LT) before data can flow again. If link flaps occur frequently, i.e., several times per minute, throughput suffers noticeably.

And while link flaps are rare with mature optical technologies, they still occur and are often caused by configuration errors, a bad cable, or defective transceivers. With 400GbE, link flaps may also arise from heat and design issues in transceiver modules or switches. Careful selection of transceivers, switches, and cables helps mitigate the problem.
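As a rough illustration of the "several times per minute" threshold mentioned above, the sketch below counts link-state transitions in a sliding window and flags a flapping port. The threshold, window, and event source are assumptions for the example, not values from any standard.

```python
# Hypothetical link-flap detector: flags a port whose link state toggles
# more than max_flaps times within a sliding window.
from collections import deque
import time

class FlapDetector:
    def __init__(self, max_flaps=5, window_s=60.0):
        self.max_flaps = max_flaps
        self.window_s = window_s
        self.events = deque()

    def record_transition(self, now=None):
        """Record one up/down transition; return True if the port is flapping."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        # Drop transitions that have aged out of the window.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_flaps

detector = FlapDetector()
# In practice, transitions would come from switch telemetry or syslog events.
print(detector.record_transition())  # False until the threshold is crossed
```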

Transceiver Reliability

Some optical transceiver manufacturers face challenges staying within the devices' power budget. The result is heat issues, which cause fiber alignment problems, packet loss, and optical distortions. Transceiver reliability problems often occur when old QSFP transceiver form factors designed for 40GbE are used at 400GbE.

Similar challenges are also witnessed with newer modules used in 400GbE systems, such as the QSFP-DD and CFP8 form factors. A solution is to stress test transceivers before deploying them in highly demanding environments. It’s also advisable to prioritize transceiver design during the selection process.

Deploying 400G in Your Data Center

Keeping pace with the ever-increasing number of devices, users, and applications in a network calls for a faster, high-capacity, and more scalable data infrastructure. 400G meets these demands and is the optimal solution for data centers and large enterprises facing network capacity and efficiency issues. The successful deployment of 400G technology in your data center or organization depends on how well you have articulated your data and networking needs.

Upgrading your network infrastructure can help relieve bottlenecks from speed and bandwidth challenges to cost constraints. However, making the most of your network upgrades depends on the deployment procedures and processes. This could mean solving the common challenges and seeking help whenever necessary.

A rule of thumb is to enlist the professional help of an IT expert who will guide you through the 400G upgrade process. The IT expert will help you choose the best transceivers, cables, routers, and switches to use and even conduct a thorough risk analysis on your entire network. That way, you’ll upgrade appropriately based on your network needs and client demands.
Article Source: 400G Data Center Deployment Challenges and Solutions
Related Articles:

NRZ vs. PAM4 Modulation Techniques
400G Multimode Fiber: 400G SR4.2 vs 400G SR8
Importance of FEC for 400G

400G Optics in Hyperscale Data Centers

Since their advent, data centers have been striving to keep up with rising bandwidth requirements. The stats show that about 3.04 exabytes of data are generated every day. For a hyperscale data center, the bandwidth requirements are massive, as its applications' scalable nature demands a preemptive approach to capacity. The introduction of 400G data centers has taken data transfer speeds to a whole new level and brought significant convenience in addressing various areas of concern. In this article, we will dig a little deeper and try to answer the following questions:

  • What are the driving factors of 400G development?
  • What are the reasons behind the use of 400G optics in hyperscale data centers?
  • What are the trends in 400G devices in large-scale data centers?

What Are the Driving Factors For 400G Development?

The driving factors for 400G development fall mainly into two categories: video streaming services and video conferencing services. Both require very high data transfer speeds to function smoothly across the globe.

Video Streaming Services

Video streaming services were already straining bandwidth requirements; then the COVID-19 pandemic forced a large part of the population to stay and work from home, which further increased the use of streaming platforms. The stats show that a medium-quality stream on Netflix consumes 0.8 GB per hour; multiply that across more than 209 million subscribers. As commuting costs fell, the savings went toward higher-quality HD and 4K streams, and what stood at 0.8 GB per hour rose to 3 GB and even 7 GB per hour. This drove the need for 400G development.

Video Conferencing Services

As COVID-19 made working from home the new norm, video conferencing services also saw a major boost. As of 2021, 20.56 million people were reported to be working from home in the US alone. As video conferencing took center stage, Zoom, which consumes 500 MB per hour, saw a huge increase in its user base. This, too, puts great pressure on data transfer capacity.

What Makes 400G Optics the Ideal Choice For Hyperscale Data Centers?

Significant Decrease in Energy and Carbon Footprint

To put it simply, 400G quadruples the data transfer speed. Compared with using four 100G ports as breakouts to deliver 400GbE, a single 400G solution reduces port costs, presents a single node at the output that minimizes the risk of failures, and lowers the energy requirement. This brings down the ESG footprint, which has become a KPI for organizations going forward.

Reduced Operational Cost

As mentioned earlier, a 400G solution requires a single 400G port, whereas addressing the same requirement via a 100G solution requires four 100G ports. On a router, four ports cost considerably more than a single port delivering the same capacity, and the same holds for power. Combined, these two factors bring operational costs down considerably.

400G Optics
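As a back-of-the-envelope illustration of the port-count argument above, the following sketch compares one 400G port with four 100G ports. Every price and wattage in it is a made-up placeholder, not a vendor figure.

```python
# Toy comparison of one 400G port vs. four 100G ports for 400G of capacity.
# All prices and wattages below are illustrative placeholders.

def totals(ports, price_per_port, watts_per_port, usd_per_kwh=0.10):
    """Return (capex, yearly energy cost in USD) for a group of ports."""
    capex = ports * price_per_port
    kwh_per_year = ports * watts_per_port * 24 * 365 / 1000
    return capex, kwh_per_year * usd_per_kwh

print(totals(1, 4000, 15))  # single 400G port (placeholder figures)
print(totals(4, 1500, 5))   # four 100G ports (placeholder figures)
```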

Trends of 400G Optics in Large-Scale Data Centers—Quick Adoption

The introduction of 400G solutions in large-scale data centers has reshaped the entire sector, owing to the enormous increase in data transfer speeds. According to research, 400G is expected to replace 100G and 200G deployments far faster than its predecessors did. Since its introduction, more and more vendors have been upgrading to network devices that support 400G. The following image depicts the technology adoption rate.

Trends of 400G Optics

Challenges Ahead

Lack of Advancement in the 400G Optical Transceiver Sector

Although the shift toward such network devices is rapid, there are a number of implementation challenges, because it is not only the devices that need upgrading but also the underlying infrastructure. Vendors are upgrading to stay ahead of the curve, but the development and maturity of optical transceivers have not yet reached the expected benchmark, and the same is true of their cost and reliability. As optical transceivers are a critical element, this is a major challenge in deploying 400G solutions.

Latency Measurement

In addition, the introduction of this solution has also made network testing and monitoring more important than ever. Latency measurement has always been a key indicator when evaluating performance. Data throughput combined with jitter and frame loss also comes as a major concern in this regard.

Investment in Network Layers

Lastly, creating a plug-and-play environment for this solution needs to become more realistic, which will require greater investment in the physical, higher-level, and network-IP component layers.

Conclusion

Rapid technological advancements have led to concepts like the Internet of Things, whose implementations require greater data transfer speeds. That, combined with the global shift to remote work, has increased traffic exponentially. Hyperscale data centers were already feeling the pressure, and the introduction of 400G data centers is a step in the right direction: a preemptive approach to serving a growing global population and an increasing number of internet users.

Article Source: 400G Optics in Hyperscale Data Centers

Related Articles:

How Many 400G Transceiver Types Are in the Market?

Global Optical Transceiver Market: Striding to High-Speed 400G Transceivers

FAQs on 400G Transceivers and Cables


400G transceivers and cables play a vital role in the process of constructing a 400G network system. Then, what is a 400G transceiver? What are the applications of QSFP-DD cables? Find answers here.

FAQs on 400G Transceivers and Cables Definition and Types

Q1: What is a 400G transceiver?

A1: 400G transceivers are optical modules that are mainly used for photoelectric conversion with a transmission rate of 400Gbps. 400G transceivers can be classified into two categories according to the applications: client-side transceivers for interconnections between the metro networks and the optical backbone, and line-side transceivers for transmission distances of 80km or even longer.

Q2: What are QSFP-DD cables?

A2: QSFP-DD cables come in two forms: a direct cable with QSFP-DD connectors on both ends, transmitting and receiving 400Gbps over a thin twinax cable or a fiber optic cable, and a breakout cable that splits one 400G signal into 2x 200G, 4x 100G, or 8x 50G, enabling interconnection within a rack or between adjacent racks.
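The breakout variants in the answer above follow simple arithmetic: the split lane rates must sum back to 400G. A minimal sketch:

```python
# Each breakout option must preserve the 400G aggregate: legs x rate = 400.
BREAKOUTS = {"2x 200G": (2, 200), "4x 100G": (4, 100), "8x 50G": (8, 50)}

for name, (legs, gbps) in BREAKOUTS.items():
    assert legs * gbps == 400
    print(f"{name}: {legs} legs x {gbps}G = {legs * gbps}G")
```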

Q3: What are the 400G transceivers packaging forms?

A3: There are mainly the following six packaging forms of 400G optical modules:

  • QSFP-DD: 400G QSFP-DD (Quad Small Form Factor Pluggable-Double Density) is an expansion of QSFP, adding one row to the original 4-channel interface for a total of 8 channels running at 50Gb/s each, giving a total bandwidth of 400Gb/s.
  • OSFP: OSFP (Octal Small Form-factor Pluggable; octal means 8) is a new interface standard and is not compatible with existing photoelectric interfaces. 400G OSFP modules are slightly larger than 400G QSFP-DD modules.
  • CFP8: CFP8 is an expansion of CFP4, with 8 channels and a correspondingly larger size.
  • COBO: COBO (Consortium for On-Board Optics) means that all optical components are placed on the PCB. COBO offers good heat dissipation and a small size. However, since it is not hot-swappable, a failed module is troublesome to repair.
  • CWDM8: CWDM8 is an extension of CWDM4 with four new center wavelengths (1351/1371/1391/1411 nm). The wavelength range is wider and the number of lasers is doubled.
  • CDFP: CDFP was born earlier, and there are three editions of the specification. CD stands for 400 in Roman numerals. With 16 channels, CDFP is relatively large.

Q4: What 400G transceivers and QSFP-DD cables are available on the market?

A4: The two tables below show the main types of 400G transceivers and cables on the market:

| 400G Transceivers | Standards | Max Cable Distance | Connector | Media | Temperature Range |
| --- | --- | --- | --- | --- | --- |
| 400G QSFP-DD SR8 | QSFP-DD MSA Compliant | 70m OM3 / 100m OM4 | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G QSFP-DD DR4 | QSFP-DD MSA, IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD XDR4/DR4+ | QSFP-DD MSA | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD FR4 | QSFP-DD MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD 2FR4 | QSFP-DD MSA, IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G QSFP-DD LR4 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD LR8 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD ER8 | QSFP-DD MSA Compliant | 40km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP SR8 | IEEE P802.3cm; IEEE 802.3cd | 100m | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G OSFP DR4 | IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP XDR4/DR4+ | / | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP FR4 | 100G Lambda MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP 2FR4 | IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G OSFP LR4 | 100G Lambda MSA | 10km | LC Duplex | SMF | 0 to 70°C |



| Cable Category | Product | Description | Reach | Temperature Range | Power Consumption |
| --- | --- | --- | --- | --- | --- |
| 400G QSFP-DD DAC | QSFP-DD to QSFP-DD DAC | Each 400G QSFP-DD uses 8x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <1.5W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 2x 200G QSFP56 DAC | Each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <0.1W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 4x 100G QSFPs DAC | Each 100G QSFPs uses 2x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <0.1W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 8x 50G SFP56 DAC | Each 50G SFP56 uses 1x 50G PAM4 electrical lane | No more than 3m | 0 to 80°C | <0.1W |
| 400G QSFP-DD AOC | QSFP-DD to QSFP-DD AOC | Each 400G QSFP-DD uses 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <10W |
| 400G QSFP-DD Breakout AOC | QSFP-DD to 2x 200G QSFP56 AOC | Each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| 400G QSFP-DD Breakout AOC | QSFP-DD to 8x 50G SFP56 AOC | Each 50G SFP56 uses 1x 50G PAM4 electrical lane | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| 400G OSFP DAC | OSFP to OSFP DAC | Each 400G OSFP uses 8x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <0.5W |
| 400G OSFP Breakout DAC | OSFP to 2x 200G QSFP56 DAC | Each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | / |
| 400G OSFP Breakout DAC | OSFP to 4x 100G QSFPs DAC | Each 100G QSFPs uses 2x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | / |
| 400G OSFP Breakout DAC | OSFP to 8x 50G SFP56 DAC | Each 50G SFP56 uses 1x 50G PAM4 electrical lane | No more than 3m | / | / |
| 400G OSFP AOC | OSFP to OSFP AOC | Each 400G OSFP uses 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <9.5W |



Q5: What do the suffixes “SR8, DR4 / XDR4, FR4 / LR4 and 2FR4” mean in 400G transceivers?

A5: The letters refer to reach, and the number refers to the number of optical channels; a short fiber-count sketch follows the list:

  • SR8: SR refers to 100m over MMF. Each of the 8 optical channels from an SR8 module is carried on separate fibers, resulting in a total of 16 fibers (8 Tx and 8 Rx).
  • DR4 / XDR4: DR / XDR refer to 500m / 2km over SMF. Each of the 4 optical channels is carried on separate fibers, resulting in a total of 4 pairs of fibers.
  • FR4 / LR4: FR4 / LR4 refer to 2km / 10km over SMF. All 4 optical channels from an FR4 / LR4 are multiplexed onto one fiber pair, resulting in a total of 2 fibers (1 Tx and 1 Rx).
  • 2FR4: 2FR4 refers to 2 x 200G-FR4 links with 2km over SMF. Each of the 200G FR4 links has 4 optical channels, multiplexed onto one fiber pair (1 Tx and 1 Rx per 200G link). A 2FR4 has 2 of these links, resulting in a total of 4 fibers, and a total of 8 optical channels.
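The fiber counts above follow directly from whether a module runs its channels on parallel fibers or multiplexes them onto one fiber pair. A minimal sketch of that arithmetic:

```python
# Fiber counts implied by the reach suffixes above (parallel vs. multiplexed).
# Counts follow the bullet list; this just makes the arithmetic explicit.

def fiber_count(optical_channels, wdm_multiplexed, links=1):
    """Parallel modules use one fiber pair per channel; WDM modules
    (FR4/LR4-style) mux all channels onto one pair per link."""
    pairs = links if wdm_multiplexed else optical_channels * links
    return 2 * pairs  # each pair = 1 Tx + 1 Rx fiber

print(fiber_count(8, wdm_multiplexed=False))          # SR8  -> 16 fibers
print(fiber_count(4, wdm_multiplexed=False))          # DR4  -> 8 fibers
print(fiber_count(4, wdm_multiplexed=True))           # FR4/LR4 -> 2 fibers
print(fiber_count(4, wdm_multiplexed=True, links=2))  # 2FR4 -> 4 fibers
```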

FAQs on 400G Transceivers and Cables Applications

Q1: What are the benefits of moving to 400G technology?

A1: 400G technology increases data throughput and maximizes the bandwidth and port density of data centers. Because it needs only a quarter of the optical fiber links, connectors, and patch panels that 100G platforms require for the same aggregate bandwidth, 400G optics can also reduce operating expenses. With these benefits, 400G transceivers and QSFP-DD cables provide ideal solutions for data centers and high-performance computing environments.

Q2: What are the applications of QSFP-DD cables?

A2: QSFP-DD cables are mainly used for short-distance 400G Ethernet connectivity in the data centers, and 400G to 2x 200G / 4x 100G / 8x 50G Ethernet applications.

Q3: 400G QSFP-DD vs 400G OSFP/CFP8: What are the differences?

A3: The table below includes detailed comparisons for the three main form factors of 400G transceivers.

| 400G Transceiver | 400G QSFP-DD | 400G OSFP | CFP8 |
| --- | --- | --- | --- |
| Application Scenario | Data center | Data center & telecom | Telecom |
| Size | 18.35mm x 89.4mm x 8.5mm | 22.58mm x 107.8mm x 13mm | 40mm x 102mm x 9.5mm |
| Max Power Consumption | 12W | 15W | 24W |
| Backward Compatibility with QSFP28 | Yes | Through adapter | No |
| Electrical Signaling (Gbps) | 8x 50G | 8x 50G | 8x 50G |
| Switch Port Density (1RU) | 36 | 36 | 16 |
| Media Type | MMF & SMF | MMF & SMF | MMF & SMF |
| Hot Pluggable | Yes | Yes | Yes |
| Thermal Management | Indirect | Direct | Indirect |
| Support 800G | No | Yes | No |



For more details about the differences, please refer to the blog: Differences Between QSFP-DD and QSFP+/QSFP28/QSFP56/OSFP/CFP8/COBO

Q4: What does it mean when an electrical or optical channel is PAM4 or NRZ in 400G transceivers?

A4: NRZ is a modulation technique that uses two voltage levels to represent logic 0 and logic 1. PAM4 uses four voltage levels to represent the four combinations of two bits: 11, 10, 01, and 00. A PAM4 signal can therefore carry twice as much data as a traditional NRZ signal at the same baud rate.

When a signal is referred to as “25G NRZ”, it means the signal is carrying data at 25 Gbps with NRZ modulation. When a signal is referred to as “50G PAM4”, or “100G PAM4”, it means the signal is carrying data at 50 Gbps, or 100 Gbps, respectively, using PAM4 modulation. The electrical connector interface of 400G transceivers is always 8x 50Gb/s PAM4 (for a total of 400Gb/s).
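For illustration, the bit-to-level mappings described above can be written out directly. This is a minimal sketch: the voltage levels are arbitrary normalized values, and real transceivers typically use Gray-coded mappings.

```python
# Minimal NRZ vs. PAM4 symbol mapping, as described above.

NRZ_LEVELS = {0: -1.0, 1: +1.0}                      # 1 bit per symbol
PAM4_LEVELS = {(0, 0): -1.0, (0, 1): -1 / 3,
               (1, 0): +1 / 3, (1, 1): +1.0}         # 2 bits per symbol

def pam4_encode(bits):
    """Group bits into pairs and map each pair to one of four levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 1, 0, 1, 0, 0, 1, 0]
print(pam4_encode(bits))  # 8 bits -> 4 symbols: half the symbols of NRZ
```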

FAQs on Using 400G Transceivers and Cables in Data Centers

Q1: Can I plug an OSFP module into a 400G QSFP-DD port, or a QSFP-DD module into an OSFP port?

A1: No. OSFP and QSFP-DD are two physically distinct form factors. If you have an OSFP system, then 400G OSFP optics must be used. If you have a QSFP-DD system, then 400G QSFP-DD optics must be used.

Q2: Can a QSFP module be plugged into a 400G QSFP-DD port?

A2: Yes. A QSFP (40G or 100G) module can be inserted into a QSFP-DD port as QSFP-DD is backward compatible with QSFP modules. When using a QSFP module in a 400G QSFP-DD port, the QSFP-DD port must be configured for a data rate of 100G (or 40G).

Q3: Is it possible to have a 400G OSFP on one end of a 400G link and a 400G QSFP-DD on the other end?

A3: Yes. OSFP and QSFP-DD describe the physical form factors of the modules. As long as the Ethernet media types are the same (i.e. both ends of the link are 400G-DR4, or 400G-FR4 etc.), 400G OSFP and 400G QSFP-DD modules will interoperate with each other.

Q4: How can I break out a 400G port and connect to 100G QSFP ports on existing platforms?

A4: There are several ways to break out a 400G port to 100G QSFP ports:

  • QSFP-DD-DR4 to 4x 100G-QSFP-DR over 500m SMF
  • QSFP-DD-XDR4 to 4x 100G-QSFP-FR over 2km SMF
  • QSFP-DD-LR4 to 4x 100G-QSFP-LR over 10km SMF
  • OSFP-400G-2FR4 to 2x QSFP-100G-CWDM4 over 2km SMF

Apart from the 400G transceivers mentioned above, 400G to 4x 100G breakout cables can also be used.

Article Source: FAQs on 400G Transceivers and Cables

Related Articles:

400G Transceiver, DAC, or AOC: How to Choose?

400G OSFP Transceiver Types Overview

100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

NIC, short for network interface card (also called a network interface controller, network adapter, or LAN adapter), allows a networking device to communicate with other networking devices. Without a NIC, networking can hardly be done. NICs come in different types and speeds, such as wireless and wired, from 10G to 100G. Among them, the 100G NIC, a product of recent years, hasn't yet taken a large market share. This post describes 100G NICs and the trends in NICs.

What Is 100G NIC?

A NIC is installed on a computer and used for communicating over a network with another computer, server, or other network device. It comes in many forms, but there are two main types: wired and wireless. Wireless NICs use wireless technologies to access the network, while wired NICs use a DAC cable or a transceiver with a fiber patch cable; the most popular wired LAN technology is Ethernet. In terms of application field, NICs divide into computer NICs and server NICs. For client computers, one NIC is needed in most cases; for servers, it makes sense to use more than one NIC to handle more network traffic. Generally, a NIC has one network interface, but some server NICs have two or more interfaces built into a single card.

Figure 1: FS 100G NIC

As data centers expand from 10G to 100G, the 25G server NIC has gained a firm foothold in the NIC market. Meanwhile, growing bandwidth demand is driving data centers toward 200G/400G, and 100G transceivers have become widespread, paving the way for 100G servers.

How to Select 100G NIC?

How do you choose the best 100G NIC from all the vendors? If you are stuck on this question, the following sections list the key recommendations and considerations.

Connector

Connector types like RJ45, LC, FC, and SC are commonly used on NICs, so check which connector type a NIC supports. Today many networks use only RJ45, making it easier to choose the right connector type than it was in the past. Even so, some networks may use a different interface, such as coax, so confirm that the card you plan to buy supports your connection before purchasing.

Bus Type

PCI is a hardware bus used for adding internal components to a computer. There are three main PCI bus types used by servers and workstations today: PCI, PCI-X, and PCI-E. Among them, PCI is the oldest; it has a fixed width of 32 bits and can handle only five devices at a time. PCI-X is an upgraded version providing more bandwidth, though with the emergence of PCI-E, PCI-X cards are gradually being replaced. PCI-E is a serial connection, so devices no longer share bandwidth as they do on a conventional bus. PCI-E cards also come in different physical sizes: x16, x8, x4, and x1. Before purchasing a 100G NIC, make sure the PCI version and slot width are compatible with your current equipment and network environment.
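For a 100G NIC specifically, the PCIe generation and lane count determine whether the slot can even feed the card. A rough sketch, using approximate per-lane throughput after 128b/130b encoding (figures are approximations, not exact specifications):

```python
# Rough check that a PCIe slot can feed a 100G NIC.
# Per-lane rates are approximate usable throughput after 128b/130b encoding.

GBPS_PER_LANE = {"PCIe 3.0": 7.88, "PCIe 4.0": 15.75}

def slot_fits(generation, lanes, nic_gbps=100.0):
    return GBPS_PER_LANE[generation] * lanes >= nic_gbps

print(slot_fits("PCIe 3.0", 16))  # True  (~126 Gbps)
print(slot_fits("PCIe 3.0", 8))   # False (~63 Gbps)
print(slot_fits("PCIe 4.0", 8))   # True  (~126 Gbps)
```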

Hot swappable

There are some NICs that can be installed and removed without shutting down the system, which helps minimize downtime by allowing faulty devices to be replaced immediately. While you are choosing your 100G NIC, be sure to check if it supports hot swapping.

Trends in NIC

NICs were commonly used in desktop computers in the 1990s and early 2000s, and they are now widely used in servers and workstations in different types and at different rates. With the popularization of wireless networking and WiFi, wireless NICs have grown in popularity; however, wired cards remain popular for relatively immobile network devices owing to their reliable connections.

NICs have been upgraded continuously for years. As data centers expand at an unprecedented pace, driving the need for higher bandwidth between servers and switches, networking is moving from 10G to 25G and even 100G. Companies like Intel and Mellanox have launched their 100G NICs in succession.

During data centers' upgrade from 10G to 100G, 25G server connectivity became popular because 100G migration can be realized with four strands of 25G, and the 25G NIC is still mainstream. However, considering that overall data center bandwidth grows quickly and hardware upgrade cycles occur every two years, Ethernet speeds can rise faster than we expect. The 400G data center is just on the horizon, and the 100G NIC stands a good chance of playing an integral role in next-generation 400G networking.

Meanwhile, the need for 100G NICs will drive demand for other network devices as well. For instance, the 100G transceiver, the device between the NIC and the network, is bound to become pervasive. 100G transceivers are provided by many brands in different types, such as CXP, CFP, and QSFP28. FS supplies a full series of compatible 100G QSFP28 and CFP transceivers that can be matched with major brands of 100G Ethernet NICs, such as Mellanox and Intel.

Conclusion

Nowadays, with the rise of 5G, the next-generation cellular technology, higher bandwidth is needed for data flows, paving the way for the 100G NIC. Accordingly, 100G transceivers and 400G network switches will be in great demand. We believe the era of 5G networks will see the popularization of the 100G NIC and usher in a new era of network performance.

Article Source: 100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

Related Articles:

400G QSFP Transceiver Types and Fiber Connections

How Many 400G Transceiver Types Are in the Market?

NRZ vs. PAM4 Modulation Techniques

Leading trends such as cloud computing and big data are driving exponential traffic growth and the rise of 400G Ethernet. Data center networks face ever-larger bandwidth demands, and innovative technologies are required for infrastructure to keep up. Currently, two signal modulation techniques are under examination for next-generation Ethernet: non-return-to-zero (NRZ) and 4-level pulse-amplitude modulation (PAM4). This article will take you through the two modulation techniques and compare them to find the optimal choice for 400G Ethernet.

NRZ and PAM4 Basics

NRZ is a modulation technique using two signal levels to represent the 1/0 information of a digital logic signal. Logic 0 is a negative voltage, and Logic 1 is a positive voltage. One bit of logic information can be transmitted or received within each clock period. The baud rate, or the speed at which a symbol can change, equals the bit rate for NRZ signals.

NRZ

PAM4 is a technology that uses four different signal levels for signal transmission and each symbol period represents 2 bits of logic information (0, 1, 2, 3). To achieve that, the waveform has 4 different levels, carrying 2 bits: 00, 01, 10 or 11, as shown below. With two bits per symbol, the baud rate is half the bit rate.

PAM4

Comparison of NRZ vs. PAM4

Bit Rate

A transmission using NRZ will have the same baud rate and bit rate, because one symbol carries one bit: a 28Gbps (gigabits per second) bit rate is equivalent to a 28GBd (gigabaud) baud rate. In contrast, because PAM4 carries 2 bits per symbol, 56Gbps PAM4 transmits on the line at 28GBd. PAM4 thus doubles the bit rate for a given baud rate over NRZ, bringing higher efficiency to high-speed optical transmission such as 400G. To be more specific, a 400Gbps Ethernet interface can be realized with eight lanes at 50Gbps or four lanes at 100Gbps using PAM4 modulation.
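The paragraph above reduces to one line of arithmetic; a quick sketch:

```python
# Baud rate = bit rate / bits per symbol, per the paragraph above.

def baud_rate_gbd(bit_rate_gbps, bits_per_symbol):
    return bit_rate_gbps / bits_per_symbol

print(baud_rate_gbd(28, 1))  # NRZ:  28 Gbps -> 28.0 GBd
print(baud_rate_gbd(56, 2))  # PAM4: 56 Gbps -> 28.0 GBd
```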

Signal Loss

PAM4 allows twice as much information to be transmitted per symbol cycle as NRZ. Therefore, at the same bitrate, PAM4 only has half the baud rate, also called symbol rate, of the NRZ signal, so the signal loss caused by the transmission channel in PAM4 signaling is greatly reduced. This key advantage of PAM4 allows the use of existing channels and interconnects at higher bit rates without doubling the baud rate and increasing the channel loss.

Signal-to-noise Ratio (SNR) and Bit Error Rate (BER)

According to the following figure, the eye height for PAM4 is 1/3 of that for NRZ, imposing a 9.54 dB link budget penalty on the SNR (signal-to-noise ratio), which impacts signal quality and introduces additional constraints in high-speed signaling. With a vertical eye opening only one-third that of NRZ, PAM4 signaling is more sensitive to noise, resulting in a higher bit error rate. However, PAM4 became practical thanks to forward error correction (FEC), which helps the link system achieve the desired BER.

NRZ vs. PAM4
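The quoted 9.54 dB link budget penalty is simply the one-third eye-height ratio expressed in decibels; a one-line check:

```python
import math

# PAM4's eye height is 1/3 of NRZ's; expressed in decibels:
penalty_db = 20 * math.log10(1 / 3)
print(round(penalty_db, 2))  # -9.54 dB, the link budget penalty cited above
```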

Power Consumption

Reducing BER in a PAM4 channel requires equalization at the Rx end and pre-compensation at the Tx end, both of which consume more power than an NRZ link at a given clock rate. This means PAM4 transceivers generate more heat at each end of the link. However, the state-of-the-art silicon photonics (SiPh) platform can effectively reduce energy consumption and can be used in 400G transceivers. For example, the FS silicon photonics 400G transceiver combines SiPh chips and PAM4 signaling, making it a cost-effective, lower-power solution for 400G data centers.

Shift from NRZ to PAM4 for 400G Ethernet

With massive amounts of data transmitted across the globe, many organizations are pursuing migration toward 400G. Initially, 16 lanes of 25G NRZ were used for 400G Ethernet, as in 400G-SR16, but the link loss and physical size of that scheme cannot meet 400G Ethernet demands. Because PAM4 enables higher bit rates at half the baud rate, designers can continue to use existing channels at 400G Ethernet data rates. As a result, PAM4 has overtaken NRZ as the preferred modulation for electrical and optical signal transmission in 400G optical modules.

Article Source: NRZ vs. PAM4 Modulation Techniques

Related Articles:
400G Data Center Deployment Challenges and Solutions
400G ZR vs. Open ROADM vs. ZR+
400G Multimode Fiber: 400G SR4.2 vs 400G SR8

400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications


400G ZR and ZR+ coherent pluggable optics have become new solutions for high-density networks with data rates from 100G to 400G featuring low power and small space. Let’s see how the latest generation of 400G ZR and 400G ZR+ optics extends the economic benefits to meet the requirements of network operators, maximizes fiber utilization, and reduces the cost of data transport.

400G ZR & ZR+: Definitions

What Is 400G ZR?

400G ZR coherent optical modules are compliant with the OIF-400ZR standard, ensuring industry-wide interoperability. They provide 400Gbps of optical bandwidth over a single optical wavelength using DWDM (dense wavelength division multiplexing) and higher-order modulation such as 16 QAM. Implemented predominantly in the QSFP-DD form factor, 400G ZR will serve the specific requirement for massively parallel data center interconnect of 400GbE with distances of 80-120km. To learn more about 400G transceivers: How Many 400G Transceiver Types Are in the Market?
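The headline rate follows from the modulation arithmetic. A rough sketch, where the symbol rate is the commonly cited approximate figure for OIF 400ZR and should be treated as illustrative:

```python
# Approximate rate arithmetic for a 400ZR-style coherent link.

bits_per_symbol = 4 * 2            # 16 QAM (4 bits) x two polarizations
symbol_rate_gbd = 59.84            # approximate 400ZR symbol rate (assumption)
line_rate_gbps = bits_per_symbol * symbol_rate_gbd
print(round(line_rate_gbps, 1))    # ~478.7 Gbps: 400G payload plus FEC overhead
```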

Overview of 400G ZR+

ZR+ is a range of coherent pluggable solutions with line capacities up to 400Gbps and reaches well beyond 80km, supporting various application requirements. The specific operational and performance requirements of different applications will determine which types of 400G ZR+ coherent plugs are used in networks. Some applications will take advantage of interoperable, multi-vendor ecosystems defined by standards-body or MSA specifications, while others will rely on the maximum performance achievable within the constraints of a pluggable module package. Four categories of 400G ZR+ applications are explained below.

400G ZR & ZR+: Applications

400G ZR – Application Scenario

The arrival of 400G ZR modules has ushered in a new era of DWDM technology marked by open, standards-based, pluggable DWDM optics, enabling true IP-over-DWDM. 400G ZR is often applied to point-to-point DCI (up to 80km), making the task of interconnecting data centers as simple as connecting switches inside a data center (as shown below).

Figure 1: 400G ZR Applied in Single-span DCI

Four Primary Deployment Applications for 400G ZR+

Extended-reach P2P Packet

One definition of ZR+ is a straightforward extension of 400G ZR transcoded mappings of Ethernet with a higher performance FEC to support longer reaches. In this case, 400G ZR+ modules are narrowly defined as supporting a single-carrier 400Gbps optical line rate and transporting 400GbE, 2x 200GbE or 4x 100GbE client signals for point-to-point reaches (up to around 500km). This solution is specifically dedicated to packet transport applications and destined for router platforms.

Multi-span Metro OTN

Another definition of ZR+ is the inclusion of support for OTN, such as client mapping and multiplexing into FlexO interfaces. This coherent pluggable solution is intended to support the additional requirements of OTN networks, carry both Ethernet and OTN clients, and address transport in multi-span ROADM networks. This category of 400G ZR+ is required where demarcation is important to operators, and is destined primarily for multi-span metro ROADM networks.

Figure 2: 400G ZR+ Applied in Multi-span Metro OTN

Multi-span Metro Packet

The third definition of ZR+ is an extended-reach Ethernet or packet transcoded solution further optimized for critical performance metrics such as latency. This 400G ZR+ coherent pluggable, with high-performance FEC and sophisticated coding algorithms, supports the longest reaches, beyond 1000km, for multi-span metro packet transport.

Figure 3: 400G ZR+ Applied in Multi-span Metro Packet

Multi-span Metro Regional OTN

The fourth definition of ZR+ supports both Ethernet and OTN clients. This coherent pluggable also leverages high performance FEC and PCS, along with tunable optical filters and amplifiers for maximum reach. It supports a rich feature set of OTN network functions for deployment over both fixed and flex-grid line systems. This category of 400G ZR+ provides solutions with higher performance to address a much wider range of metro/regional packet networking requirements.

400G ZR & ZR+: What Makes Them Suitable for Longer-reach Transmission in Data Center?

Coherent Technology Adopted by 400G ZR & ZR+

Coherent technology uses three degrees of freedom (the amplitude, phase, and polarization of light) to pack more data onto the transmitted wave. In this way, coherent optics can transport more data over a single fiber for greater distances using higher-order modulation techniques, which results in better spectral efficiency. 400G ZR and ZR+ are a leap forward in the application of coherent technology. With higher-order modulation and DWDM unlocking high bandwidth, 400G ZR and ZR+ modules reduce cost and complexity for high-level data center interconnects.

Importance of 400G ZR & ZR+

400G ZR and 400G ZR+ coherent pluggable optics take implementation to the next level, combining elements of high-performance solutions while pushing component design toward low power, pluggability, and modularity.

Conclusion

There are still many challenges in making 400G ZR and 400G ZR+ transceiver modules fit the small size and power budget of OSFP or QSFP-DD packages while also achieving interoperation and meeting cost and volume targets. Even so, with 400Gbps of optical bandwidth and low power consumption, 400G ZR & ZR+ may very well be the new generation of longer-reach optical communications.

Original Source: 400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications

400G OSFP Transceiver Types Overview


OSFP stands for Octal Small Form-factor Pluggable, which consists of 8 electrical lanes running at 50Gb/s each, for a total bandwidth of 400Gb/s. This post introduces the 400G OSFP transceiver types, their fiber connections, and some Q&As about OSFP.

400G OSFP Transceiver Types

Below are the current main 400G OSFP transceiver types: OSFP SR8, OSFP DR4, OSFP DR4+, OSFP FR4, OSFP 2FR4, and OSFP LR4, grouped by the two fiber types they run over (multimode and single-mode).

Fibers Connections for 400G OSFP Transceivers

400G OSFP SR8

  • 400G OSFP SR8 to 400G OSFP SR8 over an MTP-16 cable (Figure 1).
  • 400G OSFP SR8 to 2x 200G SR4 over an MTP-16 to 2x MPO-8 breakout cable (Figure 2).
  • 400G OSFP SR8 to 8x 50G SFP via an MTP-16 to 8x LC duplex breakout cable, reaching up to 100m (Figure 3).

400G OSFP DR4

  • 400G OSFP DR4 to 400G OSFP DR4 over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4 to 4x 100G DR over an MTP-12/MPO-12 to 4x LC duplex breakout cable (Figure 4).

400G OSFP XDR4/DR4+

  • 400G OSFP DR4+ to 400G OSFP DR4+ over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4+ to 4x 100G DR over an MTP-12/MPO-12 to 4x LC duplex breakout cable (Figure 5).

400G OSFP FR4

400G OSFP FR4 to 400G OSFP FR4 over a duplex LC cable (Figure 6).

400G OSFP 2FR4

OSFP 2FR4 can break out to 2x 200G and interoperate with 2x 200G-FR4 QSFP transceivers via a 2x CS to 2x LC duplex cable.

400G OSFP Transceivers: Q&A

Q: What does “SR8”, “DR4”, “XDR4”, “FR4”, and “LR4” mean?

A: “SR” refers to short range, and “8” implies there are 8 optical channels. “DR” refers to 500m reach using single-mode fiber, and “4” implies there are 4 optical channels. “XDR4” is short for “eXtended reach DR4”, reaching 2km. “FR” refers to 2km reach and “LR” to 10km reach, both using single-mode fiber.

Q: Can I plug an OSFP transceiver module into a QSFP-DD port?

A: No. QSFP-DD and OSFP are totally different form factors. For more information about QSFP-DD transceivers, you can refer to 400G QSFP-DD Transceiver Types Overview. You can use only one kind of form factor in the corresponding system. E.g., if you have an OSFP system, OSFP transceivers and cables must be used.

Q: Can I plug a 100G QSFP28 module into an OSFP port?

A: Yes. A QSFP28 module can be inserted into an OSFP port but with an adapter. When using a QSFP28 module in an OSFP port, the OSFP port must be configured for a data rate of 100G instead of 400G.

Q: What other breakout options are possible apart from using OSFP modules mentioned above?

A: 400G OSFP DACs & AOCs can also provide 400G breakout connections. See 400G Direct Attach Cables (DAC & AOC) Overview for more information about 400G DACs & AOCs.

Original Source: 400G OSFP Transceiver Types Overview

400G Ethernet Manufacturers and Vendors

New data-intensive applications have led to a dramatic increase in network traffic, raising the demand for higher processing speeds, lower latency, and greater storage capacity. These require higher network bandwidth, up to 400G or beyond, so the 400G market is currently growing rapidly. Many organizations joined the ranks of 400G equipment vendors early and are already reaping the benefits. This article will take you through the 400G Ethernet market trend and some global 400G equipment vendors.

The 400G Era

The emergence of new services, such as 4K VR, the Internet of Things (IoT), and cloud computing, has increased the number of connected devices and internet users. According to an IEEE report, "device connections will grow from 18 billion in 2017 to 28.5 billion devices by 2022," and the number of internet users will soar "from 3.4 billion in 2017 to 4.8 billion in 2022." Hence, network traffic is exploding: its average annual growth rate remains at a high 26%.

Annual Growth of Network Traffic

Facing the rapid growth of network traffic, 100GE/200GE ports are unable to meet the connectivity demands of a large number of customers. Many organizations and enterprises, especially hyperscale data centers and cloud operators, are aggressively adopting next-generation 400G network infrastructure to handle these workloads. 400G provides the ideal solution for operators to meet high-capacity network requirements, reduce operational costs, and achieve sustainability goals. Given the market's good prospects, many IT infrastructure providers are scrambling to join the 400G competition and are launching a variety of 400G products. Dell'Oro Group indicates that "the ecosystem of 400G technologies, from silicon to optics, is ramping," with large-scale deployments contributing meaningfully to the market starting in 2021. The group forecasts that 400G shipments will exceed 15 million ports by 2023 and that 400G will be widely deployed in all of the largest core networks in the world. In addition, according to GLOBE NEWSWIRE, the global 400G transceiver market is expected to reach $22.6 billion in 2023. 400G Ethernet is about to be deployed at scale, heralding the arrival of the 400G era.

400G Growth

Companies Offering 400G Networking Equipment

Many top companies seized the good opportunity of the fast-growing 400G market, and launched various 400G equipment. Many well-known IT infrastructure providers, which laid out 400G products early on, have become the key players in the 400G market after years of development, such as Cisco, Arista, Juniper, etc.

400G Equipment Vendors

Cisco

Cisco foresaw the need for the Internet and its infrastructure at a very early stage and, as a result, has put a stake in the ground that no other company has been able to eclipse. Over the years, Cisco has become a top provider of software and solutions and a dominant player in the highly competitive 25/50/100Gb space. Cisco entered the 400G space with its latest networking hardware and optics, announced on October 31, 2018; its Nexus switches are its most important 400G products. Cisco primarily expects to help customers migrate to 400G Ethernet with solutions including Cisco's ACI (Application Centric Infrastructure), Cisco Nexus data networking switches, and the Cisco Network Assurance Engine (NAE), among others, streamlining operations along the way. Cisco has seized the market opportunity and continues to grow sales with its 400G products: it reported second-quarter revenue of $12.7 billion, up 6% year over year, demonstrating the good prospects of the 400G Ethernet market.

Arista Networks

Arista Networks, founded in 2008, provides software-driven cloud networking solutions for large data center storage and computing environments. Arista is smaller than rival Cisco, but it has made significant gains in market share and product development over the last several years. On October 23, 2018, Arista announced the release of its 400G platforms and optics, marking its entry into the 400G Ethernet market. Today, Arista focuses on comprehensive 400G platforms that include various switch series and 400G optical modules for large-scale cloud, leaf-spine, routing transformation, and hyperscale I/O-intensive applications. The launch of Arista's diverse 400G switches has also brought significant sales and market-share growth: according to IDC, Arista saw a 27.7 percent full-year Ethernet switch revenue rise in 2021. Arista has put legitimate market-share pressure on leader Cisco over the past five years.

Juniper Networks

Juniper is a leading provider of networking products. With the arrival of the 400G era, Juniper offers comprehensive 400G routing and switching platforms: packet transport routers, universal routing platforms, universal metro routers, and switches. Recently, it also introduced 400G coherent pluggable optics to further address 400G data communication needs. Juniper believes that 400G will become the new data rate currency for future builds and is fully prepared for the 400G market competition. And now, Juniper has become the key player in the 400G market.

Huawei Technologies

Huawei, a massive Chinese tech company, is gaining momentum in its data center networking business. Huawei is already in the "challenger" category relative to the industry leaders mentioned above and is getting closer to the "leader" area. At OFC 2018, Huawei officially released its 400G optical network solution for commercial use, joining the ranks of 400G product vendors, and it has achieved clear economic growth since: Huawei accounted for 28.7% of the global communication equipment market last year, up 7% year over year. As Huawei's 400G platforms continue to roll out, related sales are expected to rise further. The broad Chinese market will also further strengthen Huawei's position in the global 400G space.

FS

Founded in 2009, FS is a global high-tech company providing high-speed communication network solutions and services to several industries. Through continuous technology upgrades, professional end-to-end supply chain, and brand partnership with top vendors, FS services customers across 200 countries – with the industry’s most comprehensive and innovative solution portfolio. FS is one of the earliest 400G vendors in the world, with a diverse portfolio of 400G products, including 400G switches, optical transceivers, cables, etc. FS thinks 400G Ethernet is an inevitable trend in the current networking market, and has seized this good opportunity to gain a large number of loyal customers in the 400G market. In the future, FS will continue to provide customers with high-quality and reliable 400G products for the migration to 400G Ethernet.

Getting Started with 400G Ethernet

400G is the next generation of cloud infrastructure, driving next-generation data center networks. Many organizations and enterprises are planning to migrate to 400G. The companies mentioned above have provided 400G solutions for several years, making them a good choice for enterprises. There are also lots of other organizations trying to enter the ranks of 400G manufacturers and vendors, driving the growing prosperity of the 400G market. Remember to take into account your business needs and then choose the right 400G product manufacturer and vendor for your investment or purchase.

Data Center Containment: Types, Benefits & Challenges

Over the past decade, data center containment has experienced a high rate of implementation by many data centers. It can greatly improve the predictability and efficiency of traditional data center cooling systems. This article will elaborate on what data center containment is, common types of it, and their benefits and challenges.

What Is Data Center Containment?

Data center containment is the separation of the cold supply air from the hot exhaust air of IT equipment so as to reduce operating costs, optimize power usage effectiveness, and increase cooling capacity. Containment systems deliver a uniform, stable supply-air temperature at the intake of IT equipment and return warmer, drier air to the cooling infrastructure.

Types of Data Center Containment

There are mainly two types of data center containment, hot aisle containment and cold aisle containment.

Hot aisle containment encloses warm exhaust air from IT equipment in data center racks and returns it back to cooling infrastructure. The air from the enclosed hot aisle is returned to cooling equipment via a ceiling plenum or duct work, and then the conditioned air enters the data center via raised floor, computer room air conditioning (CRAC) units, or duct work.

Hot aisle containment

Cold aisle containment encloses cold aisles where cold supply air is delivered to cool IT equipment. So the rest of the data center becomes a hot-air return plenum where the temperature can be high. Physical barriers such as solid metal panels, plastic curtains, or glass are used to allow for proper airflow through cold aisles.

Cold aisle containment

Hot Aisle vs. Cold Aisle

There are mixed views on whether it’s better to contain the hot aisle or the cold aisle. Both containment strategies have their own benefits as well as challenges.

Hot aisle containment benefits

  • The open areas of the data center stay cool, so visitors to the room will not get the impression that the IT equipment is insufficiently cooled. In addition, it allows some low-density areas to be left uncontained if desired.
  • It is generally considered to be more effective. Any leakages that come from raised floor openings in the larger part of the room go into the cold space.
  • With hot aisle containment, low-density network racks and stand-alone equipment like storage cabinets can be situated outside the containment system, and they will not get too hot, because they are able to stay in the lower temperature open areas of the data center.
  • Hot aisle containment typically adjoins the ceiling where fire suppression is installed. With a well-designed space, it will not affect normal operation of a standard grid fire suppression system.

Hot aisle containment challenges

  • It is generally more expensive. A contained path is needed for air to flow from the hot aisle all the way to the cooling units; often a drop ceiling is used as a return-air plenum.
  • High temperatures in the hot aisle can be undesirable for data center technicians. When they need to access IT equipment and infrastructure, a contained hot aisle can be a very uncomfortable place to work. But this problem can be mitigated using temporary local cooling.

Cold aisle containment benefits

  • It is easy to implement without the need for additional architecture to contain and return exhaust air such as a drop ceiling or air plenum.
  • Cold aisle containment is less expensive to install as it only requires doors at ends of aisles and baffles or roof over the aisle.
  • Cold aisle containment is typically easier to retrofit in an existing data center. This is particularly true for data centers that have overhead obstructions such as existing duct work, lighting and power, and network distribution.

Cold aisle containment challenges

  • When utilizing a cold aisle system, the rest of the data center becomes hot, resulting in high return air temperatures. It also may create operational issues if any non-contained equipment such as low-density storage is installed in the general data center space.
  • The conditioned air that leaks from openings under equipment like PDUs and raised floor tiles tends to enter air paths that return to the cooling units, reducing the efficiency of the system.
  • In many cases, cold aisles have intermediate ceilings over the aisle. This may affect the overall fire protection and lighting design, especially when added to an existing data center.

How to Choose the Best Containment Option?

Every data center is unique. To find the most suitable option, you have to take into account a number of aspects. The first thing is to evaluate your site and calculate the Cooling Capacity Factor (CCF) of the computer room. Then observe the unique layout and architecture of each computer room to discover conditions that make hot aisle or cold aisle containment preferable. With adequate information and careful consideration, you will be able to choose the best containment option for your data center.
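As a rough illustration, CCF is often computed as the running rated cooling capacity divided by 110% of the IT critical load; the 1.1 overhead factor approximating non-IT heat loads is an assumption in this sketch.

```python
# Cooling Capacity Factor sketch, following the commonly cited definition:
# CCF = running rated cooling capacity / (IT critical load x 1.1).

def ccf(running_cooling_kw, it_load_kw, overhead_factor=1.1):
    return running_cooling_kw / (it_load_kw * overhead_factor)

print(round(ccf(500, 300), 2))  # e.g., 500 kW cooling vs. 300 kW IT -> 1.52
```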

Article Source: Data Center Containment: Types, Benefits & Challenges

Related Articles:

What Is a Containerized Data Center: Pros and Cons

The Most Common Data Center Design Missteps

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

The COVID-19 pandemic caused several companies to shut down, and the implications were reduced production and altered supply chains. In the tech world, where silicon microchips are the heart of everything electronic, raw material shortage became a barrier to new product creation and development.

During the lockdown periods, many workers were required to stay home, which meant chip manufacturing was offline for several months. By the time lockdowns lifted and the world embraced the new normal, the rising demand for consumer and business electronics was enough to ripple up the supply chain.

Below, we’ve discussed the challenges associated with the current chip shortage, what to expect moving forward, and the possible interventions necessary to overcome the supply chain constraints.

Challenges Caused by the Current Chip Shortage

As technology and rapid innovation sweeps across industries, semiconductor chips have become an essential part of manufacturing – from devices like switches, wireless routers, computers, and automobiles to basic home appliances.


To understand and quantify the impact this chip shortage has caused spanning the industry, we’ll need to look at some of the most affected sectors. Here’s a quick breakdown of how things have unfolded over the last eighteen months.

Automobile Industry

Automakers in North America and Europe slowed or stopped production due to a lack of computer chips. Major automakers like Tesla, Ford, BMW, and General Motors have all been affected. The major implication is that the global automobile industry will manufacture 4 million fewer cars by the end of 2021 than earlier planned, and it will forfeit an average of $110 billion in revenue.

Consumer Electronics

Consumer electronics such as desktop PCs and smartphones rose in demand throughout the pandemic, thanks to the shift to virtual learning among students and the rise in remote working. At the start of the pandemic, several automakers slashed their vehicle production forecasts before abandoning open semiconductor chip orders. And while the consumer electronics industry stepped in and scooped most of those microchips, the supply couldn’t catch up with the demand.

Data Centers

Most chip fabrication companies like Samsung Foundries, Global Foundries, and TSMC prioritized high-margin orders from PC and data center customers during the pandemic. And while this has given data centers a competitive edge, it isn’t to say that data centers haven’t been affected by the global chip shortage.


Some of the components data centers have struggled to source include those needed to put together their data center switching systems. These include BMC chips, capacitors, resistors, circuit boards, etc. Another challenge is the extended lead times due to wafer and substrate shortages, as well as reduced assembly capacity.

LED Lighting

LED backlights, common in most display screens, are powered by hard-to-find semiconductor chips. Gadgets with LED lighting features are now highly priced due to the shortage of raw materials and increased market demand, a situation expected to continue into early 2022.

Renewable Energy: Solar and Turbines

Renewable energy systems, particularly solar and wind turbines, rely on semiconductors and sensors to operate. The global supply chain constraints have hurt the industry, squeezing even energy solutions manufacturers like Enphase Energy.

Semiconductor Trends: What to Expect Moving Forward

In response to the global chip shortage, several component manufacturers have ramped up production to help mitigate the shortages. However, top electronics and semiconductor manufacturers say the crunch will get worse before it gets better, and most industry leaders speculate that the semiconductor shortage could persist into 2023.

Based on the ongoing disruption and supply chain volatility, analysts quoted in a recent CNBC article and a Bloomberg interview are largely convinced that the coming year will be challenging. Here are some of the key takeaways:

Pat Gelsinger, CEO of Intel Corp., noted in April 2021 that it would take a couple of years for the chip shortage to resolve.

A DigiTimes report found that lead times for Intel and AMD server ICs destined for data centers have stretched to 45 to 66 weeks.

The world’s third-largest EMS and OEM provider, Flex Ltd., expects the global semiconductor shortage to persist into 2023.

In May 2021, GlobalFoundries, the fourth-largest contract semiconductor manufacturer, signed a $1.6 billion, 3-year silicon supply deal with AMD, and in late June, it launched its new $4 billion, 300mm-wafer facility in Singapore. Yet the company says the added capacity will not increase component output until 2023 at the earliest.

TSMC, one of the leading pure-play foundries in the industry, says it won’t meaningfully increase component output until 2023. However, the company is optimistic that it can ramp up fabrication of automotive microcontrollers by 60% by the end of 2021.

From the industry insights above, it’s evident that despite the considerable efforts major players are putting into resolving the global chip shortage, the bottlenecks will probably persist throughout 2022.

Additionally, some industry observers believe that the move by big tech companies such as Amazon, Microsoft, and Google to design their own chips for cloud and data center business could worsen the chip shortage crisis and other problems facing the semiconductor industry.

In a recent article, the authors suggest that the entry of Microsoft, Amazon, and Google into the chip design market will be a turning point for the industry. These tech giants have the resources to design superior, cost-effective chips of their own, resources that established chip designers like Intel have in far more limited proportions.

As these tech giants become independent, each will look to build component stockpiles to endure long waits and meet production demands between inventory refreshes, further worsening the existing chip shortage.

Possible Solutions

To stay ahead of the game, major industry players, from chip designers and manufacturers to the many affected industries, have taken several steps to mitigate the impact of the chip shortage.

For many chip makers, expanding their production capacity has been an obvious response. Other suppliers in certain regions decided to stockpile and limit exports to better respond to market volatility and political pressures.

Similarly, many manufacturers have invested in improving yields, that is, increasing the number of usable chips obtained from each silicon wafer, to boost chip supply by a meaningful margin, as the sketch below illustrates.
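
As a rough illustration of why yield work matters, the classic dies-per-wafer approximation estimates how many gross dies fit on a wafer, and a yield multiplier turns that into usable supply. The wafer size, die area, and yield figures below are assumptions for illustration, not industry data:

```python
# Hypothetical sketch: gross dies per wafer (edge-loss-corrected
# approximation) and the effect of a yield improvement on good dies.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate gross dies on a round wafer, correcting for edge loss."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

gross = dies_per_wafer(300, 100)   # assumed 300 mm wafer, 100 mm^2 die -> ~640 dies
for y in (0.70, 0.80):             # assumed yields, before and after improvement
    print(f"yield {y:.0%}: ~{int(gross * y)} good dies per wafer")
```

With these assumed numbers, a ten-point yield gain adds roughly 64 good dies per wafer without any new fab capacity.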

Here are other solutions that companies have had to adopt:

  • Embracing flexibility to accommodate older chip technologies that may not be “state of the art” but are still better than nothing.

  • Leveraging software solutions such as smart compression and compilation to build efficient AI models that unlock hardware capabilities (see the sketch below).
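
The article names no specific tools, but as one illustration of such software-side compression, PyTorch’s post-training dynamic quantization stores Linear-layer weights as int8 so the same hardware can serve more inference. A minimal sketch:

```python
# One illustration of "smart compression": PyTorch post-training dynamic
# quantization converts Linear weights to int8, shrinking the model and
# speeding up CPU inference. Sketch only; the model here is a toy.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10]), now computed with int8 weights
```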

Conclusion

The latest global chip shortage has led to severe shocks in the semiconductor supply chain, affecting several industries from automobile, consumer electronics, data centers, LED, and renewables.

Industry thought leaders believe the shortages will persist into 2023 despite the ongoing build-up of mitigation measures. And while full recovery will not arrive any time soon, some chip makers are optimistic that they can ramp up fabrication enough to meet demand from their automotive customers.

That said, staying ahead of the game is a constant struggle, since this issue affects every industry player regardless of size or market position. Expanding production capacity, accommodating older chip technologies, and leveraging software to unlock hardware capabilities are among the most promising solutions.

This article is being updated continuously. If you want to share any comments on FS switches, or if you are inclined to test and review our switches, please email us via media@fs.com or inform us on social media platforms. We cannot wait to hear more about your ideas on FS switches.

Article Source: The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Related Articles:

Impact of Chip Shortage on Datacenter Industry

Infographic – What Is a Data Center?

The Most Common Data Center Design Missteps

Introduction

The goal of data center design is to provide IT equipment with a high-quality, standardized, safe, and reliable operating environment, one that fully meets the environmental requirements for stable, reliable operation of IT devices and prolongs the service life of computer systems. Design is the most important part of data center construction, bearing directly on the success or failure of long-term planning, so it should be professional, advanced, integrated, flexible, safe, reliable, and practical.

9 Missteps in Data Center Design

Thoughtful design is one of the most effective remedies for overcrowded or outdated data centers, while inappropriate design creates obstacles for growing enterprises. Poor planning can waste valuable funds and create issues that increase operating expenses. Here are nine mistakes to be aware of when designing a data center.

Miscalculation of Total Cost

Data center operating expense is made up of two key components: maintenance costs and operating costs. Maintenance costs are those associated with maintaining all critical facility support infrastructure, such as OEM equipment maintenance contracts and data center cleaning fees. Operating costs are those associated with day-to-day operations and field personnel, such as creating site-specific operational documentation, capacity management, and QA/QC policies and procedures. If you plan to build or expand a business-critical data center, the best approach is to focus on three basic parameters: capital expenditures, operating and maintenance expenses, and energy costs. Take any component out of the equation, and the model may no longer align with the organization’s risk profile and business spending profile.
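
A minimal sketch of that three-parameter view, amortized capital expenditure plus operating and maintenance expenses plus energy, follows; all figures are hypothetical placeholders, not benchmarks:

```python
# Minimal sketch of the three-parameter cost model described above.
# Every number here is a hypothetical placeholder.

def annual_tco(capex: float, lifetime_years: int,
               o_and_m: float, energy_kwh: float, price_per_kwh: float) -> float:
    """Annualized total cost = amortized capex + O&M + energy."""
    return capex / lifetime_years + o_and_m + energy_kwh * price_per_kwh

cost = annual_tco(capex=20_000_000, lifetime_years=15,
                  o_and_m=1_200_000, energy_kwh=8_000_000, price_per_kwh=0.10)
print(f"~${cost:,.0f} per year")  # dropping any one term understates the total
```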

Unspecified Planning and Infrastructure Assessment

Infrastructure assessment and clear planning are essential to data center construction. For example, every construction project needs a chain of command that clearly defines areas of responsibility, including who is responsible for each aspect of the design. Those involved need to evaluate the potential applications of the data center infrastructure and the connectivity requirements they entail. In general, planning involves a rack-by-rack blueprint covering network connectivity and mobile devices, power requirements, system topology, cooling facilities, virtual local and on-premises networks, third-party applications, and operational systems. Given the importance of data center design, you should thoroughly understand the required functionality before construction begins; otherwise the facility will fall short and cost more to maintain.
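
As one hypothetical illustration of what a rack-by-rack blueprint entry might capture (the field names below are illustrative, not a standard):

```python
# Hypothetical sketch of one entry in a rack-by-rack blueprint like the
# planning process above calls for; all fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RackPlan:
    rack_id: str
    power_kw: float                                    # provisioned power budget
    cooling: str                                       # e.g. "cold-aisle containment"
    uplinks: list[str] = field(default_factory=list)   # network connectivity
    vlans: list[int] = field(default_factory=list)     # virtual networks

blueprint = [
    RackPlan("A01", power_kw=8.0, cooling="cold-aisle containment",
             uplinks=["spine-1", "spine-2"], vlans=[10, 20]),
]
print(blueprint[0])
```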

Inappropriate Design Criteria

Two missteps can send enterprises into an overspending death spiral. First, everyone has different design ideas, but not everyone is right. Second, the actual business may be mismatched with the desired vision, undermining the assumed kilowatts per square foot or per rack. Overplanning in design is a waste of capital, and higher-tier facilities also bring higher operational and energy costs. A good data center designer establishes the proper design criteria and performance characteristics first, then builds capital expenditure and operating expenses around them.

Unsuitable Data Center Site

Enterprises often look for a perfect building location when designing a data center, and missing site-critical information leads to trouble later. Large users know the data center landscape well and scrutinize power availability and cost, fiber connectivity, and force majeure risks. Smaller, baseline users often have existing shell buildings in their core business areas that determine whether they need to build new or refurbish. Hence, premature site selection or an unsuitable geographic location will fail to meet the design requirements.

Pre-design Space Planning

It is also very important to plan the space capacity inside the data center. The ratio of raised-floor space to support space can be as high as 1 to 1: for example, 5,000 square feet of raised floor may demand roughly another 5,000 square feet for mechanical and electrical equipment. Office space and IT equipment storage areas also need to be considered. Estimation errors can make a design unsuitable for the site space, forcing project re-evaluation and possibly the repurchase of components.

Mismatched Business Goals

Enterprises need to clearly understand their business goals when commissioning a data center so that the design can fulfill them. Beyond the immediate goals, they should consider which specific applications the data center will support, how much additional computing power is required, and how the business will expand later. Enterprises also need to communicate these goals to data center architects, engineers, and builders to ensure the overall design meets business needs.

Design Limitations

The importance of modular design is well publicized in the data center industry. Although the modular approach, adding infrastructure only as it is needed, preserves capital, it doesn’t guarantee success by itself. Modular, flexible design is the key to long-term stable operation and to meeting your data center plans. On the power side, make sure UPS (Uninterruptible Power Supply) capacity can be added to existing modules without disrupting the system, as the sketch below shows. Input and output distribution design shouldn’t be overlooked either; done well, it allows the data center to adapt to future changes in underlying construction standards.
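
A minimal sketch of the module-sizing idea, assuming hypothetical 250 kW UPS modules in an N+1 arrangement, shows how capacity can grow in steps as the load grows:

```python
# Sketch of the modular-UPS idea: size N+1 redundancy in module-sized
# steps so capacity grows without disrupting the running system.
# The module size and load figures are assumptions for illustration.
import math

def ups_modules_needed(load_kw: float, module_kw: float = 250.0) -> int:
    """N+1: enough modules to carry the load, plus one spare."""
    return math.ceil(load_kw / module_kw) + 1

for load in (400, 900):  # today's load vs. a later expansion
    print(f"{load} kW -> {ups_modules_needed(load)} x 250 kW modules")
```

Growing from 400 kW to 900 kW means going from three modules to five, added one module at a time rather than replacing the whole plant.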

Improper Data Center Power Equipment

To design a data center that maximizes equipment uptime and reduces power consumption, you must choose the right power equipment based on projected capacity. A common mistake is to provision for triple the expected server usage “just in case,” which is wasteful; long-term power consumption trends are what you should size against (see the sketch below). Install automatic power-on generators and backup power sources, and choose equipment that can support the data center without waste.
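
A minimal sketch of trend-based sizing, with an assumed growth rate and horizon, shows how projecting the measured load compares with blanket 3x over-provisioning:

```python
# Sketch of sizing from long-term consumption trends rather than a flat
# "triple the servers" guess. Growth rate and horizon are assumptions.

def projected_load_kw(current_kw: float, annual_growth: float, years: int) -> float:
    """Compound the measured load forward instead of over-provisioning 3x."""
    return current_kw * (1 + annual_growth) ** years

now = 500.0
future = projected_load_kw(now, annual_growth=0.10, years=5)
print(f"{now:.0f} kW today -> ~{future:.0f} kW in 5 years")  # ~805 kW, far below 1500
```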

Over-complicated Design

In many cases, redundancy targets introduce complexity, and if you combine multiple approaches to building a modular system, things can quickly get complicated. An over-complicated data center design means more equipment and components, and each component is a potential source of failure, causing problems such as the following (see the availability sketch after this list):

  • Human error. Statistical and data-handling mistakes create system vulnerabilities and increase operational risk.
  • Higher cost. Beyond the extra equipment and components themselves, maintaining and repairing failed components incurs further charges.
  • Poor maintainability. If the design doesn’t account for how the IT team will operate and service the system, normal operation, and even human safety, can be compromised.
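
One way to see the cost of added components: for parts that are all required (in series), system availability is roughly the product of the parts’ availabilities. A minimal sketch with an assumed per-component availability:

```python
# Sketch of why extra components hurt: for parts in series, system
# availability is the product of part availabilities, so every added
# single point of failure multiplies the downtime risk. Values assumed.

def series_availability(*availabilities: float) -> float:
    """Availability of a chain of components that must all work."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

lean = series_availability(*[0.9999] * 2)       # two critical components
sprawling = series_availability(*[0.9999] * 8)  # eight critical components
print(f"2 parts: {lean:.6f}  vs  8 parts: {sprawling:.6f}")  # 0.999800 vs 0.999200
```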

Conclusion

Avoid the nine missteps above to find the right design for your data center IT infrastructure and build a facility that suits your business. Design missteps affect enterprises in areas such as business expansion, infrastructure maintenance, and security. Hence, all infrastructure facilities and data center standards must be rigorously evaluated during design to ensure long-term stable operation within a reasonable budget.

Article Source: The Most Common Data Center Design Missteps

Related Articles:

How to Utilize Data Center Space More Effectively?

Data Center White Space and Gray Space