400G Optics in Hyperscale Data Centers

Since their advent, data centers have striven to keep pace with rising bandwidth requirements. An estimated 3.04 exabytes of data are generated every day. For a hyperscale data center, the bandwidth requirements are massive, and the scalable nature of its applications demands a preemptive approach. The introduction of 400G has taken data transfer speeds to a whole new level and addressed several areas of concern. In this article, we dig a little deeper and try to answer the following questions:

  • What are the driving factors of 400G development?
  • What are the reasons behind the use of 400G optics in hyperscale data centers?
  • What are the trends in 400G devices in large-scale data centers?

What Are the Driving Factors For 400G Development?

The driving factors for 400G development fall into two main categories: video streaming services and video conferencing services. Both require very high data transfer speeds to function smoothly across the globe.

Video Streaming Services

Video streaming services were already straining bandwidth even before the COVID-19 pandemic forced a large share of the population to stay and work from home, which further increased the usage of video streaming platforms. A medium-quality stream on Netflix consumes about 0.8 GB per hour; multiply that by more than 209 million subscribers. As travel costs fell, many viewers spent the savings on higher-quality tiers such as HD and 4K, so what stood at 0.8 GB per hour rose to roughly 3 GB (HD) and 7 GB (4K) per hour. This growth fed the need for 400G development.
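As a rough sanity check on those figures, the per-hour volumes convert to average per-stream bitrates as follows (decimal units assumed; the helper name is illustrative):

```python
def stream_mbps(gb_per_hour):
    """Convert a per-hour data volume (GB) to an average bitrate in Mbit/s."""
    return gb_per_hour * 8 * 1000 / 3600  # 1 GB = 8000 Mbit; 3600 s per hour

for tier, gb in [("SD", 0.8), ("HD", 3), ("4K", 7)]:
    print(f"{tier}: ~{stream_mbps(gb):.1f} Mbit/s per stream")
```

Multiplied across hundreds of millions of concurrent streams, even these single-digit-Mbit/s rates add up to the aggregate demand that pushes interconnects toward 400G.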

Video Conferencing Services

As COVID-19 made working from home the new norm, video conferencing services also saw a major boost. As of 2021, an estimated 20.56 million people were working from home in the US alone. As video conferencing took center stage, Zoom, which consumes about 500 MB per hour, saw a huge increase in its user base. This, too, puts great pressure on data transfer needs.

What Makes 400G Optics the Ideal Choice For Hyperscale Data Centers?

Significant Decrease in Energy and Carbon Footprint

To put it simply, 400G quadruples the data transfer speed per port. Comparing a 4× 100G breakout solution with a single 400G link delivering the same 400GbE capacity, the single-port approach cuts the cost of the extra 100G ports, and a single node at the output both minimizes the risk of failure and lowers the energy requirement. This brings down the ESG footprint, which has become a KPI for organizations going forward.

Reduced Operational Cost

As mentioned earlier, a 400G solution requires a single 400G port, whereas meeting the same requirement with 100G optics requires four 100G ports. On a router, four ports cost considerably more than a single port capable of the same aggregate throughput, and the same holds for power. Together, these two factors bring operational cost down considerably.
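A back-of-the-envelope comparison makes the point; the per-port cost and power figures below are purely illustrative assumptions, not vendor data:

```python
def link_totals(ports, cost_per_port, watts_per_port):
    """Total (cost, power) for a link built from identical ports."""
    return ports * cost_per_port, ports * watts_per_port

# Hypothetical relative figures for illustration only.
cost_4x100, power_4x100 = link_totals(4, 1.0, 4.5)   # 4x 100G ports
cost_1x400, power_1x400 = link_totals(1, 2.5, 12.0)  # 1x 400G port

print(cost_4x100 > cost_1x400)   # one faster port can cost less than four slower ones
print(power_4x100 > power_1x400)
```

Whatever the exact numbers, consolidating four ports into one tends to win on both axes as long as the single 400G port costs and consumes less than four 100G ports combined.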

Trends of 400G Optics in Large-Scale Data Centers—Quick Adoption

The introduction of 400G solutions in large-scale data centers has reshaped the entire sector, thanks to a huge increase in data transfer speeds. According to research, 400G is expected to replace 100G and 200G deployments far faster than its predecessors did. Since its introduction, more and more vendors have been upgrading to network devices that support 400G. The following image depicts the technology adoption rate.

Challenges Ahead

Lack of Advancement in the 400G Optical Transceiver Sector

Although the shift toward such network devices is rapid, there are a number of implementation challenges, because it is not only the devices that need to be upgraded but also the infrastructure. Vendors are trying to stay ahead of the curve, but the development and maturity of 400G optical transceivers have not yet reached the expected benchmark, and the same goes for their cost and reliability. Because optical transceivers are a critical element, this is a major challenge in the deployment of 400G solutions.

Latency Measurement

In addition, the introduction of 400G has made network testing and monitoring more important than ever. Latency has always been a key indicator when evaluating performance, and data throughput, jitter, and frame loss are major concerns in this regard as well.

Investment in Network Layers

Lastly, creating a realistic plug-and-play environment for 400G will require greater investment across the physical layer, the higher layers, and the network-IP component layers.

Conclusion

Rapid technological advancements have led to concepts like the Internet of Things, whose implementations require greater data transfer speeds. That, combined with the worldwide shift to remote work, has increased traffic exponentially. Hyperscale data centers were already feeling the pressure, and the introduction of 400G is a step in the right direction: a preemptive approach to serving a growing global population and an increasing number of internet users.

Article Source: 400G Optics in Hyperscale Data Centers

Related Articles:

How Many 400G Transceiver Types Are in the Market?

Global Optical Transceiver Market: Striding to High-Speed 400G Transceivers

FAQs on 400G Transceivers and Cables


400G transceivers and cables play a vital role in constructing a 400G network system. So what is a 400G transceiver? What are the applications of QSFP-DD cables? Find the answers here.

FAQs on 400G Transceivers and Cables: Definition and Types

Q1: What is a 400G transceiver?

A1: 400G transceivers are optical modules that are mainly used for photoelectric conversion with a transmission rate of 400Gbps. 400G transceivers can be classified into two categories according to the applications: client-side transceivers for interconnections between the metro networks and the optical backbone, and line-side transceivers for transmission distances of 80km or even longer.

Q2: What are QSFP-DD cables?

A2: QSFP-DD cables come in two forms: high-speed cables with a QSFP-DD connector on either end, transmitting and receiving 400Gbps over a thin twinax cable or a fiber optic cable, and breakout cables that split one 400G signal into 2x 200G, 4x 100G, or 8x 50G, enabling interconnection within a rack or between adjacent racks.
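Each breakout option simply regroups the module's eight 50G electrical lanes, which a quick check confirms:

```python
# A 400G QSFP-DD carries 8 electrical lanes of 50G PAM4 each.
LANES, LANE_GBPS = 8, 50

breakouts = {"2x200G": (2, 200), "4x100G": (4, 100), "8x50G": (8, 50)}
for name, (count, gbps) in breakouts.items():
    assert count * gbps == LANES * LANE_GBPS  # every option totals 400G
    print(name, "->", count * gbps, "G total")
```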

Q3: What packaging forms do 400G transceivers come in?

A3: There are mainly the following six packaging forms of 400G optical modules:

  • QSFP-DD: 400G QSFP-DD (Quad Small Form Factor Pluggable-Double Density) is an expansion of QSFP, adding one row to the original 4-channel interface for a total of 8 channels, each running at 50Gb/s, for a total bandwidth of 400Gb/s.
  • OSFP: OSFP (Octal Small Form-factor Pluggable; octal means 8) is a new interface standard that is not compatible with existing photoelectric interfaces. 400G OSFP modules are slightly larger than 400G QSFP-DD modules.
  • CFP8: CFP8 is an expansion of CFP4, with 8 channels and a correspondingly larger size.
  • COBO: COBO (Consortium for On-Board Optics) places all optical components directly on the PCB. COBO offers good heat dissipation and a small size; however, since it is not hot-swappable, a failed module is troublesome to repair.
  • CWDM8: CWDM8 is an extension of CWDM4 with four new center wavelengths (1351/1371/1391/1411 nm). The wavelength range becomes wider, and the number of lasers is doubled.
  • CDFP: CDFP is the oldest of these, with three editions of its specification. "CD" stands for 400 in Roman numerals. With 16 channels, the CDFP is relatively large.

Q4: What 400G transceivers and QSFP-DD cables are available on the market?

A4: The two tables below show the main types of 400G transceivers and cables on the market:

| 400G Transceivers | Standards | Max Cable Distance | Connector | Media | Temperature Range |
|---|---|---|---|---|---|
| 400G QSFP-DD SR8 | QSFP-DD MSA Compliant | 70m OM3/100m OM4 | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G QSFP-DD DR4 | QSFP-DD MSA, IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD XDR4/DR4+ | QSFP-DD MSA | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD FR4 | QSFP-DD MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD 2FR4 | QSFP-DD MSA, IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G QSFP-DD LR4 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD LR8 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD ER8 | QSFP-DD MSA Compliant | 40km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP SR8 | IEEE P802.3cm; IEEE 802.3cd | 100m | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G OSFP DR4 | IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP XDR4/DR4+ | / | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP FR4 | 100G Lambda MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP 2FR4 | IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G OSFP LR4 | 100G Lambda MSA | 10km | LC Duplex | SMF | 0 to 70°C |



| Category | Product | Description | Reach | Temperature Range | Power Consumption |
|---|---|---|---|---|---|
| 400G QSFP-DD DAC | QSFP-DD to QSFP-DD DAC | each 400G QSFP-DD uses 8x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <1.5W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 2x 200G QSFP56 DAC | each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <0.1W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 4x 100G QSFP DAC | each 100G QSFP uses 2x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <0.1W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 8x 50G SFP56 DAC | each 50G SFP56 uses 1x 50G PAM4 electrical lane | no more than 3m | 0 to 80°C | <0.1W |
| 400G QSFP-DD AOC | QSFP-DD to QSFP-DD AOC | each 400G QSFP-DD uses 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <10W |
| 400G QSFP-DD Breakout AOC | QSFP-DD to 2x 200G QSFP56 AOC | each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| 400G QSFP-DD Breakout AOC | QSFP-DD to 8x 50G SFP56 AOC | each 50G SFP56 uses 1x 50G PAM4 electrical lane | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| 400G OSFP DAC | OSFP to OSFP DAC | each 400G OSFP uses 8x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | <0.5W |
| 400G OSFP Breakout DAC | OSFP to 2x 200G QSFP56 DAC | each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | / |
| 400G OSFP Breakout DAC | OSFP to 4x 100G QSFP DAC | each 100G QSFP uses 2x 50G PAM4 electrical lanes | no more than 3m | 0 to 70°C | / |
| 400G OSFP Breakout DAC | OSFP to 8x 50G SFP56 DAC | each 50G SFP56 uses 1x 50G PAM4 electrical lane | no more than 3m | / | / |
| 400G OSFP AOC | OSFP to OSFP AOC | each 400G OSFP uses 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <9.5W |



Q5: What do the suffixes “SR8, DR4 / XDR4, FR4 / LR4 and 2FR4” mean in 400G transceivers?

A5: The letters refer to reach, and the number refers to the number of optical channels:

  • SR8: SR refers to 100m over MMF. Each of the 8 optical channels from an SR8 module is carried on separate fibers, resulting in a total of 16 fibers (8 Tx and 8 Rx).
  • DR4 / XDR4: DR / XDR refer to 500m / 2km over SMF. Each of the 4 optical channels is carried on separate fibers, resulting in a total of 4 pairs of fibers.
  • FR4 / LR4: FR4 / LR4 refer to 2km / 10km over SMF. All 4 optical channels from an FR4 / LR4 are multiplexed onto one fiber pair, resulting in a total of 2 fibers (1 Tx and 1 Rx).
  • 2FR4: 2FR4 refers to 2 x 200G-FR4 links with 2km over SMF. Each of the 200G FR4 links has 4 optical channels, multiplexed onto one fiber pair (1 Tx and 1 Rx per 200G link). A 2FR4 has 2 of these links, resulting in a total of 4 fibers, and a total of 8 optical channels.
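The fiber counts above follow directly from the channel count and whether the channels are multiplexed; a small helper (its name and parameters are illustrative) makes the pattern explicit:

```python
def fiber_count(channels, muxed, links=1):
    """Fibers needed: parallel channels use one fiber pair each;
    multiplexed channels share one pair per link."""
    pairs = links if muxed else channels
    return 2 * pairs  # 1 Tx + 1 Rx fiber per pair

print(fiber_count(8, muxed=False))           # SR8: 16 fibers (8 Tx + 8 Rx)
print(fiber_count(4, muxed=False))           # DR4/XDR4: 8 fibers (4 pairs)
print(fiber_count(4, muxed=True))            # FR4/LR4: 2 fibers (1 pair)
print(fiber_count(8, muxed=True, links=2))   # 2FR4: 4 fibers (2x 200G-FR4 links)
```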

FAQs on 400G Transceivers and Cables: Applications

Q1: What are the benefits of moving to 400G technology?

A1: 400G technology increases data throughput and maximizes the bandwidth and port density of data centers. Using only 1/4 the number of optical fiber links, connectors, and patch panels that 100G platforms need for the same aggregate bandwidth, 400G optics also reduce operating expenses. With these benefits, 400G transceivers and QSFP-DD cables provide ideal solutions for data centers and high-performance computing environments.

Q2: What are the applications of QSFP-DD cables?

A2: QSFP-DD cables are mainly used for short-distance 400G Ethernet connectivity in data centers, and for 400G to 2x 200G / 4x 100G / 8x 50G Ethernet breakout applications.

Q3: 400G QSFP-DD vs 400G OSFP/CFP8: What are the differences?

A3: The table below includes detailed comparisons for the three main form factors of 400G transceivers.

| 400G Transceiver | 400G QSFP-DD | 400G OSFP | CFP8 |
|---|---|---|---|
| Application Scenario | Data center | Data center & telecom | Telecom |
| Size | 18.35mm × 89.4mm × 8.5mm | 22.58mm × 107.8mm × 13mm | 40mm × 102mm × 9.5mm |
| Max Power Consumption | 12W | 15W | 24W |
| Backward Compatibility with QSFP28 | Yes | Through adapter | No |
| Electrical Signaling (Gbps) | 8× 50G | 8× 50G | 8× 50G |
| Switch Port Density (1RU) | 36 | 36 | 16 |
| Media Type | MMF & SMF | MMF & SMF | MMF & SMF |
| Hot Pluggable | Yes | Yes | Yes |
| Thermal Management | Indirect | Direct | Indirect |
| Support 800G | No | Yes | No |



For more details about the differences, please refer to the blog: Differences Between QSFP-DD and QSFP+/QSFP28/QSFP56/OSFP/CFP8/COBO

Q4: What does it mean when an electrical or optical channel is PAM4 or NRZ in 400G transceivers?

A4: NRZ is a modulation technique that uses two voltage levels to represent logic 0 and logic 1. PAM4 uses four voltage levels to represent the four two-bit combinations: 11, 10, 01, and 00. At the same symbol rate, a PAM4 signal therefore carries twice as much data as a traditional NRZ signal.

When a signal is referred to as “25G NRZ”, it means the signal is carrying data at 25 Gbps with NRZ modulation. When a signal is referred to as “50G PAM4”, or “100G PAM4”, it means the signal is carrying data at 50 Gbps, or 100 Gbps, respectively, using PAM4 modulation. The electrical connector interface of 400G transceivers is always 8x 50Gb/s PAM4 (for a total of 400Gb/s).
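The naming convention maps directly to arithmetic (a sketch; the function name is illustrative, the lane counts come from the text above):

```python
def lane_rate_gbps(baud_gbd, bits_per_symbol):
    """Data rate of one electrical lane, given its symbol rate and modulation."""
    return baud_gbd * bits_per_symbol

# "25G NRZ" = 25 GBd x 1 bit/symbol; "50G PAM4" = 25 GBd x 2 bits/symbol.
print(lane_rate_gbps(25, 1))      # 25G NRZ lane
print(lane_rate_gbps(25, 2))      # 50G PAM4 lane
print(8 * lane_rate_gbps(25, 2))  # 8 such lanes -> the 400G electrical interface
```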

FAQs on Using 400G Transceivers and Cables in Data Centers

Q1: Can I plug an OSFP module into a 400G QSFP-DD port, or a QSFP-DD module into an OSFP port?

A1: No. OSFP and QSFP-DD are two physically distinct form factors. If you have an OSFP system, then 400G OSFP optics must be used. If you have a QSFP-DD system, then 400G QSFP-DD optics must be used.

Q2: Can a QSFP module be plugged into a 400G QSFP-DD port?

A2: Yes. A QSFP (40G or 100G) module can be inserted into a QSFP-DD port as QSFP-DD is backward compatible with QSFP modules. When using a QSFP module in a 400G QSFP-DD port, the QSFP-DD port must be configured for a data rate of 100G (or 40G).

Q3: Is it possible to have a 400G OSFP on one end of a 400G link and a 400G QSFP-DD on the other end?

A3: Yes. OSFP and QSFP-DD describe the physical form factors of the modules. As long as the Ethernet media types are the same (i.e. both ends of the link are 400G-DR4, or 400G-FR4 etc.), 400G OSFP and 400G QSFP-DD modules will interoperate with each other.

Q4: How can I break out a 400G port and connect to 100G QSFP ports on existing platforms?

A4: There are several ways to break out a 400G port to 100G QSFP ports:

  • QSFP-DD-DR4 to 4x 100G-QSFP-DR over 500m SMF
  • QSFP-DD-XDR4 to 4x 100G-QSFP-FR over 2km SMF
  • QSFP-DD-LR4 to 4x 100G-QSFP-LR over 10km SMF
  • OSFP-400G-2FR4 to 2x QSFP-100G-CWDM4 over 2km SMF

Apart from the 400G transceivers mentioned above, 400G to 4x 100G breakout cables can also be used.

Article Source: FAQs on 400G Transceivers and Cables

Related Articles:

400G Transceiver, DAC, or AOC: How to Choose?

400G OSFP Transceiver Types Overview

100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

NIC, short for network interface card, also called network interface controller, network adapter, or LAN adapter, allows a networking device to communicate with other networking devices. Without a NIC, networking can hardly be done. NICs come in different types and speeds, such as wireless and wired, from 10G to 100G. Among them, the 100G NIC, a product of recent years, hasn't yet taken a large market share. This post describes the 100G NIC and the trends in NICs.

What Is 100G NIC?

A NIC is installed in a computer and used for communicating over a network with another computer, server, or other network device. It comes in many forms, but there are two main types: wired and wireless. Wireless NICs use wireless technologies to access the network, while wired NICs use a DAC cable or a transceiver with fiber patch cable; the most popular wired LAN technology is Ethernet. By application field, NICs can be divided into computer NICs and server NICs. For client computers, one NIC is needed in most cases, but for servers it often makes sense to use more than one NIC to handle more network traffic. Generally, one NIC has one network interface, though some server NICs have two or more interfaces built into a single card.


Figure 1: FS 100G NIC

With data centers expanding from 10G to 100G, the 25G server NIC has gained a firm foothold in the NIC market. In the meantime, growing demand for bandwidth is driving data centers toward 200G/400G, and 100G transceivers have become widespread, which paves the way for 100G servers.

How to Select 100G NIC?

How do you choose the best 100G NIC among all the vendors? If you are stuck on this puzzle, the following sections list the key considerations.

Connector

Connector types like RJ45, LC, FC, and SC are commonly used on NICs, so check which connector type a NIC supports. Today many networks use only RJ45, so choosing a NIC with the right connector may not be as hard as it once was. Even so, some networks use a different interface, such as coax, so check whether the card you plan to buy supports that connection before purchasing.

Bus Type

PCI is a hardware bus used for adding internal components to a computer. Three main PCI bus types are used by servers and workstations today: PCI, PCI-X, and PCI-E. PCI is the most conventional: it has a fixed width of 32 bits and can handle only 5 devices at a time. PCI-X is an upgraded version providing more bandwidth, but with the emergence of PCI-E, PCI-X cards have gradually been replaced. PCI-E is a serial connection, so devices no longer share bandwidth as they do on a conventional bus. PCI-E cards also come in different physical sizes: x16, x8, x4, and x1. Before purchasing a 100G NIC, make sure which PCI-E version and slot width are compatible with your current equipment and network environment.
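A quick way to sanity-check slot choice is to compare the NIC's line rate against the slot's usable throughput. The per-lane figures below are the commonly cited approximate effective rates after encoding overhead, used here as illustrative assumptions:

```python
# Approximate usable throughput per PCI-E lane, in Gbit/s (after encoding overhead).
EFFECTIVE_GBPS_PER_LANE = {"3.0": 7.88, "4.0": 15.75}

def slot_gbps(gen, lanes):
    """Approximate usable throughput of a PCI-E slot."""
    return EFFECTIVE_GBPS_PER_LANE[gen] * lanes

# A 100G NIC needs >= 100 Gbit/s of slot bandwidth to run at line rate.
print(slot_gbps("3.0", 8) >= 100)   # False: Gen3 x8 (~63 G) falls short
print(slot_gbps("3.0", 16) >= 100)  # True:  Gen3 x16 (~126 G) suffices
print(slot_gbps("4.0", 8) >= 100)   # True:  Gen4 x8 (~126 G) suffices
```

The practical takeaway: a 100G NIC generally needs a Gen3 x16 or Gen4 x8 (or wider/newer) slot to avoid a bus bottleneck.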

Hot Swappable

There are some NICs that can be installed and removed without shutting down the system, which helps minimize downtime by allowing faulty devices to be replaced immediately. While you are choosing your 100G NIC, be sure to check if it supports hot swapping.

Trends in NIC

NICs were commonly used in desktop computers in the 1990s and early 2000s, and they are now widely used in servers and workstations with different types and rates. With the popularization of wireless networking and WiFi, wireless NICs have grown in popularity, yet wired cards remain popular for relatively immobile network devices owing to their reliable connections.

NICs have been upgrading for years. As data centers expand at an unprecedented pace and drive the need for higher bandwidth between servers and switches, networking is moving from 10G to 25G and even 100G. Companies like Intel and Mellanox have launched their 100G NICs in succession.

During the upgrade from 10G to 100G in data centers, 25G server connectivity became popular because a 100G uplink can be realized with 4 lanes of 25G, and the 25G NIC is still the mainstream. However, considering that overall data center bandwidth grows quickly and hardware upgrade cycles occur every two years, Ethernet speeds can rise faster than we expect. The 400G data center is just over the horizon, and there is a good chance that the 100G NIC will play an integral role in next-generation 400G networking.

Meanwhile, the need for 100G NICs will drive demand for other network devices as well. For instance, the 100G transceiver, the device between the NIC and the network, is bound to spread. 100G transceivers are now provided by many brands in different types, such as CXP, CFP, and QSFP28. FS supplies a full series of compatible 100G QSFP28 and CFP transceivers that can be matched with the major brands of 100G Ethernet NICs, such as Mellanox and Intel.

Conclusion

Nowadays, with the rise of the next-generation cellular technology, 5G, higher bandwidth is needed for data flows, which paves the way for the 100G NIC. Accordingly, 100G transceivers and 400G network switches will be in great demand. We believe the new era of 5G networks will see the popularization of the 100G NIC and usher in a new era of network performance.

Article Source: 100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

Related Articles:

400G QSFP Transceiver Types and Fiber Connections

How Many 400G Transceiver Types Are in the Market?

NRZ vs. PAM4 Modulation Techniques

The leading trends such as cloud computing and big data drive the exponential traffic growth and the rise of 400G Ethernet. Data center networks are facing a larger bandwidth demand, and innovative technologies are required for infrastructure to meet shifting demands. Currently, there are two different signal modulation techniques examined for next-generation Ethernet: non-return to zero (NRZ), and pulse-amplitude modulation 4-level (PAM4). This article will take you through these two modulation techniques and compare them to find the optimal choice for 400G Ethernet.

NRZ and PAM4 Basics

NRZ is a modulation technique using two signal levels to represent the 1/0 information of a digital logic signal. Logic 0 is a negative voltage, and Logic 1 is a positive voltage. One bit of logic information can be transmitted or received within each clock period. The baud rate, or the speed at which a symbol can change, equals the bit rate for NRZ signals.

Figure: NRZ signaling

PAM4 is a technology that uses four different signal levels for transmission, with each symbol period representing 2 bits of logic information (levels 0, 1, 2, 3). To achieve that, the waveform has 4 levels carrying the 2-bit values 00, 01, 10, and 11, as shown below. With two bits per symbol, the baud rate is half the bit rate.

Figure: PAM4 signaling
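A minimal sketch of the two-bits-per-symbol idea; the Gray-coded level mapping shown here is one common choice, assumed for illustration rather than mandated by any text above:

```python
# Map 2-bit groups to 4 amplitude levels (Gray-coded: adjacent levels differ by 1 bit).
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence into PAM4 symbols, 2 bits per symbol."""
    pairs = zip(bits[0::2], bits[1::2])
    return [PAM4_LEVELS[p] for p in pairs]

symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])
print(symbols)       # [-3, -1, 1, 3]
print(len(symbols))  # 4 symbols carry 8 bits: the baud rate is half the bit rate
```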

Comparison of NRZ vs. PAM4

Bit Rate

A transmission with NRZ has the same baud rate and bit rate because one symbol carries one bit: a 28Gbps (gigabits per second) bit rate is equivalent to a 28GBd (gigabaud) symbol rate. Because PAM4 carries 2 bits per symbol, 56Gbps PAM4 is transmitted on the line at 28GBd. PAM4 therefore doubles the bit rate for a given baud rate over NRZ, bringing higher efficiency to high-speed optical transmission such as 400G. To be more specific, a 400Gbps Ethernet interface can be realized with eight lanes at 50Gbps or four lanes at 100Gbps using PAM4 modulation.

Signal Loss

PAM4 allows twice as much information to be transmitted per symbol cycle as NRZ. Therefore, at the same bitrate, PAM4 only has half the baud rate, also called symbol rate, of the NRZ signal, so the signal loss caused by the transmission channel in PAM4 signaling is greatly reduced. This key advantage of PAM4 allows the use of existing channels and interconnects at higher bit rates without doubling the baud rate and increasing the channel loss.

Signal-to-noise Ratio (SNR) and Bit Error Rate (BER)

As the following figure shows, the eye height for PAM4 is 1/3 of that for NRZ, so PAM4 incurs an SNR (signal-to-noise ratio) penalty of about 9.54 dB in the link budget, which impacts signal quality and introduces additional constraints in high-speed signaling. The vertical eye opening, only a third of NRZ's, makes PAM4 signaling more sensitive to noise, resulting in a higher bit error rate. However, PAM4 was made practical by forward error correction (FEC), which helps the link system achieve the desired BER.
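The 9.54 dB figure comes straight from the eye-height ratio: an amplitude penalty of 3x expressed in decibels.

```python
import math

# PAM4 eye height is 1/3 of NRZ's, so the amplitude-based SNR penalty is 20*log10(3).
penalty_db = 20 * math.log10(3)
print(round(penalty_db, 2))  # 9.54
```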

Figure: NRZ vs. PAM4 eye diagrams

Power Consumption

Reducing BER in a PAM4 channel requires equalization at the Rx end and pre-compensation at the Tx end, both of which consume more power than an NRZ link at a given clock rate. This means PAM4 transceivers generate more heat at each end of the link. However, the state-of-the-art silicon photonics (SiPh) platform can effectively reduce energy consumption and can be used in 400G transceivers. For example, the FS silicon photonics 400G transceiver combines SiPh chips with PAM4 signaling, making it a cost-effective, lower-power solution for 400G data centers.

Shift from NRZ to PAM4 for 400G Ethernet

With massive amounts of data transmitted across the globe, many organizations are pursuing migration toward 400G. Initially, 16× 25G NRZ lanes were used for 400G Ethernet, as in 400G-SR16, but the link loss and size of that scheme cannot meet the demands of 400G Ethernet. Because PAM4 enables higher bit rates at half the baud rate, designers can continue to use existing channels at 400G Ethernet data rates. As a result, PAM4 has overtaken NRZ as the preferred modulation for electrical and optical signal transmission in 400G optical modules.

Article Source: NRZ vs. PAM4 Modulation Techniques

Related Articles:
400G Data Center Deployment Challenges and Solutions
400G ZR vs. Open ROADM vs. ZR+
400G Multimode Fiber: 400G SR4.2 vs 400G SR8

400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications


400G ZR and ZR+ coherent pluggable optics have become new solutions for high-density networks with data rates from 100G to 400G, featuring low power consumption in a small footprint. Let's see how the latest generation of 400G ZR and 400G ZR+ optics extends economic benefits to meet the requirements of network operators, maximizes fiber utilization, and reduces the cost of data transport.

400G ZR & ZR+: Definitions

What Is 400G ZR?

400G ZR coherent optical modules are compliant with the OIF-400ZR standard, ensuring industry-wide interoperability. They provide 400Gbps of optical bandwidth over a single optical wavelength using DWDM (dense wavelength division multiplexing) and higher-order modulation such as 16 QAM. Implemented predominantly in the QSFP-DD form factor, 400G ZR will serve the specific requirement for massively parallel data center interconnect of 400GbE with distances of 80-120km. To learn more about 400G transceivers: How Many 400G Transceiver Types Are in the Market?

Overview of 400G ZR+

ZR+ is a range of coherent pluggable solutions with line capacities up to 400Gbps and reaches well beyond 80km supporting various application requirements. The specific operational and performance requirements of different applications will determine what types of 400G ZR+ coherent plugs will be used in networks. Some applications will take advantage of interoperable, multi-vendor ecosystems defined by standards body or MSA specifications and others will rely on the maximum performance achievable in the constraints of a pluggable module package. Four categories of 400G ZR+ applications will be explained in the following part.

400G ZR & ZR+: Applications

400G ZR – Application Scenario

The arrival of 400G ZR modules has ushered in a new era of DWDM technology marked by open, standards based, and pluggable DWDM optics, enabling true IP-over-DWDM. 400G ZR is often applied for point-to-point DCI (up to 80km), making the task of interconnecting data centers as simple as connecting switches inside a data center (as shown below).

Figure 1: 400G ZR Applied in Single-span DCI

Four Primary Deployment Applications for 400G ZR+

Extended-reach P2P Packet

One definition of ZR+ is a straightforward extension of 400G ZR transcoded mappings of Ethernet with a higher performance FEC to support longer reaches. In this case, 400G ZR+ modules are narrowly defined as supporting a single-carrier 400Gbps optical line rate and transporting 400GbE, 2x 200GbE or 4x 100GbE client signals for point-to-point reaches (up to around 500km). This solution is specifically dedicated to packet transport applications and destined for router platforms.

Multi-span Metro OTN

Another definition of ZR+ is the inclusion of support for OTN, such as client mapping and multiplexing into FlexO interfaces. This coherent pluggable solution is intended to support the additional requirements of OTN networks, carry both Ethernet and OTN clients, and address transport in multi-span ROADM networks. This category of 400G ZR+ is required where demarcation is important to operators, and is destined primarily for multi-span metro ROADM networks.

Figure 2: 400G ZR+ Applied in Multi-span Metro OTN

Multi-span Metro Packet

The third definition of ZR+ is support for extended reach Ethernet or packet transcoded solution that is further optimized for critical performance such as latency. This 400G ZR+ coherent pluggable with high performance FEC and sophisticated coding algorithms supports the longest reach over 1000km multi-span metro packet transport.

Figure 3: 400G ZR+ Applied in Multi-span Metro Packet

Multi-span Metro Regional OTN

The fourth definition of ZR+ supports both Ethernet and OTN clients. This coherent pluggable also leverages high performance FEC and PCS, along with tunable optical filters and amplifiers for maximum reach. It supports a rich feature set of OTN network functions for deployment over both fixed and flex-grid line systems. This category of 400G ZR+ provides solutions with higher performance to address a much wider range of metro/regional packet networking requirements.

400G ZR & ZR+: What Makes Them Suitable for Longer-reach Transmission in Data Center?

Coherent Technology Adopted by 400G ZR & ZR+

Coherent technology uses three degrees of freedom (the amplitude, phase, and polarization of light) to put more data onto the wave being transmitted. In this way, coherent optics can transport more data over a single fiber for greater distances using higher-order modulation, which yields better spectral efficiency. 400G ZR and ZR+ are a leap forward in the application of coherent technology: with higher-order modulation and DWDM unlocking high bandwidth, 400G ZR and ZR+ modules reduce cost and complexity for high-level data center interconnects.
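To see why 16 QAM with dual polarization fits 400G onto a single wavelength, count bits per symbol. The figures below are illustrative; real 400ZR lines run at a somewhat higher symbol rate (commonly quoted near 60 GBd) to carry FEC and framing overhead:

```python
import math

bits_per_symbol = math.log2(16)  # 16 QAM -> 4 bits per symbol
polarizations = 2                # dual polarization doubles capacity
payload_gbps = 400

# Minimum symbol rate needed before FEC/framing overhead:
baud_gbd = payload_gbps / (bits_per_symbol * polarizations)
print(baud_gbd)  # 50.0 GBd
```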

Importance of 400G ZR & ZR+

400G ZR and 400G ZR+ coherent pluggable optics take implementation challenges to the next level by adding some of the elements for high-performance solutions while pushing component design for low-power, pluggability, and modularity.

Conclusion

There are still many challenges in making 400G ZR and ZR+ transceiver modules fit the small size and power budget of OSFP or QSFP-DD packages while also achieving interoperation and meeting cost and volume targets. Even so, with 400Gbps of optical bandwidth and low power consumption, 400G ZR and ZR+ may very well be the new generation of longer-reach optical communications.

Original Source: 400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications

400G OSFP Transceiver Types Overview


OSFP stands for Octal Small Form-factor Pluggable; the module carries 8 electrical lanes, running at 50Gb/s each, for a total bandwidth of 400Gb/s. This post introduces the 400G OSFP transceiver types, their fiber connections, and some Q&As about OSFP.

400G OSFP Transceiver Types

Listed below are the current main 400G OSFP transceiver types: OSFP SR8, OSFP DR4, OSFP DR4+, OSFP FR4, OSFP 2FR4, and OSFP LR4, grouped by the two transmission media they support (multimode and single-mode fiber).

Fiber Connections for 400G OSFP Transceivers

400G OSFP SR8

  • 400G OSFP SR8 to 400G OSFP SR8 over an MTP-16 cable.
Figure 1: OSFP SR8 to OSFP SR8
  • 400G OSFP SR8 to 2× 200G SR4 over an MTP-16 to 2× MPO-8 breakout cable.
Figure 2: OSFP SR8 to 2× 200G SR4
  • 400G OSFP SR8 to 8× 50G SFP via an MTP-16 to 8× LC duplex breakout cable, reaching up to 100 m.
Figure 3: OSFP SR8 to 8× 50G SFP

400G OSFP DR4

  • 400G OSFP DR4 to 400G OSFP DR4 over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4 to 4× 100G DR over an MTP-12/MPO-12 to 4× LC duplex breakout cable.
Figure 4: OSFP DR4 to 4× 100G DR

400G OSFP XDR4/DR4+

  • 400G OSFP DR4+ to 400G OSFP DR4+ over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4+ to 4× 100G DR over an MTP-12/MPO-12 to 4× LC duplex breakout cable.
Figure 5: OSFP DR4+ to 4× 100G DR

400G OSFP FR4

400G OSFP FR4 to 400G OSFP FR4 over duplex LC cable.

Figure 6: OSFP FR4 to OSFP FR4

400G OSFP 2FR4

OSFP 2FR4 can break out to 2× 200G and interoperate with 2× 200G-FR4 QSFP transceivers via a 2× CS to 2× LC duplex breakout cable.

400G OSFP Transceivers: Q&A

Q: What does “SR8”, “DR4”, “XDR4”, “FR4”, and “LR4” mean?

A: “SR” refers to short reach, and “8” implies there are 8 optical channels. “DR” refers to 500m reach using single-mode fiber, and “4” implies there are 4 optical channels. “XDR4” is short for “eXtended reach DR4” (also called DR4+), reaching 2km. “FR” refers to 2km reach using single-mode fiber. And “LR” refers to 10km reach using single-mode fiber.
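
The naming scheme can be summarized in a small lookup table. The reach values below follow the common IEEE/MSA conventions (DR 500 m, FR 2 km, LR 10 km; XDR/DR4+ typically 2 km); this table is an illustrative summary, not a substitute for vendor datasheets.

```python
# Nominal figures per suffix; check datasheets for exact specs.
OSFP_400G_TYPES = {
    "SR8":  {"fiber": "multimode",   "optical_lanes": 8, "reach_m": 100},
    "DR4":  {"fiber": "single-mode", "optical_lanes": 4, "reach_m": 500},
    "XDR4": {"fiber": "single-mode", "optical_lanes": 4, "reach_m": 2000},  # a.k.a. DR4+
    "FR4":  {"fiber": "single-mode", "optical_lanes": 4, "reach_m": 2000},
    "LR4":  {"fiber": "single-mode", "optical_lanes": 4, "reach_m": 10000},
}

def describe(suffix):
    """Render one entry as a human-readable summary line."""
    t = OSFP_400G_TYPES[suffix]
    return f"{suffix}: {t['optical_lanes']} optical lanes over {t['fiber']} fiber, up to {t['reach_m']} m"

print(describe("DR4"))  # DR4: 4 optical lanes over single-mode fiber, up to 500 m
```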

Q: Can I plug an OSFP transceiver module into a QSFP-DD port?

A: No. QSFP-DD and OSFP are totally different form factors. For more information about QSFP-DD transceivers, you can refer to 400G QSFP-DD Transceiver Types Overview. You can use only one form factor in a given system; for example, an OSFP system requires OSFP transceivers and cables.

Q: Can I plug a 100G QSFP28 module into an OSFP port?

A: Yes, but only with an adapter. When using a QSFP28 module in an OSFP port, the OSFP port must be configured for a data rate of 100G instead of 400G.

Q: What other breakout options are possible apart from using OSFP modules mentioned above?

A: 400G OSFP DACs & AOCs can also be used for 400G breakout connections. See 400G Direct Attach Cables (DAC & AOC) Overview for more information about 400G DACs & AOCs.

Original Source: 400G OSFP Transceiver Types Overview

400G Ethernet Manufacturers and Vendors

New data-intensive applications have led to a dramatic increase in network traffic, raising the demand for higher processing speeds, lower latency, and greater storage capacity. These require higher network bandwidth, up to 400G or beyond, so the 400G market is currently growing rapidly. Many organizations joined the ranks of 400G equipment vendors early and are already reaping the benefits. This article will take you through the 400G Ethernet market trend and some global 400G equipment vendors.

The 400G Era

The emergence of new services, such as 4K VR, the Internet of Things (IoT), and cloud computing, is driving up the number of connected devices and internet users. An IEEE report forecasts that “device connections will grow from 18 billion in 2017 to 28.5 billion devices by 2022,” and that the number of internet users will soar “from 3.4 billion in 2017 to 4.8 billion in 2022.” Hence, network traffic is exploding, with its average annual growth rate holding at a high 26%.

Annual Growth of Network Traffic

Facing this rapid growth of network traffic, 100GE/200GE ports are unable to meet the connectivity demand from a large number of customers. Many organizations and enterprises, especially hyperscale data centers and cloud operators, are aggressively adopting next-generation 400G network infrastructure to handle their workloads. 400G provides an ideal way for operators to meet high-capacity network requirements, reduce operational costs, and achieve sustainability goals. Due to the promising prospects of the 400G market, many IT infrastructure providers are scrambling to join the competition, launching a variety of 400G products. The Dell’Oro Group indicates that “the ecosystem of 400G technologies, from silicon to optics, is ramping,” with large-scale deployments starting to contribute meaningfully to the market in 2021. They forecast that 400G shipments will exceed 15 million ports by 2023, and that 400G will be widely deployed in all of the largest core networks in the world. In addition, according to GLOBE NEWSWIRE, the global 400G transceiver market is expected to reach $22.6 billion in 2023. 400G Ethernet is about to be deployed at scale, ushering in the 400G era.

400G Growth

Companies Offering 400G Networking Equipment

Many top companies seized the good opportunity of the fast-growing 400G market, and launched various 400G equipment. Many well-known IT infrastructure providers, which laid out 400G products early on, have become the key players in the 400G market after years of development, such as Cisco, Arista, Juniper, etc.

400G Equipment Vendors

Cisco

Cisco foresaw the need for the Internet and its infrastructure at a very early stage and, as a result, has put a stake in the ground that no other company has been able to eclipse. Over the years, Cisco has become a top provider of software and solutions and a dominant player in the highly competitive 25/50/100Gb space. Cisco entered the 400G space with networking hardware and optics announced on October 31, 2018; its Nexus switches are its most important 400G products. Cisco primarily expects to help customers migrate to 400G Ethernet with solutions including Cisco’s ACI (Application Centric Infrastructure), streamlined operations, Cisco Nexus data networking switches, and the Cisco Network Assurance Engine (NAE), amongst others. Cisco has seized the market opportunity and continues to grow sales with its 400G products: it reported second-quarter revenue of $12.7 billion, up 6% year over year, demonstrating the good prospects of the 400G Ethernet market.

Arista Networks

Arista Networks, founded in 2008, provides software-driven cloud networking solutions for large data center storage and computing environments. Arista is smaller than rival Cisco, but it has made significant gains in market share and product development over the last several years. Arista announced its 400G platforms and optics on October 23, 2018, marking its entry into the 400G Ethernet market. Today, Arista focuses on comprehensive 400G platforms that include various switch series and 400G optical modules for large-scale cloud, leaf-spine, routing transformation, and hyperscale I/O-intensive applications. The launch of Arista’s diverse 400G switches has also driven significant sales and market share growth: according to IDC, Arista Networks saw a 27.7 percent full-year Ethernet switch revenue rise in 2021. Arista has put legitimate market share pressure on leader Cisco over the past five years.

Juniper Networks

Juniper is a leading provider of networking products. With the arrival of the 400G era, Juniper offers comprehensive 400G routing and switching platforms: packet transport routers, universal routing platforms, universal metro routers, and switches. Recently, it also introduced 400G coherent pluggable optics to further address 400G data communication needs. Juniper believes that 400G will become the new data rate currency for future builds and is fully prepared for the 400G market competition. And now, Juniper has become the key player in the 400G market.

Huawei Technologies

Huawei, a massive Chinese tech company, is gaining momentum in its data center networking business. Huawei is already in the “challenger” category behind the above-mentioned industry leaders, and is getting closer to the “leader” area. At OFC 2018, Huawei officially released its 400G optical network solution for commercial use, joining the ranks of 400G product vendors, and has seen clear economic growth since: Huawei accounted for 28.7% of the global communication equipment market last year, up 7% year on year. As Huawei’s 400G platforms continue to roll out, related sales are expected to rise further, and the broad Chinese market will further strengthen Huawei’s position in the global 400G space.

FS

Founded in 2009, FS is a global high-tech company providing high-speed communication network solutions and services to several industries. Through continuous technology upgrades, a professional end-to-end supply chain, and brand partnerships with top vendors, FS serves customers across 200 countries with one of the industry’s most comprehensive and innovative solution portfolios. FS is one of the earliest 400G vendors in the world, with a diverse portfolio of 400G products, including 400G switches, optical transceivers, cables, etc. FS believes 400G Ethernet is an inevitable trend in the current networking market and has seized this opportunity to win a large number of loyal customers in the 400G market. Going forward, FS will continue to provide customers with high-quality, reliable 400G products for the migration to 400G Ethernet.

Getting Started with 400G Ethernet

400G is the next generation of cloud infrastructure, driving next-generation data center networks. Many organizations and enterprises are planning to migrate to 400G. The companies mentioned above have provided 400G solutions for several years, making them a good choice for enterprises. There are also lots of other organizations trying to enter the ranks of 400G manufacturers and vendors, driving the growing prosperity of the 400G market. Remember to take into account your business needs and then choose the right 400G product manufacturer and vendor for your investment or purchase.

Data Center Containment: Types, Benefits & Challenges

Over the past decade, data center containment has experienced a high rate of implementation by many data centers. It can greatly improve the predictability and efficiency of traditional data center cooling systems. This article will elaborate on what data center containment is, common types of it, and their benefits and challenges.

What Is Data Center Containment?

Data center containment is the separation of cold supply air from the hot exhaust air from IT equipment so as to reduce operating cost, optimize power usage effectiveness, and increase cooling capacity. Containment systems enable uniform and stable supply air temperature to the intake of IT equipment and a warmer, drier return air to cooling infrastructure.

Types of Data Center Containment

There are mainly two types of data center containment, hot aisle containment and cold aisle containment.

Hot aisle containment encloses warm exhaust air from IT equipment in data center racks and returns it back to cooling infrastructure. The air from the enclosed hot aisle is returned to cooling equipment via a ceiling plenum or duct work, and then the conditioned air enters the data center via raised floor, computer room air conditioning (CRAC) units, or duct work.

Hot aisle containment

Cold aisle containment encloses cold aisles where cold supply air is delivered to cool IT equipment. So the rest of the data center becomes a hot-air return plenum where the temperature can be high. Physical barriers such as solid metal panels, plastic curtains, or glass are used to allow for proper airflow through cold aisles.

Cold aisle containment

Hot Aisle vs. Cold Aisle

There are mixed views on whether it’s better to contain the hot aisle or the cold aisle. Both containment strategies have their own benefits as well as challenges.

Hot aisle containment benefits

  • The open areas of the data center stay cool, so visitors to the room will not mistakenly think the IT equipment is insufficiently cooled. In addition, it allows some low-density areas to be left un-contained if desired.
  • It is generally considered to be more effective. Any leakages that come from raised floor openings in the larger part of the room go into the cold space.
  • With hot aisle containment, low-density network racks and stand-alone equipment like storage cabinets can be situated outside the containment system, and they will not get too hot, because they are able to stay in the lower temperature open areas of the data center.
  • Hot aisle containment typically adjoins the ceiling where fire suppression is installed. With a well-designed space, it will not affect normal operation of a standard grid fire suppression system.

Hot aisle containment challenges

  • It is generally more expensive. A contained path is needed for air to flow from the hot aisle all the way to cooling units. Often a drop ceiling is used as return air plenum.
  • High temperatures in the hot aisle can be undesirable for data center technicians. When they need to access IT equipment and infrastructure, a contained hot aisle can be a very uncomfortable place to work. But this problem can be mitigated using temporary local cooling.

Cold aisle containment benefits

  • It is easy to implement without the need for additional architecture to contain and return exhaust air such as a drop ceiling or air plenum.
  • Cold aisle containment is less expensive to install as it only requires doors at ends of aisles and baffles or roof over the aisle.
  • Cold aisle containment is typically easier to retrofit in an existing data center. This is particularly true for data centers that have overhead obstructions such as existing duct work, lighting and power, and network distribution.

Cold aisle containment challenges

  • When utilizing a cold aisle system, the rest of the data center becomes hot, resulting in high return air temperatures. It also may create operational issues if any non-contained equipment such as low-density storage is installed in the general data center space.
  • The conditioned air that leaks from openings under equipment like PDUs and from raised floor tiles tends to enter air paths that return to cooling units. This reduces the efficiency of the system.
  • In many cases, cold aisles have intermediate ceilings over the aisle. This may affect the overall fire protection and lighting design, especially when added to an existing data center.

How to Choose the Best Containment Option?

Every data center is unique. To find the most suitable option, you have to take into account a number of aspects. The first thing is to evaluate your site and calculate the Cooling Capacity Factor (CCF) of the computer room. Then observe the unique layout and architecture of each computer room to discover conditions that make hot aisle or cold aisle containment preferable. With adequate information and careful consideration, you will be able to choose the best containment option for your data center.
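
The CCF calculation mentioned above can be sketched in a few lines. This follows the commonly cited Upsite Technologies definition (running rated cooling capacity divided by 110% of the IT critical load, the extra 10% approximating ancillary loads such as lighting); the example numbers and the interpretation threshold are illustrative, not a standard.

```python
def cooling_capacity_factor(running_cooling_kw, it_load_kw, ancillary_factor=1.1):
    """CCF = total running rated cooling capacity / (IT critical load + ~10%
    for ancillary loads), per the common Upsite definition."""
    return running_cooling_kw / (it_load_kw * ancillary_factor)

# Hypothetical room: 500 kW of running cooling capacity for a 300 kW IT load
ccf = cooling_capacity_factor(500, 300)  # ~1.52
```

A CCF well above about 1.2 often signals stranded cooling capacity that containment can help reclaim, while a value near or below 1.0 suggests the room is under-cooled.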

Article Source: Data Center Containment: Types, Benefits & Challenges

Related Articles:

What Is a Containerized Data Center: Pros and Cons

The Most Common Data Center Design Missteps

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

The COVID-19 pandemic caused several companies to shut down, and the implications were reduced production and altered supply chains. In the tech world, where silicon microchips are the heart of everything electronic, raw material shortage became a barrier to new product creation and development.

During the lockdown periods, some essential workers were required to stay home, which meant chip manufacturing was unavailable for several months. By the time lockdown was lifted and the world embraced the new normal, the rising demand for consumer and business electronics was enough to ripple up the supply chain.

Below, we’ve discussed the challenges associated with the current chip shortage, what to expect moving forward, and the possible interventions necessary to overcome the supply chain constraints.

Challenges Caused by the Current Chip Shortage

As technology and rapid innovation sweep across industries, semiconductor chips have become an essential part of manufacturing, from devices like switches, wireless routers, computers, and automobiles to basic home appliances.

devices

To understand and quantify the impact this chip shortage has caused spanning the industry, we’ll need to look at some of the most affected sectors. Here’s a quick breakdown of how things have unfolded over the last eighteen months.

Automobile Industry

Automotive plants in North America and Europe have slowed or stopped production due to a lack of computer chips. Major automakers like Tesla, Ford, BMW, and General Motors have all been affected. The major implication is that the global automobile industry will manufacture 4 million fewer cars by the end of 2021 than earlier planned, forfeiting an average of $110 billion in revenue.

Consumer Electronics

Consumer electronics such as desktop PCs and smartphones rose in demand throughout the pandemic, thanks to the shift to virtual learning among students and the rise in remote working. At the start of the pandemic, several automakers slashed their vehicle production forecasts before abandoning open semiconductor chip orders. And while the consumer electronics industry stepped in and scooped most of those microchips, the supply couldn’t catch up with the demand.

Data Centers

Most chip fabrication companies like Samsung Foundries, Global Foundries, and TSMC prioritized high-margin orders from PC and data center customers during the pandemic. And while this has given data centers a competitive edge, it isn’t to say that data centers haven’t been affected by the global chip shortage.

data center

Some of the components data centers have struggled to source include those needed to put together their data center switching systems. These include BMC chips, capacitors, resistors, circuit boards, etc. Another challenge is the extended lead times due to wafer and substrate shortages, as well as reduced assembly capacity.

LED Lighting

LED backlights, common in most display screens, are powered by hard-to-find semiconductor chips. Gadgets with LED lighting features are now highly priced due to the shortage of raw materials and increased market demand. This is expected to continue into early 2022.

Renewable Energy- Solar and Turbines

Renewable energy systems, particularly solar and turbines, rely on semiconductors and sensors to operate. The global supply chain constraints have hurt the industry and even forced some energy solutions manufacturers like Enphase Energy to

Semiconductor Trends: What to Expect Moving Forward

In response to the global chip shortage, several component manufacturers have ramped up production to help mitigate the shortages. However, top electronics and semiconductor manufacturers say the crunch will only worsen before it gets better. Most of these industry leaders speculate that the semiconductor shortage could persist into 2023.

Based on the ongoing disruption and supply chain volatility, various analysts in a recent CNBC article and Bloomberg interview echoed their views, and many are convinced that the coming year will be challenging. Here are some of the key takeaways:

Pat Gelsinger, CEO of Intel Corp., noted in April 2021 that the chip shortage would recover after a couple of years.

A DigiTimes report found that lead times for Intel and AMD server and data center ICs have extended to 45 to 66 weeks.

The world’s third-largest EMS and OEM provider, Flex Ltd., expects the global semiconductor shortage to proceed into 2023.

In May 2021, Global Foundries, the fourth-largest contract semiconductor manufacturer, signed a $1.6 billion, 3-year silicon supply deal with AMD, and in late June, it launched its new $4 billion, 300mm-wafer facility in Singapore. Yet, the company says the added capacity will not increase component output until 2023 at the earliest.

TSMC, one of the leading pure-play foundries in the industry, says it won’t meaningfully increase component output until 2023. However, it is optimistic that it can ramp up fabrication of automotive micro-controllers by 60% by the end of 2021.

From the industry insights above, it’s evident that despite the many efforts that major players put into resolving the global chip shortage, the bottlenecks will probably persist throughout 2022.

Additionally, some industry observers believe that the move by big tech companies such as Amazon, Microsoft, and Google to design their own chips for cloud and data center business could worsen the chip shortage crisis and other problems facing the semiconductor industry.

In a recent article, the authors hint that the entry of Microsoft, Amazon, and Google into the chip design market will be a turning point for the industry. These tech giants have the resources to design superior, cost-effective chips of their own, resources that most chip designers like Intel have only in limited proportions.

Since these tech giants will become independent, each will be looking to create component stockpiles to endure long waits and meet production demands between inventory refreshes. Again, this will further worsen the existing chip shortage.

Possible Solutions

To stay ahead of the game, major industry players such as chip designers and manufacturers and the many affected industries have taken several steps to mitigate the impacts of the chip shortage.

For many chip makers, expanding their production capacity has been an obvious response. Other suppliers in certain regions decided to stockpile and limit exports to better respond to market volatility and political pressures.

Similarly, improving the yields or increasing the number of chips manufactured from a silicon wafer is an area that many manufacturers have invested in to boost chip supply by some given margin.

chip manufacturing

Here are the other possible solutions that companies have had to adopt:

  • Embracing flexibility to accommodate older chip technologies that may not be “state of the art” but are still better than nothing.
  • Leveraging software solutions such as smart compression and compilation to build efficient AI models that help unlock hardware capabilities.

Conclusion

The latest global chip shortage has caused severe shocks in the semiconductor supply chain, affecting industries from automobiles and consumer electronics to data centers, LED lighting, and renewables.

Industry thought leaders believe that shortages will persist into 2023 despite the current build-up in mitigation measures. And while full recovery will not be witnessed any time soon, some chip makers are optimistic that they will ramp up fabrication to contain the demand among their automotive customers.

That said, staying ahead of the game is an all-time struggle considering this is an issue affecting every industry player, regardless of size or market position. Expanding production capacity, accommodating older chip technologies, and leveraging software solutions to unlock hardware capabilities are some of the promising solutions.

Article Source: The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Related Articles:

Impact of Chip Shortage on Datacenter Industry

Infographic – What Is a Data Center?

The Most Common Data Center Design Missteps

Introduction

Data center design aims to provide IT equipment with a high-quality, standard, safe, and reliable operating environment, fully meeting the environmental requirements for stable and reliable operation of IT devices and prolonging the service life of computer systems. Design is the most important part of data center construction, directly relating to the success or failure of long-term data center planning, so it should be professional, advanced, integral, flexible, safe, reliable, and practical.

9 Missteps in Data Center Design

Data center design is one of the effective solutions to overcrowded or outdated data centers, while inappropriate design creates obstacles for growing enterprises. Poor planning can waste valuable funds, create more issues, and increase operating expenses. Here are 9 mistakes to be aware of when designing a data center.

Miscalculation of Total Cost

Data center operation expense is made up of two key components: maintenance costs and operating costs. Maintenance costs are the costs associated with maintaining all critical facility support infrastructure, such as OEM equipment maintenance contracts and data center cleaning fees. Operating costs are the costs associated with day-to-day operations and field personnel, such as the creation of site-specific operational documentation, capacity management, and QA/QC policies and procedures. If you plan to build or expand a business-critical data center, the best approach is to focus on three basic parameters: capital expenditures, operating and maintenance expenses, and energy costs. Take any component out of the equation, and the resulting model may not properly align with the organization’s risk profile and business spending profile.
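
The three parameters above can be combined into a toy annual cost model. Everything here is a simplified illustration, not a costing methodology: the function, its parameters, and the sample figures are hypothetical, and 8,760 is simply the number of hours in a year.

```python
HOURS_PER_YEAR = 8760

def annual_data_center_cost(capex, amortization_years, maintenance_per_year,
                            operations_per_year, it_load_kw, pue, energy_price_per_kwh):
    """Toy model: amortized capital expenditure + maintenance + operations
    + energy (facility power approximated as IT load * PUE)."""
    energy_cost = it_load_kw * pue * HOURS_PER_YEAR * energy_price_per_kwh
    return capex / amortization_years + maintenance_per_year + operations_per_year + energy_cost

# Hypothetical 500 kW facility at PUE 1.5 and $0.10/kWh
total = annual_data_center_cost(10_000_000, 10, 200_000, 300_000, 500, 1.5, 0.10)
```

Even this crude model makes the point in the text concrete: dropping the energy term (here the largest single line item) would materially misstate the spending profile.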

Unspecified Planning and Infrastructure Assessment

Infrastructure assessment and clear planning are essential for data center construction. For example, every construction project needs a chain of command that clearly defines areas of responsibility and who is responsible for each aspect of the data center design. Those involved need to evaluate the potential applications of the data center infrastructure and the types of connectivity they require. In general, planning involves a rack-by-rack blueprint covering network connectivity and mobile devices, power requirements, system topology, cooling facilities, virtual local and on-premises networks, third-party applications, and operational systems. Given the importance of data center design, you should thoroughly understand the required functionality before construction begins; otherwise, you will fall short and spend more money on maintenance.

data center

Inappropriate Design Criteria

Two missteps can send enterprises into an overspending death spiral. First, everyone has different design ideas, but not everyone is right. Second, the actual business may be mismatched with the desired vision, failing to support the chosen kilowatts per square foot or per rack. Overplanning in design is a waste of capital, and higher-tier facilities also bring higher operational and energy costs. A good data center designer establishes the proper design criteria and performance characteristics first, then builds capital expenditure and operating expenses around them.

Unsuitable Data Center Site

Enterprises often need to find an ideal building location when designing a data center, and missing site-critical information leads to problems later. Large users are well aware of this and scrutinize power availability and cost, fiber connectivity, and force majeure factors. Smaller, baseline users are often constrained to their core business areas, which dictates whether they need to build new or refurbish. Hence, premature site selection or an unreasonable geographic location will fail to meet the design requirements.

Pre-design Space Planning

It is also very important to plan the space capacity inside the data center. The ratio of raised floor to support space can be as high as 1 to 1, and mechanical and electrical equipment needs enough room to be accommodated. The planning of office and IT equipment storage areas also needs to be considered. Therefore, it is critical to estimate and plan space capacity during data center design. Estimation errors can make the design unsuitable for the site, which means suspending the project for re-evaluation and possibly repurchasing components.

Mismatched Business Goals

Enterprises need to clearly understand their business goals when commissioning a data center so that they can complete the design. Beyond the immediate goals, considerations include which specific applications the data center supports, additional computing power, and later business expansion. Additionally, enterprises need to communicate these goals to data center architects, engineers, and builders to ensure that the overall design meets business needs.

Design Limitations

The importance of modular design is well publicized in the data center industry. Although the modular approach, adding extra infrastructure only when it is needed, preserves capital, it doesn't guarantee complete success. Modular and flexible design is the key to long-term stable operation and should match your data center plans. On the power system, make sure UPS (Uninterruptible Power Supply) capacity can be added to existing modules without system disruption. Input and output distribution system design shouldn't be overlooked either; it allows the data center to adapt to future changes in the underlying construction standards.

Improper Data Center Power Equipment

To design a data center that maximizes equipment uptime and reduces power consumption, you must choose the right power equipment based on the projected capacity. Operators often overprovision, for example by budgeting for triple the expected server usage to ensure adequate power, which is wasteful. Long-term power consumption trends are what you need to consider. Install automatic power-on generators and backup power sources, and choose equipment that provides enough power to support the data center without waste.
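
A trend-based sizing approach can be sketched as follows. The function, its parameters, and the module-based N+1 logic are hypothetical illustrations of the idea (project the load forward, then provision in increments), not a standard sizing formula.

```python
import math

def ups_capacity_kw(current_it_load_kw, annual_growth_rate, years,
                    module_kw=100, n_plus_one=True):
    """Size UPS capacity from a projected load trend: compound the load
    forward, round up to whole modules, optionally add one module for N+1."""
    projected = current_it_load_kw * (1 + annual_growth_rate) ** years
    modules = math.ceil(projected / module_kw)
    if n_plus_one:
        modules += 1  # one redundant module instead of blanket overprovisioning
    return modules * module_kw

# 400 kW today, growing 10%/year over a 5-year horizon, 100 kW modules
print(ups_capacity_kw(400, 0.10, 5))  # 800 (7 modules for ~644 kW, plus 1 redundant)
```

Compared with the "triple the current load" rule of thumb (1,200 kW here), sizing from the trend plus modular redundancy covers the same horizon with far less stranded capacity.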

Over-complicated Design

In many cases, redundancy targets introduce complexity, and if you combine multiple ways of building a modular system, things can quickly get complicated. An over-complex data center design means more equipment and components, and every component is a potential source of failure, which can cause problems such as:

  • Human error. More complexity means more operational mistakes, increasing risk to system data.
  • Expense. Beyond the extra equipment and components themselves, maintaining and repairing failed components incurs more charges.
  • Maintainability. If maintainability isn't considered in the design, normal system operation and even personnel safety can be affected when the IT team needs to operate or service equipment.

Conclusion

Avoid the nine missteps above to find design solutions for data center IT infrastructure and build a data center that suits your business. Data center design missteps have some impacts on enterprises, such as business expansion, infrastructure maintenance, and security risks. Hence, all infrastructure facilities and data center standards must be rigorously estimated during data center design to ensure long-term stable operation within a reasonable budget.

Article Source: The Most Common Data Center Design Missteps

Related Articles:

How to Utilize Data Center Space More Effectively?

Data Center White Space and Gray Space

Impact of Chip Shortage on Datacenter Industry

As the global chip shortage drags on, many chip manufacturers have had to slow or even halt semiconductor production. Makers of all kinds of electronics, such as switches, PCs, and servers, are scrambling to get enough chips in the pipeline to match the surging demand for their products. Every manufacturer, supplier, and solution provider in the datacenter industry is feeling the impact of the ongoing chip scarcity, and relief is nowhere in sight yet.

What’s Happening?

Due to the rise of AI and cloud computing, datacenter chips have been a hot topic in recent times. Because networking switches and modern servers, indispensable equipment in datacenter applications, use more advanced components than an average consumer PC, data centers are naturally given top priority by chip manufacturers and suppliers. However, with demand for data center machines far outstripping supply, chip shortages may remain pervasive over the next few years. Coupled with economic uncertainties caused by the pandemic, this puts further stress on datacenter management.

According to a report from the Dell’Oro Group, robust datacenter switch sales over the past year could foretell a looming shortage. As the mismatch in supply and demand keeps growing, enterprises looking to buy datacenter switches face extended lead times and elevated costs over the course of the next year.

“So supply is decreasing and demand is increasing,” said Sameh Boujelbene, leader of the analyst firm’s campus and data-center research team. “There’s a belief that things will get worse in the second half of the year, but no consensus on when it’ll start getting better.”

Back in March, Broadcom said that more than 90% of its total chip output for 2021 had already been ordered by customers, who are pressuring it for chips to meet booming demand for servers used in cloud data centers and consumer electronics such as 5G phones.

“We intend to meet such demand, and in doing so, we will maintain our disciplined process of carefully reviewing our backlog, identifying real end-user demand, and delivering products accordingly,” CEO Hock Tan said on a conference call with investors and analysts.

Major Implications

Extended Lead Times

Arista Networks, one of the largest data center networking switch vendors and a supplier of switches to cloud providers, predicts that switch-silicon lead times will extend to as long as 52 weeks.

“The supply chain has never been so constrained in Arista history,” the company’s CEO, Jayshree Ullal, said on an earnings call. “To put this in perspective, we now have to plan for many components with 52-week lead time. COVID has resulted in substrate and wafer shortages and reduced assembly capacity. Our contract manufacturers have experienced significant volatility due to country specific COVID orders. Naturally, we’re working more closely with our strategic suppliers to improve planning and delivery.”

Hock Tan, CEO of Broadcom, also acknowledged on an earnings call that the company had “started extending lead times.” He said, “part of the problem was that customers were now ordering more chips and demanding them faster than usual, hoping to buffer against the supply chain issues.”

Elevated Cost

Vertiv, one of the biggest sellers of datacenter power and cooling equipment, mentioned it had to delay previously planned “footprint optimization programs” due to strained supply. The company’s CEO, Robert Johnson, said on an earnings call, “We have decided to delay some of those programs.”

Supply chain constraints combined with inflation would cause “some incremental unexpected costs over the short term,” he said, “To share the cost with our customers where possible may be part of the solution.”

“Prices are definitely going to be higher for a lot of devices that require a semiconductor,” says David Yoffie, a Harvard Business School professor who spent almost three decades serving on the board of Intel.

Conclusion

There is no telling how the situation will continue playing out and, most importantly, when supply and demand might get back to normal. Opinions vary on when the shortage will end. The CEO of chipmaker STMicro estimated that the shortage will end by early 2023. Intel CEO Patrick Gelsinger said it could last two more years.

As a high-tech network solutions and services provider, FS has been actively working with our customers to help them plan for, adapt to, and overcome the supply chain challenges, hoping that together we can ride out this chip shortage crisis. At the least, we cannot lose hope, as advised by Bill Wyckoff, vice president at technology equipment provider SHI International: “This is not an ‘all is lost’ situation. There are ways and means to keep your equipment procurement and refresh plans on track if you work with the right partners.”

Article Source: Impact of Chip Shortage on Datacenter Industry

Related Articles:

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Infographic – What Is a Data Center?

Data Center White Space and Gray Space

Nowadays, with the advent of the 5G era and the advancement of technology, more and more enterprises rely on IT for almost every decision. As a result, their demand for better data center services has increased dramatically.

However, due to the higher capital and operating costs caused by the cluttered distribution of equipment in data centers, space has become one of the biggest constraints on data centers. To solve this problem, it’s necessary to optimize the utilization of existing space, for example by consolidating white space and gray space in data centers.

What is data center white space?

Data center white space refers to the space where IT equipment and infrastructure are located. It includes servers, storage, network gear, racks, air conditioning units, and power distribution systems.

White space is usually measured in square feet, ranging anywhere from a few hundred to a hundred thousand square feet. It can be either raised floor or hard floor (solid floor). Raised floors provide space for power cabling, tracks for data cabling, and cold air distribution systems for IT equipment cooling, and they allow easy access to all of these elements. With hard floors, by contrast, cooling and cabling systems are installed overhead. Today, there is a trend away from raised floors toward hard floors.

Typically, the white space area is the only productive area where an enterprise can utilize the data center space. Moreover, online activities like working from home have increased rapidly in recent years, especially due to the impact of COVID-19, which has increased business demand for data center white space. Therefore, the enterprise has to design data center white space with care.

What is data center gray space?

Different from data center white space, data center gray space refers to the space where back-end equipment is located. This includes switchgear, UPS, transformers, chillers, and generators.

The purpose of gray space is to support the white space, so the amount of gray space required is determined by the amount of white space it supports. The more white space is needed, the more backend infrastructure is required to support it.

How to improve the efficiency of space?

Building more data centers and consuming more energy is not a good option for IT organizations to make use of data center space. To increase data center sustainability and reduce energy costs, it’s necessary to use some strategies to combine data center white space and gray space, thus optimizing the efficiency of data center space.

White Space Efficiency Strategies

  • Virtualization technology: Virtualization consolidates many virtual machines onto fewer physical machines, reducing physical hardware and saving a great deal of data center space. Virtualization management systems such as VMware and Hyper-V can create a virtualized environment.
  • Cloud computing resources: With the help of the public cloud, enterprises can transfer data through the public internet, thus reducing their needs for physical servers and other IT infrastructure.
  • Data center planning: DCIM software, a kind of data center infrastructure management tool, can help estimate current and future power and server needs. It can also help data centers track and manage resources and optimize their size to save more space.
  • Monitor power and cooling capacity: In addition to capacity planning for space, monitoring power and cooling capacity is also necessary to properly configure equipment.
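
As a rough illustration of the space savings virtualization can deliver, the sketch below estimates how many racks a consolidation project might free up. All figures (server count, consolidation ratio, servers per rack) are illustrative assumptions, not measured data:

```python
# Rough sketch: estimate rack space reclaimed by server virtualization.
# Server counts, consolidation ratio, and rack capacity below are
# illustrative assumptions, not vendor figures.

def racks_needed(server_count: int, servers_per_rack: int = 20) -> int:
    """Racks required to house a given number of physical servers."""
    return -(-server_count // servers_per_rack)  # ceiling division

physical_servers = 400        # fleet size before virtualization
consolidation_ratio = 10      # assume ~10 VMs per physical host
hosts_after = -(-physical_servers // consolidation_ratio)

before = racks_needed(physical_servers)
after = racks_needed(hosts_after)
print(f"Racks before: {before}, after: {after}, freed: {before - after}")
```

Under these assumed numbers, 400 servers collapse onto 40 hosts, freeing 18 of 20 racks; real consolidation ratios depend heavily on workload profiles.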

Gray Space Efficiency Strategies

  • State-of-art technologies: Technologies like flywheels can increase the power of the machine, reducing the number of batteries required for the power supply. Besides, the use of solar panels can reduce data center electricity bills. And water cooling can also help reduce the costs of cooling solutions.

Compared with white space efficiency techniques, there are fewer gray space efficiency strategies. However, the most efficient plan is to combine data center white space with gray space. By doing so, enterprises can realize the optimal utilization of data center space.

Article Source: Data Center White Space and Gray Space

Related Articles:

How to Utilize Data Center Space More Effectively?

What Is Data Center Virtualization?

Infographic – What Is a Data Center?

The Internet is where we store and receive a huge amount of information. Where is all the information stored? The answer is data centers. At its simplest, a data center is a dedicated place that organizations use to house their critical applications and data. Here is a short look into the basics of data centers. You will get to know the data center layout, the data pathway, and common types of data centers.

what is a data center

To know more about data centers, click here.

Article Source: Infographic – What Is a Data Center?

Related Articles:

What Is a Data Center?

Infographic — Evolution of Data Centers

What Is a Containerized Data Center: Pros and Cons

The rise of the digital economy has promoted the rapid and vigorous development of industries like cloud computing, Internet of Things, and big data, which have put forward higher requirements for data centers. The drawbacks of traditional data centers have emerged gradually, which are increasingly unable to meet the needs of the market. The prefabricated containerized data center meets the current market demand and will usher in a period of rapid development.

What Is a Containerized Data Center?

A containerized data center comes equipped with data center infrastructures housed in a container. There are different types of containerized data centers, ranging from simple IT containers to comprehensive all-in-one systems integrating the entire physical IT infrastructure.

Generally, a containerized data center includes networking equipment, servers, cooling system, UPS, cable pathways, storage devices, lighting and physical security systems.

A Containerized Data Center

Pros of Containerized Data Centers

Portability & Durability

Containerized data centers are fabricated in a manufacturing facility and shipped to the end-user in containers. Due to the container appearance, they are flexible to move and cost-saving compared to traditional data centers. What’s more, containers are dustproof, waterproof, and shock-resistant, making containerized data centers suitable for various harsh environments.

Rapid Deployment

Unlike traditional data centers with limited flexibility and difficult management, containerized data centers are prefabricated and pretested at the factory, then transported to the deployment site for direct set-up. With access to utility power, network, and water, the data center can work well. As a result, the on-site deployment period for containerized data centers is substantially shortened to 2 to 3 months, demonstrating rapid and flexible deployment.

Energy Efficiency

Containerized data centers are designed for energy efficiency, which effectively limits ongoing operational costs. They enable power and cooling systems to match capacity and workload well, improving work efficiency and reducing over-configuration. More specifically, containerized data centers adopt in-row cooling systems to deliver air to adjacent hotspots with strict airflow management, which greatly improves cold air utilization, saves space and electricity costs in the server room, and reduces power usage effectiveness (PUE).

High Scalability

Because of its unique modular design, a containerized data center is easy to install and scale up. More containerized modules can be added to the architecture as requirements grow, optimizing the IT configuration of the data center. With this high scalability, containerized data centers can meet the changing demands of the organization rapidly and effortlessly.

Cons of Containerized Data Centers

Limited Computing Performance: Although it contains the entire IT infrastructure, a containerized data center still lacks the same computing capability as a traditional data center.

Low Security: Isolated containerized data centers are more vulnerable to break-ins than data center buildings. And without numerous built-in redundancies, an entire containerized data center can be shut down by a single point of failure.

Lack of Availability: It is challenging and expensive to provide utilities and networks for containerized data centers placed in edge areas.

Conclusion

Despite some shortcomings, containerized data centers have obvious advantages over traditional data centers. From the perspective of both current short-term investment and future long-term operating costs, containerized data centers have become the future trend of data center construction at this stage.

Article Source: What Is a Containerized Data Center: Pros and Cons

Related Articles:

What Is a Data Center?

Micro Data Center and Edge Computing

Top 7 Data Center Management Challenges

5G and Multi-Access Edge Computing

Over the years, the Internet of Things and IoT devices have grown tremendously, effectively boosting productivity and accelerating network agility. This technology has also elevated the adoption of edge computing while ushering in a set of advanced edge devices. By adopting edge computing, computational needs are efficiently met since the computing resources are distributed along the communication path, i.e., via a decentralized computing infrastructure.

One of the benefits of edge computing is improved performance as analytics capabilities are brought closer to the machine. An edge data center also reduces operational costs, thanks to the reduced bandwidth requirement and low latency.

Below, we’ve explored more about 5G wireless systems and multi-access edge computing (MEC), an advanced form of edge computing, and how both extend cloud computing benefits to the edge and closer to the users. Keep reading to learn more.

What Is Multi-Access Edge Computing

Multi-access edge computing (MEC) is a relatively new technology that offers cloud computing capabilities at the network’s edge. This technology works by moving some computing capabilities out of the cloud and closer to the end devices. Hence data doesn’t travel as far, resulting in fast processing speeds.

Generally, there are two types of MEC: dedicated MEC and distributed MEC. Dedicated MEC is typically deployed at the customer’s site on a mobile private network and is designed for only one business. On the other hand, distributed MEC is deployed on a public network, either 4G or 5G, and connects shared assets and resources.

With both the dedicated and distributed MEC, applications run locally, and data is processed in real or near real-time. This helps avoid latency issues for faster response rates and decision-making. MEC technology has seen wider adoption in video analytics, augmented reality, location services, data caching, local content distribution, etc.

How MEC and 5G are Changing Different Industries

At the heart of multi-access edge computing are wireless and radio access network technologies that open up different networks to a wide range of innovative services. Today, 5G technology is the ultimate network that supports ultra-reliable low latency communication. It also provides an enhanced mobile broadband (eMBB) capability for use cases involving significant data rates such as virtual reality and augmented reality.

That said, 5G use cases can be categorized into three domains: massive IoT, mission-critical IoT, and enhanced mobile broadband. Each of the three categories requires different network features regarding security, mobility, bandwidth, policy control, latency, and reliability.

Why MEC Adoption Is on the Rise

5G MEC adoption is growing exponentially, and there are several reasons why this is the case. One reason is that this technology aligns with the distributed and scalable nature of the cloud, making it a key driver of technical transformation. Similarly, MEC technology is a critical business transformation change agent that offers the opportunity to improve service delivery and even support new market verticals.

Among the top use cases driving the high level of 5G MEC implementation are video content delivery, the emergence of smart cities, smart utilities (e.g., water and power grids), and connected cars. This also showcases the significant role MEC plays in different IoT domains. Here’s a quick overview of the primary use cases:

  • Autonomous vehicles – 5G MEC can help enhance operational functions such as continuous sensing and real-time traffic monitoring. This reduces latency issues and increases bandwidth.
  • Smart homes – MEC technology can process data locally, boosting privacy and security. It also reduces communication latency and allows for fast mobility and relocation.
  • AR/VR – Moving computational capabilities and processes to the edge amplifies the immersive experience for users, and it extends the battery life of AR/VR devices.
  • Smart energy – MEC resolves traffic congestion issues and delays due to huge data generation and intermittent connectivity. It also reduces cyber-attacks by enforcing security mechanisms closer to the edge.
MEC Adoption

Getting Started With 5G MEC

One of the key benefits of adopting 5G MEC technology is openness, particularly API openness and the option to integrate third-party apps. Standards compliance and application agility are the other value propositions of multi-access edge computing. Therefore, enterprises looking to benefit from a flexible and open cloud should base their integration on the key competencies they want to achieve.

One of the challenges common during the integration process is hardware platforms’ limitations, as far as scale and openness are concerned. Similarly, deploying 5G MEC technology is costly, especially for small-scale businesses with limited financial backing. Other implementation issues include ecosystem and standards immaturity, software limitations, culture, and technical skillset challenges.

To successfully deploy multi-access edge computing, you need an effective 5G MEC implementation strategy that’s true and tested. You should also consider partnering with an expert IT or edge computing company for professional guidance.

5G MEC Technology: Key Takeaways

Edge-driven transformation is a game-changer in the modern business world, and 5G multi-access edge computing technology is undoubtedly leading the cause. Enterprises that embrace this new technology in their business models benefit from streamlined operations, reduced costs, and enhanced customer experience.

Even then, MEC integration isn’t without its challenges. Companies looking to deploy multi-access edge computing technology should have a solid implementation strategy that aligns with their entire digital transformation agenda to avoid silos.

Article Source: 5G and Multi-Access Edge Computing

Related Articles:

What is Multi-Access Edge Computing? (https://community.fs.com/blog/what-is-multi-access-edge-computing.html)

Edge Computing vs. Multi-Access Edge Computing

What Is Edge Computing?

Carrier Neutral vs. Carrier Specific: Which to Choose?

As the need for data storage drives the growth of data centers, colocation facilities are increasingly important to enterprises. A colocation data center brings many advantages to an enterprise, such as having the carrier help manage its IT infrastructure, which reduces management costs. There are two types of hosting carriers: carrier-neutral and carrier-specific. In this article, we will discuss the differences between them.

Carrier Neutral and Carrier Specific Data Center: What Are They?

Accompanied by the accelerated growth of the Internet, the exponential growth of data has led to a surge in the number of data centers to meet the needs of companies of all sizes and market segments. Two types of carriers that offer managed services have emerged on the market.

Carrier-neutral data centers allow access and interconnection of multiple different carriers while the carriers can find solutions that meet the specific needs of an enterprise’s business. Carrier-specific data centers, however, are monolithic, supporting only one carrier that controls all access to corporate data. At present, most enterprises choose carrier-neutral data centers to support their business development and avoid some unplanned accidents.

For example, in 2021, about one-third of the cloud infrastructure in AWS was overwhelmed and down for 9 hours. This not only affected millions of websites, but also countless other devices running on AWS. A week later, AWS was down again for about an hour, bringing down the PlayStation network, Zoom, and Salesforce, among others. A third AWS outage also impacted Internet giants such as Slack, Asana, Hulu, and Imgur to a certain extent. Three outages of cloud infrastructure in one month cost AWS dearly, and also exposed the fragility of cloud dependence.

The above example shows that unplanned accidents can disrupt an enterprise’s business development, a huge loss for the enterprise. To lower the risks of relying on a single carrier, enterprises need to choose a carrier-neutral data center and adjust their system architecture to protect their data center.

Why Should Enterprises Choose Carrier Neutral Data Center?

Carrier-neutral data centers are data centers operated by third-party colocation providers, but these third parties are rarely involved in providing Internet access services. Hence, the existence of carrier-neutral data centers enhances the diversity of market competition and provides enterprises with more beneficial options.

Another colocation advantage of a carrier-neutral data center is the ability to change internet providers as needed, saving the labor cost of physically moving servers elsewhere. We have summarized several main advantages of a carrier-neutral data center as follows.

Why Should Enterprises Choose Carrier Neutral Data Center

Redundancy

A carrier-neutral colocation data center is independent of the network operators and not owned by a single ISP. Because of this independence, it offers enterprises multiple connectivity options, creating a fully redundant infrastructure. If one of the carriers loses power, the carrier-neutral data center can instantly switch servers to another online carrier, ensuring that the entire infrastructure stays running and always online. On the network side, a cross-connect links the ISP or telecom company directly to the customer’s sub-server to obtain bandwidth from the source. This avoids the additional delay introduced by network switching and ensures network performance.

Options and Flexibility

Flexibility is a key factor and advantage for carrier-neutral data center providers. For one thing, the carrier-neutral model can scale network transmission capacity up or down as needed. As their business continues to grow, enterprises need colocation data center providers that can provide scalability and flexibility. For another, carrier-neutral facilities can provide additional benefits to their customers, such as enterprise DR options, interconnects, and MSP services. Whether your business is large or small, a carrier-neutral data center provider may be the best choice for you.

Cost-effectiveness

First, colocation data center solutions provide a high level of control and scalability, expanding storage capacity to support business growth and save expenses, while also lowering physical transport costs for enterprises. Second, with all operators in the market competing for the best price and maximum connectivity, a carrier-neutral data center has a cost advantage over a single-network facility. What’s more, since enterprises are free to use any carrier in a carrier-neutral data center, they can choose the best cost-benefit ratio for their needs.

Reliability

Carrier-neutral data centers also boast reliability. One of the most important aspects of a data center is the ability to have 100% uptime. Carrier-neutral data center providers can provide users with ISP redundancy that a carrier-specific data center cannot. Having multiple ISPs at the same time gives better security for all clients. Even if one carrier fails, another carrier may keep the system running. At the same time, the data center service provider provides 24/7 security including all the details and uses advanced technology to ensure the security of login access at all access points to ensure that customer data is safe. Also, the multi-layered protection of the physical security cabinet ensures the safety of data transmission.

Summary

While every enterprise needs to determine the best option for its specific business needs, a comparison of carrier-neutral and carrier-specific facilities shows that a carrier-neutral data center service provider is the better option for today’s cloud-based business customers. Working with a carrier-neutral managed service provider brings several advantages, such as lower total cost, lower network latency, and better network coverage. With no downtime and fewer concerns about equipment performance, IT decision-makers for enterprise clients have more time to focus on the more valuable areas that drive continued business growth and success.

Article Source: Carrier Neutral vs. Carrier Specific: Which to Choose?

Related Articles:

What Is Data Center Storage?

On-Premises vs. Cloud Data Center, Which Is Right for Your Business?

Data Center Infrastructure Basics and Management Solutions

Data center infrastructure refers to all the physical components in a data center environment. These physical components play a vital role in the day-to-day operations of a data center. Hence, data center management challenges are an urgent issue that IT departments need to pay attention to: on the one hand, improving the energy efficiency of the data center; on the other, monitoring the operating performance of the data center in real time to keep it in good working condition and sustain enterprise development.

Data Center Infrastructure Basics

The standard for data center infrastructure is divided into four tiers, each of which consists of different facilities. They mainly include cabling systems, power facilities, cooling facilities, network infrastructure, storage infrastructure, and computing resources.

There are roughly two types of infrastructure inside a data center: the core components and IT infrastructure. Network infrastructure, storage infrastructure, and computing resources belong to the former, while cooling equipment, power, redundancy, etc. belong to the latter.

Core Components

Network, storage, and computing systems are the core components of a data center, providing shared access to applications and data.

Network Infrastructure

Datacenter network infrastructure is a combination of network resources, consisting of switches, routers, load balancers, analytics, etc., that facilitates the storage and processing of applications and data. Modern data center networking architectures, by using full-stack networking and security virtualization platforms that support a rich set of data services, can connect everything from VMs and containers to bare-metal applications, while enabling centralized management and fine-grained security controls.

Storage Infrastructure

Datacenter storage is a general term for the tools, technologies, and processes for designing, implementing, managing, and monitoring storage infrastructure and resources in data centers, mainly referring to the equipment and software technologies that implement data and application storage in data center facilities. These include hard drives, tape drives, and other forms of internal and external storage, as well as backup management software utilities and external storage facilities/solutions.

Computing Resources

A data center’s computing resources are the memory and processing power needed to run applications, usually provided by high-end servers. In the edge computing model, the processing and memory used to run applications on servers may be virtualized, physical, distributed among containers, or distributed among remote nodes.

IT Infrastructure

As data centers become critical to enterprise IT operations, it is equally important to keep them running efficiently. When designing data center infrastructure, it is necessary to evaluate the physical environment, including the cabling system, power system, and cooling system, to ensure the physical security of the data center.

Cabling Systems

The integrated cabling system is an important part of data center cable management, supporting the connection, intercommunication, and operation of the entire data center network. The system is usually composed of copper cables, optical cables, connectors, and wiring equipment. Data center integrated cabling is characterized by high density, high performance, high reliability, fast modular installation, and a future-oriented, easy-to-apply design.

Power Systems

Datacenter digital infrastructure requires electricity to operate. Even an interruption of a fraction of a second will result in a significant impact. Hence, power infrastructure is one of the most critical components of a data center. The data center power chain starts at the substation and ends up through building transformers, switches, uninterruptible power supplies, power distribution units, and remote power panels to racks and servers.

Cooling Systems

Data center servers generate a lot of heat while running. Because of this, cooling is critical to data center operations, aiming to keep systems online. The amount of heat that can be removed from each rack places a limit on the amount of power a data center can consume. Generally, each rack operates at an average cooling density of 5-10 kW, though some may run higher.
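
Since nearly all power delivered to IT equipment leaves as heat, the cooling density per rack directly caps the total IT load a room can support. A back-of-the-envelope check, taking the mid-range of the 5-10 kW/rack figure above and an assumed rack count:

```python
# Back-of-the-envelope: rack cooling density caps total supportable IT load.
# The 5-10 kW/rack range comes from the text; rack count and the chosen
# mid-range value are assumptions for illustration.

racks = 100
cooling_density_kw = 7.5  # assumed mid-range of 5-10 kW per rack

max_it_load_kw = racks * cooling_density_kw
print(f"Maximum supportable IT load: {max_it_load_kw} kW")  # 750.0 kW
```

Deploying denser hardware than the cooling system can absorb forces either fewer populated racks or a cooling upgrade, which is why power and cooling capacity are planned together.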

data center

Data Center Infrastructure Management Solutions

Due to the complexity of IT equipment in a data center, the availability, reliability, and maintenance of its components require more attention. Efficient data center operations can be achieved through balanced investments in facilities and accommodating equipment.

Energy Usage Monitoring Equipment

Traditional data centers lack the energy usage monitoring instruments and sensors required to comply with ASHRAE standards and to collect the measurement data used in calculating data center PUE, resulting in poor monitoring of the data center’s power system. One measure is to install energy monitoring components and systems on power systems to measure data center energy efficiency. With these measurements, enterprise teams can implement effective strategies to balance overall energy usage efficiency and effectively monitor the energy usage of all other nodes.
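
PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean every watt goes to IT. A minimal sketch of the calculation, with made-up meter readings:

```python
# PUE (power usage effectiveness) from monitored energy data:
# total facility energy / energy delivered to IT equipment.
# The meter readings below are illustrative, not real measurements.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Compute PUE; lower is better, with 1.0 as the theoretical floor."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

result = round(pue(1800.0, 1200.0), 2)
print(result)  # 1.5: a third of facility energy goes to cooling, power loss, etc.
```

The monitoring instruments described above exist precisely to supply these two numbers continuously rather than as one-off estimates.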

Cooling Facilities Optimization

Independent computer room air conditioning units used in traditional data centers often have separate controls and set points, resulting in excessive operation due to temperature and humidity adjustments. Creating hot-aisle/cold-aisle layouts is a good way to cool servers, maximizing the flow of cold air to the equipment intakes and of hot exhaust air away from the equipment racks. Adding partitions or ceilings to form contained hot or cold aisles eliminates the mixing of hot and cold air.

CRAC Efficiency Improvement

Packaged DX air conditioners are likely the most common type of cooling equipment for smaller data centers; these units are often described as CRAC units. There are, however, several ways to improve the energy efficiency of a cooling system that employs DX units. Indoor CRAC units are available with a few different heat rejection options.

  • As with rooftop units, adding an evaporative spray can improve the efficiency of air-cooled CRAC units.
  • A pre-cooling water coil can be added to the CRAC unit upstream of the evaporator coil. When ambient conditions allow the condenser water to be cooled enough to directly cool the air entering the CRAC unit, the condenser water is diverted to the pre-cooling coil. This reduces, and sometimes eliminates, the need for compressor-based cooling in the CRAC unit.

DCIM

Data center infrastructure management (DCIM) is the combination of IT and operations used to manage and optimize the performance of data center infrastructure within an organization. DCIM tools help data center operators monitor, measure, and manage the utilization and energy consumption of data center equipment and facility infrastructure components, effectively improving the relationship between data center buildings and their systems.

DCIM enables bridging of information across organizational domains such as data center operations, facilities, and IT to maximize data center utilization. Data center operators create flexible and efficient operations by visualizing real-time temperature and humidity status, equipment status, power consumption, and air conditioning workloads in server rooms.

Preventive Maintenance

In addition to the above management and operation solutions for infrastructure, unplanned maintenance is also an aspect to consider. Unplanned maintenance typically costs 3-9 times more than planned maintenance, primarily due to overtime labor costs, collateral damage, emergency parts, and service calls. IT teams can create a recurring schedule to perform preventive maintenance on the data center. Regularly checking the infrastructure status and repairing and upgrading the required components promptly can keep the internal infrastructure running efficiently, as well as extend the lifespan and overall efficiency of the data center infrastructure.

Article Source: Data Center Infrastructure Basics and Management Solutions

Related Articles:

Data Center Migration Steps and Challenges

What Are Data Center Tiers?

Why Green Data Center Matters

Background

The green data center concept has emerged in enterprise construction as new data storage requirements grow continuously and awareness of environmental protection steadily increases. Newly retained data must be protected, cooled, and transferred efficiently, which means the huge energy demands of data centers present challenges in cost and sustainability, and enterprises are increasingly concerned about those demands. Sustainable and renewable energy resources have therefore become the development trend for green data centers.

Green Data Center Is a Trend

A green data center is a facility similar to a regular data center that hosts servers to store, manage, and disseminate data. It is designed to minimize environmental impact by providing maximum energy efficiency. Green data centers have the same characteristics as typical data centers, but the internal system settings and technologies can effectively reduce energy consumption and carbon footprints for enterprises.

The internal construction of a green data center requires the support of a series of services, such as cloud services, cable TV services, Internet services, colocation services, and data protection security services. Of course, many enterprises or carriers have equipped their data centers with cloud services. Some enterprises may also need to rely on other carriers to provide Internet and related services.

According to market trends, the global green data center market was worth around $59.32 billion in 2021 and is expected to grow at a CAGR of 23.5% through 2026. This also shows that the transition to renewable energy sources is accelerating along with the growth of green data centers.
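Compounding the 2021 figure at that rate shows what the projection implies (a back-of-the-envelope check; the function name is illustrative):

```python
def project_value(base: float, cagr: float, years: int) -> float:
    """Compound a base value forward at a constant annual growth rate (CAGR)."""
    return base * (1 + cagr) ** years

# $59.32B in 2021 compounded at 23.5% per year for 5 years (to 2026)
projected = project_value(59.32, 0.235, 5)
print(f"${projected:.1f}B")  # roughly $170B
```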

As the growing demand for data storage drives the modernization of data centers, it also places higher demands on power and cooling systems. On the one hand, data centers that rely on non-renewable energy for their electricity face rising power costs; on the other hand, some enterprises consume large amounts of water for cooling facilities and server cleaning. Both are ample opportunities for the green data center market. For example, as Facebook and Amazon continue to expand their businesses, the data storage needs of global companies grow with them. These enterprises analyze vast amounts of data about potential customers, and that processing requires a great deal of energy. Building green data centers has therefore become an urgent need for enterprises, and it brings additional benefits as well.

Green Data Center Benefits

The green data center concept has grown rapidly in the process of enterprise data center development. Many businesses prefer alternative energy solutions for their data centers, which can bring many benefits to the business. The benefits of green data centers are as follows.

Energy Saving

Green data centers are designed not only to conserve energy but also to reduce the need for expensive infrastructure to handle cooling and power. Sustainable or renewable energy is an abundant and reliable source that can significantly lower power usage effectiveness (PUE), and a lower PUE means enterprises use electricity more efficiently. Green data centers can also use colocation services to decrease server usage, lower water consumption, and reduce the cost of corporate cooling systems.

Cost Reduction

Green data centers use renewable energy to reduce power consumption and business costs through the latest technologies. Shutting down servers that are being upgraded or managed can also help reduce energy consumption at the facility and control operating costs.

Environmental Sustainability

Green data centers reduce the environmental impact of computing hardware, creating data center sustainability. Continuous technological development brings new equipment and techniques into modern data centers, and modern server hardware and virtualization technologies lower energy consumption, which is environmentally sustainable and brings economic benefits to data center operators.


Enterprise Social Image Enhancement

Today, users are increasingly interested in solving environmental problems, and green data center services help businesses address these issues without compromising performance. Many customers already see responsible business conduct as a value proposition. By building green data centers that meet the compliance and regulatory requirements of their regions, enterprises improve their public image.

Reasonable Use of Resources

In an environmentally friendly way, green data centers can allow enterprises to make better use of various resources such as electricity, physical space, and heat, integrating the internal facilities of the data center. It promotes the efficient operation of the data center while achieving rational utilization of resources.

5 Ways to Create a Green Data Center

Having covered the benefits of a green data center, how do you build one? Here are a series of green data center solutions.

  • Virtualization extension: Enterprises can build a virtualized computer system with the help of virtualization technology, and run multiple applications and operating systems through fewer servers, thereby realizing the construction of green data centers.
  • Renewable energy utilization: Enterprises can opt for solar panels, wind turbines or hydroelectric plants that can generate energy to power backup generators without any harm to the environment.
  • Enter eco mode: Running AC UPS units in eco mode is one way to significantly improve data center efficiency and PUE. Alternatively, enterprises can reuse equipment, which not only saves money but also keeps unnecessary emissions out of the atmosphere.
  • Optimized cooling: Data center infrastructure managers can introduce simple and implementable cooling solutions, such as deploying hot aisle/cold aisle configurations. Data centers can further accelerate cooling output by investing in air handlers and coolers, and installing economizers that draw outside air from the natural environment to build green data center cooling systems.
  • DCIM and BMS systems: DCIM software and BMS software can help data center managers identify and document more efficient ways to use energy, helping data centers become more efficient and achieve sustainability goals.

Conclusion

Data center sustainability means reducing energy and water consumption and carbon emissions to offset increased computing and mobile device usage while keeping business running smoothly. The development of green data centers has become an imperative trend, and it caters to global environmental protection goals. Enterprises, as beneficiaries, can not only save operating costs but also effectively reduce energy consumption, which is an important reason to build green data centers.

Article Source: Why Green Data Center Matters

Related Articles:

Data Center Infrastructure Basics and Management Solutions

What Is a Data Center?

8, 24, 48 Port Switch Recommendations

There are multiple switches on the market, with port counts of 8, 12, 24, 48, and so on. Among them, 8, 24, and 48 port switches are the most commonly used. So what should be considered before buying an 8, 24, or 48 port switch, and are there any recommendations?

What to Consider Before Buying 8, 24, 48 Port Switch?

When buying an 8, 24, or 48 port switch, you can consider the following factors.

  • Features – Gigabit switches offer many features. Beyond basics like VLAN support, security, and warranty, you should also weigh switching capacity, maximum power consumption, and continuous availability. Stackable and fanless designs are worth considering as well: stacking saves space, while a fanless design reduces power consumption and noise. You can also choose between a managed and an unmanaged switch; the former offers better performance and control than the latter.
  • Switch ports – Besides the number of ports, there are different port types with different speeds, such as RJ45, SFP, SFP+, QSFP+, and QSFP28 ports. Choose a suitable one according to your needs.
  • Price – Switches from famous brands are usually costly, while some third-party networking vendors offer cost-effective alternatives. If your budget is limited, consider buying switches from reliable third-party vendors.

8, 24, 48 Port Switch Recommendations

The right Gigabit switch should meet the needs of your organization and keep your network running efficiently. Here are some switch recommendations.

8 Port Switch

If you have only a few devices to connect, an 8 port Gigabit switch may be a good choice. The FS S1150-8T2F 8 port Gigabit PoE+ managed switch has 2 SFP ports with a transmission distance of up to 120 km. It is highly flexible, controlling L2-L7 traffic per physical port, and has powerful ACL functions for access control. It also features superior stability and environmental adaptability. This 8 port switch may be one of the best Gigabit switches for a home network, powering devices such as weather-proof IP cameras with windshield wipers and heaters, high-performance APs, and IP telephones.


Figure 1: 8 port Gigabit switches

24 Port Switch

If you are looking for the best 24 port Gigabit switch, the S1400-24T4F managed PoE+ switch would be a proper choice. It comes with 24x 10/100/1000Base-T RJ45 Ethernet ports, 1x console port, and 4x Gigabit SFP slots. It protects sensitive information and optimizes network bandwidth to deliver data more effectively. This switch is a good fit for SMBs or entry-level enterprises that need to power surveillance equipment, IP phones, IP cameras, or wireless devices.


Figure 2: 24 port switch

48 Port Switch

When you need to uplink a Gigabit SFP switch to a higher-end 10G SFP+ switch for a network upgrade, this 48 port switch can meet your demand. The FS S1600-48T4S PoE+ switch offers 4 SFP+ ports for high-capacity uplinks. It also provides integrated L2+ features such as 802.1Q VLAN, QoS, IGMP snooping, and static routing. What's more, PoE technology makes it easier to deploy wireless access points (APs) and IP-based terminal equipment. This switch is a solid choice if you need the best managed switch for a small business or data center.


Figure 3: 48 port switch

Summary

The best Gigabit switch is the one that suits your network most. When buying an 8, 24, or 48 port switch, remember to consider the factors mentioned above. FS provides various high-quality, high-performance switches. If you have any needs, welcome to visit FS.COM.

Related Article: FS 24 Port Gigabit Switch Selection Guide

PoE Switch vs Non-PoE Switch: Which One to Choose?

The PoE switch, rather than the non-PoE switch, is now more commonly used to build wireless networks. So what are PoE and non-PoE switches, what is the difference between them, and which one should you choose? In this article, we will share some insights to help answer these questions.

PoE Switch vs Non-PoE Switch: What Are They?

To understand the PoE switch, we should first know Power over Ethernet. PoE is a technology that allows a network cable to carry both data and power to PoE-enabled devices, delivering higher power while eliminating many separate power cables. It is typically used for VoIP phones, network cameras, and some wireless access points.

A PoE switch is a networking device with built-in PoE that has multiple Ethernet ports to connect network segments. It not only transmits network data but also supplies power over an Ethernet cable such as Cat5 or Cat6. PoE switches can be classified into 8/12/24/48 port Gigabit models, or into unmanaged and managed PoE network switches. Among the various port designs, the 8 port PoE switch is considered a decent option for home networks, while the 24 port PoE switch is popular for business networks.
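One practical consequence of a switch supplying power is its total PoE budget: the combined draw of the connected devices must fit within it. A small sketch (the budget and device figures are hypothetical; the per-port caps follow the IEEE 802.3af/at limits of 15.4 W and 30 W at the switch port):

```python
# Per-port power limits (watts) defined by the IEEE PoE standards
PORT_LIMITS_W = {"802.3af": 15.4, "802.3at": 30.0}

def fits_budget(device_draws_w, budget_w):
    """Check that the summed device draw fits the switch's total PoE budget."""
    return sum(device_draws_w) <= budget_w

# Hypothetical: eight 12 W IP cameras on a switch with a 130 W PoE budget
cameras = [12.0] * 8
print(fits_budget(cameras, 130.0))  # True (96 W total)
```

This is why two switches with the same port count can power very different device mixes: the total budget, not just the per-port limit, decides what fits.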

A non-PoE switch, as the name suggests, is a normal switch that can only send data to network devices; it has no PoE capability to supply electrical power to end devices over Ethernet.

PoE Switch vs Non-PoE Switch: What’s the Difference?

The biggest difference between a PoE switch and a non-PoE switch is PoE capability: as mentioned above, the PoE switch is PoE enabled while the non-PoE switch is not.

With a PoE switch, you can mix PoE and non-PoE devices on the same unit: if a connected device does not need power, you can turn off PoE and use the port as a regular switch port. A non-PoE switch, however, cannot support such a mix of PoE and non-PoE devices.

A non-PoE switch can become PoE ready only by installing a PoE injector to power a few devices. The injector adds electrical power and transmits both data and power to powered devices simultaneously, though it requires an extra cable to a power outlet. In this setup, if a PoE injector fails, only one device is affected; but if PoE fails in a PoE switch, all PoE devices go down.


Figure 1: PoE switch vs non-PoE switch

PoE Switch vs Non-PoE Switch: Which One to Choose?

Many users encounter this question: should we choose a PoE or a non-PoE switch? Although a non-PoE network switch can also acquire PoE by installing an injector, the PoE switch has some advantages over it.

Flexibility – The PoE switch delivers power over the existing network cabling and eliminates the need for additional electrical wiring, giving you the flexibility to deploy powered devices wherever you need them.

Good performance – PoE switches are designed with advanced features such as high-performance hardware and software, auto-sensing PoE compatibility, strong network security, and environmental adaptability, providing better performance for users.

Cost-efficient – With a PoE switch, there is no need to purchase and deploy additional electrical wires and outlets, yielding great savings on installation and maintenance costs.

Conclusion

After this comparison of PoE switch vs non-PoE switch, do you know which one to choose? Ultimately, it depends on your real needs. FS is a good place to go for reliable and affordable PoE or non-PoE network switches. Welcome to contact us if you have any needs.

Related Article: 24 Port Managed PoE Switch: How Can We Benefit From It?