400G Data Center Deployment Challenges and Solutions

As technology advances, industry applications such as video streaming, AI, and data analytics are demanding ever-higher data speeds and massive bandwidth. 400G technology, with its next-gen optical transceivers, enables innovative services that process more data at once, faster, and deliver a better user experience.

Large data centers and enterprises struggling with data traffic issues embrace 400G solutions to improve operational workflows and ensure better economics. Below is a quick overview of the rise of 400G, the challenges of deploying this technology, and the possible solutions.

The Rise of 400G Data Centers

The rapid transition to 400G in many data centers is changing how networks are designed and built. Key drivers of this next-gen technology include cloud computing, video streaming, AI, and 5G, all of which demand high-speed, high-bandwidth, and highly scalable solutions. The huge volume of data generated by smart devices, the Internet of Things, social media, and other as-a-Service models is also accelerating this 400G transformation.

The major benefits of upgrading to a 400G data center are the increased data capacity and network capabilities required for high-end deployments. The technology also delivers gains in efficiency, speed, and cost: a single 400G port is considerably cheaper than four individual 100G ports. The higher data rates likewise allow for convenient scale-up and scale-out through high-density, reliable, low-cost-per-bit deployments.

How 400G Works

Before we look at the deployment challenges and solutions, let’s first understand how 400G works. The actual line rate of a 400G Ethernet link is 425 Gbps; the extra 25 Gbps carries the forward error correction (FEC) overhead, which detects and corrects transmission errors.

400G adopts 4-level pulse amplitude modulation (PAM4), which encodes two bits per symbol and thus doubles the data rate of Non-Return-to-Zero (NRZ) signaling at the same baud rate. With PAM4, operators can implement four lanes of 100G or eight lanes of 50G for different form factors (e.g., OSFP and QSFP-DD). This optical transceiver architecture supports transmission of up to 400 Gbit/s over either parallel fibers or multiple wavelengths.
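To make the lane arithmetic concrete, here is a minimal Python sketch of the nominal rate math, assuming idealized figures (1 bit per symbol for NRZ, 2 for PAM4) and ignoring FEC overhead:

```python
# Nominal per-lane data rate = baud rate (gigasymbols/s) x bits per symbol.
# NRZ carries 1 bit per symbol; PAM4 carries 2.

def lane_rate_gbps(baud_gbd: float, modulation: str) -> float:
    bits_per_symbol = {"NRZ": 1, "PAM4": 2}[modulation]
    return baud_gbd * bits_per_symbol

print(lane_rate_gbps(25, "NRZ"))        # 25.0  - legacy 100G lane (4x 25G)
print(8 * lane_rate_gbps(25, "PAM4"))   # 400.0 - 8 lanes of 50G PAM4
print(4 * lane_rate_gbps(50, "PAM4"))   # 400.0 - 4 lanes of 100G PAM4
```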


Deployment Challenges & Solutions

Interoperability Between Devices

The PAM4 signaling introduced with 400G deployments creates interoperability issues between the 400G ports and legacy networking gear. That is, the existing NRZ switch ports and transceivers aren’t interoperable with PAM4. This challenge is widely experienced when deploying network breakout connections between servers, storage, and other appliances in the network.

A 400G transceiver transmits and receives over 4 lanes of 100G or 8 lanes of 50G, with PAM4 signaling on both the electrical and optical interfaces. Legacy 100G transceivers, however, are built on 4 lanes of 25G NRZ signaling on both the electrical and optical sides. The two are simply not interoperable, which calls for a transceiver-based solution.

One such solution is a 100G transceiver that supports 100G PAM4 on the optical side and 4x 25G NRZ on the electrical side, performing the re-timing between NRZ and PAM4 modulation in an internal gearbox. Examples include the QSFP28 DR and FR transceivers, which are fully interoperable with legacy 100G network gear, and the QSFP-DD DR4 & DR4+ breakout transceivers. The latter are parallel modules that accept an MPO-12 connector with breakouts to LC connectors to interface with FR or DR transceivers.
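As a toy illustration of why NRZ and PAM4 ends cannot simply be mated, the Python sketch below models per-lane optics for a few of the modules named above. The module table is an assumption for illustration, not a real vendor API:

```python
# (modulation, per-lane optical rate in Gbps, lane count) - illustrative only
MODULE_OPTICS = {
    "QSFP28-SR4":  ("NRZ", 25, 4),     # legacy 100G: 4x 25G NRZ
    "QSFP28-DR":   ("PAM4", 100, 1),   # 100G PAM4 optics, gearboxed to NRZ electrically
    "QSFP-DD-DR4": ("PAM4", 100, 4),   # 400G: 4x 100G PAM4
}

def lanes_compatible(a: str, b: str) -> bool:
    """Two optical ends interoperate only if modulation and lane rate match."""
    mod_a, rate_a, _ = MODULE_OPTICS[a]
    mod_b, rate_b, _ = MODULE_OPTICS[b]
    return (mod_a, rate_a) == (mod_b, rate_b)

print(lanes_compatible("QSFP28-SR4", "QSFP-DD-DR4"))  # False: NRZ vs PAM4
print(lanes_compatible("QSFP28-DR", "QSFP-DD-DR4"))   # True: 100G PAM4 per lane on both sides
```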


Excessive Link Flaps

Link flaps are faults that occur during data transmission due to a series of errors or failures on the optical connection. When a flap occurs, both transceivers must perform auto-negotiation and link training (AN-LT) before data can flow again. If link flaps occur frequently, e.g., several times per minute, throughput suffers.

And while link flaps are rare with mature optical technologies, they still occur, often caused by configuration errors, a bad cable, or a defective transceiver. With 400GbE, link flaps may also stem from heat and design issues in transceiver modules or switches. Careful selection of transceivers, switches, and cables helps mitigate the problem.
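One practical mitigation is simply to watch the flap rate and alert before throughput degrades. The sketch below is a minimal, assumption-laden Python example: on recent Linux kernels, `/sys/class/net/<interface>/carrier_changes` counts link up/down transitions, while other platforms would need SNMP or CLI polling instead; the threshold is arbitrary:

```python
import time

FLAP_THRESHOLD = 3  # flaps per interval worth alerting on; tune per environment

def read_flap_count(ifname: str = "eth0") -> int:
    # Counter of carrier (link) transitions since boot, on recent Linux kernels.
    with open(f"/sys/class/net/{ifname}/carrier_changes") as f:
        return int(f.read())

def monitor(interval_s: int = 60) -> None:
    last = read_flap_count()
    while True:
        time.sleep(interval_s)
        now = read_flap_count()
        if now - last >= FLAP_THRESHOLD:
            print(f"WARNING: {now - last} link flaps in the last {interval_s}s")
        last = now
```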

Transceiver Reliability

Some optical transceiver manufacturers struggle to stay within a device’s power budget. The result is heat issues, which cause fiber alignment problems, packet loss, and optical distortion. Transceiver reliability problems often occur when old QSFP form factors designed for 40GbE are used at 400GbE.

Similar challenges are also witnessed with newer modules used in 400GbE systems, such as the QSFP-DD and CFP8 form factors. A solution is to stress test transceivers before deploying them in highly demanding environments. It’s also advisable to prioritize transceiver design during the selection process.

Deploying 400G in Your Data Center

Keeping pace with the ever-increasing number of devices, users, and applications in a network calls for a faster, high-capacity, and more scalable data infrastructure. 400G meets these demands and is the optimal solution for data centers and large enterprises facing network capacity and efficiency issues. The successful deployment of 400G technology in your data center or organization depends on how well you have articulated your data and networking needs.

Upgrading your network infrastructure can help relieve bottlenecks from speed and bandwidth challenges to cost constraints. However, making the most of your network upgrades depends on the deployment procedures and processes. This could mean solving the common challenges and seeking help whenever necessary.

A rule of thumb is to enlist the professional help of an IT expert who will guide you through the 400G upgrade process. The IT expert will help you choose the best transceivers, cables, routers, and switches to use and even conduct a thorough risk analysis on your entire network. That way, you’ll upgrade appropriately based on your network needs and client demands.
Article Source: 400G Data Center Deployment Challenges and Solutions
Related Articles:

NRZ vs. PAM4 Modulation Techniques
400G Multimode Fiber: 400G SR4.2 vs 400G SR8
Importance of FEC for 400G

FAQs on 400G Transceivers and Cables


400G transceivers and cables play a vital role in the process of constructing a 400G network system. Then, what is a 400G transceiver? What are the applications of QSFP-DD cables? Find answers here.

FAQs on 400G Transceivers and Cables: Definitions and Types

Q1: What is a 400G transceiver?

A1: 400G transceivers are optical modules used mainly for photoelectric conversion at a transmission rate of 400Gbps. They fall into two categories by application: client-side transceivers for interconnection between metro networks and the optical backbone, and line-side transceivers for transmission distances of 80km or even longer.

Q2: What are QSFP-DD cables?

A2: QSFP-DD cables come in two forms: high-speed cables with QSFP-DD connectors on both ends, transmitting and receiving 400Gbps data over a thin twinax cable or a fiber optic cable, and breakout cables that split one 400G signal into 2x 200G, 4x 100G, or 8x 50G, enabling interconnection within a rack or between adjacent racks.

Q3: What are the packaging forms of 400G transceivers?

A3: There are six main packaging forms of 400G optical modules:

  • QSFP-DD: 400G QSFP-DD (Quad Small Form Factor Pluggable-Double Density) is an expansion of QSFP, adding one row to the original 4-channel interface for a total of 8 channels, each running at 50Gb/s, for a total bandwidth of 400Gb/s.
  • OSFP: OSFP (Octal Small Form Factor Pluggable; octal means 8) is a new interface standard that is not compatible with existing photoelectric interfaces. 400G OSFP modules are slightly larger than 400G QSFP-DD modules.
  • CFP8: CFP8 is an expansion of CFP4, with 8 channels and a correspondingly larger size.
  • COBO: COBO (Consortium for On-Board Optics) places all optical components on the PCB. COBO offers good heat dissipation in a small footprint. However, since it is not hot-swappable, a failed module is troublesome to repair.
  • CWDM8: CWDM8 is an extension of CWDM4 with four new center wavelengths (1351/1371/1391/1411 nm). The wavelength range is wider and the number of lasers is doubled.
  • CDFP: CDFP is an earlier standard with three editions of the specification. CD stands for 400 in Roman numerals. With 16 channels, CDFP is relatively large.

Q4: What 400G transceivers and QSFP-DD cables are available on the market?

A4: The two tables below show the main types of 400G transceivers and cables on the market:

| 400G Transceivers | Standards | Max Cable Distance | Connector | Media | Temperature Range |
| --- | --- | --- | --- | --- | --- |
| 400G QSFP-DD SR8 | QSFP-DD MSA Compliant | 70m OM3 / 100m OM4 | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G QSFP-DD DR4 | QSFP-DD MSA, IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD XDR4/DR4+ | QSFP-DD MSA | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G QSFP-DD FR4 | QSFP-DD MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD 2FR4 | QSFP-DD MSA, IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G QSFP-DD LR4 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD LR8 | QSFP-DD MSA Compliant | 10km | LC Duplex | SMF | 0 to 70°C |
| 400G QSFP-DD ER8 | QSFP-DD MSA Compliant | 40km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP SR8 | IEEE P802.3cm; IEEE 802.3cd | 100m | MTP/MPO-16 | MMF | 0 to 70°C |
| 400G OSFP DR4 | IEEE 802.3bs | 500m | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP XDR4/DR4+ | / | 2km | MTP/MPO-12 | SMF | 0 to 70°C |
| 400G OSFP FR4 | 100G Lambda MSA | 2km | LC Duplex | SMF | 0 to 70°C |
| 400G OSFP 2FR4 | IEEE 802.3bs | 2km | CS | SMF | 0 to 70°C |
| 400G OSFP LR4 | 100G Lambda MSA | 10km | LC Duplex | SMF | 0 to 70°C |



| QSFP-DD Cables | Category | Product Description | Reach | Temperature Range | Power Consumption |
| --- | --- | --- | --- | --- | --- |
| 400G QSFP-DD DAC | QSFP-DD to QSFP-DD DAC | Each 400G QSFP-DD uses 8x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <1.5W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 2x 200G QSFP56 DAC | Each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <0.1W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 4x 100G QSFP DAC | Each 100G QSFP uses 2x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <0.1W |
| 400G QSFP-DD Breakout DAC | QSFP-DD to 8x 50G SFP56 DAC | Each 50G SFP56 uses 1x 50G PAM4 electrical lane | No more than 3m | 0 to 80°C | <0.1W |
| 400G QSFP-DD AOC | QSFP-DD to QSFP-DD AOC | Each 400G QSFP-DD uses 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <10W |
| 400G QSFP-DD Breakout AOC | QSFP-DD to 2x 200G QSFP56 AOC | Each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| 400G QSFP-DD Breakout AOC | QSFP-DD to 8x 50G SFP56 AOC | Each 50G SFP56 uses 1x 50G PAM4 electrical lane | 70m (OM3) or 100m (OM4) | 0 to 70°C | / |
| 400G OSFP DAC | OSFP to OSFP DAC | Each 400G OSFP uses 8x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | <0.5W |
| 400G OSFP Breakout DAC | OSFP to 2x 200G QSFP56 DAC | Each 200G QSFP56 uses 4x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | / |
| 400G OSFP Breakout DAC | OSFP to 4x 100G QSFP DAC | Each 100G QSFP uses 2x 50G PAM4 electrical lanes | No more than 3m | 0 to 70°C | / |
| 400G OSFP Breakout DAC | OSFP to 8x 50G SFP56 DAC | Each 50G SFP56 uses 1x 50G PAM4 electrical lane | No more than 3m | / | / |
| 400G OSFP AOC | OSFP to OSFP AOC | Each 400G OSFP uses 8x 50G PAM4 electrical lanes | 70m (OM3) or 100m (OM4) | 0 to 70°C | <9.5W |



Q5: What do the suffixes “SR8, DR4 / XDR4, FR4 / LR4 and 2FR4” mean in 400G transceivers?

A5: The letters refer to reach, and the number refers to the number of optical channels (see the sketch after this list):

  • SR8: SR refers to 100m over MMF. Each of the 8 optical channels from an SR8 module is carried on separate fibers, resulting in a total of 16 fibers (8 Tx and 8 Rx).
  • DR4 / XDR4: DR / XDR refer to 500m / 2km over SMF. Each of the 4 optical channels is carried on separate fibers, resulting in a total of 4 pairs of fibers.
  • FR4 / LR4: FR4 / LR4 refer to 2km / 10km over SMF. All 4 optical channels from an FR4 / LR4 are multiplexed onto one fiber pair, resulting in a total of 2 fibers (1 Tx and 1 Rx).
  • 2FR4: 2FR4 refers to 2 x 200G-FR4 links with 2km over SMF. Each of the 200G FR4 links has 4 optical channels, multiplexed onto one fiber pair (1 Tx and 1 Rx per 200G link). A 2FR4 has 2 of these links, resulting in a total of 4 fibers, and a total of 8 optical channels.
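The fiber counts above follow directly from the channel count and whether the channels are multiplexed onto one fiber pair. A minimal Python sketch of that arithmetic (the trait table simply mirrors the list above):

```python
# suffix -> (optical channels, multiplexed onto a single fiber pair?)
SUFFIX_TRAITS = {
    "SR8": (8, False),  # parallel: 8 Tx + 8 Rx fibers
    "DR4": (4, False),  # parallel: 4 Tx + 4 Rx fibers
    "FR4": (4, True),   # WDM: all channels share 1 Tx + 1 Rx fiber
    "LR4": (4, True),
}

def fiber_count(suffix: str) -> int:
    channels, muxed = SUFFIX_TRAITS[suffix]
    return 2 if muxed else 2 * channels  # one Tx and one Rx per path

for s in SUFFIX_TRAITS:
    print(s, fiber_count(s))  # SR8 16, DR4 8, FR4 2, LR4 2
```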

FAQs on 400G Transceivers and Cables: Applications

Q1: What are the benefits of moving to 400G technology?

A1: 400G technology increases data throughput and maximizes the bandwidth and port density of data centers. Because it needs only 1/4 the number of optical fiber links, connectors, and patch panels that 100G platforms require for the same aggregate bandwidth, 400G optics can also reduce operating expenses. With these benefits, 400G transceivers and QSFP-DD cables provide ideal solutions for data centers and high-performance computing environments.

Q2: What are the applications of QSFP-DD cables?

A2: QSFP-DD cables are mainly used for short-distance 400G Ethernet connectivity in data centers, and for 400G to 2x 200G / 4x 100G / 8x 50G Ethernet breakout applications.

Q3: 400G QSFP-DD vs 400G OSFP/CFP8: What are the differences?

A3: The table below includes detailed comparisons for the three main form factors of 400G transceivers.

| 400G Transceiver | 400G QSFP-DD | 400G OSFP | CFP8 |
| --- | --- | --- | --- |
| Application Scenario | Data center | Data center & telecom | Telecom |
| Size | 18.35mm × 89.4mm × 8.5mm | 22.58mm × 107.8mm × 13mm | 40mm × 102mm × 9.5mm |
| Max Power Consumption | 12W | 15W | 24W |
| Backward Compatibility with QSFP28 | Yes | Through adapter | No |
| Electrical Signaling (Gbps) | 8× 50G | 8× 50G | 8× 50G |
| Switch Port Density (1RU) | 36 | 36 | 16 |
| Media Type | MMF & SMF | MMF & SMF | MMF & SMF |
| Hot Pluggable | Yes | Yes | Yes |
| Thermal Management | Indirect | Direct | Indirect |
| Support 800G | No | Yes | No |



For more details about the differences, please refer to the blog: Differences Between QSFP-DD and QSFP+/QSFP28/QSFP56/OSFP/CFP8/COBO

Q4: What does it mean when an electrical or optical channel is PAM4 or NRZ in 400G transceivers?

A4: NRZ is a modulation technique with two voltage levels representing logic 0 and logic 1. PAM4 uses four voltage levels to represent the four two-bit combinations: 11, 10, 01, and 00. A PAM4 signal therefore carries twice as many bits as an NRZ signal at the same baud rate.

When a signal is referred to as “25G NRZ”, it means the signal is carrying data at 25 Gbps with NRZ modulation. When a signal is referred to as “50G PAM4”, or “100G PAM4”, it means the signal is carrying data at 50 Gbps, or 100 Gbps, respectively, using PAM4 modulation. The electrical connector interface of 400G transceivers is always 8x 50Gb/s PAM4 (for a total of 400Gb/s).
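Putting that naming convention into code: the label gives the data rate, and the modulation implies the symbol (baud) rate behind it. A small illustrative Python sketch using nominal rates and ignoring FEC overhead:

```python
def baud_rate_gbd(label: str) -> float:
    """Nominal symbol rate for labels like '25G NRZ' or '50G PAM4'."""
    rate, modulation = label.split()
    bits_per_symbol = {"NRZ": 1, "PAM4": 2}[modulation]
    return float(rate.rstrip("G")) / bits_per_symbol

print(baud_rate_gbd("25G NRZ"))    # 25.0 GBd
print(baud_rate_gbd("50G PAM4"))   # 25.0 GBd - same baud as 25G NRZ, twice the bits
print(baud_rate_gbd("100G PAM4"))  # 50.0 GBd
```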

FAQs on Using 400G Transceivers and Cables in Data Centers

Q1: Can I plug an OSFP module into a 400G QSFP-DD port, or a QSFP-DD module into an OSFP port?

A1: No. OSFP and QSFP-DD are two physically distinct form factors. If you have an OSFP system, then 400G OSFP optics must be used. If you have a QSFP-DD system, then 400G QSFP-DD optics must be used.

Q2: Can a QSFP module be plugged into a 400G QSFP-DD port?

A2: Yes. A QSFP (40G or 100G) module can be inserted into a QSFP-DD port as QSFP-DD is backward compatible with QSFP modules. When using a QSFP module in a 400G QSFP-DD port, the QSFP-DD port must be configured for a data rate of 100G (or 40G).

Q3: Is it possible to have a 400G OSFP on one end of a 400G link and a 400G QSFP-DD on the other end?

A3: Yes. OSFP and QSFP-DD describe the physical form factors of the modules. As long as the Ethernet media types are the same (i.e. both ends of the link are 400G-DR4, or 400G-FR4 etc.), 400G OSFP and 400G QSFP-DD modules will interoperate with each other.

Q4: How can I break out a 400G port and connect to 100G QSFP ports on existing platforms?

A4: There are several ways to break out a 400G port to 100G QSFP ports:

  • QSFP-DD-DR4 to 4x 100G-QSFP-DR over 500m SMF
  • QSFP-DD-XDR4 to 4x 100G-QSFP-FR over 2km SMF
  • QSFP-DD-LR4 to 4x 100G-QSFP-LR over 10km SMF
  • OSFP-400G-2FR4 to 2x QSFP-100G-CWDM4 over 2km SMF

Apart from the 400G transceivers mentioned above, 400G to 4x 100G breakout cables can also be used.

Article Source: FAQs on 400G Transceivers and Cables

Related Articles:

400G Transceiver, DAC, or AOC: How to Choose?

400G OSFP Transceiver Types Overview

100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

NIC, short for network interface card, also called network interface controller, network adapter, or LAN adapter, allows a networking device to communicate with other networking devices. Without a NIC, networking can hardly be done. NICs come in different types and speeds, such as wireless and wired, from 10G to 100G. Among them, the 100G NIC, a product of recent years, hasn’t taken a large market share yet. This post describes the 100G NIC and the trends in NICs.

What Is 100G NIC?

A NIC is installed in a computer and used for communicating over a network with another computer, server, or other network device. NICs come in many forms, but there are two main types: wired and wireless. Wireless NICs use wireless technologies to access the network, while wired NICs use a DAC cable, or a transceiver with a fiber patch cable. The most popular wired LAN technology is Ethernet. By application, NICs divide into computer NICs and server NICs. For client computers, one NIC suffices in most cases; for servers, it makes sense to use more than one NIC to handle more network traffic. Generally, a NIC has one network interface, but some server NICs have two or more interfaces built into a single card.

Figure 1: FS 100G NIC

With data centers expanding from 10G to 100G, the 25G server NIC has gained a firm foothold in the NIC market. Meanwhile, growing bandwidth demand is driving data centers toward 200G/400G, and 100G transceivers have become widespread, paving the way for 100G servers.

How to Select 100G NIC?

How do you choose the best 100G NIC among all the vendors? If you are stuck on this puzzle, see the following recommendations and considerations.

Connector

Connector types like RJ45, LC, FC, and SC are commonly used on NICs. Check which connector type the NIC supports. Today many networks use only RJ45, so choosing a NIC with the right connector may be less difficult than it once was. Even so, some networks use a different interface, such as coax, so confirm that the card you plan to buy supports your connection before purchasing.

Bus Type

PCI is a hardware bus used for adding internal components to a computer. Three main PCI bus types are used by servers and workstations today: PCI, PCI-X, and PCI-E. PCI is the most conventional: it has a fixed width of 32 bits and can handle only 5 devices at a time. PCI-X is an upgraded version that provides more bandwidth, but with the emergence of PCI-E, PCI-X cards are gradually being replaced. PCI-E is a serial connection, so devices no longer share bandwidth as they do on a normal bus. PCI-E cards also come in different physical sizes: x16, x8, x4, and x1. Before purchasing a 100G NIC, make sure the PCI-E version and slot width are compatible with your current equipment and network environment.
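To see why the slot matters for a 100G NIC, here is a rough Python sketch of slot bandwidth arithmetic. The per-lane throughput figures are approximate nominal values after encoding overhead (assumptions for illustration, not vendor specs):

```python
# Approximate usable throughput per PCI-E lane, in Gbps, after encoding overhead
PCIE_LANE_GBPS = {3: 7.88, 4: 15.75, 5: 31.5}

def slot_gbps(gen: int, width: int) -> float:
    return PCIE_LANE_GBPS[gen] * width

# A 100G NIC needs more than 100 Gbps of slot bandwidth to run at line rate:
print(slot_gbps(3, 8))    # ~63 Gbps  - PCI-E 3.0 x8 cannot sustain 100G
print(slot_gbps(3, 16))   # ~126 Gbps - PCI-E 3.0 x16 can
print(slot_gbps(4, 8))    # ~126 Gbps - PCI-E 4.0 x8 can
```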

Hot swappable

There are some NICs that can be installed and removed without shutting down the system, which helps minimize downtime by allowing faulty devices to be replaced immediately. While you are choosing your 100G NIC, be sure to check if it supports hot swapping.

Trends in NIC

NICs were commonly used in desktop computers in the 1990s and early 2000s. Today they are widely used in servers and workstations in many types and at many rates. With the popularization of wireless networking and WiFi, wireless NICs have grown in popularity, though wired cards remain popular for relatively immobile network devices owing to their reliable connections. NICs have kept being upgraded over the years. As data centers expand at an unprecedented pace and drive the need for higher bandwidth between servers and switches, networking is moving from 10G to 25G and even 100G. Companies like Intel and Mellanox have launched their 100G NICs in succession.

During the upgrade from 10G to 100G in data centers, 25G server connectivity became popular because 100G uplinks can be realized with 4 strands of 25G, and the 25G NIC remains the mainstream. However, considering that overall data center bandwidth grows quickly and hardware upgrade cycles occur every two years, Ethernet speeds can rise faster than we expect. The 400G data center is just on the horizon, and there is a good chance that the 100G NIC will play an integral role in next-generation 400G networking.

Meanwhile, the need for 100G NICs will drive demand for other network devices as well. For instance, the 100G transceiver, the device between the NIC and the network, is bound to pervade. 100G transceivers are now offered by many brands in different types, such as CXP, CFP, and QSFP28. FS supplies a full series of compatible 100G QSFP28 and CFP transceivers that can be matched with major brands of 100G Ethernet NICs, such as Mellanox and Intel.

Conclusion

Nowadays, with the rise of the next-generation cellular technology, 5G, higher bandwidth is needed for data flows, which paves the way for the 100G NIC. Accordingly, 100G transceivers and 400G network switches will be in great demand. We believe the new era of 5G networks will see the popularization of the 100G NIC and a shift toward a new level of network performance.

Article Source: 100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

Related Articles:

400G QSFP Transceiver Types and Fiber Connections

How Many 400G Transceiver Types Are in the Market?

Data Center Containment: Types, Benefits & Challenges

Over the past decade, data center containment has experienced a high rate of implementation by many data centers. It can greatly improve the predictability and efficiency of traditional data center cooling systems. This article will elaborate on what data center containment is, common types of it, and their benefits and challenges.

What Is Data Center Containment?

Data center containment is the separation of cold supply air from the hot exhaust air from IT equipment so as to reduce operating cost, optimize power usage effectiveness, and increase cooling capacity. Containment systems enable uniform and stable supply air temperature to the intake of IT equipment and a warmer, drier return air to cooling infrastructure.

Types of Data Center Containment

There are mainly two types of data center containment, hot aisle containment and cold aisle containment.

Hot aisle containment encloses warm exhaust air from IT equipment in data center racks and returns it back to cooling infrastructure. The air from the enclosed hot aisle is returned to cooling equipment via a ceiling plenum or duct work, and then the conditioned air enters the data center via raised floor, computer room air conditioning (CRAC) units, or duct work.


Cold aisle containment encloses cold aisles where cold supply air is delivered to cool IT equipment. So the rest of the data center becomes a hot-air return plenum where the temperature can be high. Physical barriers such as solid metal panels, plastic curtains, or glass are used to allow for proper airflow through cold aisles.


Hot Aisle vs. Cold Aisle

There are mixed views on whether it’s better to contain the hot aisle or the cold aisle. Both containment strategies have their own benefits as well as challenges.

Hot aisle containment benefits

  • The open areas of the data center are cool, so that visitors to the room will not think the IT equipment is not being cooled sufficiently. In addition, it allows for some low density areas to be un-contained if desired.
  • It is generally considered to be more effective. Any leakages that come from raised floor openings in the larger part of the room go into the cold space.
  • With hot aisle containment, low-density network racks and stand-alone equipment like storage cabinets can be situated outside the containment system, and they will not get too hot, because they are able to stay in the lower temperature open areas of the data center.
  • Hot aisle containment typically adjoins the ceiling where fire suppression is installed. With a well-designed space, it will not affect normal operation of a standard grid fire suppression system.

Hot aisle containment challenges

  • It is generally more expensive. A contained path is needed for air to flow from the hot aisle all the way to the cooling units. Often a drop ceiling is used as a return air plenum.
  • High temperatures in the hot aisle can be undesirable for data center technicians. When they need to access IT equipment and infrastructure, a contained hot aisle can be a very uncomfortable place to work. But this problem can be mitigated using temporary local cooling.

Cold aisle containment benefits

  • It is easy to implement without the need for additional architecture to contain and return exhaust air such as a drop ceiling or air plenum.
  • Cold aisle containment is less expensive to install, as it only requires doors at the ends of aisles and baffles or a roof over the aisle.
  • Cold aisle containment is typically easier to retrofit in an existing data center. This is particularly true for data centers that have overhead obstructions such as existing duct work, lighting and power, and network distribution.

Cold aisle containment challenges

  • When utilizing a cold aisle system, the rest of the data center becomes hot, resulting in high return air temperatures. It also may create operational issues if any non-contained equipment such as low-density storage is installed in the general data center space.
  • The conditioned air that leaks from openings under equipment like PDUs and raised floor tiles tends to enter air paths that return to cooling units. This reduces the efficiency of the system.
  • In many cases, cold aisles have intermediate ceilings over the aisle. This may affect the overall fire protection and lighting design, especially when added to an existing data center.

How to Choose the Best Containment Option?

Every data center is unique. To find the most suitable option, you have to take into account a number of aspects. The first thing is to evaluate your site and calculate the Cooling Capacity Factor (CCF) of the computer room. Then observe the unique layout and architecture of each computer room to discover conditions that make hot aisle or cold aisle containment preferable. With adequate information and careful consideration, you will be able to choose the best containment option for your data center.
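For the CCF step, one common industry definition divides running cooling capacity by the IT load uplifted about 10% for lighting and other room loads. A hedged Python sketch with illustrative numbers (the 1.1 factor and sample figures are assumptions):

```python
def cooling_capacity_factor(running_cooling_kw: float, it_load_kw: float) -> float:
    # CCF = running rated cooling capacity / (IT load + ~10% room load allowance)
    return running_cooling_kw / (it_load_kw * 1.1)

ccf = cooling_capacity_factor(running_cooling_kw=800, it_load_kw=400)
print(f"CCF = {ccf:.2f}")  # ~1.82 here: well above ~1.2, hinting at stranded cooling capacity
```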

Article Source: Data Center Containment: Types, Benefits & Challenges

Related Articles:

What Is a Containerized Data Center: Pros and Cons

The Most Common Data Center Design Missteps

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

The COVID-19 pandemic caused several companies to shut down, and the implications were reduced production and altered supply chains. In the tech world, where silicon microchips are the heart of everything electronic, raw material shortage became a barrier to new product creation and development.

During the lockdown periods, many workers were required to stay home, which meant chip manufacturing was largely unavailable for several months. By the time lockdowns were lifted and the world embraced the new normal, the rising demand for consumer and business electronics was enough to ripple up the supply chain.

Below, we’ve discussed the challenges associated with the current chip shortage, what to expect moving forward, and the possible interventions necessary to overcome the supply chain constraints.

Challenges Caused by the Current Chip Shortage

As technology and rapid innovation sweeps across industries, semiconductor chips have become an essential part of manufacturing – from devices like switches, wireless routers, computers, and automobiles to basic home appliances.


To understand and quantify the impact this chip shortage has caused spanning the industry, we’ll need to look at some of the most affected sectors. Here’s a quick breakdown of how things have unfolded over the last eighteen months.

Automobile Industry

Automakers in North America and Europe have slowed or stopped production due to a lack of computer chips. Major automakers like Tesla, Ford, BMW, and General Motors have all been affected. The major implication is that the global automobile industry will manufacture 4 million fewer cars by the end of 2021 than earlier planned, forfeiting an average of $110 billion in revenue.

Consumer Electronics

Consumer electronics such as desktop PCs and smartphones rose in demand throughout the pandemic, thanks to the shift to virtual learning among students and the rise in remote working. At the start of the pandemic, several automakers slashed their vehicle production forecasts before abandoning open semiconductor chip orders. And while the consumer electronics industry stepped in and scooped most of those microchips, the supply couldn’t catch up with the demand.

Data Centers

Most chip fabrication companies like Samsung Foundries, Global Foundries, and TSMC prioritized high-margin orders from PC and data center customers during the pandemic. And while this has given data centers a competitive edge, it isn’t to say that data centers haven’t been affected by the global chip shortage.


Some of the components data centers have struggled to source include those needed to put together their data center switching systems. These include BMC chips, capacitors, resistors, circuit boards, etc. Another challenge is the extended lead times due to wafer and substrate shortages, as well as reduced assembly capacity.

LED Lighting

LED backlights common in most display screens are powered by hard-to-find semiconductor chips. Gadgets with LED lighting features are now highly priced due to the shortage of raw materials and increased market demand. This is expected to continue into the beginning of 2022.

Renewable Energy: Solar and Turbines

Renewable energy systems, particularly solar and turbines, rely on semiconductors and sensors to operate. The global supply chain constraints have hurt the industry, and even energy solutions manufacturers like Enphase Energy have felt the squeeze.

Semiconductor Trends: What to Expect Moving Forward

In response to the global chip shortage, several component manufacturers have ramped up production to help mitigate the shortages. However, top electronics and semiconductor manufacturers say the crunch will only worsen before it gets better. Most of these industry leaders speculate that the semiconductor shortage could persist into 2023.

Based on the ongoing disruption and supply chain volatility, various analysts in a recent CNBC article and Bloomberg interview echoed their views, and many are convinced that the coming year will be challenging. Here are some of the key takeaways:

Pat Gelsinger, CEO of Intel Corp., noted in April 2021 that it would take a couple of years for the chip supply to recover.

A DigiTimes report found that lead times for Intel and AMD server ICs for data centers have extended to 45 to 66 weeks.

The world’s third-largest EMS and OEM provider, Flex Ltd., expects the global semiconductor shortage to proceed into 2023.

In May 2021, Global Foundries, the fourth-largest contract semiconductor manufacturer, signed a $1.6 billion, 3-year silicon supply deal with AMD, and in late June, it launched its new $4 billion, 300mm-wafer facility in Singapore. Yet the company says the added capacity will not increase component output until 2023 at the earliest.

TSMC, one of the leading pure-play foundries in the industry, says it won’t meaningfully increase component output until 2023. However, it is optimistic that it will ramp up fabrication of automotive micro-controllers by 60% by the end of 2021.

From the industry insights above, it’s evident that despite the many efforts that major players put into resolving the global chip shortage, the bottlenecks will probably persist throughout 2022.

Additionally, some industry observers believe that the move by big tech companies such as Amazon, Microsoft, and Google to design their own chips for cloud and data center business could worsen the chip shortage crisis and other problems facing the semiconductor industry.

Some observers hint that the entry of Microsoft, Amazon, and Google into the chip design market will be a turning point in the industry. These tech giants have the resources to design superior and cost-effective chips of their own, something most chip designers like Intel have only in limited proportions.

Since these tech giants will become independent chip designers, each will look to build component stockpiles to endure long waits and meet production demands between inventory refreshes. Again, this will further worsen the existing chip shortage.

Possible Solutions

To stay ahead of the game, major industry players such as chip designers and manufacturers and the many affected industries have taken several steps to mitigate the impacts of the chip shortage.

For many chip makers, expanding their production capacity has been an obvious response. Other suppliers in certain regions decided to stockpile and limit exports to better respond to market volatility and political pressures.

Similarly, improving the yields or increasing the number of chips manufactured from a silicon wafer is an area that many manufacturers have invested in to boost chip supply by some given margin.


Here are the other possible solutions that companies have had to adopt:

Embracing flexibility to accommodate older chip technologies that may not be “state of the art” but are still better than nothing.

Leveraging software solutions such as smart compression and compilation to build efficient AI models to help unlock hardware capabilities.

Conclusion

The latest global chip shortage has led to severe shocks in the semiconductor supply chain, affecting industries from automobiles and consumer electronics to data centers, LED lighting, and renewables.

Industry thought leaders believe that shortages will persist into 2023 despite the current build-up in mitigation measures. And while full recovery will not be witnessed any time soon, some chip makers are optimistic that they will ramp up fabrication to contain the demand among their automotive customers.

That said, staying ahead of the game is an all-time struggle considering this is an issue affecting every industry player, regardless of size or market position. Expanding production capacity, accommodating older chip technologies, and leveraging software solutions to unlock hardware capabilities are some of the promising solutions.


Article Source: The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Related Articles:

Impact of Chip Shortage on Datacenter Industry

Infographic – What Is a Data Center?

The Most Common Data Center Design Missteps

Introduction

Data center design aims to provide IT equipment with a high-quality, standard, safe, and reliable operating environment, fully meeting the environmental requirements for stable and reliable operation of IT devices and prolonging the service life of computer systems. Design is the most important part of data center construction, bearing directly on the success or failure of the data center's long-term planning, so it should be professional, advanced, integral, flexible, safe, reliable, and practical.

9 Missteps in Data Center Design

Data center design is one of the effective solutions to overcrowded or outdated data centers, while inappropriate design creates obstacles for growing enterprises. Poor planning can waste valuable funds and create further issues that increase operating expenses. Here are 9 mistakes to be aware of when designing a data center.

Miscalculation of Total Cost

Data center operating expense is made up of two key components: maintenance costs and operating costs. Maintenance costs are those associated with maintaining all critical facility support infrastructure, such as OEM equipment maintenance contracts and data center cleaning fees. Operating costs are those associated with day-to-day operations and field personnel, such as the creation of site-specific operational documentation, capacity management, and QA/QC policies and procedures. If you plan to build or expand a business-critical data center, the best approach is to focus on three basic parameters: capital expenditures, operating and maintenance expenses, and energy costs. Take any component out of the equation, and the model may no longer align with the organization’s risk profile and business spending profile.

Unspecified Planning and Infrastructure Assessment

Infrastructure assessment and clear planning are essential for data center construction. For example, every construction project needs a chain of command that clearly defines areas of responsibility, including who is responsible for each aspect of data center design. Those involved need to evaluate the potential applications of the data center infrastructure and the types of connectivity they require. In general, planning involves a rack-by-rack blueprint covering network connectivity and mobile devices, power requirements, system topology, cooling facilities, virtual local and on-premises networks, third-party applications, and operational systems. Given the importance of data center design, you should have a thorough understanding of the required functionality before construction begins. Otherwise, you’ll fall short and spend more money on maintenance.


Inappropriate Design Criteria

Two missteps can send enterprises into an overspending death spiral. First, everyone has different design ideas, but not everyone is right. Second, the actual business may be mismatched with the desired vision and fail to support the chosen kilowatts per square foot or per rack. Overplanning in design wastes capital, and higher-tier facilities carry higher operational and energy costs. A data center designer establishes the proper design criteria and performance characteristics, then builds capital expenditure and operating expenses around them.

Unsuitable Data Center Site

Enterprises often need to find the right building location when designing a data center, and missing site-critical information causes problems. Large users understand data centers well and worry about power availability and cost, fiber connectivity, and force-majeure risks. Smaller users often let the building shell in their core business area decide whether to build new or refurbish. Either way, premature site selection or an unreasonable geographic location will fail to meet the design requirements.

Pre-design Space Planning

It is also very important to plan the space capacity inside the data center. The raised floor to support space ratio can be as high as 1 to 1, and the mechanical and electrical equipment needs enough room to be accommodated. The planning of office and IT equipment storage areas also needs to be considered. It is therefore critical to estimate and plan space capacity during data center design: estimation errors can make a design unsuitable for the site space, which means suspending the project for re-evaluation and possibly repurchasing components.

Mismatched Business Goals

Enterprises need to clearly understand their business goals when debugging a data center so that they can complete the data center design. After meeting the business goals, something should be considered, such as which specific applications the data center supports, additional computing power, and later business expansion. Additionally, enterprises need to communicate these goals to data center architects, engineers, and builders to ensure that the overall design meets business needs.

Design Limitations

The importance of modular design is well publicized in the data center industry. Although the modular approach adds infrastructure incrementally as needed to preserve capital, it doesn’t guarantee complete success. Modular, flexible design is the key to long-term stable operation and should match your data center plans. On the power system, take note of whether UPS (Uninterruptible Power Supply) capacity can be added to existing modules without system disruption. Input and output distribution system design shouldn’t be overlooked either; designed well, it allows the data center to adapt to future changes in the underlying construction standards.

Improper Data Center Power Equipment

To design a data center that maximizes equipment uptime and reduces power consumption, you must choose the right power equipment based on projected capacity. Typically, one might use redundant computing to predict triple server usage to ensure adequate power, which is wasteful. Long-term power consumption trends are what you need to consider. Install automatic-start generators and backup power sources, and choose equipment that can provide enough power to support the data center without waste.
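As a toy contrast between trend-based sizing and the wasteful "triple it" rule of thumb, here is a hedged Python sketch; the growth rate, headroom, and sample load are assumptions for illustration:

```python
def required_capacity_kw(current_load_kw: float, annual_growth: float,
                         years: int, headroom: float = 0.2) -> float:
    # Project the IT load forward along its trend, then add a safety margin.
    projected = current_load_kw * (1 + annual_growth) ** years
    return projected * (1 + headroom)

print(round(required_capacity_kw(500, annual_growth=0.15, years=5)))  # ~1207 kW
print(3 * 500)  # 1500 kW: naive tripling oversizes the plant in this example
```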

Over-complicated Design

In many cases, redundancy targets introduce complexity. If you add multiple ways to build a modular system, things can quickly get complicated. Over-complexity in data center design means more equipment and components, and these components are sources of failure that can cause problems such as:

  • Human error. Statistical and data-entry errors make system data vulnerable and increase operational risk.
  • Expense. Beyond the extra equipment and components themselves, maintaining failed components incurs further charges.
  • Design concept. If maintainability isn’t considered in the design, then when the IT team needs to operate or service equipment, normal system operation and even personnel safety can be affected.

Conclusion

Avoid the nine missteps above to find design solutions for data center IT infrastructure and build a data center that suits your business. Data center design missteps have some impacts on enterprises, such as business expansion, infrastructure maintenance, and security risks. Hence, all infrastructure facilities and data center standards must be rigorously estimated during data center design to ensure long-term stable operation within a reasonable budget.

Article Source: The Most Common Data Center Design Missteps

Related Articles:

How to Utilize Data Center Space More Effectively?

Data Center White Space and Gray Space

Impact of Chip Shortage on Datacenter Industry

As the global chip shortage lets rip, many chip manufacturers have had to slow or even halt semiconductor production. Makers of all kinds of electronics, such as switches, PCs, and servers, are scrambling to get enough chips in the pipeline to match the surging demand for their products. Every manufacturer, supplier, and solution provider in the datacenter industry is feeling the impact of the ongoing chip scarcity. However, relief is nowhere in sight yet.

What’s Happening?

Due to the rise of AI and cloud computing, datacenter chips have been a highly charged topic in recent times. Because networking switches and modern servers, indispensable equipment in datacenter applications, use more advanced components than an average consumer PC, data centers naturally get top priority from chip manufacturers and suppliers. However, with demand for data center machines far outstripping supply, chip shortages may remain pervasive over the next few years. Coupled with the economic uncertainties caused by the pandemic, this puts further stress on datacenter management.

According to a report from the Dell’Oro Group, robust datacenter switch sales over the past year could foretell a looming shortage. As the mismatch in supply and demand keeps growing, enterprises looking to buy datacenter switches face extended lead times and elevated costs over the course of the next year.

“So supply is decreasing and demand is increasing,” said Sameh Boujelbene, leader of the analyst firm’s campus and data-center research team. “There’s a belief that things will get worse in the second half of the year, but no consensus on when it’ll start getting better.”

Back in March, Broadcom said that more than 90% of its total chip output for 2021 had already been ordered by customers, who are pressuring it for chips to meet booming demand for servers used in cloud data centers and consumer electronics such as 5G phones.

“We intend to meet such demand, and in doing so, we will maintain our disciplined process of carefully reviewing our backlog, identifying real end-user demand, and delivering products accordingly,” CEO Hock Tan said on a conference call with investors and analysts.

Major Implications

Extended Lead Times

Arista Networks, one of the largest data center networking switch vendors and a supplier of switches to cloud providers, foretells that switch-silicon lead times will be extended to as long as 52 weeks.

“The supply chain has never been so constrained in Arista history,” the company’s CEO, Jayshree Ullal, said on an earnings call. “To put this in perspective, we now have to plan for many components with 52-week lead time. COVID has resulted in substrate and wafer shortages and reduced assembly capacity. Our contract manufacturers have experienced significant volatility due to country specific COVID orders. Naturally, we’re working more closely with our strategic suppliers to improve planning and delivery.”

Hock Tan, CEO of Broadcom, also acknowledged on an earnings call that the company had “started extending lead times.” He said, “part of the problem was that customers were now ordering more chips and demanding them faster than usual, hoping to buffer against the supply chain issues.”

Elevated Cost

Vertiv, one of the biggest sellers of datacenter power and cooling equipment, mentioned it had to delay previously planned “footprint optimization programs” due to strained supply. The company’s CEO, Robert Johnson, said on an earnings call, “We have decided to delay some of those programs.”

Supply chain constraints combined with inflation would cause “some incremental unexpected costs over the short term,” he said, “To share the cost with our customers where possible may be part of the solution.”

“Prices are definitely going to be higher for a lot of devices that require a semiconductor,” says David Yoffie, a Harvard Business School professor who spent almost three decades serving on the board of Intel.

Conclusion

There is no telling how the situation will continue playing out and, most importantly, when supply and demand might get back to normal. Opinions vary on when the shortage will end: the CEO of chipmaker STMicro estimated that the shortage will end by early 2023, while Intel CEO Patrick Gelsinger said it could last two more years.

As a high-tech network solutions and services provider, FS has been actively working with our customers to help them plan for, adapt to, and overcome the supply chain challenges, hoping that we can both ride out this chip shortage crisis. At least, we cannot lose hope, as advised by Bill Wyckoff, vice president at technology equipment provider SHI International, “This is not an ‘all is lost’ situation. There are ways and means to keep your equipment procurement and refresh plans on track if you work with the right partners.”

Article Source: Impact of Chip Shortage on Datacenter Industry

Related Articles:

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Infographic – What Is a Data Center?

Data Center White Space and Gray Space

Nowadays, with the advent of the 5G era and the advancement of technology, more and more enterprises rely on IT for almost any choice. Therefore, their demand for better data center services has increased dramatically.

However, due to the higher capital and operating costs caused by the cluttered distribution of equipment in data centers, the space has become one of the biggest factors restricting data centers. In order to solve that problem, it’s necessary to optimize the utilization of existing space, for example, to consolidate white space and gray space in data centers.

What is data center white space?

Data center white space refers to the space where IT equipment and infrastructure are located. It includes servers, storage, network gear, racks, air conditioning units, and power distribution systems.

White space is usually measured in square feet, ranging anywhere from a few hundred to a hundred thousand square feet. It can be either raised floor or hard floor (solid floor). Raised floors provide locations for power cabling, tracks for data cabling, and cold air distribution systems for IT equipment cooling, with easy access to all elements. With hard floors, by contrast, cooling and cabling systems are installed overhead. Today, there is a trend from raised floors to hard floors.

Typically, the white space area is the only productive area where an enterprise can utilize the data center space. Moreover, online activities like working from home have increased rapidly in recent years, especially due to the impact of COVID-19, which has increased business demand for data center white space. Therefore, the enterprise has to design data center white space with care.

What is data center gray space?

Different from data center white space, data center gray space refers to the space where back-end equipment is located. This includes switchgear, UPS, transformers, chillers, and generators.

Gray space exists to support the white space, so the amount of gray space is determined by the space assigned for data center white space. The more white space is needed, the more backend infrastructure is required to support it.

How to improve the efficiency of space?

Building more data centers and consuming more energy is not a good option for IT organizations to make use of data center space. To increase data center sustainability and reduce energy costs, it’s necessary to use some strategies to combine data center white space and gray space, thus optimizing the efficiency of data center space.

White Space Efficiency Strategies

  • Virtualized technology: The technology of virtualization can integrate many virtual machines into physical machines, reducing physical hardware and saving lots of data center space. Virtualization management systems such as VMware and Hyper V can create a virtualized environment.
  • Cloud computing resources: With the help of the public cloud, enterprises can transfer data through the public internet, thus reducing their needs for physical servers and other IT infrastructure.
  • Data center planning: DCIM software, a kind of data center infrastructure management tool, can help estimate current and future power and server needs. It can also help data centers track and manage resources and optimize their size to save more space.
  • Monitor power and cooling capacity: In addition to capacity planning for space, monitoring power and cooling capacity is also necessary to properly configure equipment.

Gray Space Efficiency Strategies

  • State-of-the-art technologies: Technologies like flywheels can store energy to support the power supply, reducing the number of batteries required. Besides, solar panels can reduce data center electricity bills, and water cooling can help cut the costs of cooling solutions.

Compared with white space efficiency techniques, gray space efficiency strategies are fewer. The most efficient plan, however, is to combine data center white space with gray space. By doing so, enterprises can realize the optimal utilization of data center space.

Article Source: Data Center White Space and Gray Space

Related Articles:

How to Utilize Data Center Space More Effectively?

What Is Data Center Virtualization?

Infographic – What Is a Data Center?

The Internet is where we store and receive a huge amount of information. Where is all the information stored? The answer is data centers. At its simplest, a data center is a dedicated place that organizations use to house their critical applications and data. Here is a short look into the basics of data centers. You will get to know the data center layout, the data pathway, and common types of data centers.


To know more about data centers, click here.

Article Source: Infographic – What Is a Data Center?

Related Articles:

What Is a Data Center?

Infographic — Evolution of Data Centers

What Is a Containerized Data Center: Pros and Cons

The rise of the digital economy has promoted the rapid and vigorous development of industries like cloud computing, Internet of Things, and big data, which have put forward higher requirements for data centers. The drawbacks of traditional data centers have emerged gradually, which are increasingly unable to meet the needs of the market. The prefabricated containerized data center meets the current market demand and will usher in a period of rapid development.

What Is a Containerized Data Center?

A containerized data center comes equipped with data center infrastructures housed in a container. There are different types of containerized data centers, ranging from simple IT containers to comprehensive all-in-one systems integrating the entire physical IT infrastructure.

Generally, a containerized data center includes networking equipment, servers, cooling system, UPS, cable pathways, storage devices, lighting and physical security systems.

A Containerized Data Center

Pros of Containerized Data Centers

Portability & Durability

Containerized data centers are fabricated in a manufacturing facility and shipped to the end-user in containers. Due to the container appearance, they are flexible to move and cost-saving compared to traditional data centers. What’s more, containers are dustproof, waterproof, and shock-resistant, making containerized data centers suitable for various harsh environments.

Rapid Deployment

Unlike traditional data centers with limited flexibility and difficult management, containerized data centers are prefabricated and pretested at the factory, and are transported to the deployment site for direct set-up. With access to utility power, network and water, the data center can work well. Therefore, the on-site deployment period for containerized data centers is substantially shortened to 2~3 months, demonstrating rapid and flexible deployment.

Energy Efficiency

Containerized data centers are designed for energy efficiency, which effectively limits ongoing operational costs. They enable power and cooling systems to match capacity and workload well, improving efficiency and reducing over-configuration. More specifically, containerized data centers adopt in-row cooling systems that deliver air to adjacent hotspots under strict airflow management, which greatly improves cold air utilization, saves space and electricity costs in the server room, and reduces power usage effectiveness (PUE).
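PUE, referenced above, is simply total facility power divided by IT equipment power, so lower is better and 1.0 is the theoretical ideal. A minimal Python sketch with illustrative numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    # Power Usage Effectiveness = total facility power / IT equipment power
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=600, it_equipment_kw=400))  # 1.5
print(pue(total_facility_kw=480, it_equipment_kw=400))  # 1.2 - e.g., after better airflow management
```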

High Scalability

Because of its unique modular design, a containerized data center is easy to install and scale up. More data centers can be added to the modular architecture of containerized data centers according to the requirements to optimize the IT configuration in a data center. With high scalability, containerized data centers can meet the changing demands of the organization rapidly and effortlessly.

Cons of Containerized Data Centers

Limited Computing Performance: Although it contains the entire IT infrastructure, a containerized data center still lacks the same computing capability as a traditional data center.

Low Security: Isolated containerized data centers are more vulnerable to break-ins than data center buildings. And without numerous built-in redundancies, an entire containerized data center can be shut down by a single point of failure.

Lack of Availability: It is challenging and expensive to provide utilities and networks for containerized data centers placed in edge areas.

Conclusion

Despite some shortcomings, containerized data centers have clear advantages over traditional data centers. Considering both short-term investment and long-term operating costs, containerized data centers have become the trend in data center construction.

Article Source: What Is a Containerized Data Center: Pros and Cons

Related Articles:

What Is a Data Center?

Micro Data Center and Edge Computing

Top 7 Data Center Management Challenges

5G and Multi-Access Edge Computing

Over the years, the Internet of Things (IoT) and its devices have grown tremendously, effectively boosting productivity and accelerating network agility. This technology has also elevated the adoption of edge computing while ushering in a set of advanced edge devices. Edge computing meets computational needs efficiently by distributing computing resources along the communication path, i.e., via a decentralized computing infrastructure.

One of the benefits of edge computing is improved performance as analytics capabilities are brought closer to the machine. An edge data center also reduces operational costs, thanks to the reduced bandwidth requirement and low latency.

Below, we’ve explored more about 5G wireless systems and multi-access edge computing (MEC), an advanced form of edge computing, and how both extend cloud computing benefits to the edge and closer to the users. Keep reading to learn more.

What Is Multi-Access Edge Computing

Multi-access edge computing (MEC) is a relatively new technology that offers cloud computing capabilities at the network’s edge. It works by moving some computing capabilities out of the cloud and closer to the end devices. Data therefore doesn’t travel as far, resulting in faster processing.

Broadly, there are two types of MEC: dedicated MEC and distributed MEC. Dedicated MEC is typically deployed at the customer’s site on a mobile private network and is designed for a single business. On the other hand, distributed MEC is deployed on a public network, either 4G or 5G, and connects shared assets and resources.

With both the dedicated and distributed MEC, applications run locally, and data is processed in real or near real-time. This helps avoid latency issues for faster response rates and decision-making. MEC technology has seen wider adoption in video analytics, augmented reality, location services, data caching, local content distribution, etc.

How MEC and 5G are Changing Different Industries

At the heart of multi-access edge computing are wireless and radio access network technologies that open up different networks to a wide range of innovative services. Today, 5G technology is the ultimate network that supports ultra-reliable low latency communication. It also provides an enhanced mobile broadband (eMBB) capability for use cases involving significant data rates such as virtual reality and augmented reality.

That said, 5G use cases can be categorized into three domains: massive IoT, mission-critical IoT, and enhanced mobile broadband. Each category requires different network features regarding security, mobility, bandwidth, policy control, latency, and reliability.

Why MEC Adoption Is on the Rise

5G MEC adoption is growing exponentially, and there are several reasons why this is the case. One reason is that this technology aligns with the distributed and scalable nature of the cloud, making it a key driver of technical transformation. Similarly, MEC technology is a critical business transformation change agent that offers the opportunity to improve service delivery and even support new market verticals.

Among the top use cases driving 5G MEC implementation are video content delivery, the emergence of smart cities, smart utilities (e.g., water and power grids), and connected cars. This also showcases the significant role MEC plays in different IoT domains. Here’s a quick overview of the primary use cases:

  • Autonomous vehicles – 5G MEC can help enhance operational functions such as continuous sensing and real-time traffic monitoring. This reduces latency issues and increases bandwidth.
  • Smart homes – MEC technology can process data locally, boosting privacy and security. It also reduces communication latency and allows for fast mobility and relocation.
  • AR/VR – Moving computational capabilities and processes to the edge amplifies the immersive experience for users, and it extends the battery life of AR/VR devices.
  • Smart energy – MEC resolves traffic congestion issues and delays due to huge data generation and intermittent connectivity. It also reduces cyber-attacks by enforcing security mechanisms closer to the edge.
MEC Adoption

Getting Started With 5G MEC

One of the key benefits of adopting 5G MEC technology is openness, particularly API openness and the option to integrate third-party apps. Standards compliance and application agility are the other value propositions of multi-access edge computing. Therefore, enterprises looking to benefit from a flexible and open cloud should base their integration on the key competencies they want to achieve.

One common challenge during the integration process is hardware platform limitations, as far as scale and openness are concerned. Similarly, deploying 5G MEC technology is costly, especially for small businesses with limited financial backing. Other implementation issues include ecosystem and standards immaturity, software limitations, culture, and technical skill-set challenges.

To successfully deploy multi-access edge computing, you need an effective, tried-and-tested 5G MEC implementation strategy. You should also consider partnering with an expert IT or edge computing company for professional guidance.

5G MEC Technology: Key Takeaways

Edge-driven transformation is a game-changer in the modern business world, and 5G multi-access edge computing technology is undoubtedly leading the cause. Enterprises that embrace this new technology in their business models benefit from streamlined operations, reduced costs, and enhanced customer experience.

Even then, MEC integration isn’t without its challenges. Companies looking to deploy multi-access edge computing technology should have a solid implementation strategy that aligns with their entire digital transformation agenda to avoid silos.

Article Source: 5G and Multi-Access Edge Computing

Related Articles:

What Is Multi-Access Edge Computing?

Edge Computing vs. Multi-Access Edge Computing

What Is Edge Computing?

Carrier Neutral vs. Carrier Specific: Which to Choose?

As the need for data storage drives the growth of data centers, colocation facilities are increasingly important to enterprises. A colocation data center brings many advantages to an enterprise data center, such as having carriers help manage the IT infrastructure, which reduces management costs. There are two types of hosting carriers: carrier-neutral and carrier-specific. In this article, we will discuss the differences between them.

Carrier Neutral and Carrier Specific Data Center: What Are They?

Accompanied by the accelerated growth of the Internet, the exponential growth of data has led to a surge in the number of data centers to meet the needs of companies of all sizes and market segments. Two types of carriers that offer managed services have emerged on the market.

Carrier-neutral data centers allow access and interconnection of multiple different carriers while the carriers can find solutions that meet the specific needs of an enterprise’s business. Carrier-specific data centers, however, are monolithic, supporting only one carrier that controls all access to corporate data. At present, most enterprises choose carrier-neutral data centers to support their business development and avoid some unplanned accidents.

For example, in 2021, about one-third of AWS’s cloud infrastructure was overwhelmed and down for nine hours. This not only affected millions of websites, but also countless other devices running on AWS. A week later, AWS was down again for about an hour, bringing down the PlayStation network, Zoom, and Salesforce, among others. A third AWS outage also impacted Internet giants such as Slack, Asana, Hulu, and Imgur to a certain extent. Three cloud infrastructure outages in one month cost AWS beyond measure and demonstrated the fragility of cloud dependence.

The example above shows that unplanned accidents in how a data center is managed can disrupt business development, which is a huge loss for the enterprise. To lower the risks of relying on a single carrier, enterprises need to choose a carrier-neutral data center and adjust their system architecture to protect their data center.

Why Should Enterprises Choose Carrier Neutral Data Center?

Carrier-neutral data centers are data centers operated by third-party colocation providers, but these third parties are rarely involved in providing Internet access services. Hence, the existence of carrier-neutral data centers enhances the diversity of market competition and provides enterprises with more beneficial options.

Another colocation advantage of a carrier-neutral data center is the ability to change internet providers as needed, saving the labor cost of physically moving servers elsewhere. We have summarized several main advantages of a carrier-neutral data center as follows.

Redundancy

A carrier-neutral colocation data center is independent of the network operators and not owned by a single ISP. Because of this independence, it offers enterprises multiple connectivity options, creating a fully redundant infrastructure. If one of the carriers loses power, the carrier-neutral data center can instantly switch servers to another online carrier, ensuring that the entire infrastructure keeps running and stays online. On the network side, a cross-connect links the ISP or telecom company directly to the customer’s sub-server to obtain bandwidth from the source. This avoids the extra delay introduced by network switching and safeguards network performance.
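
As a rough illustration of why multi-carrier redundancy matters, the Python sketch below computes combined availability under the simplifying assumption that carrier outages are independent; the uptime figures are hypothetical:

```python
# Combined availability of redundant, independent carriers:
# the service is down only when every carrier is down at once.

def combined_availability(availabilities: list[float]) -> float:
    """Probability that at least one carrier is up at any moment."""
    all_down = 1.0
    for a in availabilities:
        all_down *= (1.0 - a)  # chance this carrier is down, compounded
    return 1.0 - all_down

single = combined_availability([0.999])        # one carrier at "three nines"
dual = combined_availability([0.999, 0.999])   # carrier-neutral, two carriers

print(f"Single carrier: {single:.6f}  (~8.8 hours of downtime per year)")
print(f"Two carriers:   {dual:.6f}  (~32 seconds of downtime per year)")
```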

Options and Flexibility

Flexibility is a key factor and advantage for carrier-neutral data center providers. For one thing, the carrier-neutral model allows network transmission capacity to be scaled up or down as needed; as the business continues to grow, enterprises need colocation data center providers that can offer scalability and flexibility. For another, carrier-neutral facilities can provide additional benefits to their customers, such as enterprise DR options, interconnects, and MSP services. Whether your business is large or small, a carrier-neutral data center provider may be the best choice for you.

Cost-effectiveness

First, colocation data center solutions provide a high level of control and scalability, expanding storage capacity to support business growth and trim expenses; they also lower physical transport costs for enterprises. Second, with all operators in the market competing for the best price and maximum connectivity, a carrier-neutral data center has a cost advantage over a single-network facility. What’s more, since enterprises are free to use any carrier in a carrier-neutral data center, they can choose the best cost-benefit ratio for their needs.

Reliability

Carrier-neutral data centers also boast reliability. One of the most important aspects of a data center is the ability to deliver 100% uptime. Carrier-neutral data center providers can offer users ISP redundancy that a carrier-specific data center cannot. Having multiple ISPs at the same time provides better security for all clients: even if one carrier fails, another can keep the system running. At the same time, the data center service provider supplies 24/7 security, covering every detail and using advanced technology to secure login access at all access points so that customer data stays safe. The multi-layered physical protection of security cabinets likewise ensures safe data transmission.

Summary

While every enterprise must determine the best option for its specific business needs, a comparison of carrier-neutral and carrier-specific facilities shows that a carrier-neutral data center service provider is the better option for today’s cloud-based business customers. Working with a carrier-neutral managed service provider brings several advantages, such as lower total cost, lower network latency, and better network coverage. With no downtime and fewer concerns about equipment performance, IT decision-makers for enterprise clients have more time to focus on the more valuable areas that drive continued business growth and success.

Article Source: Carrier Neutral vs. Carrier Specific: Which to Choose?

Related Articles:

What Is Data Center Storage?

On-Premises vs. Cloud Data Center, Which Is Right for Your Business?

Data Center Infrastructure Basics and Management Solutions

Data center infrastructure refers to all the physical components in a data center environment. These physical components play a vital role in the day-to-day operations of a data center. Hence, data center management challenges are an urgent issue for IT departments: on the one hand, improving the data center’s energy efficiency; on the other, monitoring its operating performance in real time to keep it in good working condition and sustain enterprise development.

Data Center Infrastructure Basics

Data center infrastructure standards define four tiers, each consisting of different facilities. The main components include cabling systems, power facilities, cooling facilities, network infrastructure, storage infrastructure, and computing resources.

There are roughly two types of infrastructure inside a data center: the core components and IT infrastructure. Network infrastructure, storage infrastructure, and computing resources belong to the former, while cooling equipment, power, redundancy, etc. belong to the latter.

Core Components

Network, storage, and computing systems are the vital infrastructures through which data centers provide shared access to applications and data. They are the core components of a data center.

Network Infrastructure

Data center network infrastructure is a combination of network resources, consisting of switches, routers, load balancers, analytics, and so on, that facilitates the storage and processing of applications and data. Modern data center networking architectures, by using full-stack networking and security virtualization platforms that support a rich set of data services, can connect everything from VMs and containers to bare-metal applications, while enabling centralized management and fine-grained security controls.

Storage Infrastructure

Data center storage is a general term for the tools, technologies, and processes for designing, implementing, managing, and monitoring storage infrastructure and resources in data centers, mainly referring to the equipment and software that implement data and application storage in data center facilities. These include hard drives, tape drives, and other forms of internal and external storage, as well as backup management software and external storage facilities/solutions.

Computing Resources

Computing resources are the memory and processing power that run applications, usually provided by high-end servers. In the edge computing model, the processing and memory used to run applications on servers may be virtualized, physical, distributed among containers, or distributed among remote nodes.

IT Infrastructure

As data centers become critical to enterprise IT operations, it is equally important to keep them running efficiently. When designing data center infrastructure, the physical environment must be evaluated, including the cabling, power, and cooling systems, to ensure the security of the data center’s physical environment.

Cabling Systems

Integrated cabling is an important part of data center cable management, supporting the connection, intercommunication, and operation of the entire data center network. The system is usually composed of copper cables, optical cables, connectors, and wiring equipment. A data center’s integrated cabling system is characterized by high density, high performance, high reliability, fast modular installation, future-readiness, and ease of use.

Power Systems

Data center digital infrastructure requires electricity to operate, and even an interruption of a fraction of a second has a significant impact. Hence, power infrastructure is one of the most critical components of a data center. The data center power chain starts at the substation and runs through building transformers, switches, uninterruptible power supplies, power distribution units, and remote power panels to racks and servers.

Cooling Systems

Data center servers generate a lot of heat while running, so cooling is critical to data center operations, aiming to keep systems online. The amount of power each rack can keep cool places a limit on the amount of power a data center can consume. Generally, each rack allows the data center to operate at an average cooling density of 5-10 kW, though some racks run higher.
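
As a quick worked example with illustrative numbers, the per-rack cooling density caps the total IT load a facility can support:

```python
# Per-rack cooling density caps total supportable IT load (illustrative numbers).
racks = 200
cooling_density_kw = 7.5  # mid-range of the 5-10 kW per-rack figure above

max_it_load_kw = racks * cooling_density_kw
print(f"Maximum supportable IT load: {max_it_load_kw:.0f} kW")  # 1500 kW
```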

Data Center Infrastructure Management Solutions

Due to the complexity of IT equipment in a data center, the availability, reliability, and maintenance of its components require extra attention. Efficient data center operations can be achieved through balanced investment in facilities and the equipment they house.

Energy Usage Monitoring Equipment

Traditional data centers lack the energy usage monitoring instruments and sensors required to comply with ASHRAE standards and to collect the measurement data used in calculating data center PUE, resulting in a poorly monitored power system. One measure is to install energy monitoring components and systems on power systems to measure data center energy efficiency. With these measurements, enterprise teams can implement effective strategies to balance overall energy usage and monitor the energy usage of all other nodes.
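
A minimal sketch of the calculation this monitoring enables: assuming each node in the power chain reports metered kW readings (the names and values below are hypothetical), PUE falls out directly:

```python
# PUE from metered readings: total facility power divided by IT power.
# Node names and values are hypothetical examples.

readings = {
    "it_pdu_total": 480.0,   # kW delivered to racks (IT load)
    "cooling": 210.0,        # kW for the CRAC/chiller plant
    "ups_losses": 25.0,      # kW lost in power conversion
    "lighting_misc": 15.0,   # kW for lighting, security, etc.
}

total_facility_kw = sum(readings.values())
pue = total_facility_kw / readings["it_pdu_total"]
print(f"Measured PUE: {pue:.2f}")  # 1.52
```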

Cooling Facilities Optimization

Independent computer room air conditioning units used in traditional data centers often have separate controls and set points, resulting in excessive operation as they fight over temperature and humidity adjustments. A good way to help servers stay cool is to create hot-aisle/cold-aisle layouts that maximize the flow of cold air to the equipment intakes and of hot exhaust air away from the equipment racks. Adding partitions or ceilings to form hot or cold aisles eliminates the mixing of hot and cold air.

CRAC Efficiency Improvement

Packaged DX air conditioners are likely the most common type of cooling equipment for smaller data centers. These units are often described as CRAC units. There are, however, several ways to improve the energy efficiency of cooling systems employing DX units. Indoor CRAC units are available with a few different heat rejection options.

  • As with rooftop units, adding evaporative spray can improve the efficiency of air-cooled CRAC units.
  • A pre-cooling water coil can be added to the CRAC unit upstream of the evaporator coil. When ambient conditions allow the condenser water to be cooled to the extent that it provides direct cooling benefits to the air entering the CRAC unit, the condenser water is diverted to the pre-cooling coil. This will reduce or sometimes eliminate the need for compressor-based cooling for the CRAC unit.

DCIM

Data center infrastructure management is the combination of IT and operations to manage and optimize the performance of data center infrastructure within an organization. DCIM tools help data center operators monitor, measure, and manage the utilization and energy consumption of data center-related equipment and facility infrastructure components, effectively improving the relationship between data center buildings and their systems.

DCIM enables bridging of information across organizational domains such as data center operations, facilities, and IT to maximize data center utilization. Data center operators create flexible and efficient operations by visualizing real-time temperature and humidity status, equipment status, power consumption, and air conditioning workloads in server rooms.
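
The sketch below illustrates the kind of real-time threshold check a DCIM tool might run against rack sensor feeds; the sensor names and rack ID are hypothetical, and the temperature limits follow the ASHRAE-recommended inlet range:

```python
# A DCIM-style threshold check over rack telemetry (hypothetical feed).

SENSOR_LIMITS = {
    "temperature_c": (18.0, 27.0),  # ASHRAE-recommended inlet temperature range
    "humidity_pct": (20.0, 80.0),   # illustrative humidity bounds
}

def check_rack(rack_id: str, telemetry: dict[str, float]) -> list[str]:
    """Return alert messages for any reading outside its allowed range."""
    alerts = []
    for metric, (low, high) in SENSOR_LIMITS.items():
        value = telemetry[metric]
        if not low <= value <= high:
            alerts.append(f"{rack_id}: {metric}={value} outside [{low}, {high}]")
    return alerts

print(check_rack("rack-a12", {"temperature_c": 29.5, "humidity_pct": 45.0}))
# ['rack-a12: temperature_c=29.5 outside [18.0, 27.0]']
```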

Preventive Maintenance

In addition to the above management and operation solutions for infrastructure, unplanned maintenance is also an aspect to consider. Unplanned maintenance typically costs 3-9 times more than planned maintenance, primarily due to overtime labor costs, collateral damage, emergency parts, and service calls. IT teams can create a recurring schedule to perform preventive maintenance on the data center. Regularly checking the infrastructure status and repairing and upgrading the required components promptly can keep the internal infrastructure running efficiently, as well as extend the lifespan and overall efficiency of the data center infrastructure.

Article Source: Data Center Infrastructure Basics and Management Solutions

Related Articles:

Data Center Migration Steps and Challenges

What Are Data Center Tiers?

Why Green Data Center Matters

Background

Green data centers have emerged in enterprise construction as new data storage requirements grow continuously and awareness of environmental protection steadily strengthens. Newly retained data must be protected, cooled, and transferred efficiently. This means that the huge energy demands of data centers present challenges in terms of cost and sustainability, and enterprises are increasingly concerned about the energy demands of their data centers. Sustainable and renewable energy resources have thus become the development trend for green data centers.

Green Data Center Is a Trend

A green data center is a facility similar to a regular data center that hosts servers to store, manage, and disseminate data. It is designed to minimize environmental impact by providing maximum energy efficiency. Green data centers have the same characteristics as typical data centers, but the internal system settings and technologies can effectively reduce energy consumption and carbon footprints for enterprises.

The internal construction of a green data center requires the support of a series of services, such as cloud services, cable TV services, Internet services, colocation services, and data protection security services. Of course, many enterprises or carriers have equipped their data centers with cloud services. Some enterprises may also need to rely on other carriers to provide Internet and related services.

According to market trends, the global green data center market was worth around $59.32 billion in 2021 and is expected to grow at a CAGR of 23.5% through 2026. This also shows that the transition to renewable energy sources is accelerating with the growth of green data centers.
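
For reference, that projection follows directly from compound growth; here is a minimal sketch, assuming a five-year 2021-2026 compounding window:

```python
# Compound annual growth: size_n = size_0 * (1 + CAGR) ** years.
base_2021 = 59.32  # market size in USD billions (from the figure above)
cagr = 0.235       # 23.5% compound annual growth rate

projected_2026 = base_2021 * (1 + cagr) ** 5
print(f"Projected 2026 market size: ${projected_2026:.1f}B")  # ~$170.4B
```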

As growing demand for data storage drives the modernization of data centers, it also places higher demands on power and cooling systems. On the one hand, data centers must convert non-renewable energy into electricity, driving up electricity costs; on the other hand, some enterprises rely on large volumes of water to build cooling facilities and clean servers. All of this creates ample opportunities for the green data center market. For example, as Facebook and Amazon continue to expand their businesses, the data storage needs of global companies keep increasing. These enterprises need vast amounts of data to analyze potential customers, and processing that data requires a great deal of energy. Realizing green data centers has therefore become an urgent need for enterprises, one that can also bring them many other benefits.

Green Data Center Benefits

The green data center concept has grown rapidly in the process of enterprise data center development. Many businesses prefer alternative energy solutions for their data centers, which can bring many benefits to the business. The benefits of green data centers are as follows.

Energy Saving

Green data centers are designed not only to conserve energy, but also to reduce the need for expensive infrastructure to handle cooling and power needs. Sustainable or renewable energy is an abundant and reliable source of power that can significantly reduce power usage effectiveness (PUE). A lower PUE enables enterprises to use electricity more efficiently. Green data centers can also use colocation services to decrease server usage, lower water consumption, and reduce the cost of corporate cooling systems.

Cost Reduction

Green data centers use renewable energy to reduce power consumption and business costs through the latest technologies. Shutting down servers that are being upgraded or managed can also help reduce energy consumption at the facility and control operating costs.

Environmental Sustainability

Green data centers can reduce the environmental impact of computing hardware, thereby creating data center sustainability. Ever-advancing technology calls for new equipment and techniques in modern data centers, and these new server devices and virtualization technologies lower energy consumption, which is environmentally sustainable and brings economic benefits to data center operators.

Enterprise Social Image Enhancement

Today, users are increasingly interested in solving environmental problems. Green data center services help businesses address these issues quickly without compromising performance. Many customers already see responsible business conduct as a value proposition. By building green data centers to meet the compliance and regulatory requirements of the corresponding regions, enterprises enhance their public image.

Reasonable Use of Resources

In an environmentally friendly way, green data centers allow enterprises to make better use of resources such as electricity, physical space, and heat by integrating the data center’s internal facilities. This promotes efficient operation of the data center while achieving rational utilization of resources.

5 Ways to Create a Green Data Center

Having covered the benefits of a green data center, how do you actually build one? Here is a series of green data center solutions.

  • Virtualization extension: Enterprises can build a virtualized computer system with the help of virtualization technology, and run multiple applications and operating systems through fewer servers, thereby realizing the construction of green data centers.
  • Renewable energy utilization: Enterprises can opt for solar panels, wind turbines or hydroelectric plants that can generate energy to power backup generators without any harm to the environment.
  • Enter eco mode: Running alternating-current UPSs in eco mode is one way to improve data center efficiency and PUE significantly. Alternatively, enterprises can reuse equipment, which not only saves money but also prevents unnecessary emissions from seeping into the atmosphere.
  • Optimized cooling: Data center infrastructure managers can introduce simple and implementable cooling solutions, such as deploying hot aisle/cold aisle configurations. Data centers can further accelerate cooling output by investing in air handlers and coolers, and installing economizers that draw outside air from the natural environment to build green data center cooling systems.
  • DCIM and BMS systems: DCIM software and BMS software can help data center managers identify and document ways to use energy more efficiently, helping data centers achieve their sustainability goals.

Conclusion

Data center sustainability means reducing energy and water consumption and carbon emissions to offset increased computing and mobile device usage and keep business running smoothly. The development of green data centers has become an imperative trend, and it caters to the green goals of global environmental protection. As beneficiaries, enterprises can not only save operating costs but also effectively reduce energy consumption. This is an important reason to build green data centers.

Article Source: Why Green Data Center Matters

Related Articles:

Data Center Infrastructure Basics and Management Solutions

What Is a Data Center?

Trend of Cloud Computing in Data Center

In the past, traditional data centers were built mainly from hardware and physical servers, so data storage was constrained by the physical restrictions of space, and network expansion became a headache for IT managers. Fortunately, the virtualized data center with cloud computing services emerged and has remained the trend since 2003. More and more data center technicians adopt it as a cost-effective way to achieve higher bandwidth performance. This post will help you better understand cloud computing in the data center.

What Is Cloud Computing?

Cloud computing service is not restricted to one data center; it may include multiple data centers scattered around the world. Unlike the traditional data center architecture, where network users owned, maintained, and operated their own network infrastructure, server rooms, data servers, and applications, a cloud data center provides business applications online that are accessed from web browsers, while the software and data are stored on servers or SAN devices. Thus, cloud-based applications run on servers instead of a local laptop or desktop computer. Users don’t need to know the location of the data center, and no in-house experts are needed to operate or maintain the resources in the cloud; knowing how to connect to the resources is enough for clients.

Advantages of Cloud Computing

Cloud computing brings many great changes to data center networking. Here are some key benefits of cloud computing.

  • Flexibility – Cloud computing can update hardware and software quickly to adhere to customer demands and updates in technology.
  • Reliability – Many cloud providers replicate their server environments in multiple data centers around the globe, which supports business continuity and disaster recovery.
  • Scalability – Multiple resources balance peak load capacity and utilization across multiple hardware platforms in different locations.
  • Location and hardware independence – Users can access applications from a web browser connected anywhere on the internet.
  • Simple maintenance – Centralized applications are much easier to maintain than their distributed counterparts. All updates and changes are made on one centralized server instead of on each user’s computer.

Traditional & Cloud Data Centers Cost Comparison

Cost is always an important concern in data center building. One reason cloud computing is so popular among data centers is that its cost is much lower than the same service provided by traditional data centers. Generally, the cost mainly depends on the size, location, and application of a data center.

A traditional data center is more complicated because it runs many different applications, which increases the workloads; most of these applications are used by only a few employees, making the setup less cost-effective. Roughly 42 percent of the money is spent on hardware, software, disaster recovery arrangements, uninterrupted power supplies, and networking, and 58 percent on heating, air conditioning, property and sales taxes, and labor costs. A cloud data center performs the same services in a different way, saving on servers, infrastructure, power, and networking. Less money is wasted on extra maintenance and more goes to the computing itself, which greatly raises working efficiency.

Is It Secure to Use Cloud Computing?

Data security is always essential to data centers. Centralization of sensitive data in cloud computing service improves security by removing data from the users’ computers. Cloud providers also have the staff resources to maintain all the latest security features to help protect data. Many large providers will safeguard data security in cloud computing by operating multiple data centers with data replicated across facilities.

Conclusion

Cloud computing service has greatly enhanced the performance of data centers by reducing the need for maintenance and improving productivity. More data centers are becoming cloud-based these days. Cloud technology is an efficient way to provide quality data service.

Field Terminated vs. Pre-Terminated: Which Do You Prefer?

Fiber optic termination refers to the addition of fiber optic connectors, such as LC, SC, FC, MPO, etc., to each fiber in a fiber optic cable. It is an essential step in fiber optic connectivity. Nowadays, two major termination solutions, field terminated and pre-terminated (factory pre-terminated), are used to achieve fiber termination. Of these two solutions, which do you prefer?

Field Termination

Field termination, as its name suggests, means terminating the end of a fiber in the field. Field terminated solutions, including no-epoxy, no-polish (NENP) connectors, epoxy-and-polish (EP) connectors, and pigtail splicing, are applied to the majority of fiber optic cables today. Field termination requires not only various steps and tools, but also proper training and skilled technicians to terminate the fiber correctly.

Note: pigtail splicing is accomplished by fusing the field fiber with a factory-made pigtail in a splice tray.

Factory Termination

Factory termination, also called factory pre-termination, means that cables and fibers are terminated with connectors in the factory. In fact, factory termination follows the same procedures as field termination, but all the steps are performed at the manufacturer’s facility. The pre-terminated solution, mainly comprising fiber patch cables and pre-terminated cassettes and enclosures, features superior performance, good consistency, low insertion loss, and good end-to-end attenuation, thanks to high-quality connector end-face geometry. In addition, by cutting out the cumbersome process and tools, the factory pre-terminated solution is easier to install and requires less technical skill.

Field Terminated vs. Pre-Terminated

Field terminated solution and pre-terminated solution, with different strengths and weaknesses, are likely to attract different types of users. As technicians face important trade-offs in deciding which method to choose, we are going to provide a detailed comparison between them from several aspects in this section.

Preparation
The field terminated solution needs a series of preparations before termination: stripping the cable, preparing the epoxy, applying the connector, scribing and polishing, and inspection and testing. Additionally, tools and consumables such as epoxy and syringes, polishing products, cable installation tools, etc. are necessary. Conversely, the pre-terminated solution needs no cable termination preparation, no connector scrap, no cumbersome tool kits or consumables, and no specialized testers.

Cost & Time Spent
The traditional field terminated solution has the lowest material cost, with no pre-terminated pigtails or assemblies required, but the highest labor cost, as field-installing connectors takes much longer. For pigtail splicing, the factory pre-terminated pigtails cost less, but higher labor rates are typically required for technicians with fusion splicing expertise, and fusion splicing equipment must be on hand. The pre-terminated solution typically costs more than the other options on materials; however, it greatly reduces labor cost because less expertise and fewer resources are required of installation staff.

As mentioned above, the field terminated solution takes more time in preparation and field installation of connectors. In contrast, with the pre-terminated solution, connectors are factory terminated and tested in a clean environment with comprehensive quality control processes and documented test results, which allows for immediate installation and saves up to 70% on installation time.

To sum up, mainly through time and labor savings, the pre-terminated solution can cut costs by an average of 20-30% compared with field terminated solutions.
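
To see how a higher material cost can still yield a net saving, here is a minimal sketch with hypothetical prices, hours, and labor rate (none of these figures come from the text above):

```python
# Labor-vs-material trade-off between termination methods (hypothetical figures).
LABOR_RATE = 75.0  # USD per hour

def install_cost(material_usd: float, hours: float) -> float:
    """Total installed cost = materials plus labor."""
    return material_usd + hours * LABOR_RATE

field = install_cost(material_usd=2000.0, hours=40.0)     # cheap parts, slow install
pre_term = install_cost(material_usd=2600.0, hours=12.0)  # costlier parts, ~70% less time

print(f"Field terminated: ${field:,.0f}")    # $5,000
print(f"Pre-terminated:   ${pre_term:,.0f}") # $3,500
print(f"Savings: {(field - pre_term) / field:.0%}")  # 30%
```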

Performance
In terms of performance, the pre-terminated solution is more stable than the field terminated one. Factory pre-terminated assemblies, with documented test results, generally deliver lower insertion loss and better performance, while the field terminated solution is weaker in stability because field installation involves many uncertainties. For high-density applications, pre-terminated cable assemblies offer better manageability and density, making them more suitable for high-density connectivity than field terminated practices.

Applications
The field terminated solution, as a traditional termination method, is still used in many application fields. Nowadays, however, when cable distances are less than 100 meters and cable lengths are pre-determined, users prefer the pre-terminated solution. It is widely used for cross-connect or interconnect in the MDA (Main Distribution Area), EDA (Equipment Distribution Area), or other areas of the data center, as well as for fixed lengths in interbuilding or intrabuilding backbones.

Warm Tips: Click here to view the Field Termination vs. Factory Termination in LAN application.

Conclusion

Field terminated and factory pre-terminated solutions both play a very important role in fiber optic termination, though they have different features. Choose the right method for your network according to your plan. For data center applications, FS.COM highly recommends the pre-terminated solution, as it helps keep costs down and the network up while meeting high-density demands. Contact us at sales@fs.com for detailed information.