What Are InfiniBand and InfiniBand Switches?

In 1999, with the rapid development of CPU performance, the limitations of existing I/O systems had become a bottleneck restricting server performance. The telecommunications industry urgently needed a powerful next-generation I/O standard and technology to serve high-speed communication networks. InfiniBand originated under these circumstances. Accordingly, the InfiniBand switch, which combines a high-speed fiber switch with InfiniBand technology, was invented to achieve node-to-node communication in IB networking. This post will introduce what InfiniBand is, what an InfiniBand switch is, and how to bridge InfiniBand to Ethernet.

What Is InfiniBand?

It was not until 2005 that the InfiniBand Architecture (IBA) became widely used in clustered supercomputers, and ever since, more and more telecom giants have joined the camp. InfiniBand has now become one of the mainstream interconnect technologies for high-performance computing (HPC), enterprise data centers and cloud computing environments. InfiniBand, "infinite bandwidth" as the name suggests, is a high-performance networking communication standard. It features high throughput, low latency and high system scalability. As a cutting-edge technology, InfiniBand is ideal for communication between servers, between server and storage, and between server and LAN/WAN/Internet. The InfiniBand Architecture uses this technology to build multi-link networks that move data between processors and I/O devices with non-blocking bandwidth.


Figure 1: InfiniBand topology of an HPC cluster – an InfiniBand switch is integrated in each of the chassis.

What Is InfiniBand Switch?

The InfiniBand switch is also called the IB switch. Similar to PoE switches, SDN switches and NVGRE/VXLAN switches, the IB switch adds InfiniBand capability to network switch hardware. On the market, Mellanox, Intel and Oracle InfiniBand switches are three name-brand leading IB switch lines. InfiniBand switch prices also vary with vendor and switch configuration. IB switch ports come in different counts, connector types and IB speed types. For instance, the leading IB switch vendor Mellanox manufactures 8- to 648-port QSFP/QSFP28 FDR/EDR InfiniBand switches. Over a common 4x link, FDR and EDR InfiniBand support 56Gb/s and 100Gb/s respectively. In addition to the popular FDR 56Gb/s and EDR 100Gb/s InfiniBand, you can go for an HDR 200G switch for higher speed or an SDR 10G switch for lower speed. Other IB types available are DDR 20G, QDR 40G and FDR10 40G.
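
Those headline speeds are simply the per-lane rate multiplied by the link width. The sketch below reproduces the article's figures from nominal per-lane rates; it deliberately ignores link encoding overhead (e.g. 8b/10b on the older generations), so real effective throughput is somewhat lower.

```python
# Nominal per-lane rates (Gb/s) for each InfiniBand generation.
# These are the headline figures used in this article; effective
# throughput is a bit lower once encoding overhead is counted.
LANE_RATE_GBPS = {
    "SDR": 2.5,
    "DDR": 5,
    "QDR": 10,
    "FDR10": 10,
    "FDR": 14,
    "EDR": 25,
    "HDR": 50,
}

def link_speed(ib_type: str, lanes: int = 4) -> float:
    """Aggregate nominal speed of an IB link (default 4x width)."""
    return LANE_RATE_GBPS[ib_type] * lanes

for ib_type in LANE_RATE_GBPS:
    print(f"{ib_type:>5} 4x link: {link_speed(ib_type):g} Gb/s")
# FDR 4x -> 56 Gb/s, EDR 4x -> 100 Gb/s, HDR 4x -> 200 Gb/s
```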


Figure 2: InfiniBand switches in a basic Mellanox InfiniBand Architecture, ensuring higher bandwidth, lower latency and enhanced scalability.

How to Bridge InfiniBand to Ethernet?

As Ethernet and InfiniBand are two different network standards, one question is of great concern: how to bridge InfiniBand to Ethernet? In fact, many modern InfiniBand switches have built-in Ethernet ports and an Ethernet gateway to improve network adaptability. But in cases where a switch offers only IB ports, how do you connect InfiniBand hosts to multiple gigabit Ethernet switches? You may need NICs such as InfiniBand cards or Ethernet converged network adapters (CNAs) to bridge InfiniBand to Ethernet.
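
At the host side, one common software path is IP over InfiniBand (IPoIB), which exposes an IB port as an ordinary IP interface that can then be routed toward the Ethernet side. Below is a minimal sketch driving the standard Linux iproute2 commands from Python; the interface name ib0 and the 192.168.10.0/24 addressing are assumptions for illustration, not values from this article.

```python
import subprocess

def run(cmd: str) -> None:
    """Run one command, raising if it fails (needs root)."""
    print(f"$ {cmd}")
    subprocess.run(cmd.split(), check=True)

# Assumptions for illustration: the ib_ipoib kernel module is available
# and the first IB port appears as ib0; 192.168.10.0/24 is an example
# subnet that a router or Ethernet gateway would then connect onward.
run("modprobe ib_ipoib")                     # load the IPoIB driver
run("ip addr add 192.168.10.1/24 dev ib0")   # give the IB port an IP
run("ip link set ib0 up")                    # bring the interface up
```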


Figure 3: An Ethernet gateway bridge-group bridging InfiniBand to Ethernet (Cisco).

Alternatively, you can buy the Mellanox InfiniBand switch series built on ConnectX network cards and SwitchX switch silicon, which supports Virtual Protocol Interconnect (VPI) between InfiniBand and Ethernet. VPI lets the link protocol be set explicitly or adapt automatically, so one physical Mellanox IB switch can fill several roles. VPI supports three modes: whole-machine VPI, port VPI and VPI bridging. Whole-machine VPI runs all ports of the switch in either InfiniBand or Ethernet mode. Port VPI runs some ports of the switch in InfiniBand mode and others in Ethernet mode. VPI bridging implements InfiniBand bridging to Ethernet.
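
As a concrete illustration of the port-VPI idea, on VPI-capable ConnectX adapters the per-port protocol is typically selected with Mellanox's mlxconfig tool. The sketch below wraps that tool from Python; the device path is an example only and will differ per system, so treat this as a hedged sketch rather than a definitive configuration guide.

```python
import subprocess

# Hedged sketch: LINK_TYPE_P1 and LINK_TYPE_P2 choose the protocol for
# ports 1 and 2 of a VPI-capable adapter. The device path below is an
# example only (list real devices with `mst status`).
LINK_TYPE = {"IB": 1, "ETH": 2, "VPI": 3}  # 3 = auto-sense the protocol

def set_port_protocol(device: str, port: int, proto: str) -> None:
    """Pin one adapter port to InfiniBand, Ethernet, or auto (VPI)."""
    subprocess.run(
        ["mlxconfig", "-y", "-d", device, "set",
         f"LINK_TYPE_P{port}={LINK_TYPE[proto]}"],
        check=True,
    )

# A "port VPI" style split: port 1 stays InfiniBand, port 2 runs Ethernet.
set_port_protocol("/dev/mst/mt4099_pciconf0", 1, "IB")
set_port_protocol("/dev/mst/mt4099_pciconf0", 2, "ETH")
```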

Conclusion

InfiniBand technology simplifies and accelerates link aggregation between servers and supports server connectivity to remote storage and network devices. The InfiniBand switch combines IB technology with fiber switch hardware, achieving high capacity, low latency and excellent scalability for HPC, enterprise data centers and cloud computing environments. How do you bridge InfiniBand to Ethernet in a topology built with InfiniBand and Ethernet switches? Devices like converged network adapters (CNAs), InfiniBand routers/Ethernet gateways, InfiniBand connectors and InfiniBand cables may be required. To ensure flexible bridging, go for an IB switch with optional Ethernet ports or the Mellanox InfiniBand switch series with VPI functionality. Such an InfiniBand switch's price can be rather steep, but its advanced features make it worth the cost.

NVGRE vs VXLAN: What’s the Difference?

What is network virtualization? Network virtualization is a software-defined networking process that combines hardware and software into a single virtual network. Over the years, network virtualization has kept upgrading as different virtual network technologies have popped up, transitioning from rudimentary virtualization networking to more advanced schemes like VLAN. Then the appearance of two tunneling protocols, NVGRE and VXLAN, brought in new network virtualization technologies. Software-defined networking (SDN) NVGRE vs VXLAN: what's the difference? This post will introduce the definitions of NVGRE and VXLAN, the features of NVGRE/VXLAN network switches and the differences between NVGRE and VXLAN.


NVGRE vs VXLAN: What Are NVGRE and VXLAN?

NVGRE (Network Virtualization using Generic Routing Encapsulation) and VXLAN (Virtual eXtensible Local Area Network) are two different tunneling protocols for network virtualization. They don't provide much functionality on their own but define how virtual devices such as network switches encapsulate and forward packets, which is why people often refer to software-defined NVGRE/VXLAN as network virtualization technologies. Both NVGRE and VXLAN encapsulate layer 2 frames within layer 3 packets, which solves the scalability problem of large cloud computing environments and lets layer 2 packets travel across IP networks.
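
To make the encapsulation concrete, here is a small sketch that packs the 8-byte VXLAN header defined in RFC 7348; the outer UDP/IP layers (normally UDP destination port 4789) and the encapsulated Ethernet frame are omitted.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Only the I flag (0x08) is set, marking the 24-bit VNI field as
    valid; every other field is reserved and sent as zero.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # flags(1 byte) + reserved(3) + VNI(3) + reserved(1)
    return struct.pack("!B3s3sB", 0x08, b"\x00" * 3,
                       vni.to_bytes(3, "big"), 0)

print(vxlan_header(vni=5000).hex())  # 0800000000138800
```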

NVGRE vs VXLAN: What’s the difference?

  • NVGRE is mainly backed by Microsoft whereas VXLAN was introduced by Cisco. The two tech giants are competing to make their own standard the unified standard in the industry.
  • Both technologies break VLAN's fixed limit of 4096 virtual networks, creating up to 16 million virtual networks. However, the VXLAN and NVGRE deployment methods and header formats are quite different. VXLAN tunnels over the standard UDP protocol and carries a 24-bit ID in the VXLAN header. NVGRE instead employs GRE (Generic Routing Encapsulation) to tunnel layer 2 packets over layer 3 networks, placing its 24-bit ID in the lower bits of the GRE key field, which likewise supports 16 million virtual networks (see the NVGRE header sketch after this list).
  • VXLAN can guarantee load balancing and preserve packet order between different virtual machines (VMs). NVGRE, however, must use the GRE header, with its FlowID field describing flow granularity for bandwidth utilization, which makes NVGRE incompatible with traditional load balancing that hashes on UDP/TCP ports. To work around this, an NVGRE host requires multiple IP addresses to keep traffic loads balanced.
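
For comparison with the VXLAN header above, the sketch below packs the NVGRE header from RFC 7637: a GRE header with the Key Present bit set, protocol type 0x6558 (Transparent Ethernet Bridging), and a 32-bit key holding the 24-bit Virtual Subnet ID (VSID) plus the 8-bit FlowID. Both ID spaces are 24 bits wide, hence the 2^24 = 16,777,216 virtual networks quoted above.

```python
import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte NVGRE header from RFC 7637.

    NVGRE is GRE with the Key Present (K) bit set and protocol type
    0x6558 (Transparent Ethernet Bridging); its 32-bit key packs a
    24-bit Virtual Subnet ID on top of an 8-bit FlowID.
    """
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID must fit in 24 bits")
    flags = 0x2000                       # K bit only: a key is present
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags, 0x6558, key)

print(nvgre_header(vsid=5000, flow_id=1).hex())  # 2000655800138801
```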

NVGRE vs VXLAN: NVGRE/VXLAN Enabled Network Switch

As Power over Ethernet technology boomed, PoE-enabled switches such as gigabit PoE switches were invented to add PoE to networks. Similarly, software-based technologies like LACP, SDN, NVGRE and VXLAN have also made their way into hardware devices. For example, an NVGRE/VXLAN-enabled data switch carries NVGRE/VXLAN capability to expand virtual network scale far beyond the VLAN limit. Such NVGRE- or VXLAN-enabled switches come in capacities ranging from 1G to 100G on the market.
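
Although such switches offload the encapsulation to hardware, the same VXLAN behavior can be tried in software on any modern Linux host with iproute2. The sketch below creates a software VXLAN tunnel endpoint; the interface names, VNI and multicast group are example values, not settings tied to any particular switch.

```python
import subprocess

def run(cmd: str) -> None:
    """Run one iproute2 command, raising if it fails (needs root)."""
    print(f"$ {cmd}")
    subprocess.run(cmd.split(), check=True)

# Example values: VNI 42 rides over physical NIC eth0, flooding
# broadcast/unknown traffic to multicast group 239.1.1.1 on the
# IANA-assigned VXLAN UDP port 4789.
run("ip link add vxlan42 type vxlan id 42 dev eth0 "
    "group 239.1.1.1 dstport 4789")
run("ip addr add 10.0.42.1/24 dev vxlan42")
run("ip link set vxlan42 up")
```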

FS recommends its S and N series high-end L2/L3 switches, say the S5850-48T4Q, a 48-port 10Gb Ethernet switch with 4 40G QSFP+ ports, and the N5850-48S6Q, a 48-port 10Gb SFP+ Top-of-Rack (ToR)/leaf switch with 6 40G QSFP+ ports. Both 10GbE switches support NVGRE and VXLAN, handling over 16M virtual networks.

The S5850-48T4Q high-performance copper Ethernet switch supports advanced features like VXLAN, IPv4/IPv6, MLAG and NVGRE, a best fit for enterprise, data center and metro ToR access requiring complete software with comprehensive protocols and applications. The N5850-48S6Q fiber switch supports advanced features including MLAG, VXLAN/NVGRE, sFlow, SNMP, MPLS, etc., ideal for the fully virtualized data center. Besides, the optional ONIE version of this model allows any ONIE-compatible network OS to be installed on the open switch, a natural fit for open networking installations.


Figure 1: FS provides various NVGRE/VXLAN-capable network switches ranging from 1G to 100G.

Conclusion

VXLAN and NVGRE are advanced tunneling protocols for network virtualization compared with VLAN. They expand virtual network capacity from 4096 up to 16 million segments and allow layer 2 packets to travel across IP fabrics, i.e. layer 3 networks. The NVGRE vs VXLAN differences lie in their backing tech giants, tunneling methods, header formats and load balancing compatibility. Adding NVGRE and VXLAN capability to a network switch overcomes VLAN's scalability limits in large cloud computing and enables an agile VM networking environment.