High-speed virtualization – Infiniband

Ethernet began its commercial life in the mid-1970s at speeds of around 1 Mbit/s, and it went on to change the fate of the world: when it reached 10 Mbit/s it helped ignite the Internet. The technology we have relied on for so long, and which has connected us to the world, has recently made the 40 Gbit/s and 100 Gbit/s speeds we keep hearing about both marketable and usable.
When the subject is high bandwidth, there is another technology, almost forgotten by most of us, that emerged in 1999 from the merger of two rival designs – Future I/O (Compaq, IBM, HP) and Next Generation I/O (Intel, Microsoft, and Sun) – and has been reinventing itself ever since: Infiniband. Most of us make the mistake of confusing it with Ethernet, but Infiniband is an I/O technology whereas Ethernet is a network technology. Connecting distributed, remotely located systems is a job for Ethernet; connecting systems that need high I/O within a data center, or the nodes of a high-performance cluster, is where Infiniband is used – the two are not the same thing. Infiniband has largely replaced I/O technologies such as Myrinet and Quadrics.
Infiniband is a point-to-point, serial I/O technology, and its bandwidth can be increased by aggregating lanes, much like a PCI Express bus.
Figure 1 – Infiniband Links
The speed of an Infiniband connection is expressed, much as in memory technology, by how much data it can transfer per unit of time. As the table below shows, Infiniband operates at six different data rates: SDR (Single Data Rate), DDR (Double Data Rate), QDR (Quad Data Rate), FDR-10 (Fourteen Data Rate, 10 Gbit/s per lane), FDR (Fourteen Data Rate), and EDR (Enhanced Data Rate).
        SDR         DDR         QDR         FDR-10         FDR             EDR
1X      2 Gbit/s    4 Gbit/s    8 Gbit/s    10.3 Gbit/s    13.64 Gbit/s    25 Gbit/s
4X      8 Gbit/s    16 Gbit/s   32 Gbit/s   41.2 Gbit/s    54.54 Gbit/s    100 Gbit/s
12X     24 Gbit/s   48 Gbit/s   96 Gbit/s   123.6 Gbit/s   163.64 Gbit/s   300 Gbit/s

Actual Infiniband speeds (Source: Wikipedia)
While SDR corresponds to 2.5 Gbit/s per lane, DDR to twice that, and QDR to four times as much, the Infiniband definitions renewed after 2010 set FDR at 14 Gbit/s and EDR at 25 Gbit/s per lane.
Likewise, SDR, DDR, and QDR use 8b/10b coding, in which every 10 bits on the wire carry 8 bits of data and 2 bits of control information, whereas FDR and EDR use 64b/66b coding, in which every 66 bits carry 64 bits of data. As a result, an SDR lane delivers an effective 2 Gbit/s despite its 2.5 Gbit/s link rate, and an FDR lane delivers an effective 13.64 Gbit/s despite its 14 Gbit/s link rate. The Infiniband road map also lists HDR (High Data Rate) and NDR (Next Data Rate), but their link rates remain to be seen.
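To make that arithmetic concrete, the short Python sketch below reproduces the table above from the per-lane signalling rate, the encoding efficiency, and the lane count. It is purely illustrative and does not use any Infiniband library; the precise signalling rates (14.0625 and 25.78125 Gbit/s) are the values behind the rounded 14 and 25 Gbit/s figures quoted above, and FDR-10 is quoted in the table at its 10.3 Gbit/s signalling rate rather than the ~10 Gbit/s payload rate the sketch reports.

    # Illustrative only: effective Infiniband data rates from
    # per-lane signalling rate x encoding efficiency x lane count.
    SIGNALLING_GBPS = {
        "SDR": 2.5, "DDR": 5.0, "QDR": 10.0,                   # 8b/10b encoded
        "FDR-10": 10.3125, "FDR": 14.0625, "EDR": 25.78125,    # 64b/66b encoded
    }
    ENCODING_EFFICIENCY = {
        "SDR": 8 / 10, "DDR": 8 / 10, "QDR": 8 / 10,
        "FDR-10": 64 / 66, "FDR": 64 / 66, "EDR": 64 / 66,
    }

    def effective_gbps(rate: str, lanes: int) -> float:
        """Usable data rate of an aggregated link (1X, 4X or 12X)."""
        return SIGNALLING_GBPS[rate] * ENCODING_EFFICIENCY[rate] * lanes

    for lanes in (1, 4, 12):
        row = "  ".join(f"{r}: {effective_gbps(r, lanes):7.2f}" for r in SIGNALLING_GBPS)
        print(f"{lanes:>2}X  {row}  (Gbit/s)")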
An Infiniband cable may be copper or fiber optic, and it can be terminated with one of several connector types (CX4, SFP+, QSFP, etc.).
Three basic components are required to use the Infiniband architecture: the Infiniband cards mounted in the servers (HCA: Host Channel Adapter); the cards mounted in the devices being served, such as storage systems (TCA: Target Channel Adapter); and the components that make up the Infiniband fabric itself (switches, cables, etc.).
One of the most important differences that distinguish Infiniband from other architectures is its ability to create virtual lanes, each with its own queue structure.
Figure 2 – Virtual Lanes
By means of these virtual lanes, more than one protocol can run over a single Infiniband cable at the same time and at different rates. For example, a 56 Gbit/s Infiniband link can carry two 8 Gbit/s FC connections to storage systems and four 10 Gbit/s Ethernet connections side by side. This allows significant consolidation of cards and cables, and it is a huge benefit to server virtualization technologies.
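As a quick sanity check on that example, the sketch below (plain bookkeeping, not any real Infiniband management API) maps the logical connections onto virtual lanes and confirms that they exactly fill the 56 Gbit/s link.

    # Illustrative only: check that a set of logical connections fits into
    # one physical Infiniband link when each is mapped to a virtual lane.
    LINK_CAPACITY_GBPS = 56  # e.g. a 4X FDR link

    # Two 8 Gbit/s FC connections to storage plus four 10 Gbit/s Ethernet
    # connections, as in the example above.
    virtual_lanes = [("FC storage", 8), ("FC storage", 8)] + [("Ethernet", 10)] * 4

    used = sum(rate for _, rate in virtual_lanes)
    headroom = LINK_CAPACITY_GBPS - used
    print(f"{used} of {LINK_CAPACITY_GBPS} Gbit/s allocated, {headroom} Gbit/s headroom")
    assert used <= LINK_CAPACITY_GBPS, "link is over-subscribed"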
Figure 3 – Consolidation with Infiniband
While VMware has supported Infiniband virtual lanes since ESX 3.5, such support is absent from Microsoft Hyper-V 2.0; whether the recently launched Hyper-V version supports them remains to be seen. Virtual lanes give the virtualization administrator significant flexibility.
Figure 4 – VMware and Infiniband
Conclusion
Infiniband is currently the fastest databus in the industry and will remain the fastest communication architecture for the near future. Thanks to its very low latency it has also become the standard interconnect for high-performance clusters, and its lesser-known virtual lane architecture makes it an unrivaled interconnect technology. We will address its closest rival, FCoE, in another paper.