HP 4X DDR InfiniBand Mezzanine HCA User Manual: Link Operation

Using InfiniBand for a scalable compute infrastructure
The operating distance of an InfiniBand interconnect depends on the cable type (copper or
fiber optic), the connector, and the signal rate. The most common connectors in InfiniBand
interconnects today are CX4 and quad small form-factor pluggable (QSFP), shown in Figure 6.
Figure 6. InfiniBand connectors (CX4 and QSFP)
Fiber optic cable with CX4 connectors generally offers the greatest distance capability. The adoption
of 4X DDR products is widespread, and deployment of QDR systems is expected to increase.

Link operation

Each link can be divided (multiplexed) into a set of virtual lanes, similar to highway lanes (Figure 7).
Each virtual lane provides flow control and allows a pair of devices to communicate autonomously.
Typical implementations have each link accommodating eight lanes¹; one lane is reserved for fabric
management and the other lanes for packet transport. The virtual lane design allows an InfiniBand
link to share bandwidth between various sources and targets simultaneously. For example, if a
10Gb/s link were divided into five virtual lanes, each lane would have a bandwidth of 2Gb/s. The
InfiniBand architecture defines a virtual lane mapping algorithm to ensure inter-operability between
end nodes that support different numbers of virtual lanes.
Figure 7. InfiniBand virtual lane operation (multiple sources sharing one link's virtual lanes to reach multiple targets)
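As an illustration of the arithmetic above, the following Python sketch divides a link's bandwidth evenly across its virtual lanes and clamps the lane count to the 2–16 range the IBTA specification permits. The function and constant names are illustrative assumptions, not part of the HP product or the IBTA specification, and the even split is a simplification: real hardware arbitrates lane bandwidth dynamically rather than partitioning it statically.

```python
# Hypothetical sketch of virtual-lane bandwidth division.
# The even-split model is an assumption for illustration only;
# actual InfiniBand hardware arbitrates lanes dynamically.

IBTA_MIN_VLS = 2   # minimum virtual lanes per link (IBTA spec)
IBTA_MAX_VLS = 16  # maximum virtual lanes per link (IBTA spec)

def clamp_virtual_lanes(requested: int) -> int:
    """Clamp a requested lane count to the IBTA-permitted range."""
    return max(IBTA_MIN_VLS, min(IBTA_MAX_VLS, requested))

def per_lane_bandwidth(link_gbps: float, num_lanes: int) -> float:
    """Bandwidth per virtual lane in Gb/s, assuming an even split."""
    return link_gbps / clamp_virtual_lanes(num_lanes)

# The manual's example: a 10Gb/s link divided into five virtual lanes.
print(per_lane_bandwidth(10.0, 5))  # 2.0 (Gb/s per lane)
```

The clamp mirrors why the architecture defines a virtual lane mapping algorithm: two end nodes may support different lane counts, so traffic addressed to a lane the peer lacks must be remapped onto a lane both sides support.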
¹ The IBTA specification defines a minimum of two and a maximum of 16 virtual lanes per link.


This manual is also suitable for: 489183-B21 (HP 4X DDR InfiniBand Switch)
