NVIDIA ConnectX-7 Adapter Cards User Manual

July 27, 2024
NVIDIA

Table of Contents

NVIDIA ConnectX-7 Adapter Cards

Product Information

Specifications

  • Product Name: NVIDIA ConnectX-7 Adapter Cards
  • Supported Interfaces: PCIe x16 Stand-up Adapter Cards, Socket Direct Ready Cards
  • Connectivity: Networking Interfaces, PPS IN/OUT Interface, Clock IN/OUT Interface, SMBus Interface
  • LEDs: Networking Ports LEDs Specifications

Product Usage Instructions

Hardware Installation

Safety Warnings

  • Before starting the installation process, make sure to read all safety warnings provided in the user manual to prevent any accidents.

Installation Procedure Overview

  • Refer to the user manual for a detailed overview of the installation procedure to ensure proper setup of the adapter cards.

System Requirements

  • Hardware Requirements: Ensure your system meets the hardware requirements specified in the manual.
  • Airflow Requirements: Maintain proper airflow to prevent overheating of the adapter cards.
  • Software Requirements: Check and install any necessary software required for the adapter cards to function correctly.

Identifying the Card in Your System

  • After installation, identify the card in your system using the guidelines provided in the manual.

FAQ

Frequently Asked Questions

  • Q: What should I do if my system does not recognize the NVIDIA ConnectX-7 Adapter Cards after installation?
    • A: If your system does not recognize the adapter cards, please check the connections, ensure proper installation, and verify that all software requirements are met. If issues persist, contact customer support for further assistance.


Notes:
1. The MCX75310AAS-NEAT card supports InfiniBand and Ethernet protocols from hardware version AA and higher.
2. The MCX75310AAS-HEAT card supports InfiniBand and Ethernet protocols from hardware version A7 and higher.

ConnectX-7 for Telecommunication Applications

NVIDIA SKU: 900-9X7AH-004N-CT0
Legacy OPN: MCX713114TC-GEAT
Form Factor: PCIe Full Height, Half Length; 4.53 in. x 6.6 in. (115.15 mm x 167.65 mm)
Data Transmission Rate: Ethernet: 50/25GbE
No. of Ports and Type: Quad-port SFP56
PCIe Support: PCIe x16 Gen 4.0 @ SERDES 16GT/s
Timing Capabilities: PPS In/Out, SMAs, SyncE
Bracket Type: Tall Bracket
Lifecycle: Engineering Samples

ConnectX-7 Socket Direct Ready Cards for Dual-Slot Servers


NVIDIA SKU: 900-9X7AH-0039-STZ
Legacy OPN: MCX715105AS-WEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: InfiniBand: NDR 400Gb/s; Ethernet: 400GbE (Default Speed)
No. of Ports and Type: Single-port QSFP112
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Socket Direct Ready PCIe Extension Option: Optional: PCIe x16 Gen 4.0 @ SERDES 16GT/s
Bracket Type: Tall Bracket
Lifecycle: Engineering Samples

NVIDIA SKU: 900-9X721-003NDT0
Legacy OPN: MCX75510AAS-NEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: InfiniBand: NDR 400Gb/s
No. of Ports and Type: Single-port OSFP
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Socket Direct Ready PCIe Extension Option: Optional: PCIe x16 Gen 4.0 @ SERDES 16GT/s
Bracket Type: Tall Bracket
Lifecycle: Mass Production

NVIDIA SKU: 900-9X721-003NDT1
Legacy OPN: MCX75510AAS-HEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: InfiniBand: NDR200 200Gb/s
No. of Ports and Type: Single-port OSFP
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Socket Direct Ready PCIe Extension Option: Optional: PCIe x16 Gen 4.0 @ SERDES 16GT/s
Bracket Type: Tall Bracket
Lifecycle: Mass Production

NVIDIA SKU: 900-9X7AH-0078-DTZ
Legacy OPN: MCX755106AS-HEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: InfiniBand: NDR200 200Gb/s; Ethernet: 200GbE (Default Speed)
No. of Ports and Type: Dual-port QSFP112
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Socket Direct Ready PCIe Extension Option: Optional: PCIe x16 Gen 4.0 @ SERDES 16GT/s
Bracket Type: Tall Bracket
Lifecycle: Mass Production

NVIDIA SKU: 900-9X7AH-0079-DTZ
Legacy OPN: MCX755106AC-HEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: InfiniBand: NDR200 200Gb/s; Ethernet: 200GbE (Default Speed)
No. of Ports and Type: Dual-port QSFP112
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Socket Direct Ready PCIe Extension Option: Optional: PCIe x16 Gen 4.0 @ SERDES 16GT/s
Bracket Type: Tall Bracket
Lifecycle: Mass Production

Legacy (EOL) Ordering Part Numbers


NVIDIA SKU: 900-9X7AH-0088-ST0
Legacy OPN: MCX713106AC-VEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: Ethernet: 200GbE
No. of Ports and Type: Dual-port QSFP112
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Bracket Type: Tall Bracket
Lifecycle: End of Life

NVIDIA SKU: 900-9X7AH-0078-ST0
Legacy OPN: MCX713106AS-VEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: Ethernet: 200GbE
No. of Ports and Type: Dual-port QSFP112
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Bracket Type: Tall Bracket
Lifecycle: End of Life

NVIDIA SKU: 900-9X7AH-0039-ST1
Legacy OPN: MCX713105AS-WEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: Ethernet: 400GbE
No. of Ports and Type: Single-port QSFP112
PCIe Support: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s
Bracket Type: Tall Bracket
Lifecycle: End of Life

NVIDIA SKU: 900-9X7AH-004N-GT0
Legacy OPN: MCX713114GC-GEAT
Form Factor: PCIe Full Height, Half Length; 4.53 in. x 6.6 in. (115.15 mm x 167.65 mm)
Data Transmission Rate: Ethernet: 50/25GbE
No. of Ports and Type: Quad-port SFP56
PCIe Support: PCIe x16 Gen 4.0 @ SERDES 16GT/s
Timing Capabilities: Enhanced SyncE & PTP Grand Master support and GNSS/PPS Out
Bracket Type: Tall Bracket
Lifecycle: End of Life

For more information, please refer to PCIe Auxiliary Card Kit.
Technical Support
Customers who purchased NVIDIA products directly from NVIDIA are invited to contact us through the following methods:
· URL: https://www.nvidia.com
· E-mail: enterprisesupport@nvidia.com
Customers who purchased NVIDIA Global Support Services, please see your contract for details regarding Technical Support. Customers who purchased NVIDIA products through an NVIDIA-approved reseller should first seek assistance through their reseller.
Related Documentation


MLNX_OFED for Linux User Manual and Release Notes

User Manual describing OFED features, performance, InfiniBand diagnostics, tools content and configuration. See MLNX_OFED for Linux Documentation.

WinOF-2 for Windows User Manual and Release Notes

User Manual describing WinOF-2 features, performance, Ethernet diagnostic, tools content and configuration. See WinOF-2 for Windows Documentation.

NVIDIA VMware for Ethernet User Manual

User Manual and release notes describing the various components of the NVIDIA ConnectX® NATIVE ESXi stack. See VMware® ESXi Drivers Documentation.

NVIDIA Firmware Utility (mlxup) User Manual and Release Notes

NVIDIA firmware update and query utility used to update the firmware. Refer to Firmware Utility (mlxup) Documentation.

NVIDIA Firmware Tools (MFT) User Manual

User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual.

InfiniBand Architecture Specification Release 1.2.1, Vol 2 – Release 1.4, and Vol 2 – Release 1.5

InfiniBand Specifications

IEEE Std 802.3 Specification

IEEE Ethernet Specifications

PCI Express 5.0 Specifications

Industry Standard PCI Express Base and Card Electromechanical Specifications. Refer to PCI-SIG Specifications.

LinkX Interconnect Solutions

LinkX cables and transceivers are designed to maximize the performance of High-
Performance Computing networks, requiring high-bandwidth, low-latency connections between compute nodes and switch nodes. NVIDIA offers one of the industry's most complete lines of 10, 25, 40, 50, 100, 200, and 400GbE Ethernet and EDR, HDR, and NDR InfiniBand products, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs), and transceivers in a wide range of lengths from 0.5m to 10km.
In addition to meeting Ethernet and IBTA standards, NVIDIA tests every product in an end-to-end environment ensuring a Bit Error Rate of less than 1E-15. Read more at LinkX Cables and Transceivers.

NVIDIA ConnectX-7 Electrical and Thermal Specifications

You can access the “NVIDIA ConnectX-7 Electrical and Thermal Specifications” document either by logging into NVOnline or by contacting your NVIDIA representative.

When discussing memory sizes, MB and MBytes are used in this document to mean size in MegaBytes. The use of Mb or Mbits (small b) indicates size in MegaBits. IB is used in this document to mean InfiniBand. In this document, PCIe is used to mean PCI Express.
Revision History
A list of the changes made to this document is provided in Document Revision History.

Introduction

1.1 Product Overview
The NVIDIA ConnectX-7 family of network adapters supports both the InfiniBand and Ethernet protocols. It enables a wide range of smart, scalable, and feature-rich networking solutions that address traditional enterprise needs up to the world’s most demanding AI, scientific computing, and hyperscale cloud data center workloads.
ConnectX-7 network adapters are offered in two form factors and various flavors: stand-up PCIe and Open Compute Project (OCP) Spec 3.0 cards. This user manual covers the PCIe stand-up cards; for the OCP 3.0 cards, please refer to the NVIDIA ConnectX-7 Cards for OCP Spec 3.0 User Manual.
Make sure to use a PCIe slot capable of supplying the required power and airflow to the
ConnectX-7, as stated in the Specifications chapter.

1.1.1 PCIe x16 Stand-up Adapter Cards
ConnectX-7 HCAs are available in various configurations: Single-port 400Gb/s or 200Gb/s with octal small form-factor pluggable (OSFP) connectors, or Dual-port 100 or 200Gb/s with quad small form-factor pluggable (QSFP112) connectors, on a PCIe stand-up half-height, half-length (HHHL) form factor, with options for NVIDIA Socket Direct. Also available is a Quad-port 50/25GbE configuration with small form-factor pluggable (SFP56) connectors on a PCIe stand-up full-height, half-length (FHHL) form factor, with timing capabilities.
ConnectX-7 cards can either support both InfiniBand and Ethernet, or Ethernet only, as described in the below table. The inclusive list of OPNs is available here.
ConnectX-7 adapter cards with OSFP form factor only support RHS (Riding Heat Sink) cage.

Supported Protocols: Ethernet Only Cards
· Dual-port QSFP112: 100GbE
· Quad-port SFP56: 50/25GbE

Supported Protocols: InfiniBand and Ethernet Cards
· Single-port OSFP: NDR 400Gb/s and 400GbE; NDR200 200Gb/s and 200GbE

1.1.2 Socket Direct Ready Cards
The Socket Direct technology offers improved performance to dual-socket servers by enabling direct access from each CPU in a dual-socket server to the network through its dedicated PCIe interface.
NVIDIA offers ConnectX-7 Socket Direct adapter cards, which enable 400Gb/s or 200Gb/s connectivity also for servers with PCIe Gen 4.0 capability. The adapter's 32-lane PCIe bus is split into two 16-lane buses, with one bus accessible through a PCIe x16 edge connector and the other bus through an x16 Auxiliary PCIe Connection card. The two cards should be installed into two PCIe x16 slots and connected using two Cabline CA-II Plus harnesses.


To use this card in the Socket-Direct configuration, please order the additional PCIe Auxiliary Card kit according to the desired harness length. Cards that support socket direct can function as separate x16 PCIe cards.

Socket Direct cards can support both InfiniBand and Ethernet, or InfiniBand only, as described below.

Supported Protocols: InfiniBand Only
· Port Type: Single-port OSFP
· Supported Speed: NDR 400Gb/s; NDR200 200Gb/s

Supported Protocols: InfiniBand and Ethernet
· Port Type: Dual-port QSFP112; Single-port QSFP112
· Supported Speed: NDR200 200Gb/s and 200GbE; NDR 400Gb/s and 400GbE

For more information on the passive PCIe Auxiliary kit, please refer to PCIe Auxiliary Card Kit.

1.2 System Requirements

PCI Express slot:
· In PCIe x16 configuration: PCIe Gen 5.0 (32GT/s) through x16 edge connector.
· In Socket Direct configuration (2x PCIe x16): PCIe Gen 4.0/5.0 SERDES @ 16/32GT/s through the edge connector; PCIe Gen 4.0 SERDES @ 16GT/s through the PCIe Auxiliary Connection Card.

System Power Supply: Refer to Specifications.

Operating System:
· In-box drivers for major operating systems: Linux (RHEL, Ubuntu), Windows
· Virtualization and containers: VMware ESXi (SR-IOV), Kubernetes
· OpenFabrics Enterprise Distribution (OFED)
· OpenFabrics Windows Distribution (WinOF-2)

Connectivity:
· Interoperable with 1/10/25/40/50/100/200/400 Gb/s Ethernet switches and SDR/DDR/EDR/HDR100/HDR/NDR200/NDR InfiniBand switches
· Passive copper cable with ESD protection
· Powered connectors for optical and active cable support

1.3 Package Contents

Cards (Qty 1): ConnectX-7 adapter card
Accessories (Qty 1): Adapter card short bracket
Accessories (Qty 1): Adapter card tall bracket (shipped assembled on the card)


1.4 Features and Benefits
Make sure to use a PCIe slot capable of supplying the required power and airflow to the
ConnectX-7 cards as stated in the Specifications chapter.

This section describes hardware features and capabilities. Please refer to the relevant
driver and firmware release notes for feature availability.

PCI Express (PCIe)

According to the OPN you have purchased, the card uses the following PCI Express interfaces:
· PCIe x16 configuration: PCIe Gen 4.0/5.0 (16GT/s / 32GT/s) through the x16 edge connector.
· 2x PCIe x16 configuration (Socket Direct): PCIe Gen 4.0/5.0 (SERDES @ 16GT/s / 32GT/s) through the x16 edge connector, and PCIe Gen 4.0 SERDES @ 16GT/s through the PCIe Auxiliary Connection Card.

InfiniBand Architecture Specification v1.5 compliant

ConnectX-7 delivers low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications. ConnectX-7 is InfiniBand Architecture Specification v1.5 compliant.

InfiniBand Network Protocols and Rates:

NDR/NDR200 (IBTA Vol2 1.5): 4x port (4 lanes) 425 Gb/s; 2x ports (2 lanes) 212.5 Gb/s; PAM4 256b/257b encoding and RS-FEC
HDR/HDR100 (IBTA Vol2 1.4): 4x port (4 lanes) 212.5 Gb/s; 2x ports (2 lanes) 106.25 Gb/s; PAM4 256b/257b encoding and RS-FEC
EDR (IBTA Vol2 1.3.1): 4x port (4 lanes) 103.125 Gb/s; 2x ports (2 lanes) 51.5625 Gb/s; NRZ 64b/66b encoding
FDR (IBTA Vol2 1.2): 4x port (4 lanes) 56.25 Gb/s; 2x ports (2 lanes) N/A; NRZ 64b/66b encoding


Up to 400 Gigabit Ethernet

ConnectX-7 adapter cards comply with the following IEEE 802.3 standards: 400GbE / 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE

IEEE 802.3ck: 100/200/400 Gigabit Ethernet (includes ETC enhancement)
IEEE 802.3cd, IEEE 802.3bs, IEEE 802.3cm, IEEE 802.3cn, IEEE 802.3cu: 50/100/200/400 Gigabit Ethernet (includes ETC enhancement)
IEEE 802.3bj, IEEE 802.3bm: 100 Gigabit Ethernet
IEEE 802.3by, Ethernet Technology Consortium: 25/50 Gigabit Ethernet
IEEE 802.3ba: 40 Gigabit Ethernet
IEEE 802.3ae: 10 Gigabit Ethernet
IEEE 802.3cb: 2.5/5 Gigabit Ethernet (for 2.5G, supports only 2.5 x1000BASE-X)
IEEE 802.3ap: Based on auto-negotiation and KR startup
IEEE 802.3ad, IEEE 802.1AX: Link Aggregation
IEEE 802.1Q, IEEE 802.1P: VLAN tags and priority
IEEE 802.1Qau (QCN): Congestion Notification
IEEE 802.1Qaz (ETS)
IEEE 802.1Qbb (PFC)
IEEE 802.1Qbg
IEEE 1588v2
IEEE 802.1AE (MACSec)
Jumbo frame support (9.6KB)

Memory Components

· SPI – includes a 256Mbit SPI Quad Flash device.
· FRU EEPROM – stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50 and is accessible through the PCIe SMBus. (Note: Address 0x58 is reserved.)

Overlay Networks

In order to better scale their networks, datacenter operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-7 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.

Quality of Service (QoS)

Support for port-based Quality of Service enabling various application requirements for latency and SLA.


Hardware-based I/O Virtualization

ConnectX-7 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Storage Acceleration

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:
· RDMA for high-performance storage access
· NVMe over Fabric offloads for the target machine
· NVMe over TCP acceleration

SR-IOV

ConnectX-7 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.

High-Performance Accelerations

· Collective operations offloads
· Vector collective operations offloads
· MPI tag matching
· MPI_Alltoall offloads
· Rendezvous protocol offload

RDMA Message Rate

330-370 million messages per second.

Secure Boot

The secure boot process assures booting of authentic firmware/software that is intended to run on ConnectX-7. This is achieved using cryptographic primitives based on asymmetric cryptography. ConnectX-7 supports several cryptographic functions in its HW Root-of-Trust (RoT), which has its key stored in on-chip fuses.

Secure Firmware Update

The Secure Firmware Update feature enables a device to verify digital signatures of new firmware binaries to ensure that only officially approved versions can be installed from the host, the network, or a Board Management Controller (BMC). The firmware of devices with "secure firmware update" functionality (secure FW) restricts access to specific commands and registers that can be used to modify the firmware binary image on the flash, as well as commands that can jeopardize security in general.
For further information, refer to the MFT User Manual.

Advanced storage capabilities

Block-level encryption and checksum offloads.

Host Management

ConnectX-7 technology maintains support for host manageability through a BMC. The ConnectX-7 PCIe stand-up adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols, as if it were a standard NVIDIA PCIe stand-up adapter card. For configuring the adapter for the specific manageability solution in use by the server, please contact NVIDIA Support.
· Protocols: PLDM, NCSI
· Transport layer: RBT, MCTP over SMBus, and MCTP over PCIe
· Physical layer: SMBus 2.0 / I2C interface for device control and configuration, PCIe
· PLDM for Monitor and Control DSP0248
· PLDM for Firmware Update DSP0267
· IEEE 1149.6
· Secured FW update
· FW Recovery
· NIC reset
· Monitoring and control
· Network port settings
· Boot setting


Accurate timing

NVIDIA offers a full IEEE 1588v2 PTP software solution, as well as time-sensitive related features called "5T". NVIDIA PTP and 5T software solutions are designed to meet the most demanding PTP profiles. ConnectX-7 incorporates an integrated Hardware Clock (PHC) that allows ConnectX-7 to achieve sub-20 microsecond accuracy and also offers many timing-related functions such as time-triggered scheduling or time-based SND accelerations (time-based ASAP²). Furthermore, 5T technology enables the software application to transmit fronthaul (ORAN)-compatible traffic at high bandwidth. The PTP part supports the subordinate clock, master clock, and boundary clock. The ConnectX-7 PTP solution allows you to run any PTP stack on your host. With respect to testing and measurements, selected NVIDIA adapters allow you to use the PPS-Out signal from the onboard SMA connector; ConnectX-7 also allows measuring PTP at scale with a PPS-In signal. The PTP HW clock on the network adapter is sampled on each PPS-In signal, and the timestamp is sent to the SW.

RDMA and RDMA over Converged Ethernet (RoCE)

ConnectX-7, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-7 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

NVIDIA PeerDirect™

PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-7 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

CPU Offload

Adapter functionality enables reduced CPU overhead, leaving more CPU available for computation tasks.
· Flexible match-action flow tables
· Open vSwitch (OVS) offload using ASAP²
· Tunneling encapsulation/decapsulation

PPS In/Out SMAs

Applies to MCX713114TC-GEAT only: NVIDIA offers a full IEEE 1588v2 PTP software solution, as well as time-sensitive related features called "5T". NVIDIA PTP and 5T software solutions are designed to meet the most demanding PTP profiles. The card incorporates an integrated Hardware Clock (PHC) that allows ConnectX-7 to achieve sub-20 microsecond accuracy and also offers many timing-related functions such as time-triggered scheduling or time-based SND accelerations (time-based ASAP²). Furthermore, 5T technology enables the software application to transmit fronthaul (ORAN)-compatible traffic at high bandwidth. The PTP part supports the subordinate clock, master clock, and boundary clock. The ConnectX-7 PTP solution allows you to run any PTP stack on your host. With respect to testing and measurements, selected NVIDIA adapters allow you to use the PPS-Out signal from the onboard SMA connector; ConnectX-7 also allows measuring PTP at scale with a PPS-In signal. The PTP HW clock on the network adapter is sampled on each PPS-In signal, and the timestamp is sent to the SW. The SyncE cards also include improved holdover to meet ITU-T G.8273.2 Class C.


Supported Interfaces

This section describes the ConnectX-7 supported interfaces. Each numbered interface that is referenced in the figures is described in the following table with a link to detailed information.
The below figures are for illustration purposes only and might not reflect the current
revision of the adapter card.

2.1 ConnectX-7 Layout and Interface Information

Single-Port QSFP112 Adapter Cards OPNs: MCX715105AS-WEAT

Dual-Port QSFP112 Adapter Cards OPNs: MCX755106AS-HEAT, MCX755106AC-HEAT,
MCX713106AC-CEAT, MCX713106AS-CEAT, MCX713106AC-VEAT,
MCX713106AS-VEAT

Single-Port OSFP Adapter Cards OPNs: MCX75310AAS-NEAT, MCX75310AAC-NEAT,
MCX75310AAS-HEAT, MCX75510AAS-NEAT, MCX75510AAS-HEAT

Quad-Port SFP56 Cards OPNs: MCX713104AC-ADAT, MCX713104AS-ADAT

Quad-port SFP56 Cards with PPS IN/OUT OPN: MCX713114TC-GEAT


1. ConnectX-7 IC: ConnectX-7 Integrated Circuit.
2. PCI Express Interface: PCIe Gen 4.0/5.0 through the x16 edge connector.
3. Networking Interfaces: Network traffic is transmitted through the adapter card networking connectors. The networking connectors allow for the use of modules, and optical and passive cable interconnect solutions.
4. Networking Ports LEDs: Two I/O LEDs per port to indicate speed and link status.
5. Cabline CA-II Plus Connectors: In Socket Direct ready cards, two Cabline CA-II Plus connectors are populated to allow connectivity to an additional PCIe x16 Auxiliary card. Applicable to OPNs: MCX715105AS-WEAT, MCX75510AAS-NEAT, MCX75510AAS-HEAT, MCX755106AS-HEAT, and MCX755106AC-HEAT.
6. PPS IN/OUT Interface: Allows PPS IN/OUT. Applies to OPN MCX713114TC-GEAT only.


2.2 Interfaces Detailed Description
2.2.1 ConnectX-7 IC
The ConnectX-7 family of adapter IC devices delivers InfiniBand and Ethernet connectivity paired with best-in-class hardware capabilities that accelerate and secure cloud and data-center workloads.
2.2.2 PCI Express Interface
ConnectX-7 adapter cards support PCI Express Gen 5.0 (4.0 and 3.0 compatible) through x16 edge connector. The following lists PCIe interface features:
· PCIe Gen 5.0 compliant, 4.0, 3.0, 2.0 and 1.1 compatible
· 2.5, 5.0, 8.0, 16.0 and 32GT/s link rate x16/x32 (Socket Direct configuration)
· Support for PCIe bifurcation: auto-negotiates to x32, x16, x8, x4, x2, or x1
· NVIDIA Multi-Host™ supports connection of up to 4x hosts
· Transaction layer packet (TLP) processing hints (TPH)
· PCIe switch Downstream Port Containment (DPC)
· Advanced error reporting (AER)
· Access Control Service (ACS) for peer-to-peer secure communication
· Process Address Space ID (PASID)
· Address translation services (ATS)
· Support for MSI/MSI-X mechanisms
· Support for SR-IOV
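As a quick reference (a minimal sketch; the PCI address a3:00.0 is only an example and will differ on your system), the negotiated PCIe link speed and width can be confirmed with lspci:

# LnkCap shows the maximum supported link; LnkSta shows the currently negotiated speed and width.
sudo lspci -s a3:00.0 -vv | grep -E 'LnkCap:|LnkSta:'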

2.2.3 Networking Interfaces

The adapter card includes special circuits to protect from ESD shocks to the card/server
when plugging copper cables.

Ethernet: The network ports comply with the IEEE 802.3 Ethernet standards listed in Features and Benefits. Ethernet traffic is transmitted through the networking connectors on the adapter card.

InfiniBand: The network ports are compliant with the InfiniBand Architecture Specification, Release 1.5. InfiniBand traffic is transmitted through the cards' networking connectors.

2.2.4 Networking Ports LEDs Specifications
For the networking ports LEDs description, follow the below table depending on the ConnectX-7 SKU you have purchased.


SKUs 900-9X7AO-0003-ST0 and 900-9X7AO-00C3-STZ: Scheme 1: One Bi-Color LED
All other cards: Scheme 2: Two LEDs

2.2.4.1 Scheme 1: One Bi-Color LED

There is one bi-color (Yellow and Green) I/O LED per port to indicate port speed and link status.

Beacon command for locating the adapter card: 1Hz blinking Yellow.
Error: 4Hz blinking Yellow. Indicates an error with the link; the error can be one of the following:
· I2C: I2C access to the networking ports fails. Blinks until the error is fixed.
· Over-current: Over-current condition of the networking ports. Blinks until the error is fixed.
Physical Activity: The Green LED blinks.
Link Up: The Green LED is solid.
Physical Up (IB Only): The Yellow LED is solid.

2.2.4.2 Scheme 2: Two LEDs
There are two I/O LEDs per port to indicate port speed and link status.
· LED1 is a bi-color LED (Yellow and Green)
· LED2 is a single-color LED (Green)


Beacon command for locating the adapter card: Bi-Color LED (Yellow/Green): 1Hz blinking Yellow; Single Color LED (Green): OFF.
Error: Bi-Color LED (Yellow/Green): 4Hz blinking Yellow; Single Color LED (Green): ON. Indicates an error with the link; the error can be one of the following:
· I2C: I2C access to the networking ports fails. Blinks until the error is fixed.
· Over-current: Over-current condition of the networking ports. Blinks until the error is fixed.
Physical Activity: Bi-Color LED (Yellow/Green): The Green LED blinks; Single Color LED (Green): Blinking.
Link Up: Bi-Color LED (Yellow/Green): In full port speed, the Green LED is solid; in less than full port speed, the Yellow LED is solid. Single Color LED (Green): ON.

2.2.5 Cabline CA-II Plus Connectors
Socket-Direct is currently not supported.
Applies to OPNs: MCX755106AC-HEAT, MCX755106AS-HEAT, MCX75510AAS-HEAT,
MCX75510AAS-NEAT.
The Cabline CA-II connectors on the Socket-Direct ready cards enable connectivity to an additional Auxiliary PCIe x16 Connection card through the Cabline CA-II harnesses.
2.2.6 PPS IN/OUT Interface
Applicable to MCX713114TC-GEAT only.
Pulse Per Second (PPS) is an out-of-band signal used in synchronized systems. 5T technology supports PPS-In and PPS-Out on selected devices. Selected ConnectX-7 adapter cards incorporate an integrated Hardware Clock (PHC) that allows the adapter to achieve sub-20 microsecond accuracy and also offers many timing-related functions such as time-triggered scheduling or time-based SND accelerations (time-based ASAP²). Furthermore, 5T technology enables the software application to transmit fronthaul (ORAN)-compatible traffic at high bandwidth. The PTP part supports the subordinate clock, master clock, and boundary clock. The PTP solution allows you to run any PTP stack on your host. With respect to testing and measurements, selected ConnectX-7 adapters allow you to use the PPS-Out signal from the onboard MMCX RA connector. The adapter also allows measuring PTP at scale with the PPS-In signal. The PTP HW clock on the network adapter is sampled on each PPS-In signal, and the timestamp is sent to the SW. After the adapter card installation, use two standard SMA plug 50 Ohm cables to connect to the SMA connectors on the board. The cables are not included in the package. See the below example:
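As a rough illustration of the kind of PTP stack this section refers to, the linuxptp tools can be run on the adapter's interface (linuxptp is one common choice, and the interface name ens1f0np0 is an assumption for this sketch):

# Start a PTP client on the NIC interface using its hardware clock (PHC); -m prints messages to stdout.
sudo ptp4l -i ens1f0np0 -m
# Read the PHC time directly to confirm the hardware clock is accessible.
sudo phc_ctl ens1f0np0 get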
2.2.7 Clock IN/OUT Interface
Applicable to MCX713114TC-GEAT only.
After the adapter card installation, use two standard MMCX 50 Ohm right-angled plugs to connect to the MMCX connectors on the board. The cables are not included in the package. See the below example:
2.2.8 SMBus Interface
ConnectX-7 technology maintains support for manageability through a BMC. The ConnectX-7 PCIe stand-up adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols, as if it were a standard NVIDIA PCIe stand-up adapter. For configuring the adapter for the specific manageability solution in use by the server, please contact NVIDIA Support.

2.2.9 Voltage Regulators
The voltage regulator power is derived from the PCI Express edge connector 12V supply pins. These voltage supply pins feed on-board regulators that provide the necessary power to the various components on the card.

Hardware Installation

Installation and initialization of ConnectX-7 adapter cards require attention to the mechanical attributes, power specification, and precautions for electronic equipment.
3.1 Safety Warnings
Safety warnings are provided here in the English language. For safety warnings in other
languages, refer to the Adapter Installation Safety Instructions.
Please observe all safety warnings to avoid injury and prevent damage to system components. Note that not all warnings are relevant to all models.
General Installation Instructions: Read all installation instructions before connecting the equipment to the power source.

Jewelry Removal Warning: Before you install or remove equipment that is connected to power lines, remove jewelry such as bracelets, necklaces, rings, watches, and so on. Metal objects heat up when connected to power and ground and can melt down, causing serious burns and/or welding the metal object to the terminals.

Over-temperature: This equipment should not be operated in an area with an ambient temperature exceeding the maximum recommended: 55°C (131°F). An airflow of 200LFM at this maximum ambient temperature is required for HCA cards and NICs. To guarantee proper airflow, allow at least 8cm (3 inches) of clearance around the ventilation openings.

During Lightning – Electrical Hazard: During periods of lightning activity, do not work on the equipment or connect or disconnect cables.

Copper Cable Connecting/Disconnecting: Some copper cables are heavy and not flexible; as such, they should be carefully attached to or detached from the connectors. Refer to the cable manufacturer for special warnings and instructions.

Equipment Installation: This equipment should be installed, replaced, or serviced only by trained and qualified personnel.

Equipment Disposal: The disposal of this equipment should be in accordance with all national laws and regulations.

Local and National Electrical Codes: This equipment should be installed in compliance with local and national electrical codes.

Hazardous Radiation Exposure:
· Caution – Use of controls or adjustments or performance of procedures other than those specified herein may result in hazardous radiation exposure. For products with optical ports.
· CLASS 1 LASER PRODUCT and reference to the most recent laser standards: IEC 60825-1:1993 + A1:1997 + A2:2001 and EN 60825-1:1994 + A1:1996 + A2:2001

3.2 Installation Procedure Overview

The installation procedure of ConnectX-7 adapter cards involves the following steps:

Step 1: Check the system's hardware and software requirements. (See System Requirements.)
Step 2: Pay attention to the airflow consideration within the host system. (See Airflow Requirements.)
Step 3: Follow the safety precautions. (See Safety Precautions.)
Step 4: Unpack the package. (See Unpack the package.)
Step 5: Follow the pre-installation checklist. (See Pre-Installation Checklist.)
Step 6: (Optional) Replace the full-height mounting bracket with the supplied short bracket. (See Bracket Replacement Instructions.)
Step 7: Install the ConnectX-7 PCIe x16 adapter card in the system (see ConnectX-7 PCIe x16 Adapter Cards Installation Instructions), or install the ConnectX-7 2x PCIe x16 Socket Direct adapter card in the system (see ConnectX-7 Socket Direct (2x PCIe x16) Installation Instructions).
Step 8: Connect cables or modules to the card. (See Cables and Modules.)
Step 9: Identify ConnectX-7 in the system. (See Identifying Your Card.)

3.3 System Requirements
3.3.1 Hardware Requirements
Unless otherwise specified, NVIDIA products are designed to work in an environmentally
controlled data center with low levels of gaseous and dust (particulate) contamination. The operating environment should meet severity level G1 as per ISA 71.04 for gaseous contamination and ISO 14644-1 class 8 for cleanliness level.

For proper operation and performance, please make sure to use a PCIe slot with a
corresponding bus width that can supply sufficient power to your card. Refer to the Specifications section of the manual for more power requirements.

Please make sure to install the ConnectX-7 cards in a PCIe slot that is capable of supplying
the required power as stated in Specifications.

PCIe x16 configuration: A system with a PCI Express x16 slot is required for installing the card.

Socket Direct 2x PCIe x16 configuration (dual-slot server): A system with two PCIe x16 slots is required for installing the cards.

3.3.2 Airflow Requirements

ConnectX-7 adapter cards are offered with two airflow patterns: from the heatsink to the network ports, and vice versa, as shown below.

Please refer to the Specifications section for airflow numbers for each specific card model.

Airflow from the heatsink to the network ports

Airflow from the network ports to the heatsink

All cards in the system should be planned with the same airflow direction.
3.3.3 Software Requirements
· See the System Requirements section under the Introduction section.
· Software stacks: NVIDIA® OpenFabrics Enterprise Distribution for Linux (MLNX_OFED), WinOF-2 for Windows, and VMware. See the Driver Installation section.
3.4 Safety Precautions
The adapter is being installed in a system that operates with voltages that can be lethal. Before opening the case of the system, observe the following precautions to avoid injury and prevent damage to system components.
· Remove any metallic objects from your hands and wrists.
· Make sure to use only insulated tools.
· Verify that the system is powered off and is unplugged.
· It is strongly recommended to use an ESD strap or other antistatic devices.
3.5 Pre-Installation Checklist
· Unpack the ConnectX-7 card. Check against the package contents list that all the parts have been sent. Check the parts for visible damage that may have occurred during shipping. Please note that the card must be placed on an antistatic surface. For package contents, please refer to Package Contents.
Please note that if the card is removed hastily from the antistatic bag, the plastic
ziplock may harm the EMI fingers on the networking connector. Carefully remove the card from the antistatic bag to avoid damaging the EMI fingers.
· Shut down your system if active. Turn off the power to the system, and disconnect the power cord. Refer to the system documentation for instructions. Before you install the ConnectX-7 card, make sure that the system is disconnected from power.
· (Optional) Check the mounting bracket on the ConnectX-7 or PCIe Auxiliary Connection Card; If required for your system, replace the full-height mounting bracket that is shipped mounted on the card with the supplied low-profile bracket. Refer to Bracket Replacement Instructions.
3.6 Bracket Replacement Instructions
The ConnectX-7 card and PCIe Auxiliary Connection card are usually shipped with an assembled high-profile bracket. If this form factor is suitable for your requirements, you can skip the remainder of this section and move to Installation Instructions. If you need to replace the high-profile bracket with the short bracket that is included in the shipping box, please follow the instructions in this section.
During the bracket replacement procedure, do not pull, bend, or damage the EMI fingers
cage. It is recommended to limit bracket replacements to three times.
To replace the bracket you will need the following parts:
· The new bracket of the proper height
· The 2 screws saved from the removal of the bracket
Removing the Existing Bracket
1. Using a torque driver, remove the two screws holding the bracket in place.
2. Separate the bracket from the ConnectX-7 card.
Be careful not to put stress on the LEDs on the adapter card.
3. Save the two screws.
Installing the New Bracket
1. Place the bracket onto the card until the screw holes line up.
Do not force the bracket onto the adapter card.
2. Screw on the bracket using the screws saved from the bracket removal procedure above.

Use a torque driver to apply up to 2 lbs-in torque on the screws.

3.7 Installation Instructions

This section provides detailed instructions on how to install your adapter card in a system.

Choose the installation instructions according to the ConnectX-7 configuration you would like to use.

All ConnectX-7 cards: ConnectX-7 (PCIe x16) Adapter Card installation instructions
MCX755106AC-HEAT, MCX755106AS-HEAT, MCX75510AAS-HEAT, MCX75510AAS-NEAT: ConnectX-7 Socket Direct (2x PCIe x16) Adapter Card installation instructions

3.7.1 Cables and Modules
Cable Installation
Before connecting a cable to the adapter card, ensure that the bracket is fastened to the server chassis using a screw to prevent movement or unplugging of the card when the cable is inserted or extracted.
1. All cables can be inserted or removed with the unit powered on.
2. To insert a cable, press the connector into the port receptacle until the connector is firmly seated.
   a. Support the weight of the cable before connecting the cable to the adapter card. Do this by using a cable holder or tying the cable to the rack.
   b. Determine the correct orientation of the connector to the card before inserting the connector. Do not try to insert the connector upside down; this may damage the adapter card.
   c. Insert the connector into the adapter card. Be careful to insert the connector straight into the cage. Do not apply any torque, up or down, to the connector cage in the adapter card.
   d. Make sure that the connector locks in place.
   When installing cables, make sure that the latches engage.
   Always install and remove cables by pushing or pulling the cable and connector in a straight line with the card.
3. After inserting a cable into a port, the Green LED indicator will light when the physical connection is established (that is, when the unit is powered on and a cable is plugged into the port with the other end of the connector plugged into a functioning port). See LED Interface under the Interfaces section.


4. After plugging in a cable, lock the connector using the latching mechanism particular to the cable vendor. When data is being transferred the Green LED will blink. See LED Interface under the Interfaces section.
5. Care should be taken so as not to impede the air exhaust flow through the ventilation holes. Use cable lengths that allow for routing horizontally around to the side of the chassis before bending upward or downward in the rack.
6. To remove a cable, disengage the locks and slowly pull the connector away from the port receptacle. The LED indicator will turn off when the cable is unseated.

3.8 Identifying the Card in Your System

On Linux
Get the device location on the PCI bus by running lspci and locating lines with the string “Mellanox Technologies”:

ConnectX-7 Card Configuration

Output Example

Single-port Socket Direct Card (2x PCIe x16)

[root@mftqa-009 ~]# lspci | grep mellanox -i
a3:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
e3:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]

Dual-port Socket Direct Card (2x PCIe x16)

[root@mftqa-009 ~]# lspci | grep mellanox -i
05:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
05:00.1 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
82:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
82:00.1 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]

In the output example above, the first two rows indicate that one card is installed in a PCI slot with PCI Bus address 05 (hexadecimal), PCI Device number 00, and PCI Function numbers 0 and 1. The other card is installed in a PCI slot with PCI Bus address 82 (hexadecimal), PCI Device number 00, and PCI Function numbers 0 and 1. Since the two PCIe cards are installed in two PCIe slots, each card gets a unique PCI Bus and Device number. Each of the PCIe x16 buses sees two network ports; in effect, the two physical ports of the ConnectX-7 Socket Direct adapter are viewed as four net devices by the system.

Single-port PCIe x16 Card

[root@mftqa-009 ~]# lspci | grep mellanox -i
a3:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
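To also see the numeric vendor and device IDs (useful when cross-checking against the PCI ID repository mentioned in the Windows section below), lspci can print them; per this manual, 0x15B3 is the NVIDIA (Mellanox) vendor ID and 1021 is the ConnectX-7 device ID. A minimal sketch:

# -nn prints names together with [vendor:device] IDs; ConnectX-7 appears as [15b3:1021].
lspci -nn | grep -i '15b3:1021'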

On Windows
1. Open Device Manager on the server. Click Start => Run, and then enter devmgmt.msc.
2. Expand System Devices and locate your ConnectX-7 adapter card.
3. Right-click your adapter's row and select Properties to display the adapter card properties window.
4. Click the Details tab and select Hardware Ids (Windows 2022/2019/2016/2012 R2) from the Property pull-down menu.


PCI Device (Example)
5. In the Value display box, check the fields VEN and DEV (fields are separated by '&'). In the display example above, notice the sub-string "PCI\VEN_15B3&DEV_1021": VEN is equal to 0x15B3 (the Vendor ID of Mellanox Technologies), and DEV is equal to 1021 (for ConnectX-7), which is a valid NVIDIA PCI Device ID.
If the PCI device does not have an NVIDIA adapter ID, return to Step 2 to check
another device.
The list of NVIDIA PCI Device IDs can be found at the PCI ID repository.
3.9 ConnectX-7 PCIe x16 Installation Instructions
3.9.1 Installing the Card
This section applies to all cards.
In case you would like to use the Socket Direct configuration (PCIe x32) that is available in MCX75510AAS-HEAT, MCX75510AAS-NEAT and MCX755106AS-HEAT, please refer to ConnectX-7 Socket Direct (2x PCIe x16) Installation Instructions.
Please make sure to install the ConnectX-7 cards in a PCIe slot that is capable of supplying
the required power and airflow as stated in Specifications.

The below images are for illustration purposes only.
Connect the adapter card in an available PCI Express x16 slot in the chassis.
Step 1: Locate an available PCI Express x16 slot and insert the adapter card into the chassis.
Step 2: Applying even pressure at both corners of the card, insert the adapter card in a PCI Express slot until firmly seated.
Do not use excessive force when seating the card, as this may damage the chassis.
Secure the adapter card to the chassis. Secure the bracket to the chassis with the bracket screw.

3.9.2 Uninstalling the Card
Safety Precautions
The adapter is installed in a system that operates with voltages that can be lethal. Before uninstalling the adapter card, please observe the following precautions to avoid injury and prevent damage to system components.
1. Remove any metallic objects from your hands and wrists.
2. It is strongly recommended to use an ESD strap or other antistatic devices.
3. Turn off the system and disconnect the power cord from the server.
Card Removal
Please note that the following images are for illustration purposes only.
1. Verify that the system is powered off and unplugged. 2. Wait 30 seconds. 3. To remove the card, disengage the retention mechanisms on the bracket (clips or screws).
4. Holding the adapter card from its center, gently pull the ConnectX-7 card out of the PCI Express slot.

3.10 ConnectX-7 Socket Direct (2x PCIe x16) Installation Instructions
This section applies to the following adapter cards when used as Socket Direct cards in
dual-socket servers:
· MCX755106AS-HEAT
· MCX755106AC-HEAT
· MCX75510AAS-NEAT
· MCX75510AAS-HEAT
· MCX715105AS-WEAT
The below images are for illustration purposes only.
The hardware installation section uses the terminology of white and black harnesses to differentiate between the two supplied cables. Due to supply chain variations, some cards may be provided with two black harnesses instead. To clarify the difference between these two harnesses, one black harness was marked with a “WHITE” label and the other with a “BLACK” label. The Cabline harness marked with the “WHITE” label should be connected to the connector on the ConnectX-7 and PCIe card engraved with “White Cable,” while the one marked with the “BLACK” label should be connected to the connector on the ConnectX-7 and PCIe card engraved with “Black Cable”.
The harnesses' minimal bending radius is 10 mm.

3.10.1 Installing the Cards
The installation instructions include steps that involve a retention clip to be used while
connecting the Cabline harnesses to the cards. Please note that this is an optional accessory.
Please make sure to install the ConnectX-7 cards in a PCIe slot capable of supplying the
required power and airflow as stated in the Specifications.
Connect the adapter card with the Auxiliary connection card using the supplied Cabline CA-II Plus harnesses.
Step 1: Slide the black and white Cabline CA-II Plus harnesses through the retention clip while ensuring the clip opening is facing the plugs.
Step 2: Plug the Cabline CA-II Plus harnesses into the ConnectX-7 adapter card while paying attention to the color coding. As indicated on both sides of the card, plug the black harness into the component side and the white harness into the print side.

Step 3: Verify the plugs are locked.
Step 4: Slide the retention clip latches through the cutouts on the PCB. The latches should face the annotation on the PCB.
Step 5: Clamp the retention clip. Verify both latches are firmly locked.

Step 6: Slide the Cabline CA-II Plus harnesses through the retention clip. Make sure that the clip opening is facing the plugs.

Step 7: Plug the Cabline CA-II Plus harnesses into the PCIe Auxiliary Card. As indicated on both sides of the Auxiliary connection card, plug the black harness into the component side and the white harness into the print side.
Step 8: Verify the plugs are locked.
Step 9: Slide the retention clip through the cutouts on the PCB. Ensure the latches are facing the "Black Cable" annotation, as seen in the picture below.
Step 10: Clamp the retention clip. Verify both latches are firmly locked.

Connect the ConnectX-7 adapter and PCIe Auxiliary Connection cards in available PCI Express x16 slots in the chassis.
Step 1: Locate two available PCI Express x16 slots. Step 2: Applying even pressure at both corners of the cards, insert the adapter card in the PCI Express slots until firmly seated.

Do not use excessive force when seating the cards, as this may damage the system or the cards.
Step 3: Applying even pressure at both corners of the cards, insert the Auxiliary Connection card in the PCI Express slots until firmly seated.
Secure the ConnectX-7 adapter and PCIe Auxiliary Connection cards to the chassis.
Step 1: Secure the brackets to the chassis with the bracket screws.

3.10.2 Uninstalling the Cards
Safety Precautions
The adapter is installed in a system that operates with voltages that can be lethal. Before uninstalling the adapter card, please observe the following precautions to avoid injury and prevent damage to system components.
1. Remove any metallic objects from your hands and wrists.
2. Using an ESD strap or other antistatic devices is strongly recommended.
3. Turn off the system and disconnect the power cord from the server.
Card Removal
Please note that the following images are for illustration purposes only.
1. Verify that the system is powered off and unplugged. 2. Wait 30 seconds. 3. To remove the card, disengage the retention mechanisms on the brackets (clips or screws).

4. Holding the adapter card from its center, gently pull the ConnectX-7 and Auxiliary Connections cards out of the PCI Express slot.

Driver Installation

Please refer to the relevant driver installation section.
· Linux Driver Installation
· Windows Driver Installation
· VMware Driver Installation

4.1 Linux Driver Installation
This section describes how to install and test the MLNX_OFED for Linux package on a single server with a ConnectX-7 adapter card installed.

4.1.1 Prerequisites
Platforms: A server platform with a ConnectX-7 InfiniBand/Ethernet adapter card installed.
Required Disk Space for Installation: 1GB
Operating System: Linux operating system. For the list of supported operating system distributions and kernels, please refer to the MLNX_OFED Release Notes.
Installer Privileges: The installation requires administrator (root) privileges on the target machine.

4.1.2 Downloading MLNX_OFED
1. Verify that the system has a network adapter installed by running the lspci command. The below table provides output examples per ConnectX-7 card configuration.

ConnectX-7 Card Configuration

Output Examples

Single-port Socket Direct Card (2x PCIe x16)

[root@mftqa-009 ~]# lspci | grep mellanox -i
a3:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
e3:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]


ConnectX-7 Card Configuration
Dual-port Socket Direct Card (2x PCIe x16)
Single-port PCIe x16 Card

Output Examples
[root@mftqa-009 ~]# lspci | grep mellanox -i
05:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
05:00.1 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
82:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]
82:00.1 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]

In the output example above, the first two rows indicate that one card is installed in a PCI slot with PCI Bus address 05 (hexadecimal), PCI Device number 00, and PCI Function numbers 0 and 1. The other card is installed in a PCI slot with PCI Bus address 82 (hexadecimal), PCI Device number 00, and PCI Function numbers 0 and 1. Since the two PCIe cards are installed in two PCIe slots, each card gets a unique PCI Bus and Device number. Each of the PCIe x16 buses sees two network ports; in effect, the two physical ports of the ConnectX-7 Socket Direct adapter are viewed as four net devices by the system.
[root@mftqa-009 ~]# lspci | grep mellanox -i
a3:00.0 Infiniband controller: Mellanox Technologies MT2910 Family [ConnectX-7]

Dual-port PCIe x16 Card

[root@mftqa-009 ~]# lspci | grep mellanox -i
86:00.0 Network controller: Mellanox Technologies MT2910 Family [ConnectX-7]
86:00.1 Network controller: Mellanox Technologies MT2910 Family [ConnectX-7]

2. Download the ISO image to your host. The image's name has the format MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.iso. You can download and install the latest OpenFabrics Enterprise Distribution (OFED) software package available via the NVIDIA web site at nvidia.com/en-us/networking > Products > Software > InfiniBand Drivers > NVIDIA MLNX_OFED.
   i. Scroll down to the Download wizard, and click the Download tab.
   ii. Choose your relevant package depending on your host operating system.
   iii. Click the desired ISO/tgz package.
   iv. To obtain the download link, accept the End User License Agreement (EULA).
3. Use a hash utility to confirm the file integrity of your ISO image. Run the following command and compare the result to the value provided on the download page.

sha256sum MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.iso
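For example, with a hypothetical ISO file name (substitute the file you actually downloaded):

sha256sum MLNX_OFED_LINUX-23.10-1.1.9.0-rhel8.8-x86_64.iso
# Compare the printed digest with the SHA256 value shown on the download page.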

4.1.3 Installing MLNX_OFED
4.1.3.1 Installation Script
The installation script, mlnxofedinstall, performs the following:
· Discovers the currently installed kernel
· Uninstalls any software stacks that are part of the standard operating system distribution or another vendor's commercial stack


· Installs the MLNX_OFED_LINUX binary RPMs (if they are available for the current kernel)
· Identifies the currently installed InfiniBand and Ethernet network adapters and automatically upgrades the firmware
Note: To perform a firmware upgrade using customized firmware binaries, a path can be provided to the folder that contains the firmware binary files by running --fw-image-dir. Using this option, the firmware version embedded in the MLNX_OFED package will be ignored.
Example:

./mlnxofedinstall --fw-image-dir /tmp/my_fw_bin_files
If the driver detects unsupported cards on the system, it will abort the installation procedure. To avoid this, make sure to add the --skip-unsupported-devices-check flag during installation.
Usage
/mnt/mlnxofedinstall [OPTIONS]
The installation script removes all previously installed OFED packages and re-installs from scratch. You will be prompted to acknowledge the deletion of the old packages.
Pre-existing configuration files will be saved with the extension “.conf.rpmsave”.
· If you need to install OFED on an entire (homogeneous) cluster, a common strategy is to mount the ISO image on one of the cluster nodes and then copy it to a shared file system such as NFS. To install on all the cluster nodes, use cluster-aware tools (such as pdsh).
· If your kernel version does not match with any of the offered pre-built RPMs, you can add your kernel version by using the “mlnx_add_kernel_support.sh” script located inside the MLNX_OFED package.
On RedHat and SLES distributions with an errata kernel installed, there is no need to use the mlnx_add_kernel_support.sh script. The regular installation can be performed, and the weak-updates mechanism will create symbolic links to the MLNX_OFED kernel modules.
If you regenerate kernel modules for a custom kernel (using --add-kernel-support), the package installation will not involve automatic regeneration of the initramfs. In some cases, such as a system with a root filesystem mounted over a ConnectX card, not regenerating the initramfs may even cause the system to fail to reboot.

In such cases, the installer will recommend running the following command to update the initramfs:
dracut -f
On some OSs, dracut -f might result in the following error message, which can be safely ignored.
libkmod: kmod_module_new_from_path: kmod_module ‘mdev’ already exists with different path
The "mlnx_add_kernel_support.sh" script can be executed directly from the mlnxofedinstall script. For further information, please see the '--add-kernel-support' option below.
On Ubuntu and Debian distributions, driver installation uses the Dynamic Kernel Module Support (DKMS) framework. Thus, the drivers' compilation will take place on the host during MLNX_OFED installation. Therefore, using "mlnx_add_kernel_support.sh" is irrelevant on Ubuntu and Debian distributions.
Example: The following command will create a MLNX_OFED_LINUX ISO image for RedHat 7.3 under the /tmp directory.

./MLNX_OFED_LINUX-x.x-x-rhel7.3-x86_64/mlnx_add_kernel_support.sh -m /tmp/MLNX_OFED_LINUX-x.x-x-rhel7.3-x86_64/ --make-tgz

Note: This program will create MLNX_OFED_LINUX TGZ for rhel7.3 under /tmp directory.
All Mellanox, OEM, OFED, or Distribution IB packages will be removed.
Do you want to continue?[y/N]:y
See log file /tmp/mlnx_ofed_iso.21642.log
Building OFED RPMs. Please wait...
Removing OFED RPMs...
Created /tmp/MLNX_OFED_LINUX-x.x-x-rhel7.3-x86_64-ext.tgz
· The script adds the following lines to /etc/security/limits.conf for the userspace components such as MPI:
  · soft memlock unlimited
  · hard memlock unlimited
These settings set the amount of memory that can be pinned by a userspace application to unlimited. If desired, tune the value unlimited to a specific amount of RAM.
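To confirm the limit is in effect, a quick check from a fresh login shell (a minimal sketch):

# Should report "unlimited" (or the specific value you configured) for max locked memory.
ulimit -l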
For your machine to be part of the InfiniBand/VPI fabric, a Subnet Manager must be running on one of the fabric nodes. At this point, OFED for Linux has already installed the OpenSM Subnet Manager on your machine. For the list of installation options, run:
./mlnxofedinstall --h
4.1.3.2 Installation Procedure
This section describes the installation procedure of MLNX_OFED on NVIDIA adapter cards.

a. Log in to the installation machine as root.
b. Mount the ISO image on your machine.

host1# mount -o ro,loop MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.iso /mnt
c. Run the installation script.
/mnt/mlnxofedinstall
Logs dir: /tmp/MLNX_OFED_LINUX-x.x-x.logs
This program will install the MLNX_OFED_LINUX package on your machine.
Note that all other Mellanox, OEM, OFED, RDMA or Distribution IB packages will be removed.
Those packages are removed due to conflicts with MLNX_OFED_LINUX, do not reinstall them.
Starting MLNX_OFED_LINUX-x.x.x installation ...
........
........
Installation finished successfully.
Attempting to perform Firmware update...
Querying Mellanox devices firmware ...
For unattended installation, use the --force installation option while running the MLNX_OFED installation script:
/mnt/mlnxofedinstall --force
MLNX_OFED for Ubuntu should be installed with the following flags in a chroot environment:
./mlnxofedinstall --without-dkms --add-kernel-support --kernel <kernel version in chroot> --without-fw-update --force
For example:
./mlnxofedinstall --without-dkms --add-kernel-support --kernel 3.13.0-85-generic --without-fw-update --force
Note that the path to the kernel sources (--kernel-sources) should be added if the sources are not in their default location.
In case your machine has the latest firmware, no firmware update will occur and the installation script will print a message similar to the following at the end of the installation:
Device #1:
----------
Device Type: ConnectX-X
Part Number: MCXXXX-XXX
PSID: MT_
PCI Device Name: 0b:00.0
Base MAC: 0000e41d2d5cf810
Versions:    Current      Available
     FW      XX.XX.XXXX
Status: Up to date
In case your machine has an unsupported network adapter device, no firmware update will occur and one of the error messages below will be printed. Please contact your hardware vendor for help with firmware updates.

Error message #1:
Device #1:
----------
Device Type: ConnectX-X
Part Number: MCXXXX-XXX
PSID: MT_
PCI Device Name: 0b:00.0
Base MAC: 0000e41d2d5cf810
Versions:    Current      Available
     FW      XX.XX.XXXX
Status: No matching image found

Error message #2:
The firmware for this device is not distributed inside NVIDIA driver: 0000:01:00.0 (PSID: IBM2150110033)
To obtain firmware for this device, please contact your HW vendor.

d. Case A: If the installation script has performed a firmware update on your network adapter, you need to either restart the driver or reboot your system before the firmware update can take effect. Refer to the table below to find the appropriate action for your specific card.

Adapter                                          Required Action
Standard ConnectX-4/ConnectX-4 Lx or higher      Driver Restart (Soft Reset)
Adapters with Multi-Host Support                 Standard Reboot (Soft Reset)
Socket Direct Cards                              Cold Reboot (Hard Reset)
Case B: If the installation script has not performed a firmware upgrade on your network adapter, restart the driver by running: "/etc/init.d/openibd restart".
e. (InfiniBand only) Run the hca_self_test.ofed utility to verify whether or not the InfiniBand link is up. The utility also checks for and displays additional information such as:
· HCA firmware version
· Kernel architecture
· Driver version
· Number of active HCA ports along with their states
· Node GUID
For more details on hca_self_test.ofed, see the file docs/readme_and_user_manual/hca_self_test.readme.
After installation completion, information about the OFED installation, such as prefix, kernel version, and installation parameters can be retrieved by running the command /etc/infiniband/info. Most of the OFED components can be configured or reconfigured after the installation, by modifying the relevant configuration files. See the relevant chapters in this manual for details.


The list of the modules that will be loaded automatically upon boot can be found in the /etc/infiniband/openib.conf file.
Installing OFED will replace the RDMA stack and remove existing 3rd party RDMA
connectors.

4.1.3.3 Installation Results

Software
· Most of the MLNX_OFED packages are installed under the /usr directory, except for the following packages, which are installed under the /opt directory:
  · fca and ibutils
  · iproute2 (rdma tool) - installed under /opt/Mellanox/iproute2/sbin/rdma
· The kernel modules are installed under:
  · /lib/modules/`uname -r`/updates on SLES and Fedora distributions
  · /lib/modules/`uname -r`/extra/mlnx-ofa_kernel on RHEL and other Red Hat-like distributions
  · /lib/modules/`uname -r`/updates/dkms/ on Ubuntu

Firmware
· The firmware of existing network adapter devices will be updated if the following two conditions are fulfilled:
  · The installation script is run in default mode, that is, without the option --without-fw-update
  · The firmware version of the adapter device is older than the firmware version included with the OFED ISO image
  Note: If an adapter's Flash was originally programmed with an Expansion ROM image, the automatic firmware update will also burn an Expansion ROM image.
· In case your machine has an unsupported network adapter device, no firmware update will occur and the error message below will be printed:
  "The firmware for this device is not distributed inside NVIDIA driver: 0000:01:00.0 (PSID: IBM2150110033)
  To obtain firmware for this device, please contact your HW vendor."

4.1.3.4 Installation Logging
While installing MLNX_OFED, the install log for each selected package is saved in a separate log file. The path to the directory containing the log files is displayed after running the installation script, for example:
Logs dir: /tmp/MLNX_OFED_LINUX-4.4-1.0.0.0.IBMM2150110033.logs
4.1.4 Driver Load Upon System Boot
Upon system boot, the NVIDIA drivers will be loaded automatically.


To prevent the automatic load of the NVIDIA drivers upon system boot:
a. Add the following lines to the "/etc/modprobe.d/mlnx.conf" file.

blacklist mlx5_core
blacklist mlx5_ib
b. Set "ONBOOT=no" in the "/etc/infiniband/openib.conf" file.
c. If the modules exist in the initramfs file, they can automatically be loaded by the kernel. To prevent this behavior, update the initramfs using the operating system's standard tools.
Note: The process of updating the initramfs will add the blacklists from step a, and will prevent the kernel from loading the modules automatically.
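A minimal sketch of updating the initramfs with the operating system's standard tool (which command applies depends on your distribution):
# RHEL/SLES-type systems (dracut-based)
dracut -f
# Debian/Ubuntu
update-initramfs -u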

4.1.4.1 mlnxofedinstall Return Codes
The table below lists the mlnxofedinstall script return codes and their meanings.

Return Code    Meaning
0              The installation ended successfully
1              The installation failed
2              No firmware was found for the adapter device
22             Invalid parameter
28             Not enough free space
171            Not applicable to this system configuration. This can occur when the required hardware is not present on the system.
172            Prerequisites are not met. For example, the required software is missing or the hardware is not configured correctly.
173            Failed to start the mst driver
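For example, a minimal sketch of acting on the installer's return code in a provisioning script (the mount point and the --force flag are assumptions):
/mnt/mlnxofedinstall --force
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "MLNX_OFED installation failed with return code $rc" >&2
    exit "$rc"
fi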


4.1.4.2 Installation Logging
While installing MLNX_OFED, the install log for each selected package is saved in a separate log file. The path to the directory containing the log files is displayed after running the installation script, for example:
Logs dir: /tmp/MLNX_OFED_LINUX-4.4-1.0.0.0.IBMM2150110033.logs
4.1.4.3 Uninstalling MLNX_OFED
Use the script /usr/sbin/ofed_uninstall.sh to uninstall the MLNX_OFED package. The script is part of the ofed-scripts RPM.
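For example, a minimal invocation (the script removes the installed MLNX_OFED packages; run the installer again afterwards if you intend to reinstall):
/usr/sbin/ofed_uninstall.sh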
4.1.5 Additional Installation Procedures
4.1.5.1 Installing MLNX_OFED Using YUM
This type of installation is applicable to RedHat/OL and Fedora operating systems.
4.1.5.1.1 Setting up MLNX_OFED YUM Repository
a. Log into the installation machine as root.
b. Mount the ISO image on your machine and copy its content to a shared location in your network.

mount -o ro,loop MLNX_OFED_LINUX---.iso /mnt

c. Download and install NVIDIA's GPG-KEY. The key can be downloaded via the following link: http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox


wget http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox

--2018-01-25 13:52:30--  http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox
Resolving www.mellanox.com... 72.3.194.0
Connecting to www.mellanox.com|72.3.194.0|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1354 (1.3K) [text/plain]
Saving to: 'RPM-GPG-KEY-Mellanox'

100%[=================================================>] 1,354       --.-K/s   in 0s

2018-01-25 13:52:30 (247 MB/s) - 'RPM-GPG-KEY-Mellanox' saved [1354/1354]

d. Install the key.

sudo rpm --import RPM-GPG-KEY-Mellanox
warning: rpmts_HdrFromFdno: Header V3 DSA/SHA1 Signature, key ID 6224c050: NOKEY
Retrieving key from file:///repos/MLNX_OFED//RPM-GPG-KEY-Mellanox
Importing GPG key 0x6224C050:
 Userid: "Mellanox Technologies (Mellanox Technologies - Signing Key v2) <support@mellanox.com>"
 From : /repos/MLNX_OFED//RPM-GPG-KEY-Mellanox
Is this ok [y/N]:
e. Check that the key was successfully imported.

rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n' | grep Mellanox
gpg-pubkey-a9e4b643-520791ba gpg(Mellanox Technologies <support@mellanox.com>)
f. Create a yum repository configuration file called “/etc/yum.repos.d/mlnx_ofed.repo” with the following content:

[mlnx_ofed]
name=MLNX_OFED Repository
baseurl=file:///<path to extracted MLNX_OFED package>/RPMS
enabled=1
gpgkey=file:///<path to the downloaded key RPM-GPG-KEY-Mellanox>
gpgcheck=1
g. Check that the repository was successfully added.

yum repolist
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repo id          repo name                                   status
mlnx_ofed        MLNX_OFED Repository                        108
rpmforge         RHEL 6Server - RPMforge.net - dag           4,597
repolist: 8,351

4.1.5.1.1.1 Setting up MLNX_OFED YUM Repository Using –add-kernel-support
a. Log into the installation machine as root.
b. Mount the ISO image on your machine and copy its content to a shared location in your network.

mount -o ro,loop MLNX_OFED_LINUX---.iso /mnt

c. Build the packages with kernel support and create the tarball.

/mnt/mlnx_add_kernel_support.sh --make-tgz <optional --kmp> -k $(uname -r) -m /mnt/
Note: This program will create MLNX_OFED_LINUX TGZ for rhel7.6 under /tmp directory.
Do you want to continue?[y/N]:y
See log file /tmp/mlnx_iso.4120_logs/mlnx_ofed_iso.4120.log
Checking if all needed packages are installed...
Building MLNX_OFED_LINUX RPMS . Please wait...
Creating metadata-rpms for 3.10.0-957.21.3.el7.x86_64 ...
WARNING: If you are going to configure this package as a repository, then please note
WARNING: that it contains unsigned rpms, therefore, you need to disable the gpgcheck
WARNING: by setting 'gpgcheck=0' in the repository conf file.
Created /tmp/MLNX_OFED_LINUX-5.2-0.5.5.0-rhel7.6-x86_64-ext.tgz
d. Open the tarball.


cd /tmp/
# tar -xvf /tmp/MLNX_OFED_LINUX-5.2-0.5.5.0-rhel7.6-x86_64-ext.tgz

e. Create a YUM repository configuration file called “/etc/yum.repos.d/mlnx_ofed.repo” with the following content:

[mlnx_ofed]
name=MLNX_OFED Repository
baseurl=file:///<path to extracted MLNX_OFED package>/RPMS
enabled=1
gpgcheck=0
f. Check that the repository was successfully added.

yum repolist
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repo id          repo name                                   status
mlnx_ofed        MLNX_OFED Repository                        108
rpmforge         RHEL 6Server - RPMforge.net - dag           4,597
repolist: 8,351

4.1.5.1.2 Installing MLNX_OFED Using the YUM Tool
After setting up the YUM repository for the MLNX_OFED package, perform the following:
a. View the available package groups by invoking:

yum search mlnx-ofed
mlnx-ofed-all.noarch : MLNX_OFED all installer package (with KMP support)
mlnx-ofed-all-user-only.noarch : MLNX_OFED all-user-only installer package (User Space packages only)
mlnx-ofed-basic.noarch : MLNX_OFED basic installer package (with KMP support)
mlnx-ofed-basic-user-only.noarch : MLNX_OFED basic-user-only installer package (User Space packages only)
mlnx-ofed-bluefield.noarch : MLNX_OFED bluefield installer package (with KMP support)
mlnx-ofed-bluefield-user-only.noarch : MLNX_OFED bluefield-user-only installer package (User Space packages only)
mlnx-ofed-dpdk.noarch : MLNX_OFED dpdk installer package (with KMP support)
mlnx-ofed-dpdk-upstream-libs.noarch : MLNX_OFED dpdk-upstream-libs installer package (with KMP support)
mlnx-ofed-dpdk-upstream-libs-user-only.noarch : MLNX_OFED dpdk-upstream-libs-user-only installer package (User Space packages only)
mlnx-ofed-dpdk-user-only.noarch : MLNX_OFED dpdk-user-only installer package (User Space packages only)
mlnx-ofed-eth-only-user-only.noarch : MLNX_OFED eth-only-user-only installer package (User Space packages only)
mlnx-ofed-guest.noarch : MLNX_OFED guest installer package (with KMP support)
mlnx-ofed-guest-user-only.noarch : MLNX_OFED guest-user-only installer package (User Space packages only)
mlnx-ofed-hpc.noarch : MLNX_OFED hpc installer package (with KMP support)
mlnx-ofed-hpc-user-only.noarch : MLNX_OFED hpc-user-only installer package (User Space packages only)
mlnx-ofed-hypervisor.noarch : MLNX_OFED hypervisor installer package (with KMP support)
mlnx-ofed-hypervisor-user-only.noarch : MLNX_OFED hypervisor-user-only installer package (User Space packages only)
mlnx-ofed-kernel-only.noarch : MLNX_OFED kernel-only installer package (with KMP support)
mlnx-ofed-vma.noarch : MLNX_OFED vma installer package (with KMP support)
mlnx-ofed-vma-eth.noarch : MLNX_OFED vma-eth installer package (with KMP support)
mlnx-ofed-vma-eth-user-only.noarch : MLNX_OFED vma-eth-user-only installer package (User Space packages only)
mlnx-ofed-vma-user-only.noarch : MLNX_OFED vma-user-only installer package (User Space packages only)
mlnx-ofed-vma-vpi.noarch : MLNX_OFED vma-vpi installer package (with KMP support)
mlnx-ofed-vma-vpi-user-only.noarch : MLNX_OFED vma-vpi-user-only installer package (User Space packages only)

where:
mlnx-ofed-all           Installs all available packages in MLNX_OFED
mlnx-ofed-basic         Installs basic packages required for running NVIDIA cards
mlnx-ofed-guest         Installs packages required by guest OS
mlnx-ofed-hpc           Installs packages required for HPC
mlnx-ofed-hypervisor    Installs packages required by hypervisor OS
mlnx-ofed-vma           Installs packages required by VMA
mlnx-ofed-vma-eth       Installs packages required by VMA to work over Ethernet
mlnx-ofed-vma-vpi       Installs packages required by VMA to support VPI
bluefield               Installs packages required for BlueField
dpdk                    Installs packages required for DPDK
dpdk-upstream-libs      Installs packages required for DPDK using RDMA-Core
kernel-only             Installs packages required for a non-default kernel

Note: MLNX_OFED provides kernel module RPM packages with KMP support for RHEL and SLES. For other operating systems, kernel module RPM packages are provided only for the operating system's default kernel. In this case, the group RPM packages have the supported kernel version in their package name. Example:

mlnx-ofed-all-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED all installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
mlnx-ofed-basic-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED basic installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
mlnx-ofed-guest-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED guest installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
mlnx-ofed-hpc-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED hpc installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
mlnx-ofed-hypervisor-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED hypervisor installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
mlnx-ofed-vma-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED vma installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
mlnx-ofed-vma-eth-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED vma-eth installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
mlnx-ofed-vma-vpi-3.17.4-301.fc21.x86_64.noarch : MLNX_OFED vma-vpi installer package for kernel 3.17.4-301.fc21.x86_64 (without KMP support)
When using an operating system other than RHEL or SLES, or if you have installed a kernel that is not supported by default in MLNX_OFED, you can use the mlnx_add_kernel_support.sh script to build MLNX_OFED for your kernel.
The script automatically builds the matching group RPM packages for your kernel so that you can still install MLNX_OFED via yum.
Please note that the resulting MLNX_OFED repository will contain unsigned RPMs; therefore, you should set 'gpgcheck=0' in the repository configuration file.
b. Install the desired group.

yum install mlnx-ofed-all
Loaded plugins: langpacks, product-id, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package mlnx-ofed-all.noarch 0:3.1-0.1.2 will be installed
--> Processing Dependency: kmod-isert = 1.0-OFED.3.1.0.1.2.1.g832a737.rhel7u1 for package: mlnx-ofed-all-3.1-0.1.2.noarch
...................
...................
qperf.x86_64 0:0.4.9-9
rds-devel.x86_64 0:2.0.7-1.12
rds-tools.x86_64 0:2.0.7-1.12
sdpnetstat.x86_64 0:1.60-26
srptools.x86_64 0:1.0.2-12

Complete!


Installing MLNX_OFED using the "YUM" tool does not automatically update the firmware.
To update the firmware to the version included in the MLNX_OFED package, run:
# yum install mlnx-fw-updater
4.1.5.2 Installing MLNX_OFED Using apt-get
This type of installation is applicable to Debian and Ubuntu operating systems.
4.1.5.2.1 Setting up MLNX_OFED apt-get Repository
a. Log into the installation machine as root.
b. Extract the MLNX_OFED package to a shared location in your network. It can be downloaded from https://www.nvidia.com/en-us/networking/ > Products > Software > InfiniBand Drivers.
c. Create an apt-get repository configuration file called "/etc/apt/sources.list.d/mlnx_ofed.list" with the following content:
deb file:/<path to extracted MLNX_OFED package>/DEBS ./
d. Download and install NVIDIA's GPG key.

wget -qO - http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox | sudo apt-key add -
e. Verify that the key was successfully imported.

apt-key list
pub   1024D/A9E4B643 2013-08-11
uid                  Mellanox Technologies <support@mellanox.com>
sub   1024g/09FCC269 2013-08-11
f. Update the apt-get cache.

sudo apt-get update

4.1.5.2.1.1 Setting up MLNX_OFED apt-get Repository Using –add-kernel-support
a. Log into the installation machine as root.
b. Mount the ISO image on your machine and copy its content to a shared location in your network.

mount -o ro,loop MLNX_OFED_LINUX---.iso /mnt

c. Build the packages with kernel support and create the tarball.

/mnt/mlnx_add_kernel_support.sh --make-tgz <optional --kmp> -k $(uname -r) -m /mnt/
Note: This program will create MLNX_OFED_LINUX TGZ for rhel7.6 under /tmp directory.
Do you want to continue?[y/N]:y
See log file /tmp/mlnx_iso.4120_logs/mlnx_ofed_iso.4120.log

Checking if all needed packages are installed...
Building MLNX_OFED_LINUX RPMS . Please wait...
Creating metadata-rpms for 3.10.0-957.21.3.el7.x86_64 ...
WARNING: If you are going to configure this package as a repository, then please note
WARNING: that it contains unsigned rpms, therefore, you need to disable the gpgcheck
WARNING: by setting 'gpgcheck=0' in the repository conf file.
Created /tmp/MLNX_OFED_LINUX-5.2-0.5.5.0-rhel7.6-x86_64-ext.tgz
d. Open the tarball.

cd /tmp/
# tar -xvf /tmp/MLNX_OFED_LINUX-5.2-0.5.5.0-rhel7.6-x86_64-ext.tgz

e. Create an apt-get repository configuration file called "/etc/apt/sources.list.d/mlnx_ofed.list" with the following content:
deb [trusted=yes] file:/<path to extracted MLNX_OFED package>/DEBS ./
f. Update the apt-get cache.

sudo apt-get update

4.1.5.2.2 Installing MLNX_OFED Using the apt-get Tool
After setting up the apt-get repository for the MLNX_OFED package, perform the following:
a. View the available package groups by invoking:

apt-cache search mlnx-ofed
........
knem-dkms - DKMS support for mlnx-ofed kernel modules
mlnx-ofed-kernel-dkms - DKMS support for mlnx-ofed kernel modules
mlnx-ofed-kernel-utils - Userspace tools to restart and tune mlnx-ofed kernel modules
mlnx-ofed-vma-vpi - MLNX_OFED vma-vpi installer package (with DKMS support)
mlnx-ofed-kernel-only - MLNX_OFED kernel-only installer package (with DKMS support)
mlnx-ofed-bluefield - MLNX_OFED bluefield installer package (with DKMS support)
mlnx-ofed-hpc-user-only - MLNX_OFED hpc-user-only installer package (User Space packages only)
mlnx-ofed-dpdk-user-only - MLNX_OFED dpdk-user-only installer package (User Space packages only)
mlnx-ofed-all-exact - MLNX_OFED all installer package (with DKMS support) (exact)
mlnx-ofed-all - MLNX_OFED all installer package (with DKMS support)
mlnx-ofed-vma-vpi-user-only - MLNX_OFED vma-vpi-user-only installer package (User Space packages only)
mlnx-ofed-eth-only-user-only - MLNX_OFED eth-only-user-only installer package (User Space packages only)
mlnx-ofed-vma-user-only - MLNX_OFED vma-user-only installer package (User Space packages only)
mlnx-ofed-hpc - MLNX_OFED hpc installer package (with DKMS support)
mlnx-ofed-bluefield-user-only - MLNX_OFED bluefield-user-only installer package (User Space packages only)
mlnx-ofed-dpdk - MLNX_OFED dpdk installer package (with DKMS support)
mlnx-ofed-vma-eth-user-only - MLNX_OFED vma-eth-user-only installer package (User Space packages only)
mlnx-ofed-all-user-only - MLNX_OFED all-user-only installer package (User Space packages only)
mlnx-ofed-vma-eth - MLNX_OFED vma-eth installer package (with DKMS support)
mlnx-ofed-vma - MLNX_OFED vma installer package (with DKMS support)
mlnx-ofed-dpdk-upstream-libs-user-only - MLNX_OFED dpdk-upstream-libs-user-only installer package (User Space packages only)
mlnx-ofed-basic-user-only - MLNX_OFED basic-user-only installer package (User Space packages only)
mlnx-ofed-basic-exact - MLNX_OFED basic installer package (with DKMS support) (exact)
mlnx-ofed-basic - MLNX_OFED basic installer package (with DKMS support)
mlnx-ofed-dpdk-upstream-libs - MLNX_OFED dpdk-upstream-libs installer package (with DKMS support)

where:
mlnx-ofed-all           MLNX_OFED all installer package
mlnx-ofed-basic         MLNX_OFED basic installer package
mlnx-ofed-vma           MLNX_OFED vma installer package
mlnx-ofed-hpc           MLNX_OFED HPC installer package
mlnx-ofed-vma-eth       MLNX_OFED vma-eth installer package
mlnx-ofed-vma-vpi       MLNX_OFED vma-vpi installer package
knem-dkms               MLNX_OFED DKMS support for mlnx-ofed kernel modules
kernel-dkms             MLNX_OFED kernel-dkms installer package
kernel-only             MLNX_OFED kernel-only installer package
bluefield               MLNX_OFED bluefield installer package
mlnx-ofed-all-exact     MLNX_OFED mlnx-ofed-all-exact installer package
dpdk                    MLNX_OFED dpdk installer package
mlnx-ofed-basic-exact   MLNX_OFED mlnx-ofed-basic-exact installer package
dpdk-upstream-libs      MLNX_OFED dpdk-upstream-libs installer package

b. Install the desired group.

apt-get install <package group>
Example:

apt-get install mlnx-ofed-all

Installing MLNX_OFED using the "apt-get" tool does not automatically update the firmware.
To update the firmware to the version included in the MLNX_OFED package, run:

apt-get install mlnx-fw-updater

4.1.6 Performance Tuning
Depending on the application of the user's system, it may be necessary to modify the default configuration of ConnectX®-based network adapters. If tuning is required, please refer to the Performance Tuning Guide for NVIDIA Network Adapters.

4.2 Windows Driver Installation
For Windows, download and install the latest WinOF-2 for Windows software package available via the NVIDIA website at: WinOF-2 webpage. Follow the installation instructions included in the download package (also available from the download page).
The snapshots in the following sections are presented for illustration purposes only. The installation interface may slightly vary, depending on the operating system in use.

4.2.1 Software Requirements

Description                              Package
Windows Server 2022                      MLNX_WinOF2-<version>_All_x64.exe
Windows Server 2019
Windows Server 2016
Windows Server 2012 R2
Windows 11 Client (64 bit only)
Windows 10 Client (64 bit only)
Windows 8.1 Client (64 bit only)

Note: The operating systems listed above must run with administrator privileges.
4.2.2 Downloading WinOF-2 Driver
To download the .exe file according to your operating system, please follow the steps below:
1. Obtain the machine architecture.
a. To go to the Start menu, position your mouse in the bottom-right corner of the Remote Desktop of your screen.
b. Open a CMD console (click Task Manager -> File -> Run new task and enter CMD).
c. Enter the following command.
echo %PROCESSOR_ARCHITECTURE%

On an x64 (64-bit) machine, the output will be “AMD64”.
2. Go to the WinOF-2 web page at: https://www.nvidia.com/en-us/networking/ > Products > Software > InfiniBand Drivers (Learn More) > Nvidia WinOF-2.
3. Download the .exe image according to the architecture of your machine (see Step 1). The name of the .exe is in the following format: MLNX_WinOF2-<version>.exe.
Installing the incorrect .exe file is prohibited. If you do so, an error message will be
displayed. For example, if you install a 64-bit .exe on a 32-bit machine, the wizard will display the following (or a similar) error message: “The installation package is not supported by this processor type. Contact your vendor”

4.2.3 Installing WinOF-2 Driver
The snapshots in the following sections are for illustration purposes only. The installation interface may slightly vary, depending on the used operating system.
This section provides instructions for two types of installation procedures, and both require administrator privileges:
· Attended Installation An installation procedure that requires frequent user intervention.
· Unattended Installation An automated installation procedure that requires no user intervention.


4.2.3.1 Attended Installation
The following is an example of an installation session.
1. Double click the .exe and follow the GUI instructions to install MLNX_WinOF2.
2. [Optional] Manually configure your setup to contain the logs option (replace "LogFile" with the relevant directory).
MLNX_WinOF2_All_Arch.exe /v"/l*vx [LogFile]"
3. [Optional] If you do not want to upgrade your firmware version (i.e., MT_SKIPFWUPGRD default value is False).
MLNX_WinOF2_All_Arch.exe /v" MT_SKIPFWUPGRD=1"
4. [Optional] If you do not want to install the Rshim driver, run.
MLNX_WinOF2_All_Arch.exe /v" MT_DISABLE_RSHIM_INSTALL=1"
The Rshim driver installation will fail if a prior Rshim driver is already installed. The following failure message will be displayed in the log:
"ERROR!!! Installation failed due to following errors: MlxRshim drivers installation disabled and MlxRshim drivers Installed, Please remove the following oem inf files from driver store: "
5. [Optional] If you want to skip the check for unsupported devices, run:
MLNX_WinOF2_All_Arch.exe /v" SKIPUNSUPPORTEDDEVCHECK=1"
6. Click Next in the Welcome screen.

7. Read and accept the license agreement and click Next.
8. Select the target folder for the installation.

9. The firmware upgrade screen will be displayed in the following cases:
· If the user has an OEM card. In this case, the firmware will not be displayed.
· If the user has a standard NVIDIA® card with an older firmware version, the firmware will be updated accordingly. However, if the user has both an OEM card and a NVIDIA® card, only the NVIDIA® card will be updated.

10. Select a Complete or Custom installation, and follow Step a onward.
a. Select the desired features to install:
· Performance tools - install the performance tools that are used to measure performance in user environment
· Documentation - contains the User Manual and Release Notes
· Management tools - installation tools used for management, such as mlxstat
· Diagnostic Tools - installation tools used for diagnostics, such as mlx5cmd

b. Click Next to install the desired tools.
11. Click Install to start the installation.
12. In case the firmware upgrade option was checked in Step 7, you will be notified if a firmware upgrade is required.

13. Click Finish to complete the installation.

4.2.3.2 Unattended Installation
If no reboot options are specified, the installer restarts the computer whenever necessary without displaying any prompt or warning to the user. To control the reboots, use the /norestart or /forcerestart standard command-line options.
The following is an example of an unattended installation session.
1. Open a CMD console (click Start -> Task Manager -> File -> Run new task and enter CMD).
2. Install the driver. Run:
MLNX_WinOF2-[Driver/Version]_All_Arch.exe /S /v/qn
3. [Optional] Manually configure your setup to contain the logs option:
MLNX_WinOF2-[Driver/Version]_All_Arch.exe /S /v/qn /v"/l*vx [LogFile]"
4. [Optional] If you wish to control whether to install the ND provider or not (i.e., MT_NDPROPERTY default value is True).
MLNX_WinOF2-[Driver/Version]_All_Arch.exe /vMT_NDPROPERTY=1
5. [Optional] If you do not wish to upgrade your firmware version (i.e., MT_SKIPFWUPGRD default value is False).
MLNX_WinOF2-[Driver/Version]_All_Arch.exe /vMT_SKIPFWUPGRD=1
6. [Optional] If you do not want to install the Rshim driver, run.
MLNX_WinOF2_All_Arch.exe /v" MT_DISABLE_RSHIM_INSTALL=1"
The Rshim driver installation will fail if a prior Rshim driver is already installed. The following failure message will be displayed in the log:
"ERROR!!! Installation failed due to following errors: MlxRshim drivers installation disabled and MlxRshim drivers Installed, Please remove the following oem inf files from driver store: "
7. [Optional] If you want to enable the default configuration for Rivermax, run:
MLNX_WinOF2_All_Arch.exe /v"MT_RIVERMAX=1 /l*vx C:\Users\log.txt"
8. [Optional] If you want to skip the check for unsupported devices, run:
MLNX_WinOF2_All_Arch.exe /v" SKIPUNSUPPORTEDDEVCHECK=1"

4.2.4 Firmware Upgrade
If the machine has a standard NVIDIA® card with an older firmware version, the firmware will be automatically updated as part of the NVIDIA® WinOF-2 package installation. For information on how to upgrade firmware manually, please refer to MFT User Manual.
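For reference, a minimal sketch of a manual update with the MFT flint tool (the device name and image file are assumptions; device naming differs between Windows and Linux, so identify yours with mst status, and see the MFT User Manual for the full procedure):
mst status
flint -d mt4129_pciconf0 -i fw-ConnectX7-custom.bin burn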
If the machine has a DDA (pass through) facility, firmware update is supported only in the Host. Therefore, to update the firmware, the following must be performed:
1. Return the network adapters to the Host.
2. Update the firmware according to the steps in the MFT User Manual.
3. Attach the adapters back to VM with the DDA tools.

4.3 VMware Driver Installation
This section describes VMware Driver Installation.

4.3.1 Hardware and Software Requirements

Requirement             Description
Platforms               A server platform with an adapter card based on NVIDIA devices: ConnectX®-7 (InfiniBand/Ethernet) (firmware: fw-ConnectX7)
Operating System        For the complete list of VMware supported operating systems, refer to VMware ESXi async Drivers
Installer Privileges    The installation requires administrator privileges on the target machine.

4.3.2 Installing NATIVE ESXi Driver for VMware vSphere
Please uninstall all previous driver packages prior to installing the new version.
To install the driver:
1. Log into the ESXi server with root permissions.
2. Install the driver.

> esxcli software vib install -d <path>/<bundle_file>

Example:

> esxcli software vib install -d /tmp/MLNX-NATIVE-ESX-ConnectX-4-5_4.16.8.8-10EM-650.0.0.4240417.zip
3. Reboot the machine.
4. Verify the driver was installed successfully.


esxcli software vib list | grep nmlx
nmlx5-core   4.16.8.8-1OEM.650.0.0.4240417   MEL   PartnerSupported   2017-01-31
nmlx5-rdma   4.16.8.8-1OEM.650.0.0.4240417   MEL   PartnerSupported   2017-01-31

After the installation process, all kernel modules are loaded automatically upon boot.

4.3.3 Removing Earlier NVIDIA Drivers
Please unload the previously installed drivers before removing them.
To remove all the drivers:
1. Log into the ESXi server with root permissions.
2. List all the existing NATIVE ESXi driver modules. (See Step 4 in Installing NATIVE ESXi Driver for VMware vSphere.)
3. Remove each module:

> esxcli software vib remove -n nmlx5-rdma
> esxcli software vib remove -n nmlx5-core
To remove the modules, you must run the command in the same order as shown in
the example above.

4. Reboot the server.

4.3.4 Firmware Programming
1. Download the VMware bootable binary images v4.6.0 from the Firmware Tools (MFT) site.
a. ESXi 6.5 File: mft-4.6.0.48-10EM-650.0.0.4598673.x86_64.vib
b. MD5SUM: 0804cffe30913a7b4017445a0f0adbe1
2. Install the image according to the steps described in the MFT User Manual.
The following procedure requires custom boot image downloading, mounting and
booting from a USB device.

5 Updating Adapter Firmware

Each adapter card is shipped with the latest version of qualified firmware at the time of manufacturing. However, NVIDIA issues firmware updates occasionally that provide new features and bug fixes. To check that your card is programmed with the latest available firmware version, download the mlxup firmware update and query utility. The utility can query for available Mellanox adapters and indicate which adapters require a firmware update. If the user confirms, mlxup upgrades the firmware using embedded images. The latest mlxup executable and documentation are available in mlxup – Update and Query Utility.
Firmware Update Example

[server1]# ./mlxup
Querying Mellanox devices firmware ...

Device Type:      ConnectX-7
Part Number:      MCX75310AAS-HEAT
Description:      NVIDIA ConnectX-7 adapter card, 200Gb/s NDR200 IB, Single-port OSFP, PCIe 5.0 x16, Secure boot, No Crypto, Tall Bracket
PCI Device Name:  0b:00.0
Base MAC:         0000e41d2d5cf810
Versions:         Current       Available
     FW           28.33.0800    28.33.1000
Status:           Update required

Device Type:      ConnectX-7
Part Number:      MCX75310AAS-HEAT
Description:      NVIDIA ConnectX-7 adapter card, 200Gb/s NDR200 IB, Single-port OSFP, PCIe 5.0 x16, Secure boot, No Crypto, Tall Bracket
PCI Device Name:  0b:00.0
Base MAC:         0000e41d2d5cf810
Versions:         Current       Available
     FW           28.33.0800    28.33.1000
Status:           Up to date

Perform FW update? [y/N]: y
Device #1: Up to date
Device #2: Updating FW ... Done

Restart needed for updates to take effect.
Log File: /var/log/mlxup/mlxup-yyyymmdd.log


6 Setting High-Speed-Port Link Type

This section applies to ConnectX-7 cards supporting both Ethernet and InfiniBand protocols –
see the relevant OPNs in the following table.

The following table lists the ConnectX-7 cards supporting both Ethernet and InfiniBand protocols, the supported speeds and the default networking port link type.

OPN                 Data Transmission Rate    Default Protocol and Rate
MCX75310AAS-HEAT    NDR200 / 200GbE           InfiniBand NDR200
MCX75310AAS-NEAT    NDR / 400GbE              InfiniBand NDR
MCX75310AAC-NEAT    NDR / 400GbE              InfiniBand NDR
MCX755106AS-HEAT    NDR200 / 200GbE           Ethernet 200GbE
MCX755106AC-HEAT    NDR200 / 200GbE           Ethernet 200GbE
MCX715105AS-WEAT    NDR / 400GbE              Ethernet 400GbE

To configure the networking high-speed ports mode, you can either use the mlxconfig or the UEFI tools.
UEFI can configure the adapter card device before the operating system is up, while mlxconfig configures the card once the operating system is up. According to your preference, use one of the below tools:
6.1 mlxconfig
The mlxconfig tool allows users to change device configurations without burning the firmware. The configuration is also kept after reset. By default, mlxconfig shows the configurations that will be loaded in the next boot. For more information and instructions, refer to Using mlxconfig to Set IB/ ETH Parameters.
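For example, a minimal sketch of switching the first port to InfiniBand with mlxconfig (the MST device name is an assumption; identify yours with mst status):
mst start
mst status
mlxconfig -d /dev/mst/mt4129_pciconf0 query | grep LINK_TYPE
mlxconfig -d /dev/mst/mt4129_pciconf0 set LINK_TYPE_P1=1
# LINK_TYPE_P1=1 selects InfiniBand, LINK_TYPE_P1=2 selects Ethernet; the new setting is applied on the next boot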
6.2 UEFI
PreBoot drivers initialize the adapter device, check the port protocol type (Ethernet or InfiniBand), and bring up the port. The driver then connects to a DHCP server to obtain its assigned IP address and network parameters, and obtains the source location of the kernel/OS to boot from. The DHCP server instructs the PreBoot drivers to access the kernel/OS through a TFTP server, an iSCSI target, or some other service. For more information and instructions, refer to UEFI.


7 Troubleshooting

7.1 General Troubleshooting

Server unable to find the adapter
· Ensure that the adapter is placed correctly
· Make sure the adapter slot and the adapter are compatible
· Install the adapter in a different PCI Express slot
· Use the drivers that came with the adapter or download the latest
· Make sure your motherboard has the latest BIOS
· Try to reboot the server

The adapter no longer works
· Reseat the adapter in its slot or a different slot, if necessary
· Try using another cable
· Reinstall the drivers, as the network driver files may be damaged or deleted
· Reboot the server

Adapters stopped working after installing another adapter
· Try removing and re-installing all adapters
· Check that cables are connected properly
· Make sure your motherboard has the latest BIOS

Link indicator light is off
· Try another port on the switch
· Make sure the cable is securely attached
· Check that you are using the proper cables that do not exceed the recommended lengths
· Verify that your switch and adapter port are compatible

Link light is on, but with no communication established
· Check that the latest driver is loaded
· Check that both the adapter and its link are set to the same speed and duplex settings

7.2 Linux Troubleshooting

Environment Information
cat /etc/issue
uname -a
cat /proc/cpuinfo | grep 'model name' | uniq
ofed_info -s
ifconfig -a
ip link show
ethtool <interface>
ethtool -i <interface>
ibdev2netdev

Card Detection
lspci | grep -i Mellanox

Mellanox Firmware Tool (MFT)
Download and install MFT: MFT Documentation
Refer to the User Manual for installation instructions.
Once installed, run:
mst start
mst status
flint -d <mst_device> q

Ports Information
ibstat
ibv_devinfo


Firmware Version Upgrade
To download the latest firmware version, refer to the NVIDIA Update and Query Utility.

Collect Log File
cat /var/log/messages
dmesg >> system.log
journalctl (applicable on new operating systems)
cat /var/log/syslog

7.3 Windows Troubleshooting

Environment Information
From the Windows desktop choose the Start menu and run: msinfo32
To export system information to a text file, choose the Export option from the File menu. Assign a file name and save.

Mellanox Firmware Tool (MFT)
Download and install MFT: MFT Documentation
Refer to the User Manual for installation instructions.
Once installed, open a CMD window and run:
WinMFT
mst start
mst status
flint -d <mst_device> q

Ports Information
vstat

Firmware Version Upgrade
Download the latest firmware version using the PSID/board ID from here.
flint -d <mst_device> -i <fw_image> b

Collect Log File
· Event log viewer
· MST device logs:
  · mst start
  · mst status
  · flint -d <mst_device> dc > dump_configuration.log
  · mstdump dc > mstdump.log


8 Specifications

The ConnectX-7 adapter card is designed and validated for operation in data-center servers and other large environments that guarantee proper power supply and airflow conditions. The adapter card is not intended for installation on a desktop or a workstation. Moreover, installing the adapter card in any system without proper power and airflow levels can impact the adapter card's functionality and potentially damage it. Failure to meet the environmental requirements listed in this user manual may void the warranty.
Please make sure to install the ConnectX-7 card in a PCIe slot that is capable of supplying the required power and airflow as stated in the tables below.

8.1 MCX75310AAC-NEAT / MCX75310AAS-NEAT Specifications

ConnectX-7 adapter cards with OSFP form factor support RHS (Riding Heatsink) cage only.

Physical: Adapter Card Size: PCIe Half Height, Half Length, 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)

Interfaces:
· See Supported Interfaces
· PCI Express: Gen 4.0/5.0 SERDES @ 16/32GT/s, x16 lanes (Gen 3.0 compatible)
· Networking Port: Single OSFP InfiniBand and Ethernet

Data Rate:
· InfiniBand (Default): NDR/NDR200/HDR/HDR100/EDR/FDR/SDR
· Ethernet: 400/200/100/50/40/10/1 Gb/s Ethernet

Protocol Support:
· InfiniBand: IBTA v1.5 (note a) Auto-Negotiation: NDR (4 lanes x 100Gb/s per lane) port, NDR200 (2 lanes x 100Gb/s per lane) port, HDR (50Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane) port, FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane).
· Ethernet: 400GAUI-4 C2M, 400GBASE-CR4, 200GAUI-2 C2M, 200GAUI-4 C2M, 200GBASE-CR4, 100GAUI-2 C2M, 100GAUI-1 C2M, 100GBASE-CR4, 100GBASE-CR2, 100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI

Capabilities:
· MCX75310AAC-NEAT: Secure Boot Enabled, Crypto Enabled
· MCX75310AAS-NEAT: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications:
· Voltage: 12V, 3.3VAUX
· Maximum current: 100mA
· Typical power with passive cables in PCIe Gen 5.0 x16: MCX75310AAC-NEAT: 25.9W; MCX75310AAS-NEAT: 24.9W
· The complete electrical and thermal specifications are provided in the "NVIDIA ConnectX-7 Electrical and Thermal Specifications" document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental:
· Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (note b)
· Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity
· Altitude (Operational): 3050m

Regulatory:
· Safety: CB / cTUVus / CE
· EMC: CE / FCC / VCCI / ICES / RCM / KC
· RoHS: RoHS Compliant

Notes:
a. The ConnectX-7 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.
b. The non-operational storage temperature specifications apply to the product without its package.

8.2 MCX75310AAS-HEAT Specifications

ConnectX-7 adapter cards with OSFP form factor support RHS (Riding Heat Sink) cage only.

Physical: Adapter Card Size: PCIe Half Height, Half Length, 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)

Interfaces:
· See Supported Interfaces
· PCI Express Interface: Gen 4.0/5.0 SERDES @ 16/32GT/s, x16 lanes (Gen 3.0 compatible)
· Networking Port: Single OSFP InfiniBand and Ethernet

Data Rate:
· InfiniBand (Default): NDR200/HDR/HDR100/EDR/FDR/SDR
· Ethernet: 200/100/50/40/10/1 Gb/s Ethernet

Protocol Support:
· InfiniBand: IBTA v1.5 (note a) Auto-Negotiation: NDR200 (2 lanes x 100Gb/s per lane) port, HDR (50Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane) port, FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane).
· Ethernet: 200GAUI-2 C2M, 200GAUI-4 C2M, 200GBASE-CR4, 100GAUI-2 C2M, 100GAUI-1 C2M, 100GBASE-CR4, 100GBASE-CR2, 100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI

Capabilities: MCX75310AAS-HEAT: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications:
· Voltage: 12V, 3.3VAUX
· Maximum current: 100mA
· Typical power with passive cables in PCIe Gen 5.0 x16: 16.7W
· The complete electrical and thermal specifications are provided in the "NVIDIA ConnectX-7 Electrical and Thermal Specifications" document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental:
· Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (note b)
· Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity
· Altitude (Operational): 3050m

Regulatory:
· Safety: CB / cTUVus / CE
· EMC: CE / FCC / VCCI / ICES / RCM / KC
· RoHS: RoHS Compliant

Notes:
a. The ConnectX-7 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.
b. The non-operational storage temperature specifications apply to the product without its package.

8.3 MCX755106AC-HEAT / MCX755106AS-HEAT Specifications
The Socket-Direct ready cards kit does not include the PCIe passive auxiliary connection
card and two Cabline SA-II Plus harnesses. For more information, please refer to PCIe Auxiliary Card Kit.

ConnectX-7 adapter cards with OSFP form factor support RHS (Riding Heat Sink) cage only.

Physical:
· Adapter Card Size: PCIe Half Height, Half Length, 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)
· Auxiliary PCIe Connection Card Size: 5.09 in. x 2.32 in. (129.30mm x 59.00mm)
· Two Cabline CA-II Plus harnesses (white and black)

Interfaces:
· See Supported Interfaces
· PCI Express Interface: Gen 5.0/4.0 SERDES @ 16/32GT/s, x16 lanes (4.0 and 3.0 compatible)
· Optional: Additional PCIe x16 Gen 4.0 @ SERDES 16GT/s through the PCIe auxiliary passive card and Cabline SA-II Plus harnesses
· Networking Ports: Dual QSFP112 InfiniBand and Ethernet

Data Rate:
· InfiniBand: NDR200/HDR/HDR100/EDR/FDR/SDR
· Ethernet (Default Mode): 200/100/50/25/10 Gb/s

Protocol Support:
· InfiniBand: IBTA v1.5 (note a) Auto-Negotiation: NDR200 (2 lanes x 100Gb/s per lane) port, HDR (50Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane) port, FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane)
· Ethernet Protocols: 200GAUI-2 C2M, 200GAUI-4 C2M, 200GBASE-CR4, 100GAUI-2 C2M, 100GAUI-1 C2M, 100GBASE-CR4, 100GBASE-CR2, 100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI

Capabilities:
· MCX755106AC-HEAT: Secure Boot Enabled, Crypto Enabled
· MCX755106AS-HEAT: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications:
· Voltage: 12V, 3.3VAUX
· Maximum current: 100mA
· Typical power with passive cables in PCIe Gen 5.0 x16: MCX755106AC-HEAT: 25.9W; MCX755106AS-HEAT: 24.9W
· The complete electrical and thermal specifications are provided in the "NVIDIA ConnectX-7 Electrical and Thermal Specifications" document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental:
· Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (note b)
· Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity
· Altitude (Operational): 3050m

Regulatory:
· Safety: CB / cTUVus / CE
· EMC: CE / FCC / VCCI / ICES / RCM / KC
· RoHS: RoHS Compliant

Notes:
a. The ConnectX-7 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.
b. The non-operational storage temperature specifications apply to the product without its package.

8.4 MCX715105AS-WEAT Specifications

The Socket-Direct ready cards kit does not include the PCIe passive auxiliary connection
card and two Cabline SA-II Plus harnesses. For more information, please refer to PCIe Auxiliary Card Kit.

Physical:
· Adapter Card Size: PCIe Half Height, Half Length, 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)
· Auxiliary PCIe Connection Card Size: 5.09 in. x 2.32 in. (129.30mm x 59.00mm)
· Two Cabline CA-II Plus harnesses (white and black)

Interfaces:
· See Supported Interfaces
· PCI Express Interface: Gen 5.0/4.0 SERDES @ 16/32GT/s, x16 lanes (4.0 and 3.0 compatible)
· Optional: Additional PCIe x16 Gen 4.0 @ SERDES 16GT/s through the PCIe auxiliary passive card and Cabline SA-II Plus harnesses
· Networking Ports: Single QSFP112 InfiniBand and Ethernet

Data Rate:
· InfiniBand: NDR/NDR200/HDR/HDR100/EDR/FDR/SDR
· Ethernet (Default Mode): 400/200/100/50/25/10 Gb/s

Protocol Support:
· InfiniBand: IBTA v1.5 (note a) Auto-Negotiation: NDR (4 lanes x 100Gb/s per lane) port, NDR200 (2 lanes x 100Gb/s per lane) port, HDR (50Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane) port, FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane)
· Ethernet Protocols: 400GAUI-4 C2M, 400GBASE-CR4, 200GAUI-2 C2M, 200GAUI-4 C2M, 200GBASE-CR4, 100GAUI-2 C2M, 100GAUI-1 C2M, 100GBASE-CR4, 100GBASE-CR2, 100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI

Capabilities: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications:
· Voltage: 12V, 3.3VAUX
· Maximum current: 100mA
· Typical power with passive cables in PCIe Gen 5.0 x16: 24.9W
· The complete electrical and thermal specifications are provided in the "NVIDIA ConnectX-7 Electrical and Thermal Specifications" document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental:
· Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (note b)
· Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity
· Altitude (Operational): 3050m

Regulatory:
· Safety: CB / cTUVus / CE
· EMC: CE / FCC / VCCI / ICES / RCM / KC
· RoHS: RoHS Compliant

Notes:
a. The ConnectX-7 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.
b. The non-operational storage temperature specifications apply to the product without its package.

8.5 MCX75510AAS-HEAT Specifications

The Socket-Direct ready cards kit does not include the PCIe passive auxiliary connection
card and two Cabline SA-II Plus harnesses. For more information, please refer to PCIe Auxiliary Card Kit.

Physical:
· Adapter Card Size: PCIe Half Height, Half Length, 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)
· Auxiliary PCIe Connection Card Size: 5.09 in. x 2.32 in. (129.30mm x 59.00mm)
· Two Cabline CA-II Plus harnesses (white and black)

Interfaces:
· See Supported Interfaces
· PCI Express Interface: Gen 5.0/4.0 SERDES @ 16/32GT/s, x16 lanes (4.0 and 3.0 compatible)
· Optional: Additional PCIe x16 Gen 4.0 @ SERDES 16GT/s through the PCIe auxiliary passive card and Cabline SA-II Plus harnesses
· Networking Ports: Single OSFP InfiniBand

Data Rate: InfiniBand: NDR200/HDR/HDR100/EDR/FDR/SDR

Protocol Support: InfiniBand: IBTA v1.5 (note a) Auto-Negotiation: NDR200 (2 lanes x 100Gb/s per lane) port, HDR (50Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane) port, FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane)

Capabilities: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications:
· Voltage: 12V, 3.3VAUX
· Maximum current: 100mA
· Typical power with passive cables in PCIe Gen 5.0 x16: 19.6W
· The complete electrical and thermal specifications are provided in the "NVIDIA ConnectX-7 Electrical and Thermal Specifications" document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental:
· Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (note b)
· Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity
· Altitude (Operational): 3050m

Regulatory:
· Safety: CB / cTUVus / CE
· EMC: CE / FCC / VCCI / ICES / RCM / KC
· RoHS: RoHS Compliant

Notes:
a. The ConnectX-7 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.
b. The non-operational storage temperature specifications apply to the product without its package.

8.6 MCX75510AAS-NEAT Specifications

The Socket-Direct ready cards kit does not include the PCIe passive auxiliary connection
card and two Cabline SA-II Plus harnesses. For more information, please refer to PCIe Auxiliary Card Kit.

Physical:
· Adapter Card Size: PCIe Half Height, Half Length, 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)
· Auxiliary PCIe Connection Card Size: 5.09 in. x 2.32 in. (129.30mm x 59.00mm)
· Two Cabline CA-II Plus harnesses (white and black)

Interfaces:
· See Supported Interfaces
· PCI Express Interface: Gen 5.0/4.0 SERDES @ 16/32GT/s, x16 lanes (4.0 and 3.0 compatible)
· Optional: Additional PCIe x16 Gen 4.0 @ SERDES 16GT/s through the PCIe auxiliary passive card and Cabline SA-II Plus harnesses
· Networking Ports: Single OSFP InfiniBand

Data Rate: InfiniBand: NDR/NDR200/HDR/HDR100/EDR/FDR/SDR

Protocol Support: InfiniBand: IBTA v1.5 (note a) Auto-Negotiation: NDR (4 lanes x 100Gb/s per lane) port, NDR200 (2 lanes x 100Gb/s per lane) port, HDR (50Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane) port, FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane)

Capabilities: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications:
· Voltage: 12V, 3.3VAUX
· Maximum current: 100mA
· Typical power with passive cables in PCIe Gen 5.0 x16: 24.9W
· The complete electrical and thermal specifications are provided in the "NVIDIA ConnectX-7 Electrical and Thermal Specifications" document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental:
· Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (note b)
· Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity
· Altitude (Operational): 3050m

Regulatory:
· Safety: CB / cTUVus / CE
· EMC: CE / FCC / VCCI / ICES / RCM / KC
· RoHS: RoHS Compliant

Notes:
a. The ConnectX-7 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.
b. The non-operational storage temperature specifications apply to the product without its package.

8.7 MCX713106AC-CEAT and MCX713106AS-CEAT Specifications

Physical

Adapter Card Size: PCIe Half Height, Half Length 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)

Interfaces

See Supported Interfaces PCI Express Gen 4.0/5.0: SERDES @ 16/32GT/s, x16 lanes (4.0 and 3.0 compatible)

76

Physical

Adapter Card Size: PCIe Half Height, Half Length 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)

Networking Ports: Dual-port QSFP112 Ethernet (copper and optical)

Capabilities MCX713106AC- Secure Boot Enabled, Crypto Enabled CEAT

MCX713106AS- Secure Boot Enabled, Crypto Disabled CEAT

Protocol Support

Data Rate

Ethernet

100/50/40/25/10/1GbE

Ethernet Protocols: 100GAUI-2 C2M, 100GAUI-1 C2M, 100GBASE-CR4, 100GBASE-CR2,
100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2 , 40GBASE- CR4,
40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M , XLPPI, SFI

Electrical and Thermal Specification
s

Voltage: 12V, 3.3VAUX Maximum current: 100mA

Typical power with passive cables in PCIe Gen 5.0 x16

MCX713106AC-CEAT MCX713106AS-CEAT

17.5W 16.8W

The complete electrical and thermal specifications are provided in “NVIDIA ConnectX-7 Electrical and Thermal Specifications” document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environment Temperature al
Humidity

Operational Non-operational Operational

0°C to 55°C -40°C to 70°Cb 10% to 85% relative humidity

Non-operational

10% to 90% relative humidity

Altitude (Operational)

3050m

Regulatory

Safety: CB / cTUVus / CE
EMC: CE / FCC / VCCI / ICES / RCM / KC

RoHS: RoHS Compliant

Notes:
a. The ConnectX-7 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.
b. The non-operational storage temperature specifications apply to the product without its package.
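The PCI Express rows in these tables describe the electrical capability of the card; the link a given server actually negotiates can be narrower or slower (for example, in a Gen 4.0 slot). The Python sketch below is one illustrative way to read the negotiated link from Linux sysfs for any Mellanox/NVIDIA PCI device. It is not an NVIDIA-provided tool: the vendor ID 0x15b3 and the sysfs attribute names (vendor, device, current_link_speed, current_link_width, max_link_speed, max_link_width) are standard Linux PCI attributes, and everything else in the script is an assumption made for illustration.

#!/usr/bin/env python3
"""Illustrative sketch (not an NVIDIA tool): report the negotiated PCIe link
speed and width of Mellanox/NVIDIA (vendor ID 0x15b3) devices via Linux sysfs."""

from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")
MELLANOX_VENDOR_ID = "0x15b3"  # PCI vendor ID used by Mellanox/NVIDIA NICs


def read_attr(dev: Path, name: str) -> str:
    """Return a sysfs attribute as a stripped string, or 'n/a' if unreadable."""
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "n/a"


def main() -> None:
    for dev in sorted(PCI_DEVICES.iterdir()):
        if read_attr(dev, "vendor") != MELLANOX_VENDOR_ID:
            continue
        print(f"{dev.name}: device {read_attr(dev, 'device')}, "
              f"negotiated {read_attr(dev, 'current_link_speed')} "
              f"x{read_attr(dev, 'current_link_width')}, "
              f"maximum {read_attr(dev, 'max_link_speed')} "
              f"x{read_attr(dev, 'max_link_width')}")


if __name__ == "__main__":
    main()

If the reported width is below x16, or the speed is below the expected transfer rate, the limitation is usually the slot or the server BIOS configuration rather than the adapter.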

8.8 MCX713106AC-VEAT and MCX713106AS-VEAT Specifications

Physical

Adapter Card Size: PCIe Half Height, Half Length 2.71 in. x 6.6 in. (68.90mm x 167.65 mm)

Interfaces

See Supported Interfaces
PCI Express Gen 5.0: SERDES @ 16/32GT/s, x16 lanes (4.0, 3.0, 2.0 and 1.1 compatible)

Networking Ports: Dual-port QSFP112 Ethernet (copper and optical)


Protocol Support

Data Rate

Ethernet: 200/100/50/40/25/10/1 GbE

Ethernet Protocols: 200GAUI-2 C2M, 200GAUI-4 C2M, 200GBASE-CR4, 100GAUI-2 C2M, 100GAUI-1 C2M, 100GBASE-CR4, 100GBASE-CR2, 100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI (the lane arithmetic behind these names is illustrated after the notes at the end of this table)

Capabilities

MCX713106AC-VEAT: Secure Boot Enabled, Crypto Enabled
MCX713106AS-VEAT: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications

Voltage: 12V, 3.3VAUX
Maximum current: 100mA

Maximum power available through QSFP112 cage: 11W per port (not thermally supported), 5.1W per port (thermally supported)

The complete electrical and thermal specifications are provided in the “NVIDIA ConnectX-7 Electrical and Thermal Specifications” document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental

Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (a)
Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity

Altitude (Operational): 3050m

Regulatory

Safety: CB / cTUVus / CE
EMC: CE / FCC / VCCI / ICES / RCM / KC

RoHS: RoHS Compliant

Notes:
a. The non-operational storage temperature specifications apply to the product without its package.
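The Ethernet protocol names in the table above encode the lane arithmetic behind each speed: the trailing digit of the GAUI / BASE-CR variants is the lane count. As a hedged reading aid (not additional specification data), the 200GbE and 100GbE entries decompose as:

\begin{align*}
\text{200GAUI-4, 200GBASE-CR4} &: 4 \times 50\ \text{Gb/s}  = 200\ \text{Gb/s} \\
\text{200GAUI-2}               &: 2 \times 100\ \text{Gb/s} = 200\ \text{Gb/s} \\
\text{100GAUI-2, 100GBASE-CR2} &: 2 \times 50\ \text{Gb/s}  = 100\ \text{Gb/s} \\
\text{100GAUI-1, 100GBASE-CR1} &: 1 \times 100\ \text{Gb/s} = 100\ \text{Gb/s} \\
\text{100GBASE-CR4, CAUI-4}    &: 4 \times 25\ \text{Gb/s}  = 100\ \text{Gb/s}
\end{align*}

The lower speeds follow the same pattern down to single-lane 10GBASE-R and 1000BASE-CX.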

8.9 MCX713104AC-ADAT and MCX713104AS-ADAT Specifications

The physical board dimensions comply with the PCI Express Card Electromechanical (CEM) Specification Revision 4.0, except for minor differences in the edge-finger alignment, the bracket mounting scheme, and the low-profile bracket opening; these deviations result from the mechanical constraint of the single quad-port SFP56 cage. It is recommended to use the 3D .stp file; please contact your NVIDIA sales representative to obtain the mechanical simulation.

Physical

Adapter Card Size: PCIe Half Height, Half Length 2.71 in. x 5.64 in. (68.90mm x 143.50 mm)

Interfaces

See Supported Interfaces
PCI Express Gen 4.0: SERDES @ 16GT/s, x16 lanes (4.0 and 3.0 compatible); a rough throughput check against this link follows the notes at the end of this table
Networking Port: Quad-port SFP56 Ethernet (copper and optical)


Protocol Support

Data Rate

Ethernet: 50/25GbE

Ethernet Protocols: 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI

Capabilities

MCX713104AC-ADAT: Secure Boot Enabled, Crypto Enabled
MCX713104AS-ADAT: Secure Boot Enabled, Crypto Disabled

Electrical and Thermal Specifications

Voltage: 12V, 3.3VAUX
Maximum current: 100mA

Typical power with passive cables in PCIe Gen 4.0 x16:
MCX713104AC-ADAT: 15.8W
MCX713104AS-ADAT: 15.1W

The complete electrical and thermal specifications are provided in the “NVIDIA ConnectX-7 Electrical and Thermal Specifications” document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.

Environmental

Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (a)
Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity

Altitude (Operational): 3050m

Regulatory

Safety: CB / cTUVus / CE
EMC: CE / FCC / VCCI / ICES / RCM / KC

RoHS: RoHS Compliant

Notes:
a. The non-operational storage temperature specifications apply to the product without its package.
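As a rough consistency check of the quad-port 50GbE configuration against the PCIe Gen 4.0 x16 host interface (an illustrative back-of-the-envelope calculation based on the published 16GT/s per-lane rate and PCIe 128b/130b encoding, not a performance guarantee):

\[
16\ \text{GT/s} \times 16\ \text{lanes} \times \tfrac{128}{130} \approx 252\ \text{Gb/s} \approx 31.5\ \text{GB/s per direction}
\]

which comfortably exceeds the 4 x 50Gb/s = 200Gb/s aggregate of the four network ports, leaving headroom for protocol overhead.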

8.10 MCX713114TC-GEAT Specifications

Physical

Adapter Card Size: PCIe Full Height, Half Length 4.37 in. x 6.6 in. (111.15mm x 167.65 mm)

Interfaces

See Supported Interfaces
PCI Express Gen 4.0: SERDES @ 16GT/s, x16 lanes (4.0 and 3.0 compatible)

Networking Port: Quad-port SFP56 Ethernet (copper and optical)

Protocol Support

Data Rate

Ethernet: 50/25 GbE

Ethernet Protocols: 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI

Capabilities

MCX713114TC-GEAT: Enhanced-SyncE & PTP, PPS In/Out, Secure Boot, Crypto Enabled

Electrical and Thermal Specifications

Voltage: 12V, 3.3VAUX
Maximum current: 100mA

Typical power with passive cables in PCIe Gen 4.0 x16: 15.8W

The complete electrical and thermal specifications are provided in the “NVIDIA ConnectX-7 Electrical and Thermal Specifications” document. You can access the document either by logging into NVOnline or by contacting your NVIDIA representative.


Environmental

Temperature: Operational: 0°C to 55°C; Non-operational: -40°C to 70°C (a)
Humidity: Operational: 10% to 85% relative humidity; Non-operational: 10% to 90% relative humidity

Altitude (Operational): 3050m

Regulatory

Safety: CB / cTUVus / CE
EMC: CE / FCC / VCCI / ICES / RCM / KC

RoHS: RoHS Compliant

Notes:
a. The non-operational storage temperature specifications apply to the product without its package.

8.11 Cards Mechanical Drawings and Dimensions

All dimensions are in millimeters. The PCB mechanical tolerance is +/- 0.13mm.

Dual-port x16 QSFP112 Adapter Card

Dual-port x16 QSFP112 Socket-Direct Ready Adapter Card

Single-port x16 QSFP112 Adapter Card

HHHL Quad-port SFP56 Adapter Card
