Lenovo QLogic 12200 and 12300 InfiniBand Edge Switches User Guide

October 27, 2023

QLogic 12200 and 12300 InfiniBand Edge Switches for the IBM Intelligent Cluster and IBM iDataPlex

Product Guide (withdrawn product)
Over the past 10 years, InfiniBand networks have become the preferred means for interconnecting high performance computing (HPC) resources. The QLogic 12200 and 12300 are 36-port Quad Data Rate
(QDR, 40 Gbps) InfiniBand switches designed to cost-effectively link workgroup resources into a cluster or provide an edge switch option for a larger fabric. The 12200 is a fixed configuration, externally managed InfiniBand switch; the 12300 is a modular, internally or externally managed InfiniBand switch.
Both switches are part of the 12000 Series of products, which delivers an unmatched set of high-speed networking features and functions.
Figure 1 shows the rear of the 12200 switch. The 12300 is very similar in port layout.

Figure 1. QLogic 12200 QDR InfiniBand switch
Did you know?
The QLogic 12200 and 12300 InfiniBand switches are part of the IBM Intelligent Cluster solution (formerly IBM Systems Cluster 1350). IBM Intelligent Cluster is your key to a fully integrated HPC solution. IBM clustering solutions include servers, storage, and industry-leading OEM interconnects that are factory-integrated, fully tested, and delivered to your door, ready to plug into your data center, all with a single point of contact for support.

Part number information

Table 1 shows the part numbers to order these switches.

Table 1. IBM part numbers for ordering

Description| IBM part number
QLogic 12200 36-port Redundant Power QDR IB Switch Bundle| 0449-028
QLogic 12200 for iDPx 36-port Redundant Power QDR IB Switch Bundle| 0449-029
QLogic 12200 36-port QDR InfiniBand Switch Bundle| 0449-019
QLogic 12200 (iDPx) 36-port QDR InfiniBand Switch Bundle| 0449-020
QLogic 12200 Management Module| 68Y6995 (feature code 3028)
QLogic 12300 36-port QDR InfiniBand Switch Bundle| 0449-021

The QLogic 12200 and 12300 QDR InfiniBand edge switches are based on QLogic TrueScale ASIC technology and deliver the next evolution in switching for high performance computing (HPC). The available models are tailored for the requirements of the solution:

  • The QLogic 12200 Fixed Configuration switch for the IBM Intelligent Cluster (0449-019 and 0449-028) provides 36 QDR ports in a compact 1U 14.5-inch deep fixed form factor, making it the ideal edge switch for large HPC fabric deployments. Model 0449-028 has two power supplies for redundancy and supports the optional management module, part number 68Y6995.
  • The QLogic 12200 Fixed Configuration switch for iDataPlex (0449-020 and 0449-029) provides 36 QDR ports in a compact 1U 14.5-inch deep fixed form factor. It includes AC power entry, air flow, and mounting hardware tailored for IBM iDataPlex, making it the ideal embedded or edge switch for iDataPlex solutions deploying InfiniBand. Model 0449-029 has two power supplies for redundancy and supports the optional management module, part number 68Y6995.
  • The QLogic 12300 Configurable switch for IBM Intelligent Cluster (0449-021) is an ideal edge switch for enterprise clustering. It provides 36 QDR ports in a standard 1U 26-inch deep form factor and includes a management processor supporting chassis management and an embedded InfiniBand Subnet Manager. The highly available 12300 design is built around state-of-the-art fault detection and recovery capabilities. It ships with redundant hot-swap power and cooling modules.

Benefits
The QLogic 12200 and 12300 InfiniBand switches offer the following benefits:

  • Low latency: The QLogic 12200 and 12300 provide scalable, predictable low latency, even at 90% traffic utilization. Predictable latency means HPC applications can be scaled easily without worrying about diminished cluster performance or costly system tuning efforts.
  • Flexible partitioning: The advanced design of the QLogic 12200 and 12300 is based on an architecture that provides comprehensive virtual fabric partitioning, enabling the InfiniBand fabric to support the evolving requirements of an organization. The TrueScale architecture, together with IFS, allows the fabric to be shared by mission-critical applications while delivering maximum bandwidth utilization.
  • Investment protection: The 12000 series of switch products adheres to the IBTA version 1.2 specification, ensuring interoperability with all other IBTA compliant devices.
  • Highly reliable: The highly available 12200 and 12300 design is built around state-of-the-art fault detection and recovery capabilities. In addition, the 12300 ships with hot-swappable, redundant power and cooling modules.
  • Easy to manage: The 12200 takes advantage of QLogic’s advanced InfiniBand Fabric Suite software to facilitate quicker installation and configuration. IFS has advanced tools to verify fabric configuration, topology, and performance. Faults are automatically isolated to the component level and reported. In addition, an optional embedded capability of the 12300 can be used for switch management.
  • Power optimized: Maximum performance is delivered with minimal power and cooling requirements as part of QLogic’s Star Power commitment to developing green solutions for the data center.

Features and specifications

The QLogic 12200 and 12300 InfiniBand switches include the following features and functions:

  • 36 ports of InfiniBand QDR (40 Gbps) performance with support for DDR and SDR
    • 40/20/10-Gbps auto-negotiation links
    • Supports Quad Small Form Factor Pluggable (QSFP) optical and copper cable specifications
  • TrueScale architecture, with scalable, predictable low latency
    • 2.88 Tbps aggregate bandwidth (see the arithmetic sketch after this list)
    • Switching latency: < 140 ns
  • Multiple Virtual Lanes (VLs) per physical port
    • Virtual lanes: 8 plus 1 management
  • Maximum MTU size: 4096 bytes
  • Maximum multicast table size: 1024 entries
  • Supports virtual fabric partitioning
  • Redundant power (12200 models 0449-028 and 0449-029 and 12300 model 0449-021)
  • External chassis management via optional InfiniBand Fabric Suite (IFS) management solution, which provides an expanded set of fabric views and fabric tools.
  • Complies with InfiniBand Trade Association (IBTA) version 1.2 standard
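
The headline numbers above follow from simple link arithmetic: a 4X InfiniBand port carries four lanes, giving per-port signaling rates of 4 x 10 Gbps = 40 Gbps (QDR), 4 x 5 Gbps = 20 Gbps (DDR), and 4 x 2.5 Gbps = 10 Gbps (SDR). A plausible derivation of the 2.88 Tbps aggregate bandwidth figure, assuming it counts the raw QDR signaling rate in both directions on all 36 ports:

    % Aggregate bandwidth of a fully populated 36-port QDR switch
    \[
      36~\text{ports} \times 40~\frac{\text{Gbps}}{\text{port}} \times 2~\text{directions}
      = 2880~\text{Gbps} = 2.88~\text{Tbps}
    \]

Note that QDR links use 8b/10b encoding, so the usable data rate per port is 32 Gbps; per InfiniBand convention, the 40 Gbps figure quoted throughout this guide is the signaling rate.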

The QLogic 12200 InfiniBand switch supports the following management methods:

  • QLogic InfiniBand Fabric Suite 6.0
  • Optional external server-based InfiniBand compliant subnet manager
  • IBTA compliant SMA and PMA

The QLogic 12300 InfiniBand switch supports the following management methods:

  • QLogic InfiniBand Fabric Suite 6.0
  • Command line interface
  • Optional external server-based InfiniBand compliant subnet manager
  • Optional embedded fabric management
  • IBTA compliant SMA, PMA, and BMA
  • Chassis management GUI
  • SNMP support
  • Access methods:
    • 10/100Base-T Ethernet
    • Serial port (RS-232 with DB9)
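
Because both switches implement the IBTA-standard SMA and PMA agents, any compliant external tooling can discover and monitor them. As an illustration only (the guide does not prescribe specific tools), the following minimal Python sketch assumes a Linux management node with the OFED infiniband-diags utilities (ibswitches, perfquery) installed:

    # Hypothetical fabric health check from an external management node.
    # Assumes the OFED "infiniband-diags" package: ibswitches walks the
    # fabric via SMP queries, and perfquery reads IBTA PMA port counters.
    import subprocess

    def list_switches() -> str:
        """Discover all switches visible in the InfiniBand fabric."""
        return subprocess.run(["ibswitches"], capture_output=True,
                              text=True, check=True).stdout

    def port_counters(lid: int, port: int) -> str:
        """Read PMA performance and error counters for one switch port."""
        return subprocess.run(["perfquery", str(lid), str(port)],
                              capture_output=True, text=True,
                              check=True).stdout

    if __name__ == "__main__":
        print(list_switches())
        # LID 5, port 1 are illustrative values; take real LIDs from
        # the ibswitches output above.
        print(port_counters(5, 1))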

The following LEDs are located on QLogic 12200 InfiniBand switches:

  • One per InfiniBand port
  • Two for InfiniBand switch status

The following LEDs are located on QLogic 12300 InfiniBand switches:

  • One per InfiniBand port
  • One for 10/100 Ethernet interface
  • Two for InfiniBand switch status

QLogic InfiniBand Fabric Suite 6.0
InfiniBand Fabric Suite 6.0 is a new fabric management software product that enables high fabric performance. It incorporates Virtual Fabrics configurations with application-specific Class-of-Service (CoS), Adaptive Routing and Dispersive Routing, performance-enhanced versions of vendor-specific MPI libraries, and support for torus and mesh network topologies.
Features of IFS 6.0 include:

  • QLogic Virtual Fabrics – lets you set a Class-of-Service level for each application running on the fabric to ensure the appropriate bandwidth
  • QLogic Adaptive Routing – monitors the fabric and ensures that traffic is routed down the most efficient path thereby eliminating bottlenecks
  • QLogic Dispersive Routing – balances the traffic load on the fabric using QLogic Performance Scaled Messaging (PSM) to ensure high performance
  • MPI Library Support – supports the widely used Message Passing Interface (MPI) libraries to ensure efficient utilization (see the sketch after this list)
  • Advanced Topology Support – enables the use of Torus and Mesh topologies for application environments
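
The guide itself contains no benchmark code; as a hedged illustration of driving the fabric through an MPI library, the sketch below is a minimal ping-pong latency test using mpi4py (an assumption on our part; any MPI implementation layered on PSM or verbs would serve), launched across two cluster nodes with, for example, mpirun -np 2 python pingpong.py:

    # Minimal MPI ping-pong latency sketch (illustrative, not from the guide).
    # Run with exactly two ranks on two nodes attached to the switch.
    import time
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    assert comm.Get_size() == 2, "run with exactly two ranks"

    iters = 10000
    buf = bytearray(8)  # small message, so link latency dominates

    comm.Barrier()
    start = time.perf_counter()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        else:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = time.perf_counter() - start

    if rank == 0:
        # Each iteration is one round trip; half is the one-way latency.
        print(f"one-way latency: {elapsed / iters / 2 * 1e6:.2f} us")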

For more information, see the white paper “Enabling Efficient Performance at Scale: QLogic IFS 6.0” available from:
http://www.qlogic.com/NewsAndEvents/Documents/QLogic%20IFSV6%20InterSect360.pdf
Supported System x and BladeCenter servers, adapters, and cables
The QLogic 12200 and 12300 InfiniBand switches support IBM Intelligent Cluster nodes with InfiniBand adapters installed in the System x or BladeCenter servers. The tables below show core solution components from the IBM Intelligent Cluster Component Guide Release 10A.

QDR InfiniBand solution with System x servers and QLogic 12200 or 12300 series switches

Table 2 shows core components to create a full-speed QDR (40 Gbps) InfiniBand solution using QLogic 12200 or 12300 InfiniBand switches and System x servers.
Table 2. QDR InfiniBand solution core components for System x

Description| IBM part number or machine type

InfiniBand Host Channel Adapters
Mellanox ConnectX-2 VPI Single-port QSFP QDR IB/10GbE PCI-E 2.0 HCA| 81Y1531
Mellanox ConnectX-2 VPI Dual-port QSFP QDR IB/10GbE PCI-E 2.0 HCA| 81Y1535
Mellanox ConnectX Single-Port 4X QDR InfiniBand x8 PCI-E 2.0 HCA| 46M2203
Mellanox ConnectX Dual-Port 4X QDR InfiniBand x8 PCI-E 2.0 HCA| 46M2199
Compute Nodes
System x3450| 7948
System x3550 M3| 7944
System x3650 M3| 7945
System x3655| 7943
System x3455| 7940, 7941
System x3755| 7163
System x3850 X5| 7145
System x iDataPlex dx360 M2| 7321, 7323
System x iDataPlex dx360 M3| 6391
Cables
3m QLogic Optical QDR InfiniBand QSFP Cable| Feature code 1767
10m QLogic Optical QDR InfiniBand QSFP Cable| Feature code 1768
30m QLogic Optical QDR InfiniBand QSFP Cable| Feature code 1769
3m IBM Optical QDR InfiniBand QSFP Cable| Feature code 5989
10m IBM Optical QDR InfiniBand QSFP Cable| Feature code 5990
30m IBM Optical QDR InfiniBand QSFP Cable| Feature code 5991
0.5m QLogic Copper QDR InfiniBand QSFP 30AWG Cable| Feature code 3725
1m QLogic Copper QDR InfiniBand QSFP 30AWG Cable| Feature code 3726
3m QLogic Copper QDR InfiniBand QSFP 28AWG Cable| Feature code 3727

DDR InfiniBand (20 Gbps) solution with BladeCenter servers and QLogic 12200 or 12300 switches

Table 4 shows core components to create a full-speed DDR (20 Gbps) InfiniBand solution using QLogic 12200 or 12300 InfiniBand switches and BladeCenter servers.
Table 4. DDR InfiniBand solution core components for BladeCenter

Description| IBM part number or machine type

InfiniBand Host Channel Adapters
4X DDR InfiniBand CFFh Expansion Card for BladeCenter| 43W4423
Chassis and Compute Nodes
BladeCenter H chassis| 8852
BladeCenter HS21 server| 8853
BladeCenter HS21 XM server| 7995
BladeCenter HS22 server| 7870
BladeCenter LS22 server| 7901
BladeCenter LS42 server| 7902
BladeCenter JS22 server| 7998
BladeCenter QS22 server| 0793
Pass-thru Modules and Cables
4X InfiniBand Pass-thru HSSM for BladeCenter*| 43W4419
3m QLogic Optical DDR InfiniBand QSFP-to-CX4 Cable| 59Y1908
10m QLogic Optical DDR InfiniBand QSFP-to-CX4 Cable| 59Y1912
30m QLogic Optical DDR InfiniBand QSFP-to-CX4 Cable| 59Y1916

* The 4X InfiniBand Pass-thru high-speed switch module supports DDR speeds but does not support QDR speeds.

Related product families
Product families related to this document are the following:

  • Top-of-Rack Switches

Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your local Lenovo representative for information on the products and services currently available in your area. Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any other product, program, or service. Lenovo may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:
Lenovo (United States), Inc.
8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing
LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
The products described in this document are not intended for use in implantation or other life support applications where malfunction may result in injury or death to persons. The information contained in this document does not affect or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or third parties. All information contained in this document was obtained in specific environments and is presented as an illustration. The result obtained in other operating environments may vary. Lenovo may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was determined in a controlled environment. Therefore, the result obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
© Copyright Lenovo 2022. All rights reserved.
This document, TIPS0723, was created or updated on May 24, 2010.

Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at
https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
BladeCenter®
Intelligent Cluster
System x®
X5
iDataPlex®
Other company, product, or service names may be trademarks or service marks of others.
