Lenovo QLogic 12200-BS21 InfiniBand Switches Owner’s Manual

October 27, 2023
Lenovo


QLogic 12200-BS21, 12800-040, and 12800-180 InfiniBand Switches

Product Guide (withdrawn product)

InfiniBand is an industry-standard, high-performance interconnect for clusters and enterprise grids. This fabric creates clusters that address the demanding requirements of scientific, technical, and financial applications. InfiniBand solutions are designed for high availability and can also deliver the scalability required by distributed database processing.

The QLogic 12200-BS21 is a 36-port, 40 Gbps InfiniBand switch that cost-effectively links workgroup resources into a cluster. This compact 1U solution is used for building small node-count fabrics. Included with the switch are redundant power supplies, power cords, a rack mount kit, and the QLogic InfiniBand Fabric Suite.

The QLogic 12800-040 is a 72-port, 40 Gbps InfiniBand switch that links resources using a scalable, low-latency fabric. The 12800-040 supports up to four 18-port QDR leaf modules. Included with the switch are redundant QDR Management Modules, redundant power supplies, redundant fans, power cords, a rack mount kit, and the QLogic InfiniBand Fabric Suite.

The QLogic 12800-180 is a 324-port, 40 Gbps InfiniBand switch designed for larger clusters, supporting up to eighteen 18-port QDR leaf modules. Included with the 12800-180 are redundant QDR Management Modules, a full complement of spine modules to provide a 100% nonblocking fabric for all ports, redundant power supplies, redundant fans, power cords, a rack mount kit, and the QLogic InfiniBand Fabric Suite.

These QDR switches enable you to build high-performance clusters and grids that let your applications and systems realize their full potential.

Figure 1 shows the QLogic 12200 QDR InfiniBand switch.

Figure 1. QLogic 12200 QDR InfiniBand switch

Did you know?
The QLogic 12800 and 12200 InfiniBand switch series are part of the IBM® Power Systems™ Cluster, a fully integrated HPC solution. IBM clustering solutions include servers, storage, and industry-leading OEM interconnects that are factory-integrated, fully tested, and delivered to your door, ready to plug into your data center, all with a single point of contact for support.

Part number information
Table 1 shows the part numbers to order these modules and additional options for them.
Table 1. IBM part numbers for ordering

Description | IBM part number | Feature
QLogic 12800-180 324-port QDR InfiniBand Switch | 7874-324 | N/A
QLogic 12800-040 72-port QDR InfiniBand Switch | 7874-072 | N/A
QLogic 12200 36-port InfiniBand Switch | 7874-036 | N/A
QLogic 18-Port 4x QDR Ultra-HP InfiniBand Leaf Module for the 12800-040 and 12800-180 QLogic QDR InfiniBand Switches | 7874-072, 7874-324 | 3314
QDR Fabric Director Leaf Module Blank* | 7874-324 | 3315
QDR Fabric Director Leaf Module Blank* | 7874-072 | 3315
IBM 19-inch rack for the QLogic 12200 36-port InfiniBand Switch (7874-036) | 7014-T00, 7014-T42 (1U) | 0379
IBM 19-inch rack for the QLogic 12800-040 72-port QDR InfiniBand Switch (7874-072) | 7014-T00, 7014-T42 (5U) | 0380
IBM 19-inch rack for the QLogic 12800-180 324-port QDR InfiniBand Switch (7874-324)** | 7014-T42 (14U) | 0381

  * The Fabric Director Leaf Module Blank fills unused leaf module positions in the 7874-072 and 7874-324 QLogic QDR InfiniBand switches.
  ** The IBM 7874-036 and IBM 7874-072 can be mounted in any IBM 19-inch rack. To manage the larger volume of InfiniBand cables, the IBM 7874-324 must be installed in a 7014-T42 rack with the rack extender feature. The rack extender is 20 inches deep.

Note: Larger configurations require an external Host Subnet Manager (HSM), which runs on an IBM System x3550 or IBM System x3650 with an InfiniBand Host Channel Adapter and InfiniBand cable. Commercial configurations can use the Embedded Subnet Manager (ESM), which is monitored from the network. Configurations with more than 144 nodes require the HSM.
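The sizing rule in this note can be expressed compactly. The sketch below is only illustrative; `subnet_manager_choice` is a hypothetical helper name, not part of any QLogic or IBM tooling:

```python
def subnet_manager_choice(node_count: int) -> str:
    """Pick a subnet manager per the sizing note above.

    Fabrics larger than 144 nodes require the external Host Subnet
    Manager (HSM); smaller commercial configurations may use the
    Embedded Subnet Manager (ESM) instead.
    """
    if node_count > 144:
        return "HSM"
    return "ESM or HSM"
```

For example, `subnet_manager_choice(288)` returns `"HSM"`, while a 64-node fabric is free to use either manager.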

Figure 2 shows the 12800-180 switch module.

Figure 2. QLogic 12800-180 InfiniBand switch

The QLogic 12800-180, 12800-040, and 12200-BS21 QDR InfiniBand switches, based on QLogic TrueScale ASIC technology, deliver the next evolution in switch fabric performance for High Performance Computing (HPC) environments.

These switches deliver high port density and low power per port:

  • The 12800-180 provides 324 QDR ports using 18-port Ultra-High Performance (UHP) leaf modules in a 14U chassis.
  • The 12800-040 provides 72 QDR ports using 18-port UHP leaf modules in a 5U chassis.
  • The 12200-BS21 provides 36 QDR ports in a 1U chassis.

The high-availability 12800 design includes hot swappable InfiniBand spine and leaf modules, fully redundant power and cooling, and redundant management processors supporting chassis management (CLI and GUI), as well as embedded InfiniBand Subnet Managers (SMs). The 12800 also supports advanced QLogic fabric features including adaptive routing, Virtual Fabrics (vFabric), Quality of Service (QoS), and management wizards for automated installation, configuration, and monitoring to maximize operational efficiency.

The QLogic 12800 and 12200 InfiniBand switches include redundant power supplies, power cords, a rack mount kit, and the QLogic InfiniBand Fabric Suite.

Benefits
The QLogic 12800 and 12200 InfiniBand switches offer the following benefits:

  • Low latency: The QLogic 12800 and 12200 provide scalable, predictable low latency, even at 90% traffic load. Predictable latency means that HPC applications can scale easily without diminished cluster performance or costly system-tuning efforts.
  • Flexible partitioning: The QLogic 12800 and 12200 advanced design is based on an architecture that provides comprehensive virtual fabric partitioning capabilities, enabling the InfiniBand fabric to support the evolving requirements of an organization. The TrueScale architecture, together with IFS, allows the fabric to be shared by mission-critical applications while delivering maximum bandwidth utilization.
  • Modular design: InfiniBand port, power, cooling, and management modules are common in the series, giving customers the flexibility to deploy and grow HPC environments in a cost-effective fashion.
  • Investment protection: The QLogic 12800 and 12200 adhere to the InfiniBand Trade Association Version 1.2 specification, ensuring the ability to interoperate with all other IBTA-compliant devices.
  • Highly reliable: This system is designed for high availability with features that include port-to-port and module-to-module failover, non-disruptive firmware upgrades, component-level diagnostics and alarming, and both in-band and out-of-band management.
  • Easy to manage: The 12800 and 12200 use QLogic’s advanced IFS software for quicker installation and configuration. IFS has advanced tools to verify fabric configuration, topology, and performance. Faults are automatically isolated to the component level and reported.
  • Simple installation and configuration: Using the installation and configuration wizards contained in the IFS package allows users to bring up fabrics in days instead of weeks.
  • Power optimized: Maximum performance is delivered with minimal power and cooling requirements as part of QLogic’s Star Power commitment to developing green solutions for the data center.

Features and specifications
The QLogic 12800 InfiniBand switches include the following features and functions:

  • Between 36 and 324 ports of InfiniBand QDR (40 Gbps) performance with support for DDR and SDR
    • 40/20/10 Gbps auto-negotiation links
    • Supports Quad Small Form Factor Pluggable (QSFP) optical cable specifications
  • TrueScale architecture, with scalable, predictable low latency
    • Scales to 25.92 Tbps aggregate bandwidth
    • Switching latency: 140 to 420 ns
  • Multiple virtual lanes (VLs) per physical port
    • Virtual lanes: Eight plus one management
  • Maximum MTU size: 4096 bytes
  • Maximum multicast table size: 1024 entries
  • Supports virtual fabric partitioning
  • Fully redundant system design
  • Option to use UHP leaf modules for maximum connectivity and performance (only on QLogic 12800 switches)
    • UHP module: 18 QDR ports
  • Redundant QDR managed spine modules (only on QLogic 12800-180 switch) — a full complement of spine modules to provide a 100% nonblocking fabric for all ports
  • Integrated chassis management capabilities for installation, configuration, and ongoing monitoring
  • Optional InfiniBand Fabric Suite (IFS) management solution that provides expanded fabric views and fabric tools
  • Complies with InfiniBand Trade Association (IBTA) Version 1.2 standard

The QLogic 12200 InfiniBand switch includes the following features and functions:

  • Thirty-six ports of InfiniBand QDR (40 Gbps) performance with support for DDR and SDR
    • 40/20/10-Gbps auto-negotiation links
    • Supports Quad Small Form Factor Pluggable (QSFP) optical cable specifications
  • TrueScale architecture, with scalable, predictable low latency
    • 2.88 Tbps aggregate bandwidth
    • Switching latency: < 140 ns
  • Multiple Virtual Lanes (VLs) per physical port
    • Virtual lanes: Eight plus one management
  • Maximum MTU size: 4096 bytes
  • Maximum multicast table size: 1024 entries
  • Supports virtual fabric partitioning
  • Redundant power (12200 models 0449-028 and 0449-029)
  • External chassis management via optional InfiniBand Fabric Suite (IFS) management solution, which provides an expanded set of fabric views and fabric tools.
  • Complies with InfiniBand Trade Association (IBTA) Version 1.2 standard

QLogic 12800-180 and 12800-040
The QLogic 12800-180 has the following specifications:

  • Eighteen to 324 ports
  • 25.92 Tbps switching capacity
  • Supports up to 18 leaf modules

The QLogic 12800-040 has the following specifications:

  • Eighteen to 72 ports
  • 5.76 Tbps switching capacity
  • Supports up to 4 leaf modules
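The port and capacity figures above are internally consistent: each 4X QDR port runs at 40 Gbps per direction, so every port contributes 80 Gbps of aggregate switching capacity, and the 12800 models scale in steps of one 18-port leaf module. A minimal sketch of that arithmetic (illustrative only; the helper names are not from any vendor tool):

```python
QDR_GBPS_PER_DIRECTION = 40   # 4X QDR link rate
PORTS_PER_LEAF_MODULE = 18    # UHP leaf module on the 12800 models

def aggregate_tbps(ports: int) -> float:
    """Aggregate switching capacity in Tbps, counting both directions."""
    return ports * QDR_GBPS_PER_DIRECTION * 2 / 1000

def max_ports(leaf_modules: int) -> int:
    """Maximum port count for a 12800 chassis with the given leaf count."""
    return leaf_modules * PORTS_PER_LEAF_MODULE

print(max_ports(4), aggregate_tbps(72))    # 12800-040:  72 ports,  5.76 Tbps
print(max_ports(18), aggregate_tbps(324))  # 12800-180: 324 ports, 25.92 Tbps
print(aggregate_tbps(36))                  # 12200-BS21: 2.88 Tbps
```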

The QLogic 12800 InfiniBand switch family supports the following management methods:

  • Command-line interface
  • Optional external server-based InfiniBand-compliant subnet manager
  • Optional embedded fabric management
  • IBTA-compliant SMA, PMA, and BMA
  • Chassis management GUI
  • SNMP support
  • Access methods
    • 10/100Base-T Ethernet
    • Serial port (RS-232 with DB9)

QLogic 12200-BS21
The QLogic 12200-BS21 has the following specifications:

  • 36 ports
  • 2.88 Tbps switching capacity

The QLogic 12200 InfiniBand switch supports the following management methods:

  • QLogic InfiniBand Fabric Suite
  • Optional external server-based InfiniBand compliant subnet manager
  • IBTA compliant SMA and PMA

Supported servers, adapters, and cables
The QLogic 12800 and 12200 InfiniBand switches support IBM Power Systems nodes with InfiniBand adapters installed in Power Systems or BladeCenter® servers. The tables below show core solution components from the IBM Cluster systems release guide.

QDR InfiniBand solution with Power Systems servers and QLogic 12800 series switches
Table 2 shows the core components needed to create a full-speed QDR (40 Gbps) InfiniBand solution using QLogic 12800 InfiniBand switches and Power Systems servers.
Table 2. QDR InfiniBand solution core components for Power Systems

Description | IBM part number or machine type

InfiniBand Host Channel Adapters
PCIe2 LP 2-Port 4X IB QDR Adapter 40 Gb | Feature code 5283
PCIe2 2-Port 4X IB QDR Adapter 40 Gb | Feature code 5285

Compute nodes
IBM Power 710 | 8231-E1C
IBM Power 720 | 8202-E4B, 8202-E4C
IBM Power 730 | 8231-E2C
IBM Power 740 | 8205-E6B, 8205-E6C
IBM Power 770 | 9117-MMC
IBM Power 780 | 9179-MHC

Cables
10 m Quad Data Rate InfiniBand Optical Cable, QSFP/QSFP | Feature code 3290
30 m Quad Data Rate InfiniBand Optical Cable, QSFP/QSFP | Feature code 3293
1 m (3.3 ft) IB/E’Net 40G Copper Cable, QSFP/QSFP | Feature code 3287
3 m (9.8 ft) IB/E’Net 40G Copper Cable, QSFP/QSFP | Feature code 3288
5 m QDR IB/E’Net Copper Cable, QSFP/QSFP | Feature code 3289

  • A PCIe Gen2 riser is required.

DDR InfiniBand (20 Gbps) solution with BladeCenter servers and QLogic 12800 and QLogic 12200 series switches
Table 3 shows the core components needed to create a full-speed DDR (20 Gbps) InfiniBand solution using QLogic 12800 and 12200 InfiniBand switches and BladeCenter servers.
Table 3. DDR InfiniBand solution core components for BladeCenter

Description | IBM part number or machine type | Feature code

InfiniBand Host Channel Adapters
QLogic 2-Port QDR 40 Gbps InfiniBand | 7891-74x, 7891-73x | 8272

Chassis and compute nodes
IBM BladeCenter H chassis | 7779-BCH |
IBM BladeCenter HT chassis | 8750-HC1 |
IBM BladeCenter S chassis | 7779-BCS |
IBM BladeCenter PS703 server | 7891-73x |
IBM BladeCenter PS704 server | 7891-74x |
IBM BladeCenter PL7B2 server | 7058-76L |
IBM BladeCenter JS22 server | 7998 |
IBM BladeCenter JS23 server | 7778 |
IBM BladeCenter JS43 server | 7778-23x |

Pass-thru modules and cables
4X InfiniBand Pass-thru HSSM for BladeCenter* | 43W4419 |
3 m QLogic Optical DDR InfiniBand QSFP-to-CX4 Cable | 59Y1908 |
10 m QLogic Optical DDR InfiniBand QSFP-to-CX4 Cable | 59Y1912 |
30 m QLogic Optical DDR InfiniBand QSFP-to-CX4 Cable | 59Y1916 |

  * The 4X InfiniBand Pass-thru high-speed switch module supports DDR speeds but does not support QDR speeds.
Notices

Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your local Lenovo representative for information on the products and services currently available in your area. Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any other product, program, or service. Lenovo may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

Lenovo (United States), Inc.
8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

The products described in this document are not intended for use in implantation or other life support applications where malfunction may result in injury or death to persons. The information contained in this document does not affect or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or third parties. All information contained in this document was obtained in specific environments and is presented as an illustration. The result obtained in other operating environments may vary. Lenovo may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was determined in a controlled environment. Therefore, the result obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

© Copyright Lenovo 2022. All rights reserved.

This document, TIPS0821, was created or updated on October 14, 2011.


This document is available online at https://lenovopress.lenovo.com/TIPS0821.

Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
BladeCenter®
Other company, product, or service names may be trademarks or service marks of others.

