Dell Technologies PowerFlex Appliance Software User Guide

August 15, 2024
Dell Technologies


Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Introduction

This document describes the high-level design of the PowerFlex appliance. It also describes the hardware and software components in each PowerFlex appliance.

The target audience for this document includes customers, sales engineers, field consultants, and advanced services specialists who want to deploy a virtualized infrastructure using PowerFlex appliance.

PowerFlex appliance architecture is based on Dell PowerEdge R650, R750, R6525, R7525, R640, R740xd, and R840 servers.

PowerFlex Manager provides the management and orchestration functionality for a PowerFlex appliance.

The dvswitch names are examples only and may not match the configured system. Do not change these names, or a data unavailable or data loss event may occur.

Dell PowerFlex appliance was previously known as Dell EMC VxFlex appliance. Similarly, Dell PowerFlex Manager was previously known as Dell EMC VxFlex Manager, and Dell PowerFlex was previously known as Dell EMC VxFlex OS. References in the documentation will be updated over time.

For additional PowerFlex appliance documentation, go to PowerFlex appliance technical documentation.

Revision history

| Date | Document revision | Description of changes |
|---|---|---|
| January 2024 | 5.5 | Updated Deployment options overview; added information about Storage schemas |
| November 2023 | 5.4 | Added support for Cisco Nexus 93180YC-FX3 |
| August 2023 | 5.3 | Added support for multi-subnet and multi-VLAN configurations |
| March 2023 | 5.2 | Added support for PowerFlex R7525 nodes |
| December 2021 | 5.1 | Updated architecture diagrams |
| November 2021 | 5.0 | Added content for PowerFlex management controller 2.0 |
| June 2021 | 4.0 | Added content for 3-node PowerFlex management controller |
| November 2020 | 3.0 | Updated System features and System architecture |
| June 2020 | 2.0 | Added content for native asynchronous replication, PowerFlex 3.5, and LACP bonded logical network |
| March 2020 | 1.1 | Removed all software version numbers from the publication |
| March 2020 | 1.0 | Initial release |

System overview

The PowerFlex appliance is a fully integrated, preconfigured, and validated hyperconverged infrastructure appliance that integrates PowerFlex virtualization software with Dell PowerEdge servers.

PowerFlex appliance is used in hyperconverged or server SAN architectures, heterogeneous virtualized environments, and high-performance databases.

PowerFlex appliance provides large storage capacity and scalability, enabling you to start small and grow in discrete increments. The flexible architecture enables you to mix and match configuration types within the same cluster. It allows you to grow and manage your compute and storage resources together or independently, depending on your business needs. It provides enterprise-grade data protection, multitenant capabilities, and add-on enterprise features such as quality of service (QoS), thin provisioning, and snapshots.

PowerFlex appliance uses PowerFlex Manager for management and operations. PowerFlex Manager allows you to build, automate, and simplify implementation, expansion, and lifecycle management.

System features

  • PowerFlex appliance uses PowerFlex, which uses the local disks of existing PowerFlex nodes to create a storage pool. It is designed to scale to thousands of PowerFlex nodes.
    The integrated scalable network architecture is available with a choice of the following:

  • Hardware consisting of Cisco Nexus switches, Dell PowerSwitch switches, or customer-supplied switches

  • Bandwidth depending on the customer needs

  • Network configurations

    • Trunk
    • Channel-group
    • Channel-group with LACP
  • Complete flexibility in designing the converged system with the following options:

    • PowerFlex hyperconverged nodes
    • PowerFlex storage-only nodes
    • PowerFlex compute-only nodes
  • Supports 10 GbE, 25 GbE, or 100 GbE ports for back-end connectivity

  • Supports native asynchronous replication between sites

  • PowerFlex nodes support SSD and NVMe drive technologies

  • Improved node maintenance using protected maintenance mode (PMM)

  • Supports self-encrypting drives (SEDs) enabled by Dell CloudLink

  • Supports horizontal scaling by adding PowerFlex nodes to extend storage or compute capacity

  • Secure and scalable storage using PowerFlex

    • Seamlessly increase storage capacity or performance with horizontal node scaling
    • Optional data compression feature that improves storage efficiency
    • Optional Data at Rest Encryption (D@RE) feature provides data security using CloudLink as a software encryption layer, or as a key manager for self-encrypting drives
  • End-to-end lifecycle management with PowerFlex Manager

  • For the PowerFlex management controller 2.0, an optional three-node PowerFlex management controller supporting VMware ESXi and PowerFlex is available to provide a shared storage layer, providing HA for the VM workloads on the PowerFlex management controller. The PowerFlex management nodes are based on PowerFlex R650 nodes with the H755 RAID controller.

  • Supports Fibre Channel HBAs on PowerFlex nodes to connect to external storage arrays (outside the PowerFlex data path) for migrating data from external storage arrays to PowerFlex

  • PowerFlex supports multi-subnet or multi-VLAN configurations for all network types other than data, vMotion, and NSX overlay

  • PowerFlex supports multiple Cisco and Dell switch models with different firmware and operating system versions

System components

PowerFlex appliance contains compute, network, storage, data encryption, virtualization, and management resources.

The following table shows the supported and optional components of PowerFlex appliance:

| Resource | Components |
|---|---|
| Compute | PowerFlex appliance R650, R750, R6525, R7525, R640, R740xd, and R840 nodes |
| Network | Customer-supplied switches, or the following supported switches: Dell PowerSwitch S4148F-ON, S5224F-ON, S5048-ON, S5248F-ON, and S5296F-ON; Cisco Nexus 93180TC-EX, 93180YC-FX, 93180YC-FX3, and 93240YC-FX2 |
| Storage | PowerFlex |
| Virtualization | VMware vSphere ESXi |
| Management | PowerFlex Manager, VMware vCenter Server Appliance (vCSA) |
| PowerFlex management controller (optional) | PowerFlex management controller 2.0 based on PowerFlex |
| Data encryption (optional) | Dell CloudLink or self-encrypting drives (SEDs) |

Storage schemas

Protection domains

A protection domain (PD) is a group of nodes or storage data servers (SDSs) that provides data isolation, security, and performance benefits. A node participates in only one protection domain at a time. Only nodes in the same protection domain can affect each other; nodes outside the protection domain are isolated. Secure multi-tenancy can be achieved with protection domains because data does not mingle across protection domains. You can create different protection domains for different node types with unequal performance profiles. All the hosts in a domain must have the same type and configuration. A PowerFlex hyperconverged node should not be in the same protection domain as a PowerFlex storage-only node. The node configuration must match, including the drives, CPU, and memory. Any difference in the node configuration leads to an unknown performance impact.
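The configuration-matching rule above lends itself to a simple pre-deployment check. The following Python sketch is illustrative only (it is not part of PowerFlex or PowerFlex Manager, and all class and node names are hypothetical); it rejects a proposed protection domain whose nodes differ in type, drive layout, CPU, or memory.

```python
# Minimal sketch, not part of PowerFlex or PowerFlex Manager: all names below are
# hypothetical. It only illustrates the rule that every node in a protection domain
# must share the same type and configuration (drives, CPU, and memory).
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeConfig:
    node_type: str      # for example "storage-only" or "hyperconverged"
    drive_layout: str   # for example "10 x 3.84 TB SSD"
    cpu_model: str
    memory_gb: int

def validate_protection_domain(nodes: list[NodeConfig]) -> None:
    """Raise ValueError if the proposed protection domain mixes node types or configurations."""
    if not nodes:
        raise ValueError("A protection domain needs at least one node")
    reference = nodes[0]
    for node in nodes[1:]:
        if node != reference:
            raise ValueError(f"Mismatched node in protection domain: {node} differs from {reference}")

# Two identically configured storage-only nodes pass; adding a hyperconverged node would raise.
candidates = [
    NodeConfig("storage-only", "10 x 3.84 TB SSD", "Intel Xeon Gold 6338", 256),
    NodeConfig("storage-only", "10 x 3.84 TB SSD", "Intel Xeon Gold 6338", 256),
]
validate_protection_domain(candidates)
```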

Storage pools

Storage pools are a subset of physical storage devices in a protection domain. Each storage device belongs to one (and only one) storage pool. The best practice is to have the same type of storage devices (HDD versus SSD, or SSD versus NVMe) within a storage pool to ensure that the volumes are distributed over the same type of storage within the protection domain. PowerFlex supports two types of storage pools, and you can choose between the two layouts. A system can support both fine granularity (FG) and medium granularity (MG) pools on the same storage data server nodes, and volumes can be non-disruptively migrated between the two layouts. Within a fine granularity pool, you can enable or disable compression on a per-volume basis (see the example after this list):

  • Medium granularity: Volumes are divided into 1 MB allocation units, distributed, and replicated across all disks contributing to a pool. MG storage pools support either thick or thin-provisioned volumes, and no attempt is made to reduce the size of user data written to disk (except for all-zero data). MG storage pools have higher storage access performance than FG storage pools but use more disk space.
  • Fine granularity: A space-efficient layout with an allocation unit of just 4 KB and a physical data placement scheme based on a log-structured array (LSA) architecture. The fine granularity layout requires both flash media (SSD or NVMe) and NVDIMM to create an FG storage pool. The FG layout is thin-provisioned and zero-padded by nature, and enables PowerFlex to support inline compression, more efficient snapshots, and persistent checksums. FG storage pools use less disk space than MG storage pools but have slightly lower storage access performance.
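As a back-of-the-envelope illustration of the two allocation units above (1 MB for MG, 4 KB for FG), the following Python sketch counts the space consumed when many small, scattered 8 KB allocations each land in their own allocation unit. It deliberately ignores mirroring, compression, metadata, and real placement behavior, so treat it only as a feel for the granularity difference.

```python
# Minimal sketch: contrasts the two allocation units described above
# (medium granularity = 1 MB, fine granularity = 4 KB). It ignores mirroring,
# compression, metadata, and real placement behavior.
import math

MG_UNIT = 1024 * 1024   # medium granularity allocation unit: 1 MB
FG_UNIT = 4 * 1024      # fine granularity allocation unit: 4 KB

def allocated_bytes(io_sizes: list[int], unit: int) -> int:
    """Round each allocation up to the pool's allocation unit and sum the result."""
    return sum(math.ceil(size / unit) * unit for size in io_sizes)

# 1,000 scattered 8 KB allocations, each assumed to land in its own allocation unit.
allocations = [8 * 1024] * 1000
print("MG pool consumes:", allocated_bytes(allocations, MG_UNIT) // 1024, "KB")  # 1,024,000 KB
print("FG pool consumes:", allocated_bytes(allocations, FG_UNIT) // 1024, "KB")  # 8,000 KB
```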

Fault sets

A fault set is a logical entity that contains a group of storage data servers within a protection domain that have a higher chance of going down together; for example, if they are all powered from the same rack. By grouping them into a fault set, PowerFlex mirrors the data of a fault set on storage data servers that are outside the fault set. Thus, availability is assured even if all the servers within one fault set fail simultaneously.
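The following is a minimal Python sketch of the placement rule described above. It is not the actual PowerFlex placement algorithm; the fault-set layout and SDS names are invented for illustration. It simply shows that a mirror copy is always placed on an SDS outside the fault set that holds the primary copy.

```python
# Minimal sketch, not the actual PowerFlex placement algorithm: the fault sets and
# SDS names are invented. It only illustrates that a mirror copy must be placed on
# an SDS outside the fault set holding the primary copy.
import random

# Hypothetical layout: fault set -> SDS nodes in that fault set (for example, one rack each)
FAULT_SETS = {
    "rack-1": ["sds-01", "sds-02", "sds-03"],
    "rack-2": ["sds-04", "sds-05", "sds-06"],
    "rack-3": ["sds-07", "sds-08", "sds-09"],
}

def pick_mirror_target(primary_sds: str) -> str:
    """Return an SDS from any fault set other than the one holding the primary copy."""
    primary_fs = next(fs for fs, members in FAULT_SETS.items() if primary_sds in members)
    candidates = [sds for fs, members in FAULT_SETS.items() if fs != primary_fs for sds in members]
    return random.choice(candidates)

print(pick_mirror_target("sds-02"))  # always prints an SDS from rack-2 or rack-3
```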

System architecture

There are several options for a PowerFlex appliance deployment.

PowerFlex appliance is deployed as one of the following:

  • Hyperconverged
  • Two-layer
  • Hybrid hyperconverged
  • Storage-only

The following figure is a high-level view of the physical PowerFlex appliance. This figure is not specific to a particular system but is a generic representation. The PowerFlex management environment depicted below can be deployed on an optional PowerFlex appliance management cluster, or a customer-provided VMware ESXi host.

Figure: System architecture (high-level physical view)

The following figure displays a PowerFlex appliance replication environment. The PowerFlex management environment depicted below can be deployed on an optional PowerFlex appliance management cluster, or a customer-provided VMware ESXi host.

Figure: System architecture (replication environment)
PowerFlex metadata managers (MDMs) and storage data replication (SDR) components need to communicate with each other across the WAN.

| Component in a deployment | Description |
|---|---|
| PowerFlex appliance management environment | PowerFlex Manager requires a VMware ESXi server to host the management applications. The PowerFlex management environment can be deployed on an optional 3-node PowerFlex management cluster, or a customer-provided VMware ESXi host. |
| Customer-provided resources | Embedded operating system jump server and VMware vCenter Server. It is anticipated that these resources are already installed and accessible. |
| Access A and B switches | These switches are either partially configured by PowerFlex Manager or fully customer-configured, and they connect directly to the PowerFlex appliance nodes. |
| Hardware management switch | Provides separate network connectivity for iDRAC and access switch management interfaces. |
| PowerFlex appliance customer environment | PowerFlex hyperconverged nodes, PowerFlex compute-only nodes, and PowerFlex storage-only nodes. |
| VLANs | VLANs that are required to be defined on the various switches, interfaces, and network interface cards (NICs). |

Deployment options overview

There are several options for a PowerFlex appliance deployment.

PowerFlex appliance nodes use PowerFlex to operate storage and tie in workloads. PowerFlex appliance uses these PowerFlex features:

  • Storage data client (SDC), which consumes storage from the PowerFlex appliance.
  • Storage data server (SDS), which contributes node storage to the PowerFlex appliance.
  • PowerFlex metadata manager (MDM), which manages the storage blocks and tracks data location across the system.
  • Storage data replication (SDR), which enables native asynchronous replication on PowerFlex nodes.

PowerFlex enables flexible deployment options by allowing the separation of the SDC and SDS components. It addresses data center workload requirements through the following PowerFlex appliance deployment options:

| Deployment type | Description |
|---|---|
| Hyperconverged | Metadata manager, compute, and storage reside within the same server. SDR is supported on PowerFlex hyperconverged nodes. |
| Two-layer | Separates compute resources from storage resources, allowing the independent expansion of compute or storage resources. Consists of PowerFlex compute-only nodes (supporting the SDC) and PowerFlex storage-only nodes (connected to and managed by the SDS). PowerFlex compute-only nodes host end-user applications. PowerFlex storage-only nodes contribute storage to the system pool. PowerFlex metadata manager (MDM) runs on PowerFlex storage-only nodes. |
| Hybrid hyperconverged | Consists of PowerFlex hyperconverged nodes, PowerFlex compute-only nodes, and PowerFlex storage-only nodes. Some nodes contribute both compute and storage resources (PowerFlex hyperconverged nodes), some contribute only compute resources (PowerFlex compute-only nodes), and some contribute only storage resources (PowerFlex storage-only nodes). PowerFlex metadata manager (MDM) runs either on PowerFlex hyperconverged nodes or PowerFlex storage-only nodes. |
| PowerFlex compute-only nodes | Consists of PowerFlex compute-only nodes (supporting the SDC) running Microsoft Windows, VMware ESXi, CentOS, or Red Hat Enterprise Linux. PowerFlex compute-only nodes host end-user applications. NOTE: Red Hat Enterprise Linux, CentOS, and Windows compute-only nodes must be deployed by the customer instead of using PowerFlex Manager. |
| PowerFlex storage-only nodes | Consists of embedded operating system nodes that contribute storage resources to the virtual environment. The back-end traffic shares the same PowerFlex data networks. No SDC components are installed on these nodes. PowerFlex metadata manager (MDM) runs on PowerFlex storage-only nodes. SDR is supported on PowerFlex storage-only nodes with dual CPUs. |

PowerFlex storage-only node two-layer deployments use the same four PowerFlex data networks for both SDC and SDS communications. A two-layer deployment allows the rebooting of PowerFlex compute-only nodes without PowerFlex ramifications.
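If you script against node inventory, the component placement described in the table above can be captured as a small lookup. The sketch below is illustrative only; it mirrors the placement rules in this section and is not a PowerFlex Manager API.

```python
# Minimal sketch, not a PowerFlex Manager API: encodes which PowerFlex software
# components typically run on each node type, as described in the table above.
# Actual placement (for example, where the MDM cluster lives) is decided at deployment time.
NODE_COMPONENTS = {
    "hyperconverged": {"SDC", "SDS", "SDR (supported)", "MDM (eligible)"},
    "compute-only":   {"SDC"},
    "storage-only":   {"SDS", "MDM (eligible)", "SDR (dual-CPU nodes only)"},
}

def components_for(node_type: str) -> set[str]:
    """Return the PowerFlex components expected on the given node type."""
    try:
        return NODE_COMPONENTS[node_type]
    except KeyError:
        raise ValueError(f"Unknown node type: {node_type}") from None

print(sorted(components_for("storage-only")))
```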

When designing the initial deployment or specifying later growth, use PowerFlex hyperconverged nodes. You can add PowerFlex compute-only nodes or PowerFlex storage-only nodes as needed.

To control the number of processors or cores, consider separating the compute for the application from the nodes that support storage. This deployment is a pure two-layer deployment. Extra workloads are supported or added using:

  • PowerFlex hype converged nodes
  • PowerFlex compute-only nodes
  • PowerFlex storage-only nodes

When hyperconverged nodes are mixed with PowerFlex compute-only nodes or PowerFlex storage-only nodes, it creates a hybrid deployment.

Base configurations and scaling in a hyperconverged deployment

PowerFlex appliance has a base configuration that is a minimum set of hyperconverged components and fixed network resources. Within the base configuration, the following hardware aspects are customizable:

| Hardware | Dell Technologies recommends |
|---|---|
| Compute and storage | A minimum of four PowerFlex appliance nodes. |
| Network | An optional management switch and a minimum of two access switches. In a leaf-spine configuration, one pair of leaf switches and one pair of border-leaf switches are recommended. |
| Management | An optional three-node PowerFlex management controller running VMware ESXi to host PowerFlex Manager and other system software. A single-node PowerFlex management controller can also be used for this purpose. NOTE: The single-node controller is configured with the internal drives in a RAID 5 configuration to provide enhanced local storage resiliency, because there is only one node. The three-node management environment does not include local RAID storage, because additional redundancy is provided by the multiple hosts in the cluster. The single-node management cluster can also be expanded into a three-node cluster. |
| Cabling | 25 GbE SFP28 direct attach copper: four for each PowerFlex hyperconverged node. 100 GbE QSFP28 direct attach copper: two for access switch uplinks and two for access switch VLT/vPC interconnects. 1 GbE CAT5 or CAT6: one for each node for iDRAC connectivity and one for each access switch for management connectivity. |
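As a quick worked example of the cabling counts above, the following Python sketch tallies cables for a minimum hyperconverged base configuration. It assumes four nodes and two access switches, and that the 100 GbE uplink and VLT/vPC counts are totals for the access switch pair; adjust the assumptions to match the actual design.

```python
# Minimal sketch: tallies cables for a minimum hyperconverged base configuration using
# the per-item counts listed above. Assumptions (not stated in the guide): four nodes,
# two access switches, and the 100 GbE counts are totals for the access switch pair.
def cable_counts(nodes: int = 4, access_switches: int = 2) -> dict[str, int]:
    return {
        "25 GbE SFP28 DAC (node connectivity)": 4 * nodes,
        "100 GbE QSFP28 DAC (uplinks + VLT/vPC interconnects)": 2 + 2,
        "1 GbE CAT5/CAT6 (iDRAC + switch management)": nodes + access_switches,
    }

for cable, count in cable_counts().items():
    print(f"{cable}: {count}")
```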

Base configurations and scaling in a two-layer deployment

PowerFlex appliance has a base configuration that is a minimum set of PowerFlex compute-only nodes, PowerFlex storage-only nodes, and fixed network resources. Within the base configuration, the following hardware aspects are customizable:

| Hardware | Dell Technologies recommends |
|---|---|
| Compute | A minimum of three PowerFlex compute-only nodes. |
| Storage | Six PowerFlex storage-only nodes (four minimum). |
| Network | An optional management switch and a minimum of two access switches. |
| Management | An optional three-node PowerFlex management controller running VMware ESXi to host PowerFlex Manager and other system software. A single PowerFlex management controller can also be used for this purpose. NOTE: The single-node controller is configured with the internal drives in a RAID 5 configuration to provide enhanced local storage resiliency, because there is only one node. The three-node management environment does not include local RAID storage, because additional redundancy is provided by the multiple hosts in the cluster. The single-node management cluster can also be expanded into a three-node cluster. |
| Cabling | 25 GbE SFP28 direct attach copper: four for each storage-only and compute-only node. 100 GbE QSFP28 direct attach copper: two for access switch uplinks and two for access switch VLT/vPC interconnects. 1 GbE CAT5 or CAT6: one for each node for iDRAC connectivity and one for each access switch for management connectivity. |

Additional references

This section provides references to related documentation for virtualization, compute, network, management, and storage components.

Network components

Network component information and links to documentation are provided.

| Product | Link to documentation |
|---|---|
| Dell PowerSwitch S5200 series | https://www.delltechnologies.com/asset/en-us/products/networking/technical-support/dell_emc_networking-s5200_on_spec_sheet.pdf |
| Dell PowerSwitch S4100 series | https://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/dell-emc-networking-S4100-series-spec-sheet.pdf |
| Dell PowerSwitch S5000 series | https://www.dell.com/support/home/en-us/product-support/product/force10-s5000/docs |
| Dell PowerSwitch S5296F-ON | https://www.delltechnologies.com/asset/en-us/products/networking/technical-support/dell_emc_networking-s5200_on_spec_sheet.pdf |
| Cisco Nexus 93180YC-EX | https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-742283.html |
| Cisco Nexus 93180YC-FX | https://www.cisco.com/c/en/us/support/switches/nexus-93180yc-fx-switch/model.html |
| Cisco Nexus 93180YC-FX3 | https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-744052.html |
| Cisco Nexus 93240YC-FX2 | https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-742282.html |

Storage components

Storage component information and links to documentation are provided.

| Product | Description | Link to documentation |
|---|---|---|
| PowerFlex | Converges storage and compute resources into a single-layer architecture, aggregating capacity and performance, simplifying management, and scaling to thousands of nodes. | https://www.dell.com/support |
| NVDIMM-N | Provides high-speed DRAM performance coupled with flash-backed persistent storage for PowerFlex storage-only nodes. NVDIMM is used for compression only on PowerFlex storage-only nodes. | https://www.dell.com/support/manuals/us/en/04/poweredge-t640/nvdimmn_ug_pub/introduction?guide=guid-8884370c-5553-4089-b613-a3c570b56f0e&lang=en-us |

Virtualization components

Virtualization component information and links to documentation are provided.

| Product | Description | Link to documentation |
|---|---|---|
| VMware vCenter Server Appliance (vCSA) | vCSA is a preconfigured Linux virtual machine, which is optimized for running VMware vCenter Server and the associated services on Linux. | https://www.vmware.com/products/vcenter-server.html |
| VMware vSphere ESXi | Virtualized infrastructure for hyperconverged systems. Virtualizes all application servers and provides VMware High Availability (HA) and Distributed Resource Scheduler (DRS). | https://www.vmware.com/products/vsphere.html |
