DELL PowerFlex Appliance with PowerFlex 4.x Instruction Manual
- July 29, 2024
- Dell
Table of Contents
- DELL PowerFlex Appliance with PowerFlex 4.x
- Specifications
- Product Information
- Overview
- Product Usage Instructions
- FAQ
- Introduction
- Revision history
- Architecture considerations
- PowerFlex software-defined storage architecture
- System hardware
- Management control plane
- PowerFlex file services
- Security considerations
- Additional references
- References
DELL PowerFlex Appliance with PowerFlex 4.x
Specifications
- Product: Dell PowerFlex Appliance with PowerFlex 4.x
- Architecture Overview: May 2024 Rev. 3.0
Product Information
The PowerFlex Appliance with PowerFlex 4.x Architecture Overview provides a high-level description of the architecture and key components of the PowerFlex appliance. It is designed to meet modern data center needs with flexibility in deployment options.
Overview
The PowerFlex appliance is an engineered system that allows for various
deployment options, including separate compute-only and storage-only nodes,
fully converged systems, storage-only configurations, and hybrid combinations.
It supports both block and file storage within the same system.
Key Advantages of PowerFlex Appliance:
- Automated end-to-end life cycle management with PowerFlex Manager
- Flexible network topologies for scalability and performance
- Highly available management and orchestration control plane
- Cost-effective management and orchestration setup
Product Usage Instructions
Deployment Options
PowerFlex appliance offers various deployment options to meet different
infrastructure needs:
- Two-layer deployment with separate compute-only and storage-only nodes
- Fully converged system
- Storage-only nodes
- Hybrid combinations of the above options
Management and Orchestration
The management and orchestration control plane of PowerFlex runs on a
dedicated cluster of three or more physical nodes to ensure high availability.
Additionally, a cost-effective setup with management running on a single
physical node is available.
FAQ
- What is the target audience for the PowerFlex Appliance documentation?
  The target audience includes customers, sales engineers, field consultants, and advanced services specialists who aim to deploy a high-performance, scalable, and flexible infrastructure using PowerFlex appliance.
- Can PowerFlex support both block and file storage?
  Yes, PowerFlex allows for both block and file storage within the same system, providing flexibility in data storage options.
- How can I access additional PowerFlex appliance documentation?
  For additional documentation, visit the PowerFlex appliance technical documentation available online.
Notes, cautions, and warnings
- NOTE: A NOTE indicates important information that helps you make better use of your product.
- CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
- WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Introduction
- The PowerFlex Appliance with PowerFlex 4.x Architecture Overview describes the high-level architecture and key hardware and software components of PowerFlex appliance.
- The target audience for this document includes customers, sales engineers, field consultants, and advanced services specialists who want to deploy a high-performance, scalable, and flexible infrastructure using PowerFlex appliance.
- PowerFlex appliance architecture is based on Dell PowerEdge servers, Cisco Nexus switches, Dell PowerSwitch switches or customer-provided switches, and PowerFlex software-defined storage. PowerFlex Manager provides the management and orchestration functionality for PowerFlex appliance. PowerFlex appliance is an engineered system with optional full network automation (when using supported Cisco Nexus switches or Dell PowerSwitch switches) or partial network automation (when using customer-provided switches). PowerFlex appliance serves as a highly scalable and high-performance hyperconverged infrastructure building block for modern and cloud-native data center workloads.
- For additional PowerFlex appliance documentation, go to PowerFlex appliance technical documentation.
Overview
- PowerFlex appliance is an engineered system designed to meet modern data center needs. You have flexibility in deploying two-layer (separate compute-only and storage-only nodes), fully converged, storage-only, PowerFlex file node, or hybrid combinations. PowerFlex allows for block and file storage within the same system.
- PowerFlex appliance is a modular software-defined compute and storage platform that enables linear performance with scale and flexible deployment options for next-generation cloud applications and mixed workloads. The scale-out architecture of the PowerFlex appliance enables you to add PowerFlex nodes with various CPU, memory, and drive configuration options to meet the business need. PowerFlex appliance is designed for deployments involving large numbers of virtualized and bare metal workloads. PowerFlex appliance is built with N+1 redundancy at the component level to deliver high availability.
- PowerFlex appliance has many advantages:
- Engineered system with automated end-to-end life cycle management using PowerFlex Manager
- Choice of the following network topologies to meet scale and performance business needs:
- Access and aggregation
- Leaf-spine
- Choice of network hardware: Cisco Nexus switches, Dell PowerSwitch switches or customer preferred switches
- Multiple PowerFlex node types and node configurations options to meet compute and storage needs:
- PowerFlex hyperconverged nodes
- PowerFlex storage-only nodes
- PowerFlex compute-only nodes
- PowerFlex file nodes
- Flexible compute and storage resources deployment options such as
- Hyperconverged – compute and storage in same chassis allowing proportional scale
- Two layer – compute and storage deployed in separate chassis allowing independent scale of compute and storage resources
- Storage only – only storage resources are part of PowerFlex appliance; compute resides outside the system boundaries
- Hybrid – combination of two or more of the above deployment options
- Highly available management and orchestration (M&O) control plane that runs on a dedicated cluster of three or more physical nodes
- Cost-effective management and orchestration that runs on a single physical PowerFlex management node
- Use of your existing servers for management and orchestration
- Supports VMware ESXi and bare metal options
- Software and hardware-based data at rest encryption (D@RE) options
- Dell CloudLink
- Optional SEDs
- Supports 25 GbE or 100 GbE port bandwidth for backend connectivity
- Dual network environment using your existing software-defined network (SDN), such as Cisco ACI (optional) or VMware NSX
- Supports both block and file storage
- Supports native asynchronous replication between sites
- Built-in component level redundancy to ensure data availability
- Self-healing architecture with integrated call home feature
- Storage only option allows external compute resources to access data in the PowerFlex appliance
- PowerFlex nodes support:
- SSD
- NVMe technologies
- Software defined persistent memory (SDPM) for PowerFlex R760 and R660 nodes
- NVDIMM for PowerFlex R750 and R650 nodes
- Supports multi-VLAN or multi-subnet for the same network type, other than data networks, vSAN, and NSX overlay
- Supports non-root users, SSH key pairs, and LDAP users for PowerFlex administration functions, improving security
- PowerFlex file supports the following:
- NAS server and filesystem clone
- File-level retention (FLR)
- Multi-tenancy
Common PowerFlex terms and associated acronyms
This section identifies common PowerFlex terms and associated acronyms.
Table 1. Common PowerFlex terms and associated acronyms
Term | Acronym |
---|---|
management virtual machine | MVM |
management data store | MDS |
PowerFlex management controller | PFMC |
PowerFlex management platform | PFMP |
storage data client | SDC |
storage data server | SDS |
storage virtual machine | SVM |
storage data replicator | SDR |
storage data target | SDT |
Revision history
Date | Document revision | Description of changes
---|---|---
May 2024 | 3.0 | Added information for: software-defined persistent memory; NAS server and filesystem clone; file-level retention; multi-tenancy
January 2024 | 2.1 | Updated information for protection domain
September 2023 | 2.0 | Added information for: multi-VLAN and multi-subnet configurations; policy manager for secure connect gateway; Global NameSpace; Common Event Publishing Agent (CEPA). Updated information for: Cisco Nexus switches
May 2023 | 1.2 | Updates to the aggregation switch options
January 2023 | 1.1 | Editorial updates
August 2022 | 1.0 | Initial release
Architecture considerations
PowerFlex appliance is a modular hyperconverged platform that enables extreme scalability and flexibility for next-generation cloud applications and mixed workloads.
System components
PowerFlex appliance contains compute, network, software-defined storage,
virtualization, and management and orchestration (M&O) control plane.
The following tables list the PowerFlex appliance software and hardware
components:
Table 3. Software components

Software | Function | Description
---|---|---
PowerFlex Manager | Management and orchestration | PowerFlex Manager is the management layer of the PowerFlex system. It provides the user interface for both the block storage (PowerFlex software) and file storage services, as well as lifecycle management of the hardware components in the PowerFlex appliance.
PowerFlex | Software-defined storage | PowerFlex software provides block services to the PowerFlex system and is the software-defined storage layer that forms the core of the offer.
PowerFlex file | Software-defined file storage | Network-attached storage enables data access through files rather than block devices. This is the software that provides file services to the PowerFlex system.
VMware vSphere | Virtualization | VMware ESXi is the default supported hypervisor for PowerFlex compute-only and PowerFlex hyperconverged nodes. The VMware vCenter Server Appliance (vCSA) provides management services to the VMware compute environment, including both compute-only and hyperconverged nodes in the PowerFlex system. For PowerFlex appliance deployments, it also manages the virtual machines of the PowerFlex management controller.
Secure connect gateway | Call home | Secure connect gateway is an enterprise monitoring technology that monitors your devices and proactively detects hardware issues that may occur. It automates support request creation for issues that are detected on the monitored devices. Secure connect gateway can be set up with policy manager, a device access management technology delivered as a virtual appliance.
CloudLink (optional) | Software encryption and key management | CloudLink is an optional component of the system that provides key management for self-encrypting drives, and software data-at-rest encryption for non-self-encrypting units.

NOTE: Secure connect gateway automatically collects the telemetry that is required to troubleshoot the issue that is detected. The collected telemetry helps technical support provide a proactive and personalized support experience.
Table 4. Hardware components

Resource | Vendor | Components
---|---|---
Compute | Dell | PowerFlex nodes: PowerEdge R660/R760, PowerEdge R650/R750, or PowerEdge R640/R740xd/R840 servers for PowerFlex hyperconverged nodes; PowerEdge R660/R760/R6625/R7625, PowerEdge R650/R750/R6525/R7525, or PowerEdge R640/R740xd/R840 servers for PowerFlex compute-only nodes; PowerEdge R660 or PowerEdge R650 servers for PowerFlex file nodes. 4 x 25 GbE or 4 x 100 GbE NIC options.
Storage | Dell | PowerFlex nodes: PowerEdge R660/R760/R650/R750 and/or PowerEdge R640/R740/R840 servers with PowerFlex software-defined storage.
Network | Cisco | Preferred validated switch options. Management switch: Cisco Nexus 92348GC-X. Aggregation switch: Cisco Nexus 9336C-FX2. Access switches: Cisco Nexus 93240YC-FX2, Cisco Nexus 93180YC-FX3. Spine switches: Cisco Nexus 9336C-FX2, Cisco Nexus 9364C-GX. Leaf switches: Cisco Nexus 93240YC-FX2, Cisco Nexus 9336C-FX2, Cisco Nexus 9364C-GX. Border-leaf switch: Cisco Nexus 9336C-FX2. Customer access switch (for optional ACI connectivity using dual network): Cisco Nexus 93240YC-FX2.
Network | Dell | Management switch: Dell PowerSwitch S4148T-ON. Aggregation switch: Dell PowerSwitch S5232F-ON. Access switches: Dell PowerSwitch S5224F-ON, S5248F-ON, S5296F-ON, S4148F-ON.
Management control plane | Dell | PowerFlex management controller: PowerEdge R660 or PowerEdge R650 servers with a custom configuration.

NOTE: PowerFlex Manager supports full network automation for the listed switches. Customer-preferred switches are supported with partial network automation.
Key architecture considerations
- Flexible network architecture is a key value proposition of PowerFlex appliance. In addition to a choice of switch vendor (Cisco Nexus, Dell PowerSwitch, or customer-preferred switches), PowerFlex appliance architecture offers the following network topologies to meet your business needs:
- Access and aggregation
- Leaf-spine
- PowerFlex appliance also offers the ability to support both hardware enabled software-defined networking (Cisco Application Centric Infrastructure) and native software-defined networking (VMware NSX).
- PowerFlex appliance offers four node configuration types to meet performance, scale, and storage and compute capacity business requirements.
- PowerFlex hyperconverged nodes
- PowerFlex compute-only nodes
- PowerFlex storage-only nodes
- PowerFlex file nodes
- Additionally, the above-mentioned nodes can be deployed using one or more of the following resource deployment options. PowerFlex Manager also allows you to specify a non-root user when configuring a template for a compute-only, storage-only, or hyperconverged deployment.
- Hyperconverged deployment
- Storage-only deployment
- Two-layer deployment with disaggregated compute-only and storage-only nodes
- Hybrid deployment as a combination of above
- PowerFlex file deployment
- PowerFlex appliance can be deployed with either full network automation or partial network automation. With full network automation, PowerFlex Manager configures the node facing ports on the customer network switches, if they are Dell or Cisco Nexus supported switches outlined in System components. Partial network automation is used when you have customer network switches that are not supported by Dell. In this case, you are responsible for configuring the node facing ports along with the rest of your network. The PowerFlex nodes can be fully managed by PowerFlex Manager in either full network automation or partial network automation mode.
Network architecture
PowerFlex appliance supports two network architectures that meet different performance and scaling requirements.
The network architectures are:
- Access and aggregation
- Leaf-spine
Access and aggregation architecture
The following figure shows the logical layout of the PowerFlex appliance
integrated with your access and aggregation network architecture.
NOTE: There is an additional 10 / 25 Gb link from the PowerFlex
controller nodes to the out-of-band management switch.
NOTE: A PowerFlex management controller is optional in a PowerFlex appliance.
Leaf-spine architecture
The following diagram shows the logical layout of the PowerFlex appliance
integrated with your leaf-spine network architecture:
NOTE: There is an additional 10 / 25 Gb link from the PowerFlex
controller nodes to the out-of-band management switch.
PowerFlex storage-only deployment
A PowerFlex appliance storage-only deployment has a base configuration that is
a minimum set of PowerFlex storage-only nodes and fixed network resources.
Within the base configuration, you can customize the following hardware
aspects:
Table 5. Customizable hardware aspects

Hardware | Minimum set
---|---
Network | One customer-provided management switch; one pair of access or leaf switches (Dell PowerSwitch switches or customer-provided switches); one pair of border-leaf switches (leaf-spine configuration only)
Storage | At least four PowerFlex storage-only nodes are required. However, Dell Technologies recommends using at least six nodes to build a PowerFlex storage pool. If storage compression is active, two SDPM components (for PowerFlex R660 or PowerFlex R760) or a minimum of two NVDIMM components (for PowerFlex R650 or PowerFlex R750) per PowerFlex node are required. A recommendation is made according to the system sizing calculation.
Management | Standalone or multi-node PowerFlex management controller with high availability, or customer-provided management infrastructure
PowerFlex two-layer deployment
A PowerFlex appliance two-layer deployment has a base configuration that is
similar to a PowerFlex storage-only node deployment, but adds a minimum set of
PowerFlex compute-only nodes. The minimum set of PowerFlex storage-only nodes
and fixed network resources are also required.
Within the base configuration, you can customize the following hardware
aspects:
Table 6. Customizable hardware aspects

Hardware | Minimum set
---|---
Compute | At least three PowerFlex compute-only nodes
Network | One customer-provided management switch; one pair of access or leaf switches (Dell PowerSwitch switches or customer-provided switches); one pair of border-leaf switches (leaf-spine configuration only)
Storage | At least four PowerFlex storage-only nodes are required. However, Dell Technologies recommends using at least six nodes to build a PowerFlex storage pool. Software-defined SAN storage (uses local disks to build a PowerFlex storage pool). If storage compression is active, two SDPM components (for PowerFlex R660 or PowerFlex R760) or a minimum of two NVDIMM components (for PowerFlex R650 or PowerFlex R750) per PowerFlex node are required. A recommendation is made according to the system sizing calculation.
Management | Standalone or multi-node PowerFlex management controller with high availability, or customer-provided management infrastructure
PowerFlex hyperconverged deployment
A PowerFlex appliance hyperconverged deployment has a base configuration that
is a minimum set of hyperconverged components and fixed network resources.
Within the base configuration, you can customize the following hardware
aspects:
Table 7. Customizable hardware aspects

Hardware | Minimum set
---|---
Compute and storage | A minimum of four PowerFlex hyperconverged nodes is required; however, six is the recommended minimum. PowerFlex hyperconverged nodes provide both storage and compute resources to the system. If storage compression is active, two SDPM components (for PowerFlex R660 or PowerFlex R760) or a minimum of two NVDIMM components (for PowerFlex R650 or PowerFlex R750) per PowerFlex node are required. A recommendation is made according to the system sizing calculation.
Network | One customer-provided management switch; one pair of access or leaf switches (Dell PowerSwitch switches or customer-provided switches); one pair of border-leaf switches (leaf-spine configuration only)
Management | Standalone or multi-node PowerFlex management controller with high availability, or customer-provided management infrastructure
VMware NSX Edge node deployment
The VMware NSX ready deployment is a variation of the standard deployment that includes PowerFlex hyperconverged or compute-only nodes. It adds a VMware NSX Edge node cluster deployment.
Table 8. Customizable hardware aspects

Hardware | Minimum set
---|---
Compute | VMware NSX transport is configured on PowerFlex compute-only nodes or PowerFlex hyperconverged nodes.
Network | Supports either a traditional Ethernet architecture (Cisco Nexus or Dell PowerSwitch) or a leaf-spine topology (Cisco Nexus). By default, the VMware NSX Edge physical nodes connect directly to either the aggregation or border-leaf switches, depending on the network topology. If there is a limitation because of port capacity or cable distance, the management and transport connections (not Edge/BGP uplinks) are relocated from the aggregation or border-leaf switches to the access or leaf switches.
Storage | VMware NSX Edge nodes can run on either local RAID1+0 storage (recommended) or a VMware vSAN storage solution. VMware NSX Manager runs on the general shared datastores provided by PowerFlex within the PowerFlex management controller. PowerFlex storage-only nodes are not supported as VMware NSX transport nodes.
Management | Four PowerFlex controller nodes with high availability. A fourth controller node is included to host VMware NSX Manager.
VMware NSX Edge | A minimum of two VMware NSX Edge nodes if using the local RAID storage option; a minimum of four VMware NSX Edge nodes if using the vSAN storage option. Each VMware NSX Edge node uses three dual-port 25 Gb cards to connect to either the border-leaf or aggregation switches. At minimum, four of the six NIC interfaces that are used for transport and external edge traffic must be configured as an individual trunk. The other two NIC interfaces that are used for VMware ESXi management or vSAN traffic are configured with a Link Aggregation Control Protocol (LACP) enabled vPC.
NOTE: Do not deploy non-NSX Edge workloads in the VMware NSX Edge VMware vSphere cluster.
PowerFlex software-defined storage architecture
- PowerFlex applies the principles of server virtualization to standard x86 servers with local disks, creating high-performance, sharable pools of block storage. PowerFlex abstracts the local storage contained within each server.
- PowerFlex pools all the storage resources together. In the following figure, ten servers that each deliver 100K IOPS and 10 terabytes form a global pool of 1 million IOPS and 100 terabytes, instead of each server being limited to its local resources. Applications are not constrained by what is within the local server; these resources are shared across the entire cluster.
- PowerFlex automatically maintains balance across all resources, supporting application needs. Storage and/or compute can be added dynamically with no downtime or impact to applications because PowerFlex seamlessly balances the available resources. This enables data center operation in the most efficient and cost-effective way possible, regardless of organization size.
PowerFlex components
Storage data client (SDC)
The storage data client (SDC) is installed on PowerFlex nodes that consume the
system storage volumes. The volumes data and copies are spread evenly across
the nodes and drives that comprise the pool. The storage data client
communicates over multiple pathways to all the nodes. In this multi-point
peer-to-peer fashion, it reads and writes data to and from all points
simultaneously, eliminating bottlenecks and quickly routing around failed
paths. The storage data client:
- Provides front-end volume access to applications and file systems.
- Is installed on servers consuming storage.
- Maintains peer-to-peer connections to every storage data server managing a pool of storage.
Storage data server (SDS)
The storage data server is installed on every PowerFlex node that contributes
its storage to the system. It owns the contributing drives and together with
the other storage data servers forms a protected mesh from which storage pools
are created. Volumes carved out of the pool are presented to the storage data
clients for consumption. The storage data server:
- Abstracts local storage, maintains storage pools, and presents volumes to the storage data clients.
- Is installed on servers contributing local storage to the cluster.
Metadata manager (MDM)
The metadata manager software installs on three or five PowerFlex nodes and
forms a cluster that supervises the operations of the entire cluster and its
parts, while staying outside of the data path itself. The metadata manager
hands out instructions to each storage data client and storage data server
about its role and how to perform it, giving each component the information it
needs. The metadata manager:
- Oversees storage cluster configurations, monitoring, rebalances, and rebuilds.
- Is a highly available, independent cluster installed on three or five different PowerFlex nodes.
- Sits outside the data path.
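Taken together, these roles can be pictured with a small sketch. The following Python fragment is purely conceptual: the node names, the 1 MB chunk size, and the round-robin placement are invented for illustration, and the real PowerFlex mapping, mirroring, and rebalancing logic is internal to the product.

```python
CHUNK_MB = 1  # illustrative chunk size only, not the actual internal allocation unit

def build_volume_map(volume_gb, sds_nodes):
    """MDM-like role: spread volume chunks (and a second copy of each) evenly
    across all storage data servers, so no single node holds a whole volume."""
    chunks = volume_gb * 1024 // CHUNK_MB
    placement = {}
    for chunk_id in range(chunks):
        primary = sds_nodes[chunk_id % len(sds_nodes)]
        mirror = sds_nodes[(chunk_id + 1) % len(sds_nodes)]  # copy lands on a different node
        placement[chunk_id] = [primary, mirror]
    return placement

def read_paths(placement, chunk_id):
    """SDC-like role: the client holds the map and talks to every owner directly."""
    return placement[chunk_id]

volume_map = build_volume_map(volume_gb=1, sds_nodes=["sds-1", "sds-2", "sds-3", "sds-4"])
print(read_paths(volume_map, chunk_id=42))  # ['sds-3', 'sds-4']
```

Because every client holds the full map, reads and writes fan out to all storage data servers in parallel, which is the behavior the component descriptions above rely on.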
Storage data replicator (SDR)
The storage data replicator proxies the I/O of replicated volumes between the
storage data client and the storage data servers where data is ultimately
stored. It splits writes, sending one copy to the destination storage data
servers and another to a replication journal volume. Sitting between the
storage data server and storage data client, the storage data replicator
appears, from the point of view of the storage data server, as if it were a
storage data client sending writes (from a networking perspective, however,
the storage data replicator to storage data server traffic is still
backend/storage traffic). Conversely, to the storage data client, the storage
data replicator appears as if it were a storage data server to which writes
can be sent. The storage data replicator only mediates the flow of traffic for
replicated volumes. Non-replicated volume I/Os flow, as usual, between storage
data clients and storage data servers directly. As always, the metadata
manager instructs each of the storage data clients where to read and write
their data. The volume address space mapping, presented to the storage data
client by the metadata manager, determines where the volume's data is sent,
but the storage data client is not aware of whether the write destination is a
storage data server or a storage data replicator. The storage data client is
not aware of replication.
Storage data target (SDT)
The storage data target (SDT) is installed with the storage data server to
connect compute/application clients to storage using NVMe over TCP. NVMe over
TCP front-end capability allows you to use an agentless solution (no storage
data client), providing more flexible options for operating systems where the
storage data client is not supported and reducing the operational complexity
of deploying and maintaining the host agent.
Storage schemas
Protection domains
A protection domain (PD) is a group of nodes or storage data servers that
provides data isolation, security, and performance benefits. A node
participates in only one protection domain at a time. Only nodes in the same
protection domain can affect each other; nodes outside the protection domain
are isolated. Secure multi-tenancy can be created with protection domains
since data does not mingle across protection domains. You can create different
protection domains for different node types with unequal performance profiles.
All the hosts in the domain must have the same type and configuration. A
PowerFlex hyperconverged node should not be in the same protection domain as a
PowerFlex storage-only node. The node configuration must match, which includes
the drives, CPU, and memory. Any difference in the node configuration leads to
an unknown performance impact.
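Because mixed node configurations within a protection domain lead to unpredictable performance, a deployment plan can be sanity-checked before nodes are grouped. The sketch below is illustrative only; the field names are assumptions and not a PowerFlex Manager interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeConfig:
    node_type: str      # e.g. "storage-only" or "hyperconverged" (hypothetical labels)
    cpu_model: str
    memory_gb: int
    drive_layout: str   # e.g. "10x 3.84TB NVMe"

def validate_protection_domain(nodes: list[NodeConfig]) -> None:
    """Raise if the nodes planned for one protection domain are not identical.

    Mirrors the guidance above: node type, CPU, memory, and drives must match.
    """
    reference = nodes[0]
    for node in nodes[1:]:
        if node != reference:
            raise ValueError(f"Configuration mismatch: {node} differs from {reference}")

validate_protection_domain([
    NodeConfig("storage-only", "Xeon Gold 6338", 256, "10x 3.84TB NVMe"),
    NodeConfig("storage-only", "Xeon Gold 6338", 256, "10x 3.84TB NVMe"),
])
```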
Storage pools
- Storage pools are a subset of physical storage devices in a protection domain. Each storage device belongs to one (and only one) storage pool. The best practice is to have the same type of storage devices (HDD versus SSD or SSD versus NVMe) within a storage pool to ensure that the volumes are distributed over the same type of storage within the protection domain.
- PowerFlex supports two types of storage pools, and you can choose either layout. A system can support both fine granularity (FG) and medium granularity (MG) pools on the same storage data server nodes. Volumes can be non-disruptively migrated between the two layouts. Within a fine granularity pool, you can enable or disable compression on a per-volume basis:
- Medium granularity: Volumes are divided into 1 MB allocation units, distributed, and replicated across all disks contributing to a pool. MG storage pools support either thick or thin-provisioned volumes, and no attempt is made to reduce the size of user data written to disk (except with all-zero data). MG storage pools have higher storage access performance than fine granularity storage pools but use more disk space.
- Fine granularity: A space-efficient layout, with an allocation unit of just 4 KB and a physical data placement scheme based on log structured array (LSA) architecture. Fine granularity layout requires both flash media (SSD or NVMe) as well as SDPM or NVDIMM to create a fine granularity storage pool. Fine granularity layout is thin-provisioned and zero-padded by nature, and enables PowerFlex to support in-line compression, more efficient snapshots, and persistent checksums. Fine granularity storage pools use less disk space than medium granularity storage pools but have slightly lower storage access performance.
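The practical difference between the layouts is easiest to see as arithmetic. The sketch below assumes two full copies of all data, a user-chosen spare percentage, and the "at least 2:1" average compression estimate given later for fine granularity pools; the exact overheads depend on the actual system sizing calculation.

```python
def usable_capacity_tb(raw_tb: float, spare_pct: float = 10.0) -> float:
    """Approximate usable capacity: subtract spare capacity, then halve for the two data copies."""
    return raw_tb * (1 - spare_pct / 100) / 2

def effective_fg_capacity_tb(raw_tb: float, compression_ratio: float = 2.0,
                             spare_pct: float = 10.0) -> float:
    """Fine granularity pool with in-line compression: usable capacity multiplied by the
    expected compression ratio (an assumed average; real ratios vary by data set)."""
    return usable_capacity_tb(raw_tb, spare_pct) * compression_ratio

raw = 8 * 10 * 3.84          # e.g. 8 nodes x 10 drives x 3.84 TB = 307.2 TB raw (hypothetical)
print(round(usable_capacity_tb(raw), 1))        # ~138.2 TB usable before compression
print(round(effective_fg_capacity_tb(raw), 1))  # ~276.5 TB effective with 2:1 compression
```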
Fault sets
A fault set is a logical entity that contains a group of storage data servers
within a protection domain that have a higher chance of going down together;
for example, if they are all powered in the same rack. By grouping them into a
fault set, PowerFlex mirrors data for a fault set on storage data servers that
are outside the fault set. Thus, availability is assured even if all the
servers within one fault set fail simultaneously.
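A minimal sketch of that placement rule, with invented node and fault-set names: the second copy of any piece of data must land on a storage data server outside the fault set that holds the first copy.

```python
def mirror_candidates(primary_sds: str, fault_sets: dict[str, list[str]]) -> list[str]:
    """Return the storage data servers eligible to hold the mirror copy:
    every SDS outside the fault set that contains the primary copy."""
    owning_set = next(name for name, members in fault_sets.items()
                      if primary_sds in members)
    return [sds for name, members in fault_sets.items() if name != owning_set
            for sds in members]

fault_sets = {
    "rack-1": ["sds-1", "sds-2"],   # nodes that share power in rack 1
    "rack-2": ["sds-3", "sds-4"],
    "rack-3": ["sds-5", "sds-6"],
}
print(mirror_candidates("sds-2", fault_sets))  # ['sds-3', 'sds-4', 'sds-5', 'sds-6']
```

With this constraint, the simultaneous loss of every node in one fault set still leaves a full copy of the data available.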
PowerFlex features
PowerFlex is an enterprise-class, software-defined solution that is deployed, managed, and supported as a single system.
Replication
- The following figure depicts where the storage data replicator (SDR) fits into the overall PowerFlex replication architecture:
- The storage data replicator proxies the I/O of replicated volumes between the storage data client and the storage data servers where data is ultimately stored. Write I/Os are split, sending one copy on to the destination storage data servers and another to a replication journal volume. Sitting between the storage data server and storage data client, the storage data replicator appears, from the point of view of the storage data server, as if it were a storage data client sending writes (from a networking perspective, however, the storage data replicator to storage data server traffic is still backend/storage traffic). Conversely, to the storage data client, the storage data replicator appears as if it were a storage data server to which writes can be sent. The storage data replicator only mediates the flow of traffic for replicated volumes (in fact, only actively replicating volumes). Non-replicated volume I/Os flow, as usual, between storage data clients and storage data servers directly. As always, the metadata manager instructs each of the storage data clients where to read and write their data. The volume address space mapping, presented to the storage data client by the metadata manager, determines where the volume's data is sent, but the storage data client is not aware of whether the write destination is a storage data server or a storage data replicator. The storage data client is not aware of replication.
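The write-splitting behavior described above reduces to a few lines of pseudocode. This is a conceptual sketch only; the class, interfaces, and journal handling are invented for illustration and say nothing about how the SDR is actually implemented.

```python
class StorageDataReplicatorSketch:
    """Toy model of the SDR role: every replicated write is forwarded to the
    destination storage data servers and also appended to a replication journal."""

    def __init__(self, backend_write, journal):
        self.backend_write = backend_write   # callable that persists the write locally
        self.journal = journal               # list standing in for the journal volume

    def write(self, volume_id, offset, data):
        self.backend_write(volume_id, offset, data)        # copy 1: local storage pool
        self.journal.append((volume_id, offset, data))     # copy 2: journal, shipped
                                                           # asynchronously to the peer system

journal = []
sdr = StorageDataReplicatorSketch(lambda v, o, d: None, journal)
sdr.write("vol-1", 0, b"payload")
print(len(journal))  # 1 entry waiting to be replicated to the remote site
```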
Compression
- Fine granularity (FG) layout requires both flash media (SSD or NVMe) as well as SDPM or NVDIMM to create an FG pool. FG layout is thin-provisioned and zero-padded by nature, and enables PowerFlex to support in-line compression, more efficient snapshots, and persistent checksums. FG pools support only thin-provisioned, zero-padded volumes, and whenever possible the actual size of user data stored on disk is reduced. You should expect an average compression ratio of at least 2:1. Because of the 4K allocation, FG pools drastically reduce snapshot overhead, because new writes and updates to the volume's data do not each require a 1 MB read/copy action. All data written to an FG pool receives a checksum and is tested for compressibility. The checksum for every write is stored with the metadata and adds an additional layer of data integrity to the system.
- PowerFlex offers a distinctive, competitive advantage with the ability to enable compression per volume rather than globally, and the ability to choose the best layout for each individual workload. The MG layout is still the best choice for workloads with high performance requirements. Fine granularity pools offer space-saving services and additional data integrity. Within an FG pool, enabling compression or making heavy use of snapshots has almost zero impact on the performance of the volumes.
Snapshots
Snapshots are a block image in the form of a storage volume or logical unit
number (LUN) used to instantaneously capture the state of a volume at a
specific point in time. Snapshots can be initiated manually or by automated
snapshot policies. Snapshots in fine granularity storage pools are more space
efficient and have better performance in comparison to medium granularity
snapshots. PowerFlex supports snapshot policies based on a time retention
mechanism. You can define up to 60 policy-managed snapshots per root volume.
A snapshot policy defines a cadence and the number of snapshots to keep at
each level.
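For planning purposes, the retained-snapshot count implied by a policy can be checked against the 60-per-root-volume limit. The policy shape below (a keep-count per retention level) is a simplified stand-in for the actual policy definition, not the product's configuration format.

```python
def total_retained_snapshots(keep_per_level: list[int], limit: int = 60) -> int:
    """Sum the snapshots kept at each policy level and check the per-root-volume limit."""
    total = sum(keep_per_level)
    if total > limit:
        raise ValueError(f"Policy would retain {total} snapshots; the limit is {limit}")
    return total

# e.g. keep 24 hourly, 7 daily, and 4 weekly snapshots of a volume (hypothetical cadence)
print(total_retained_snapshots([24, 7, 4]))  # 35, within the 60-snapshot limit
```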
Volume migration
Migration is non-disruptive to ongoing I/O and is supported across storage
pools within the same protection domain or across protection domains.
Migrating a volume from one storage pool to another migrates the volume and
all its snapshots together (known as VTree granularity). There are several use
cases
where volume migration is useful:
- Migrating volumes between different storage performance tiers
- Migrating volumes to a different storage pool or protection domain driven by multi-tenancy needs
- Extracting volumes from a storage pool or protection domain that is being retired, to shrink a system
- Changing a volume personality between thick and thin provisioning, or between fine granularity and medium granularity layouts
System hardware
- This section describes how the different types of PowerFlex nodes are used.
- There are four types of PowerFlex nodes: storage-providing nodes, storage-consuming nodes, management nodes, and VMware NSX Edge nodes. PowerFlex hyperconverged nodes both provide and consume storage. The table below shows the combinations of provision and consumption allowed by PowerFlex appliance.
Table 9. Combinations of provision and consumption allowed by PowerFlex appliance

PowerFlex storage consumed by… | Provided by hyperconverged | Provided by hyperconverged and storage-only | Provided by storage-only
---|---|---|---
Compute-only | N/A | Hybrid | Two-layer
Hyperconverged and compute-only | Hybrid | Hybrid | Hybrid
Hyperconverged | Hyperconverged | Hybrid | N/A
External | N/A | N/A | Storage only
Storage-providing nodes
PowerFlex hyperconverged nodes
PowerFlex hyperconverged nodes are based on Dell PowerEdge R660, R760, R650,
R750, R640, R740xd, and R840 servers. PowerFlex is deployed on these nodes in
a true hyperconverged form where PowerFlex SDC and SDS software components are
installed on the same PowerFlex node. PowerFlex hyperconverged nodes provide
and consume storage.
PowerFlex storage-only nodes
PowerFlex storage-only nodes are based on Dell PowerEdge R660, R760, R650,
R750, R640, R740xd, and R840 servers. PowerFlex storage-only nodes are
designed to provide storage capacity but no compute power to the compute
cluster. Only the SDS component of PowerFlex runs on PowerFlex storage-
only nodes. PowerFlex storage-only nodes run an embedded operating system and
do not require any VMware ESXi license. PowerFlex storage-only nodes have the
ability to add additional storage capacity to a PowerFlex cluster without
additional compute power.
Storage-consuming nodes
PowerFlex hyperconverged nodes
PowerFlex hyperconverged nodes are based on PowerEdge R660, R760, R650, R750,
R640, R740xd, and R840 servers. PowerFlex is deployed on these nodes in a true
hyperconverged form where the PowerFlex storage data client and storage data
server software components are installed on the same PowerFlex node. PowerFlex
hyperconverged nodes provide and consume storage.
PowerFlex compute-only nodes
PowerFlex compute-only nodes are based on PowerEdge R660, R760, R6625, R7625,
R650, R750, R6525, R7525, R640, R740xd, and R840 servers. The PowerFlex
compute-only node enables you to deploy PowerFlex in a two-layer architecture
that delivers ultimate flexibility when it comes to independently scaling
compute and storage resources. The PowerFlex SDC software component is
installed on PowerFlex compute-only nodes.
PowerFlex file nodes
PowerFlex file nodes are based on PowerEdge R660 and PowerEdge R650 servers
with two third generation Intel Xeon scalable processors with up to 24 cores
per processor. PowerFlex file nodes are deployed in a cluster of 2 to 16
nodes. The PowerFlex storage data client software component is installed on
PowerFlex file nodes.
Management controller
PowerFlex controller nodes
PowerFlex controller nodes are based on the Dell PowerEdge R660 server or
PowerEdge R650 server. PowerFlex controller nodes use PowerFlex to provide a
reliable and highly available storage cluster for the management plane.
PowerFlex appliance supports standalone and multi-node PowerFlex management
controllers.
VMware NSX ready nodes
VMware NSX Edge nodes host the VMware NSX Edge gateway instances (VMs). Two or
more VMware NSX Edge nodes are provided with the NSX ready configuration
within the PowerFlex appliance.
PowerFlex node networking
A PowerFlex appliance is based on either an access/aggregation or a leaf-spine
topology. You also have the option to implement your preferred networking as
long as the connections to and between PowerFlex nodes meet PowerFlex
requirements.
General network connectivity descriptions
- A pair of access switches is required to handle all inter-cabinet network traffic between the nodes.
- A standard deployment is one pair of access or leaf switches per cabinet. A management switch is required to support the out-of-band management requirements of the system. Management switch ports are needed to support network connectivity for the following equipment:
  - One port for each PowerFlex controller node to support management traffic
  - One port for each PowerFlex controller node iDRAC connection
  - One port for each PowerFlex node iDRAC connection
  - One port for each switch for the out-of-band connection, if the switches are managed by PowerFlex Manager
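The list above translates directly into a port count on the out-of-band management switch. The helper below simply adds one port per connection type; the node counts are hypothetical and actual cabling can vary by configuration.

```python
def oob_management_ports(controller_nodes: int, powerflex_nodes: int,
                         managed_switches: int) -> int:
    """Count out-of-band management switch ports for a single-cabinet system:
    one management port and one iDRAC port per controller node, one iDRAC port
    per PowerFlex node, and one port per switch managed by PowerFlex Manager."""
    return controller_nodes * 2 + powerflex_nodes + managed_switches

# e.g. 3 controller nodes, 8 PowerFlex nodes, 2 access switches managed by PowerFlex Manager
print(oob_management_ports(3, 8, 2))  # 16 ports
```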
Management control plane
- The PowerFlex appliance management control plane consists of:
- VMware ESXi to deliver high availability for VMs
- PowerFlex management controller – PowerEdge server with custom configuration
- One of the following:
- PowerFlex storage data server for cluster storage high availability
- Single PowerFlex management controller with RAID for data protection
- Customer provided management controller or controller nodes
- The management network connection consists of the following:
- PowerFlex management platform – The PowerFlex management platform is the software management and orchestration stack for PowerFlex. It is implemented on the PowerFlex management controller. It includes the container environment running on physical or virtual Linux instances, and containers that provide services.
- PowerFlex Manager – Provides IT operations management for PowerFlex appliance. It increases efficiency by reducing time-consuming manual tasks that are required to manage system operations. Use PowerFlex Manager to deploy and manage new and existing PowerFlex appliance environments. PowerFlex Manager discovers, deploys, and operates the PowerFlex appliance by using resources, templates, and resource groups. For more information on key PowerFlex Manager terminology, see the Dell PowerFlex Appliance and PowerFlex Rack with PowerFlex 4.x Glossary. PowerFlex Manager offers the following features (a schematic template sketch follows this list):
  - Resource discovery, inventory, and management
  - Simplified and efficient day-to-day operations
  - Management of block and file storage objects
  - Creating template-based configurations for consistent and secure deployment of large numbers of compute, storage, and network resources
  - Built-in role-based authorization and identity management
  - Comprehensive health alerting, monitoring, reporting, and dashboards
  - End-to-end automated life cycle management
  - Life-cycle compliance management and reporting
  For an in-depth overview of PowerFlex Manager, see the Dell PowerFlex Technical Overview.
- VMware vCenter – Used for orchestration, management, monitoring, and reporting of virtual compute resources.
- Secure connect gateway – An enterprise monitoring technology that is delivered as an appliance and a stand-alone application. It monitors your devices and proactively detects hardware issues that may occur.
- Policy manager – A device access management technology that is delivered as a virtual appliance.
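As a rough illustration of the template-driven deployment model described above, the structure below is entirely hypothetical: the field names and values are not the PowerFlex Manager template schema, they only show the kind of information a template captures so that large numbers of nodes can be configured consistently.

```python
# Hypothetical template structure -- NOT the PowerFlex Manager schema.
storage_only_template = {
    "name": "storage-only-pd1",
    "deployment_type": "storage-only",
    "node_count": 6,
    "os_admin_user": "pfxadmin",          # non-root administrative user (see earlier sections)
    "networks": {
        "flex_data_1": {"vlan": 151},
        "flex_data_2": {"vlan": 152},
        "flex_mgmt": {"vlan": 150},
    },
    "protection_domain": "PD-1",
    "storage_pool": {"name": "SP-1", "layout": "fine_granularity", "compression": True},
}

def check_template(template: dict) -> None:
    """Basic consistency checks before submitting a deployment."""
    assert template["node_count"] >= 4, "storage-only deployments need at least four nodes"
    assert template["os_admin_user"] != "root", "use a non-root user for administration"

check_template(storage_only_template)
```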
PowerFlex file services
- PowerFlex has optional native file capabilities that are highly scalable, efficient, performance focused and flexible.
- PowerFlex file nodes enable accessing data over file protocols such as server message block (SMB), network file system (NFS), and secure file transfer protocol (SFTP). PowerFlex file nodes support two primary business cases:
- Traditional NAS: Home directories and file sharing
- Transactional NAS: Database and VMware workloads
PowerFlex file architecture
- PowerFlex file is deployed on PowerFlex file nodes to provide file services to applications.
- PowerFlex file nodes provide compute capabilities (CPU and memory) and consume storage from PowerFlex block (SDS), providing highly scalable performance for transactional and traditional workloads. PowerFlex file can be scaled independently of PowerFlex storage, providing more flexible options for customers.
- The following figure highlights applications consuming PowerFlex file storage:
- With the native file capabilities available on PowerFlex appliance, administrators can easily implement a highly scalable, efficient, high-performance, and flexible solution that is designed for the modern data center. The rich supporting feature set and mature architecture provide the ability to support a wide array of use cases. PowerFlex file uses virtualized NAS servers to enable access to file systems, provide data separation, and act as the basis for multi-tenancy. PowerFlex file services can be accessed through a wide range of protocols and can take advantage of advanced protocol features.
- PowerFlex file servers – PowerFlex file uses virtualized file servers that are called NAS servers. A NAS server contains the configuration, interfaces, and environmental information that is used to facilitate access to the file systems. This includes services such as Domain Name System (DNS), Lightweight Directory Access Protocol (LDAP), Network Information Service (NIS), protocols, antivirus, NDMP, and so on.
- Multi-tenancy – NAS servers can be used to enforce multi-tenancy. This is useful when hosting multiple tenants on a single system, such as for service providers. Since each NAS server has its own independent configuration, it can be tailored to the requirements of each tenant without impacting the other NAS servers on the same appliance. Each NAS server is logically separated from the others, and clients that have access to one NAS server do not inherently have access to the file systems on the other NAS servers. File systems are assigned to a NAS server upon creation and cannot be moved between NAS servers.
- High availability – New NAS servers are automatically assigned across the available nodes. The preferred node acts as a marker to indicate the node that the NAS server should be running on. Once provisioned, the preferred node for a NAS server never changes. The current node indicates the node that the NAS server is running on. Changing the current node moves the NAS server to a different node, which can be used for load-balancing purposes. When a NAS server is moved to a new node, all file systems on the NAS server are moved along with it.
- Protocols – PowerFlex file supports SMB1 through SMB 3.1.1. SMB3 enhancements such as continuous availability, offload copy, protocol encryption, multichannel, and shared VHDX in Hyper-V are supported on PowerFlex file. PowerFlex file also supports the Microsoft Distributed File System (DFS) namespace. This ability enables the administrator to present shares from multiple file systems through a single mapped share. PowerFlex file SMB servers can be configured as a stand-alone DFS root node or as a leaf node on an Active Directory DFS root. DFS-R (replication) is not supported on PowerFlex file SMB servers.
  PowerFlex file supports NFSv3 through NFSv4.1, as well as Secure NFS. Each NAS server has options to enable NFSv3 and NFSv4 independently. Support for advanced NFS protocol options is also available. NFSv4 is a version of the NFS protocol that differs considerably from previous implementations. Unlike NFSv3, this version is a stateful protocol, meaning that it maintains a session state and does not treat each request as an independent transaction without preexisting information. NFSv4 brings support for several new features, including NFS ACLs that expand on the existing mode-bit-based access control in previous versions of the protocol.
  NAS servers and file systems also support access for FTP and SFTP. SFTP is more secure since it does not transmit usernames and passwords in clear text. FTP and SFTP access can be enabled or disabled individually at the NAS server level. Only active mode FTP and SFTP connections are supported.
- Multi-protocol support – When a NAS server has both the SMB and NFS protocols enabled, multi-protocol access is automatically enabled. Multi-protocol access enables accessing a single file system using the SMB and NFS protocols simultaneously.
- Naming and directory services – PowerFlex file supports the following naming and directory services:
  - DNS – A service that provides translations between hostnames and IP addresses
  - LDAP/NIS – Services that provide a centralized user directory for username and ID resolution
  - Local files – Individual files used to provide username and ID resolution
- Filesystem – PowerFlex file leverages a 64-bit file system that is highly scalable, efficient, performant, and flexible. The PowerFlex file system is mature and robust, enabling it to be used in many of the traditional NAS use cases.
- Compression – PowerFlex file supports compression using fine granularity storage pools.
- Shrink and extend – PowerFlex file provides increased flexibility by providing the ability to shrink and extend file systems as needed. Shrink and extend operations are used to resize the file system and update the capacity that is seen by the client.
- Quotas – PowerFlex file includes quota support to allow administrators to place limits on the amount of space that can be consumed, to regulate file system storage consumption. PowerFlex file supports user quotas, quota trees, and user quotas on tree quotas. All three types of quotas can co-exist on the same file system and can be used together to achieve fine-grained control over storage usage (see the short sketch at the end of this section).
  - User quotas: User quotas are set at a file system level and limit the amount of space a user may consume on a file system. Quotas are disabled by default.
  - Tree quotas: Quota trees limit the maximum size of a directory on a file system. Unlike user quotas, which are applied and tracked on a user-by-user basis, quota trees are applied to directories within the file system. Quota trees can be applied on new or existing directories.
  - User quotas on tree quotas: Once a quota tree is created, it is also possible to create additional user quotas within that specific directory by choosing to enforce user quotas. When multiple limits apply, users are bound by the limit that they reach first.
- Snapshots – PowerFlex file features pointer-based snapshots. These can be used for restoring individual files or the entire file system back to a previous point in time. Since these snapshots leverage redirect-on-write technology, no additional capacity is consumed when the snapshot is first created. Capacity only starts to be consumed as data is written to the file system and changes are tracked.
- CAVA – Common Anti-Virus Agent (CAVA) provides an antivirus solution to SMB clients by using third-party antivirus software to identify and eliminate known viruses before they infect files on the storage system. This reduces the chance of storing infected files on the file system and protects Windows clients if they happen to open an infected file. The CAVA solution is for clients running the SMB protocol only. If clients use the NFS or FTP protocols to create, modify, or move files, the CAVA solution does not scan these files for viruses.
- NDMP – PowerFlex file supports three-way Network Data Management Protocol (NDMP) backups, allowing administrators to protect file systems by backing up to a tape library or other backup device. In an NDMP configuration, there are three primary components:
  - Primary system – Source system to be backed up, such as PowerFlex file.
  - Data Management Application (DMA) – Backup application that orchestrates the backup sessions, such as NetWorker.
  - Secondary system – The backup target, such as PowerProtect.
  Three-way NDMP transfers both the metadata and backup data over the network. The metadata travels from the primary system to the DMA. The data travels from the primary system to the DMA and then finally to the secondary system.
- Global NameSpace – Global NameSpace, also known as single namespace, provides users a virtual view of shared folders by grouping shares and exports located on different servers into one or more single entry points to access multiple file systems. With the Global NameSpace feature enabled, client hosts with the correct access permissions can access existing and newly added file systems in the Global NameSpace without needing to explicitly map or mount them on each client. PowerFlex file supports multi-protocol Global NameSpace (GNS) for both SMB and NFSv4 clients. NFSv3 clients are not supported by the Global NameSpace infrastructure; however, they can access the shares directly.
- Common Event Publishing Agent (CEPA) – The Dell Common Event Enabler (CEE) framework is used to provide a working environment for the Common Event Publishing Agent (CEPA) facility, which includes sub-facilities for auditing, content/quota management (CQM), Common Asynchronous Publishing Service (VCAPS), and indexing. CEPA is a mechanism whereby applications can register to receive event notification and context from the PowerFlex file system. The event publishing agent delivers both the event notification and the associated context to the consuming application in one message. Context may consist of file metadata or directory metadata that is needed to decide business policy.
- NAS server and filesystem clone – PowerFlex file users can clone their NAS servers and file systems for environment repurposing.
- File-level retention – File-level retention protects files from modification or deletion until a specified retention period ends. Protecting a file system using file-level retention enables you to create a permanent and unalterable set of files and directories. File-level retention ensures data integrity and accessibility, simplifies archiving procedures for administrators, and improves storage management flexibility.
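As mentioned in the quota discussion above, when a user quota, a tree quota, and a user-on-tree quota all apply, the first limit reached is the one that binds. A minimal sketch of that rule, with all names and sizes invented:

```python
def effective_limit_gb(user_quota_gb=None, tree_quota_gb=None, user_on_tree_quota_gb=None):
    """The binding limit is simply the smallest of the quotas that are set."""
    limits = [q for q in (user_quota_gb, tree_quota_gb, user_on_tree_quota_gb) if q is not None]
    return min(limits) if limits else None  # None means unlimited (quotas are disabled by default)

# The user may write 50 GB anywhere on the file system, the directory tree is capped at 200 GB,
# and this user is capped at 20 GB inside that tree -> 20 GB is the limit reached first.
print(effective_limit_gb(user_quota_gb=50, tree_quota_gb=200, user_on_tree_quota_gb=20))  # 20
```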
Security considerations
Enterprises have many reasons for encrypting their data, including addressing
regulatory compliance and protecting customer data and sensitive intellectual
property against theft.
PowerFlex appliance offers numerous built-in security features and
capabilities across multiple security domains to help you meet security and
compliance requirements. Here is a summary of the PowerFlex appliance security
features by security domain.
Asset management
- PowerFlex Manager simplifies asset discovery and system resources inventory management
- Resource deployment services template and resource tagging allow you to efficiently deploy a complex environment with consistency
Identity authentication and authorization
PowerFlex appliance architecture offers built-in security controls to meet
authentication and authorization needs. Some of the key security controls are:
- LDAP/Active Directory integration
- Role-based access control (RBAC)
- RSA SecurID MFA option (using Keycloak)
Data confidentiality
- Confidentiality is one of the key pillars of the security triad (CIA). PowerFlex appliance offers both software- and hardware-based FIPS 140-2 compliant data-at-rest encryption. For hardware-based D@RE, you can choose self-encrypting drives (SEDs) that meet your business needs and use integrated CloudLink for key management. The integrated CloudLink can also be used to provide software-based encryption for PowerFlex storage data servers (SDS) that is transparent to the features and operation of the PowerFlex solution. CloudLink uses dm-crypt, a native Linux encryption package, to secure SDS devices. A proven high-performance volume encryption solution, dm-crypt is widely implemented for Linux machines.
- CloudLink encrypts the storage data server devices with unique keys that are controlled by enterprise security administrators. CloudLink Center provides centralized, policy-based management for these keys, enabling single-screen security monitoring and management across one or more PowerFlex deployments.
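CloudLink itself encrypts block devices with dm-crypt and keeps the keys in CloudLink Center; the fragment below is only a generic illustration of the data-at-rest idea (encrypt with a key held by a key manager, never store the key with the data), using the third-party Python cryptography package, and has no relationship to CloudLink internals.

```python
# Generic data-at-rest illustration only -- not CloudLink, which encrypts block
# devices with dm-crypt and manages the keys centrally in CloudLink Center.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in a real system the key lives in the key manager, not on disk
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"application data written to an SDS device")
assert cipher.decrypt(ciphertext) == b"application data written to an SDS device"
```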
System trust
PowerFlex appliance is built with Dell PowerEdge servers that are called
PowerFlex nodes. PowerFlex nodes inherit all the cutting-edge cyber-resiliency
and security features such as:
- An immutable silicon-based root of trust to securely boot iDRAC, BIOS and firmware
- Virtual lock for preventing server configuration/firmware changes and drift detection
- Rapid recovery to a trusted image when authentication fails
- Rollback to known good firmware version if firmware is compromised
- Secure system erase of internal server storage devices, including HDD, SSD, and NVMe drives
- Industry leading secure supply chain
- PowerFlex software integrity check
Network security
PowerFlex appliance not only offers built-in access/aggregation or leaf-spine
network topology but also incorporates many advanced security features that
are available with Cisco and Dell network switches. These security features
help you protect your network against data loss or compromise resulting from
intentional attacks and from unintended but damaging actions made by well-
meaning network users. Some of the key security features include:
- Network segmentation with ACLs, firewalls, and VLANs
- TACACS+ security protocols support
- LDAP authentication and authorization support
- Role-based access control (RBAC) to control and limit access to operations on the Cisco NX-OS device
- Support for the authentication, authorization, and accounting (AAA) architectural framework
- Access control list (ACL) support: IP ACLs, MAC ACLs, and VACLs are available options to filter traffic based on IPv4 addresses, MAC addresses in the packet header, and VLAN routing
- Simple Certificate Enrollment Protocol (SCEP) support
- Dynamic ARP inspection, DHCP snooping, key chain management, and control plane policing can be used to further harden security
Auditing and accountability
Audit and accountability's primary objectives are to maintain a record of
system activities and to provide the ability to establish individual
accountability, detect system anomalies, and reconstruct system events using
audit logs and records. PowerFlex appliance creates and retains system audit
logs, event logs, and alert records that can be used for monitoring, trend and
behavior analysis, incident investigation, and reporting of unlawful or
unauthorized system activities.
Additional references
This section provides links to related information for network, storage, and virtualization components.

Table 10. Additional reference links

Product | Description | Link to product documentation
---|---|---
PowerFlex | Converges storage and compute resources into a single-layer architecture, aggregating capacity and performance, simplifying management, and scaling to thousands of PowerFlex nodes. | Dell PowerFlex
VMware vCenter Server | Provides a scalable and extensible platform that forms the foundation for virtualization management. | VMware vCenter Server
Virtualized infrastructure for PowerFlex | Virtualized infrastructure for PowerFlex rack and PowerFlex appliance. Virtualizes all application servers and provides VMware High Availability (HA) and Dynamic Resource Scheduling (DRS). | VMware vSphere