Arista DANZ Monitoring Fabric Verified Scale Guide

May 15, 2024

  • www.arista.com
  • DANZ Monitoring Fabric Verified Scale Guide DOC-06661-01


Copyright 2023 Arista Networks, Inc. All rights reserved. The information contained herein is subject to change without notice. The trademarks, logos and service marks (“Marks”) displayed in this documentation are the property of Arista Networks in the United States and other countries. Use of the Marks are subject to Arista Network Terms of Use Policy, available at www.arista.com/en/terms-of-use. Use of marks belonging to other parties is for informational purposes only.

DANZ Monitoring Fabric Verified Scale

This document describes the DANZ Monitoring Fabric multi-dimensional scale tests performed with DMF Controllers.

Overview
Network visibility is a growing concern in data centers due to increasing virtualization, service-oriented architecture, and cloud-based IT. However, visibility into network traffic with traditional monitoring infrastructure is very limited. Expensive monitoring infrastructure, including application performance monitoring tools, Intrusion Detection Systems (IDS), and forensic tools, is not efficiently utilized due to a lack of management of the monitored traffic. DANZ Monitoring Fabric is an advanced network monitoring solution that dramatically alleviates this problem. DANZ Monitoring Fabric leverages high-performance bare metal Ethernet switches to provide the most scalable, flexible, and cost-effective monitoring fabric. Using an SDN-centric architecture, DANZ Monitoring Fabric enables tapping traffic everywhere in the network and delivers it to any troubleshooting, network monitoring, application performance monitoring, or security tool.

At its core is the centralized DANZ Monitoring Fabric Controller software that converts user-defined policies into highly optimized flows programmed into the forwarding ASICs of bare metal Ethernet switches running the production-grade Switch Light™ Operating System from Arista Networks. DANZ Monitoring Fabric delivers unprecedented network visibility with bare-metal economics, getting the right traffic to the right tool at the right time. With its open and published Application Programming Interfaces (APIs), the DANZ Monitoring Fabric Controller allows customers to deploy integrated network monitoring solutions along with the DANZ Monitoring Fabric.
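For example, the Controller's REST APIs can be scripted against from standard tooling. The short Python sketch below is illustrative only: the controller address, credentials, and endpoint paths shown are assumptions for this example, and the authoritative URLs and authentication flow are documented in the DMF API reference for your release.

```python
# Illustrative only: querying a DMF Controller's REST API with Python.
# The endpoint paths below are assumptions for this sketch; consult the DMF
# API documentation for the authoritative URLs for your release.
import requests

CONTROLLER = "https://dmf-controller.example.com:8443"  # hypothetical address

session = requests.Session()
session.verify = False  # lab use only; supply proper CA certificates in production

# Authenticate (path assumed) and reuse the returned session cookie.
session.post(f"{CONTROLLER}/api/v1/auth/login",
             json={"user": "admin", "password": "changeme"})

# Fetch policy state (path assumed) to drive an external monitoring workflow.
policies = session.get(f"{CONTROLLER}/api/v1/data/controller/policies")
print(policies.json())
```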

Note: The scale and performance tests reported in this document were executed on a DMF hardware Controller.

DMF Verified Scale Values

TCAM Rule Limits
The tables in this section list the limits tested and verified for scalability of the DANZ Monitoring Fabric solution.

Table 1: Verified Jericho Series of Switches

Broadcom Switch Series| Broadcom Chipset| Switch Name
---|---|---
Jericho Series| Jericho| Arista DCS-7280SR-48C6
| Jericho Plus| Arista DCS-7280SR2-48YC6
| Jericho 2| Arista DCS-7280CR3-32P4
| Jericho 2C| Arista DCS-7280SR3-48YC8

Note: The table above lists the switches that were used to verify scale and performance for each supported Broadcom Chipset. For a complete list of supported switches, refer to the DMF Hardware Compatibility Guide.

Table 2: Verified TCAM Rule Limits: Jericho Series of Switches

 | Match Mode| Broadcom Jericho Switches| Broadcom Jericho Plus Switches| Broadcom Jericho 2 Switches| Broadcom Jericho 2C Switches
---|---|---|---|---|---
IPv4 TCAM rules per switch (Verified Limit/Max Limit)| Full| 6140/6144| 6140/6144| 6140/6144| 6140/6144
| L3-L4| 6140/6144| 6140/6144| 6140/6144| 6140/6144
| Offset| Not Supported| Not Supported| Not Supported| Not Supported
IPv6 TCAM rules per switch (Verified Limit/Max Limit)| Full| 6140/6144| 6140/6144| 6140/6144| 6140/6140
| L3-L4| 6140/6144| 6140/6144| 6140/6144| 6140/6140
| Offset| Not Supported| Not Supported| Not Supported| Not Supported
Match conditions per policy| Full IPv4/IPv6| 6140/6140| 6140/6140| 6140/6140| 6140/6140
| L3-L4 IPv4/IPv6| 6140/6140| 6140/6140| 6140/6140| 6140/6140
| L3-L4 Offset IPv4/IPv6| Not Supported| Not Supported| Not Supported| Not Supported

Table 3: Supported Trident Series of Switches

Broadcom Switch Series| Broadcom Chipset| Switch Name
---|---|---
Trident Series| Trident 2| Dell S4048F-ON, Dell S6000F-ON
| Trident 2 Plus| Dell S4048-48T, Dell S6010F-ON
| Trident 3| Arista DCS-7050CX3-32S, Arista DCS-7050SX3-48YC8, Arista DCS-7050SX3-48YC12, Dell S5248F-ON, Dell S5232F-ON, Arista DCS-7050SX3-96YC8

Table 4: Verified TCAM Rule Limits: Trident Series of Switches

 | Match Mode| Broadcom Trident 2 Switches| Broadcom Trident 2 Plus Switches| Broadcom Trident 3 Switches
---|---|---|---|---
IPv4 TCAM rules per switch (Verified Limit/Max Limit)| Full| 2040/2044| 8100/8188| 3055/3068
| L3-L4| 4088/4092| 8100/8188| 3055/3068
| Offset| 2040/2044| 8100/8188| 3055/3068
IPv6 TCAM rules per switch (Verified Limit/Max Limit)| Full| 1535/2044| 6100/8188| 2300/3068
| L3-L4| 1535/4092| 6100/8188| 2300/3068
| Offset| 1535/2044| 6100/8188| 2300/3068
Match conditions per policy| Full IPv4/IPv6| 2040/1535| 8100/6100| 3055/2300
| L3-L4 IPv4/IPv6| 4088/1535| 8100/6100| 3055/2300
| L3-L4 Offset IPv4/IPv6| 2040/1535| 8100/6100| 3055/2300

Table 5: Supported Tomahawk Series of Switches

Broadcom Switch Series| Broadcom Chipset| Switch Name
---|---|---
Tomahawk Series| Tomahawk| Dell Z9100F-ON, Dell S6100F-ON
| Tomahawk Plus| Dell S5048F-ON
| Tomahawk 2| Arista DCS-7260CX3-64E, Dell Z9264F-ON

Table 6: Verified TCAM Rule Limits: Tomahawk Series of Switches

 | Match Mode| Broadcom Tomahawk| Broadcom Tomahawk Plus| Broadcom Tomahawk 2
---|---|---|---|---
IPv4 TCAM rules per switch (Verified Limit/Max Limit)| Full| 1015/1020| 1015/1020| 1015/1020
| L3-L4| 1015/1020| 1015/1020| 1015/1020
| Offset| 1015/1020| 1015/1020| 1015/1020
IPv6 TCAM rules per switch (Verified Limit/Max Limit)| Full| 760/1020| 760/1020| 760/1020
| L3-L4| 760/1020| 760/1020| 760/1020
| Offset| 760/1020| 760/1020| 760/1020
Match conditions per policy| Full IPv4/IPv6| 1015/760| 1015/760| 1015/760
| L3-L4 IPv4/IPv6| 1015/760| 1015/760| 1015/760
| L3-L4 Offset IPv4/IPv6| 1015/760| 1015/760| 1015/760

Table 7: Supported Maverick Series of Switches

Broadcom Switch Series| Broadcom Chipset| Switch Name
---|---|---
Maverick Series| Maverick| Dell S4112F-ON

Table 8: Verified TCAM Rule Limits: Maverick Series of Switches

 | Match Mode| Broadcom Maverick Switches
---|---|---
IPv4 TCAM rules per switch (Verified Limit/Max Limit)| Full| 4088/4092
| L3-L4| 8100/8188
| Offset| 4088/4092
IPv6 TCAM rules per switch (Verified Limit/Max Limit)| Full| 3060/4092
| L3-L4| 3060/8188
| Offset| 3060/4092
Match conditions per policy| Full IPv4/IPv6| 4088/3060
| L3-L4 IPv4/IPv6| 8100/3060
| L3-L4 Offset IPv4/IPv6| 4088/3060

The DMF 8.4 release supports the following EOS switches using the Broadcom Qumran chipset.

Table 9: Verified Qumran-based Series of Switches

Broadcom Switch Series| Broadcom Chipset| Switch Name
---|---|---
Qumran-based Series| QumranAX| DCS-7020SR-24C2
| Qumran2C| DCS-7280CR3K-36S
| Qumran2A| DCS-7280SR3-40YC6

Note: The table above lists the switch models that were used to verify scale and performance for each supported Broadcom chipset. For a complete list of supported switches, please refer to the DMF 8.4 Hardware Compatibility List.

Table 10: Verified TCAM Rule Limits: Qumran Series of Switches

 | Match Mode| Broadcom QumranAX Switches| Broadcom Qumran2C Switches| Broadcom Qumran2A Switches
---|---|---|---|---
IPv4 TCAM Rules per Switch (Verified Limit/Max Limit)| Full| 4084/4088| 6140/6144| 6140/6144
| L3-L4| 4084/4088| 6140/6144| 6140/6144
| Offset| Not Supported| Not Supported| Not Supported
IPv6 TCAM Rules per Switch (Verified Limit/Max Limit)| Full| 4084/4088| 6140/6144| 6140/6144
| L3-L4| 4084/4088| 6140/6144| 6140/6144
| Offset| Not Supported| Not Supported| Not Supported
Match Conditions per Policy| Full IPv4/IPv6| 4084/4084| 6140/6140| 6140/6140
| L3-L4 IPv4/IPv6| 4084/4084| 6140/6140| 6140/6140
| L3-L4 Offset IPv4/IPv6| Not Supported| Not Supported| Not Supported

Port Channel Interface Limits

Table 11: Verified Port Channel Interface Limits on Trident/Tomahawk Series

 | Maximum Hardware/Software Limit| Verified Limits
---|---|---
Number of Port Channel Interfaces per Switch| 64| 10
Number of Port Channel Member Interfaces| 32| 32

Table 12: Verified Port Channel Interface Limits on Jericho Series

 | Maximum Hardware/Software Limit| Verified Limits
---|---|---
Number of Port Channel Interfaces per Switch| 1024| 16
Number of Port Channel Member Interfaces| 32| 32

Tunnel Interface Limits

Table 13: Verified VXLAN Tunnel Interface Limits on Trident/Tomahawk Series

 | Maximum Hardware/Software Limit| Verified Limits
---|---|---
VXLAN Rx Tunnels per Switch| 2000| 2000
VXLAN Bidirectional / Tx Tunnels per Switch| Depends on the available ports on the switch. 1| 60
  1. Configuring a Bidirectional / Tx tunnel requires an additional port. The maximum number of supported Bidirectional / Tx tunnels is therefore limited to the number of free ports available on the switch.

Note: Verification for supported VXLAN tunnel scale was performed on DMF switches based on Trident 3 and Tomahawk 2 chipsets from Broadcom. For details on the feature and supported switch platforms, please refer to the DMF Hardware Compatibility Guide.

Table 14: Verified L2GRE Tunnel Interface Limits on Trident/Tomahawk Series

 | Maximum Hardware/Software Limit| Verified Limits
---|---|---
L2GRE Rx Tunnels per Switch| 2000| 2000
L2GRE Bidirectional / Tx Tunnels per Switch| Depends on the available ports on the switch.| 60

Functional Limits

Table 15: Verified Functional Limits

Functionality| Verified Limits
---|---
Filter interfaces per switch| 128
Delivery interfaces per switch| 128
Services chained in a policy| 4
User-created policies per fabric (disable overlap to create more than 200 user policies)| 200
Maximum number of policies that can overlap| 10 (default is 4)
Maximum number of policies per fabric (user + dynamic policies)| 4000
Switches per fabric| 150
Filter interfaces per fabric| 1500
Delivery interfaces per fabric| 1000
Managed services per fabric| 40
Managed services per switch| 40
Number of Service Nodes per fabric| 5
Filter interfaces per policy per fabric| 1000
Connected devices per fabric| 100
IPv4 address groups| 170
IPv4 addresses per group| 20000
IPv6 address groups| 50
IPv6 addresses per group| 100
Maximum RTT between active and standby Controllers, and between switches and Controllers| 300 ms
Maximum users| 500
Maximum groups| 500
Unmanaged service interfaces per switch| 44
Unmanaged services per switch| 22
Unmanaged service interfaces per fabric| 100
Unmanaged services per fabric| 50

Naming Conventions

Table 16: Naming Conventions

 | Minimum Length| Maximum Length| Allowed Pattern
---|---|---|---
Username| 1| 255| [a-zA-Z][-0-9a-zA-Z_]*
Password| 1| 255| [0-9a-zA-Z,./;[]<>?:{}
Group Name| 1| 255| [a-zA-Z][-0-9a-zA-Z_]*
Filter Interface Name| 1| 255| [a-zA-Z][-.:0-9a-zA-Z_]*
Delivery Interface Name| 1| 255| [a-zA-Z][-.:0-9a-zA-Z_]*
Service Interface Name| 1| 255| [a-zA-Z][-.:0-9a-zA-Z_]*
Service Name| 1| 255| [a-zA-Z][-.:0-9a-zA-Z_]*
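As an illustration of the patterns above, the following minimal Python sketch (not part of DMF) validates candidate names against the Username and interface/service name rules from Table 16 before they are used in configuration.

```python
# Minimal sketch: check a name against the allowed patterns and length limits
# listed in Table 16 (Username and interface/service names shown here).
import re

NAME_PATTERNS = {
    "username": r"[a-zA-Z][-0-9a-zA-Z_]*",
    "filter-interface": r"[a-zA-Z][-.:0-9a-zA-Z_]*",
    "delivery-interface": r"[a-zA-Z][-.:0-9a-zA-Z_]*",
    "service": r"[a-zA-Z][-.:0-9a-zA-Z_]*",
}

def is_valid_name(kind: str, name: str) -> bool:
    """Return True if 'name' matches the allowed pattern and is 1-255 characters long."""
    pattern = NAME_PATTERNS[kind]
    return 1 <= len(name) <= 255 and re.fullmatch(pattern, name) is not None

if __name__ == "__main__":
    print(is_valid_name("filter-interface", "tap-rack1:eth1"))  # True
    print(is_valid_name("filter-interface", "1-bad-name"))      # False: must start with a letter
```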

DMF Service Node Verified Scale Values

NetFlow Scale Values

Table 17: Verified NetFlow Scale Values

DMF Service Node: NetFlow| Verified Limits
---|---
Service Node Throughput per port 1| DCA-DM-SC, DCA-DM-SDL: 10 Gbps for IMIX traffic. DCA-DM-SEL: 20 Gbps for IMIX traffic.
Max Packets processed per port| DCA-DM-SC 2: 6.0 million pps per port when 1 port is used; 5.5 million pps per port when 2 ports on the same NIC are used. DCA-DM-SDL 3: 5.5 million pps per port when 1 port is used; 5.0 million pps per port when 4 ports on the same NIC are used; 4.0 million pps per port when 16 ports are used. DCA-DM-SEL 4: 7.5 million pps per port when 1 port is used; 7.0 million pps per port when 2 ports on the same NIC are used; 6.0 million pps per port when 16 ports are used.
Expected NetFlow traffic out of each service node port| 300 Mbps 5
Max Number of Flows supported| 1 million per port of supported managed appliances; 16 million per 16 ports of supported managed appliances.

  1. In push-per-policy mode, a 4-byte internal VLAN tag is added to the traffic and this reduces the maximum bandwidth supported.
  2. DCA-DM-SC Service Node (4x10G) handles 10 Gbps per port with average packet size >= 210 bytes.
  3. DCA-DM-SDL Service Node (16x10G) handles 10 Gbps traffic per port with an average packet size >= 285 bytes.
  4. DCA-DM-SEL Service Node (16x25G) handles 20 Gbps traffic per port with an average packet size >= 68 bytes.
  5. Measured when each service node port sent 1 million flow records at the same time.

Note: All test cases were executed by sending 10 Gbps traffic to supported 10G service node ports with 1 million flows.
Note: All test cases were executed by sending 20 Gbps traffic to DCA-DM-SEL.
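As a rough cross-check of the footnotes above, simple line-rate arithmetic (ignoring Ethernet preamble and inter-frame gap) relates the minimum average packet size to the per-port packet rate at 10 Gbps. The sketch below is illustrative only.

```python
# Rough line-rate arithmetic: at a given link speed, the packet rate implied
# by an average packet size is rate / (size * 8). Preamble/IFG are ignored.
def packets_per_second(link_bps: float, avg_packet_bytes: int) -> float:
    return link_bps / (avg_packet_bytes * 8)

# Footnote 2: DCA-DM-SC sustains 10 Gbps per port at an average size >= 210 bytes.
print(f"{packets_per_second(10e9, 210) / 1e6:.1f} Mpps")  # ~6.0 Mpps, matching the single-port figure
# Footnote 3: DCA-DM-SDL sustains 10 Gbps per port at an average size >= 285 bytes.
print(f"{packets_per_second(10e9, 285) / 1e6:.1f} Mpps")  # ~4.4 Mpps, below the 5.5 Mpps per-port ceiling
```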

IPFIX Scale Values

Table 18: IPFIX Template Used

IPv4 Template| IPv6 Template
---|---
key destination-ipv4-address| key destination-ipv6-address
key destination-transport-port| key destination-transport-port
key dot1q-vlan-id| key dot1q-vlan-id
key source-ipv4-address| key source-ipv6-address
key source-transport-port| key source-transport-port
field flow-end-milliseconds| field flow-end-milliseconds
field flow-end-reason| field flow-end-reason
field flow-start-milliseconds| field flow-start-milliseconds
field maximum-ttl| field maximum-ttl
field minimum-ttl| field minimum-ttl
field packet-delta-count| field packet-delta-count
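The split between key and field entries determines how flows are formed: the keys identify a flow, and the fields are the statistics exported for it. The following minimal Python sketch illustrates that aggregation model only; it is not the Service Node implementation.

```python
# Illustrative sketch: "key" entries from Table 18 identify a flow,
# "field" entries are the per-flow statistics that get exported.
from collections import defaultdict

def aggregate(packets):
    """Group packets by the template keys and accumulate the template fields."""
    flows = defaultdict(lambda: {"packet-delta-count": 0,
                                 "flow-start-milliseconds": None,
                                 "flow-end-milliseconds": None,
                                 "minimum-ttl": 255,
                                 "maximum-ttl": 0})
    for pkt in packets:
        # Flow key per the IPv4 template: dst IP, dst port, VLAN, src IP, src port.
        key = (pkt["dst_ip"], pkt["dst_port"], pkt["vlan"], pkt["src_ip"], pkt["src_port"])
        rec = flows[key]
        rec["packet-delta-count"] += 1
        rec["flow-start-milliseconds"] = rec["flow-start-milliseconds"] or pkt["ts_ms"]
        rec["flow-end-milliseconds"] = pkt["ts_ms"]
        rec["minimum-ttl"] = min(rec["minimum-ttl"], pkt["ttl"])
        rec["maximum-ttl"] = max(rec["maximum-ttl"], pkt["ttl"])
    return flows

pkts = [
    {"dst_ip": "10.0.0.1", "dst_port": 443, "vlan": 10, "src_ip": "10.0.0.2",
     "src_port": 51000, "ts_ms": 1, "ttl": 62},
    {"dst_ip": "10.0.0.1", "dst_port": 443, "vlan": 10, "src_ip": "10.0.0.2",
     "src_port": 51000, "ts_ms": 5, "ttl": 61},
]
print(aggregate(pkts))  # one flow record with packet-delta-count == 2
```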

Note: All test cases were executed by sending 10 Gbps traffic to all supported 10G service node ports with 1 million flows.

Table 19: Verified IPFIX Scale Values

DMF Service Node: IPFIX| IPv4 Verified Limits| IPv6 Verified Limits
---|---|---
Service Node Throughput per port 1| DCA-DM-SC: 10 Gbps for IMIX traffic. DCA-DM-SEL: 20 Gbps for IMIX traffic.| DCA-DM-SC: 10 Gbps for IMIX traffic. DCA-DM-SEL: 11 Gbps for IMIX traffic.
Max Packets processed per port| DCA-DM-SC 2: 7.5 million pps per port when 1 port is used; 7.0 million pps per port when 2 ports on the same NIC are used. DCA-DM-SEL 3: 9.5 million pps per port when 1 port is used; 8.5 million pps per port when 2 ports on the same NIC are used; 7.0 million pps per port when 16 ports are used.| DCA-DM-SC 2: 6.4 million pps per port when 1 port is used; 6.0 million pps per port when 2 ports on the same NIC are used. DCA-DM-SEL 3: 7.5 million pps per port when 1 port is used; 7.5 million pps per port when 2 ports on the same NIC are used; 6.5 million pps per port when 16 ports are used.
Expected IPFIX traffic out of each service node port| 300 Mbps 4| 500 Mbps 4
Max Number of Flows tested per port| DCA-DM-SC: 1 million per port; 4 million when 4 ports are used. DCA-DM-SEL: 16 million when 16 ports are used.| DCA-DM-SC: 1 million per port; 4 million when 4 ports are used. DCA-DM-SEL: 16 million when 16 ports are used.

  1. In push-per-policy mode, a 4-byte internal VLAN tag is added to the traffic, which reduces the maximum supported and recommended bandwidth.
  2. DCA-DM-SC (4x10G) handles 10 Gbps traffic per port with an average packet size >= 160 bytes for IPv4 and >= 190 bytes for IPv6.
  3. DCA-DM-SEL (16x25G) handles 20 Gbps traffic per port with an average packet size >= 68 bytes for IPv4, and 10 Gbps traffic per port with an average packet size >= 218 bytes for IPv6.
  4. Measured when the service node exports IPFIX data packets representing 1 million unique flows with default eviction timers.

Deduplication Verified Scale Values

Table 20: Verified Scale for Deduplication Managed Services

Managed Service| One Service Node Port| 4 Service Node Ports| 16 Service Node Ports
---|---|---|---
Deduplication Maximum Packet Rate Processed| DCA-DM-SC: 2 ms window: 14 million pps; 4, 6 ms window: 13 million pps; 8 ms window: 11 million pps. DCA-DM-SDL: 2 ms window: 14 million pps; 4, 6 ms window: 13 million pps; 8 ms window: 11 million pps. DCA-DM-SEL: 2 ms window: 19 million pps; 4, 6 ms window: 18 million pps; 8 ms window: 16 million pps.| DCA-DM-SC: 2 ms window: 13 million pps per port when 4 ports are used; 4, 6 ms window: 13 million pps per port; 8 ms window: 11 million pps per port. DCA-DM-SEL 1: 2 ms window: 17.5 million pps per port when 2 ports on the same NIC are used; 4, 6 ms window: 16.5 million pps per port; 8 ms window: 15.5 million pps per port.| DCA-DM-SDL: 2, 4, 6, 8 ms window: 8 million pps. DCA-DM-SEL: 2, 4, 6, 8 ms window: 15.5 million unique pps.
Deduplication Maximum Bandwidth by Service Node Port| DCA-DM-SC: 10 Gbps for IMIX traffic; 2 ms window: handles 10 Gbps traffic per port with an average packet size > 70 bytes; 4, 6 ms window: > 76 bytes; 8 ms window: > 94 bytes. DCA-DM-SEL: 20 Gbps for IMIX traffic; 2, 4, 6 and 8 ms window: handles 20 Gbps traffic per port with an average packet size > 70 bytes.| DCA-DM-SC: 40 Gbps for IMIX traffic; 2 ms window: handles 10 Gbps traffic per port with an average packet size > 76 bytes; 4, 6 ms window: > 76 bytes; 8 ms window: > 94 bytes. DCA-DM-SEL 3: 40 Gbps for IMIX traffic; 2, 4, 6 and 8 ms window: handles 40 Gbps traffic with an average packet size > 70 bytes.| DCA-DM-SC: 160 Gbps for IMIX traffic; service node ports handle 10 Gbps traffic per port with an average packet size > 210 bytes. DCA-DM-SEL 3: 320 Gbps for IMIX traffic; service node ports handle 20 Gbps traffic per port with an average packet size > 210 bytes.

  1. The DCA-DM-SEL NIC hardware configuration is 2 ports per NIC card; published numbers represent NIC card performance.
  2. In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum bandwidth supported to 9.7 Gbps.
  3. DCA-DM-SEL's maximum supported bandwidth per port is 20 Gbps.

Note: Tested for 100%, 50%, 20%, and 0% deduplication by sending 10Gbps traffic with different packet sizes.

Header Stripping Verified Scale Values

Table 21: Header Stripping Verified Scale Values

Managed Service| One Service Node Port| 4 Service Node Ports| 16 Service Node Ports
---|---|---|---
Header Stripping Maximum Packet Rate Processed| DCA-DM-SC: 14 million pps per port. DCA-DM-SDL: 12 million pps per port. DCA-DM-SEL: 29 million pps per port.| DCA-DM-SC: 14 million pps per port. DCA-DM-SDL: 8 million pps per port. DCA-DM-SEL: 29 million pps per port.| DCA-DM-SDL: 7.5 million pps per port. DCA-DM-SEL: 14.5 million pps per port.
Header Stripping Maximum Bandwidth by Service Node Port 1| DCA-DM-SC: 10 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 70 bytes. DCA-DM-SEL: 20 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 70 bytes.| DCA-DM-SC: 40 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 70 bytes. DCA-DM-SEL: 40 Gbps 2 for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 70 bytes.| DCA-DM-SC: 160 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 160 bytes. DCA-DM-SEL: 320 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 140 bytes.

  1. In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum bandwidth supported to 9.7 Gbps.
  2. The DCA-DM-SEL NIC hardware configuration is 2 ports per NIC card; published numbers represent NIC card performance.

Table 22: Header Stripping Verified Scale Values

Managed Service| One Service Node Port| 4 Service Node Ports| 16 Service Node Ports
---|---|---|---
Header Stripping Maximum Packet Rate Processed| DCA-DM-SC: 14 million pps per port. DCA-DM-SDL: 12 million pps per port. DCA-DM-SEL: 29 million pps per port.| DCA-DM-SC: 14 million pps per port. DCA-DM-SDL: 8 million pps per port. DCA-DM-SEL: 29 million pps per port.| DCA-DM-SDL: 7.5 million pps per port. DCA-DM-SEL: 14.5 million pps per port.
Header Stripping Maximum Bandwidth by Service Node Port 1| DCA-DM-SC: 10 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 70 bytes. DCA-DM-SEL: 20 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 70 bytes.| DCA-DM-SC: 40 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 70 bytes. DCA-DM-SEL: 40 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 70 bytes.| DCA-DM-SC: 160 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 160 bytes. DCA-DM-SEL: 320 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 140 bytes.

  1. In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum bandwidth supported to 9.7 Gbps.

Note: Tested VXLAN, MPLS, ERSPAN, and LISP encapsulated packets of different sizes at line rate.

Slicing, Masking and Pattern Matching Verified Scale Values
This section summarizes the verified scale values for the following DMF Service Node managed services.

  • Slicing
  • Masking
  • Pattern Matching

Table 23: Verified Scale for Packet Slicing as a Managed Service

Processing rate and supported bandwidth 1| One Service Node Port| 4 Service Node Ports| 16 Service Node Ports
---|---|---|---
Maximum Packet Rate Processed| DCA-DM-SC: 14 million pps per port. DCA-DM-SDL: 14 million pps per port. DCA-DM-SEL: 29.5 million pps per port.| DCA-DM-SC: 13 million pps per port. DCA-DM-SDL: 8 million pps per port. DCA-DM-SEL: 17.5 million pps per port 2.| DCA-DM-SDL: 8 million pps per port. DCA-DM-SEL: 17.5 million pps per port.
Maximum Bandwidth by Service Node| DCA-DM-SC: 10 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 70 bytes. DCA-DM-SEL: 20 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 130 bytes.| DCA-DM-SC: 40 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 70 bytes. DCA-DM-SEL: 40 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 130 bytes. 2| DCA-DM-SC: 160 Gbps for IMIX traffic; handles 10 Gbps traffic per port with an average packet size > 70 bytes. DCA-DM-SEL: 320 Gbps for IMIX traffic; handles 20 Gbps traffic per port with an average packet size > 130 bytes.

  1. In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum bandwidth supported to 9.7 Gbps.
  2. With the regex \d{3}-\d{2}-\d{4} to match/mask/drop packets containing Social Security numbers in a 64-byte packet, DCA-DM-SC can handle 10 million pps. Performance reduces to 5 million pps with a 131-byte packet. With the regex \d{4}[\s\-]\d{4}[\s\-]\d{4}[\s\-]*\d{4} to match/mask/drop packets containing credit card numbers in a 68-byte packet, DCA-DM-SC can handle 7 million pps. Packet size and the position of the match string within the packet influence performance. Performance can be optimized by setting an appropriate l4-payload offset value.

Note: Tested different packet sizes with line rate traffic.

Table 24: Verified Scale for Packet Masking as a Managed Service

Processing rate/ bandwidth supported 1| One / 4 / 16 Service Node Ports
---|---
Maximum Packet Rate Processed| Depending on the regex pattern: DCA-DM-SC supports 40% of 10 Gbps traffic or more per port; DCA-DM-SEL supports 31% 2 of 20 Gbps 3 traffic or more per port.
Maximum Bandwidth by Service Node Port| Depending on the regex pattern: one Service Node port handles about 40% of 10 Gbps traffic or more. To get 10 Gbps performance, use a LAG with 2 or more Service Node ports.

  1. In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum bandwidth supported to 9.7 Gbps.
  2. With the regex \d{3}-\d{2}-\d{4} to match/mask/drop packets containing Social Security numbers in a 64-byte packet, DCA-DM-SEL can handle 11 million pps. With the regex \d{4}[\s\-]\d{4}[\s\-]\d{4}[\s\-]*\d{4} to match/mask/drop packets containing credit card numbers in a 68-byte packet, DCA-DM-SEL supports the masking service at 11 million pps. Packet size and the position of the match string within the packet influence performance. Performance can be optimized by setting an appropriate l4-payload offset value.
  3. Two ports belong to a single NIC card.

Table 25: Verified Scale for Pattern Matching as a Managed Service

Processing rate/ bandwidth supported 1| One / 4 / 16 Service Node Ports
---|---
Maximum Packet Rate Processed| Depending on the regex pattern: one Service Node port handles about 50% of 10 Gbps traffic or more; DCA-DM-SEL supports 36% of 20 Gbps traffic or more per port.
Maximum Bandwidth by Service Node Port| Depending on the regex pattern: one Service Node port handles about 50% of 10 Gbps traffic or more. To get 10 Gbps performance, use a LAG with 2 or more Service Node ports.

  1. In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum bandwidth supported to 9.7 Gbps.

Note: The performance of packet masking or pattern matching depends on the packet length and the complexity of the regular expression used.
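As an illustration of that dependence on the regular expression, the sketch below (not the Service Node implementation) applies the Social Security number and credit card regexes quoted in the footnotes above to a payload, masking matches beyond a configurable l4-payload offset.

```python
# Minimal illustration: mask SSN and credit-card patterns in a payload using
# the regexes quoted in the footnotes above. Not the Service Node implementation.
import re

SSN_RE = re.compile(rb"\d{3}-\d{2}-\d{4}")
CARD_RE = re.compile(rb"\d{4}[\s\-]\d{4}[\s\-]\d{4}[\s\-]*\d{4}")

def mask_payload(payload: bytes, l4_payload_offset: int = 0) -> bytes:
    """Mask pattern matches with 'X' bytes, starting at the configured L4 payload offset."""
    head, body = payload[:l4_payload_offset], payload[l4_payload_offset:]
    for pattern in (SSN_RE, CARD_RE):
        body = pattern.sub(lambda m: b"X" * len(m.group(0)), body)
    return head + body

print(mask_payload(b"acct 1234 5678 9012 3456 ssn 123-45-6789"))
```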

Analytics Node Verified Scale Values

This section lists the tested scalability values for the Analytics Node.

Table 26: Analytics Node Scale Performance Results

 | Single Node Cluster| Three Node Cluster| Five Node Cluster
---|---|---|---
ARP| 20,000 pkts/sec| 60,000 pkts/sec| 100,000 pkts/sec
DHCP| 15,000 pkts/sec| 30,000 pkts/sec| 60,000 pkts/sec
ICMP| 15,000 pkts/sec| 40,000 pkts/sec| 80,000 pkts/sec
DNS| 8,000 pkts/sec| 20,000 pkts/sec| 32,000 pkts/sec
TCPFlow| 6,000 flows/sec| 18,000 flows/sec| 30,000 flows/sec
sFLOW| 12,000 flows/sec| 30,000 flows/sec| 70,000 flows/sec
Netflow v5 without Optimization 1| 15,000 flows/sec| 35,000 flows/sec| 65,000 flows/sec
IPFIX without Optimization 1| 12,000 flows/sec| 34,000 flows/sec| 65,000 flows/sec
Netflow v9 without Optimization 1| 12,000 flows/sec| 34,000 flows/sec| 65,000 flows/sec
All the Above Cases Combined 2| ARP: 800 pkts/sec; DHCP: 500 pkts/sec; ICMP: 300 pkts/sec; DNS: 3,000 pkts/sec; TCPFlow: 300 flows/sec; sFLOW: 3,000 flows/sec; Netflow v5: 5,000 flows/sec| ARP: 1,800 pkts/sec; DHCP: 900 pkts/sec; ICMP: 1,200 pkts/sec; DNS: 6,000 pkts/sec; TCPFlow: 400 flows/sec; sFLOW: 6,000 flows/sec; Netflow v5: 10,000 flows/sec| ARP: 2,000 pkts/sec; DHCP: 1,200 pkts/sec; ICMP: 2,000 pkts/sec; DNS: 8,000 pkts/sec; TCPFlow: 500 flows/sec; sFLOW: 8,000 flows/sec; Netflow v5: 13,000 flows/sec

  1. The Netflow with optimization test cases yields a result of 100,000 flows/sec for a single analytics node cluster. For more details about  Netflow optimization, please refer to the Arista Analytics User Guide.
  2. The rate of traffic chosen is for testing purposes only. In the production network, the rate of traffic for each protocol may vary.

Note: The above test measurements were performed at 60% average CPU utilization.

Recorder Node Verified Scale Values
This section lists the tested performance numbers for the Recorder Node with no-drop packet capture characteristics.

Table 27: Maximum packets recorded on a DCA-DM-RA3 Recorder Node

Packet Size (Bytes)| Packets per second| Maximum Bandwidth (Gbps)
---|---|---
1500 Bytes or greater| ~1.98 million| 24 Gbps
512 Bytes or greater| ~4.7 million| 20 Gbps
IMIX| ~6.3 million| 19 Gbps
256 Bytes or greater| ~8.6 million| 19 Gbps

Note: IMIX is a 7:4:1 distribution of Ethernet-encapsulated packets of sizes 64, 570, and 1518 bytes. This leads to a 353-byte packet-size average.
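The quoted average follows directly from the 7:4:1 mix; a quick illustrative check:

```python
# Weighted average packet size for the IMIX mix described in the note above:
# 7 x 64-byte, 4 x 570-byte, and 1 x 1518-byte packets per 12-packet cycle.
sizes_and_weights = [(64, 7), (570, 4), (1518, 1)]
total_packets = sum(w for _, w in sizes_and_weights)
avg = sum(size * w for size, w in sizes_and_weights) / total_packets
print(f"{avg:.1f}")  # 353.8 bytes, consistent with the ~353-byte average in the note
```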

References

Related Documents

The following documentation is available for DANZ Monitoring Fabric:

  • DANZ Monitoring Fabric Release Notes
  • DANZ Monitoring Fabric User Guide
  • DANZ Monitoring Fabric Deployment Guide
  • DANZ Monitoring Fabric Hardware Compatibility List
  • DANZ Monitoring Fabric Hardware Guide
  • DANZ Monitoring Fabric Verified Scale Guide
  • DANZ Monitoring Fabric SNMP MIB Reference Guide
