Cisco Nexus Dashboard Orchestrator - Cisco Multi-Site Deployment Guide for ACI Fabrics (2024)

Date         Version   Modifications
10/03/2021   1.0       Initial Version
10/14/2021   2.0       Added the “Integrating ACI Multi-Pod and ACI Multi-Site” section; fixed some typos and other small edits

Table 1. Document Version History

Introduction

The main goal of this document is to provide specific deployment and configuration information for multiple Cisco ACI Multi-Site use cases. ACI Multi-Site is the Cisco architecture commonly used to interconnect geographically dispersed data centers and extend Layer 2 and Layer 3 connectivity between those locations, together with a consistent end-to-end policy enforcement.

This paper does not describe in detail the functional components of a Cisco Multi-Site architecture, the specifics of how data-plane communication works, or the control-plane protocols used for exchanging reachability information between ACI fabrics. A prerequisite for making the best use of this deployment guide is to have gone through the white paper describing the overall ACI Multi-Site architecture and its main functionalities, available at the link below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html

This guide is divided into different sections, each tackling a specific deployment aspect:

Provisioning the Inter-Site Network connectivity: this section covers the specific configuration required on the network devices building the ISN infrastructure used to interconnect the different ACI fabrics.

Adding ACI fabrics to a specific Multi-Site domain: this part covers the required infra configuration performed on the Cisco Nexus Dashboard Orchestrator (NDO), previously known as Cisco Multi-Site Orchestrator (MSO), to add different ACI fabrics to the same Multi-Site domain.

Note: Most of the considerations contained in this paper also apply to deployments leveraging the original virtual NDO cluster (available up to software release 3.1(1)). However, given that going forward the Orchestrator is only supported as a service enabled on top of the Cisco Nexus Dashboard compute platform, in the rest of this document we refer solely to Nexus Dashboard Orchestrator (NDO) deployments.

Nexus Dashboard Orchestrator schema and template definition: this section provides guidelines on how to configure schemas and templates in order to provision specific tenant policies. While Nexus Dashboard Orchestrator by design provides a lot of flexibility in how to define those policy elements, the goal is to offer some best practice recommendations.

Establishing endpoint connectivity across sites (east-west): this section focuses on how to deploy EPGs/BDs in different fabrics and establish Layer 2 and Layer 3 intersite connectivity between them. From a security perspective, it covers how to define specific policies between EPGs with the use of security contracts, but also how to simplify the establishment of connectivity by initially removing the policy aspect through the use of Preferred Groups and vzAny.

Establishing endpoint connectivity with the external network domain (north-south): this part focuses on the deployment of the L3Out configuration to ensure endpoints connected to the ACI leaf nodes of different fabrics can communicate with external resources, reachable either via a local L3Out or via a remote L3Out connection (Intersite L3Out).

Network services integration: the focus here is on how to leverage Service Graphs with PBR to ensure network services can be inserted in the communication path between EPGs belonging to the same fabric or to separate fabrics (east-west), or between endpoints connected to ACI and the external network domain (north-south).

The topology used to provision the different use cases mentioned above is shown in Figure 1 below:

Figure 1. Two Fabrics Multi-Site Deployment

The ACI fabrics are connected via an Inter-Site Network (ISN) routed domain that represents the transport for the VXLAN communication happening between endpoints that are part of different fabrics. As a reminder, there is no latency limit between the different ACI fabrics that are part of the same Multi-Site domain. The only latency considerations are:

Up to 150 ms RTT is supported between the Nexus Dashboard cluster nodes where the Orchestrator service is enabled.

Up to 500 ms RTT is supported between each Nexus Dashboard Orchestrator node and the APIC controller nodes of the fabrics that are added to the Multi-Site domain. This means that the Multi-Site architecture has been designed from the ground up to be able to manage ACI fabrics that can be geographically dispersed around the world.

All the use cases described in this paper have been validated using the latest ACI and Nexus Dashboard Orchestrator software releases available at the time of writing. Specifically, the two ACI fabrics are running ACI 5.1(1) code, whereas Nexus Dashboard Orchestrator is running the 3.5(1) release. However, keep in mind that starting from Nexus Dashboard Orchestrator release 2.2(1) there is no interdependency between Nexus Dashboard Orchestrator and ACI software releases, so a Multi-Site deployment using Nexus Dashboard Orchestrator release 3.2(1) (and later) can have fabrics running a mix of software releases (from ACI 4.2(4), which is the first release supported with Nexus Dashboard) as part of the same Multi-Site domain.

Note: The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product.

Provisioning Inter-Site Network Connectivity

The first step in the creation of a Multi-Site domain is the provisioning of the network infrastructure used to interconnect the different ACI fabrics and to carry the VXLAN traffic that allows establishing Layer 2 and Layer 3 connectivity across sites. The ISN is managed neither by APIC nor by the Orchestrator service, so it must be independently pre-provisioned before starting the configuration of the spine nodes that connect each fabric to the ISN infrastructure.

Figure 2. ISN Router Connecting to the Spine Nodes

The interfaces of the ISN devices connecting to the spine nodes in each fabric need to be deployed as point-to-point L3 links establishing routing adjacencies to allow for the exchange of infrastructure (i.e., underlay) prefixes between the spine nodes in different fabrics. The configuration sample below shows a specific example of the interfaces defined on the ISN router in Figure 2 to connect to the spine nodes of the local ACI Pod (the interfaces of other routers would be configured similarly).

Note: The configuration below applies to the deployment of Nexus 9000 switches as ISN nodes. When using different HW platforms, it may be required to slightly modify the specific CLI commands.

ISN-1 Router:

interface Ethernet1/49.4

description L3 Link to Pod1-Spine1

mtu 9150

encapsulation dot1q 4

vrf member ISN

ip address 192.168.1.1/31

ip ospf network point-to-point

ip router ospf ISN area 0.0.0.0

no shutdown

!

interface Ethernet1/50.4

description L3 Link to Pod1-Spine2

mtu 9150

encapsulation dot1q 4

vrf member ISN

ip address 192.168.1.5/31

ip ospf network point-to-point

ip router ospf ISN area 0.0.0.0

no shutdown

As shown above, sub-interfaces must be created on the physical links connecting the router to the spine nodes. This is because the ACI implementation mandates that leaf and spine nodes always generate dot1q-tagged traffic (VLAN tag 4 is always used by the ACI spine nodes when connecting to the external ISN infrastructure). Please notice that those interfaces remain point-to-point Layer 3 links that must be addressed as part of separate IP subnets (the use of a /30 or /31 mask is commonly recommended); this implies that the main requirement for the ISN router is the ability to use the same VLAN tag 4 on sub-interfaces configured on different local links. Most modern switches and routers offer this capability.

The MTU of the interfaces should account for the extra overhead of VXLAN traffic (50 Bytes). The 9150B value shown in the configuration sample above matches the default MTU of the spine sub-interfaces connecting to the external ISN infrastructure, which ensures that the OSPF adjacency can be successfully established. However, it is not necessarily required to support such a large MTU on the ISN routers for intersite communication, as the required minimum value mostly depends on the MTU of the traffic generated by the endpoints connected to the ACI fabric. For more information on this, please refer to the “Intersite Network (ISN) deployment considerations” section of the ACI Multi-Site paper:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html#IntersiteNetworkISNdeploymentconsiderations
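
As a quick sanity check of the MTU provisioning, pings with the “don’t fragment” bit set can be sourced from the ISN node toward the directly connected spine sub-interface (192.168.1.0 is the spine side of the first /31 configured above); once the TEP prefixes are exchanged, the same test toward the TEP addresses of a remote site validates the full intersite path. This is only a hedged sketch: it assumes the spine answers ICMP on that interface, and the way the ping “packet-size” option accounts for IP/ICMP header overhead may vary by platform and release (here 9122 data bytes are used to produce a 9150-byte IP packet).

ISN-1# ping 192.168.1.0 vrf ISN packet-size 9122 df-bit count 5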

The interfaces on the ISN routers can be deployed as part of a dedicated VRF or in the global routing table. Using a dedicated VRF, when possible, is a strong best practice recommendation, both from an operational simplification perspective and to prevent sending to the spine nodes more prefixes than strictly required for the Multi-Site control and data plane (in case the ISN infrastructure is also shared to provide other connectivity services).
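
As a reference, a minimal sketch of the dedicated VRF definition on a Nexus 9000 ISN node is shown below; the only real requirement is that the VRF name matches the “vrf member ISN” statements configured on the sub-interfaces and in the routing protocol configuration of this section (the description line is purely illustrative).

vrf context ISN

  description Dedicated VRF for ACI Multi-Site underlay connectivity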

Starting from ACI release 5.2(3) and Nexus Dashboard Orchestrator release 3.5(1), it is also possible to use BGP for establishing the underlay adjacencies between the spines and the ISN devices. However, in this paper we focus on the use of OSPF, as it has been available since the introduction of the ACI Multi-Site architecture and is widely deployed.

Very often, a different routing protocol (usually BGP) is used between the devices building the core of the ISN network, especially when that infrastructure extends across geographically dispersed locations. This implies the need to redistribute from OSPF into BGP the specific prefixes that must be exchanged across the fabrics that are part of the same Multi-Site domain, which also allows controlling in a very selective way the prefixes that are exchanged across sites. As discussed in more detail in the “Nexus Dashboard Orchestrator Sites Infra Configuration” section, only a handful of prefixes are required for establishing intersite control plane and data plane connectivity:

A BGP EVPN Router-ID for each spine node, used to establish MP-BGP EVPN adjacencies with the remote spine nodes.

An Overlay Unicast TEP (O-UTEP) anycast address for each Pod that is part of the same ACI fabric, used for unicast Layer 2 and Layer 3 data-plane connectivity with the remote sites.

An Overlay Multicast TEP (O-MTEP) anycast address shared between all the Pods that are part of the same ACI fabric, used to receive Layer 2 Broadcast/Unknown Unicast/Multicast (BUM) traffic originated from the remote sites.

One (or more) external TEP pools, used to enable intersite L3Out connectivity with the remote sites.

The original infra TEP pools used for each fabric bring-up (10.1.0.0/16 and 10.2.0.0/16 in the example in Figure 1) do not need to be exchanged across sites and should hence not be redistributed between protocols. The sample below shows an example of redistribution allowing the exchange of the few prefixes listed above (once again, this configuration applies to Nexus 9000 switches deployed as ISN devices):

Define the IP prefix list and route-map to advertise the local prefixes to the remote sites:

ip prefix-list LOCAL-MSITE-PREFIXES seq 5 permit <BGP-EVPN-RID Site1-Spine1>

ip prefix-list LOCAL-MSITE-PREFIXES seq 10 permit <BGP-EVPN-RID Site1-Spine2>

ip prefix-list LOCAL-MSITE-PREFIXES seq 15 permit <O-UTEP-Pod1-Site1>

ip prefix-list LOCAL-MSITE-PREFIXES seq 20 permit <O-MTEP-Site1>

ip prefix-list LOCAL-MSITE-PREFIXES seq 25 permit <EXT-TEP-POOL-Site1>

!

route-map MSITE-PREFIXES-OSPF-TO-BGP permit 10

match ip address prefix-list LOCAL-MSITE-PREFIXES

Define the IP prefix list and route-map to specify the prefixes to be received from the remote sites:

ip prefix-list REMOTE-MSITE-PREFIXES seq 5 permit <BGP-EVPN-RID Site2-Spine1>

ip prefix-list REMOTE-MSITE-PREFIXES seq 10 permit <BGP-EVPN-RID Site2-Spine2>

ip prefix-list REMOTE-MSITE-PREFIXES seq 15 permit <O-UTEP-Pod1-Site2>

ip prefix-list REMOTE-MSITE-PREFIXES seq 20 permit <O-MTEP-Site2>

ip prefix-list REMOTE-MSITE-PREFIXES seq 25 permit <EXT-TEP-POOL-Site2>

!

route-map MSITE-PREFIXES-BGP-TO-OSPF permit 10

match ip address prefix-list REMOTE-MSITE-PREFIXES

Apply the route-maps to redistribute prefixes between OSPF and BGP (and vice versa):

router bgp 3

vrf ISN

address-family ipv4 unicast

redistribute ospf ISN route-map MSITE-PREFIXES-OSPF-TO-BGP

!

router ospf ISN

vrf ISN

redistribute bgp 3 route-map MSITE-PREFIXES-BGP-TO-OSPF
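
Once the route-maps are applied, a quick way to validate the redistribution on the ISN nodes (a hedged example; the exact output obviously depends on the deployment) is to confirm that the local Multi-Site prefixes are learned via OSPF from the spines, that the remote prefixes are received via BGP, and that the latter are injected back into OSPF toward the local spines:

ISN-1# show ip route ospf vrf ISN

ISN-1# show ip bgp vrf ISN

ISN-1# show ip route bgp vrf ISN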

Adding the ACI Fabrics to the Multi-Site Domain

The following sections describe how to add the fabrics that are part of your Multi-Site domain to the Nexus Dashboard Orchestrator.

Onboarding ACI Fabrics to Nexus Dashboard Orchestrator

Once the configuration in the ISN is provisioned, it is then possible to onboard the ACI fabrics to Nexus Dashboard Orchestrator and perform the required infra configuration to ensure each site can be successfully connected to the ISN and establish the required control plane peerings. Specifically, each spine establishes OSPF adjacencies with the directly connected first-hop ISN routers and MP-BGP EVPN peerings with the spine nodes in the remote sites.

Up to Multi-Site Orchestrator release 3.1(1), the site onboarding procedure had to be done directly on MSO. From release 3.2(1) and later, the Nexus Dashboard Orchestrator is only supported as a service running on a Nexus Dashboard compute cluster, and in that case the site onboarding procedure must be performed directly on Nexus Dashboard (the sites are then made available to the hosted services, as is the case for Nexus Dashboard Orchestrator in our specific example). The required infra configuration steps described in the “Nexus Dashboard Orchestrator Sites Infra Configuration” section remain valid also for deployments leveraging older MSO releases.

Figure 3 highlights how to start the process used to onboard an ACI fabric to Nexus Dashboard.

Figure 3. Adding a Site to NDO 3.1(1)

After selecting “Add Site”, the following screen opens up, allowing the user to specify the information for the ACI fabric that needs to be onboarded on NDO.

Figure 4. Specify the ACI Fabric Information

The following information is required to onboard an ACI fabric to Nexus Dashboard:

Site Name: The name used to reference the ACI fabric on Nexus Dashboard.

Host Name / IP Address: The IP address of one of the APIC cluster nodes managing the fabric that is being added. At the time of writing this paper, and when running only the Orchestrator service on top of Nexus Dashboard, it is possible to specify here either the Out-of-Band (OOB) or the In-Band (IB) address of the APIC. When enabling other services on the same ND cluster (as, for example, Insights), it may instead be required to onboard the fabric using the IB address only, so please refer to the specific service installation guide (available on cisco.com) for more information.

User Name and Password: The credentials used to connect to Nexus Dashboard and, through the Single Sign-On functionality, to the UI of the services hosted on Nexus Dashboard.

Login Domain: By default, the user will be locally authenticated on Nexus Dashboard. If instead the desire is to use a specific login domain (RADIUS, TACACS, etc.), the domain name can be defined on Nexus Dashboard and specified in this field.

In-Band EPG: This is only required when hosting services on Nexus Dashboard (like Insights) that use In-Band connectivity for data streaming from this site.

Security Domains:

The last, optional step of the ACI fabric onboarding process consists in dropping a pin for the site on the map, to represent the geographical location of this specific fabric.

Figure 5. Dropping a Pin on the Map to Locate the Site

The same procedure can be repeated for every fabric that needs to be onboarded to Nexus Dashboard. At the end, all these sites are displayed on the Nexus Dashboard Orchestrator UI in the “Unmanaged” state. The user can then selectively set as “Managed” the fabrics that should become part of the same ACI Multi-Site domain.

When a site is set to “Managed”, the user is also asked to enter a specific Site ID, which must uniquely identify that site.

Note: The Site ID is different from the Fabric-ID that is configured at the APIC level. Fabrics configured with the same Fabric-ID can become part of the same Multi-Site domain, as long as they get assigned a unique Site ID.

Figure 6. Displaying the Sites on the NDO UI

Once the fabrics become “Managed”, they will be displayed on the NDO Dashboard.

Figure 7. Connectivity View on NDO Dashboard

As noticed above, the Orchestrator Service can gather information about the health of each APIC controller node for the onboarded sites and the specific faults raised on each APIC domain (with their associated level of criticality). The connectivity between sites shows a warning sign at this point, for the simple reason that the fabrics have just been onboarded to the Orchestrator Service, but the infra configuration steps have not been performed yet to connect each site to the external ISN.

Nexus Dashboard Orchestrator Sites Infra Configuration

After the fabrics have been onboarded to Nexus Dashboard and set as “Managed” on the Orchestrator service, it is required to perform the specific infra configuration that allows connecting each site to the ISN. This is needed so the spines in each fabric can first establish the OSPF adjacencies with the directly connected ISN routers and exchange the ‘underlay’ network information required for successfully establishing intersite control plane and data plane connectivity.

Table 2 below displays the specific information that should be available before starting the infra configuration. For more explanation about the meaning of those different IP prefixes, please refer to the “Intersite Network (ISN) deployment considerations” section of the ACI Multi-Site paper:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html#IntersiteNetworkISNdeploymentconsiderations

Site   Node         Interfaces to ISN   IP Address for Interface to ISN    BGP-EVPN Router-ID   O-UTEP           O-MTEP
1      Spine-1101   e1/63, e1/64        192.168.1.0/31, 192.168.1.2/31     172.16.100.1         172.16.100.100   172.16.100.200
1      Spine-1102   e1/63, e1/64        192.168.1.4/31, 192.168.1.6/31     172.16.100.2         172.16.100.100   172.16.100.200
2      Spine-2101   e1/63, e1/64        192.168.2.0/31, 192.168.2.2/31     172.16.200.1         172.16.200.100   172.16.200.200
2      Spine-2102   e1/63, e1/64        192.168.2.4/31, 192.168.2.6/31     172.16.200.2         172.16.200.100   172.16.200.200

Table 2. IP Address Information for the Infra Configuration of the Sites

The Infra configuration workflow is started by selecting the “Infrastructure” > “Infra Configuration” option on the Nexus Dashboard Orchestrator left tab.

Figure 8. Starting the Sites Infra Configuration Process

After selecting “Configure Infra”, the “Fabric Connectivity Infra” page is displayed to the user.

Figure 9. Infra Configuration General Settings

In the “General Settings” tab we find the “Control Plane Configuration” section, which allows us to tune, if desired, some default parameters used for the MP-BGP EVPN control plane running between the spines belonging to different fabrics. It is recommended to keep the default values for those parameters in the majority of the deployment scenarios. This also applies to the “BGP Peering Type” option: by default, it is set to “full-mesh”, which essentially means that the spine nodes deployed in different sites establish a full mesh of MP-BGP EVPN adjacencies between them. This happens independently from whether the different sites are part of the same BGP Autonomous System Number (ASN) or not. The alternative option is to select “route-reflector”, which is effective only if the fabrics are part of the same ASN. Notice also that the “Graceful Helper” knob is on by default: this enables the BGP Graceful Restart functionality (documented in IETF RFC 4724), allowing a BGP speaker to indicate its ability to preserve its forwarding state during a BGP restart event.

The site infra configuration is instead started by selecting the tab on the left associated with the specific fabric.

Figure 10. Starting the Specific Site Infra Configuration

The configuration is performed in three different steps: Site level, Pod level and Spine node level, depending on what is being selected on the Nexus Dashboard Orchestrator UI.

Site-Level Configuration

When clicking on the main box identifying the whole ACI site, the Orchestrator service gives the capability of configuring the parameters shown in Figure 11.

Figure 11. Site Level Configuration

ACI Multi-Site knob: Turn this on to enable Multi-Site on the fabric and ensure that the control and data plane connectivity with the other sites can be established. This is not required if Nexus Dashboard Orchestrator is deployed only to provide policies locally to each connected site and no intersite communication across the ISN is desired (an ISN is not even needed for this specific scenario).

Overlay Multicast TEP: The anycast TEP address deployed on all the local spine nodes and used to receive BUM (or Layer 3 multicast) traffic originated from a remote site. A single O-MTEP address is associated with an ACI fabric, no matter if it is a single-Pod or a Multi-Pod fabric.

External Routed Domain: This is the routed domain defined on APIC, as part of the Fabric Access policies, for connecting the spine nodes to the ISN. While specifying this parameter on Nexus Dashboard Orchestrator is technically not strictly required, it is a good practice to have the access policies for the spines defined at the APIC level.

BGP Autonomous System Number: The local ASN value that is dynamically pulled from APIC.

OSPF settings (Area ID, Area Type, Policies): Those are the OSPF parameters required to establish OSPF adjacencies between the spines and the directly connected ISN routers.

Pod-Level Configuration

Selecting the box identifying a Pod, it is then possible to access the Pod-specific configuration. Those are the settings that are independently applied to all the Pods that are part of the same Multi-Pod fabric (in our specific example we have a single-Pod fabric).

Figure 12. Pod Level Configuration

Overlay Unicast TEP: The anycast TEP address deployed on all the local spine nodes in the Pod and used to send and receive Layer 2 and Layer 3 unicast traffic flows. Every Pod part of the same Multi-Pod fabric would define a unique TEP address.

External TEP Pools: Prefix(es) required when enabling the intersite L3Out functionality, as discussed in more detail in the “Deploying Intersite L3Out” section.

Spine-Level Configuration

Finally, a specific configuration must be applied to each spine node.

Figure 13. Spine Level Configuration

Ports: Specify the interfaces on the local spine used to connect to the ISN infrastructure. For each interface, the following parameters must be specified.

Figure 14. Port Settings

Ethernet Port ID: The specific interface connected to the ISN. A sub-interface is provisioned to send and receive underlay traffic to/from the ISN.

IP address: The sub-interface’s IP address.

Description: An optional description to be associated with this specific interface.

MTU: The MTU value for the sub-interface. “Inherit” keeps the default value of 9150B (as shown in the CLI output below), but the user can specify a different value if desired.

Spine1011-Site1# show int e1/63.63

Ethernet1/63.63 is up

admin state is up, Dedicated Interface, [parent interface is Ethernet1/63]

Hardware: 10000/100000/40000 Ethernet, address: 0000.0000.0000 (bia 780c.f0a2.039f)

Internet Address is 192.168.1.0/31

MTU 9150 bytes, BW 40000000 Kbit, DLY 1 usec

It is recommended to ensure the value used here matches the MTU configured on the ISN router (please refer to the “Provisioning Inter-Site Network Connectivity” section).

OSPF Policy: References the specific policy created/selected during the Site-level configuration. Usually, it is required to specify that those are OSPF point-to-point interfaces.

OSPF Authentication: Allows enabling authentication (disabled by default).

Note: The OSPF parameters would disappear if OSPF is disabled and different BGP parameters would be shown when enabling the use of BGP for underlay peering between the spines and the ISN devices.

BGP Peering: This knob must be enabled to ensure the spine establishes MP-BGP EVPN peerings with the spines in the remote fabrics. At least two spines per fabric (for the sake of redundancy) should be configured with this knob enabled. The other local spines assume the role of “Forwarders”, which essentially means they establish MP-BGP EVPN adjacencies only with the spines in other Pods of the same ACI fabric that have “BGP Peering” on, and not with the spines deployed in remote ACI fabrics. This can be done to reduce the number of geographical BGP adjacencies without compromising the overall redundancy of the intersite peerings.

Note: In our specific example of a single-Pod fabric with two spines, it is required to enable the “BGP Peering” knob on both spines, to ensure remote prefixes continue to be learned in case of a spine failure scenario. If there were more than two spines deployed in the same Pod, the knob should be enabled only on two of them. The other “Forwarder” spines would learn the remote endpoint information from the local BGP-enabled spines via the COOP control plane.

BGP-EVPN ROUTER-ID: Unique loopback interface deployed on each spine and used to establish MP-BGP EVPN peerings with the local “Forwarders” spines and with the BGP enabled spines deployed in the remote sites. The best practice recommendation is to use an IP address dedicated for the purpose of Multi-Site EVPN peerings, which is routable in the ISN infrastructure.

Important Note: The Router-ID specified here for the spine node will replace the original Router-ID that was allocated from the internal TEP pool during the fabric bring-up. This causes a reset of the MP-BGP VPNv4 sessions established between the spine RRs and the leaf nodes to propagate inside the fabric the external prefixes learned on local L3Out connections, with a consequent temporary outage for the north-south traffic flows. As a consequence, it is recommended to perform this infra configuration task one spine at a time and, preferably, during a maintenance window. The same considerations apply to a site removal scenario, when a fabric must be detached from Nexus Dashboard Orchestrator. It is worth noticing that simply deleting a site from Nexus Dashboard Orchestrator does not cause the removal of the infra L3Out configuration, so the Router-ID previously assigned on NDO will continue to be used. If, however, the infra L3Out is deleted directly on APIC, the Router-ID will be changed back to the original one that is part of the TEP pool, and that would also cause a temporary north-south traffic outage due to the reset of the BGP VPNv4 sessions.
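
After the Router-ID of a spine has been changed (and, similarly, after a site removal), the state of the MP-BGP VPNv4 sessions between the spine route reflectors and the leaf nodes can be double-checked directly from the spine CLI before moving to the next spine. This is a hedged example: the command is run in the infra VRF “overlay-1” and its output format may vary slightly across ACI releases.

Spine1011-Site1# show bgp vpnv4 unicast summary vrf overlay-1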

The last knob is only required if the fabrics in the Multi-Site domain are all part of the same BGP ASN and there is the desire to configure the spine as a Route-Reflector. The best practice recommendation is to keep the default behavior of creating full-mesh peerings, since there are no scalability concerns even when deploying the maximum number of fabrics supported in the same Multi-Site domain.

Verifying Intersite Control and Data Plane Connectivity

After completing and deploying the configuration steps described in the previous sections for all the ACI fabrics that have been onboarded to Nexus Dashboard Orchestrator, the MP-BGP EVPN adjacencies should be established across sites and the VXLAN data plane should also be in a healthy state. This can be verified by looking at the dashboard tab of the NDO UI.

Figure 15. NDO Dashboard

If the connectivity were shown in an unhealthy state, the UI would also indicate whether there is a problem in establishing control-plane adjacencies, an unhealthy VXLAN data plane, or both. This information can be retrieved for each site as part of the “Infra Configuration” section, as shown in the figure below.

Figure 16. Display Connectivity Status for Each Site

As noticed above, both the “Overlay Status” and the “Underlay Status” of each fabric are shown in detail. The user also has the capability of drilling into more detail by clicking the specific value highlighting the routing adjacencies or the tunnel adjacencies. The figure below shows, for example, the detailed “Overlay Routing Status” information.

Figure 17. Overlay Routing Status Detailed Information

Typically, issues in the establishment of control plane or data plane connectivity are due to configuration errors in the ISN that do not allow the successful exchange of reachability information between fabrics. The first required step is hence ensuring that the spine nodes in a site receive the IP prefixes (BGP EVPN Router-ID, O-UTEP, and O-MTEP) from the remote site and vice versa. This can be done by connecting to one of the spine nodes in Site1 as follows:

Spine 1101 - Site1

Spine1011-Site1# show ip route vrf overlay-1

IP Route Table for VRF "overlay-1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

<snip>

172.16.100.1/32, ubest/mbest: 2/0, attached, direct

*via 172.16.100.1, lo16, [0/0], 01w02d, local, local

*via 172.16.100.1, lo16, [0/0], 01w02d, direct

172.16.100.2/32, ubest/mbest: 4/0

*via 10.1.0.64, eth1/34.69, [115/3], 06:36:58, isis-isis_infra, isis-l1-int

*via 10.1.0.67, eth1/61.72, [115/3], 06:36:58, isis-isis_infra, isis-l1-int

*via 10.1.0.68, eth1/33.71, [115/3], 06:36:58, isis-isis_infra, isis-l1-int

*via 10.1.0.69, eth1/57.70, [115/3], 06:36:58, isis-isis_infra, isis-l1-int

172.16.100.100/32, ubest/mbest: 2/0, attached, direct

*via 172.16.100.100, lo21, [0/0], 06:36:59, local, local

*via 172.16.100.100, lo21, [0/0], 06:36:59, direct

172.16.100.200/32, ubest/mbest: 2/0, attached, direct

*via 172.16.100.200, lo20, [0/0], 06:36:59, local, local

*via 172.16.100.200, lo20, [0/0], 06:36:59, direct

172.16.200.1/32, ubest/mbest: 1/0

*via 192.168.1.3, eth1/64.64, [110/4], 01w02d, ospf-default, intra

172.16.200.2/32, ubest/mbest: 1/0

*via 192.168.1.1, eth1/63.63, [110/4], 01w02d, ospf-default, intra

172.16.200.100/32, ubest/mbest: 2/0

*via 192.168.1.1, eth1/63.63, [110/4], 06:37:51, ospf-default, intra

*via 192.168.1.3, eth1/64.64, [110/4], 00:00:35, ospf-default, intra

172.16.200.200/32, ubest/mbest: 2/0

*via 192.168.1.1, eth1/63.63, [110/4], 06:37:46, ospf-default, intra

*via 192.168.1.3, eth1/64.64, [110/4], 00:00:35, ospf-default, intra

Note: The assumption is that proper filtering is performed on the first-hop ISN routers to ensure that only the required IP prefixes are exchanged between sites. For more information on the required configuration, please refer to the “Provisioning Inter-Site Network Connectivity” section.
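
Beyond the routing table, the underlay and overlay adjacencies can also be checked directly on the spine nodes; the commands below are a hedged example (run in the infra VRF “overlay-1”; output format may vary by release): the first one verifies the OSPF adjacencies toward the directly connected ISN routers, while the second one verifies the MP-BGP EVPN sessions established with the BGP-enabled spines in the remote site.

Spine1011-Site1# show ip ospf neighbors vrf overlay-1

Spine1011-Site1# show bgp l2vpn evpn summary vrf overlay-1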

Nexus Dashboard Orchestrator Tenant, Schema and Template Definition

Once the site onboarding and the infra configuration steps are completed, it is possible to start establishing secure communication between endpoints connected to the different ACI fabrics. To do that, it is first necessary to create a Tenant and deploy it on all the fabrics that require intersite connectivity. By default, only two tenants (infra and common) are pre-defined on Nexus Dashboard Orchestrator and automatically associated to all the sites that were previously set as “Managed”.

Notice that there are no schemas associated to those tenants by default, so if it is desirable to utilize some of the common/infra policies that are normally available by default on APIC, it is required to import those objects from the different APIC domains into Nexus Dashboard Orchestrator. Covering the import of existing policies from the APIC domains is out of the scope of this paper.

The focus for the rest of this section is on the creation of policies for new tenants and their provisioning across the fabrics that are part of the Multi-Site domain. The first step in doing so is to create a new Tenant and associate it to all the sites where it should be deployed (Figure 18).

Figure 18. Adding a New Tenant

After selecting the “Add Tenant” option, it is possible to configure the tenant’s information and specify the sites where the tenant should be created. In the example in Figure 19, the newly created tenant is mapped to both the sites that were previously onboarded on the Orchestrator Service.

Figure 19. Mapping a Tenant to Different Fabrics

Notice also how the screen above gives you the possibility of associating specific users to this newly created tenant to allow them to manage the tenant’s configuration (by default only the admin user is associated to the tenant). For more information on the supported user roles and configuration, please refer to the configuration guide at the link below:

https://www.cisco.com/c/en/us/td/docs/dcn/ndo/3x/configuration/cisco-nexus-dashboard-orchestrator-configuration-guide-aci-341.html

As a result of the configuration shown above, Tenant-1 is created on both Site1 and Site2. However, this is still an “empty shell,” as no policies have been defined yet for this tenant to be provisioned on the fabrics. The definition of tenant policies requires the creation of specific configuration constructs called “Schemas” and “Templates.” For a more detailed discussion of what those constructs represent and the associated deployment guidelines, please refer to the “Cisco ACI Multi-Site Architecture” section of the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html#CiscoACIMultiSitearchitecture

In our example we are going to define a specific schema (named “Tenant-1 Schema”) to be used as a repository of all the templates associated to Tenant-1.

Figure 20. Creating a New Schema

Figure 21. Assigning the Name to the Schema

Since for the use cases discussed in the rest of this paper we need to deploy both policies that are local to each site and policies that are common to both sites (i.e., ‘stretched’ policies), we are simply going to use three templates:

Template-Site1 to deploy policies only local to Site1.

Template-Site2 to deploy policies only local to Site2.

Template-Stretched to deploy policies common to Site1 and Site2 (stretched policies).

Note: It is important to keep in mind that objects with the same name that should exist in different fabrics part of the same ACI Multi-Site domain should always and only be provisioned from templates associated to all those sites. The only exception could be EPGs deployed as part of different Application Profiles, which could have overlapping names. Even in this case, the best practice recommendation is to provision site-local EPGs with unique names, for the sake of operational simplification.

Each of the above templates must be associated to the Tenant-1 tenant, as shown in Figure 22.

Figure 22. Create a Template Mapped to Tenant-1

Once the same operation has been completed for the other templates, it is then possible to associate each template to the corresponding ACI site.

Figure 23. Associate the Templates to the ACI Sites

When this last step is completed, we are ready to start defining the specific configuration policies to be pushed to the different sites to implement the use cases described in the following sections.

Intersite Connectivity Between Endpoints

The first two use cases that we are considering are the ones allowing to establish intra-EPG and inter-EPG connectivity between endpoints connected to separate fabrics. We usually refer to those use cases as “east-west” connectivity.

Intra-EPG Connectivity Across Sites

To establish intra-EPG connectivity across sites, it is required to define objects in the Template-Stretched, which allows rendering those items in both fabrics. There are a couple of different scenarios that can be deployed. The first one is shown in Figure 24, where the EPG is mapped to a stretched BD and the IP subnet(s) associated to the BD is also stretched across sites. This implies that intra-subnet communication can in this case be enabled between endpoints connected to different sites.

Figure 24. Stretched EPG, Stretched BD and Stretched Subnet

The second scenario is instead depicted in Figure 25. In this case the EPG is still stretched across sites, but the BD and the subnet(s) are not stretched, which essentially implies that intra-EPG communication between endpoints connected to separate sites will be Layer 3 and not Layer 2 (as it was in the previous case).

Figure 25. Stretching the EPG without Stretching BD and Subnet

The following sections highlight the specific configuration steps required to provision the policies that enable the two communication patterns shown above.

Creating a Stretched VRF

The first step for enabling intra-EPG communication across sites consists in creating and deploying the VRF to which the EPG (or, better, its BD) is associated. This VRF must be configured as part of the Template-Stretched, since its configuration must be provisioned in both ACI fabrics.

Figure 26. Creating a New VRF in Template-Stretched

Figure 27 highlights the various configuration parameters available when creating a new VRF on the NDO GUI.

Figure 27. VRF Configuration Parameters

The “Policy Control Enforcement Preference” is always set to “Enforced” and grayed out, as it is the only VRF configuration supported with Multi-Site. The only reason for exposing the knob is for brownfield scenarios where a VRF configuration is imported from APIC into Nexus Dashboard Orchestrator; if the VRF on APIC is configured as “Unenforced”, the user then has the capability to modify the setting to “Enforced” directly on NDO or to keep it “Unenforced”, with the specific understanding that such a configuration would not allow establishing intersite communication. There are other supported functionalities (i.e., the use of Preferred Groups or vzAny) allowing to remove the policy enforcement for inter-EPG communication, as will be discussed in more detail in the “Inter-EPGs Connectivity across Sites” section.

The other default setting for a newly created VRF is the enablement of “IP Data-Plane Learning”. There are only specific scenarios where this setting needs to be changed, usually related to use cases where an IP address may get associated with different MAC addresses (active/active server NIC teaming options, clustered application services, certain FW/SLB cluster options, etc.). For more information on this, please refer to the ACI design guide available at the link below:

https://www.cisco.com/c/en/us/td/docs/dcn/whitepapers/cisco-application-centric-infrastructure-design-guide.html

Once the VRF configuration is completed, it is possible to deploy the template to ensure that the VRF is created on both APIC domains that are associated with the Template-Stretched.

Figure 28. Deploying the Template-Stretched to Create the VRF on the APIC Domains

Before the configuration is pushed to the APIC domains, the NDO GUI provides a summary of the objects that will be created and where (in this case only VRF1, on both Site1 and Site2).

Figure 29. VRF1 Being Pushed to Site1 and Site2

Starting from NDO release 3.4(1), a new functionality named “Template Deployment Plan” became available. By selecting the corresponding button shown in the figure above, graphical (and XML based) information is displayed to show in detail what objects are provisioned by the Orchestrator (and in which sites) as a result of the deployment of the template. In this simple scenario, the Deployment Plan only shows that the VRF has been created in both sites (since the template that is being deployed is associated to both sites).

Figure 30. Template Deployment Plan (Graphical View)

Selecting the “View Payload” option shown above allows you instead to view the XML format of the REST API call that the Orchestrator will make to the APIC controllers in each site as a result of the deployment of the template.

Figure 31. Deployment Plan (XML View)
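
Once the template has been deployed, the creation of the VRF can also be spot-checked from the CLI of the APIC controllers in each site using the object model query tool. This is a hedged example (the “APIC-Site1#” prompt is illustrative): fvCtx is the APIC class representing a VRF, and the grep simply filters the distinguished name and name attributes of the returned objects.

APIC-Site1# moquery -c fvCtx | grep -E "dn|name"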

Creating a Stretched Bridge Domain and a Stretched Subnet

The stretched BD required to implement the use case previously shown in Figure 24 must be defined inside the Template-Stretched.

Figure 32. Creating a Stretched BD in Template-Stretched

Figure 33. Stretched BD (Configuration Parameters)

As noticed in Figure 33 above, the BD must be associated with the stretched VRF1 previously defined.

The BD is stretched by setting the “L2 Stretch” knob. In most use cases, the recommendation is instead to keep the “Intersite BUM Traffic Allow” knob disabled, as it is strictly required only in specific scenarios where flooding should be enabled between sites. This is the case, for example, for legacy-to-ACI migration use cases (until the default gateway for the endpoints is migrated to ACI) or for deployments where L2 multicast streams must be sent across sites. The other knobs controlling flooding can usually be kept at their default values.

Since the BD is stretched, the BD subnet is also defined at the template level, as it must be extended across sites as well.

Figure 34. Define the BD’s Subnet IP Address

Once the BD configuration is completed, it is possible to deploy the Template-Stretched to the ACI fabrics.

Figure 35. Deploying the Stretched BD to Site1 and Site2

As the end result, the BD is created on both APIC domains, with the same anycast gateway 10.10.3.254/24 defined on all the leaf nodes where VRF1 is deployed.

Figure 36. BD-Stretched with Stretched Subnet Created on APIC in Site1

Creating a Non-Stretched Bridge Domain with a Non-Stretched Subnet

This specific configuration is required to implement the use case previously shown in Figure 25, where the EPG is stretched but the BD is not. Since an EPG can only be associated with a single BD, we need to ensure that the same BD object is created in both sites, even if the forwarding behavior of the BD is to be non-stretched. This can be achieved by deploying the BD inside the Template-Stretched and configuring it as shown in Figure 37.

Figure 37. Non-Stretched BD2 Deployed Across Sites (Configuration Parameters)

The BD is associated with the same stretched VRF1 previously defined.

The BD must be configured with the “L2 Stretch” knob disabled, as we don’t want to extend the BD subnet nor allow L2 communication across sites.

The BD’s subnet field is grayed out at the template level; this is because for this specific use case the goal is to provide a separate IP subnet to the BD deployed in each site. The subnet is hence configured at the site level (for each site to which the Template-Stretched is associated), as shown in Figure 38 and Figure 39.

Figure 38. Define the BD’s Subnet at the Site1 Level

Figure 39. BD’s Subnet for Endpoints Connected to Site1

The same configuration should be applied for the same BD at the Site2 level, which allows configuring a separate IP subnet to be used for the endpoints that are connected to Site2 (Figure 40).

Figure 40. BD’s Subnet for Endpoints Connected to Site2

Once a specific subnet has been provisioned at the site level for each ACI fabric and the template has been deployed, it is possible to verify directly on the APIC domains the result of the configuration. As noticed in Figure 41, the BD in Site1 is configured with both IP subnets, but only the specific one that was configured at the Site1 level on Nexus Dashboard Orchestrator (10.10.4.0/24) is going to be used to provide default gateway services for the endpoints. The other IP subnet (10.10.5.0/24), also referred to as the “Shadow Subnet”, is automatically provisioned with the “No Default SVI Gateway” parameter, since it is only installed on the leaf nodes in Site1 to allow routing to happen across the sites when endpoints part of the same EPG want to communicate (we’ll look at the leaf node routing table in the “Creating the Stretched EPGs” section).

Figure 41. BD’s Subnets Configured on APIC in Site 1

The exact opposite considerations apply to the same BD deployed on the APIC nodes in Site2, as highlighted in Figure 42 below.

Figure 42. BD’s Subnets Configured on APIC in Site 2

Note: The “Shadow Subnet” is always provisioned with the “Private to VRF” scope, independently from the specific settings the same subnet had in the original site. This means that it won’t ever be possible to advertise the “Shadow Subnet” prefix out of an L3Out in the site where it is instantiated. For advertising a BD subnet out of the L3Outs of different sites, it is required to deploy the BD with the “L2 Stretch” flag set.

Creating the Stretched EPGs

The last step consists in creating the two EPGs (EPG1-Stretched and EPG2-Stretched) previously shown in Figure 24 and Figure 25. Since those are stretched objects, they will be defined in the Template-Stretched and then pushed to both ACI sites.

Figure 43. Creating Stretched EPGs

As shown above, each EPG is mapped to the BD previously created, depending on the specific use case that needs to be implemented. Once the EPGs have been created, the next logical step is to specify what type of endpoints should become part of those EPGs. ACI allows connecting endpoints of a different nature to the same EPG: bare-metal servers, virtual machines, containers, etc. The type of endpoints to be used is specified by mapping the EPG to a specific domain (physical domain, VMM domain, etc.). Those domains are created at the APIC level for each fabric that is part of the Multi-Site domain, but they then get exposed to the Orchestrator Service so that the EPG-domain mappings can be provisioned directly through the Orchestrator Service (at the site-specific level, since each fabric can expose its own locally defined domains).

Note: How to create domains on APIC is out of the scope of this paper. For more information, please refer to the ACI configuration guides below:

https://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html

Figure 44 shows the example of mapping EPG2-Stretched to a physical domain in Site1 and the corresponding static port configuration required for those physical endpoints. This configuration must be performed at the site level, since it specifically references a physical domain that is locally defined in that APIC domain.

Figure 44. Static Port and Physical Domain Configuration for EPG2-Stretched

After selecting “Add Domain”, it is then possible to specify the specific physical domain this EPG should be mapped to. There are different options to select for what concerns the “Deployment Immediacy” and “Resolution Immediacy”. For more information on the meaning of those options, please refer to the ACI configuration guides referenced above.

Figure 45. Mapping EPG2-Stretched to a Physical Domain

The static port configuration then allows specifying the specific port (vPC1) and VLAN encapsulation (VLAN 100) to be used to connect the physical endpoint to the ACI fabric and make it part of the EPG2-Stretched group.

Figure 46. Static Port Configuration for a Physical Endpoint

Finally, before the physical domain mapping configuration is pushed to the APIC in Site1, the Nexus Dashboard Orchestrator GUI displays the specific changes that will be applied when hitting “Deploy”, so that the admin can verify those actions reflect the desired intent.

Figure 47. Reviewing the Changes to be Deployed to Site1

Following a similar procedure, it is possible to map EPG2-Stretched to a specific domain in Site2, for example a VMM domain. Doing so would then automatically provision a corresponding port-group on the ESXi hosts managed by the vSphere server that is peered with APIC, so that the virtual machines that represent endpoints part of EPG2-Stretched can be connected to it.

Verifying Intra-EPG Communication

Once the endpoints are connected, they are locally discovered by the leaf nodes.

Figure 48. Endpoints Connected to Stretched EPGs

This information can be retrieved directly from the APIC in each site (as part of the operational tab of the EPG) and also through the CLI on each specific leaf node, as shown below for Site1 and Site2:

Leaf103 - Site1

Leaf103-Site1# show endpoint vrf Tenant-1:VRF1

Legend:

s - arp H - vtep V - vpc-attached p - peer-aged

R - peer-attached-rl B - bounce S - static M - span

D - bounce-to-proxy O - peer-attached a - local-aged m - svc-mgr

L - local E - shared-service

+--------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+--------------------------------+---------------+-----------------+--------------+-------------+

10 vlan-100 0050.56b9.3e72 LV po1

Tenant-1:VRF1 vlan-100 10.10.4.1 LV po1

Leaf301 - Site2

Leaf301-Site2# show endpoint vrf Tenant-1:VRF1

Legend:

s - arp H - vtep V - vpc-attached p - peer-aged

R - peer-attached-rl B - bounce S - static M - span

D - bounce-to-proxy O - peer-attached a - local-aged m - svc-mgr

L - local E - shared-service

+--------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+--------------------------------+---------------+-----------------+--------------+-------------+

42 vlan-136 0050.5684.48b0 LpV po2

Tenant-1:VRF1 vlan-136 10.10.5.1 LpV po2

Communication between those endpoints can be freely established across sites since they are part of the same EPG2-Stretched group. When looking at the routing table of the leaf nodes where the endpoints are connected, it is possible to notice how the IP subnet for the local endpoint is locally instantiated (with the corresponding anycast gateway address) and also how the IP subnet for the endpoints in the remote site is locally instantiated, pointing to the proxy-VTEP address of the local spines as the next-hop (10.1.112.66).

Leaf103 - Site1

Leaf103-Site1# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.4.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:09:58, static, tag 4294967294

10.10.4.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.4.254, vlan10, [0/0], 00:09:58, local, local

10.10.5.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:09:58, static, tag 4294967294

This is the result of the configuration pushed to APIC and shown previously in Figure 41 (the opposite configuration is provisioned on the leaf nodes in Site2); the subnet entry in the routing table is used to forward routed traffic across sites until the leaf nodes can learn the specific IP address of the remote endpoints via data plane learning.

The CLI output below shows instead the endpoint tables on the leaf node in Site1 once the data plane learning of the remote endpoint in Site2 has happened (similar output would be obtained for the leaf node in Site2). The next-hop of the VXLAN tunnel to reach the remote endpoint is represented by the O-UTEP address of the remote fabric (172.16.200.100).

Leaf103 - Site1

Leaf103-Site1# show endpoint vrf Tenant-1:VRF1

Legend:

S - static s - arp L - local O - peer-attached

V - vpc-attached a - local-aged p - peer-aged M - span

B - bounce H - vtep R - peer-attached-rl D - bounce-to-proxy

E - shared-service m - svc-mgr

+-----------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+-----------------------------------+---------------+-----------------+--------------+-------------+

Tenant-EFT:VRF1 10.10.5.1 tunnel39

13 vlan-883 0050.56b9.1bee LV po1

Tenant-EFT:VRF1 vlan-883 10.10.4.1 LV po1

Leaf103 Site1

Leaf103-Site1# show interface tunnel 39

Tunnel39 is up

MTU 9000 bytes, BW 0 Kbit

Transport protocol is in VRF "overlay-1"

Tunnel protocol/transport is ivxlan

Tunnel source 10.1.0.68/32 (lo0)

Tunnel destination 172.16.200.100/32

Similarly, communication can be freely achieved between endpoints connected to the EPG1-Stretched group. The only difference is that those endpoints are part of the same IP subnet (10.10.3.254/24) that is stretched across sites. As a consequence, Layer 2 bridging, and not Layer 3 routing, is what allows establishing communication between them, as can be noticed by looking at the endpoint table below:

Leaf101 Site1

Leaf101-Site1# show endpoint vrf Tenant-1:VRF1

Legend:

s - arp H - vtep V - vpc-attached p - peer-aged

R - peer-attached-rl B - bounce S - static M - span

D - bounce-to-proxy O - peer-attached a - local-aged m - svc-mgr

L - local E - shared-service

+--------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+--------------------------------+---------------+-----------------+--------------+-------------+

1/Tenant-1:VRF1 vxlan-16154555 0050.56a2.380f tunnel26

3 vlan-886 0050.56b9.54f3 LV po1

In this case, only the MAC address of the remote endpoint is learned on the local leaf node, together with the information that it is reachable through a VXLAN tunnel (tunnel26). Not surprisingly, the VXLAN tunnel is established also in this case between the VTEP of the local leaf node and the O-UTEP address of Site2 (172.16.200.100).

Leaf101 Site1

Leaf101-Site1# show interface tunnel 26

Tunnel26 is up

MTU 9000 bytes, BW 0 Kbit

Transport protocol is in VRF "overlay-1"

Tunnel protocol/transport is ivxlan

Tunnel source 10.1.0.68/32 (lo0)

Tunnel destination 172.16.200.100/32

As you may have noticed, no specific security policy was required to enable communication between endpoints that are part of the same EPG. This is the default behavior for ACI, which always allows free intra-EPG communication. Communication between endpoints part of EPG1-Stretched and EPG2-Stretched won’t instead be allowed by default, because of the zero-trust security approach delivered by ACI. The “Inter-EPG Connectivity across Sites” section will cover in great detail how this communication can be allowed.

Verifying Namespace Translation Information for Stretched Objects

The ACI Multi-Site architecture allows interconnecting sites that represent completely different namespaces, since policies are locally instantiated by different APIC clusters. This essentially means that when specific resources (like L2 VNIDs for BDs, L3 VNIDs for VRFs, or class IDs for EPGs) are assigned by each APIC controller, their values would be different in each site even if the objects are stretched across sites (and hence represent the same logical items).

Since VXLAN traffic used to establish intersite communication carries this type of information in the VXLAN header, a translation (or namespace normalization) function must be performed on the receiving spine to ensure successful end-to-end communication and consistent policy application.

Note: The namespace normalization function is not required only for stretched objects, but also when creating relationships between local objects defined in different sites. For more information, please refer to the “Cisco ACI Multi-Site architecture” section of the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html#CiscoACIMultiSitearchitecture
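Conceptually, the translation table on the receiving spine behaves like a lookup keyed on the remote site ID and the remote VNID (or class ID) carried in the VXLAN header, returning the locally significant value. The snippet below is only an illustrative model of that behavior (the values mirror the spine outputs shown later in this section); it is not an ACI API.

```python
# Illustrative model of the spine namespace-normalization lookup.
# Keys: (remote site ID, remote VNID); values: locally significant VNID.
# The numbers mirror the 'show dcimgr repo vnid-maps' output on the Site1 spine.
vnid_map = {
    (2, 2359299): 3112963,    # VRF1: Site2 L3 VNID -> Site1 L3 VNID
    (2, 16252857): 16154555,  # BD1-Stretched: Site2 L2 VNID -> Site1 L2 VNID
}

def normalize_vnid(remote_site: int, remote_vnid: int) -> int:
    """Return the local VNID to use for VXLAN traffic received from a remote site."""
    return vnid_map[(remote_site, remote_vnid)]

# A packet from Site2 carrying the VRF1 L3 VNID is rewritten to the local value
assert normalize_vnid(2, 2359299) == 3112963
```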

For our specific scenario of intersite intra-EPG communication, we can then verify how the translation entries are properly configured on the spine nodes receiving VXLAN traffic from the remote site. Figure 49 shows the specific values assigned to the stretched objects created in the APIC domain of Site1.


Figure 49.

Class IDs, Segment IDs and Scopes for Objects Created on Site1

Figure 50 displays instead the values for the same objects assigned by the APIC controller in Site2.


Figure 50.

Class IDs, Segment IDs and Scopes for Objects Created on Site2

As can be easily noticed comparing the information in the two figures above, the values assigned to the stretched objects (EPG, BD, or VRF) in Site1 differ from the values provisioned in Site2. This is where the translation function performed by the spines comes into the picture to “normalize” those values and allow successful data plane connectivity across sites.
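The per-site values shown in Figures 49 and 50 can also be read programmatically from each APIC, since the segment IDs and class IDs are exposed as attributes of the corresponding managed objects (fvCtx seg/scope, fvBD seg, fvAEPg pcTag). The sketch below is only an illustration of that kind of check; the APIC addresses, credentials, and the application profile name (“AP1”) are assumptions, not values defined in this guide.

```python
import requests

def login(apic: str, user: str, pwd: str) -> requests.Session:
    """Authenticate against an APIC and return a session carrying the auth cookie."""
    s = requests.Session()
    s.verify = False  # lab only
    s.post(f"{apic}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}).raise_for_status()
    return s

def get_attr(s: requests.Session, apic: str, dn: str, cls: str, attr: str) -> str:
    """Read one attribute (seg, scope, pcTag, ...) of a managed object by DN."""
    data = s.get(f"{apic}/api/mo/{dn}.json").json()["imdata"][0]
    return data[cls]["attributes"][attr]

# Hypothetical APIC addresses/credentials; "AP1" is a placeholder application profile
sites = {"Site1": "https://apic-site1.example.com",
         "Site2": "https://apic-site2.example.com"}
objects = [
    ("uni/tn-Tenant-1/ctx-VRF1", "fvCtx", "seg"),                        # VRF L3 VNID
    ("uni/tn-Tenant-1/ctx-VRF1", "fvCtx", "scope"),                      # VRF scope
    ("uni/tn-Tenant-1/BD-BD1-Stretched", "fvBD", "seg"),                 # BD L2 VNID
    ("uni/tn-Tenant-1/ap-AP1/epg-EPG1-Stretched", "fvAEPg", "pcTag"),    # EPG class ID
]

for name, apic in sites.items():
    s = login(apic, "admin", "password")
    for dn, cls, attr in objects:
        print(name, dn, attr, get_attr(s, apic, dn, cls, attr))
```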

The outputs below show the entries on a spine of Site1 and a spine of Site2 that allow translating the Segment ID and Scope of the BD and VRF that are stretched across sites. You can notice how translation mappings are created for VRF1 and BD1-Stretched, since those are stretched objects, but not for BD2, which is not “L2 stretched”.

Spine1101 Site1

Spine1101-Site1# show dcimgr repo vnid-maps

--------------------------------------------------------------

Remote | Local

site Vrf Bd | Vrf Bd Rel-state

--------------------------------------------------------------

2 2359299 | 3112963 [formed]

2 2359299 16252857 | 3112963 16154555 [formed]

Spine401 Site2

APIC-Site2# fabric 401 show dcimgr repo vnid-maps

----------------------------------------------------------------

Node 401 (spine1-a1)

----------------------------------------------------------------

--------------------------------------------------------------

Remote | Local

site Vrf Bd | Vrf Bd Rel-state

--------------------------------------------------------------

1 3112963 | 2359299 [formed]

1 3112963 16154555 | 2359299 16252857 [formed]

The outputs below display instead the translation entries on the spine nodes in Site1 and Site2 for the policy information (i.e., class IDs) of the VRF, BDs, and EPGs. It is worth noticing how in one case (for VRF1) the local and remote class ID values are actually the same (49153). In other cases, the same class ID value is used in the two fabrics for different purposes: for example, 49154 represents the class ID of EPG2-Stretched in Site1 and also the class ID of EPG1-Stretched in Site2. This reinforces the point that each APIC domain assigns values with local significance, and hence the namespace normalization function is needed to allow successful intersite communication.

Spine1101 Site1

Spine1101-Site1# show dcimgr repo sclass-maps

----------------------------------------------------------

Remote | Local

site Vrf PcTag | Vrf PcTag Rel-state

----------------------------------------------------------

2 2916358 16386 | 2129927 32770 [formed]

2 2818056 16387 | 2916360 16386 [formed]

2 2359299 49155 | 3112963 49154 [formed]

2 2359299 49153 | 3112963 49153 [formed]

2 2359299 49154 | 3112963 16387 [formed]

Spine401 Site2

Spine401-Site2# show dcimgr repo sclass-maps

----------------------------------------------------------

Remote | Local

site Vrf PcTag | Vrf PcTag Rel-state

----------------------------------------------------------

1 3014657 32770 | 2326532 16386 [formed]

1 2916360 16386 | 2818056 16387 [formed]

1 3112963 49154 | 2359299 49155 [formed]

1 3112963 16387 | 2359299 49154 [formed]

1 3112963 49153 | 2359299 49153 [formed]

Inter-EPG Connectivity Across Sites (Intra-VRF)

The first use case to consider for the establishment of intersite connectivity between endpoints connected to different EPGs is the one displayed in Figure 51, which applies to the intra-VRF scenario.


Figure 51.

Inter-EPG Connectivity Across Sites (Intra-VRF) Use Case

Differently from the stretched EPG use cases previously described, in this case the EPG/BD objects are locally provisioned in each site, and connectivity between them must be established by creating a specific security policy (i.e., a contract) specifying what type of communication is allowed. It is worth noticing that similar considerations to what is described in the following section would apply for establishing connectivity between EPGs, independently from the fact that they are locally deployed or stretched (Figure 52).


Figure 52.

Inter-EPG Communication Between Local and/or Stretched EPGs

Creating Site Local EPGs/BDs

The creation of site-local EPGs/BDs is similar to what is described in the stretched EPG use case. The main difference is that those objects should be defined in templates that are only associated with the specific ACI fabrics where the policies should be provisioned. Figure 53 displays the creation of EPG1-S1 and BD1-S1 that need to be provisioned only in Site1 (a similar configuration is needed in the template associated with Site2 for the local EPG/BD objects).


Figure 53.

EPGs/BDs Defined in a Site-Specific Template

Notice that inter-template references would hence be needed, for example, to map local BDs to the previously deployed stretched VRF. Nexus Dashboard Orchestrator allows cross-referencing objects across templates defined in the same schema or even across different schemas.


Figure 54.

Local BD and EPG Configuration

After defining the local EPG and BD objects, it is required to perform the same site-local configuration discussed for the stretched EPG use cases: the BD subnet is assigned at the site-local level (since the BDs are localized) and there is the requirement to map the local EPG to a local domain (Physical, VMM, etc.).

Applying a Security Contract between EPGs

Once the local EPG/BD objects are created in each fabric, to establish communication between them it is required to apply a security policy (contract) allowing all traffic or specific protocols. The contract and the associated filter(s) can be defined in the Template-Stretched, so as to make them available to both fabrics.


Figure 55.

Create a Contract in the Template-Stretched

The contract must reference one or more security filters, used to specify what traffic should be allowed. Notice how it is also possible to create a filter with a “Deny” entry (“Permit” was the only option available in older releases).


Figure 56.

Define a Deny/Permit Filter Associated to the Contract

The last step consists in creating the specific filter’s entry used to define the traffic flows that should be permitted (or denied). In the specific example in Figure 57 below, we simply use the default settings that translate to matching all traffic.


Figure 57.

Create the Filter’s Entry to Deny/Permit Traffic

Once the contract with the associated filters is ready, it is possible to define the EPG that “provides” the contract and the EPG that “consumes” it. The best practice recommendation when using contracts with ACI Multi-Site is to always clearly identify a provider and a consumer side for all the contracts that are used. This is critical especially when the goal is to attach a Service-Graph to the contract, as discussed in detail in the “Service Node Integration with ACI Multi-Site” section. For more detailed information on the use of ACI contracts, please refer to the document below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-743951.html
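The contract and filter shown in Figures 55 through 57 are provisioned from the NDO GUI. For reference only, the sketch below shows roughly what the equivalent object model looks like once rendered on each APIC (standard APIC-style JSON with the vzFilter/vzBrCP classes); treat the payload as an illustration of the resulting configuration, not as the literal payload NDO generates, and the subject and filter names as placeholders.

```python
# Illustrative APIC-style JSON for contract C1 with a permit-all filter.
permit_all_filter = {
    "vzFilter": {
        "attributes": {"name": "Permit-all"},
        "children": [
            # etherT 'unspecified' matches any EtherType, i.e. all traffic
            {"vzEntry": {"attributes": {"name": "any", "etherT": "unspecified"}}}
        ],
    }
}

contract_c1 = {
    "vzBrCP": {
        "attributes": {"name": "C1", "scope": "context"},  # scope 'context' = VRF
        "children": [
            {"vzSubj": {
                "attributes": {"name": "Subject"},
                "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "Permit-all"}}}
                ],
            }}
        ],
    }
}

# Both objects live under the tenant and could be posted to
#   /api/mo/uni/tn-Tenant-1.json
# using the same authenticated session shown in the earlier sketches.
```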

Verifying EPG-to-EPG Intersite Communication

Once the contract is applied, intersite connectivity between endpoints part of the different EPGs can be established. Before the endpoints start communicating with each other, they are locally learned on the leaf node they connect to, as shown in the outputs below.

Figure 58.

Endpoints Connected to Local EPGs

Leaf101 Site1

Leaf101-Site1# show endpoint vrf Tenant-1:VRF1

Legend:

s - arp H - vtep V - vpc-attached p - peer-aged

R - peer-attached-rl B - bounce S - static M - span

D - bounce-to-proxy O - peer-attached a - local-aged m - svc-mgr

L - local E - shared-service

+--------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+--------------------------------+---------------+-----------------+--------------+-------------+

55 vlan-819 0050.56b9.1bee LpV po1

Tenant-1:VRF1 vlan-819 10.10.1.1 LpV po1

Leaf303 Site2

Leaf303-Site2# show endpoint vrf Tenant-1:VRF1

Legend:

s - arp H - vtep V - vpc-attached p - peer-aged

R - peer-attached-rl B - bounce S - static M - span

D - bounce-to-proxy O - peer-attached a - local-aged m - svc-mgr

L - local E - shared-service

+--------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+--------------------------------+---------------+-----------------+--------------+-------------+

34 vlan-118 0050.56b3.e41e LV po4

Tenant-1:VRF1 vlan-118 10.10.2.2 LV po4

As a result of the contract, the IP subnet associated with the remote EPG is also installed in the routing table of the leaf node, pointing to the proxy-TEP address provisioned on the local spine nodes. The reverse happens on the leaf node in Site2.

Leaf101 Site1

Leaf101-Site1# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 01:01:38, static, tag 4294967294

10.10.1.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.1.254, vlan54, [0/0], 01:01:38, local, local

10.10.2.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:04:51, static, tag 4294967294

Leaf303 Site2

Leaf303-Site2# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.0.136.66%overlay-1, [1/0], 00:06:47, static, tag 4294967294

10.10.2.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.0.136.66%overlay-1, [1/0], 00:06:47, static, tag 4294967294

10.10.2.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.2.254, vlan33, [0/0], 00:06:47, local, local

Once connectivity between the endpoints is established, the leaf nodes in each site learn via data plane activity the specific information for the remote endpoints. The output below shows, for example, the endpoint table for the leaf node in Site1.

Leaf101 Site1

Leaf101-Site1# show endpoint vrf Tenant-1:VRF1

Legend:

s - arp H - vtep V - vpc-attached p - peer-aged

R - peer-attached-rl B - bounce S - static M - span

D - bounce-to-proxy O - peer-attached a - local-aged m - svc-mgr

L - local E - shared-service

+--------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+--------------------------------+---------------+-----------------+--------------+-------------+

Tenant-1:VRF1 10.10.2.2 tunnel26

55 vlan-819 0050.56b9.1bee LpV po1

Tenant-1:VRF1 vlan-819 10.10.1.1 LpV po1

The remote endpoint 10.10.2.2 is learned as reachable via the VXLAN tunnel26. As expected, the destination of such a tunnel is the O-UTEP address of Site2 (172.16.200.100).

Leaf101 Site1

Leaf101-Site1# show interface tunnel 26

Tunnel26 is up

MTU 9000 bytes, BW 0 Kbit

Transport protocol is in VRF "overlay-1"

Tunnel protocol/transport is ivxlan

Tunnel source 10.1.0.68/32 (lo0)

Tunnel destination 172.16.200.100/32

Verifying Namespace Translation Information

As discussed for the stretched EPG use cases, the creation of translation entries on the spines is required every time an intersite communication must be established using the VXLAN data path. In the specific use case of inter-EPG connectivity between EPGs/BDs that are locally deployed in each fabric, the creation of a security policy between them leads to the creation of so-called ‘shadow objects’ (Figure 59) in the remote site’s APIC domain.


Figure 59.

Creation of Shadow Objects

Starting from ACI release 5.0(2), the shadow objects are hidden by default on APIC. To enable their display, it is required to check the flag for the “Show Hidden Policies” option shown in Figure 60.


Figure 60.

Enabling the Display of Shadow Objects

The use of the “Template Deployment Plan” feature, available since Nexus Dashboard Orchestrator release 3.4(1), is quite interesting as it clearly shows which objects are created and where. In our example, when we configure EPG2-S2 in Site2 to consume the contract provided by EPG1-S1 in Site1 (information provided in the “Deploy to sites” window in Figure 61), the Deployment Plan highlights the creation of those shadow objects in both sites (Figure 62).


Figure 61.

Adding a Consumed Contract to EPG2-S2


Figure 62.

Creation of Shadow Objects Highlighted by the Deployment Plan

The creation of shadow objects is required to be able to assign them the specific resources (Segment IDs, class IDs, etc.) that must be configured in the translation tables of the spines to allow for successful intersite data plane communication.

For example, when looking at the APIC in Site2, we can notice that the EPG and BD locally defined in Site1 appear there as shadow objects.


Figure 63.

Display of Shadow Objects on APIC

EPG1-S1 and BD1-S1 are objects that were locally created only in Site1 (since they represent site-local objects). However, the establishment of a security policy between EPG1-S1 and EPG2-S2 caused the creation of those objects also on the APIC managing Site2. The same behavior is exhibited for EPG2-S2 and BD2-S2. Figure 64 and Figure 65 display the specific Segment ID and class ID values assigned to those objects (the VRF is not shown as the entries are the same previously displayed in Figure 49 and Figure 50).


Figure 64.

Segment IDs and Class IDs for Local and Shadow Objects in Site1


Figure 65.

Segment IDs and Class IDs for Local and Shadow Objects in Site2

Those values are then programmed in the translation tables of the spines to ensure they can perform the proper translation functions when traffic is exchanged between endpoints in Site1 part of EPG1-S1 and endpoints in Site2 part of EPG2-S2.

For what concerns the Segment IDs, the only translation entry that is required is the one for the VRF. This is because, when routing between sites, the VRF L3 VNID value is inserted in the VXLAN header to ensure that the receiving site can then perform the Layer 3 lookup in the right routing domain. There is no need to install translation entries for the Segment IDs associated with the BDs, since there will never be intersite traffic carrying those values in the VXLAN header (given that those BDs are not stretched).

Spine1101 Site1

Spine1101-Site1# show dcimgr repo vnid-maps

--------------------------------------------------------------

Remote | Local

site Vrf Bd | Vrf Bd Rel-state

--------------------------------------------------------------

2 2359299 | 3112963 [formed]

Spine401 Site2

Spine401-Site2# show dcimgr repo vnid-maps

--------------------------------------------------------------

Remote | Local

site Vrf Bd | Vrf Bd Rel-state

--------------------------------------------------------------

1 3112963 | 2359299 [formed]

The class IDs for the EPGs and shadow EPGs are instead displayed in the output below.

Spine1101 Site1

Spine1101-Site1# show dcimgr repo sclass-maps

----------------------------------------------------------

Remote | Local

site Vrf PcTag | Vrf PcTag Rel-state

----------------------------------------------------------

2 2359299 32771 | 3112963 16388 [formed]

2 2359299 16390 | 3112963 32772 [formed]

Spine401 Site2

Spine401-Site2# show dcimgr repo sclass-maps

----------------------------------------------------------

Remote | Local

site Vrf PcTag | Vrf PcTag Rel-state

----------------------------------------------------------

1 3112963 32772 | 2359299 16390 [formed]

1 3112963 16388 | 2359299 32771 [formed]

In addition to the programming of the translation entries on the spines, the assignment of class IDs to the shadow EPGs is also important to be able to properly apply the security policy associated with the contract. As already discussed in the “Verifying EPG-to-EPG Intersite Communication” section, when intra-VRF intersite traffic flows are established, remote endpoint information is learned on the local leaf nodes. This ensures that the contract can always be applied at the ingress leaf node for both directions of the flow.

The output below shows the security rules programmed on leaf 101 in Site1, which is where the endpoint part of EPG1-S1 is locally connected. As you can notice, there is a permit entry associated to the contract C1 for communication between 16388 (the class ID of EPG1-S1) and 32772 (the class ID of the shadow EPG2-S2). There is also an entry for the return flow, which will be used only if for some reason the policy can’t be applied in the ingress direction on the remote leaf node in Site2.

Leaf101 Site1

Leaf101-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

|4151 | 0 | 0 | implicit |uni-dir | enabled | 3112963 | |deny,log|any_any_any(21) |

|4200 | 0 | 0 |implarp |uni-dir | enabled | 3112963 | | permit |any_any_filter(17) |

|4198 | 0 | 15 | implicit |uni-dir | enabled | 3112963 | |deny,log|any_vrf_any_deny(22) |

|4213 | 0 | 32771 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4219 |16388 | 32772 | default |uni-dir-ignore | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4220 |32772 | 16388 | default | bi-dir | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4203 | 0 | 32770 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

Similar output can be found on leaf 303 in Site2, where the endpoint part of EPG2-S2 is locally connected. Notice how the class ID values used here are the ones programmed in Site2 for the shadow EPG1-S1 (32771) and the local EPG2-S2 (16390).

Leaf303 Site2

Leaf303-Site2# show zoning-rule scope 2359299

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

|4183 | 0 | 0 | implicit |uni-dir | enabled | 2359299 | |deny,log|any_any_any(21) |

|4182 | 0 | 0 |implarp |uni-dir | enabled | 2359299 | | permit |any_any_filter(17) |

|4181 | 0 | 15 | implicit |uni-dir | enabled | 2359299 | |deny,log|any_vrf_any_deny(22) |

|4176 | 0 | 16387 | implicit |uni-dir | enabled | 2359299 | | permit |any_dest_any(16) |

|4190 | 0 | 16386 | implicit |uni-dir | enabled | 2359299 | | permit |any_dest_any(16) |

|4205 | 0 | 16389 | implicit |uni-dir | enabled | 2359299 | | permit |any_dest_any(16) |

|4207 |16390 | 32771 | default | bi-dir | enabled | 2359299 | Tenant-1:C1 | permit |src_dst_any(9) |

|4206 |32771 | 16390 | default |uni-dir-ignore | enabled | 2359299 | Tenant-1:C1 | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+
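When checking several leaf nodes, it can help to parse the ‘show zoning-rule’ text and extract only the rules created for a given contract. The sketch below is a simple, self-contained parser for the table format shown above; it only processes raw CLI text passed to it and does not call any ACI API.

```python
def contract_rules(cli_output: str, contract: str):
    """Extract (src class ID, dst class ID, action) tuples for a given contract
    from the text of 'show zoning-rule'."""
    rules = []
    for line in cli_output.splitlines():
        if contract not in line:
            continue
        # Columns: Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority
        cols = [c.strip() for c in line.strip("|").split("|")]
        if len(cols) >= 9:
            rules.append((cols[1], cols[2], cols[8]))
    return rules

# Example with the Leaf303 output above: the two Tenant-1:C1 rules permit
# traffic between the local EPG2-S2 (16390) and the shadow EPG1-S1 (32771).
sample = """
|4207 |16390 | 32771 | default | bi-dir | enabled | 2359299 | Tenant-1:C1 | permit |src_dst_any(9) |
|4206 |32771 | 16390 | default |uni-dir-ignore | enabled | 2359299 | Tenant-1:C1 | permit |src_dst_any(9) |
"""
print(contract_rules(sample, "Tenant-1:C1"))
# [('16390', '32771', 'permit'), ('32771', '16390', 'permit')]
```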

Use of Preferred Group for Enabling Intersite Connectivity

An alternative approach to the use of contracts to allow inter-EPG communication intra-VRF is the use of the Preferred Group functionality.


Figure 66.

Use of Preferred Group for Free Intra-VRF Communication Between EPGs

For each defined VRF there is support for one Preferred Group, and EPGs can be selectively added to it. EPGs that are part of the Preferred Group can communicate with each other without using a contract. Communication with EPGs that are not part of the Preferred Group still mandates the definition of a security policy (Figure 66).

The Preferred Group must be globally enabled on APIC (at the VRF level) to ensure the functionality gets activated. As highlighted in Figure 67, by default this knob is disabled.


Figure 67.

Global Preferred Group Knob on APIC

When the global knob is disabled, EPGs in the Preferred Group won’t be able to freely communicate with each other. However, the global VRF-level knob is not exposed on Nexus Dashboard Orchestrator, and the behavior is the following when EPGs are added to or removed from the Preferred Group only on the Nexus Dashboard Orchestrator:

When adding the first EPG to the Preferred Group on NDO, NDO will also take care of enabling the global knob at the VRF level.

When removing the last EPG from the Preferred Group on NDO, NDO will also take care of disabling the global knob at the VRF level.

The behavior is a bit different for brownfield scenarios, where the Preferred Group is already globally enabled at the APIC level (for example, because the Preferred Group functionality was enabled for EPGs deployed directly on APIC before NDO was integrated into the design). Under those conditions, before starting to add EPGs to the Preferred Group on NDO, it is strongly recommended to import the VRF object from APIC into NDO. Doing that provides to NDO the information that the Preferred Group is already enabled (at the APIC level): this ensures that when removing the last EPG from the Preferred Group on NDO, NDO does not disable the global knob, to avoid impacting communication between EPGs that may still be part of the Preferred Group and not managed by NDO (i.e., the EPGs that have been configured directly on APIC and not imported into NDO).

The configuration to add an EPG to the Preferred Group on NDO is quite simple and shown in Figure 68; it can be applied to EPGs that are locally defined in a site or stretched across locations.


Figure 68.

Adding an EPG to Preferred Group on NDO

Once the configuration is deployed, translation entries and shadow objects will be automatically created to ensure intersite connectivity can be established between all the EPGs that are part of the Preferred Group. Please refer to the ACI scalability guides for more information about the maximum number of EPGs that can be deployed as part of the Preferred Group in a Multi-Site deployment.

The output below highlights the security rules programmed on the ACI leaf nodes as a result of the Preferred Group enablement.

Leaf 101 Site1

Leaf101-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+---------+---------+---------+------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+---------+---------+---------+------+----------+----------------------+

|4151 | 0 | 0 | implicit |uni-dir| enabled | 3112963 | |deny,log|any_any_any(21) |

|4200 | 0 | 0 |implarp |uni-dir| enabled | 3112963 | | permit |any_any_filter(17) |

+---------+--------+--------+----------+---------+---------+---------+------+----------+----------------------+

Note: The entries with “0” as SrcEPG and DstEPG are implicit permit rules that are added because of the Preferred Group configuration (implicit deny rules would be added if an EPG that is not in the Preferred Group is added). For more information on Preferred Group, please refer to the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-743951.html#Preferredgroup

Use of vzAny

Another interesting functionality that can be used at the VRF level for EPGs is vzAny. vzAny is a logical grouping construct representing all the EPGs that are deployed inside a given VRF (i.e., the EPGs mapped to BDs that are part of the VRF). The use of vzAny simplifies the application of security policies to implement two specific use cases: the creation of a many-to-one connectivity model and the creation of an any-to-any connectivity model.


Figure 69.

vzAny Use Cases

It is important to highlight a couple of restrictions that apply to the use of vzAny in a Multi-Site deployment:

As of the Nexus Dashboard Orchestrator 3.5(1) release, it is not possible to have vzAny consuming and/or providing a contract with an attached service graph. This requires implementation changes also at the APIC and switching level, so it will be supported in the future.

As shown in the figure above, vzAny can be the consumer of a contract for a shared service scenario (i.e., the provider is an EPG in a different VRF). However, vzAny cannot be the provider of a contract if the consumer is an EPG in a different VRF (this is a restriction that applies also to ACI single fabric designs).

The second use case represents an alternative approach to the Preferred Group when the goal is to remove the application of security policies and just use ACI Multi-Site for establishing network connectivity across fabrics. From a provisioning perspective, the required configuration is quite simple and just requires defining a contract with an associated “permit all” filter (as was shown in previous Figure 56 and Figure 57). Once the contract is created, it is possible to apply it to vzAny at the VRF level, as shown in Figure 70.


Figure 70.

Configure vzAny to Provide/Consume a “Permit all” Contract

The result of the configuration shown above is that translation entries and shadow objects are going to be created in both APIC domains to ensure that all intra-VRF communication can happen freely. From a functional point of view, this configuration is analogous to setting the policy control enforcement preference for the VRF to “Unenforced”. Notice that this “VRF unenforced” option is not supported with Multi-Site and is not considered a best practice even with single fabric deployments. The recommendation is hence to use the vzAny configuration described here to achieve the same goal.
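For reference, the configuration rendered on APIC attaches the “permit all” contract to the vzAny object of the VRF as both provider and consumer. The sketch below shows what that object model looks like in APIC-style JSON; it is indicative only, since in this guide the configuration is provisioned from NDO rather than posted directly to APIC.

```python
# Illustrative APIC-style JSON: vzAny of VRF1 providing and consuming contract C1.
vzany_config = {
    "vzAny": {
        "attributes": {},
        "children": [
            {"vzRsAnyToProv": {"attributes": {"tnVzBrCPName": "C1"}}},
            {"vzRsAnyToCons": {"attributes": {"tnVzBrCPName": "C1"}}},
        ],
    }
}
# The vzAny object lives under the VRF, e.g. uni/tn-Tenant-1/ctx-VRF1/any
```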

Note: As of Nexus Dashboard Orchestrator release 3.5(1), the configuration of vzAny is mutually exclusive with the use of the Preferred Group functionality. It is hence important to decide upfront what approach to take, depending on the specific requirements.

Inter-EPG Connectivity Across Sites (Inter-VRF – Shared Services)

“Shared services” represents a specific use case for establishing intersite connectivity between EPGs that are part of different VRFs. Figure 69 already introduced the concept of shared services in the context of vzAny, but the same functionality can be deployed between specific EPGs.


Figure 71.

Shared Services Use Case

From a high-level point of view, the considerations are the same already discussed as part of the “Applying a Security Contract between EPGs” section for the intra-VRF scenario; to establish connectivity between endpoints part of different EPGs, it is required to create a contract between them. However, a few extra considerations become relevant when this connectivity must be created across VRFs, as discussed in the following section.

Provisioning the “Shared Services” configuration

The provisioning of the Shared Services use case highlighted in Figure 71 requires a few specific steps:

Defining a new VRF3 and associating BD2-S2 to it.

Defining the right scope for the contract: a newly created contract by default has the scope of “VRF”. This means that it will be effective only when applied between EPGs that are part of the same VRF, but it would not allow communication in the shared services scenario. It is hence required to change the scope to “Tenant” or “Global”, depending on whether the VRFs are part of the same tenant or defined across tenants (see the short sketch after Figure 72).


Figure 72.

Setting the Proper Scope for the Contract
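On APIC, the GUI choice in Figure 72 maps to the ‘scope’ attribute of the vzBrCP (contract) object. The small sketch below summarizes the values relevant to this discussion; the mapping between GUI labels and object-model values is given for orientation only.

```python
# vzBrCP 'scope' values relevant to this use case (APIC object-model names).
CONTRACT_SCOPES = {
    "context": "effective only between EPGs in the same VRF (GUI label 'VRF', the default)",
    "tenant": "effective between EPGs in different VRFs of the same tenant",
    "global": "effective between EPGs in VRFs belonging to different tenants",
}

# For the shared-services example in Figure 71 (VRF2 and VRF3 in Tenant-1),
# the contract scope must therefore be at least 'tenant'.
for scope, meaning in CONTRACT_SCOPES.items():
    print(scope, "->", meaning)
```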

From a routing perspective, the use of different VRFs ensures logical isolation between separate routing domains. For the shared services use case, it is hence required to populate the proper prefix information in the different VRFs to be able to establish connectivity across those routing domains. This functionality is usually referred to as “route-leaking”. The first requisite for enabling the leaking of routes between VRFs is to set the “Shared between VRFs” option for the IP subnets associated with the BDs, as highlighted in Figure 73.


Figure 73.

Configuring the BD Subnets for being Leaked between VRFs

In ACI, the leaking of prefixes between VRFs happens in a different way depending on the specific direction that is considered (consumer to provider or provider to consumer).

The IP subnets associated to the BD of the consumer VRF2 (BD1-S1) are leaked into the provider VRF3 based on the configuration of the contract between the EPGs. The leaking in the opposite direction (from the provider VRF3 to the consumer VRF2) is instead the result of a specific configuration applied to the provider EPG2-S2.

As displayed in Figure 74, the same prefix previously configured under BD2-S2 (the BD of the provider EPG) must be configured under the EPG2-S2 itself. The same flags applied to the BD should also be set here, with the addition of the “No Default SVI Gateway” option; this is required because there is no need to install the default gateway as a result of this configuration, which is only needed for leaking the route and for being able to apply the security policy (as will be clarified in the next section).

Note: The requirement of specifying the subnet’s prefix (or prefixes) under the provider EPG essentially makes it harder to provision route-leaking if multiple EPGs were defined under the same BD (associated to a given IP subnet), as it would require somehow identifying the specific endpoints deployed as part of each EPG, despite being addressed from the same IP subnet range.


Figure 74.

Configure the Prefix Under the Provider EPG
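In the APIC object model, the flags set in Figures 73 and 74 correspond to the ‘scope’ and ‘ctrl’ attributes of fvSubnet objects: one defined under the BD and one repeated under the provider EPG. The sketch below is only an indicative rendering of those two objects under the stated assumption that the subnet is not also advertised externally; NDO generates the equivalent configuration, so treat this as an illustration rather than the exact payload.

```python
# Subnet under the provider BD (BD2-S2): shared between VRFs so it can be leaked.
# Assumption: the subnet stays private (not advertised externally) in this example.
bd_subnet = {
    "fvSubnet": {
        "attributes": {
            "ip": "10.10.2.254/24",
            "scope": "private,shared",   # 'Shared between VRFs' flag
        }
    }
}

# Same prefix repeated under the provider EPG (EPG2-S2) to trigger the leaking of
# the route toward the consumer VRF and the static programming of its class ID.
epg_subnet = {
    "fvSubnet": {
        "attributes": {
            "ip": "10.10.2.254/24",
            "scope": "private,shared",
            "ctrl": "no-default-gateway",   # 'No Default SVI Gateway' flag
        }
    }
}
```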

Verifying Shared Services Intersite Communication

Figure 75 below shows the scenario that we just provisioned for the shared services use case.


Figure 75.

Endpoints Connected to Local EPGs in Separate VRFs (Shared Services Use Case)

The application of a contract between the EPGs locally defined in each site and part of different VRFs results in the creation of shadow objects in the opposite site. In addition to the EPGs and the BDs, the VRFs are now also instantiated as shadow objects.


Figure 76.

Creation of Shadow Objects for the Shared Services Use Case

This can be verified as usual on APIC and on the spines. Please refer to the previous sections for more information on how to display the shadow objects and retrieve the values configured in the translation tables on the spines. Figures 77 and 78 below highlight the Segment IDs and class IDs associated with local and shadow objects in each APIC domain.


Figure 77.

Segment IDs and Class IDs for Local and Shadow Objects in Site1


Figure 78.

Segment IDs and Class IDs for Local and Shadow Objects in Site2

As a result of the creation of the contract and the IP prefix configuration under the provider EPG, the subnets of BD1-S1 and BD2-S2 are leaked between VRFs, as can be seen in the output below.

Leaf101 Site1

Leaf101-Site1# show ip route vrf Tenant-1:VRF2

IP Route Table for VRF "Tenant-1:VRF2"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:08:31, static, tag 4294967294, rwVnid: vxlan-2129922

10.10.1.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.1.254, vlan13, [0/0], 00:08:31, local, local

10.10.2.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:08:31, static, tag 4294967294, rwVnid: vxlan-2785286

Leaf303 Site2

Leaf303-Site2# show ip route vrf Tenant-1:VRF3

IP Route Table for VRF "Tenant-1:VRF3"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.0.136.66%overlay-1, [1/0], 00:35:33, static, tag 4294967294, rwVnid: vxlan-2326528

10.10.2.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.0.136.66%overlay-1, [1/0], 00:35:33, static, tag 4294967294, rwVnid: vxlan-2654213

10.10.2.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.2.254, vlan49, [0/0], 00:35:33, local, local

The output above highlights how each of the leaked routes contains the specific information of the Segment ID to be used when encapsulating traffic toward the destination. In Site1, vxlan-2785286 represents the segment ID assigned to the local shadow VRF3 instance. Similarly, vxlan-2326528 in Site2 represents the segment ID assigned to the local shadow VRF2 instance. Encapsulating traffic on the ingress leaf with the Segment ID of the VRF where the remote destination is connected ensures that the receiving leaf in the remote site can properly perform the lookup in the right routing domain.

Differently from the intra-VRF use case, where the security policy is always enforced on the ingress leaf node at steady state, in the shared services scenario the security policy should always be enforced on the consumer leaf nodes. This is done to avoid scalability issues for the TCAM programming on the provider leaf, assuming that many consumer EPGs try to access a common shared resource.

To ensure this is the case, two things happen:

Data plane learning of remote endpoint information does not happen, to avoid learning the class ID information that would cause the application of the policy on the provider leaf.

The class ID of the provider EPG is statically programmed on all the consumer leaf nodes as the result of the IP prefix configuration under the provider EPG previously discussed. For the specific scenario displayed in Figure 75, it is possible to verify with the command below that the class ID for the 10.10.2.0/24 subnet is configured on the consumer leaf in Site1:

Leaf101 Site1

Leaf101-Site1# moquery -d sys/ipv4/inst/dom-Tenant-1:VRF2/rt-[10.10.2.0/24]

Total Objects shown: 1

# ipv4.Route

prefix :10.10.2.0/24

childAction :

ctrl :pervasive

descr :

dn :sys/ipv4/inst/dom-Tenant-1:VRF2/rt-[10.10.2.0/24]

flushCount :1

lcOwn :local

modTs :2020-11-13T22:14:20.696+00:00

monPolDn :

name :

nameAlias :

pcTag :26

pref :1

rn :rt-[10.10.2.0/24]

sharedConsCount :0

status :

tag :4294967294

trackId :0

Note: As displayed in previous Figure 77, pcTag 26 represents the class ID for the shadow EPG2-S2 installed in the APIC controller for Site1.

As a result, the following security rule is installed on the consumer leaf in Site1 to ensure the policy can be applied:

Leaf101 Site1

Leaf101-Site1# show zoning-rule scope 2129922

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+------------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+------------------------+

|4223 | 0 | 0 | implicit |uni-dir | enabled | 2129922 | |deny,log|any_any_any(21) |

|4224 | 0 | 0 |implarp |uni-dir | enabled | 2129922 | | permit |any_any_filter(17) |

|4225 | 0 | 15 | implicit |uni-dir | enabled | 2129922 | |deny,log|any_vrf_any_deny(22) |

|4228 | 0 | 16388 | implicit |uni-dir | enabled | 2129922 | | permit |any_dest_any(16) |

|4227 | 26 |16397 | default |uni-dir-ignore | enabled | 2129922 | Tenant-1:C1 | permit |src_dst_any(9) |

|4226 | 26 | 0 | implicit |uni-dir | enabled | 2129922 | |deny,log|shsrc_any_any_deny(12) |

|4213 |16397 | 26 | default | bi-dir | enabled | 2129922 | Tenant-1:C1 | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+------------------------+

Notice how the provider EPG gets a special class ID for the shared services use case (26, in this specific example for Site1). This value is taken from a pool that has global uniqueness across all the deployed VRFs. This is different from the intra-VRF use case, where the assigned class IDs are locally significant for each VRF.

Finally, the same considerations (and provisioning steps) described above can also be applied when the goal is establishing communication between VRFs that are part of different tenants (Figure 79).


Figure 79.

Inter-Tenant Shared Services Use Case

The only specific considerations that apply to this deployment model are the following:

The contract must be provisioned with scope “Global” and defined in the provider tenant (Tenant-2 in the example above).

The creation of the contract between EPGs that are part of separate VRFs and tenants would also cause the instantiation of “shadow tenants” in the scenarios where the tenants are only locally deployed in each site.

Finally, the contract defined in the provider tenant must be exported to the consumer tenant as a “contract interface”. However, this is automatically done by the Orchestrator Service when the contract is applied between EPGs that are part of different tenants (which is why the provisioning, from the Orchestrator perspective, is identical to the use case shown in Figure 76).

Connectivity to the External Layer 3 Domain

The use cases discussed in the previous sections dealt with the establishment of Layer 2 and Layer 3 connectivity between ACI sites part of the same Multi-Site domain, usually referred to as “east-west” connectivity. In this section, we are going instead to describe multiple use cases providing access to the DC resources from the external Layer 3 network domain, generically defined as “north-south” connectivity.

Use Case 1: Site-Local L3Out Connections to Communicate with External Resources (Intra-VRF)

This first use case, shown in Figure 80, covers access to a common set of external resources from local L3Out connections deployed in each site.


Figure 80.

Site Local L3Outs Providing Access to a Common Set of External Resources

The EPGs and BDs (a mix of site-local and stretched objects), all part of the same VRF1 routing domain, have already been provisioned for the use cases previously described. The first required configuration step consists hence in creating the L3Outs in each local site. Figure 81 shows how to do that on Nexus Dashboard Orchestrator for L3Out1-S1 defined in fabric 1. The same can be done to define L3Out1-S2 in the template associated with Site2. It is a best practice recommendation to define L3Outs with unique names in each site, as it provides more flexibility for the provisioning of many use cases (as will be clarified in the following sections).

Note: The diagram shown in Figure 80 uses a single BL node in each fabric and is hence not representative of a real production environment, which normally leverages at least a pair of BL nodes for the sake of redundancy.


Figure 81.

Create a Local L3Out in Site1

Notice that in the current Nexus Dashboard Orchestrator 3.5(1) release it is only possible to create the L3Out object from NDO, whereas the configuration of logical nodes, logical interfaces, routing protocol, etc. is still handled at the specific APIC domain level. Describing in detail how to create an L3Out on APIC is out of the scope of this paper; for more information, please refer to the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/guide-c07-743150.html

Once the L3Outs in each site are provisioned, we can proceed with the creation of the External EPG associated with the L3Out. As a best practice recommendation, a single External EPG should be deployed when the L3Out connections provide access to a common set of external resources. The use of a ‘stretched’ External EPG, shown in Figure 82 below, simplifies the definition of the security policy required for establishing north-south connectivity.


Figure 82.

Use of a ‘Stretched’ External EPG

Since the same External EPG must be deployed in both fabrics, its configuration should be done as part of the Template-Stretched that is associated with both sites.


Figure 83.

Creation of a ‘Stretched’ External EPG

The External EPG should then have one or more IP prefixes defined to be able to classify external resources as part of the EPG (and hence apply security policies with internal EPGs). The example in Figure 84 displays a common approach consisting of the use of a ‘catch-all’ 0.0.0.0/0 prefix to ensure that all the external resources can be classified as part of this specific External EPG.


Figure 84.

Define a ‘Catch-all’ Classification Subnet

Once the External EPG has been defined, it is then required to map it at the site level to the local L3Out objects previously defined for each fabric. Figure 85 shows the association of the Ext-EPG to the L3Out defined in Site1; a similar mapping is required for the L3Out in Site2.


Figure 85.

Mapping the External EPG to the Local L3Out Connection

Note: The NDO GUI also gives the capability of mapping an External EPG to the L3Out at the global template level. However, in the scenario discussed in this use case, where separate L3Out connections are created in each fabric, it is mandatory to create the mapping at the site level.

Once the External EPG has been provisioned and mapped to the local L3Outs, two final steps are required to establish the N-S connectivity shown in Figure 82:

Establish a security policy between the internal EPGs and the External EPG: we can use the same contract C1 previously used for intra-VRF EPG-to-EPG connectivity. For what concerns the contract’s direction, it is irrelevant whether the Ext-EPG is providing the contract and the internal EPGs are consuming it, or vice versa.

Alternatively, it is possible to use the Preferred Group or vzAny functionality to allow free north-south connectivity, in place of applying the contract. A specific consideration applies when using the Preferred Group for the Ext-EPG: 0.0.0.0/0 is not supported in that case for classifying all the traffic originated from external sources. The recommended configuration to cover the same address space consists in splitting the range into two separate parts, as shown in Figure 86 (and in the short sketch after it).


Figure 86.

Classification Subnet when Adding the External EPG to the Preferred Group
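The two classification subnets in Figure 86 are simply the two halves of the full IPv4 address space (typically 0.0.0.0/1 and 128.0.0.0/1). The quick Python check below is only a sanity check of that split, not an ACI API call, and assumes the split shown in the figure is the usual two-halves approach.

```python
import ipaddress

# Splitting the 'catch-all' prefix into the two halves used for Preferred Group
full = ipaddress.ip_network("0.0.0.0/0")
halves = list(full.subnets(prefixlen_diff=1))
print(halves)  # [IPv4Network('0.0.0.0/1'), IPv4Network('128.0.0.0/1')]

# Together the two halves cover exactly the same address space as 0.0.0.0/0
assert sum(n.num_addresses for n in halves) == full.num_addresses
```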

Announce the internal BD subnets toward the external network domain: to achieve this, it is first required to set the “Advertised Externally” flag for the IP subnet(s) associated with the BDs, as shown in Figure 87 below.


Figure 87.

Set the “Advertised Externally” flag to announce the BD subnet toward the external network

As a second step, it is required to specify out of which L3Out the BD subnet prefixes should be advertised. This is typically achieved on Nexus Dashboard Orchestrator by associating the L3Out to the BD at the site level. Figure 88 shows the configuration required for BD1-S1, which is locally deployed in Site1. For BDs that are stretched across sites, the BD should instead be mapped in each site to the locally defined L3Out.


Figure 88.

Mapping the L3Out to the BD

Note: The use of uniquely named L3Out connections in each site is quite important to have tight control over where the BD’s subnets are announced, based on the mapping performed in the figure above.

Use Case 1 Verification

Once the configuration previously described is fully provisioned, north-south connectivity can be successfully established. Even when the internal EPGs establish a security contract with a stretched external EPG (as shown in previous Figure 82), for the intra-VRF scenario hereby discussed the prefixes for IP subnets that are locally defined in each site will only be sent out of the local L3Out connections. This ensures that inbound traffic will always be steered toward the fabric where that subnet is locally defined. For BDs that are stretched, the stretched IP subnet will instead be advertised by default out of the L3Outs defined in both sites, which essentially means that inbound traffic could be received in the ‘wrong’ site and will then need to be re-routed across the ISN (Figure 89).


Figure 89.

Suboptimal Inbound Traffic Path

The inbound traffic flows destined to endpoints belonging to the stretched EPG/BD can be optimized by configuring the host-based routing functionality, which allows advertising out of each L3Out the specific /32 prefixes for the endpoints discovered in the local site.


Figure 90.

Host-based Routing Advertisement for Inbound Traffic Optimization

The advertisement of host-based routing information can be enabled on NDO for each BD and should be done only for the BDs that are stretched across sites. As highlighted in Figure 91, the “Host Route” flag is enabled for BD1-Stretched, and this is done at the site level.


Figure 91.

Enabling Host-based Routing on NDO

For what concerns the outbound traffic flows, from the point of view of the compute leaf nodes in each fabric the only path toward the external IP prefix 192.168.1.0/24 is always and only via the local Border Leaf (BL) node. This is because external prefixes learned on an L3Out connection in Site1 are not advertised by default to Site2 (and vice versa) using the MP-BGP control plane between spine nodes, unless the intersite L3Out functionality is enabled (this will be discussed in more detail in the “Deploying Intersite L3Out” section).


Figure 92.

Outbound Communication always using the local L3Out

In the output below, 10.1.0.69 represents the TEP address of the Border Leaf node in Site1, whereas 10.0.224.96 is the TEP address of the Border Leaf node in Site2.

Leaf101 Site1

Leaf101-Site1# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:47:34, static

10.10.1.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.1.254, vlan43, [0/0], 03:04:11, local, local

10.10.2.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:46:04, static

10.10.3.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:47:38, static

10.10.3.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.3.254, vlan64, [0/0], 4d08h, local, local

192.168.1.0/24, ubest/mbest: 1/0

*via 10.1.0.69%overlay-1, [200/0], 03:02:43, bgp-65501, internal, tag 3

Leaf303 Site2

Leaf303-Site2# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.2.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.0.136.66%overlay-1, [1/0], 02:03:12, static

10.10.2.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.2.254, vlan16, [0/0], 04:21:10, local, local

10.10.3.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.0.136.66%overlay-1, [1/0], 02:04:45, static

10.10.3.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.3.254, vlan25, [0/0], 5d09h, local, local

192.168.1.0/24, ubest/mbest: 1/0

*via 10.0.224.96%overlay-1, [200/0], 00:19:39, bgp-100, internal, tag 30

From a security policy point of view (unless the Preferred Group is used), for intra-VRF north-south traffic flows the contract is always and only applied on the compute leaf node (and never on the border leaf nodes). This is true independently from the direction of the contract (i.e., who is the provider and who is the consumer), but under the assumption that the VRF’s “Policy Control Enforcement Direction” is always kept to the default “Ingress” value.


Figure 93.

Default VRF’s Settings

Note: It is strongly recommended to keep this setting to its default value, as it is required for being able to apply Service Graph and PBR to north-south traffic flows. For more information, please refer to the “Service Node Integration with ACI Multi-Site” section.

The output below shows the zoning-rule configuration on the Border Leaf node for Site1. 16388 represents the class ID for the internal EPG1-S1, whereas 49153 is the class ID for VRF1. When traffic is received on an L3Out with the External EPG configured with the 0.0.0.0/0 prefix for classification, it is assigned the class ID of the VRF (of the L3Out) instead of the specific class ID of the External EPG (you need to use a more specific classification subnet for that, as shown later in this section). By looking at the last entry in the table below (rule ID 4225), you would hence conclude that the security policy for inbound traffic can be applied on the border leaf node 104.

Leaf104Site1

Leaf104-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+---------+---------+---------+-------------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+---------+---------+---------+-------------+----------+----------------------+

|4148 | 0 | 0 | implicit |uni-dir| enabled | 3112963 | |deny,log|any_any_any(21) |

|4153 | 0 | 0 |implarp |uni-dir| enabled | 3112963 | | permit |any_any_filter(17) |

|4199 | 0 | 15 | implicit |uni-dir| enabled | 3112963 | |deny,log|any_vrf_any_deny(22) |

|4156 | 0 | 16386 | implicit |uni-dir| enabled | 3112963 | | permit |any_dest_any(16) |

|4206 | 0 | 32770 | implicit |uni-dir| enabled | 3112963 | | permit |any_dest_any(16) |

|4216 | 0 | 32771 | implicit |uni-dir| enabled | 3112963 | | permit |any_dest_any(16) |

|4213 |16387 | 15 | default |uni-dir| enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4218 |49153 | 16387 | default |uni-dir| enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4145 |16388 | 15 | default |uni-dir| enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4225 |49153 | 16388 | default |uni-dir| enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

+---------+--------+--------+----------+---------+---------+---------+-------------+----------+----------------------+

This is not the case, however, because leaf 104 does not know how to determine the class ID for the destination of the externally originated traffic flow (10.10.1.1 in our example, which represents the internal endpoint that is part of EPG1-S1). This is because the internal endpoint information is not learned on the BL node as a result of the north-south communication. Additionally, the specific IP subnet associated with EPG1-S1 (10.10.1.0/24) is also locally installed without any specific class ID information (see the “any” value in the “pcTag” row).

Leaf104Site1

Leaf104-Site1# moquery -d sys/ipv4/inst/dom-Tenant-1:VRF1/rt-[10.10.1.0/24]

Total Objects shown: 1

# ipv4.Route

prefix :10.10.1.0/24

childAction :

ctrl :pervasive

descr :

dn :sys/ipv4/inst/dom-Tenant-1:VRF1/rt-[10.10.1.0/24]

flushCount :0

lcOwn :local

modTs :2020-11-16T20:24:29.023+00:00

monPolDn :

name :

nameAlias :

pcTag :any

pref :1

rn :rt-[10.10.1.0/24]

sharedConsCount :0

status :

tag :0

trackId :0

The output below shows instead the zoning-rule entries on the compute leaf where the internal endpoint 10.10.1.1 is connected, which allow the security policy to be applied to inbound and outbound flows with the external client 192.168.1.1.

Leaf101Site1

Leaf101-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+---------+---------+---------+-------------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+---------+---------+---------+-------------+----------+----------------------+

| 4151 | 0 | 0 | implicit | uni-dir | enabled | 3112963 | | deny,log | any_any_any(21) |

| 4200 | 0 | 0 | implarp | uni-dir | enabled | 3112963 | | permit | any_any_filter(17) |

| 4198 | 0 | 15 | implicit | uni-dir | enabled | 3112963 | | deny,log | any_vrf_any_deny(22) |

| 4203 | 0 | 32770 | implicit | uni-dir | enabled | 3112963 | | permit | any_dest_any(16) |

| 4228 | 0 | 32771 | implicit | uni-dir | enabled | 3112963 | | permit | any_dest_any(16) |

| 4210 | 16387 | 15 | default | uni-dir | enabled | 3112963 | Tenant-1:C1 | permit | src_dst_any(9) |

| 4199 | 49153 | 16387 | default | uni-dir | enabled | 3112963 | Tenant-1:C1 | permit | src_dst_any(9) |

| 4224 | 16388 | 15 | default | uni-dir | enabled | 3112963 | Tenant-1:C1 | permit | src_dst_any(9) |

| 4223 | 49153 | 16388 | default | uni-dir | enabled | 3112963 | Tenant-1:C1 | permit | src_dst_any(9) |

+---------+--------+--------+----------+---------+---------+---------+-------------+----------+----------------------+

For inbound flows, the packet carries the VRF class ID in the VXLAN header (49153), so rule 4223 above allows the policy to be applied for packets destined to the locally connected endpoint that is part of EPG1-S1 (identified by class ID 16388). For outbound flows, entry 4224 is applied, as all the external destinations reachable via an External EPG using 0.0.0.0/0 for classification are identified with the specific class ID value of 15. Notice that the same entry is also available on the border leaf node, but it has no effect for outbound flows, since the compute node sets a specific bit in the VXLAN header of the packets sent toward the border leaf node to signal that the policy has already been applied.
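A quick optional check to confirm on which leaf a given rule is actually enforced is to watch its hit counters while traffic is flowing. The commands below are only a sketch based on this specific example: the rule IDs (4223 on the compute leaf, 4225 on the border leaf) are the ones shown above and would differ in another deployment.

Leaf101-Site1# show system internal policy-mgr stats | grep 4223

Leaf104-Site1# show system internal policy-mgr stats | grep 4225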

If the 0.0.0.0/0 classification subnet in the External EPG were instead replaced by a more specific entry (for example 192.168.1.0/24, to match the subnet of the external clients in our specific example), the zoning-rule table on the compute leaf node would change as shown in the output below. The specific rule IDs 4194 and 4219 in this case allow the security policy to be applied for inbound and outbound communication, respectively, between EPG1-S1 (class ID 16388) and the External EPG (class ID 32773).

Leaf101Site1

Leaf101-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

|4151 | 0 | 0 | implicit |uni-dir | enabled | 3112963 | |deny,log|any_any_any(21) |

|4200 | 0 | 0 |implarp |uni-dir | enabled | 3112963 | | permit |any_any_filter(17) |

|4198 | 0 | 15 | implicit |uni-dir | enabled | 3112963 | |deny,log|any_vrf_any_deny(22) |

|4203 | 0 | 32770 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4228 | 0 | 32771 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4219 |16388 | 32773 | default |uni-dir-ignore | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4194 |32773 | 16388 | default | bi-dir | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9)|

|4225 |16387 | 32773 | default |uni-dir-ignore | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4217 |32773 | 16387 | default | bi-dir | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

Use Case 2: Site-Local L3Out Connections to Communicate with External Resources (Inter-VRF/Shared Services Inside the Same Tenant)

The second use case to consider is the one where the L3Out connections are part of a different VRF than the internal EPGs/BDs, a scenario usually referred to as “shared services”. In this use case, the L3Out VRF (VRF-Shared) is defined in the same tenant where the internal EPGs/BDs belong.


Figure 94.

Inter-VRF North-South Connectivity

The only provisioning steps that differ from the intra-VRF use case previously discussed are the following:

Configure a contract with the scope “Tenant”.

Apply the contract between an internal EPG and the External EPG: differently from the intra-VRF use case 1, the leaf node where the security policy is applied depends on which side is the provider and which is the consumer, as clarified in the “Use Case 2 Verification” section.

Configure the internal BD subnet to ensure it can be advertised out of the local L3Out. For this to happen across VRFs, there is no need to map the BD to the L3Out (as done in the intra-VRF use case); it is simply required to select the “Shared between VRFs” flag in addition to the “Advertised Externally” one, as shown in Figure 95.


Figure 95.

Setting to Leak a BD’s Subnet into a Different VRF

If the internal EPG is the provider of the contract, the same subnet associated to the BD must also be defined under the EPG itself.


Figure 96.

Setting of the Subnet under the Provider EPG

Notice how the same flags already used for the BD must also be set for the EPG’s subnet (Nexus Dashboard Orchestrator would prevent the deployment of the template if that is not the case). Since the configuration above is solely needed to enable the leaking of routes between VRFs, the “No Default SVI Gateway” flag should additionally be set (the default gateway is already instantiated as a result of the BD’s configuration).
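Once the template has been deployed, the resulting subnet scope can be spot-checked directly on APIC. The query below is only an illustrative sketch (the subnet 10.10.1.254/24 is the one used in this example, and the exact output format depends on the APIC release); the returned object should show a scope that includes both “public” (Advertised Externally) and “shared” (Shared between VRFs).

apic1# moquery -c fvSubnet -f 'fv.Subnet.ip=="10.10.1.254/24"'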

Properly configure the subnets associated with the External EPG to ensure inter-VRF N-S connectivity can be established. This requires performing the setting shown in Figure 97.


Figure 97.

Required Settings of the External EPG for Inter-VRF North-South Connectivity

In the example above, 0.0.0.0/0 is defined under the Ext-EPG, so the different flags set in the figure cause the following behavior:

“External Subnets for External EPG”: maps to this External EPG (i.e., associates to the corresponding class ID) all the incoming traffic, whatever its source IP address. As previously mentioned, in the specific case of 0.0.0.0/0, the class ID associated with all incoming traffic is in reality the VRF class ID (and not the External EPG class ID).

“Shared Route Control” with “Aggregate Shared Routes”: allows all the prefixes learned from the external routers (or locally configured as static routes) to be leaked into the internal VRF. Without the “Aggregate Shared Routes” flag set, only the 0.0.0.0/0 route would be leaked, and only if it were received from the external router. The same considerations apply when configuring a prefix different from 0.0.0.0/0.

“Shared Security Import”: this is required to ensure that the 0.0.0.0/0 (catch-all) prefix is installed, with its associated class ID, on the compute leaf nodes where the internal VRF is deployed. This allows the compute leaf to properly apply the security policy to flows originated by locally connected endpoints and destined to the external network domain.

Note: When specifying a more specific IP subnet (for example 192.168.1.0/24), the use of “Aggregate Shared Routes” is required to leak more specific prefixes, part of the /24 subnet, that may be learned on the L3Out. Without the flag set, only the /24 prefix itself would be leaked, if received from the external routers.
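With those flags set, a simple sanity check on a compute leaf (using the addressing of this example) is to confirm that the external prefix has been leaked into the internal VRF and installed with a class ID, reusing two commands already shown in this document:

Leaf101-Site1# show ip route 192.168.1.0/24 vrf Tenant-1:VRF1

Leaf101-Site1# vsh -c 'show system internal policy-mgr prefix' | grep 192.168.1.0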

One important consideration for the inter-VRF use case is related to how the External EPG(s) associated with the L3Out is (are) deployed. As previously mentioned, in this use case the subnet of a BD is advertised toward the external network domain based on the specific use of the flags discussed above, and there is no need to explicitly map the BD to the L3Out. This means that when deploying a stretched External EPG (as previously shown in Figure 82), you no longer have the capability of controlling out of which L3Out a BD’s subnet is announced, and by default the IP subnets that exist only in one site are also advertised out of the L3Out of the remote site (Figure 98).

Figure 98.

Advertising BD Subnets in the Shared Services Use Case

In the scenario above, it is possible to change the default behavior and ensure that the routing information advertised out of the local L3Out becomes preferred for locally defined subnets (while enabling host-based routing is still possible for the stretched subnets). Most commonly, eBGP adjacencies are established with the external routers, so a simple way to achieve this is, for example, to use the AS-Path prepend functionality.


Figure 99.

Optimizing Inbound Traffic with the Use of AS-Path Prepend

Figure 100, Figure 101, and Figure 102 highlight the steps needed for this configuration. Notice that the creation and application of a route-map to the L3Out is currently supported only on APIC and not on Nexus Dashboard Orchestrator.


Figure 100.

Creation of a “default-export” route-map associated to the L3Out


Figure 101.

Route Control Context Configuration (Match and Set Prefixes)


Figure 102.

Completion of Route Control Context Set Prefix
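Once the route-map is deployed, its effect can be verified by checking the AS-path of the routes advertised by the border leaf toward its external peer. The command below is only a sketch: the neighbor address 172.16.1.1 and the VRF name are taken from this example, and the exact keyword ordering may vary slightly across releases.

Leaf104-Site1# show ip bgp neighbors 172.16.1.1 advertised-routes vrf Tenant-1:VRF-Shared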

Note: Enabling host-based routing for non-stretched subnets is an alternative approach to optimize the inbound traffic path, assuming there are no scalability concerns.

Use Case 2 Verification

The same considerations made in Figure 89, Figure 90, and Figure 91 for connectivity between internal EPGs/BDs and the external network domain continue to apply in this use case. This means that optimal inbound routing can be influenced by enabling the host-based routing functionality at the specific BD level, whereas outbound communication always flows via the local L3Out connection.

Figure 103 shows the deployment of the VRFs required for this specific north-south inter-VRF use case. As shown, both VRFs are deployed on the BL node, whereas only the internal VRF is usually present on the compute leaf node.


Figure 103.

VRFs Deployment and VNIDs in VXLAN Encapsulated Traffic

Additionally, outbound traffic VXLAN-encapsulated on the compute leaf uses the VRF-Shared VNID in the header, so that the BL node receiving the traffic is able to perform the Layer 3 lookup in the VRF-Shared domain before sending the traffic to the external domain. The outputs below show the content of the routing tables on the compute leaf node and on the BL node. On compute leaf 101, the external prefix 192.168.1.0/24 is leaked into the VRF1 routing table, with the instruction to rewrite the VNID in the VXLAN header to 2293765 (representing the VNID of VRF-Shared in Site 1). On BL node 104, the internal subnet 10.10.1.0/24 is instead leaked into the VRF-Shared routing table, with the instruction to rewrite the VNID in the VXLAN header to 3112963 (representing the VNID of VRF1 in Site 1).

Leaf101Site1

Leaf101-Site1# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:21:25, static, rwVnid: vxlan-3112963

10.10.1.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.1.254, vlan43, [0/0], 3d16h, local, local

192.168.1.0/24, ubest/mbest: 1/0

*via 10.1.0.69%overlay-1, [200/0], 00:27:33, bgp-65501, internal, tag 3, rwVnid: vxlan-2293765

Leaf104Site1

Leaf104-Site1# show ip route vrf Tenant-1:VRF-Shared

IP Route Table for VRF "Tenant-1:VRF-Shared"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:23:32, static, tag 4294967292, rwVnid: vxlan-3112963

192.168.1.0/24, ubest/mbest: 1/0

*via 172.16.1.1%Tenant-1:VRF-Shared, [20/0], 1d20h, bgp-65501, external, tag 3
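The rwVnid values shown above can be mapped back to the corresponding VRFs directly on the BL node (which has both VRFs deployed), using the same command leveraged later in this document for the intersite L3Out use cases; the VRF names are the ones used in this example.

Leaf104-Site1# show vrf Tenant-1:VRF1 detail

Leaf104-Site1# show vrf Tenant-1:VRF-Shared detail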

In the inter-VRF scenario, the leaf node where the security policy is enforced when using contracts between internal and External EPGs depends on which side is the provider and which is the consumer of the contract.

If the Ext-EPG is the consumer and the internal EPG is the provider of the contract (the typical scenario when external clients send traffic toward the data center to “consume” a service offered by an application hosted there), the security policy is applied on the first leaf where the traffic is received. This means on the BL node for inbound traffic and on the compute leaf for outbound traffic, and this is valid independently of the site where the internal endpoint is deployed.


Figure 104.

Security Policy Enforcement when the Ext-EPG is the Consumer

If instead the contract’s direction is reversed and the External EPG is configured as the provider of the contract (typically for communications initiated from the data center to connect to an external service), the security policy is consistently applied on the compute leaf node for both legs of the traffic flow.


Figure 105.

Security Policy Enforcement when the Ext-EPG is the Provider

Use Case 3: Site-Local L3Out Connections to Communicate with External Resources (Inter-VRF/Shared Services Between Different Tenants)

In this shared services use case, the VRF for the internal EPGs/BDs and the VRF for the L3Out are defined in different tenants. The configuration steps and the deployment considerations made for the previous use case continue to apply here, with only the following differences:

The scope of the contract must now be set to “Global”.

The contract must be defined in a template associated with the “Provider” tenant. Applying this configuration on Nexus Dashboard Orchestrator automatically ensures that on APIC the contract gets exported toward the “Consumer” tenant where it can be seen as a “Contract Interface”, as shown in Figure 106 and Figure 107.


Figure 106.

Contract Imported into the Consumer Tenant


Figure 107.

Contract Interface Created on the Consumer Tenant

All the other considerations, including the advertisement of BD subnets toward the external network, remain exactly the same as in the previous use case 2.

Deploying Intersite L3Out

In all the use cases previously described, the requirement has always been to have a local L3Out connection available in each fabric part of the Multi-Site domain for outbound communication with external resources. This default behavior in an ACI Multi-Site deployment does not cover a couple of specific scenarios where there may be a need to communicate with resources accessible via an L3Out connection that is deployed only in a specific fabric.

The first scenario is to establish north-south connectivity between an internal endpoint connected to one site and the L3Out connection deployed in a remote site, as shown in Figure 108.


Figure 108.

Intersite North-South Connectivity Scenario

The second scenario is the enablement of transit routing between L3Out connections deployed in different sites, as shown in Figure 109.


Figure 109.

Intersite Transit Routing Scenario

ACI release 4.2(1) and MSO release 2.2(1) introduced support for the “Intersite L3Out” functionality, which allows changing the default Multi-Site behavior to cover the two use cases shown in the figures above, both for intra-VRF and inter-VRF (and/or inter-tenant) deployment scenarios.

It is worth noticing that, as of ACI release 5.2(3) and NDO release 3.5(1), intersite L3Out is not supported in conjunction with the use of Preferred Groups or vzAny. It is hence required to apply specific contracts between the EPGs and Ext-EPGs (or between Ext-EPGs) defined in separate sites.

Note: More details about the specific use cases requiring the deployment of the intersite L3Out functionality, and about the technical aspects of the control and data plane behavior, can be found in the ACI Multi-Site paper below (in this document the focus is mostly on how to deploy this functionality):

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html#ConnectivitytotheexternalLayer3domain

Intersite North-South Connectivity (Intra-VRF)

The first scenario considered here is the one displayed in Figure 110 for the establishment of intra-VRF intersite north-south connectivity.


Figure 110.

Intersite L3Out for North-South Connectivity (Intra-VRF)

As shown above, once the BDs and contracts are properly configured (allowing connectivity between EPG1-S1 and Ext-EPG-S1 and between EPG2-S2 and Ext-EPG-S1), two-way north-south communication can be established by default only for endpoints that are connected to the same fabric as the L3Out connection (EPG1-S1 in Site 1). Remote endpoints part of EPG2-S2 are also able to receive inbound traffic flows (leveraging the VXLAN data plane connectivity across the ISN); however, the return communication is not possible, as the external prefix 192.168.1.0/24 is not advertised across sites. This default behavior of ACI Multi-Site can be modified through the enablement of the “Intersite L3Out” functionality. The following configuration steps, performed on Nexus Dashboard Orchestrator, are required to achieve two-way north-south communication between the endpoint in Site 2 that is part of EPG2-S2 and the external network 192.168.1.0/24.

1. Properly configure BD2-S2 to advertise the IP subnet out of L3Out-S1. This requires marking the IP subnet as “Advertised Externally” and mapping the BD to the remote L3Out, both actions done at the site level since the BD is locally defined in Site 2. Notice that, to be able to associate BD2-S2 defined in Site 2 with L3Out-S1, it is mandatory to have the L3Out object defined in an NDO template. If the L3Out was initially created on APIC, it is possible to import the L3Out object into NDO.


Figure 111.

Configuration to Advertise BD2-S2’s Subnet out of L3Out1-S1

2. Configure Ext-EPG-S1 to properly classify incoming traffic. As previously discussed in this document, the use of a “catch-all” 0.0.0.0/0 prefix is quite common when the L3Out provides access to every external destination; in this specific use case, however, a more specific prefix is configured, as the L3Out likely provides access to a specific set of external resources.


Figure 112.

Classification Subnet Configured Under Ext-EPG-S1

3. As explained in greater detail in the ACI Multi-Site paper previously referenced, intersite L3Out connectivity is achieved by creating a VXLAN tunnel directly between the compute leaf node where the internal endpoint is connected and the BL node connected to the external resource. For this to be possible, it is first of all required to define an external TEP pool for the ACI fabric part of the Multi-Site domain where the L3Out is deployed. This allows a TEP address (part of the specified external TEP pool) to be provisioned for each border leaf node in that fabric, ensuring that the direct VXLAN tunnel can be established from a remote compute leaf node (since all the TEP addresses assigned by default to the fabric leaf and spine nodes are part of the original TEP pool configured during the fabric bring-up operation, and such a pool may not be routable between sites). Even if technically the external TEP pool is only needed for the fabric where the L3Out is deployed, it is recommended to provide one for each fabric part of the Multi-Site domain (just to ensure communication will work right away if/when a local L3Out is created at a later time).

The configuration of the external TEP pool can be done on NDO as part of the “Infra Configuration” workflow for each Pod of the fabrics part of the NDO domain, as shown in Figure 113.

Note: If a fabric part of the NDO domain is deployed as a Multi-Pod fabric, a separate external TEP pool must be specified for each Pod part of the fabric. The external TEP pool can range between a /22 and a /29 prefix.


Figure 113.

Configuring the External TEP Pool for Each Pod

Once the external TEP pool configuration is pushed to the fabrics, the first result is the provisioning of a dedicated loopback interface on the BL nodes, representing the external TEP address assigned to each node, which will be used as the destination of the VXLAN tunnel initiated from the compute node in a remote site. This can be seen, for example, on the BL node of Site 1 in the output below:

Leaf104Site1

Leaf104-Site1# show ip int brief vrf overlay-1

IP Interface Status for VRF "overlay-1"(4)

Interface Address Interface Status

eth1/49 unassigned protocol-up/link-up/admin-up

eth1/49.6 unnumbered protocol-up/link-up/admin-up

(lo0)

eth1/50 unassigned protocol-up/link-up/admin-up

eth1/50.7 unnumbered protocol-up/link-up/admin-up

(lo0)

eth1/51 unassigned protocol-down/link-down/admin-up

eth1/53 unassigned protocol-down/link-down/admin-up

eth1/54 unassigned protocol-down/link-down/admin-up

vlan8 10.1.0.30/27 protocol-up/link-up/admin-up

lo0 10.1.0.69/32 protocol-up/link-up/admin-up

lo1 10.1.232.65/32 protocol-up/link-up/admin-up

lo4 192.168.101.232/32 protocol-up/link-up/admin-up

lo1023 10.1.0.32/32 protocol-up/link-up/admin-up
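Since the compute leaf in the remote site terminates the VXLAN tunnel directly on this external TEP address, a useful sanity check (a sketch based on the addressing of this example) is to verify from a leaf node in Site 2 that the address is reachable in the infra VRF:

Leaf304-Site2# show ip route 192.168.101.232 vrf overlay-1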

4. The provisioning of the external TEP pool is not sufficient to trigger the exchange of external prefixes between sites, which is required to ensure that the outbound flow shown in the previous Figure 110 can be sent toward the BL node in Site 1. For that to happen, it is also required to apply a contract between EPG2-S2 in Site 2 and the Ext-EPG associated with the L3Out in Site 1.


Figure 114.

Apply a Contract between EPG2-S2 and Ext-EPG-S1

Once that contract relationship is established, a VPNv4/VPNv6 prefix exchange is triggered between the spines, allowing the external prefix 192.168.1.0/24 to be advertised to Site 2; this allows the successful establishment of outbound communication. This is confirmed by looking at the routing table of the compute leaf in Site 2.

Leaf 304 Site2

Leaf304-Site2# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.2.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.0.136.66%overlay-1, [1/0], 03:32:38, static, rwVnid: vxlan-2359299

10.10.2.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.2.254, vlan41, [0/0], 1d01h, local, local

192.168.1.0/24, ubest/mbest: 1/0

*via 192.168.101.232%overlay-1, [200/0], 00:00:02, bgp-100, internal, tag 65501, rwVnid: vxlan-3112963

As seen above, the next hop for the external prefix is the external TEP address assigned to the BL node in Site 1. Also, the routing table indicates that all the traffic destined to the external destination 192.168.1.0/24 should be encapsulated using the VXLAN ID 3112963 in the header. This value represents the VXLAN ID of the L3Out VRF (VRF1) in Site 1, as shown in the output below:

Leaf104Site1

Leaf104-Site1# show vrf Tenant-1:VRF1 detail

VRF-Name: Tenant-1:VRF1, VRF-ID: 41, State: Up

VPNID: unknown

RD: 103:3112963

This VXLAN ID value is added in the VXLAN header to allow the receiving BL node to derive the information of which VRF to perform the Layer 3 lookup in for the destination. Since the VXLAN tunnel is terminated directly on the BL node, there is no translation happening on the spines in Site 1 and, as a consequence, the compute leaf node needs to know the correct VXLAN ID representing VRF1 in Site 1. This information is hence communicated from Site 1 to Site 2 as part of the EVPN control plane update, as can be verified in the BGP routing table of the compute leaf node in Site 2.

Leaf304Site2

Leaf304-Site2# show ip bgp 192.168.1.0/24 vrf Tenant-1:VRF1

BGP routing table information for VRF Tenant-1:VRF1, address family IPv4 Unicast

BGP routing table entry for 192.168.1.0/24, version 30 dest ptr 0xa2262588

Paths: (1 available, best #1)

Flags: (0x08001a 00000000) on xmit-list, is in urib, is best urib route, is in HW

vpn: version 218279, (0x100002) on xmit-list

Multipath: eBGP iBGP

Advertised path-id 1, VPN AF advertised path-id 1

Path type: internal 0xc0000018 0x80040 ref 0 adv path ref 2, path is valid, is best path, remote site path

Imported from 103:19890179:192.168.1.0/24

AS-Path: 65501 3, path sourced external to AS

192.168.101.232 (metric 63) from 10.0.0.66 (172.16.200.1)

Origin IGP, MED not set, localpref 100, weight 0 tag 0, propagate 0

Received label 0

Received path-id 2

Extcommunity:

RT:65501:19890179

SOO:65501:33554415

COST:pre-bestpath:166:2684354560

COST:pre-bestpath:168:3221225472

VNID:3112963

Intersite North-South Connectivity (Inter-VRFs)

The only difference in this scenario is that the L3Out is part of a different VRF (VRF-Shared) compared to the internal endpoint (still part of VRF1).


Figure 115.

Intersite L3Out for North-South Connectivity (Inter-VRF)

The configuration steps 3 and 4 described above remain identical (except for the need to ensure that the scope of the applied contract is now at least “Tenant”); what changes is the configuration required to leak routes between VRFs and to properly apply the policy. The same considerations made in the “Use Case 2: Site-Local L3Out Connections to Communicate with External Resources (Inter-VRF/Shared Services Inside the Same Tenant)” section apply here as well. Figure 116 shows the configuration required for the subnet defined under the BD and EPG (assuming EPG1-S1 is the provider of the contract; otherwise the subnet under the EPG is not required) and the subnet configuration under Ext-EPG-S1.


Figure 116.

Provisioning the Configuration for the Intersite North-South inter-VRF Use Case

It is worth noticing that the same configuration must be provisioned from NDO for the use cases previously described in this section if the VRFs are defined in separate tenants. The only difference is that in such a case the scope of the contract must be “Global” and the contract needs to be defined in the Provider tenant. More information can be found in the “Use Case 3: Site-Local L3Out Connections to Communicate with External Resources (Inter-VRF/Shared Services Between Different Tenants)” section.

Intersite Transit Routing Connectivity (Intra-VRF)

In the previous use cases, the intersite L3Out functionality was introduced to allow communication across sites between endpoints and external resources. A specific use case where intersite L3Out is handy is the one where ACI Multi-Site plays the role of a “distributed core” interconnecting separate external network domains. This deployment model is referred to as intersite transit routing and is shown in Figure 117 for the use case where both L3Out connections are part of the same VRF1 routing domain.


Figure 117.

Intersite Transit Routing Connectivity (Intra-VRF)

A separate External EPG is defined for each L3Out connection in this case, as they provide connectivity to different external routed domains that need to communicate with each other. The following are the provisioning steps required to implement this scenario.

Define one or more prefixes associated with the Ext-EPGs to ensure incoming traffic can be properly classified. If the only deployed L3Outs were the ones shown in the figure above, a simple 0.0.0.0/0 prefix could be used for both of them. In a real-life scenario, however, it would be common to have different L3Out connections used to provide access to all the other external resources, so more specific prefixes are specified for the Ext-EPGs of the L3Out connections enabling intersite transit routing.


Figure 118.

Configuration of Classification Subnets on Ext-EPGs in Site 1 and 2

Ensure that the prefixes learned on each L3Out can be advertised out of the remote L3Out. In the specific example discussed in this section, the IP prefix 192.168.1.0/24 is received on the L3Out in Site 1 and should be advertised out of the L3Out in Site 2, and vice versa for the 192.168.2.0/24 prefix.


Figure 119.

Announcing Specific Prefixes out of the L3Out in Each Site

The last step to successfully enable intersite transit routing consists in creating a security policy between the Ext-EPGs associated with the L3Outs deployed in different sites. In the example in Figure 120, Ext-EPG-S1 in Site 1 provides the contract C1, which is then consumed by Ext-EPG-S2 in Site 2.


Figure 120.

Applying a Security Policy Between Ext-EPGs

Once the provisioning steps above have been deployed to the APIC domains in Site 1 and Site 2, intersite transit connectivity can be established. It is possible to verify on the BL nodes in each site that the remote external prefixes are indeed received:

Leaf104Site1

Leaf104-Site1# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

192.168.1.0/24, ubest/mbest: 1/0

*via 172.16.1.1%Tenant-1:VRF1, [20/0], 00:35:20, bgp-65501, external, tag 3

192.168.2.0/24, ubest/mbest: 1/0

*via 192.168.103.229%overlay-1, [200/0], 00:05:33, bgp-65501, internal, tag 100, rwVnid: vxlan-2359299

Leaf201Site2

Leaf201-Site2# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

192.168.1.0/24, ubest/mbest: 1/0

*via 192.168.101.232%overlay-1, [200/0], 00:03:02, bgp-100, internal, tag 65501, rwVnid: vxlan-3112963

192.168.2.0/24, ubest/mbest: 1/0

*via 172.16.2.1%Tenant-1:VRF1, [20/0], 00:38:25, bgp-100, external, tag 30

As shown above, the prefix 192.168.2.0/24 is learned in Site 1 via the VPNv4 control plane sessions established between the spines and installed on the local BL node, with the next hop representing the external TEP address assigned to the BL node in Site 2 (192.168.103.229) and the specific VXLAN ID to use representing VRF1 in Site 2 (vxlan-2359299). Similar considerations apply to the 192.168.1.0/24 prefix that is advertised from Site 1 to Site 2.
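The BGP attributes received with the remote external prefix, including the VNID representing VRF1 in Site 2, can also be inspected directly on the BL node in Site 1 with the same command shown earlier for the north-south use case (prefix and VRF name are those of this example):

Leaf104-Site1# show ip bgp 192.168.2.0/24 vrf Tenant-1:VRF1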

From a policy enforcement perspective, the contract is always applied inbound on the BL node where the traffic is received from the external network (Figure 121).


Figure 121.

Security Policy Applied Always Inbound on the BL Nodes

For this to be possible, it is required that each BL node knows the class ID for the local and remote external prefixes, as can be verified using the commands below.

Leaf104Site1

Leaf104-Site1# vsh -c 'show system internal policy-mgr prefix'

Vrf-Vni VRF-Id Table-Id Table-State VRF-Name Addr Class Shared Remote Complete

======= ====== =========== ======= ============================ ================================= ====== ====== ====== ========

3112963 41 0x29 Up Tenant-1:VRF1 192.168.1.0/24 49156 True True False

3112963 41 0x29 Up Tenant-1:VRF1 192.168.2.0/24 16392 True True False

Leaf201Site2

Leaf201-Site2# vsh -c 'show system internal policy-mgr prefix'

Vrf-Vni VRF-Id Table-Id Table-State VRF-Name Addr Class Shared Remote Complete

======= ====== =========== ======= ============================ ================================= ====== ====== ====== ========

2359299 31 0x1f Up Tenant-1:VRF1 192.168.2.0/24 49161 False True False

2359299 31 0x1f Up Tenant-1:VRF1

As noticed above, the class IDs for the prefixes are different in each site and they correspond to the values associated with the local Ext-EPG and with the shadow Ext-EPG that is created as a result of the establishment of the contract relationship.


Figure 122.

Class-ID Values for Local and Shadow Ext-EPGs in Each Site

As shown above, the establishment of the contract between the Ext-EPGs causes the instantiation of the whole shadow L3Out (with the associated Ext-EPG) in each remote APIC domain (L3Out-S2 is the shadow object in the APIC in Site 1, whereas L3Out-S1 is the shadow object in the APIC in Site 2). The creation of those shadow Ext-EPG objects allows the specific IP prefixes configured under the Ext-EPGs to be mapped to the proper class ID values.

Being able to associate the remote external prefixes with the right class ID value is critical for applying the policy inbound on the BL node. This is confirmed by looking at the zoning-rule tables for the BL nodes in Site 1 and Site 2.

Leaf104Site1

Leaf104-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

|4156 | 0 | 0 | implicit |uni-dir | enabled | 3112963 | |deny,log|any_any_any(21) |

|4232 | 0 | 0 |implarp |uni-dir | enabled | 3112963 | | permit |any_any_filter(17) |

|4127 | 0 | 15 | implicit |uni-dir | enabled | 3112963 | |deny,log|any_vrf_any_deny(22) |

|4124 | 0 | 49154 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4212 | 0 | 49153 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4234 | 0 | 32771 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4199 |49156 | 16392 | default |uni-dir-ignore | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4213 |16392 | 49156 | default | bi-dir | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

Note: 3112963 is the Segment-ID value for VRF1 in Site 1 (this information can be retrieved using the “show vrf <VRF_name> detail” command).

Leaf201Site2

Leaf201-Site2# show zoning-rule scope 2359299

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

|4183 | 0 | 0 | implicit |uni-dir | enabled | 2359299 | |deny,log|any_any_any(21) |

|4108 | 0 | 0 |implarp |uni-dir | enabled | 2359299 | | permit |any_any_filter(17) |

|4213 | 0 | 15 | implicit |uni-dir | enabled | 2359299 | |deny,log|any_vrf_any_deny(22) |

|4214 | 0 | 32772 | implicit |uni-dir | enabled | 2359299 | | permit |any_dest_any(16) |

|4212 | 0 | 32771 | implicit |uni-dir | enabled | 2359299 | | permit |any_dest_any(16) |

|4201 | 0 | 16392 | implicit |uni-dir | enabled | 2359299 | | permit |any_dest_any(16) |

|4109 |49161 | 49162 | default | bi-dir | enabled | 2359299 | Tenant-1:C1 | permit |src_dst_any(9) |

|4182 |49162 | 49161 | default |uni-dir-ignore | enabled | 2359299 | Tenant-1:C1 | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+----------------------+

Note: 2359299 is the Segment-ID value for VRF1 in Site 2.

Intersite Transit Routing Connectivity (Inter-VRFs)

Intersite transit routing communication is also possible in the shared services scenario, shown in Figure 123, where the L3Outs deployed in each site are part of different VRFs.


Figure 123.

Intersite Transit Routing Connectivity (Inter-VRFs)

The required provisioning steps are very similar to the intra-VRF scenario previously discussed.

Properly configure the flags on the Ext-EPGs associated with the IP prefixes used for classifying the traffic. The setting of those flags is required to leak the IP prefixes between VRFs and to properly install in the remote BL nodes the class ID values for those prefixes.


Figure 124.

Setting the Flags for Route-leaking and Class-ID Installation

Ensure that the prefixes learned on each L3Out can be advertised out of the remote L3Out. This requires the exact configuration previously shown in Figure 119 for the intra-VRF use case.

Apply a security contract between the Ext-EPGs. This can be done with the same configuration shown in Figure 120, with the only difference that the scope of the contract C1 must be set to “Tenant” or “Global”, depending on whether the VRFs are part of the same tenant or defined in separate tenants.

As shown in Figure 125, the intersite transit routing communication is then established across fabrics, leveraging the fact that the BL node in Site 1 encapsulates traffic toward the remote BL node in Site 2 using the VRF Segment-ID representing VRF-Shared in that remote fabric, and vice versa.


Figure 125.

Using Remote VRF Segment-ID when Sending Traffic to the Remote Site

This can be verified by looking at the output below.

Leaf104Site1

Leaf104-Site1# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

192.168.1.0/24, ubest/mbest: 1/0

*via 172.16.1.1%Tenant-1:VRF1, [20/0], 07:37:22, bgp-65501, external, tag 3

192.168.2.0/24, ubest/mbest: 1/0

*via 192.168.103.229%overlay-1, [200/0], 01:12:10, bgp-65501, internal, tag 100, rwVnid: vxlan-2097156

Note: 2097156 is the Segment-ID value for VRF-Shared in Site 2 (this information can be retrieved using the “show vrf <VRF_name> detail” command).

Leaf201Site2

Leaf201-Site2# show ip route vrf Tenant-1:VRF-Shared

IP Route Table for VRF "Tenant-1:VRF-Shared"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

192.168.1.0/24, ubest/mbest: 1/0

*via 192.168.101.232%overlay-1, [200/0], 00:02:07, bgp-100, internal, tag 65501, rwVnid: vxlan-3112963

192.168.2.0/24, ubest/mbest: 1/0

*via 172.16.2.1%Tenant-1:VRF-Shared, [20/0], 01:16:31, bgp-100, external, tag 30

Note: 3112963 is the Segment-ID value for VRF1 in Site 1.

From the point of view of policy enforcement, the same behavior previously shown in Figure 121 continues to be valid in the inter-VRF scenario. The only difference is that the class ID for the remote external prefix is now installed as a result of the “Shared Security Import” flag setting shown in Figure 124 for the Ext-EPGs. This can be verified using the same commands used for the intra-VRF use case.

Leaf104Site1

Leaf104-Site1# vsh -c 'show system internal policy-mgr prefix'

Vrf-Vni VRF-Id Table-Id Table-State VRF-Name Addr Class Shared Remote Complete

======= ====== =========== ======= ============================ ================================= ====== ====== ====== ========

3112963 41 0x29 Up Tenant-1:VRF1 192.168.1.0/24 10930 True True False

3112963 41 0x29 Up Tenant-1:VRF1 192.168.2.0/24 10934 True True False

Leaf201Site2

Leaf201-Site2# vsh -c 'show system internal policy-mgr prefix'

Vrf-Vni VRF-Id Table-Id Table-State VRF-Name Addr Class Shared Remote Complete

======= ====== =========== ======= ============================ ================================= ====== ====== ====== ========

2097156 34 0x22 Up Tenant-1:VRF-Shared 192.168.1.0/24 5492 True True False

2097156 34 0x22 Up Tenant-1:VRF-Shared 192.168.2.0/24 32771 True True False

As can also be seen in Figure 126, the class ID values assigned to the local and shadow Ext-EPGs are now different from the ones used for the intra-VRF use case, as global values must be used to ensure they are unique across VRFs.


Figure 126.

Class-ID Values for Local and Shadow Ext-EPGs in Each Site

The configuration of the zoning-rule entries on the BL nodes confirms that the security policy is always applied inbound on the BL node that receives the traffic flow from the external network.

Leaf104Site1

Leaf104-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+----------------+---------+---------+-------------+-----------------+----------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+-----------------+----------------------+

|4156 | 0 | 0 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_any_any(21) |

|4232 | 0 | 0 |implarp |uni-dir | enabled | 3112963 | | permit |any_any_filter(17) |

|4127 | 0 | 15 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_vrf_any_deny(22) |

|4124 | 0 | 49154 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4212 | 0 | 49153 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4234 | 0 | 32771 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4213 |10930 | 14 | implicit |uni-dir | enabled | 3112963 | |permit_override|src_dst_any(9) |

|4199 |10930 | 10934 | default |uni-dir-ignore | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

|4206 |10934 | 10930 | default | bi-dir | enabled | 3112963 | Tenant-1:C1 | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+-----------------+----------------------+

Note: 3112963 is the Segment-ID value for VRF1 in Site 1 (this information can be retrieved using the “show vrf <VRF_name> detail” command).

Leaf201Site2

Leaf201-Site2# show zoning-rule scope 2097156

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+------------------------+

| Rule ID | SrcEPG | DstEPG | FilterID | Dir | operSt | Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+------------------------+

|4182 | 0 | 0 | implicit |uni-dir | enabled | 2097156 | |deny,log|any_any_any(21) |

|4109 | 0 | 0 |implarp |uni-dir | enabled | 2097156 | | permit |any_any_filter(17) |

|4190 | 0 | 15 | implicit |uni-dir | enabled | 2097156 | |deny,log|any_vrf_any_deny(22) |

|4198 | 5492 | 32771 | default |uni-dir-ignore | enabled | 2097156 | Tenant-1:C1 | permit |src_dst_any(9) |

|4176 |32771 | 5492 | default | bi-dir | enabled | 2097156 | Tenant-1:C1 | permit |src_dst_any(9) |

|4222 | 5492 | 0 | implicit |uni-dir | enabled | 2097156 | |deny,log|shsrc_any_any_deny(12) |

+---------+--------+--------+----------+----------------+---------+---------+-------------+----------+------------------------+

Note: 2097156 is the Segment-ID value for VRF-Shared in Site 2.

The same configuration discussed in this section would be required to establish inter-VRF transit routing connectivity when the VRFs are defined in different tenants. The only thing to ensure in that case is that the contract is defined with scope “Global” as part of the Provider tenant.

Service Node Integration with ACI Multi-Site

The basic assumption for service node integration with ACI Multi-Site is that one (or more) set of dedicated service nodes is deployed in each fabric part of the Multi-Site domain. The support of clustered services across sites is in fact limited and won’t be considered in the context of this paper, as it is not the most common nor the recommended approach in a Multi-Site deployment.


Figure 127.

Service Node Integration with ACI Multi-Site

The first immediate consequence is that each specific service node function should be deployed in each fabric in the most resilient way possible. Figure 128 shows three different ways to achieve local service node resiliency inside each fabric (the specific example in the figure refers to firewalling services, but the same considerations apply to other types of service nodes):

Deployment of an active/standby cluster: this usually implies that the whole cluster is seen as a single MAC/IP address pair, even if there are some specific service nodes on the market that do not preserve the active MAC address after a node switchover event (we’ll also discuss this case in more detail below).

Deployment of an active/active cluster: in the specific case of a Cisco ASA/FTD deployment, the whole cluster can be referenced by a single MAC/IP address pair (owned by all the nodes belonging to the cluster). Other active/active implementations on the market may instead result in each cluster node owning a dedicated and unique MAC/IP pair.

Note: The deployment of an A/A cluster in a fabric part of a Multi-Site domain requires the minimum ACI release 5.2(2e) to be used in that site.

Deployment of multiple independent service nodes in each fabric, each one referenced by a unique MAC/IP address pair.


Figure 128.

Different Options for the Resilient Deployment of a Service Node Function in an ACI Fabric

Note: The focus in this paper is the deployment of service nodes in Layer 3 mode, as it is the most common deployment scenario. Service graph configuration also supports the use of Layer 1 and Layer 2 PBR; for more information, please refer to the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html

Independently from the specific redundancy deployment option of choice, the configuration required to define the service node and the associated PBR policy must always be performed at the APIC level, for each ACI fabric part of a Multi-Site domain. As will be clarified in the following sections, the different redundancy options shown in Figure 128 can be deployed based on the specific service node and PBR policy configuration that is created on APIC. This configuration also varies depending on whether one or more service nodes must be inserted in the communication between EPGs, so those scenarios will be considered independently.

For a more detailed discussion of service node integration options, please refer to the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-743107.htm

Service Graph with PBR with Multi-Site for the Insertion of a Single Service Node

The first use case to consider is the insertion of a single service node to redirect the traffic flows established between two EPGs. Different scenarios can be deployed, depending on whether the EPGs are deployed for internal endpoints or associated with L3Outs (Ext-EPG), and whether they are part of the same VRF or different VRFs (and/or tenants).

Independently from the specific use case, the first step consists in defining a logical service node for each fabric that is part of the Multi-Site domain. As shown in Figure 129 below, the creation of this single logical service node must be performed at the APIC controller level and not from NDO. In the specific example below, the configuration leverages two firewall nodes (usually referred to as “concrete devices”) in each site, but similar considerations apply when deploying a different type of service node (server load balancer, etc.).


Figure 129.

Deployment of a Single Service Node on APIC

Note: The configuration shown above must be performed on all the APIC controllers managing the fabrics that are part of the Multi-Site domain. Notice also that the service node must be configured as “unmanaged” by unchecking the “Managed” flag, which is the only possible option for integration with Multi-Site.

The two concrete firewall nodes shown above are deployed as part of a cluster (active/standby or active/active) or as independent nodes, based on the specific redundancy option of choice (one of the models shown in the previous Figure 128). For more specific information on how to build a cluster configuration with Cisco ASA/FTD firewall devices, please refer to the documentation below:

https://www.cisco.com/c/en/us/support/security/firepower-ngfw/products-installation-and-configuration-guides-list.html

https://www.cisco.com/c/en/us/support/security/asa-5500-series-next-generation-firewalls/products-installation-and-configuration-guides-list.html

Independently from the specific redundancy model of choice, each APIC controller exposes to NDO a single logical service node (named “Site1-FW” in the example in the figure above), which is connected to the fabric via a single logical one-arm interface (named “One-Arm-Site1-FW”). Those specific objects are then used when provisioning the service graph with PBR configuration on NDO, as will be shown later in this section.

Once the logical firewall service node is defined for each fabric, the second configuration step that must be performed at the APIC level is the definition of the PBR policy. Figure 130 shows the creation of the PBR policy for the active/standby and active/active cluster options supported with Cisco firewalls: in this case, a single MAC/IP pair is specified in the policy, since it represents the entire firewall cluster (i.e., it is assigned to all the active firewall nodes in the cluster). The redirection in each fabric is in this case always performed toward that specific MAC/IP value, which could be deployed on a single concrete device (active/standby cluster) or on many concrete devices (active/active cluster).

Note: Starting from ACI release 5.2(1), the configuration of the MAC address in the PBR policy is no longer mandatory, and the MAC associated with the specified IP address can instead be dynamically discovered. This functionality requires the enablement of tracking for the PBR policy. For more information on this PBR functionality, please refer to the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html


Figure 130.

Definition of the PBR Policy for a Firewall Cluster (single MAC/IP pair)

Figure 131 shows instead the PBR policy that is required when the logical firewall service node is seen as separate MAC/IP address pairs: this is the case when deploying independent service nodes in each fabric, or even with some third-party firewall cluster implementations. The redirection of traffic happens in this case to different MAC/IP addresses on a per-flow basis, and a functionality called “symmetric PBR” (enabled by default on ACI leaf nodes starting from EX models and newer) ensures that both legs of the same traffic flow are always redirected to the same MAC/IP pair.


Figure 131.

Definition of the PBR Policy for “Independent” Firewall Nodes (Multiple MAC/IP Pairs)

The name of the created PBR policy (“PBR-FW-Site1” in the specific examples in Figure 130 and Figure 131) is then exposed to Nexus Dashboard Orchestrator to be used for the provisioning of the service graph with PBR configuration.

Note: A similar configuration is needed on the other APIC controllers of the fabrics part of the Nexus Dashboard Orchestrator domain.

When defining the PBR policy to redirect traffic to the MAC/IP address pairs, and independently from the use of the new 5.2(1) dynamic MAC discovery functionality previously mentioned, it is always a best practice recommendation to create an associated "tracking" configuration, so that the fabric can constantly verify the health of the service node. When multiple MAC/IP pairs are used, tracking is clearly important to ensure that a failed node associated with a specific MAC/IP value stops being used for traffic redirection. But there are convergence improvements in using tracking also in the scenario (such as the Cisco active/active firewall cluster) where multiple nodes inside a fabric use the same MAC/IP value. For more information on how to configure service node tracking in an ACI fabric, please refer to the documentation below:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/5-x/l4-l7-services/cisco-apic-layer-4-to-layer-7-services-deployment-guide-50x/m_configuring_policy_based_redirect.html

The use cases discussed in this section for the insertion of a firewall service node using a service graph with PBR are highlighted in Figure 132.

Figure 132. Service Graph with PBR for Insertion of Firewall Service Node

As shown above, the consumer and provider EPGs can be part of the same VRF (and tenant) or deployed in separate VRFs (and tenants, if required). Also, in the most common deployment model the firewall node is deployed in Layer 3 mode and connected to a service BD in one-arm mode. Doing this simplifies the routing configuration on the firewall (a simple static default route pointing to the service BD IP address is required), but when preferred it is also possible to connect the inside and outside interfaces of the firewall to separate BDs (two-arm mode).

Note: Service node insertion using a service graph with PBR is not supported for the intersite transit routing use case (i.e. L3Out to L3Out communication). Hence, only a "regular" ACI contract can be applied in that case between L3Outs defined in different sites, as previously discussed in the "Intersite Transit Routing Connectivity (Intra-VRF)" and "Intersite Transit Routing Connectivity (Inter-VRFs)" sections.

Firewall Insertion for North-South Traffic Flows (Intra-VRF)

The first use case to provision is the one requiring the insertion of a firewall service for intra-VRF north-south connectivity.

Figure 133. PBR Policy Applied on the Compute Leaf Nodes for Inbound Traffic Flows

Figure 133 shows how the PBR policy is always applied on the compute leaf nodes for all inbound traffic flows, no matter the specific L3Out receiving the traffic. This behavior requires the VRF to have the "Policy Control Enforcement Direction" configured as "Ingress", which is the default value for all the VRFs created on APIC or NDO.

Figure 134. Default VRF Setting for Policy Control Enforcement Direction

The same behavior applies to the outbound traffic flow, and this is the key functionality that ensures that redirection happens to the same firewall service node already used for the inbound flow (the firewall service that is leveraged is always in the same fabric where the internal endpoint is connected).

Figure 135. PBR Policy Applied on the Compute Leaf Nodes for Outbound Traffic Flows

Notice that this is always the case regardless of the specific L3Out connection that is used to communicate with the external devices: the outbound traffic from endpoint 10.10.3.2 normally leverages the local L3Out, even if the inbound flows may have been received on the L3Out of Site1 (as shown in previous Figure 133).

The provisioning steps to be performed on NDO to integrate the firewall for north-south traffic flows (intra-VRF) are described below.

Configure the subnets of the consumer and provider BDs to ensure they can be advertised out of the L3Out in each site. This requires configuring the BD subnets as "Advertised Externally" and mapping the BDs to the specific L3Outs where the prefix should be advertised, as described in the previous "Connectivity to the External Layer 3 Domain" section.

Configure the External EPG to properly classify incoming traffic. Assuming a stretched Ext-EPG is deployed, it is common to specify a “catch-all” 0.0.0.0/0 prefix with the associated “External Subnets for External EPGs” flag set.
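The two steps above are performed on NDO, but what ultimately gets rendered in each APIC is roughly equivalent to the objects sketched below (Python; the tenant, BD, L3Out, and Ext-EPG names are hypothetical placeholders, and the 10.10.1.254/24 gateway mirrors the BD1-S1 subnet used later in this section).

import json

# Hedged sketch of the APIC objects behind the two steps above:
#  - BD subnet flagged as "Advertised Externally" (fvSubnet scope "public")
#  - 0.0.0.0/0 classification subnet on the external EPG with the
#    "External Subnets for External EPGs" flag (l3extSubnet scope "import-security")
bd_subnet = {
    "fvSubnet": {
        "attributes": {
            "ip": "10.10.1.254/24",      # BD1-S1 gateway (placeholder)
            "scope": "public",           # Advertised Externally
        }
    }
}

ext_epg_subnet = {
    "l3extSubnet": {
        "attributes": {
            "ip": "0.0.0.0/0",
            "scope": "import-security",  # External Subnets for External EPGs
        }
    }
}

# POST (after authenticating) to, respectively:
#   https://<apic>/api/mo/uni/tn-Tenant-1/BD-BD1-S1.json
#   https://<apic>/api/mo/uni/tn-Tenant-1/out-<L3Out-name>/instP-Stretched-Ext-EPG.json
print(json.dumps(bd_subnet, indent=2))
print(json.dumps(ext_epg_subnet, indent=2))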

Define the "service BD" used to connect the firewall nodes deployed in each fabric. This BD must be provisioned from NDO in a template associated with all the sites. The BD is configured as a Layer 2 stretched object, but there is no need to enable BUM traffic forwarding (which prevents cross-site traffic flooding for this specific BD). There is also no requirement to configure an EPG for the firewall nodes connected to this BD, as that will be automatically created when deploying the service graph.

Figure 136. Provisioning of the Firewall "Service BD"

Note: A single "Service BD" is deployed in this example, as the firewall is configured in "one-arm" mode (as previously shown in Figure 130). If the firewall were instead deployed in "two-arm" mode, two separate "service BDs" would be provisioned, one for each firewall interface. Also, at the time of writing this paper, it is not supported to insert in a service graph a service node that is connected to the fabric via an L3Out connection.

Create the service graph on the Nexus Dashboard Orchestrator for firewall insertion: assuming that the service node needs to be inserted for communication between endpoints connected to different fabrics, the service graph should be defined in the template associated with all the sites that are part of the Multi-Site domain (i.e. the service graph is provisioned as a 'stretched' object). As shown in Figure 137, the configuration for the service graph is provisioned in two parts: first, at the global template level, to specify which service node should be inserted (a firewall in this specific example); second, at the site level, to map the specific logical firewall device that has been defined on APIC and is now exposed to Nexus Dashboard Orchestrator (see previous Figure 129).

Figure 137. Definition of the Service Graph on NDO

Define a contract and associate the service graph with it. The contract is usually defined in a template mapped to all the sites; in the example in Figure 138, a "Permit-All" filter is associated with the contract to ensure that all traffic is redirected to the firewall. It is possible to change this behavior and make the filter more specific, if the goal is instead to redirect to the firewall only specific traffic flows.

Figure 138. Define a Contract with Associated Service Graph (Global Template Level)
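What NDO renders on each APIC for such a contract is roughly the object tree sketched below (Python; the contract, subject, and graph names are hypothetical placeholders). The "Permit-All" behavior corresponds to referencing a permit-any filter, while the service graph is attached to the contract subject.

import json

# Hedged sketch: a contract whose subject references the permit-any "default"
# filter and has the service graph attached. All names are placeholders.
contract = {
    "vzBrCP": {
        "attributes": {"name": "C-NS-FW", "scope": "context"},  # scope "context" = VRF
        "children": [
            {"vzSubj": {
                "attributes": {"name": "Subject"},
                "children": [
                    # "Permit-All": reference the permit-any "default" filter
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "default"}}},
                    # Attach the service graph defined in the previous step
                    {"vzRsSubjGraphAtt": {"attributes": {"tnVnsAbsGraphName": "FW-Graph"}}},
                ],
            }}
        ],
    }
}

# POST (after authenticating) to: https://<apic>/api/mo/uni/tn-Tenant-1.json
print(json.dumps(contract, indent=2))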

Also, once the service graph is associated with the contract, it is then required to specify the BD(s) where the firewall logical node is connected. In our specific example the firewall is connected in one-arm mode, hence it is possible to specify the same "Service-BD" for both the consumer and provider firewall connectors (interfaces). Notice also that the "Service-BD" must be associated with the connectors at the global template level, which is the main reason why that BD must be provisioned as a stretched object available in all the sites.

It is also required to apply a configuration at the site-local level, to be able to associate the PBR policy to each service node interface (consumer and provider connectors). As shown in Figure 139, in our specific example where the service nodes are connected in one-arm mode, the same PBR policy is applied for both connectors, but that would not be the case when the firewall is connected in two-arm mode. Also, in specific service-graph deployments, it may be necessary to apply the PBR policy only for one interface (i.e. for one specific direction of traffic) and not for both (for example for SLB deployments where only return traffic originated from the server farm must be redirected to the SLB node).

Figure 139. Associate the PBR Policy to the Service Node's Interfaces

The last provisioning step consists in applying the previously defined contract between the internal EPGs and the external EPG. As previously discussed in the "Connectivity to the External Layer 3 Domain" section, the definition of a stretched external EPG is recommended for the L3Outs deployed across sites that provide access to the same set of external resources, as it simplifies the application of the security policy.

Figure 140. Applying the Contract to Consumer and Provider EPGs

In the intra-VRF scenario discussed in this section, it does not matter which side is the provider or the consumer: the PBR policy is always applied on the compute leaf node anyway, as long as the VRF has the "Policy Control Enforcement Direction" set to "Ingress" (default configuration).

Note: As of NDO release 3.1(1), vzAny cannot be used in conjunction with a contract that has an associated service graph. The only option to apply a PBR policy between two EPGs (internal and/or external) hence consists in creating a specific contract, as in the example above.

Once the provisioning steps described above are completed, a separate service graph is deployed in each APIC domain and north-south traffic flows start getting redirected through the firewall nodes. Figure 141 below shows how to verify that the service graphs have been successfully rendered on APIC (verify there are no faults highlighting deployment issues).

Figure 141. Rendering of the Service Graph on the APIC Controller
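The same check can also be scripted against the APIC REST API. The sketch below (Python with the requests library; the APIC address and credentials are placeholders, and the configSt attribute check reflects the deployed-graph state reported by recent APIC releases, so verify it against your version) lists the deployed graph instances and their rendering state.

import requests
import urllib3

urllib3.disable_warnings()

APIC = "https://<apic-ip>"          # placeholders -- adjust to your environment
USER, PWD = "admin", "password"

session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}},
             verify=False)

# Deployed graph instances: configSt should report "applied" when the
# graph has been rendered without faults.
resp = session.get(f"{APIC}/api/class/vnsGraphInst.json", verify=False)
for obj in resp.json()["imdata"]:
    attrs = obj["vnsGraphInst"]["attributes"]
    print(attrs["dn"], attrs.get("configSt"))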

It is also possible to verify on the compute leaf node that the traffic is properly redirected to the firewall node, as highlighted in the output below.

Leaf101 Site1

Leaf101-Site1#show zoning-rule scope 3112963

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

| Rule ID |SrcEPG|DstEPG|FilterID| Dir|operSt| Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

|4194 | 0 | 0 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_any_any(21) |

|4203 | 0 | 0 |implarp |uni-dir | enabled | 3112963 | | permit |any_any_filter(17) |

|4227 | 0 | 15 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_vrf_any_deny(22) |

|4217 | 0 | 32771 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4197 | 0 | 49153 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4200 |32773 | 16391 | default |uni-dir | enabled | 3112963 | | permit |src_dst_any(9) |

|4223 |32773 | 16388 | default |uni-dir | enabled | 3112963 | | permit |src_dst_any(9) |

|4181 | 0 | 49157 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4109 |16388 | 49158 | default |uni-dir-ignore | enabled | 3112963 | |redir(destgrp-4) |src_dst_any(9) |

|4228 |49158 | 16391 | default | bi-dir | enabled | 3112963 | |redir(destgrp-4) |src_dst_any(9) |

|4170 |16391 | 49158 | default |uni-dir-ignore | enabled | 3112963 | |redir(destgrp-4) |src_dst_any(9) |

|4198 |49158 | 16388 | default | bi-dir | enabled | 3112963 | |redir(destgrp-4) |src_dst_any(9) |

|4208 |32773 | 49158 | default |uni-dir | enabled | 3112963 | | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

Regarding the topology shown in Figure 133 and Figure 135, 16388 represents in Site1 the class-ID of EPG1-S1 (where the endpoint 10.10.1.1 is connected), whereas 49158 is the class-ID for the Stretched-Ext-EPG. At the same time, 16391 represents instead the class-ID of EPG1-Stretched inside Site1. The output above shows how a redirection policy is applied to both legs of the communication between the internal EPGs and the external EPG. The following command points out the specific node (50.50.50.10 is the IP of the firewall in Site1) to which the traffic is being redirected.

Leaf101-Site1# show service redir info group 4

=======================================================================================================================================

LEGEND

TL: Threshold(Low) | TH: Threshold(High) | HP: HashProfile | HG: HealthGrp | BAC: Backup-Dest | TRA: Tracking | RES: Resiliency

=======================================================================================================================================

GrpID  Name       destination                           HG-name       BAC  operSt   operStQual   TL  TH  HP   TRAC  RES

=====  ====       ===========                           ============  ===  =======  ===========  ==  ==  ===  ====  ===

4      destgrp-4  dest-[50.50.50.10]-[vxlan-3112963]    Not attached  N    enabled  no-oper-grp  0   0   sym  no    no

Note: If multiple independent concrete devices were used to build the logical firewall service node, the redirection policy would show multiple IP destinations (one for each concrete device).
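To correlate the class-ID values shown in the zoning-rule output with the corresponding EPGs, the class-ID can be looked up on APIC, where it is exposed as the pcTag attribute. The sketch below (Python with requests; the APIC address, credentials, and the 16388 value are placeholders taken from the example above) resolves a class-ID to the EPG that owns it; for external EPGs, the same query can be run against the l3extInstP class.

import requests
import urllib3

urllib3.disable_warnings()

APIC = "https://<apic-ip>"          # placeholders -- adjust to your environment
USER, PWD = "admin", "password"

session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}},
             verify=False)

# Resolve a class-ID (pcTag) from "show zoning-rule" to the owning EPG
pctag = "16388"
resp = session.get(f"{APIC}/api/class/fvAEPg.json",
                   params={"query-target-filter": f'eq(fvAEPg.pcTag,"{pctag}")'},
                   verify=False)
for obj in resp.json()["imdata"]:
    print(obj["fvAEPg"]["attributes"]["dn"])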

One last consideration applies to the specific scenario where a local L3Out connection is not deployed (or becomes unavailable because of a failure). In this case, the intersite L3Out functionality can be used to ensure that inbound and outbound traffic flows can leverage the L3Out in Site1 also for communication with endpoints connected to Site2, as discussed in the "Deploying Intersite L3Out" section. Intersite L3Out can be combined with a service graph with PBR; the two functionalities work independently from each other, but because of internal validation, the behavior shown in Figure 142 below is only supported when the fabrics run ACI release 4.2(5) (or any later release of the 4.2(x) train) or release 5.1(1) and later.

Figure 142. Intersite L3Out and Service Graph with PBR

Firewall Insertion for North-South Traffic Flows (Inter-VRFs)

Starting from ACI releases 4.2(5) and 5.1(1), service insertion for north-south traffic flows is supported also for the inter-VRF use case, where the L3Out and the internal EPGs are mapped to different VRFs (that can be deployed in the same tenant or in different tenants). The functional behavior in this case is the same already shown in previous Figure 133 and Figure 135 for the intra-VRF scenario. Also in this case, the PBR policy is always applied on the compute leaf nodes to avoid the creation of asymmetric traffic across the independent service node functions deployed in separate ACI fabrics.

From a provisioning perspective, the following specific considerations apply in the inter-VRF use case:

The internal consumer BD subnets and Ext-EPG prefixes must be properly configured to ensure route leaking happens and the BD subnets are advertised toward the external network. For more configuration information on how to achieve this, please refer to the previous "Connectivity to the External Layer 3 Domain" section.

For intra-tenant deployments, the "Service-BD" can be configured as part of either VRF, as long as it is a stretched object. For inter-tenant scenarios, the "Service-BD" must instead be part of the VRF defined in the provider tenant.

The contract with the associated service graph must have a scope of "Tenant" (if the VRFs are part of the same tenant) or "Global" (if the VRFs are part of different tenants). In the inter-tenant scenario, the contract must be defined in a template (usually a stretched template) associated with the provider tenant.

To ensure that the PBR policy is always applied on the compute leaf nodes, the Ext-EPG must always be defined as the provider of the contract, whereas the internal EPGs are the consumers.

Once the provisioning steps above are completed, the north-south traffic flows behave exactly like in the intra-VRF use case. This applies also to the scenario where the service graph is combined with intersite L3Out, similarly to what is shown in Figure 142 for the intra-VRF case.
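As a reference for the contract-scope consideration above, the attribute that ends up being set on the contract object in APIC is sketched below (Python; the contract and tenant names are hypothetical placeholders).

import json

# Hedged sketch: contract scope values on APIC are "context" (VRF), "tenant",
# and "global". Use "tenant" for the inter-VRF/intra-tenant case and "global"
# for the inter-tenant case.
contract_scope = {
    "vzBrCP": {
        "attributes": {"name": "C-NS-FW", "scope": "global"}
    }
}

# POST (after authenticating) to: https://<apic>/api/mo/uni/tn-<provider-tenant>.json
print(json.dumps(contract_scope, indent=2))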

From a verification perspective, the first thing to check is that the routes are properly leaked between the VRFs. The output below shows the specific example where the external prefix 192.168.1.0/24 is leaked into VRF1 on the compute leaf node, whereas the subnet for BD1-S1 (10.10.1.0/24) is leaked into VRF-Shared on the border leaf node.

Leaf101 Site1

Leaf101-Site1# show ip route vrf Tenant-1:VRF1

IP Route Table for VRF "Tenant-1:VRF1"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 12:48:25, static

10.10.1.254/32, ubest/mbest: 1/0, attached, pervasive

*via 10.10.1.254, vlan57, [0/0], 00:29:17, local, local

192.168.1.0/24, ubest/mbest: 1/0

*via 10.1.0.69%overlay-1, [200/0], 01:18:33, bgp-65501, internal, tag 3, rwVnid: vxlan-2293765

Leaf104 Site1

Leaf104-Site1# show ip route vrf Tenant-1:VRF-Shared

IP Route Table for VRF "Tenant-1:VRF-Shared"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

10.10.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive

*via 10.1.112.66%overlay-1, [1/0], 00:31:30, static, tag 4294967292, rwVnid: vxlan-3112963

192.168.1.0/24, ubest/mbest: 1/0

*via 172.16.1.1%Tenant-1:VRF-Shared, [20/0], 21:11:40, bgp-65501, external, tag 3

Notice how the leaked prefixes carry the information about the specific segment ID to insert in the VXLAN header when sending the traffic to the other leaf node (vxlan-2293765 is assigned to VRF-Shared, whereas vxlan-3112963 is assigned to VRF1). This ensures that the receiving leaf node can perform the Layer 3 lookup in the right VRF.
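To map a VNID or zoning-rule scope value back to the VRF it belongs to, the VRF objects can be queried on APIC. The sketch below (Python with requests; the APIC address and credentials are placeholders) prints each VRF with its segment and scope values, which correspond to the vxlan-<VNID> and "show zoning-rule scope <VNID>" values used in this section.

import requests
import urllib3

urllib3.disable_warnings()

APIC = "https://<apic-ip>"          # placeholders -- adjust to your environment
USER, PWD = "admin", "password"

session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}},
             verify=False)

# Each fvCtx (VRF) carries its VXLAN segment ID ("seg") and the policy
# scope ("scope") used by "show zoning-rule scope <value>" on the leaves.
resp = session.get(f"{APIC}/api/class/fvCtx.json", verify=False)
for obj in resp.json()["imdata"]:
    attrs = obj["fvCtx"]["attributes"]
    print(attrs["dn"], "seg:", attrs["seg"], "scope:", attrs["scope"])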

From a security policy perspective, traffic received on the BL node from the external network is associated with the Ext-EPG (based on matching the prefix configured for classification under the Ext-EPG) and assigned a corresponding class-ID (5493 in the specific example below). Internal endpoints that are part of the consumer VRF are instead classified with a "special" class-ID value 14, so that the rule installed in the hardware ensures that the inbound flow is forwarded into the fabric.

Leaf104 Site1

Leaf104-Site1# show zoning-rule scope 2293765

+---------+--------+--------+----------+---------+---------+---------+------+-----------------+----------------------+

| Rule ID |SrcEPG|DstEPG|FilterID| Dir|operSt| Scope | Name | Action | Priority |

+---------+--------+--------+----------+---------+---------+---------+------+-----------------+----------------------+

|4217 | 0 | 0 | implicit |uni-dir| enabled | 2293765 | |deny,log |any_any_any(21) |

|4181 | 0 | 0 |implarp |uni-dir| enabled | 2293765 | | permit |any_any_filter(17) |

|4233 | 0 | 15 | implicit |uni-dir| enabled | 2293765 | |deny,log |any_vrf_any_deny(22) |

|4153 | 5493 | 14 | implicit |uni-dir| enabled | 2293765 | |permit_override|src_dst_any(9) |

|4203 | 0 | 16390 | implicit |uni-dir| enabled | 2293765 | | permit |any_dest_any(16) |

|4242 | 29 | 5493 | default |uni-dir| enabled | 2293765 | | permit |src_dst_any(9) |

|4207 | 29 | 14 | implicit |uni-dir| enabled | 2293765 | |permit_override|src_dst_any(9) |

+---------+--------+--------+----------+---------+---------+---------+------+-----------------+----------------------+

Once the traffic gets to the compute leaf, the PBR policy kicks in and causes the redirection of traffic to the service node. This is highlighted in the line below (source class-ID 5493, destination class-ID 16391 that represents EPG1-S1). Notice also the presence of the redirection rule for the reverse traffic originated from EPG1-S1 and destined to the external network domain.

Leaf101 Site1

Leaf101-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+------------------------+

| Rule ID |SrcEPG|DstEPG|FilterID| Dir|operSt| Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+------------------------+

|4194 | 0 | 0 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_any_any(21) |

|4203 | 0 | 0 |implarp |uni-dir | enabled | 3112963 | | permit |any_any_filter(17) |

|4227 | 0 | 15 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_vrf_any_deny(22) |

|4180 | 5493 | 16391 | default |uni-dir-ignore | enabled | 3112963 | |redir(destgrp-5) |src_dst_any(9) |

|4235 |16391 | 5493 | default | bi-dir | enabled | 3112963 | |redir(destgrp-5) |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+------------------------+

It is worth noticing that the compute leaf can derive the right class-ID for traffic destined to the external destination 192.168.1.0/24 because this information is programmed on the compute leaf node as a result of setting the "Shared Security Import" flag associated with the subnet configured under the Ext-EPG. This information can be retrieved using the command below:

Leaf101 Site1

Leaf101-Site1# vsh -c 'show system internal policy-mgr prefix'

Requested prefix data

Vrf-Vni   VRF-Id  Table-Id  Table-State  VRF-Name        Addr             Class  Shared  Remote  Complete

=======   ======  ========  ===========  ==============  ===============  =====  ======  ======  ========

3112963   42      0x2a      Up           Tenant-1:VRF1   192.168.1.0/24   5493   True    True    False

Firewall Insertion for East-West Traffic Flows (Intra-VRF)

When the service node must be inserted for intra-VRF communication between two internal EPGs (also referred to as the "east-west" use case), a different mechanism than the one used for north-south (i.e. always applying the PBR policy on the compute leaf node) must be used to avoid the creation of asymmetric traffic across independent service nodes.

The current implementation leverages the fact that every contract relationship between EPGs always defines a "consumer" and a "provider" side. Starting from ACI release 4.0(1), the application of the PBR policy is hence always anchored on the compute leaf where the provider endpoint is connected, usually referred to as the "provider leaf".

Figure 143 shows the PBR policy in action when the communication is initiated from the consumer endpoint, which represents the most common scenario. The traffic is forwarded by Multi-Site to the provider leaf, where the PBR policy is applied, redirecting the traffic to the service node. Once the service node has applied the policy, the traffic is then delivered to the provider endpoint.

Figure 143. PBR for Communication between Consumer and Provider EPGs

As a result of the communication flow above, the specific consumer endpoint information is learned on the provider leaf node. This implies that when the provider endpoint replies, the PBR policy can be applied again on the provider leaf, allowing the traffic to be redirected to the same service node that handled the first leg of the communication (Figure 144). Once the service node has applied the policy, the traffic can then be forwarded across the ISN toward the consumer endpoint.

Figure 144. PBR for Communication between Provider and Consumer EPGs

The conclusion that can be drawn from the figures above is that for every east-west communication between endpoints connected to the fabrics, the traffic is always going to be redirected to the service node in the site where the provider endpoint is located.

It is critical to ensure that it is always possible to identify a consumer and a provider side in the zoning rules for each given contract relationship between EPGs. This means that the same EPG should never consume and provide the same contract, and the definition of different contracts may be needed depending on the specific deployment scenario.

Also, if two different contracts were applied between the same pair of EPGs (so as to be able to differentiate the provider and consumer EPG for each of them), it is critical to ensure that the zoning rules created by those two contracts do not have overlapping rules with the same contract and filter priorities. Defining zoning rules with the same priority that identify the same type of traffic could lead to non-deterministic forwarding behavior (creating asymmetric traffic through different firewalls).

As a typical example, it would not work to create two contracts that both use a "permit any" rule to redirect all the traffic. If one contract is "permit any" and the other contract is "permit ICMP only", the zoning rules created by the contract with "permit ICMP only" would have higher priority.

One last important consideration: we need to ensure that the PBR policy can be applied on the provider leaf even in the specific scenario where the communication is initiated by the provider endpoint. In such a scenario, the consumer endpoint information may not yet be available on the provider leaf (it is normally learned via data-plane communication, as explained above), so a different mechanism is required to derive the destination class-ID of the consumer EPG and apply the policy. In the current implementation, this is achieved with a "static" approach consisting in configuring a subnet prefix under the consumer EPG. This information is then propagated from NDO to the APIC domain of the provider site and allows APIC to install that prefix on the provider leaf node, with the associated class-ID identifying the consumer EPG.

It is hence critical to ensure that the prefix configured under the consumer EPG includes all the IP addresses of the endpoints that are part of that EPG. In a "network-centric" ACI deployment (where a single EPG is defined in a BD), this is easily achievable by associating to the EPG the same IP subnet configured for the BD. In "application-centric" use cases, where multiple EPGs may be defined under the same BD, it may become much more challenging to identify a prefix including only the endpoints connected to a specific EPG, and the only solution may be to use specific /32 prefixes for every endpoint connected to the EPG. This last approach does not represent a viable option in real-life deployments, so the use of a service graph with PBR for east-west communication is usually recommended for, and restricted to, "network-centric" configurations.

For what concerns the provisioning of the configuration required to achieve the behavior described above, the assumption is that the tenant EPGs/BDs are already deployed and endpoints are connected to them (EPGs can either be locally defined in each site or stretched across sites). Also, the logical firewall service nodes and PBR policies have been deployed in each site, as described in the "Service Graph with PBR with Multi-Site for the Insertion of a Single Service Node" section. Once those prerequisite steps are done, it is possible to follow almost identically the steps already described as part of the "Firewall Insertion for North-South Traffic Flows (Intra-VRF)" section:

Define a stretched service BD where the firewall nodes should be connected.

Create the service graph (also as a stretched object). Notice that the same service graph used for north-south communication could also be used for east-west traffic flows.

Create a contract (with scope VRF) and associate the service graph to it.

Specify a prefix under the consumer EPG allowing to match the IP addresses of all the consumer endpoints that are part of the EPG. The same flags used for the subnet under the BD should be configured for this prefix, with the addition of the "No Default SVI Gateway" flag.

Apply the contract between the consumer and provider EPGs.

Once the contract is applied, east-west communication is successfully established with proper redirection to the service node.
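For reference, the subnet configured under the consumer EPG on NDO is rendered on APIC roughly as the object sketched below (Python; the tenant, application profile, and EPG names, as well as the 10.10.2.254/24 gateway address, are hypothetical placeholders matching the 10.10.2.0/24 consumer prefix used in this example).

import json

# Hedged sketch: subnet under the consumer EPG mirroring the BD subnet flags,
# with the addition of the "No Default SVI Gateway" control.
consumer_epg_subnet = {
    "fvSubnet": {
        "attributes": {
            "ip": "10.10.2.254/24",        # consumer prefix gateway (placeholder)
            "scope": "public",             # add ",shared" for the inter-VRF use case
            "ctrl": "no-default-gateway",  # No Default SVI Gateway
        }
    }
}

# POST (after authenticating) to:
#   https://<apic>/api/mo/uni/tn-Tenant-1/ap-<app-profile>/epg-<consumer-EPG>.json
print(json.dumps(consumer_epg_subnet, indent=2))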

Looking at the endpoint table on the provider leaf, it is possible to verify how the consumer endpoint is indeed learned. The consumer endpoint is remotely located and hence reachable via a VXLAN tunnel (tunnel26) established between the provider leaf and the O-UTEP address of the spines in the remote site.

Leaf 101 Site1

Leaf101-Site1# show endpoint vrf Tenant-1:VRF1

Legend:

s - arp H - vtep V - vpc-attached p - peer-aged

R - peer-attached-rl B - bounce S - static M - span

D - bounce-to-proxy O - peer-attached a - local-aged m - svc-mgr

L - local E - shared-service

+-----------------------------------+---------------+-----------------+--------------+-------------+

VLAN/ Encap MAC Address MAC Info/ Interface

Domain VLAN IP Address IP Info

+-----------------------------------+---------------+-----------------+--------------+-------------+

Tenant-1:VRF1 10.10.2.2 tunnel26

60 vlan-819 0050.56b9.1bee LV po1

Tenant-1:VRF1 vlan-819 10.10.1.1 LV po1

As a result of the prefix configuration under the consumer EPG, the prefix is installed on the provider leaf with the associated class-ID (49163).

Leaf101 Site1

Leaf101-Site1# cat /mit/sys/ipv4/inst/dom-Tenant-1:VRF1/rt-\[10.10.2.0--24\]/summary

# IPv4 Static Route

prefix :10.10.2.0/24

childAction :

ctrl :pervasive

descr :

dn :sys/ipv4/inst/dom-Tenant-1:VRF1/rt-[10.10.2.0/24]

flushCount :0

lcOwn :local

modTs :2020-12-16T13:27:29.275+00:00

monPolDn :

name :

nameAlias :

pcTag :49163

pref :1

rn :rt-[10.10.2.0/24]

sharedConsCount :0

status :

tag :0

trackId :0

This ensures that the PBR policy can always be applied on the provider leaf, even in cases where the specific consumer endpoint information is not yet learned. In the output below, 16388 is the class-ID for the local provider EPG1-S1, so it is possible to see how redirection to the service node is applied for both directions of the traffic (consumer to provider and vice versa).

Leaf101 Site1

Leaf101-Site1# show zoning-rule scope 3112963

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

| Rule ID |SrcEPG|DstEPG|FilterID| Dir|operSt| Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

|4194 | 0 | 0 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_any_any(21) |

|4203 | 0 | 0 |implarp |uni-dir | enabled | 3112963 | | permit |any_any_filter(17) |

|4227 | 0 | 15 | implicit |uni-dir | enabled | 3112963 | |deny,log |any_vrf_any_deny(22) |

|4197 | 0 | 49153 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4138 | 0 | 16393 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4217 | 0 | 32771 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4222 | 0 | 49162 | implicit |uni-dir | enabled | 3112963 | | permit |any_dest_any(16) |

|4230 |49163 | 16388 | default | bi-dir | enabled | 3112963 | |redir(destgrp-6) |src_dst_any(9) |

|4170 |16388 | 49163 | default |uni-dir-ignore | enabled | 3112963 | |redir(destgrp-6) |src_dst_any(9) |

|4202 |16394 | 16388 | default |uni-dir | enabled | 3112963 | | permit |src_dst_any(9) |

|4174 |16394 | 49163 | default |uni-dir | enabled | 3112963 | | permit |src_dst_any(9) |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

Firewall Insertion for East-West Traffic Flows (Inter-VRFs)

The service node integration for east-west communication between EPGs that are part of different VRFs works essentially like the intra-VRF scenario just discussed. The PBR policy is always applied on the provider leaf node, and the only specific considerations for this scenario in terms of provisioning are detailed below.

Enable the "Shared between VRFs" flag for both the consumer and provider BDs.

Ensure that the same flag is also configured for the prefix configured under the consumer EPG (Nexus Dashboard Orchestrator would prevent deploying the configuration if that were not the case).

To leak the BD subnet from the provider to the consumer VRF, the subnet prefix associated with the provider BD must also be configured under the provider EPG.

Note: The same considerations around "network-centric" and "application-centric" deployments apply also when configuring the prefix under the provider EPG to trigger the route-leaking functionality.

The scope of the contract with the associated service graph should be changed to "Tenant" (if the VRFs are deployed in the same tenant) or to "Global" (if the VRFs are deployed in different tenants).

For inter-tenant deployments, the service BD, the service graph, and the contract should all be deployed as part of the provider tenant.

As in the intra-VRF case, also in the inter-VRF east-west scenario the application of the PBR policy is possible thanks to the configuration of the prefix under the consumer EPG, which triggers the provisioning of that prefix, and of the associated class-ID, on the provider leaf node:

Leaf101 Site1

Leaf101-Site1# cat /mit/sys/ipv4/inst/dom-Tenant-1:VRF-Shared/rt-\[10.10.2.0--24\]/summary

# IPv4 Static Route

prefix :10.10.2.0/24

childAction :

ctrl :pervasive

descr :

dn :sys/ipv4/inst/dom-Tenant-1:VRF-Shared/rt-[10.10.2.0/24]

flushCount :1

lcOwn :local

modTs :2020-12-16T14:30:51.006+00:00

monPolDn :

name :

nameAlias :

pcTag :10936

pref :1

rn :rt-[10.10.2.0/24]

sharedConsCount :0

status :

tag :4294967292

trackId :0

It is worth noticing how the consumer prefix is now assigned a class-ID value (10936) taken from the global range that is unique across all the VRFs. The same applies to the class-ID for the provider EPG, which now gets the value of 32, as shown in the output below pointing out the rules used to redirect the traffic flows to the service node.

Leaf101 Site1

Leaf101-Site1# show zoning-rule scope 2293765

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

| Rule ID |SrcEPG|DstEPG|FilterID| Dir|operSt| Scope | Name | Action | Priority |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

|4230 | 0 | 0 | implicit |uni-dir | enabled | 2293765 | |deny,log |any_any_any(21) |

|4200 | 0 | 0 |implarp |uni-dir | enabled | 2293765 | | permit |any_any_filter(17) |

|4234 | 0 | 15 | implicit |uni-dir | enabled | 2293765 | |deny,log |any_vrf_any_deny(22) |

|4222 |10936 | 32 | default | bi-dir | enabled | 2293765 | |redir(destgrp-6) |src_dst_any(9) |

|4191 | 32 | 10936 | default |uni-dir-ignore | enabled | 2293765 | |redir(destgrp-6) |src_dst_any(9)|

|4236 | 0 | 49154 | implicit |uni-dir | enabled | 2293765 | | permit |any_dest_any(16) |

+---------+--------+--------+----------+----------------+---------+---------+------+------------------+----------------------+

Service Graph with PBR with Multi-Site for the Insertion of Two (or more) Service Nodes

The use of a service graph with PBR also allows chaining together two (or more) service node functions, so that communication between endpoints that are part of two EPGs is allowed only after the traffic has gone through the operation performed by each service node. This can apply to north-south and east-west traffic flows, as highlighted in Figure 145.

Figure 145. Two Nodes Service Graph with PBR

The provisioning of a multi-node service graph with PBR is similar to what was previously discussed for the single node use case. The first step consists in defining the multiple service node logical functions that should be offered by each fabric that is part of the Multi-Site domain. Figure 146 shows the creation of two logical L4/L7 devices performed at the APIC level. Each logical device will then be implemented with one, two, or more concrete service nodes, depending on the specific deployment/redundancy model of choice, as shown in previous Figure 128.

Figure 146. Definition of Two Logical Firewall Nodes on the APIC of Site1

The second provisioning step performed on APIC consists in defining the PBR policies allowing to redirect the traffic through the service nodes. Since we have defined two service nodes, it is hence required to define two separate PBR policies as well. As shown in Figure 147, each policy redirects traffic to a specific MAC/IP pair, identifying each specific service node function.

Figure 147. PBR Policies for a Two Service Nodes Redirection

Note: A similar configuration must be performed for all the fabrics part of the Multi-Site domain.

At this point, it is possible to provide the specific configuration allowing to stitch the two service nodes in the middle of communications between EPGs for both north-south and east-west traffic flows.

As for the single service node use case, the PBR policy for north-south communication must always be applied on the compute leaf nodes. This is always the case for intra-VRF use cases as long as the VRF remains configured with the default ingress policy enforcement direction; in the inter-VRF scenario, it is instead mandatory to ensure that the Ext-EPG is always configured as the provider of the contract that has the associated service graph.

Two Service Nodes Insertion for North-South Traffic Flows

Figure 148 highlights how the PBR redirection for north-south flows always ensures that the service nodes that are utilized are the ones located in the same site as the internal endpoint.

Figure 148. PBR Redirection for North-South Traffic Flows

The provisioning steps to be performed on NDO to integrate the firewalls for north-south traffic flows (intra-VRF) are described below.

Configure the subnets of the consumer and provider BDs to ensure they can be advertised out of the L3Out in each site. This requires configuring the BD subnets as "Advertised Externally" and mapping the BDs to the specific L3Outs where the prefix should be advertised, as described in the previous "Connectivity to the External Layer 3 Domain" section.

Configure the External EPG to properly classify incoming traffic. Assuming a stretched Ext-EPG is deployed, it is common to specify a “catch-all” 0.0.0.0/0 prefix with the associated “External Subnets for External EPGs” flag set.

Define the "service BD" used to connect the firewall nodes deployed in each fabric. This BD must be provisioned from the Nexus Dashboard Orchestrator in a template associated with all the sites. The Service-BD is provisioned identically to what is shown in Figure 136 for the single service node insertion use case.

Create the service graph on the Orchestrator for the insertion of the two service nodes: this should also be done in the template associated with all the sites that are part of the Multi-Site domain (i.e. the service graph is provisioned as a 'stretched' object). As shown in Figure 149, the configuration for the service graph is provisioned in two parts: first, at the global template level, to specify which service nodes should be inserted (two firewalls in this specific example); second, at the site level, to map the specific logical firewall devices that have been defined on APIC and are now exposed to Nexus Dashboard Orchestrator (see previous Figure 146).

Figure 149. Definition of the Two-Nodes Service Graph on NDO

Define a contract and associate the service graph with it. The contract is usually defined in a template associated with all the sites; in the example in Figure 150, a "Permit-All" filter is associated with the contract to ensure that all traffic is redirected to the firewall. It is possible to change this behavior and make the filter more specific if the goal is instead to redirect to the firewall only specific traffic flows.

As shown in the following two figures, once the service graph is associated with the contract, it is then required to perform a two-step configuration. At the global template level, we need to specify the BD where the firewall logical nodes are connected (Figure 150). In our specific example, the firewalls are connected in one-arm mode, hence it is possible to specify the same "Service-BD" for both the consumer and provider firewall connectors (interfaces). Notice also that the "Service-BD" must be associated with the connectors at the global template level, which is the main reason why that BD must be provisioned as a stretched object available in all the sites.

At the site level, it is instead required to associate the PBR policy to each service node (Figure 151). The redirection policy is associated with each interface of the service node (consumer and provider connectors). In our specific example, where the service nodes are connected in one-arm mode, the same PBR policy is applied to each connector, but that would not be the case, for example, when the firewall is connected in two-arm mode. Also, in specific service-graph deployments, it may be necessary to apply the PBR policy only to one interface (i.e. for one specific direction of traffic).

Figure 150. Definition of the Contract with Associated Service Graph (Global Template Level)

Figure 151. Association of the PBR Policy to Each Service Node Interface (Site Local Level)

The last provisioning step consists in applying the previously defined contract between the internal EPGs and the external EPG. As previously discussed in the “Connectivity to the External Layer 3 Domain” section, the definition of a stretched external EPG is recommended for the L3Outs deployed across sites that provide access to the same set of external resources, as it simplifies the application of the security policy.

Figure 152. Applying the Contract to Consumer and Provider EPGs

In the intra-VRF scenario discussed in this section, it does not matter which side is the provider or the consumer: the PBR policy is always applied on the compute leaf node anyway.

Note: As of NDO release 3.5(1), vzAny cannot be used in conjunction with a contract that has an associated service graph. The only option to apply a PBR policy between two EPGs (internal and/or external) hence consists in creating a specific contract, as in the example above.

Once the provisioning steps described above are completed, a separate service graph is deployed in each APIC domain and north-south traffic flows start getting redirected through the firewall nodes. Please refer to the "Firewall Insertion for North-South Traffic Flows (Intra-VRF)" section for more information on how to verify the correct behavior of the redirection.

Very similar provisioning steps are needed for the inter-VRF (and/or inter-tenant) use case. You can find more information on how to deploy this scenario in the previous "Firewall Insertion for North-South Traffic Flows (Inter-VRFs)" section.

Two Service Nodes Insertion for East-West Traffic Flows

The same two-node service graph provisioned for the north-south use case can also be reused for the east-west scenario shown in Figure 153 and Figure 154.

Figure 153. 2-Node PBR for Communication between Consumer and Provider EPGs

Figure 154. 2-Node PBR for Communication between Provider and Consumer EPGs

The same considerations discussed in the "Firewall Insertion for East-West Traffic Flows (Intra-VRF)" and "Firewall Insertion for East-West Traffic Flows (Inter-VRFs)" sections continue to apply also to the two-node scenario. This means that the only additional configuration step required to "anchor" the application of the PBR policy on the provider leaf node consists in configuring the prefix under the consumer EPG.

Integrating ACI Multi-Pod and ACI Multi-Site

In many real-life deployment scenarios, customers have the need to integrate the ACI Multi-Pod and Multi-Site architecture to be able to tackle the specific requirements that can be satisfied by deploying tightly coupled ACI DCs (Multi-Pod) with loosely coupled ACI DCs (Multi-Site).

Figure 155 below shows an example of a topology where an ACI Multi-Pod fabric and a single Pod ACI fabric are deployed as part of the same Multi-Site domain.

Figure 155. Integration between ACI Multi-Pod and ACI Multi-Site

More details about the specific deployment considerations to integrate those architectures can be found in the ACI Multi-Site white paper. The next two sections highlight the required configuration steps to deploy the architecture shown above, taking into consideration two specific use cases:

Adding a Multi-Pod fabric and single Pod Fabric to the same Multi-Site domain.

Converting a single Pod fabric (already part of a Multi-Site domain) to a Multi-Pod fabric.

Adding a Multi-Pod fabric and single Pod Fabric to the same Multi-Site domain

This use case is typical of a scenario where a Multi-Pod fabric has already been deployed to bundle together, as part of the same "logical DC", different ACI islands (representing rooms, halls, buildings or even specific DC locations) and it is then required to add it to the same Multi-Site domain together with a separate single Pod fabric (representing, for example, a disaster recovery site).

Figure 156. Adding a Multi-Pod Fabric and Single Pod Fabric to the Same Multi-Site Domain

The initial assumptions are the following:

The Multi-Pod fabric is already up and running, so that the spine nodes in different Pods are peering EVPN across an IPN infrastructure interconnecting the Pods (i.e., the required L3Out in the “infra” tenant has already been created, either manually or by leveraging the APIC Multi-Pod wizard).

Note: For more details on how to bring up a Multi-Pod fabric, please refer to the configuration document below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739714.html

The NDO 3.5(1) service has been enabled on the Nexus Dashboard compute cluster.

Figure 157. NDO Service Enabled on Nexus Dashboard

Note: The use of a vND standalone node shown above is only supported for lab or Proof of Concept (PoC) activities and not for real-life production deployments.

The ACI Multi-Pod fabric has been onboarded to the Nexus Dashboard platform.

Figure 158. Site1 (ACI Multi-Pod Fabric) Onboarded on Nexus Dashboard

The first step to add the ACI Multi-Pod fabric to the Multi-Site domain consists in setting the state of the fabric as “Managed” on the Nexus Dashboard Orchestrator UI and assigning it a unique Site ID.

Figure 159. Set the Fabric State to "Managed" and Assign It a Unique Site ID

At this point, it is possible to access the "Configure Infra" section to start the required provisioning to add the fabric to the Multi-Site domain (as shown in the initial "Nexus Dashboard Orchestrator Sites Infra Configuration" section). You will notice several differences compared to the scenario of adding a new single Pod fabric, due to the fact that an L3Out part of the Infra Tenant already exists on APIC (as it was created during the provisioning of the Multi-Pod fabric) and that same L3Out must be used also for Multi-Site. Hence, NDO takes ownership of the Infra L3Out and automatically imports several configuration parameters from APIC, leaving to the user only the responsibility of configuring the remaining items:

Site-level configuration: as shown in the figure below, the BGP- and OSPF-related fields are automatically provisioned based on the information retrieved from the Infra L3Out on APIC. The only configuration required at the site level consists in enabling the "ACI Multi-Site" knob and specifying the "Overlay Multicast TEP" address used to receive L2 BUM and L3 multicast traffic. As mentioned at the beginning of this paper, for the O-MTEP you should provision an IP address that is routable across the ISN infrastructure connecting the ACI fabrics.

Figure 160. Fabric Level Configuration

Pod Level Configuration: the only parameter that needs to be provisioned for each Pod is the Overlay Unicast TEP address, used to send and receive intersite VXLAN traffic for unicast Layer 2 and Layer 3 communication.

Figure 161. Pod Level Configuration

As described by the information shown when hovering with the mouse over the “i” icon, the provisioned O-UTEP address must be different from the Data Plane TEP address that was configured on APIC during the ACI Multi-Pod fabric configuration (172.16.1.1). As it is the case for the O-MTEP address, also the O-UTEP address must be routable across the ISN to ensure that the VXLAN data plane communication between sites can be successfully established.

Note: In case Remote-Leaf nodes were added to the Multi-Pod fabric, a separate Anycast TEP address would have also been assigned to the spines of each Pod for establishing VXLAN communication between each Pod and the RL nodes: the O-UTEP used in each Pod for Multi-Site must also be different from the Anycast TEP address used for RL deployment.

As shown in Figure 160 above, External TEP Pools already defined on APIC are also automatically inherited by NDO (192.168.111.0/24 in this specific example). As described in the "Deploying Intersite L3Out" section, the use of External TEP Pools is required to enable intersite L3Out communication.

Spine Level Configuration: for each spine, the interfaces part of the infra L3Out (and used for Multi-Pod) are automatically retrieved from APIC and shown (interfaces 1/63 and 1/64 in the example below). The only required configuration is enabling BGP on the subset of spines that need to function as BGP Speakers (i.e., creating BGP EVPN adjacencies with the BGP speakers in remote sites) and specify the BGP-EVPN Router-ID representing the IP address of the loopback interfaces used to establish those remote adjacencies. As shown below, it is possible in this case to re-use the same address that was already assigned to the spine for establishing the EVPN adjacencies between Pods required by Multi-Pod.

Figure 162. Spine Level Configuration

It is recommended to deploy a pair of BGP speakers per fabric to provide redundant EVPN adjacencies with the remote sites. If the fabric is Multi-Pod, one spine in two separate Pods should be provisioned as Speaker (Pod1-Spine 1 and Pod2-Spine1 in the specific example above). The spines that are not Speakers become Forwarders by default and establish only EVPN peerings with the local Speakers. For more information on the role of BGP speakers and forwarders and how control and data planes work when integrating Multi-Pod and Multi-Site, please refer to the ACI Multi-Site paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html#IntegrationofCiscoACIMultiPodandMultiSite

Once the Multi-Pod fabric is successfully added to the Multi-Site domain, it is then possible to add the single Pod fabric that in our example represents the DR site. How to achieve this task has already been covered as part of the “Adding the ACI Fabrics to the Multi-Site Domain” section.

Converting a single Pod fabric (already part of a Multi-Site domain) to a Multi-Pod fabric

This second scenario is simpler, since the starting point is the one where two single-Pod fabrics are already added as part of the Multi-Site domain. The goal is then to expand one of the two fabrics by adding a second Pod.

Figure 163. Converting a Single Pod Fabric (Already Part of a Multi-Site Domain) to a Multi-Pod Fabric

Figure 164 below highlights the two single-Pod fabrics initially part of the Multi-Site domain. As noticed, both spines in each Pod are deployed as BGP speakers (“BGP peering on”) for the sake of fabric-level resiliency.

Figure 164. Two Single-Pod Fabrics Part of the Multi-Site Domain

The first step consists in running the ACI Multi-Pod wizard on APIC for Site1 to add a second Pod and build a Multi-Pod fabric. Detailed information on how to build a Multi-Pod fabric can be found in the paper below:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739714.html

Important Note

Please be aware of a specific software defect impacting this specific use case (CSCvu76783). The issue happens only when running the Multi-Pod wizard on APIC to add a second Pod to a fabric that is already part of a Multi-Site domain. Given the presence of the infra L3Out created for Multi-Site, the Multi-Pod wizard skips completing some specific settings for the nodes in Pod-1. When running pre-5.2(1) ACI code, a possible workaround consists in manually setting the parameters below once the Multi-Pod wizard is completed:

1. Under tn-infra > Networking > L3Outs > "intersite" > Logical Node Profiles > "Profile" > Configured Nodes, each spine "Node" is missing two items: first, the "Use Router ID as Loopback Address" checkbox is unchecked when it should be checked; second, the "External Remote Peering" checkbox is unchecked when it should be checked.

2. Under tn-infra > Policies > Fabric External Connection Policy > "Policy", two settings are also missing: first, "Enable Pod Peering Profile" is unchecked when it should be checked; second, the Fabric Ext Routing Profile is missing the network for Pod-1.

Once the second Pod has been successfully added to the Multi-Pod fabric, it is possible to trigger an infra rediscovery on NDO to ensure the Pod can be added to Site1 and displayed on the UI.

Figure 165. Refreshing the Infra View to Display the Newly Added Pod

At this point it is possible to complete the configuration of Pod2, by provisioning the required parameters both at the Pod level and at the Spine level.

Pod Level Configuration: The only parameter that needs to be provisioned for each Pod is the Overlay Unicast TEP address (172.16.200.100 in this example). The External TEP Pool is automatically inherited from APIC, as it was configured as part of the Multi-Pod wizard workflow.

Figure 166. Pod Level Configuration for the Newly Added Pod

Spine Level Configuration: The configuration for the interfaces defined in the infra L3Out is automatically inherited from APIC, as the same interfaces must also be used to send and receive Multi-Site traffic. The only required configuration in the new Pod is defining one of the two spines as BGP Speaker (turning on the “BGP peering” knob), so that it can establish BGP EVPN adjacencies with the spines in the remote Site2.

Figure 167.

Spine Level Configuration for the Newly Added Pod

Since two spines configured as BGP Speakers are sufficient for each fabric, after enabling BGP on the Pod2 spine the recommendation is to disable it on Pod1-Spine 2, turning that spine into a simple Forwarder.

Once the “Deploy” button is clicked, the infra configuration is provisioned and the Multi-Pod fabric is successfully added to the Multi-Site domain.
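
As a final check after the deployment, the state of the MP-BGP EVPN sessions established by the spines can be verified either on the spine CLI (for example with "show bgp l2vpn evpn summary vrf overlay-1") or, as in the minimal sketch below, by reading the operational BGP peer objects from APIC. The script assumes a hypothetical APIC address and credentials and relies on the bgpPeerEntry class as recalled from the ACI object model (to be confirmed against your release); note that the output includes every overlay-1 BGP session, the intra-fabric MP-BGP ones as well as the EVPN adjacencies toward the control-plane TEPs of the remote site.

import re
import requests

# Hypothetical APIC address and credentials -- replace with your own.
APIC = "https://apic-site1.example.com"
LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False  # lab convenience only
session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN).raise_for_status()

# bgpPeerEntry is the operational object describing each BGP session on a node.
resp = session.get(f"{APIC}/api/class/bgpPeerEntry.json")
resp.raise_for_status()

for obj in resp.json()["imdata"]:
    attrs = obj["bgpPeerEntry"]["attributes"]
    # Keep only the sessions running in the infra VRF (overlay-1).
    if "dom-overlay-1" not in attrs["dn"]:
        continue
    node = re.search(r"node-(\d+)", attrs["dn"]).group(1)
    print(f"node-{node}  peer {attrs['addr']:<18} state {attrs['operSt']}")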

FAQs

How many APICs are recommended by Cisco for a Cisco ACI fabric?

A general recommendation in determining APIC cluster sizes is to deploy three APICs in fabrics scaling up to 80 leaf switches. If recoverability is a concern, a standby APIC can be added to the deployment.

What is the recommended deployment model for ACI Multi-Site?

Note: Cisco ACI Multi-Pod remains the recommended architectural approach for the deployment of active/standby service-node pairs across data centers and active/active clustered service nodes with the same virtual IP and virtual MAC addresses across data centers.

What is Cisco ACI Multi-Site Orchestrator?

Cisco ACI Multi-Site Orchestrator (MSO)

Cisco Multi-Site Orchestrator is part of the Cisco ACI Anywhere vision: it allows defining a single security and connectivity policy and provides a single pane of glass to manage multiple on-premises and cloud environments.

What is Cisco Nexus Dashboard Orchestrator?

Cisco Nexus Dashboard Orchestrator (NDO) provides consistent network and policy orchestration, scalability, and disaster recovery across multiple data centers through a single pane of glass while allowing the data center to go wherever the data is.

What is the minimum number of APICs?

Three is the minimum number of controllers supported for the APIC cluster for high-availability reasons: every piece of data in the object model is replicated across the controllers in the cluster, because the cluster acts as a distributed storage and data-processing system.

What is the minimum number of APICs that Cisco recommends deploying in a production cluster?

The Cisco Application Policy Infrastructure Controller (APIC) appliance is deployed in a cluster. A minimum of three controllers are configured in a cluster to provide control of the Cisco ACI fabric.

What is the difference between Multi-Site and Multi-Pod in ACI?

One key difference is how Multi-Site handles multi-destination traffic: while Multi-Pod utilizes multicast in the Inter-Pod Network, Multi-Site uses head-end replication, decreasing the requirements on the upstream Inter-Site Network (ISN).

Which two protocols are used for fabric discovery in ACI?

The Cisco ACI fabric uses LLDP- and DHCP-based fabric discovery to automatically discover the fabric switch nodes, assign the infrastructure VXLAN tunnel endpoint (VTEP) addresses, and install the firmware on the switches.

What is ACI in Cisco Nexus?

Cisco Application Centric Infrastructure (ACI) Architecture

The APIC creates the policies that define the data center's network infrastructure, while the Nexus 9000 switches run the ACI Fabric OS to communicate with the APIC and build the infrastructure based on those policies.

On what does the Cisco ACI fabric object-oriented operating system run?

The ACI fabric object-oriented operating system (OS) runs on each Cisco Nexus 9000 Series node. It enables programming of objects for each configurable element of the system. All the switch nodes contain a complete copy of the concrete model.

What is ACI fabric in networking?

Cisco Application Centric Infrastructure (ACI) allows applications to define the network infrastructure. It is one of the most important aspects of Software-Defined Networking (SDN). The ACI architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.

What is the Nexus Dashboard in ACI?

Cisco Nexus Dashboard provides a unified and seamless experience for common data center services across your networks, whether they run Cisco Application Centric Infrastructure (Cisco ACI®) or Cisco NX-OS (through the Cisco Nexus Dashboard Fabric Controller service and/or in standalone mode).

What is the Nexus Dashboard Fabric Controller?

Cisco Nexus Dashboard Fabric Controller (NDFC) is the network management platform for all NX-OS-enabled deployments. It spans new fabric architectures, storage network deployments, and IP Fabric for Media.

What is the benefit of Nexus Dashboard?

One platform makes it easy
  • Simplest way to manage Cisco Nexus networks. Gain visibility that frees you to do what you need, where and when you need to do it.
  • Configure with ease. ...
  • Minimize congestion, risk, and downtime. ...
  • Greater visibility and sustainability.

How do the Cisco APICs initially connect to and configure the fabric in-band?

Navigate to the APIC web GUI path Fabric > Access Policies > Pools > VLAN.
  1. Name - The name of the VLAN Pool. ...
  2. Name - The name of the Attachable Access Entity Profile. ...
  3. Name - The name of the Leaf Profile. ...
  4. Interface Selector Profiles - Choose the Attached Entity Profile created in Step 1.5.

What is fabric in Cisco ACI?

An ACI fabric forms a Clos-based spine-and-leaf topology and is usually depicted using two rows of switches. Depending on the oversubscription and overall network throughput requirements, the number of spines and leaf switches will be different in each ACI fabric.

What are the three core components of the Cisco ACI architecture?

Cisco ACI Architecture

This architecture decouples the network control plane from the data forwarding plane, allowing for greater flexibility, scalability, and manageability. ACI consists of three key components: the Application Policy Infrastructure Controller (APIC), the leaf switches, and the spine switches.
