Archives for category: Nexus

ACI can divide the fabric into multiple tenants, or multiple VRFs within a tenant. If communication is required between tenants or between VRFs, one common approach is to route traffic via an external device (e.g. a firewall or router). However, ACI is also able to provide inter-tenant or inter-VRF connectivity directly, without traffic ever needing to leave the fabric. For inter-VRF or inter-tenant connectivity to happen, two fundamental requirements must be satisfied (see the sketch below):

  1. Routes must be leaked between the two VRFs or tenants that need to communicate.
  2. Security rules must be in place to allow communication between the EPGs in question (as is always the case with ACI).
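
Both requirements map onto a couple of attributes in the ACI object model. The following is a rough sketch only: the tenant, VRF, BD and contract names are invented, and a working configuration also needs contract subjects/filters plus EPGs providing and consuming the contract:

    <fvTenant name="TenantA">
      <fvCtx name="VRF-A"/>
      <fvBD name="BD-A">
        <fvRsCtx tnFvCtxName="VRF-A"/>
        <!-- Requirement 1: "shared" scope leaks this subnet into the consuming VRF/tenant -->
        <fvSubnet ip="10.1.1.1/24" scope="public,shared"/>
      </fvBD>
      <!-- Requirement 2: a globally scoped contract carries the security rules and
           can be consumed by EPGs in other tenants -->
      <vzBrCP name="inter-tenant-services" scope="global"/>
    </fvTenant>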

It’s been a while since I posted – I’ve been spending a lot of time getting up to speed on the new Nexus 9000 switches and ACI. The full ACI fabric release is a few months away, but one of the interesting programmability aspects of the Nexus 9000 (running in “Standalone”, or NX-OS mode) is the NX-API. So what is it exactly?
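
In essence, NX-API wraps the familiar CLI in an HTTP/HTTPS transport: you POST show or configuration commands to the switch and receive structured JSON or XML back, rather than screen-scraping text. As a minimal sketch, enabling it on a standalone Nexus 9000 looks something like this (the HTTP port is illustrative):

    feature nxapi
    nxapi http port 80
    nxapi sandbox     ! optional built-in web UI for building and testing requests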

One of the interesting design considerations when deploying FabricPath is where to put your layer 3 gateways (i.e. your SVIs). Some people opt for the spine layer (especially in smaller networks), some may choose to deploy a dedicated pair of leaf switches acting as gateways for all VLANs, or distribute the gateway function across multiple leaves. Whatever choice is made, there are a couple of challenges that some have come across.

As you may know, vPC+ modifies the behaviour of HSRP to allow either vPC+ peer to route traffic locally. This is certainly useful functionality in a FabricPath environment as it allows dual active gateways. However, what if you want your default gateways at the Spine layer (where there are no directly connected STP devices, and therefore no real need to run vPC+)? What you end up with in that case is vPC+ running on your Spine switches purely to gain access to dual active HSRP forwarding, but with no actual vPC ports on the switches. This works fine – but most people would prefer not to have vPC+ running on their Spine switches if they can avoid it.
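
Anycast HSRP, introduced in NX-OS 6.2(2) and pictured below, removes that compromise: HSRP groups are bundled under a shared anycast switch-id, so two or more spines can actively forward for the same gateway with no vPC+ configuration at all. A minimal sketch, assuming 6.2(2)+ syntax with illustrative IDs and addressing:

    feature hsrp

    hsrp anycast 1 ipv4
      switch-id 1100        ! shared anycast switch-id advertised into FabricPath
      vlan 100
      no shutdown

    interface Vlan100
      ip address 10.100.0.2/24
      hsrp 100
        ip 10.100.0.1       ! same virtual IP on every participating spine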

[Figure: Anycast HSRP]

Virtual Port Channel (vPC) is a technology that has been around for a few years on the Nexus range of platforms. With the introduction of FabricPath, an enhanced version of vPC, known as vPC+, was released. At first glance, the two technologies look very similar; however, there are a couple of differences between them which allow vPC+ to operate in a FabricPath environment. So for those of us deploying FabricPath, why can’t we just use regular vPC?
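
As a taster, the configuration delta is tiny even though the behavioural difference is significant. A hedged sketch with illustrative values: vPC+ adds an emulated switch-id to the vPC domain, and the peer-link becomes a FabricPath core port rather than a classic Ethernet trunk:

    vpc domain 20
      fabricpath switch-id 2000     ! absent in regular vPC; this emulated
                                    ! switch-id is what makes the domain vPC+

    interface port-channel10
      switchport mode fabricpath    ! regular vPC would use a CE trunk here
      vpc peer-link

Frames from dual-homed hosts are sourced from that emulated switch-id, giving the rest of the FabricPath domain a single, consistent source towards which it can load-balance.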

In a FabricPath deployment, it is important to have all FabricPath VLANs configured on every switch participating in the FP domain. Why is this? The answer lies in the way multi-destination trees are built.

A multi-destination tree is used to forward broadcast, unknown unicast and multicast traffic through the FabricPath network:

[Figure: FabricPath multi-destination tree]
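
The short version: multi-destination trees are built per topology, not per VLAN, so a switch missing a VLAN can still sit in the middle of a tree, where it will drop flooded traffic for that VLAN and black-hole any switches behind it. Hence the requirement above; declaring a VLAN for FabricPath is per-switch configuration, as in this minimal sketch (VLAN range illustrative):

    vlan 10-20
      mode fabricpath     ! must be present on every switch in the FP domain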
