Archives for posts with tag: Nexus

Welcome to the second part of my blog series on ACI. Before we start to delve into policies, contracts, filters and all the other goodness to be found in ACI, we need to actually provision a fabric and bring it online. As it turns out, this is a really easy process and one which should take only a short amount of time. Obviously, before we begin we need to make sure everything is cabled correctly (leaf nodes to spine nodes, APIC controllers to leaf nodes, APIC out-of-band connectivity, etc.).

It’s been a while since I posted – I’ve been spending a lot of time getting up to speed on the new Nexus 9000 switches and ACI. The full ACI fabric release is a few months away, but one of the interesting programmability aspects of the Nexus 9000 (running in “Standalone”, or NX-OS mode) is the NX-API. So what is it exactly?
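
In short, NX-API lets you send CLI commands to the switch over HTTP or HTTPS, wrapped in JSON, JSON-RPC or XML, and get structured data back instead of screen-scraping show output. As a minimal sketch (assuming a Nexus 9000 running in standalone NX-OS mode; the management address is just a placeholder), enabling it is a single command:

feature nxapi
! The switch now accepts CLI commands POSTed to http(s)://<mgmt-ip>/ins
! and, depending on the release, serves a browser-based sandbox for building requests.

From there, any HTTP client (curl, Postman, a Python script) can POST a show command and receive the structured reply.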

One of the interesting design considerations when deploying FabricPath is where to put your layer 3 gateways (i.e. your SVIs). Some people opt for the spine layer (especially in smaller networks), while others choose to deploy a dedicated pair of leaf switches acting as gateways for all VLANs, or to distribute the gateway function across multiple leaves. Whichever choice is made, there are a couple of challenges that people have come across.

As you may know, vPC+ modifies the behaviour of HSRP to allow either vPC+ peer to route traffic locally. This is certainly useful functionality in a FabricPath environment as it allows dual-active gateways. However, what if you want your default gateways at the spine layer (where there are no directly connected STP devices, and therefore no real need to run vPC+)? What you end up with in this case is vPC+ running on your spine switches purely to gain access to the dual-active HSRP forwarding, but with no actual vPC ports on the switches. This works fine, but most people would prefer not to run vPC+ on their spine switches if they can avoid it.
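
To make that concrete, here is a minimal sketch of what such a spine ends up carrying (switch-IDs, VLANs and addressing are purely illustrative): a vPC+ domain defined only so that HSRP forwards on both peers, with the peer-link built on FabricPath core ports and no vPC member ports anywhere on the box.

feature-set fabricpath
feature vpc
feature hsrp
feature interface-vlan

vpc domain 10
  fabricpath switch-id 1000                  ! emulated switch-id shared by both peers
  peer-keepalive destination 192.168.0.2 source 192.168.0.1

interface port-channel1
  switchport mode fabricpath                 ! the vPC+ peer-link rides over FabricPath core ports
  vpc peer-link

interface Vlan100
  no shutdown
  ip address 10.1.100.2/24
  hsrp 100
    ip 10.1.100.1                            ! forwarded actively by both peers
! Note that no interface here carries a "vpc <number>" statement; the domain exists purely for the gateway behaviour.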

[Figure: Anycast HSRP]

The requirement for layer 2 interconnect between data centre sites is very common these days. The pros and cons of doing L2 DCI have been discussed many times in other blogs and forums, so I won’t revisit them here; however, there are a number of technology options for achieving it, including EoMPLS, VPLS, back-to-back vPC and OTV. All of these technologies have their advantages and disadvantages, so the decision often comes down to factors such as scalability, skillset and platform choice.

Now that FabricPath is becoming more widely deployed, it is also starting to be considered by some as a potential L2 DCI technology. In theory this looks like a good bet: easy configuration, no Spanning Tree extended between sites; it should be a no-brainer, right? Of course, things are never that simple, so let’s look at some of the things you need to consider when evaluating FabricPath as a DCI solution.

The lack of support for running layer 3 routing protocols over vPC on the Nexus 7000 is well documented. Less well known, however, is that the Nexus 5500 platform operates in a slightly different way, one which does actually allow layer 3 routing over vPC for unicast traffic. Some recent testing, and subsequent discussions with one of my colleagues on the topic, reminded me that there is still (somewhat understandably) a degree of confusion around this.

Let’s start with a reminder of what doesn’t work on the Nexus 7000:

[Figure: Layer 3 over vPC on the Nexus 5500 vs the Nexus 7000]
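
To put some configuration around that picture, the problematic setup looks roughly like this on each Nexus 7000 vPC peer (a hedged sketch with hypothetical VLANs and addressing): a router or firewall attached via a vPC, forming an OSPF adjacency with the SVIs on both peers across that vPC.

feature ospf
feature interface-vlan

router ospf 1

interface port-channel10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10
  vpc 10                           ! the attached router connects over this vPC

interface Vlan10
  no shutdown
  ip address 10.0.10.2/24          ! .3 on the other peer, .1 on the attached router
  ip router ospf 1 area 0.0.0.0    ! adjacency formed across the vPC VLAN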


Virtual Port Channel (vPC) is a technology that has been around for a few years on the Nexus range of platforms. With the introduction of FabricPath, an enhanced version of vPC known as vPC+ was released. At first glance the two technologies look very similar; however, there are a couple of differences between them which allow vPC+ to operate in a FabricPath environment. So for those of us deploying FabricPath, why can’t we just use regular vPC?
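
Configuration-wise the two look almost identical, which is part of the confusion. In a hedged sketch (domain ID and switch-id are illustrative), the vPC+-specific pieces boil down to an emulated switch-id under the vPC domain and a peer-link that runs as a FabricPath core port:

vpc domain 20
  fabricpath switch-id 2000        ! the emulated switch-id; regular vPC has no equivalent
  peer-keepalive destination 192.168.0.2

interface port-channel1
  switchport mode fabricpath       ! in regular vPC this would be a normal trunk
  vpc peer-link

Everything downstream (member port-channels with a vpc number, orphan ports and so on) is configured the same way in both.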

In a FabricPath deployment, it is important to have all FabricPath VLANs configured on every switch participating in the FP domain. Why is this? The answer lies in the way multi-destination trees are built.

A multi-destination tree is used to forward broadcast, unknown unicast and multicast traffic through the FabricPath network:

[Figure: FabricPath multi-destination tree]
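
For completeness, making a VLAN a FabricPath VLAN is a per-switch setting, roughly as below (a minimal sketch; the feature-set installation and VLAN range are illustrative), and it is this mode that needs to be present for the full VLAN range on every switch in the domain:

install feature-set fabricpath
feature-set fabricpath

vlan 100-199
  mode fabricpath                  ! carried across the FabricPath core rather than classic Ethernet

interface Ethernet1/1
  switchport mode fabricpath       ! FabricPath core port towards the spine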
