In this post, we’ll take a closer look at some of the most important constructs within the ACI solution – application profiles, Endpoint Groups (EPGs), contracts and filters. Hopefully you’ve taken a look at the other parts in this series – in part 1, I gave a brief overview of ACI and what I would be covering in the series. Part 2 discussed the fabric bring-up process, and part 3 gave a short tour of the APIC.

Hello again! Hopefully you’re back after reading parts one and two of this series – in the first post I covered a basic introduction to ACI, and in the last post we looked at how the fabric initialisation and discovery process works. After the incredible excitement of building an ACI fabric, what do we do with it? Well, if you’ve never seen an APIC before, in this post we’ll take a look around it and start to find our way through the GUI.


Welcome to the second part of my blog series on ACI. Before we start to delve into policies, contracts, filters and all the other goodness to be found in ACI, we need to actually provision a fabric and bring it online. As it turns out, this is a straightforward process and one which should only take a short amount of time. Obviously before we begin, we need to make sure everything is cabled correctly (leaf nodes to spine nodes, APIC controllers to leaf nodes, APIC out-of-band connectivity, and so on).

This post is the first in a series in which I’m going to describe various aspects of Cisco’s Application Centric Infrastructure (ACI). ACI, if you aren’t already aware, is a new data centre network architecture from Cisco which uses a policy-based approach to abstract traditional network constructs (e.g. VLANs, VRFs, IP subnets and more). What I’m not going to do in these posts is cover too many of the basic concepts of ACI – for that, I recommend you read the ACI Fundamentals book on cisco.com, available here. Instead, my intention is to cover the practical aspects of building and running an ACI fabric, including how to bring up a fabric, basic physical connectivity and integration with virtualisation systems.

It’s been a while since I posted – I’ve been spending a lot of time getting up to speed on the new Nexus 9000 switches and ACI. The full ACI fabric release is a few months away, but one of the interesting programmability aspects of the Nexus 9000 (running in “Standalone”, or NX-OS mode) is the NX-API. So what is it exactly?
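
In short, NX-API wraps the familiar CLI in an HTTP/HTTPS transport, accepting JSON-RPC (or XML/JSON) requests and returning structured data. As a quick, hedged illustration, a ‘show version’ request might look something like the sketch below – the switch address and credentials are placeholders:

    # Minimal sketch: enable the API on the switch first ('feature nxapi'), then
    # from any host with curl (address and credentials below are placeholders)
    curl -u admin:password \
         -H 'Content-Type: application/json-rpc' \
         -d '[{"jsonrpc": "2.0", "method": "cli",
               "params": {"cmd": "show version", "version": 1}, "id": 1}]' \
         http://192.0.2.10/ins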

One of the interesting design considerations when deploying FabricPath is where to put your layer 3 gateways (i.e. your SVIs). Some people opt for the spine layer (especially in smaller networks), while others choose to deploy a dedicated pair of leaf switches acting as gateways for all VLANs, or to distribute the gateway function across multiple leaves. Whichever choice is made, there are a couple of challenges that people commonly run into.

As you may know, vPC+ modifies the behaviour of HSRP to allow either vPC+ peer to route traffic locally. This is certainly useful functionality in a FabricPath environment as it allows dual-active gateways; however, what if you want your default gateways on the spine layer (where there are no directly connected STP devices, and therefore no real need to run vPC+)? What you end up with in this case is vPC+ running on your spine switches purely to gain access to the dual-active HSRP forwarding, but with no actual vPC ports on the switch. This works fine – but most people would prefer not to run vPC+ on their spine switches if they can avoid it.
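
To make that concrete, here is a rough, hedged sketch of what such a spine configuration ends up looking like: a vPC domain carrying a FabricPath switch-id (which is what makes it vPC+) purely to unlock dual-active HSRP forwarding, with no vPC member ports anywhere. All IDs, VLANs and addresses below are illustrative.

    ! Spine-1 (Spine-2 mirrors this) - illustrative values only
    feature-set fabricpath
    feature vpc
    feature hsrp

    vlan 100
      mode fabricpath

    vpc domain 10
      fabricpath switch-id 1001            ! this line turns the vPC domain into vPC+
      peer-keepalive destination 192.0.2.2

    interface port-channel 1
      switchport mode fabricpath           ! vPC+ peer-link runs over FabricPath core ports
      vpc peer-link

    interface Vlan100
      ip address 10.100.0.2/24
      hsrp 100
        ip 10.100.0.1                      ! both peers now forward for the HSRP vMAC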

[Figure: Anycast-HSRP-1]

VXLAN (Virtual eXtensible LAN) is an ‘overlay’ technology introduced a couple of years ago to address the scalability limits of VLANs (the 12-bit VLAN ID allows only around 4,000 segments) and to enable the creation of a large number of logical networks – up to 16 million, thanks to the 24-bit VXLAN Network Identifier. It also has the benefit of allowing layer 2 segments to be extended across layer 3 boundaries, due to the MAC-in-UDP encapsulation used.

One of the most significant barriers to the deployment of VXLAN is its reliance upon IP multicast in the underlying network for handling of broadcast / multicast / unknown unicast traffic – many organisations simply do not run multicast today and have no desire to enable it on their network. Another issue is that while VXLAN itself can support up to 16 million segments, it is not feasible to support such a huge number of multicast groups on any network – so in larger scale VXLAN environments, multiple VXLAN segments might have to be mapped to a single multicast group, which can lead to flooding of traffic where it is not needed or wanted.
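
On the Nexus 1000V, for example, the segment-to-multicast-group mapping is made when the bridge domain is defined, so sharing a group between segments looks something like the hedged sketch below – the names, segment IDs and group address are purely illustrative:

    ! Nexus 1000V VSM - two VXLAN segments sharing one multicast group (illustrative)
    feature segmentation

    bridge-domain tenant-red
      segment id 5000
      group 239.1.1.1

    bridge-domain tenant-blue
      segment id 5001
      group 239.1.1.1        ! BUM traffic for both segments floods to the same group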

With the release of version 4.2(1)SV2(2.1) of the Nexus 1000V, Cisco have introduced ‘Enhanced VXLAN’ – this augmentation to VXLAN removes the requirement to deploy IP multicast and can instead use the VSM (Virtual Supervisor Module) to handle the distribution of MAC addresses to each of the VEMs (Virtual Ethernet Modules). This post will delve into the specifics of Enhanced VXLAN and explain how the various modes work.
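
As a preview, the flooding behaviour is selected per bridge domain; my understanding is that switching a segment over to Enhanced VXLAN is along the lines of the sketch below, though the exact keyword may vary by release, so treat it as indicative rather than definitive.

    ! Nexus 1000V VSM - same bridge domain, no multicast group required (indicative only)
    bridge-domain tenant-red
      segment id 5000
      segment mode unicast-only    ! VEM-to-VEM replication instead of IP multicast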

A question that I still get quite a lot is how to connect Nexus 2000 Fabric Extenders to their parent switches – is it better to single-attach them to one parent, or to dual-home the FEX to both parent switches (using vPC)? Of course there is no right answer for every situation – it depends on the individual environment and sometimes personal preference – but here are a few of my thoughts on this.
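
For reference, the dual-homed (active/active) option means the FEX fabric port-channel is itself a vPC, with the same configuration applied on both parent switches – something like this hedged sketch, where the interface, FEX and vPC numbers are illustrative:

    ! Applied identically on both parent switches - illustrative numbering
    feature fex

    interface port-channel 101
      switchport mode fex-fabric
      fex associate 101
      vpc 101                       ! the FEX fabric links form a vPC across both parents

    interface Ethernet1/1
      switchport mode fex-fabric
      fex associate 101
      channel-group 101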

For anyone using the Nexus 1000V virtual switch, it’s sometimes useful to see what is happening directly on the Virtual Ethernet Module (VEM) residing on the host, rather than just having visibility from the Virtual Supervisor Module (VSM). This is achieved using the ‘vemcmd’ commands on the ESXi host. Unfortunately these commands are not well documented, so I thought it would be useful to provide a list of some of the more common ones.
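
To give a flavour of what’s coming, a few commonly used vemcmd queries are shown below – these are run directly from the ESXi shell on the host in question:

    # Run from the ESXi host shell
    vemcmd show card        # VEM status, VSM connectivity details and opaque data
    vemcmd show port        # state of the local ports on this VEM
    vemcmd show vlan        # VLANs currently programmed on the VEM
    vemcmd show trunk       # trunk ports and the VLANs allowed on them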


On a recent Packet Pushers podcast, there was a brief discussion of the Peer-Gateway feature on the Nexus 7000 and whether it resolves the lack of support for L3 (routing) over vPC. The whole topic has been quite a big source of confusion, so let’s answer it straight away: using Peer-Gateway to try to work around the L3 over vPC limitation is not supported, and more importantly, in most cases it doesn’t actually work. The question is, why not? There are actually two reasons.
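
For context, Peer-Gateway is a single knob under the vPC domain, intended to deal with hosts (certain NAS and load-balancer platforms, for example) that send return traffic to the physical router MAC of one peer rather than to the HSRP virtual MAC. A hedged sketch of where it sits, with illustrative values:

    ! Illustrative values - peer-gateway lets either peer route frames sent to the
    ! other peer's own router MAC, rather than dropping them on the peer-link
    vpc domain 10
      peer-keepalive destination 192.0.2.2
      peer-gateway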