Adam Raffe Cloud, Data Centre & Networking

Azure CLI JMESPath Queries

I use the Azure CLI for much of what I do in Azure now - true, the same things can usually be achieved through the portal or by using PowerShell, but I just prefer the Linux / Bash nature of the CLI.

One of the things that makes the CLI so nice to use is the powerful query language that it has available - this language is called JMESPath. JMESPath isn’t specific to the Azure CLI though - it’s a query language for JSON (http://jmespath.org/) so it can be used whenever you need to manipulate or query JSON data.

So how can we use it to improve our Azure CLI operations? Let’s start with a simple example. I’ve decided that I want to use the CLI to find out the names of all the virtual machines I have running in a particular resource group. I’m going to start by running the command az vm list -g demo.VMs to return a list of all VMs in the ‘demo.VMs’ resource group (by the way, I’m using the Azure Cloud Shell to run these commands - check it out, it’s great):

[Image: Queries-1]

Woah! I just got a ton of JSON back from that command. What you see in this screenshot is just the very top of the output for one VM only - this goes on for many more pages. But all I wanted was to find out the names of my VMs! So how do I narrow this down to the information I want? Here’s where JMESPath queries come in.

I’ll add a query to the original command that should give me only the names of the VMs. The command I’ll run is as follows:

az vm list -g demo.VMs --query [].[name]

This gives me back something much more civilised:

[Image: Queries-2]

Not bad, but still a bit messier than I would like - let’s try this again with the -o table switch at the end of the command:
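For reference, the full command is simply the previous one with the table switch appended:

az vm list -g demo.VMs --query [].[name] -o table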

[Image: Queries-3]

OK, that looks better.

Now, I’ve decided that - along with the VM name - I want to know the name of the operating system disk attached to each machine. I need to add it to the query, but how do I know what to add? Let’s take a look at part of the original JSON output for one of the VMs:

[Image: Queries-4]
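As an aside, if you want to drill into just this part of the output for a single VM while exploring, az vm show with a query does the trick (the VM name below is just a placeholder):

az vm show -g demo.VMs -n MyLinuxVM --query storageProfile.osDisk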

From the above, it looks like the field I am looking for (name) is buried under the ‘storageProfile’ object and then under ‘osDisk’. So let’s add this to the query and see what happens:

az vm list -g demo.VMs --query [].[name,storageProfile.osDisk.name] -o table

[Image: Queries-5]

Nice! I can now see the name of the VMs and the OS disk used by each one. However, I’m still not happy that the column headings in my tables are simply labelled ‘Column1’, ‘Column2’, etc. To add a nice friendly column heading, I can add the heading I want to the query as follows:

az vm list -g demo.VMs --query "[].{Name:name,Disk:storageProfile.osDisk.name}" -o table

[Image: Queries-6]

Perfect. Note that this time, I have used curly brackets for the second part of the query, plus I have enclosed the whole query in quotation marks.
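The same pattern extends to any other field in the JSON - for example, adding the VM size (found under hardwareProfile in the output) as a third column:

az vm list -g demo.VMs --query "[].{Name:name, Size:hardwareProfile.vmSize, Disk:storageProfile.osDisk.name}" -o table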

In my examples so far, I’m getting information back about both the Linux and Windows VMs that live in my resource group. The problem is, I’m only really interested in the Windows VMs - so how do I narrow this query down even further to only include the Windows machine? Well, we can set up the query to look only for the elements in our array containing a certain value - in this case, we want to make sure that only the elements that contain ‘Windows’ make it into our output. Here’s how it’s done:

az vm list -g demo.VMs --query "[?contains(name, 'Windows')].{Name:name,Disk:storageProfile.osDisk.name}" -o table

This gives us the following:

[Image: Queries-7]

Now let’s take this a bit further. Suppose I want to get a list of all the VMs not currently running (i.e. deallocated) and with ‘Linux’ in the name - and then start those VMs. One way of achieving this is to do the following:

az vm list -g demo.VMs --show-details --query "[?contains(name, 'Linux') && powerState == 'VM deallocated']".id -o tsv | xargs -L1 bash -c 'az vm start --ids $0'

There’s a bit going on here, so let’s break it down. In the first part of the command, we run the az vm list command, but this time we add the --show-details parameter (only this extended version of the command shows the power state of the virtual machine). Then we add a query that returns only those VMs that a) have ‘Linux’ in the name and b) have a current power state of ‘VM deallocated’. We also want to return only the ID of each VM - hence the .id on the end of the query. Finally, the table output format we’ve been using so far isn’t suitable for piping into another command, so we use the tab-separated output format (-o tsv) instead, which gives us the raw IDs with no headers or formatting.

In the second part of the command, we’re taking the output of the first command (which returns the IDs of the VMs we are interested in) and piping this to the az vm start command. The xargs command is used to pass the output values from the first command as input values to the second.
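As an aside, az vm start can accept multiple space-separated IDs via --ids, so an equivalent one-liner using command substitution instead of xargs would look something like this:

az vm start --ids $(az vm list -g demo.VMs --show-details --query "[?contains(name, 'Linux') && powerState == 'VM deallocated'].id" -o tsv)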

There’s a whole lot more you could do with JMESPath queries - check out the JMESPath site here for more info.

Also, my colleague Rich Cheney put together a self-paced lab guide covering the Azure CLI, Bash and JMESPath - check it out here.

Thanks for reading!

Build a Virtual Data Centre in Azure - Now on Video!

A few months back, I created a lab workshop focused around building virtual data centres in Azure (see here for more details). Given how successful this workshop has been when running it with partners, I’ve now recorded a video with my colleague Dan Baker that takes viewers through the whole process of building the VDC environment. The video is less than an hour long and will walk you through the entire lab build, explaining each step along the way.

The video is available on YouTube now.

Enjoy!

Secure Connectivity from Azure VNets to Public Services

A question I’ve heard a few times recently is “if I have services running in an Azure Virtual Network, how do I securely connect that VNet to Azure public services, such as Blob Storage?”. Microsoft have this week announced a couple of features designed to help with this scenario, but before delving into those, let’s look at the issue we are actually trying to solve.

First, a few basics: a Virtual Network (VNet) is a private, isolated, network within Azure into which you can deploy infrastructure resources such as virtual machines, load balancers and so on:

[Image: Secure-VNets1]

Although these VMs can (and very often do) have direct Internet access, it is of course possible to restrict connectivity into and out of this VNet according to your requirements.

Now consider the myriad of public services available in Azure, such as Blob Storage and Azure SQL. As these services are public, they do not sit inside a virtual network and are not isolated from the “outside world”. The question is, if I want to connect from a VM inside a VNet to a public service such as Blob Storage, how does that work? It’s actually pretty straightforward - VMs connect to Blob Storage or other Azure services using the normal Internet-facing endpoints (e.g. <storage account>.blob.core.windows.net), as shown in the following diagram:

[Image: Secure-VNets2]
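You can see this from inside the VM itself - resolving the storage endpoint (the account name below is just a placeholder) returns a public IP address:

nslookup mystorageaccount.blob.core.windows.net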

OK, that’s fine - but doing it this way does open up a couple of issues:

  • The VM needs Internet access - what if I want to prevent the VM from having Internet access but retain access to the Azure services only?

  • The storage account in question is open to anyone else on the public Internet (true, there are various authentication mechanisms such as SAS tokens, but the storage account URL is still fundamentally ‘open’ and not as private as we would like).

Let’s deal with these one at a time.

How do I structure Network Security Groups to allow access only to Azure services?

In this scenario, I want my VM to have access to Azure Blob Storage, however I don’t want that VM to be able to access the wider Internet. Network Security Groups (NSGs) seem like the obvious choice here, but what addresses should I allow or deny in the rules? Do I allow access to the specific Azure data centre IP ranges for storage and deny everything else? What happens if those IPs change at any point?

Luckily, there is now a solution to this issue: NSG Service Tags, which were announced this week and are currently in preview. The idea behind Service Tags is simple: when defining NSG rules, you use the Service Tag for the service you are interested in as the destination, instead of IP addresses. So in my example above, I would have an NSG rule allowing access from my VNet to the storage service in my region (defined using a Service Tag), followed by another rule denying all other Internet access from that VNet.

The rule allowing access to storage would look like this:

[Image: Secure-VNets3]

So by using NSG Service Tags, we can simply specify the Azure public service we want to give access to without having to worry about IP addresses - much easier.
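For anyone scripting this, a rough CLI equivalent of that rule pair would look something like the following - the resource group and NSG names are hypothetical, and the ‘Storage’ tag can also be scoped to a region (e.g. ‘Storage.WestEurope’):

az network nsg rule create -g myRG --nsg-name myNSG -n Allow-Storage --priority 100 \
  --direction Outbound --access Allow --protocol '*' \
  --source-address-prefixes VirtualNetwork --destination-address-prefixes Storage \
  --destination-port-ranges '*'

az network nsg rule create -g myRG --nsg-name myNSG -n Deny-Internet --priority 200 \
  --direction Outbound --access Deny --protocol '*' \
  --source-address-prefixes VirtualNetwork --destination-address-prefixes Internet \
  --destination-port-ranges '*'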

Public services have Internet-reachable IP addresses and are therefore not truly ‘private’.

Our second issue is that - regardless of the authentication mechanisms we implement - the IP addresses for our Azure public services are fundamentally reachable by the entire Internet and therefore cannot be considered truly ‘private’. Wouldn’t it be much nicer if we could have a private connection between our Virtual Network and the service in question? Well, we’re in luck - also announced this week were VNet Service Endpoints, a method of extending private address space to Azure services, such as storage and SQL.

To enable this, the first thing we do is enable Service Endpoints on the VNet (in this case, I’m using storage as an example):

[Image: Secure-VNets4]

The second thing we need to do is configure the storage account to deny access from all networks by default and allow only the specific VNets in question. This is done from the storage account configuration:

[Image: Secure-VNets5]
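Both of these steps can also be driven from the CLI; a minimal sketch, assuming a resource group ‘myRG’, a VNet ‘myVnet’ with a subnet called ‘default’, and a storage account called ‘mystorageaccount’:

az network vnet subnet update -g myRG --vnet-name myVnet -n default --service-endpoints Microsoft.Storage
az storage account update -g myRG -n mystorageaccount --default-action Deny
az storage account network-rule add -g myRG --account-name mystorageaccount --vnet-name myVnet --subnet default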

Now that that’s configured, the VM within my VNet / subnet can access objects within the storage account, while that same account can no longer be accessed from the Internet:

[Image: Secure-VNets6]

If we take a look at the effective routes associated with my virtual machine’s NIC, we can see that we have some additional routes added for the service endpoint:

[Image: Secure-VNets7]

One other interesting point is that I can no longer access the storage account even from within the Azure portal:

[Image: Secure-VNets8]

That’s it - hopefully it’s clear from this post how these new features make securing access to Azure public services much easier. Thanks for reading!

Connecting Containers to Azure Virtual Networks

Azure has a number of ways in which to run containers, ranging from simple IaaS VMs running Docker, to Azure Container Service (a service that provisions a full container cluster using Kubernetes, Swarm or DC/OS) and Azure Container Instances. One of the characteristics of these services is that when a container is provisioned, it typically has an IP address allocated to it from within the local host, rather than from the Azure virtual network to which the host is connected. As an example, consider the following scenario where we have a single Azure IaaS virtual machine running Ubuntu and Docker:

[Image: Container-net1]

In the very simple example above, we have an Azure virtual machine running Docker attached to a virtual network, which has an address range of 10.3.1.0/24. The host itself (or technically, its NIC) is allocated an IP address from the VNet range. The container, however, is allocated an IP address from the docker0 bridge address range (typically 172.17.0.0/16). Although this works fine (NAT takes place within the host to connect the container to the outside world), we lose a certain amount of visibility into the container’s address space from the Azure side - so it becomes more difficult to apply Azure networking features such as Network Security Groups (NSGs). Wouldn’t it be nice if we could have our containers sitting directly on an Azure VNet, with an IP address assigned from that VNet range? In fact, we can now do this using a set of network plugins, available here. Let’s have a look at how these plugins work.

For this example, I’ll be using the CNM (Container Network Model) plugin - there is a CNI (Container Network Interface) version also available. I won’t go into the differences between these two models here, as they have been covered at length elsewhere (e.g. here). So the first thing I’ll do is download the plugin and run it in the background (I am using the latest version, which is 0.9 at the time of writing):

curl -sSL -O https://github.com/Azure/azure-container-networking/releases/download/v0.9/azure-vnet-cnm-linux-amd64-v0.9.tar.gz
tar xzvf azure-vnet-cnm-linux-amd64-v0.9.tar.gz
sudo ./azure-vnet-plugin &

Now that we have the plugin running, we can create a new Docker network using the Azure driver:

sudo docker network create --driver=azure-vnet --ipam-driver=azure-vnet --subnet=10.3.1.0/24 azure
e7b58e62fb381c0e5429b2d2b38520eb02513b2e6812d0c1a00a2ba7691bfeda

Let’s break down the above command: first, note that we are creating a network called ‘azure’ using the azure-vnet driver, and that we are also using the azure-vnet IPAM driver for IP address management. This IPAM plugin is needed to allocate IP addresses to our containers directly from the Azure fabric. The subnet we specify must match the subnet of the Azure VNet that we are connecting to. Let’s verify what we just created:

adraffe@Docker-Host:~$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e7b58e62fb38        azure               azure-vnet          local
9894d83e1ebc        bridge              bridge              local
3697ca264bfa        host                host                local
64eb1143821e        none                null                local

Now that we have the network created, let’s have a look at what has been created on the host using ifconfig:

adraffe@Docker-Host:~$ ifconfig
azure2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.3.1.4  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::20d:3aff:fe29:f117  prefixlen 64  scopeid 0x20<link>
        ether 00:0d:3a:29:f1:17  txqueuelen 1000  (Ethernet)
        RX packets 248  bytes 125708 (125.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 356  bytes 60542 (60.5 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Here, we can see that a new bridge has been created called ‘azure2’. This sits alongside the standard ‘docker0’ bridge and will be used to connect our containers once they have been created. Note that the IP address of this bridge is 10.3.1.4 - this address happens to be the IP of the Docker host itself, allocated from the Azure virtual network.
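To confirm the driver, IPAM driver and subnet from Docker’s point of view, we can also inspect the network we created:

sudo docker network inspect azure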

OK, so let’s create a container and try to connect it to our VNet. I’ll spin up a basic Alpine image and connect it to the network created above:

adraffe@Docker-Host:~$ sudo docker run -it --net=azure alpine
docker: Error response from daemon: No address returned.

Hmm, that didn’t work - it seems that I’m not getting an IP address allocated to my container. Why is this? The reason is that - right now - we need to pre-allocate IP addresses in Azure in order to make them available to containers. This could of course change (and hopefully will) in the future. In order to pre-allocate an address, I need to create an additional IP config and apply it to my Docker host’s NIC. I could do this in a number of ways (portal, ARM templates, etc), but I’ll use the Azure CLI here:

az network nic ip-config create -g Docker --nic-name docker-host191 --private-ip-address 10.3.1.5 --name IPConfig2

Now that we have this additional IP address in place, let’s try creating the container again:

adraffe@Docker-Host:~$ sudo docker run -it --net=azure alpine
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr B6:EC:0A:16:CF:59
          inet addr:10.3.1.5  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:586 (586.0 B)  TX bytes:0 (0.0 B)

This time, the container is created - running ifconfig from within the container shows that it has an IP address of 10.3.1.5, which sits directly on the Azure VNet I am using. Here’s how the setup looks now:

[Image: Container-net2]

Connecting Kubernetes Clusters to Azure Virtual Networks

The example above was pretty simple - just a single host running Docker with a very basic container setup. What if I want a full Kubernetes cluster connected to my Azure VNet?

By default, ACS with Kubernetes uses a basic network plugin called kubenet, in which Kubernetes pods are deployed to a subnet that is separate from the subnets in the Azure VNet. However, ACS can also use the Azure VNet plugin, which works in a very similar way to the example I showed in the first section above - a number of additional IP addresses are added to each host’s NIC and are then allocated to Kubernetes pods as they are created, allowing pods to sit directly on Azure VNets and making full use of the Azure SDN features, such as Network Security Groups.

Thanks for reading!

Check Out my Azure 'Virtual Data Centre' Lab!

I’ve just finished working on a new self-guided lab that focuses on the Azure ‘Virtual Data Centre’ (VDC) architecture. The basic idea behind the VDC is that it brings together a number of Azure technologies, such as hub and spoke networking, User-Defined Routes, Network Security Groups and Role Based Access Control, in order to support enterprise workloads and large scale applications in the public cloud.

The lab uses a set of Azure Resource Manager (ARM) templates to deploy the basic topology, which the user will then need to further configure in order to build the connectivity, security and more. Once the basic template build has been completed, you’ll be guided through setting up the following:

  • Configuration of site-to-site VPN

  • Configuration of 3rd party Network Virtual Appliance (in this case a Cisco CSR1000V)

  • Configuration of User Defined Routes (UDRs) to steer traffic in the right direction

  • Configuration of Network Security Groups (NSGs) to lock down the environment

  • Azure Security Center for security monitoring

  • Network monitoring using Azure Network Watcher

  • Alerting and diagnostics using Azure Monitor

  • Configuration of users, groups and Role Based Access Control (RBAC)

I put this lab together to run as a workshop for the partners that I work with, but I am also making it available here for you to run if you are interested. It should take around 3-4 hours to complete, and all you need is an Azure subscription (you can sign up for a free trial here).

The lab guide is available on Github here:

https://github.com/Araffe/vdc-networking-lab

Let me know what you think!

Adam