It's often necessary to configure Azure virtual machines to use a consistent outbound IP address, for example to connect to another resource that is protected by an IP-based whitelist. In a classic cloud service based deployment this was easy: all of the VMs in the cloud service used the cloud service's IP for outbound traffic, and all was well.
In an Azure Resource Manager (ARM) deployment things are different. There's no concept of cloud services, and public IPs are assigned to VMs or load balancers. If a VM has a public IP assigned to it directly then it will use that for outbound traffic, but what if you don't want to assign every VM a public IP, or you want multiple machines to use the same outbound IP?
After investigating this issue, it appears the only way to achieve this is to assign a public IP to a load balancer (even if you aren't doing any load balancing) and then place all of the VMs you want to use that IP for outbound traffic in the same availability set. When you add the first VM to the backend pool of the load balancer, its availability set becomes associated with the load balancer. All machines in the availability set that don't have their own public IP will then use the load balancer's IP for outbound traffic.
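As a rough sketch, that setup could look something like this with the Azure CLI (resource names such as `myRG`, `outboundIP`, `outboundLB` and `outboundAvSet` are placeholders, and exact parameters may vary by CLI version):

```shell
# Create a public IP and a load balancer that uses it as its frontend
az network public-ip create --resource-group myRG --name outboundIP
az network lb create --resource-group myRG --name outboundLB \
    --public-ip-address outboundIP --backend-pool-name outboundPool

# Create an availability set and a VM inside it, with no public IP of its own
az vm availability-set create --resource-group myRG --name outboundAvSet
az vm create --resource-group myRG --name vm1 --image UbuntuLTS \
    --availability-set outboundAvSet --public-ip-address ""

# Add the VM's NIC to the load balancer's backend pool; this is the step
# that associates the availability set with the load balancer
az network nic ip-config address-pool add --resource-group myRG \
    --nic-name vm1VMNic --ip-config-name ipconfig1 \
    --lb-name outboundLB --address-pool outboundPool
```

Passing an empty string to `--public-ip-address` stops the CLI creating a public IP for the VM, so its outbound traffic falls through to the load balancer's IP.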
Mixed VM Series
This solution works fine until you start mixing VM series. I started off with some A-series VMs, assigned them to the availability set that the load balancer uses, and all worked. However, I then needed some D-series machines, but I couldn't add them to the availability set the A-series machines were in; doing so generates the error: "The sizes supported by availability set do not match the size of the virtual machine."
It turns out that Azure deploys VMs onto certain clusters depending on their series, and VMs in an availability set need to be in the same cluster, else you see this message. You can find more details of this here. The types of cluster available are:
- Type 1: A0-A4
- Type 2: A0-A7
- Type 3: A8/A9
- Type 4: A0-A7 and D1-D14
- Type 5: G1-G5 (Godzilla)
- Type 6: DS1-DS14
Because the first VM in this availability set had been an A-series machine, Azure had placed the set on a type 1 or type 2 cluster, so when I tried to add a D-series machine it failed.
In my particular scenario there is a solution: create the first VM as a D series. This ensures the availability set lands on a type 4 cluster, which then allows you to place A- and D-series VMs in the same availability set. Once I did that, all was well.
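In other words, the fix is simply a matter of creation order. With the Azure CLI it might look like this (names and sizes are illustrative placeholders):

```shell
# Create the D-series VM first, so the availability set is placed on a
# type 4 cluster (which supports A0-A7 and D1-D14)
az vm create --resource-group myRG --name dVM --image UbuntuLTS \
    --size Standard_D1 --availability-set outboundAvSet --public-ip-address ""

# A-series VMs can now be added to the same availability set
az vm create --resource-group myRG --name aVM --image UbuntuLTS \
    --size Standard_A1 --availability-set outboundAvSet --public-ip-address ""
```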
However, this solution will not work if you want to mix A- or D-series with G- or DS-series machines; there are no common clusters. The only way to resolve this is to create a second load balancer with a second public IP, and associate the second availability set with that. This adds a further complication: the second IP will not actually be allocated unless the load balancer has at least one inbound NAT rule pointing at a VM. So if you don't want any inbound traffic, you end up having to create a rule and then block it at the security group or VM level. Not ideal, but it works. That way, the machines in the second availability set will all use the same outbound IP, but obviously it will be different to the first set's.
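The NAT rule workaround could be sketched with the Azure CLI roughly as follows (resource names, NIC names and the port number are assumptions for illustration):

```shell
# Create a throwaway inbound NAT rule so the second public IP gets allocated
az network lb inbound-nat-rule create --resource-group myRG \
    --lb-name secondLB --name dummyNatRule \
    --protocol Tcp --frontend-port 50000 --backend-port 22

# Bind the rule to one VM's NIC, so the rule actually points at a VM
az network nic ip-config inbound-nat-rule add --resource-group myRG \
    --nic-name gVM1VMNic --ip-config-name ipconfig1 \
    --lb-name secondLB --inbound-nat-rule dummyNatRule

# Then block the port at the network security group level so nothing
# can actually connect inbound
az network nsg rule create --resource-group myRG --nsg-name secondNSG \
    --name blockDummyPort --priority 100 --access Deny \
    --direction Inbound --protocol Tcp --destination-port-ranges 50000
```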
At the moment, the linking of the outbound IP to the availability set means extra work and resource cost. There are a couple of ways that MS could look to resolve this in the future:
- Allow you to associate more than one availability set with a load balancer
- Remove the link altogether and provide another way to link outbound IPs to VMs