Channel: Microsoft Azure Support Team Blog

Building Windows Server Failover Cluster on Azure IAAS VM – Part 2 (Network)


Hello, cluster fans. In my previous blog, I talked about how to work around the storage blocker in order to implement Windows Server Failover Cluster on an Azure IaaS VM. Now let’s discuss another important part – the network in a cluster on Azure.

Before that, you should know some basic concepts of Azure networking. Here are a few Azure terms we need to use to set up the cluster.

VIP (Virtual IP address): A public IP address that belongs to the cloud service. It also serves as the Azure Load Balancer address, which tells how network traffic should be directed before being routed to the VM.

DIP (Dynamic IP address): An internal IP assigned by Microsoft Azure DHCP to the VM.

Internal Load Balancer: It is configured to port-forward or load-balance traffic inside a VNET or cloud service to different VMs.

Endpoint: It associates a VIP/DIP + port combination on a VM with a port on either the Azure Load Balancer for public-facing traffic or the Internal Load Balancer for traffic inside a VNET (or cloud service).

You can refer to this blog for more details about these Azure networking terms: http://blogs.msdn.com/b/cloud_solution_architect/archive/2014/11/08/vips-dips-and-pips-in-microsoft-azure.aspx

OK, enough reading. Storage is ready and we know the basics of Azure networking, so can we start to build the cluster?

Yes! The first difference you will see is that you need to start the cluster with one node and then add the other nodes as a next step. This is because the cluster name object (CNO) cannot come online: it cannot acquire a unique IP address from the Azure DHCP service. Instead, the IP address assigned to the CNO is a duplicate of the address of the node that owns the CNO. That IP fails as a duplicate and can never be brought online. This eventually causes the cluster to lose quorum because the nodes cannot properly connect to each other. To prevent the cluster from losing quorum, you start with a one-node cluster, let the CNO’s IP fail, and then manually set up the IP address.

Example:

The CNO DEMOCLUSTER is offline because its IP Address resource is failed. 10.0.0.4 is the VM’s DIP, which is the address the CNO’s IP was duplicated from.

 

In order to fix this, we need to go into the properties of the IP Address resource and change the address to another address in the same subnet that is not currently in use, for example, 10.0.0.7.

To change the IP address, choose the Properties of the IP Address and specify the new address.

 

 Once the address is changed, right click on the Cluster Name resource and tell it to come online.

 

 Then you can add more nodes to the cluster.
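If you prefer to script that step, here is a minimal sketch using the FailoverClusters PowerShell module (it assumes the remaining node is named node2; adjust the names to your environment):

# Run on the existing cluster node once the CNO is online.
Import-Module FailoverClusters
Add-ClusterNode -Name node2   # repeat for each additional node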

Another way to resolve this issue is to use the New-Cluster PowerShell cmdlet and specify a static IP address during cluster creation.

Take the above environment as example:

New-Cluster -Name DEMOCLUSTER -Node node1,node2 -StaticAddress 10.0.0.7

https://technet.microsoft.com/en-us/library/hh847246.aspx

Note: The static IP address that you assign to the CNO is not for network communication. Its only purpose is to bring the CNO online to satisfy the dependency. Therefore, you cannot ping that IP, you cannot resolve its DNS name, and you cannot use the CNO for management, since its IP is unusable.

 

Now you’ve successfully created a cluster. Let’s add a highly available role to it. For demo purposes, I’ll take File Server as an example, since this is the most common role and one that most of us understand.

Note: In a production environment, we do not recommend a File Server cluster in Azure because of cost and performance. Take this example as a proof of concept.

Different from a cluster on-premises, I recommend you pause the other nodes and keep only one node up. This is to prevent the new File Server role from moving endlessly among the nodes, because the file server’s VCO (virtual computer object) will automatically be assigned a duplicate of the IP address of the node that owns the VCO. That IP fails, keeps the VCO from coming online on any node, and may eventually make Failover Cluster Manager unresponsive. This is a similar scenario to the CNO issue we just discussed.
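A minimal sketch of pausing the other nodes with PowerShell (assuming the FailoverClusters module and a second node named node2; use your own node names):

Suspend-ClusterNode -Name node2   # pause the node so the role cannot move to it
# ...create and configure the clustered role on the remaining node...
Resume-ClusterNode -Name node2    # resume the node once the role's IP address has been fixed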

Screenshots are more intuitive.

The VCO DEMOFS won’t come online because of the failed status of its IP address. This is expected, because the dynamic IP address duplicates the IP of the owner node.

  

Manually edit the IP to a static, unused address, 10.0.0.8 in this example, and the whole resource group comes online.
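If you would rather make this change with PowerShell instead of the Failover Cluster Manager GUI, a minimal sketch (assuming the VCO's IP Address resource is named "IP Address 10.0.0.0", as in the example later in this post):

$vcoIP = Get-ClusterResource "IP Address 10.0.0.0"   # the VCO's IP Address resource
$vcoIP | Set-ClusterParameter -Name Address -Value "10.0.0.8"
Stop-ClusterResource $vcoIP
Start-ClusterGroup DEMOFS                            # bring the whole File Server group online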

 

But remember, that IP address is just as unusable as the CNO’s IP – you can use it to bring the resource online, but it is not a real IP for network communication. If this is a File Server, none of the VMs except the owner node of this VCO can access the file share, because Azure networking loops the traffic back to the node it originated from.

 

Show time. We need to utilize a load balancer in Azure to make this IP able to communicate with other machines and achieve client-server traffic.

A Load Balancer is an Azure IP resource that can route network traffic to different Azure VMs. The IP can be public-facing, like a VIP, or internal only, like a DIP. Each VM needs endpoint(s) so the Load Balancer knows where the traffic should go. In an endpoint, there are two kinds of ports. A regular port is used for normal client-server communication. For example, port 445 is for SMB file sharing, port 80 is for HTTP, port 1433 is for MSSQL, and so on. The other kind is the probe port; the default port number is 59999. The probe port is used to find out which node actively hosts the VCO in the cluster. The load balancer sends probe pings over TCP port 59999 to every node in the cluster, by default every 10 seconds. When you configure a role in a cluster on an Azure VM, you need to figure out what port(s) the application uses, because you will add those ports to the endpoint. Then you add the probe port to the same endpoint. After that, you update the parameters of the VCO’s IP address with that probe port. Finally, the load balancer does the port-forwarding work and routes the traffic to the VM that owns the VCO. All of the above settings had to be completed using PowerShell at the time this blog was written.

Note: When this blog was written, Microsoft supported only one resource group in a cluster on Azure, in an Active/Passive model only. This is because the VCO’s IP can only use the cloud service IP address (VIP) or the IP address of the Internal Load Balancer. This limitation is still in effect, even though Azure now supports the creation of multiple VIP addresses in a given cloud service.

Here is the diagram for Internal Load Balancer (ILB) in cluster which can explain the above theory better:

 

 

The application in this cluster is File Server, which is why we use port 445, and the IP for the VCO is 10.0.0.8, the same as the ILB. There are three steps to configure this:

Step 1: Add the ILB to the Azure cloud service.

Run the following PowerShell commands on your on-premises machine that can manage your Azure subscription.

# Define variables.

$ServiceName = "demovm1-3va468p3" # the name of the cloud service that contains the VM nodes. Your cloud service name is unique. Use Azure portal to find out service name or use get-azurevm.

 

$ILBName = "DEMOILB" # newly chosen name for the new ILB

$SubnetName = "Subnet-1" # subnet name that the VMs use in the VNet

$ILBStaticIP = "10.0.0.8" # static IP address for the ILB in the subnet

# Add Azure ILB using the above variables.

Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -SubnetName $SubnetName -ServiceName $ServiceName -StaticVNetIPAddress $ILBStaticIP

# Check the settings.

Get-AzureInternalLoadBalancer -ServiceName $ServiceName

  

Step 2: Configure the load balanced endpoint for each node using ILB.

Run the following PowerShell commands on your on-premises machine that can manage your Azure subscription.

# Define variables.

$VMNodes = "DEMOVM1", "DEMOVM2" # the cluster nodes' names, separated by commas. Your nodes' names will be different.

$EndpointName = "SMB" # newly chosen name of the endpoint

$EndpointPort = "445" # public port to use for the endpoint for SMB file sharing. If the cluster is used for another purpose, e.g. HTTP, change the port number to 80.

# Add an endpoint with port 445 and probe port 59999 to each node. It will take a few minutes to complete. Pay attention to the ProbeIntervalInSeconds parameter; it controls how often the probe checks which node is active.

ForEach ($node in $VMNodes)

{

Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -LBSetName "$EndpointName-LB" -Protocol tcp -LocalPort $EndpointPort -PublicPort $EndpointPort -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -InternalLoadBalancerName $ILBName -DirectServerReturn $true | Update-AzureVM

}

# Check the settings.

ForEach ($node in $VMNodes)

{

Get-AzureVM -ServiceName $ServiceName -Name $node | Get-AzureEndpoint | Where-Object {$_.Name -eq "smb"}

}

 

Step 3: Update the parameters of VCO’s IP address with Probe Port.

Run the following PowerShell commands on one of the cluster nodes.

# Define variables

$ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

$IPResourceName = "IP Address 10.0.0.0" # the IP Address resource name (Use Get-ClusterResource | Where-Object {$_.ResourceType -eq "IP Address"} or the GUI to find the name)

$ILBIP = "10.0.0.8" # the IP address of the Internal Load Balancer (ILB)

# Update cluster resource parameters of VCO’s IP address to work with ILB.

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"="59999";"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}

You should see this window:

 

Take the IP Address resource offline and bring it online again. Start the clustered role.
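The same offline/online cycle can be scripted, reusing the variables defined in Step 3 (a minimal sketch; DEMOFS is the clustered role name from this example):

Stop-ClusterResource $IPResourceName
Start-ClusterResource $IPResourceName
Start-ClusterGroup DEMOFS   # start the clustered role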

Now you have an Internal Load Balancer working with the VCO’s IP. One last task is the Windows Firewall: you need to open at least port 59999 on all nodes for probe port detection, or turn the firewall off. Then you should be all set. It may take about 10 seconds to establish the connection to the VCO the first time, or after you fail the resource group over to another node, because of the ProbeIntervalInSeconds value we set up before.
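A minimal sketch of opening the probe port with PowerShell on each node (New-NetFirewallRule is available on Windows Server 2012 and later; on older releases, use netsh advfirewall instead):

# Allow the Azure load balancer probe to reach this node.
New-NetFirewallRule -DisplayName "Azure LB Probe 59999" -Direction Inbound -Protocol TCP -LocalPort 59999 -Action Allow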

In this example, the VCO has an internal IP, 10.0.0.8. If you want to make your VCO public-facing, you can use the cloud service’s IP address (VIP). The steps are similar and easier, because you can skip Step 1: the VIP is already an Azure load balancer. You just need to add an endpoint with the regular port plus the probe port to each VM (Step 2), and then update the VCO’s IP in the cluster (Step 3). Please be aware that your clustered resource group will be exposed to the internet, since the VCO has a public IP. You may want to protect it by planning enhanced security measures.
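For the public-facing variant, the Step 2 command is essentially the same minus the -InternalLoadBalancerName parameter, so the endpoint hangs off the cloud service VIP instead (a sketch reusing the variables from Step 2):

ForEach ($node in $VMNodes)
{
Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -LBSetName "$EndpointName-LB" -Protocol tcp -LocalPort $EndpointPort -PublicPort $EndpointPort -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true | Update-AzureVM
}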

Great! Now you’ve completed all the steps of building a Windows Server Failover Cluster on an Azure IaaS VM. It is a bit of a longer journey; however, you’ll find it useful and worthwhile. Please leave me comments if you have questions. Happy Clustering!

Mario Liu

Support Escalation Engineer

CSS Americas | WINDOWS | HIGH AVAILABILITY     


Supported IP Protocols for Azure Cloud Services


As of today, June 2015, the supported IP protocols for Azure cloud services are TCP (protocol number 6) and UDP (protocol number 17) only.  All TCP and UDP ports are supported.  This applies to both cloud service VIPs as well as instance level public IPs.  Additional IP protocols may work in some scenarios (for example ICMP), but there is no support for them or guarantee that they will continue to work in the future or in all deployment scenarios.

Customers are encouraged to use Network Security Groups and/or guest operating system firewalls when using instance level public IPs to ensure only desired traffic is allowed to reach the VM.

Azure SNAT


 

This post was contributed by Pedro Perez.

Azure’s network infrastructure is quite different than your usual on-premises network as there are different layers of software abstraction that work behind the curtains. I would like to talk today about one of these layers and why you may want to take it into account when troubleshooting a network issue with your application.

Possibly the biggest challenge for us and for our customers in the cloud is to build to scale. Our engineers have designed Azure to be able to work at hyper-scale and simultaneously to be able to accommodate our most technically demanding customers. As you can imagine, this adds complexity to the system. For example, there are multiple IPs associated with each virtual machine. In a non-Azure Resource Manager (ARM) scenario, when you deploy a VM inside a Cloud Service, the VM gets assigned an IP address known as the dynamic IP (DIP), which is not routable outside Azure. The Cloud Service gets assigned an IP address known as the virtual IP (VIP), which is a routable public IP address. Every VM inside the Cloud Service will hide behind the VIP when sending outgoing traffic and can only be accessed through the creation of an endpoint on the VIP that maps to that specific VM.

A VIP could be defined, in a parallelism with traditional on-premises networks, as a NAT IP address. The biggest particularity of the VIP is that it is shared among all the VMs in the same Cloud Service. You can easily control which ports redirect traffic to which VM by leveraging endpoints on the Cloud Service, but how does the translation work for outgoing traffic?

Source NAT

This is where Source NAT (SNAT from now on) comes into play. Any traffic leaving the Cloud Service (i.e. from a VM inside the Cloud Service and not going to another VM in the same Cloud Service) will go through a NAT layer, where it will get SNAT applied. As the name implies, SNAT only changes source information: Source IP address and source port.

Source IP address translation

The source IP address changes from the original DIP to the Cloud Service’s VIP, so traffic can be easily routed. This is a many-to-one mapping where all the VMs inside the Cloud Service will be translated to the Cloud Service’s VIP. At this point we already have a challenge. What would happen if two VMs inside the same Cloud Service create an outgoing connection to the same destination, also using the same source port? Remember, a system will differentiate between different TCP (or UDP) connections by looking at the 4-tuple Source IP, Source Port, Destination IP, and Destination Port.

Changing any of the destination information will effectively break the connection, and we can’t change the source IP address as we only have one (the VIP). Therefore, we have to change the source port.

Source port translation

Azure pre-allocates 160 source ports on the VIP for the VMs’ connections. This pre-allocation is done to speed up establishing new communications, and it’s limited to 160 ports to save resources. The initial port is randomly chosen among the high ports range, pre-allocating it and the next 159. We’ve found that these settings work for everyone as long as we follow some best practices when developing applications that talk over the network.

Azure will translate the source port of an outgoing connection to the first one available from those 160 pre-allocated ports.

The first question you might have is: what happens if I use all 160 ports simultaneously? Well, if none of them has been freed yet, the system will assign more ports on a best-effort basis, but as soon as one becomes free it will be available for use again.

SNAT Table

All these translations must be stored somewhere so we can keep track of them as packets flow to and from our VM. The place where they are stored is called the SNAT table, and it is the same concept as the NAT table you would find in other networking products such as firewalls or routers.

The system will save the original 5-tuple (source IP, source port, destination IP, destination port, protocol - tcp/udp) and the translated 5-tuple where (source IP, source port) have been translated to the VIP and one of the pre-allocated ports.

Removing entries from the table

As in any other NAT table out there, you can’t store these entries forever, so there are rules for removing an entry. The most evident ones are:

  • If the connection has been closed with FIN, ACK we will wait a few minutes (2xMSL (Maximum Segment Lifetime) - http://www.rfc-editor.org/rfc/rfc793.txt) before removing the entry.

  • If the connection has been closed with a RST we will remove the entry straightaway. 

At this point, I'll bet you’ve already spotted an issue. How do we decide to remove a UDP connection, or a TCP connection where the peers just disappeared (e.g. crashed or simply stopped responding)?

In that case we’ve got a hardcoded timeout value. Every time a packet for a specific connection goes through the SNAT process, we start a four-minute countdown on that connection. If we reach zero before another packet goes through, we delete the SNAT entry from the table, as we consider the connection to be finished. This is a very important point: if your application keeps a connection idle for 4 minutes, its entry in the connection table will be deleted. Most applications won’t handle losing a connection they thought was still active, so it is prudent that you manage your connection lifetime wisely and not let connections go idle.

Long-time idle connections considered harmful

Sometimes it’s easier to show an example to help explain a complex situation, so here’s an example of what could go wrong and why you shouldn’t keep TCP connections idle. This is how an active HTTP connection would look in the client’s (the VM in a Cloud Service), the SNAT, and the server’s connection tables:

Client

SRC IP     | SRC PORT | DST IP     | DST PORT | TCP STATE
CLIENT DIP | 12345    | SERVER VIP | 80       | ESTABLISHED

 Source port is randomly chosen by the client OS.

SNAT table

SRC IP     | SRC PORT       | DST IP    | DST PORT
DIP -> VIP | 12345 -> 54321 | SERVER IP | 80

 The DIP has been translated into the VIP and source port has been translated to the first available among the 160 pre-allocated ports.

Server

SRC IP | SRC PORT | DST IP    | DST PORT | TCP STATE
VIP    | 54321    | SERVER IP | 80       | ESTABLISHED

The server doesn’t know the client’s DIP or the original source port as these are hidden behind the VIP because of the SNAT.

So far, so good.

Let’s now imagine that this connection has been idle for just a bit more than 4 minutes. How would the tables look?

Client

SRC IP     | SRC PORT | DST IP     | DST PORT | TCP STATE
CLIENT DIP | 12345    | SERVER VIP | 80       | ESTABLISHED

 There are no changes here. The client has the connection ready for when more data is needed, but there’s no data pending from the server.

SNAT table

SRC IP  | SRC PORT | DST IP  | DST PORT
REMOVED | REMOVED  | REMOVED | REMOVED

 What happened here?!

The SNAT table entry has expired because it has been 4+ minutes idle, so it’s gone from the SNAT table.

Server

SRC IP     | SRC PORT | DST IP    | DST PORT | TCP STATE
CLIENT VIP | 54321    | SERVER IP | 80       | ESTABLISHED

 As expected, nothing changed on the server. It has sent all the data that the client requested and it’s been 4+ minutes awaiting new requests on that TCP connection.

Now comes the problem. Let’s say the client resumes its operations and decides to request more data from the server using the same connection. Unfortunately, that won’t work because Azure will drop the traffic at the SNAT layer, because the packet does not meet any of these criteria:

  • It belongs to an existing connection (nope – it doesn't meet this criterion because the entry had expired and was removed!)

  • It is a SYN packet (for new connections) (nope – it doesn't meet this criterion since it isn't a SYN packet)

This means that the attempt to connect on this tuple will fail. OK, this is a problem, but not the end of the world, right? The client will just open a new TCP connection (i.e. a SYN packet will go through the SNAT) and send the HTTP request inside that one. That’s correct, but there are situations where we could face another consequence of that SNAT entry expiry.

If the client is opening new connections to the same server and port (SERVER IP:80) fast enough to cycle through the 160 assigned ports (or faster), but not explicitly closing them, port 54321 will be free to use again (remember: the translation for port 12345 -> 54321 has expired) and we will have run through the original 160 ports in a breeze. Sooner rather than later, port 54321 will be used again for a new translation with a source port other than 12345, but with the same source and destination IP addresses (and the same destination port!). Here’s how it will look:

Client

SRC IP     | SRC PORT | DST IP     | DST PORT | TCP STATE
CLIENT DIP | 12346    | SERVER VIP | 80       | SYN_SENT

 Client decides to create a new connection, so it sends a SYN packet to establish a new connection on SERVER IP:80.

SNAT table

SRC IP     | SRC PORT       | DST IP    | DST PORT
DIP -> VIP | 12346 -> 54321 | SERVER IP | 80

Azure sees the packet and there’s no matching entry in the table, but accepts it since it’s a SYN packet (i.e. a new connection). It translates 12346 to 54321, since that is again the first available port from the 160 pre-allocated ports.

Server

SRC IP     | SRC PORT | DST IP    | DST PORT | TCP STATE
CLIENT VIP | 54321    | SERVER IP | 80       | ESTABLISHED

The server already has an ESTABLISHED connection, so when it receives a SYN packet from CLIENT VIP:54321 it will ignore and drop it. At this point we’ve ended up with two broken connections: the original one that has been idle for 4+ minutes, and the new one.

The best way to avoid this issue, and actually many issues on different kinds of platforms, is to have a sensible keep-alive at the application level (https://msdn.microsoft.com/en-us/library/windows/desktop/ee470551(v=vs.85).aspx). Sending a packet through an idle connection every 30 seconds or 1 minute should be considered, as it will reset any idle timers both in Azure and in on-premises firewalls.
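As an illustration only, here is a minimal PowerShell sketch of enabling TCP keep-alives on a client socket (the host name is hypothetical; the default Windows keep-alive idle time is 2 hours, so the SIO_KEEPALIVE_VALS call below is what actually brings it under the 4-minute SNAT timeout):

# Open a client connection (hypothetical server name and port).
$client = New-Object System.Net.Sockets.TcpClient("server.contoso.com", 80)

# Turn on keep-alives for this socket.
$client.Client.SetSocketOption([System.Net.Sockets.SocketOptionLevel]::Socket,
                               [System.Net.Sockets.SocketOptionName]::KeepAlive, $true)

# SIO_KEEPALIVE_VALS structure: on/off flag, idle time before the first keep-alive (ms), interval between keep-alives (ms).
$keepAliveValues = New-Object byte[] 12
[BitConverter]::GetBytes([uint32]1).CopyTo($keepAliveValues, 0)       # enable keep-alives
[BitConverter]::GetBytes([uint32]30000).CopyTo($keepAliveValues, 4)   # send the first keep-alive after 30 seconds idle
[BitConverter]::GetBytes([uint32]1000).CopyTo($keepAliveValues, 8)    # then retry every second
$client.Client.IOControl([System.Net.Sockets.IOControlCode]::KeepAliveValues, $keepAliveValues, $null) | Out-Null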

Need a quick workaround?

There’s a quick workaround you can use in Azure. You can avoid using the VIP for your outgoing traffic (and incoming too) by assigning an Instance-Level IP address, known as a PIP. The PIP is assigned to only one instance, so there is no need for SNAT to accommodate the requests of the different VMs. Traffic still goes through the software load balancer (SLB), but as there’s no SNAT applied, there’s no SNAT table and you can happily keep your connections idle… until the SLB kills them (http://azure.microsoft.com/blog/2014/08/14/new-configurable-idle-timeout-for-azure-load-balancer/), but that’s another story.

Before we go, we should probably also acknowledge another long-standing problem with this design. Since Azure allocates these outgoing ports in batches of 160, it is possible that the creation of a new batch of 160 may not happen fast enough and an outgoing connection attempt will fail. We typically only see this under very high load (almost always load testing), but if you fall victim to this, the solution is the same – use a PIP.
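For a classic (ASM) VM, a minimal sketch of assigning a PIP with the Azure PowerShell module (the cloud service, VM, and PIP names below are placeholders):

# Assign an instance-level public IP (PIP) to an existing classic VM.
Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" | Set-AzurePublicIP -PublicIPName "myvm-pip" | Update-AzureVM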


Azure VM may fail to activate over ExpressRoute


Customers can advertise a default route (also known as forced tunneling) over their ExpressRoute circuit to force all internet-destined traffic to be routed through their on-premises infrastructure and out their on-premises edge devices. This enables customers to leverage existing on-premises investments, such as security appliances or WAN accelerators, to manage traffic leaving their Azure virtual networks.

Azure VMs running a Windows guest need connectivity to kms.core.windows.net to activate. Activation requests coming from an Azure VM must have an Azure public IP address as the source IP in order to successfully activate against kms.core.windows.net. As a result of the forced tunneling, activation will fail because the activation request is seen as coming from the customer’s on-premises edge instead of from an Azure public IP. SUSE’s update servers use similar logic, so are also susceptible to this problem.
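A quick way to check whether the KMS endpoint is reachable from inside the VM is a TCP test against port 1688, the port Windows activation uses (Test-NetConnection is available on Windows Server 2012 R2 and later):

# Test connectivity from the Azure VM to the Azure KMS endpoint.
Test-NetConnection -ComputerName kms.core.windows.net -Port 1688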

There are two ways to mitigate the activation problem. Customers can mitigate this by enabling public peering for their ExpressRoute circuit. With public peering, both on-premises and Azure VM traffic destined for Azure public services (e.g. Azure Storage, Azure SQL Database) is routed across the ExpressRoute circuit. Without forced tunneling, this traffic hairpins at the Azure side of the ExpressRoute circuit.  With forced tunneling enabled, this traffic hairpins at the customer’s side of the ExpressRoute circuit.

Due to the hairpinning, public peering isn’t ideal for customers who want to inspect all outgoing traffic without having to implement a network virtual appliance in each of their Azure VNETs. Therefore, some customers want a more granular solution. These customers can employ a User Defined Route (UDR) to route activation traffic directly to the Azure activation host IPs rather than through the on-premises infrastructure. This provides the best of both situations and does not change the security risk.

For Azure Resource Manager, it can be implemented as follows:

# First, we will get the virtual network. In this case, I’m getting virtual network ArmVNet-DM in Resource Group ArmVNet-DM
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "ArmVNet-DM" -Name "ArmVNet-DM"

# Next, we create a route table and specify that traffic bound to the KMS IP (23.102.135.246) will go directly out
$RouteTable = New-AzureRmRouteTable -Name "ArmVNet-DM-KmsDirectRoute" -ResourceGroupName "ArmVNet-DM" -Location "centralus"
Add-AzureRmRouteConfig -Name "DirectRouteToKMS" -AddressPrefix 23.102.135.246/32 -NextHopType Internet -RouteTable $RouteTable
Set-AzureRmRouteTable -RouteTable $RouteTable

# Apply KMS direct route table to the subnet (in this case, I will apply it to the subnet named Subnet-1)
$forcedTunnelVNet = $vnet.Subnets | ? Name -eq "Subnet-1"
$forcedTunnelVNet.RouteTable = $RouteTable
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

For Classic Virtual Networks, it can be implemented as follows:

# First, we will create a new route table
New-AzureRouteTable -Name "VNet-DM-KmsRouteTable" -Label "Route table for KMS" -Location "Central US"

# Next, get the route table that was created
$rt = Get-AzureRouteTable -Name "VNet-DM-KmsRouteTable"

# Next, create a route
Set-AzureRoute -RouteTable $rt -RouteName "AzureKMS" -AddressPrefix "23.102.135.246/32" -NextHopType Internet

# Apply KMS route table to the subnet (in this case, I will apply it to the subnet named Subnet-1)
Set-AzureSubnetRouteTable -VirtualNetworkName "VNet-DM" -SubnetName "Subnet-1" -RouteTableName "VNet-DM-KmsRouteTable"

If you are using SUSE images, you will want to do something similar.  Instead of specifying 23.102.135.246/32, specify the SUSE update server IP for your region.  The IP addresses for each region are listed here: https://susepubliccloudinfo.suse.com/v1/microsoft/servers/smt.xml.  Note, you may want to specify both the Windows KMS and the SUSE update server.

For Azure Resource Manager:

Change:

Add-AzureRmRouteConfig -Name "DirectRouteToKMS" -AddressPrefix 23.102.135.246/32 -NextHopType Internet -RouteTable $RouteTable

To the following (assuming the Virtual Network was in Central US):

Add-AzureRmRouteConfig -Name "DirectRouteToKMS" -AddressPrefix 23.102.135.246/32 -NextHopType Internet -RouteTable $RouteTable
Add-AzureRmRouteConfig -Name "DirectRouteToSUSE1" -AddressPrefix 23.101.123.131/32 -NextHopType Internet -RouteTable $RouteTable
Add-AzureRmRouteConfig -Name "DirectRouteToSUSE2" -AddressPrefix 23.101.127.162/32 -NextHopType Internet -RouteTable $RouteTable

For Classic Virtual Networks:

Change:

Set-AzureRoute -RouteTable $rt -RouteName "AzureKMS" -AddressPrefix "23.102.135.246/32" -NextHopType Internet

To the following (assuming the Virtual Network was in Central US):

Set-AzureRoute -RouteTable $rt -RouteName "AzureKMS" -AddressPrefix "23.102.135.246/32" -NextHopType Internet
Set-AzureRoute -RouteTable $rt -RouteName "SUSEUpdate1" -AddressPrefix "23.101.123.131/32" -NextHopType Internet
Set-AzureRoute -RouteTable $rt -RouteName "SUSEUpdate2" -AddressPrefix "23.101.127.162/32" -NextHopType Internet

 

IaaSAntimalware Extension Status NotReady if Installed with no Configuration


The Microsoft Antimalware extension (IaaSAntimalware) requires a minimum configuration when installed, otherwise its status will be NotReady. When you add the IaaSAntimalware extension using the Azure management portal, that minimum configuration is included by default, but when you add the extension using PowerShell, you must remember to include it.

You can view extension status in the Azure management portal or with Azure PowerShell.

Get-AzureVM example for checking extension status of a classic (a.k.a. V1) VM:

(Get-AzureVM -ServiceName mycloudservice -Name myvm).ResourceExtensionStatusList

Get-AzureRmVM example for checking status of a resource manager (a.k.a. V2) VM:

Get-AzureRmVM -ResourceGroupName myresourcegroup -VMName myvm -Status

For V1 VMs, you can use either the Set-AzureVMMicrosoftAntimalwareExtension cmdlet or the Set-AzureVMExtension cmdlet to install the IaaSAntimalware extension in the VM.

Set-AzureVMMicrosoftAntimalwareExtension example:

Get-AzureVM -ServiceName mycloudservice -Name myvm | Set-AzureVMMicrosoftAntimalwareExtension -AntimalwareConfiguration '{"AntimalwareEnabled": true}' -Version * | Update-AzureVM

Set-AzureVMExtension example:

Get-AzureVM -ServiceName mycloudservice -Name myvm | Set-AzureVMExtension -Publisher Microsoft.Azure.Security -ExtensionName IaaSAntimalware -PublicConfiguration '{"AntimalwareEnabled": true}' -Version * | Update-AzureVM

For V2 VMs, you can use the Set-AzureRmVMExtension cmdlet:

Set-AzureRmVMExtension -ResourceGroupName myresourcegroup -VMName myvm -Name IaaSAntimalware -Publisher Microsoft.Azure.Security -ExtensionType IaaSAntimalware -TypeHandlerVersion 1.3 -SettingString '{"AntimalwareEnabled":true}' -Location westus

Both of the V1 cmdlets above let you specify an asterisk as a wildcard, e.g. -Version * to install the latest version of the extension, or you can specify an explicit version, e.g. -Version 1.3.

The V2 cmdlet requires an explicit version, e.g. -TypeHandlerVersion 1.3.

The above example uses IaaSAntimalware for both -Name and -ExtensionType, but you could use any string for -Name since that is a friendly name/display name you are giving that instance of the extension in the VM.

You can use the Get-AzureVMAvailableExtension V1 cmdlet to determine the latest version of an extension, because extension versions are the same for V1 and V2 VMs.

For example, to find out what is the latest version of the IaaSAntimalware extension:

PS C:\> Get-AzureVMAvailableExtension | where ExtensionName -eq IaaSAntimalware

Publisher : Microsoft.Azure.Security
ExtensionName : IaaSAntimalware
Version : 1.3
Label : Microsoft Antimalware
Description : Microsoft Antimalware
PublicConfigurationSchema :
PrivateConfigurationSchema :
IsInternalExtension : False
SampleConfig : {"PublicConfig":"{\"AntimalwareEnabled\":true}","PrivateConfig":null}
ReplicationCompleted : True
Eula : http://azure.microsoft.com/en-us/support/legal/subscription-agreement/
PrivacyUri : http://azure.microsoft.com/en-us/support/legal/privacy-statement/
HomepageUri : http://go.microsoft.com/fwlink/?LinkId=398023
IsJsonExtension : True
DisallowMajorVersionUpgrade : False
SupportedOS :
PublishedDate : 10/26/2015 2:48:58 PM
CompanyName : Microsoft Corporation
Regions :

The IaaSAntimalware configuration is documented in the following locations:

  1. Set-AzureVMMicrosoftAntimalwareExtension – has examples for installing it on V1 VMs, including configuration examples.
     
  2. Azure Windows VM Extension Configuration Samples – useful page with config samples for not just IaaSAntimalware, but also CustomScriptExtension, VMAccessAgent, DSC, IaaSDiagnostics, MicrosoftMonitoringAgent, SymantecEndpointProtection, TrendMicroDSA, VormetricTransparentEncryptionAgent, PuppetEnterpriseAgent, McAfeeEndpointSecurity, ESET, DatadogWindowsAgent, ConferForAzure, CloudLinkSecureVMWindowsAgent, BarracudaConnectivityAgent, AlertLogicLM, and ChefClient.
     
  3. Create a Windows VM with Anti-Malware extension enabled – JSON template for creating V2 VM with IaaSAntimalware installed.
     
  4. Microsoft Antimalware Whitepaper – not yet updated with V2 VM examples, but the config hasn’t changed and that is documented in Appendix A.

AutoAdminLogon registry value reset to 0 after reboot


Recently a customer had an issue where they were setting some registry values in their Azure VM by running a script using the guest agent CustomScriptExtension.

After reboot, one of the values was reset to 0, and another was completely removed.

This behavior is by design for Windows for the specific registry values that were being set, and has nothing to do with Azure or the CustomScriptExtension.

The customer was setting AutoAdminLogon and DefaultPassword so the VM would automatically log on and perform some post-deployment tasks.

But AutoAdminLogon is only effective until AutoLogonCount reaches 0, so the customer also needed to set AutoLogonCount high enough that AutoAdminLogon remained effective for the required number of restarts of the VM.

From MSDN – https://msdn.microsoft.com/en-us/library/windows/desktop/aa378750(v=vs.85).aspx 

If the AutoAdminLogon key value is present and contains a one, and if the AutoLogonCount key value is present and is not zero, AutoLogonCount will determine the number of automatic logons that occur. Each time the system is restarted, the value of AutoLogonCount will be decremented by one, until it reaches zero. When AutoLogonCount reaches zero, no account will be logged on automatically, the AutoLogonCount key value and DefaultPassword key value, if used, will be deleted from the registry, and AutoAdminLogon will be set to zero.
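
For illustration, a minimal PowerShell sketch of setting these values (the account name and password are placeholders; note that DefaultPassword is stored in clear text, so treat this as a temporary, deployment-time setting):

$winlogon = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

# Enable automatic logon for the placeholder account
Set-ItemProperty -Path $winlogon -Name AutoAdminLogon -Value "1"
Set-ItemProperty -Path $winlogon -Name DefaultUserName -Value "contoso\deployadmin"
Set-ItemProperty -Path $winlogon -Name DefaultPassword -Value "PlaceholderPassword"

# Keep the automatic logon effective across several restarts
Set-ItemProperty -Path $winlogon -Name AutoLogonCount -Value 5 -Type DWord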

Endpoint Load Balancing Health Probe Configuration Details


Azure load balanced endpoints enable a port to be configured on multiple role instances or virtual machines in the same hosted service.  The Azure platform has the ability to add and remove role instances based upon the instance health to achieve high availability of the load balanced endpoint (VIP and port combination).

Customers can configure a probe to perform health detection of a particular instance by probing a specified port. There are two types of probes: TCP and HTTP.

Example configuring probe via the portal:

Configuring a probe via PowerShell example:

PS C:\> Set-AzureLoadBalancedEndpoint -ServiceName "ContosoService" -LBSetName "LBSet01" -Protocol "TCP" -LocalPort 80 -ProbeProtocolTCP -ProbePort 8080 -ProbeIntervalInSeconds 40 -ProbeTimeoutInSeconds 80
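
For comparison, an HTTP probe can be configured with the same cmdlet. This is a hedged sketch; the probe path below is a placeholder for a health-check page hosted by your application:

Set-AzureLoadBalancedEndpoint -ServiceName "ContosoService" -LBSetName "LBSet01" -Protocol "TCP" -LocalPort 80 -ProbeProtocolHTTP -ProbePath "/healthcheck.aspx" -ProbePort 80 -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31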

Example via csdef (PaaS): LoadBalancerProbes

The number of successful or failed probes required to mark an instance up or down is calculated for you. This SuccessFailCount value is equal to the timeout divided by the probe frequency. When the probe is configured through the portal, the timeout is set to two times the frequency (timeout = frequency * 2).
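For example, with the TCP probe configured above via PowerShell (-ProbeIntervalInSeconds 40 and -ProbeTimeoutInSeconds 80), SuccessFailCount = 80 / 40 = 2, so two consecutive failed probes are needed to mark the instance down, and two consecutive successful probes to mark it up again.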

HTTP based probes perform an HTTP GET request against the specified (relative) URL. The probe marks the role instance down when:

  1. The HTTP application returns an HTTP response code other than 200 (e.g. 403, 404, or 500). This is considered a positive acknowledgment that the application instance wants to be taken out of service right away.
  2. In the event the HTTP server does not respond at all after the timeout period. Note that depending on the timeout value set, this might mean multiple probe requests go unanswered before marking probe as down (i.e. SuccessFailCount probes are sent).
  3. When the server closes the connection via a TCP reset.

TCP based probes initiate a connection by performing a three way handshake. TCP based probes mark the role instance down when:

  1. In the event the TCP server does not respond at all after the timeout period. Note that depending on timeout value set, this might mean multiple probe requests go unanswered before marking probe as down (i.e. SuccessFailCount probes are sent).
  2. A TCP reset is received from the role instance. 

TCP and HTTP probes are considered healthy and mark the role instance as UP when:

  1. Upon the first time the VM boots and the LB gets a positive probe
  2. The number of successful probes meets the SuccessFailCount threshold described above. If a role instance was previously marked down, SuccessFailCount successful probes in a row are required to mark it as UP.

If the health of a role instance is fluctuating, the Azure load balancer waits longer before putting the role instance back into the healthy state. This is done by policy to protect the user and the infrastructure.

Additionally, the probe configuration of all load balanced instances for an endpoint (load balanced set) must be the same. This means you cannot have a different probe configuration (e.g. local port, timeout) for each role instance or virtual machine in the same hosted service for a particular endpoint combination.

Azure networking – Public IP addresses in Classic vs ARM


This post was contributed by Stefano Gagliardi, Pedro Perez, Telma Oliveira, and Leonid Gagarin

As you know, we recently introduced the Azure Resource Manager deployment model as an enhancement of the previous Classic deployment model. Read here for more details on them https://azure.microsoft.com/en-us/documentation/articles/resource-manager-deployment-model/

There are important differences between the two models on several aspects spanning different technologies in Azure. In this article we wanted to clarify in particular what has changed when it comes to public IP addresses that you can assign to your resources.

Azure Service Management / Classic deployment / v1

In ASM, we have the concept of a Cloud Service.

The Cloud Service is a container of instances, either IaaS VMs or PaaS roles.
(more about cloud services here
http://blogs.msdn.com/b/plankytronixx/archive/2014/04/24/i-m-confused-between-azure-cloud-services-and-azure-vms-what-the.aspx )

Cloud Services are bound to a public IP address called the VIP, and have a name registered in the public DNS infrastructure with the cloudapp.net suffix.

For example my Cloud Service is called chicago and has 23.101.69.53 as a VIP.

 

Note: it is possible to assign multiple VIPs to the same cloud service
https://azure.microsoft.com/en-us/documentation/articles/load-balancer-multivip/

It is also possible to reserve a cloud service VIP so that you do not risk the VIP changing when VMs restart.
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-reserved-public-ip/

In the Azure Service Management model, you deploy IaaS VMs inside Cloud Services. You can reach resources located on the VM from the Internet only on specific TCP/UDP ports (no ICMP!) for which you have created endpoints.

 

(read here for more info about endpoints https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-set-up-endpoints/ )

Endpoints are simply a mapping between a certain private port on the VM's internal dedicated IP address (DIP) and a public port to be opened on the cloud service public IP (VIP). Azure takes care of all the NATting; you do not need to configure anything else.

Notice that in ASM you do not necessarily need to add the VM to a Virtual Network. If you do, the VM will have a DIP in the private address range of your choice; otherwise, Azure assigns the VM a random internal IP. In some datacenters, a VM that is not in a VNET can receive a public IP address as its DIP, but the machine will not be reachable from the internet on that IP! It will, again, be reachable only on the endpoints of the VIP.

Security is taken care of by Azure for you as well: no connection will ever be possible from the outside on ports for which you have not defined an endpoint. Traffic on opened ports can be filtered by means of Endpoint ACLs https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-acl/ or Network Security Groups https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/

Now, you can deploy several IaaS VMs inside the same cloud service.
By default, all VMs in the same cloud service will inherit the same cloud service public IP address (VIP).

This has a consequence: you cannot expose different services on different VMs on the public internet using the same public port. You will need to create an endpoint on each VM referencing a different public port.

For example, in order to connect via RDP to vm1 and vm2 in my chicago Cloud Service, I have done the following:

  1. Created an endpoint on vm1 that maps internal port 3389 to public port 50001
  2. Created an endpoint on vm2 that maps internal port 3389 to public port 50002

I can then connect to vm1 via chicago.cloudapp.net:50001 and to vm2 via chicago.cloudapp.net:50002.
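
A minimal sketch of creating those two endpoints with the classic PowerShell cmdlets (assuming the service and VM names above):

Get-AzureVM -ServiceName "chicago" -Name "vm1" | Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50001 | Update-AzureVM

Get-AzureVM -ServiceName "chicago" -Name "vm2" | Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50002 | Update-AzureVM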

It is worth noting that the client you start RDP from does not need any knowledge of the destination machine's hostname (vm1, vm2). Also, the destination machine is completely unaware of the Cloud Service, its public DNS name and its VIP. The machine is just listening on its private DIP on the appropriate ports. You will not be able to see the VIP on the Azure VM's network interface.

You can however create load-balanced endpoints to expose the same port of different VMs to the same port on the VIP (think of an array of web servers to handle http requests for the same website).

There is a limit of 150 endpoints that you can open on a cloud service (check here for updated information on service limits: https://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/).

This means that you cannot open the whole range of dynamic TCP ports for a VM. If you have applications that need to be contacted on dynamic TCP ports (for example, passive FTP), you may want to consider assigning your machine an Instance Level Public IP (ILPIP). ILPIPs are assigned exclusively to the VM and are not shared with other VMs in the same cloud service. Hence, the whole range of TCP/UDP ports is available, with a 1-to-1 mapping between the public port on the ILPIP and the private port on the VM's DIP – again, no ICMP!
(more about ILPIPs here
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-instance-level-public-ip/)
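
For illustration, a minimal sketch of requesting an ILPIP for a classic VM with PowerShell (the PIP name is a placeholder):

Get-AzureVM -ServiceName "chicago" -Name "vm1" | Set-AzurePublicIP -PublicIPName "vm1-pip" | Update-AzureVM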

The ILPIP does not replace the cloud service VIP; it is just an additional public IP for the VM. However, the VM uses the ILPIP as its outbound IP address.

Note that you do not need to open endpoints for ILPIPs, as there is no NAT involved. All TCP/UDP ports are "open" by default, so make sure you take care of security with a proper firewall configuration on the guest VM and/or by applying Network Security Groups.

As of January 2016, you cannot reserve an ILPIP address as static for your classic VM. Check back on these official documents for future announcements.
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-reserved-public-ip/
https://azure.microsoft.com/en-us/updates/static-public-ip-addresses-available-for-azure-virtual-machines/

Azure Resource Manager / ARM / v2

In this new deployment model, we have changed how Azure works under the covers. In ARM, there is no longer a Cloud Service concept; instead we have the Resource Group. While you can still think of the Resource Group as a container for your VMs (and other resources), it is very different from the Cloud Service.

What is interesting from a networking perspective is that the Resource Group does not have a VIP bound to it by default. Also, in ARM every VM must be deployed in a Virtual Network.

ARM has introduced the concept of the Public IP, an object that can be bound to VM NICs, load balancers and other PaaS instances like VPN or Application gateways.

As you create VMs, you will then assign them a NIC and a public IP. The public IP will be different for every VM. Simplifying, in the ARM model all VMs will have their own public IP (it’s like they were classic VMs with an ILPIP).

Hence, you no longer need to open endpoints as you do in ASM/Classic, because all ports are potentially open and no longer NATted: there are no endpoints in ARM.
You again need to take care of your VM's security with a proper firewall configuration on the guest VM and/or by applying Network Security Groups.
Notice that, as a security enhancement over the classic model, an NSG is automatically assigned to every VM with a single rule allowing RDP traffic on port 3389. You will need to modify the NSG to open other TCP/UDP ports.
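
For example, a minimal sketch of adding an inbound rule to an existing NSG to allow HTTP (the NSG and resource group names are placeholders):

$nsg = Get-AzureRmNetworkSecurityGroup -Name "myvm-nsg" -ResourceGroupName "myresourcegroup"

$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTP" -Access Allow -Protocol Tcp -Direction Inbound -Priority 1010 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 80 | Set-AzureRmNetworkSecurityGroup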

 

By default, all public IP addresses in ARM are dynamic.
You can now change the public IP address assigned to a NIC bound to a Virtual Machine to static.
https://azure.microsoft.com/en-us/updates/static-public-ip-addresses-available-for-azure-virtual-machines/

Note: you will need to stop/deallocate the VM to make this effective; it does not work on a running VM, so plan some downtime ahead. Then you will have to perform something like the following:

#create a new static public IP

$PubIP = New-AzureRmPublicIpAddress -Name $IPname -ResourceGroupName $rgname -AllocationMethod Static -Location $location


#fetch the current NIC of the VM

$NIC = Get-AzureRmNetworkInterface -Name $NICname -ResourceGroupName $rgname


#assign the new public static IP to the NIC

$NIC.IpConfigurations.publicIPaddress.id = $PubIP.Id


#commit changes

Set-AzureRmNetworkInterface -NetworkInterface $NIC

This is a sample script: consider extensive testing before applying any kind of change in a production environment.

Now, there are circumstances in which you would still like to take advantage of port forwarding/NATting in ARM, just as endpoints did in classic. This is possible: you will have to reproduce the V1 mechanism of traffic going through a load balancer.

However be aware of the requirements:

  • You will have to stop/deallocate the VM. This will cause downtime.
  • In order to add a VM to a load balancer, it must be in an availability set.
  • You are going to use the load balancer's IP address to connect to the VM on the NATted port, not the VM's public IP.

Once you’re ok with the above, the procedure is “simple” and can be derived from here

https://azure.microsoft.com/en-us/documentation/articles/load-balancer-get-started-internet-arm-ps/

 

For your reference, here is the sample script I have used.

 #necessary variables

$vmname="<the name of the VM>"

$rgname="<the name of the Resource Group>"

$vnetname="<the name of the Vnet where the VM is>"

$subnetname="<the name of the Subnet where the VM is>"

$location="West Europe"

 

#This creates a new loadbalancer and creates a NAT rule from public port 50000 to private port 80

$publicIP = New-AzureRmPublicIpAddress -Name PublicIP -ResourceGroupName $rgname -Location $location -AllocationMethod Static

$externalIP = New-AzureRmLoadBalancerFrontendIpConfig -Name LBconfig -PublicIpAddress $publicIP

$internaladdresspool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "LB-backend"

$inboundNATRule1 = New-AzureRmLoadBalancerInboundNatRuleConfig -Name "natrule" -FrontendIpConfiguration $externalIP -Protocol TCP -FrontendPort 50000 -BackendPort 80

$NRPLB = New-AzureRmLoadBalancer -ResourceGroupName $rgname -Name IrinaLB -Location $location -FrontendIpConfiguration $externalIP -InboundNatRule $inboundNATRule1 -BackendAddressPool $internaladdresspool

 

#These retrieve the vnet and VM settings (necessary for later steps)

$vm= Get-AzureRmVM -name $vmname -ResourceGroupName $rgname

$vnet = Get-AzureRmVirtualNetwork -Name $vnetname -ResourceGroupName $rgname

$internalSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $subnetname -VirtualNetwork $vnet

 

#This creates a new NIC with the LB settings

$lbNic= New-AzureRmNetworkInterface -ResourceGroupName $rgname -Name LBnic -Location $location -Subnet $internalSubnet -LoadBalancerBackendAddressPool $nrplb.BackendAddressPools[0] -LoadBalancerInboundNatRule $nrplb.InboundNatRules[0]

 

#This removes the old NIC from the VM

Remove-AzureRmVMNetworkInterface -vm $vm -NetworkInterfaceIDs $vm.NetworkInterfaceIDs[0]

 

#This adds the new NIC we just created to the VM

Add-AzureRmVMNetworkInterface -vm $vm -id $lbNic.id -Primary

 

#This Stops the VM

Stop-AzureRmVM -Name $vmname -ResourceGroupName $rgname

 

#This commits changes to the Fabric

Update-AzureRmVM -vm $vm -ResourceGroupName $rgname

 

#This restarts the VM

Start-AzureRmVM -Name $vmname -ResourceGroupName $rgname

 

After this, you can access port 80 on the VM by accessing port 50000 on the load balancer’s $publicIP.

Again, this is a sample script: consider extensive testing before applying any kind of change in a production environment.

 

Impact of Cisco March 2016 Vulnerabilities on Azure


Microsoft evaluates the security of its infrastructure on an ongoing basis, and part of this evaluation includes working with our vendors, the open source community and internal test labs to identify and mitigate critical security issues. On March 2nd, Cisco released its bi-annual security bulletin, which included advisories affecting equipment used by many Cloud Service Providers, including Microsoft. We are currently reviewing the details of this bulletin and determining what, if any, actions need to be taken to remediate the risk.

If you need more details on the specific vulnerabilities disclosed, please see https://tools.cisco.com/security/center/publicationListing.x.

UPDATE: We have completed our evaluation and updated all impacted hardware.

Sending E-mail from Azure Compute Resource to External Domains


Sending outbound e-mail to external domains (such as outlook.com, gmail.com, etc) directly from an e-mail server hosted in Azure compute services is not supported due to the elastic nature of public cloud service IPs and the potential for abuse.  As such, the Azure compute IP address blocks are added to public block lists (such as the Spamhaus PBL).  There are no exceptions to this policy.

 

The supported way to send e-mail to external domains from Azure compute resources is via an SMTP relay (otherwise known as an SMTP smart host). The Azure compute resource sends the e-mail to the SMTP relay, and the SMTP relay provider then delivers the e-mail to the external domain. Microsoft Exchange Online Protection is one provider of an SMTP relay, but there are a number of 3rd party providers as well. We list some pointers to SMTP relay services below, but it is not a complete list. Please note that you need to set up an account with the SMTP relay provider first and then configure your Azure server or application to send outbound e-mail via the SMTP relay.

 

For customers running an e-mail service for their organization in Azure, Exchange Online Protection is an ideal solution as it provides message hygiene both inbound and outbound.  For more information on Exchange Online Protection, go here.

 

For customers running applications that generate newsletters, marketing materials, and other bulk e-mail, we recommend a service such as SendGrid that specializes in that type of message delivery.

 

Below is documentation for how to configure popular e-mail server products that you may be running in Azure to send mail via an SMTP relay. These instructions show how to configure e-mail to be sent via the SMTP relay instead of directly to the external domain.

Product configuration documents:

  • Microsoft Exchange Server: https://technet.microsoft.com/en-us/library/jj673059(v=exchg.160).aspx
  • Sendmail: https://sendgrid.com/docs/Integrate/Mail_Servers/sendmail.html
  • Postfix: https://sendgrid.com/docs/Integrate/Mail_Servers/postfix.html

 

Additionally, many applications that send e-mail allow custom SMTP server settings.  They can also send mail to the SMTP relay provider in the same fashion and that is a supported scenario.
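
As a simple illustration of that pattern, a minimal sketch of sending through a smart host from PowerShell (the relay hostname, port, and addresses below are placeholders to be replaced with the values from your SMTP relay provider):

$cred = Get-Credential   # the account issued by your SMTP relay provider

Send-MailMessage -SmtpServer "smtp.relayprovider.example" -Port 587 -UseSsl -Credential $cred -From "app@contoso.com" -To "user@example.com" -Subject "Test via SMTP relay" -Body "Sent from an Azure VM through an SMTP smart host."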

VM stuck in “Updating” when NSG rule restricts outbound internet connectivity


An Azure VM may experience the following symptoms if you have a network security group (NSG) rule configured to deny outbound internet connectivity:

  1. When you create a new VM the status remains on Updating.
  2. When you update a VM agent extension on an existing VM, the VM status remains on Updating.
  3. When you update a VM agent extension from Azure Powershell, after 60 minutes the command fails with error Long running operation failed with status ‘Failed’. ErrorCode: VMExtensionProvisioningError ErrorMessage: Multiple VM extensions failed to be provisioned on the VM. Please see the VM extension instance view for details.
  4. When you check the VM’s instance view by running Get-AzureRmVM -resourcegroupname <resourcegroupname> -name <name> -status, you see VMAgent shows message: VM Agent is unresponsive.

The VM agent requires internet connectivity to connect to Azure storage to update extension status (in a .status file in the VM’s storage account) as well as to download the extensions themselves into the VM.

To restrict internet connectivity while still allowing the required VM agent connectivity, add NSG rules permitting internet connectivity to only the Azure public IP address ranges for the region where the VM resides.

See the following blog post for steps to configure an NSG to allow traffic to Azure public IP ranges:

Step-by-Step: Automate Building Outbound Network Security Groups Rules via Azure Resource Manager (ARM) and PowerShell
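
For illustration only, here is a minimal sketch of one such outbound allow rule (the NSG name, resource group, and address prefix are placeholders; in practice you would generate one rule per published Azure datacenter IP range for your region, which the post above automates):

$nsg = Get-AzureRmNetworkSecurityGroup -Name "myvm-nsg" -ResourceGroupName "myresourcegroup"

# Placeholder range: substitute the published Azure datacenter IP ranges for the VM's region
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-AzureRegion-Out" -Access Allow -Protocol * -Direction Outbound -Priority 100 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix "13.65.0.0/16" -DestinationPortRange * | Set-AzureRmNetworkSecurityGroup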

VM status stuck on “Updating” (screenshot)

VM extensions show status “unavailable” (screenshot)

Network security group outbound security rule configured to deny all outbound internet connectivity (screenshots)

 

Cross-subscription circuit links that cross the ARM/classic boundary


While enabling an ARM circuit for use with classic deployments is fairly straightforward on its own, it can be confusing to do so as part of creating cross-subscription circuit links with classic deployments. To do the circuit links, you have to switch between ARM and classic mode while simultaneously switching subscriptions. This can lead to some serious confusion, so I thought it would be worthwhile to document the exact steps and the corresponding PowerShell commands.

 This walkthrough assumes you have two subscriptions named as follows:

 1)      Subscription A that contains your ARM circuit

2)      Subscription B that contains your ASM VNET

 First, let’s log into ARM under Subscription A so we can enable the circuit for classic operations.

 # Sign in to your Azure Resource Manager environment

$SubscriptionA="GUID for Subscription A"

$SubscriptionB="GUID for Subscription B"

 Login-AzureRmAccount

 # Select the appropriate Azure subscription

Get-AzureRmSubscription -SubscriptionId $SubscriptionA | Select-AzureRmSubscription

 # Get details of the ExpressRoute circuit

$ckt = Get-AzureRmExpressRouteCircuit -Name "DemoCkt" -ResourceGroupName "DemoRG"

 # Set “Allow Classic Operations” to TRUE

$ckt.AllowClassicOperations = $true

 # Update circuit

Set-AzureRmExpressRouteCircuit -ExpressRouteCircuit $ckt

Now, let’s create the circuit link authorization for the classic VNET. Since we are creating an authorization for use by a classic deployment in Subscription B, you will use the classic commands. Note that since Subscription A owns the circuit, we are still logging in as Subscription A.

# Sign in to your classic environment

Add-AzureAccount

# Select the appropriate Azure subscription

Select-AzureSubscription -SubscriptionId $SubscriptionA

# Create the classic authorization

New-AzureDedicatedCircuitLinkAuthorization -ServiceKey $ckt.ServiceKey -Description "Dev-Test Links" -Limit 2 -MicrosoftIds 'devtest@contoso.com'

Description         : Dev-Test Links

Limit               : 2

LinkAuthorizationId : **********************************

MicrosoftIds        : devtest@contoso.com

Used                : 0

Next, log into Subscription B. Because you want to work with a classic VNET, you need to use classic mode. Since you have already logged in under classic mode, all you need to do is switch to Subscription B.

# Select the appropriate Azure subscription

Select-AzureSubscription -SubscriptionId $SubscriptionB

Now, use the authorization. Again, since we are linking to a classic VNET, we will continue to use the classic commands.

# Use the classic authorization

New-AzureDedicatedCircuitLink -ServiceKey $ckt.ServiceKey -VnetName 'ClassicVNET1'

State VnetName

—– ——–

Provisioned ClassicVNET1


Microsoft Azure: How to execute a synchronous Azure PowerShell cmdlet multiple times at once, using a single PowerShell session


 

Overview

Often times, Microsoft Azure customers have requirements to create multiple resources of the same type, and they wish to have these resources created as quickly as possible in a scripted solution.

Many of the Azure PowerShell cmdlets are synchronous in nature, where the cmdlet will not return until provisioning is complete.

Synchronous operations in PowerShell can significantly slow down a scripted deployment, and the purpose of this post is to help Azure customers script synchronous cmdlets in multiple threads within a single PowerShell session to speed up deployment times.

The example we will demonstrate is the creation of PublicIpAddress resources in Azure Resource Manager (ARM). The cmdlet used is: New-AzureRmPublicIpAddress

 

Disclaimer

This post contains a link to a PowerShell code sample. This sample is being provided only as a sample, and is not intended to serve as a solution to any technical issue. Microsoft provides this code sample without support, warranties, and confers no rights. By executing any portion of this code sample, you are agreeing to do so at your own risk.

 

Sample Scenario

  • You have a requirement to create 15 PublicIpAddress resources in Azure
  • You have written a simple for loop like this: for($i=0;$i -lt 15;$i++){New-AzureRmPublicIpAddress -Name "piptest$($i)" -ResourceGroupName contosoRg -Location "East US" -AllocationMethod Dynamic}
  • Result: 15 PublicIpAddress resources are created in roughly 8 minutes, 30 seconds. This is not fast enough, so you decide to try PowerShell jobs, like this: Start-Job -Name createPip -ScriptBlock {New-AzureRmPublicIpAddress -Name "piptest1" -ResourceGroupName contosoRg -Location "East US" -AllocationMethod Dynamic}
  • Result: The PowerShell job fails because you need to execute Login-AzureRmAccount. This happens because Start-Job does not execute within the context of your existing PowerShell session; the Azure PowerShell module expects you to authenticate in each job.

Final Solution

There is a trick to making this work within a single Azure PowerShell session. The key is the use of [PowerShell]::Create(), passing in and then invoking a script block with arguments.
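
To illustrate the pattern before diving into the full sample, here is a minimal sketch (assumptions: the AzureRM module is installed and Login-AzureRmAccount has already been run in the current session; the resource group and name prefix are placeholders). The full, parameterized sample is linked below.

# Start one in-process pipeline per resource; they share this session's Azure authentication
$pipelines = @()
for ($i = 1; $i -le 15; $i++) {
    $ps = [PowerShell]::Create()
    [void]$ps.AddScript({
        param($name, $rgName, $location)
        New-AzureRmPublicIpAddress -Name $name -ResourceGroupName $rgName -Location $location -AllocationMethod Dynamic
    }).AddArgument("PipTest$i").AddArgument("ContosoRg").AddArgument("East US")
    $pipelines += [pscustomobject]@{ PowerShell = $ps; Handle = $ps.BeginInvoke() }
}

# Wait for all pipelines to complete, then release them
foreach ($p in $pipelines) {
    $p.PowerShell.EndInvoke($p.Handle) | Out-Null
    $p.PowerShell.Dispose()
}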

  1. Save a .ps1 file containing the following code sample:

TechNet Gallery Code Sample

     2. Execute the code sample by calling the saved .ps1 file using arguments for the following:

a. Count – Number of resources you need to create

b. NamePrefix – Resources will be created as “NamePrefixY” where Y is an integer counter for the count of resources created (i.e. – “PipTest1” – “PipTest15” where “PipTest” is the NamePrefix value)

c. RgName – Name of the Azure Resource Group within your Azure subscription (this must already exist, as the script sample does not create a new Resource Group)

d. Location – A valid Azure location (i.e. – “East US” or “North Europe”)

Sample Syntax

C:\PipTest.ps1 -Count 15 -NamePrefix "PipTest" -RgName "ContosoRg" -Location "East US"

 

Sample Output

Job for PublicIpAddress PipTest1 – started

Job for PublicIpAddress PipTest2 – started

Job for PublicIpAddress PipTest3 – started

Job for PublicIpAddress PipTest4 – started

Job for PublicIpAddress PipTest5 – started

Job for PublicIpAddress PipTest6 – started

Job for PublicIpAddress PipTest7 – started

Job for PublicIpAddress PipTest8 – started

Job for PublicIpAddress PipTest9 – started

Job for PublicIpAddress PipTest10 – started

Job for PublicIpAddress PipTest11 – started

Job for PublicIpAddress PipTest12 – started

Job for PublicIpAddress PipTest13 – started

Job for PublicIpAddress PipTest14 – started

Job for PublicIpAddress PipTest15 – started

Completed in 0 minutes, 39 seconds

 

To conclude, we can execute synchronous Azure PowerShell cmdlets in an asynchronous fashion while sharing Azure authentication across script blocks in a single PowerShell session. This sample demonstrates a time advantage of nearly 8 minutes for deployment of 15 resources of a single resource type (PublicIpAddress). This sample could be modified to utilize other Azure PowerShell cmdlets to suit your deployment needs.

Enjoy!
Adam Conkle – MSFT
