I recently spent some time working out whether the NBN is a viable option for IPsec connectivity to Microsoft Azure here in Australia, in place of the more expensive Microsoft ExpressRoute / peering.
OK so to demo this I am using a Palo Alto PA-220 appliance on the campus edge with a 100/40 NBN circuit (approx 70 Mbit of usable bandwidth). On the Azure side we have a standard vNet and the basic SKU virtual network gateway, which offers up to 100 Mbit of bandwidth and 10 IPsec tunnels.
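For reference, the Azure side of this can be stood up with a couple of Azure CLI commands. This is a sketch under assumptions: the resource group, vNet and public IP names below are placeholders I have made up, and a virtual network gateway deployment typically takes 30-45 minutes to complete.

```shell
# Placeholder names (rg-vpn-demo, vnet-demo, etc.) - substitute your own.
# The Basic SKU gateway uses a dynamically allocated public IP.
az network public-ip create \
  --resource-group rg-vpn-demo \
  --name pip-vng-demo \
  --allocation-method Dynamic

# Basic SKU, route-based VPN gateway (up to ~100 Mbit and 10 IPsec tunnels)
az network vnet-gateway create \
  --resource-group rg-vpn-demo \
  --name vng-demo \
  --vnet vnet-demo \
  --public-ip-address pip-vng-demo \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku Basic
```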
Since I am in Australia I am using the Azure Australia Southeast region. The PA uses route-based VPNs, which means you point the subnets at the logical tunnel interface for routing. The "Proxy ID" concept is essentially an ACL that defines which traffic is allowed through the tunnel. On the Azure side this is called the local network gateway (LNG), and the subnets on the PA and the LNG need to match for successful tunnel establishment.
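The local network gateway is where you declare the PA's public IP and the campus prefixes to Azure, and the connection object ties it to the virtual network gateway. A hedged Azure CLI sketch (the IP address, names and pre-shared key are placeholders, and the gateway names match the made-up ones above only by convention):

```shell
# 203.0.113.10 is a placeholder for the PA's NBN public IP
az network local-gateway create \
  --resource-group rg-vpn-demo \
  --name lng-campus \
  --gateway-ip-address 203.0.113.10 \
  --local-address-prefixes 10.170.0.0/16

# Site-to-site connection between the VNG and the LNG
az network vpn-connection create \
  --resource-group rg-vpn-demo \
  --name cn-campus \
  --vnet-gateway1 vng-demo \
  --local-gateway2 lng-campus \
  --shared-key 'YourPreSharedKey'
```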
Here are the guides I followed:
Palo Alto: Configuring IKEv2 IPsec VPN for Microsoft Azure Environment
Azure Site to Site IPsec
Some of the challenges I faced were with the configuration on the PA side:
1. Under the IKE Gateway advanced settings, do not tick passive mode (in passive mode the PA can only respond to an IPsec initiator, so only the Azure side would be able to bring the tunnel up).
2. If your ISP assigns your static IP address dynamically that is fine; just leave the local IP address set to None and the PA will use the interface IP address. In Australia the NBN RSPs (Retail Service Providers, e.g. iiNet / TPG) use DHCP to assign a static IP address if you have paid for one.
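Tying both points above together, the IKE gateway configuration looks roughly like this from the PA CLI. Treat it as a sketch rather than copy-paste config: the gateway name and interface are my assumptions, and exact syntax can vary between PAN-OS versions.

```
# IKEv2 to Azure; local-address uses the interface (DHCP-assigned) IP
set network ike gateway ike-gw-azure protocol version ikev2
set network ike gateway ike-gw-azure local-address interface ethernet1/1
set network ike gateway ike-gw-azure peer-address ip <azure-vng-public-ip>
set network ike gateway ike-gw-azure authentication pre-shared-key key <psk>
# Passive mode off so the PA can initiate the tunnel (point 1 above)
set network ike gateway ike-gw-azure protocol-common passive-mode no
```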
Setting up the NAT to allow campus networks to access Azure vNet:
The most difficult part is getting the NAT logic configured correctly on the PA side.
For traffic from campus 10.170.0.0/16, use a DNAT rule:
As you can see above, traffic coming into the interface destined for campus address 10.170.13.4 is destination-translated to the Azure VM 10.0.100.4.
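In PAN-OS CLI terms the DNAT rule is roughly the following. The rule name and the trust/untrust zone names are assumptions on my part; adjust them to your own zones.

```
# Campus hosts target 10.170.13.4, which is translated to the Azure VM
set rulebase nat rules campus-to-azure from trust to untrust source 10.170.0.0/16 destination 10.170.13.4 service any destination-translation translated-address 10.0.100.4
```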
For the traffic from Azure vNet 10.0.100.0/24:
In the above example, traffic initiated from the Azure vNet is SNATed to the interface IP on the PA side.
For both NATs I just used the one Azure vNet host, as this was only for demo purposes. In production you would NAT to the entire subnet.
Of course, if you had just a single site rather than a multi-branch / campus scenario, you wouldn't need the NATs at all, just routing and a security policy to permit the tunnel traffic.
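Since the PA uses route-based VPNs, the routing piece in that simpler case is just a static route pointing the vNet prefix at the logical tunnel interface, something like the sketch below (the route name and tunnel.1 are my assumptions):

```
# Send Azure vNet traffic into the IPsec tunnel interface
set network virtual-router default routing-table ip static-route to-azure-vnet destination 10.0.100.0/24 interface tunnel.1
```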
In the above example I tested against an Ubuntu VM running in the Azure vNet (10.170.13.4 -> 10.0.100.4) and saw around 50-55 ms latency to the Azure Australia Southeast region. A cost-effective, viable solution for connectivity from on-prem to Azure; use cases could include web server workloads or training environments. It was a rewarding learning experience working out how to set it all up 🙂