# Windows clusters

CAPZ enables you to create Windows Kubernetes clusters on Microsoft Azure.

To deploy a cluster using Windows, use the Windows flavor template.
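For example, with `clusterctl` (a sketch assuming a clusterctl version where `generate cluster` replaced `config cluster`; the cluster name, Kubernetes version, and machine counts below are illustrative placeholders):

```shell
# Generate a cluster manifest from the windows flavor template,
# then apply it to the management cluster.
clusterctl generate cluster my-cluster \
  --kubernetes-version v1.20.0 \
  --flavor windows \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > my-cluster.yaml

kubectl apply -f my-cluster.yaml
```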

## Deploy a workload

After your Windows VM is up and running, you can deploy a workload using the deployment file below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-1809
  labels:
    app: iis-1809
spec:
  replicas: 1
  template:
    metadata:
      name: iis-1809
      labels:
        app: iis-1809
    spec:
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        resources:
          limits:
            cpu: 1
            memory: 800m
          requests:
            cpu: .1
            memory: 300m
        ports:
          - containerPort: 80
      nodeSelector:
        "kubernetes.io/os": windows
  selector:
    matchLabels:
      app: iis-1809
---
apiVersion: v1
kind: Service
metadata:
  name: iis
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: iis-1809
```

Save this file as `iis.yaml`, then deploy it:

```shell
kubectl apply -f .\iis.yaml
```

Get the Service endpoint and curl the website:

```shell
kubectl get services
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
iis          LoadBalancer   10.0.9.47    <pending>     80:31240/TCP   1m
kubernetes   ClusterIP      10.0.0.1     <none>        443/TCP        46m
```
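Once the `EXTERNAL-IP` moves from `<pending>` to a real address, you can curl it. One way to script this (the `jsonpath` query is standard `kubectl`; the service name comes from `iis.yaml` above):

```shell
# Fetch the LoadBalancer IP of the iis service and request the default page.
EXTERNAL_IP=$(kubectl get service iis -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}"
```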



See the CAPI proposal for implementation details.

## VM and VMSS naming

Azure does not support creating Windows VMs with names longer than 15 characters (see additional historical details on these restrictions).

When creating a cluster with `AzureMachine`, if the AzureMachine name is longer than 15 characters, CAPZ takes the first 9 characters of the cluster name and appends the last 5 characters of the machine name to create a unique machine name.

When creating a cluster with `MachinePool`, if the machine pool name is longer than 9 characters, CAPZ uses the prefix `win` and appends the last 5 characters of the machine pool name.
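The truncation rules above can be sketched as follows (illustrative only; the names are made up, and this is not the actual CAPZ implementation):

```shell
# AzureMachine case: names over 15 characters are shortened to
# first 9 characters of the cluster name + last 5 of the machine name.
cluster_name="production-eastus"             # example cluster name (assumed)
machine_name="production-eastus-md-0-abcde"  # 28 chars: exceeds the 15-char limit

if [ "${#machine_name}" -gt 15 ]; then
  vm_name="${cluster_name:0:9}${machine_name: -5}"
else
  vm_name="$machine_name"
fi
echo "$vm_name"   # 14 characters, within Azure's 15-char Windows VM limit

# MachinePool case: names over 9 characters become "win" + last 5 characters.
pool_name="production-pool-windows"
if [ "${#pool_name}" -gt 9 ]; then
  pool_vm_name="win${pool_name: -5}"
else
  pool_vm_name="$pool_name"
fi
echo "$pool_vm_name"
```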

## VM password and access

The VM password is randomly generated by Cloudbase-init during provisioning of the VM. To access the VM, you can use SSH, which is configured with the SSH public key you provided during deployment.


```shell
ssh -t -i .sshkey -o 'ProxyCommand ssh -i .sshkey -W %h:%p capi@<api-server-ip>' capi@<windows-ip> powershell.exe
```

There is also a CAPZ kubectl plugin that automates the SSH connection using the management cluster.

To RDP, you can proxy through the API server:

```shell
ssh -L 5555:<windows-ip>:3389 capi@<api-server-ip>
```

Then open an RDP client on your local machine and connect to `localhost:5555`.

## Image creation

The images are built using image-builder and published to the Azure Marketplace. They use Cloudbase-init to bootstrap the machines via kubeadm.

Find the latest published images:

```shell
az vm image list --publisher cncf-upstream --offer capi-windows -o table --all
Offer         Publisher      Sku                           Urn                                                                 Version
------------  -------------  ----------------------------  ------------------------------------------------------------------  ----------
capi-windows  cncf-upstream  k8s-1dot18dot13-windows-2019  cncf-upstream:capi-windows:k8s-1dot18dot13-windows-2019:2020.12.11  2020.12.11
capi-windows  cncf-upstream  k8s-1dot19dot5-windows-2019   cncf-upstream:capi-windows:k8s-1dot19dot5-windows-2019:2020.12.11   2020.12.11
capi-windows  cncf-upstream  k8s-1dot20dot0-windows-2019   cncf-upstream:capi-windows:k8s-1dot20dot0-windows-2019:2020.12.11   2020.12.11
```
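As the URNs above show, the SKU encodes the Kubernetes version with `dot` in place of `.`. A small helper to construct a URN for a given version (the publisher, offer, and version values are taken from the listing above; treat this as a convenience sketch, not an official naming contract):

```shell
# Build the marketplace SKU/URN for a given Kubernetes version,
# following the "1.19.5" -> "k8s-1dot19dot5-windows-2019" pattern above.
k8s_version="1.19.5"
sku="k8s-${k8s_version//./dot}-windows-2019"
urn="cncf-upstream:capi-windows:${sku}:2020.12.11"
echo "$urn"
```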

If you would like to customize your images, please refer to the documentation on building your own custom images.

## Kube-proxy and CNIs

Kube-proxy and Windows CNIs are deployed via Cluster Resource Sets. Windows does not have a kube-proxy image because Windows does not support privileged containers, which would provide access to the host. The current solution uses wins.exe, as demonstrated in the kubeadm support for Windows.

Windows HostProcess container support is in KEP form, with plans to implement it in Kubernetes 1.22. Kube-proxy and the CNI deployments will then be replaced with HostProcess containers.

Flannel is being used as the default CNI. An important note for Flannel VXLAN deployments is that the MTU for the Linux nodes must be set to 1400. This is because Azure's VNET MTU is 1400, which can cause fragmentation of packets sent from a Linux node to a Windows node, resulting in dropped packets. To mitigate this, we set the Linux eth0 MTU to 1400; Flannel automatically picks this up and subtracts 50 for the overlay network it creates.
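The MTU arithmetic above works out as follows (a sketch of the numbers described in this section):

```shell
# Azure's VNET MTU is 1400; Flannel subtracts 50 bytes of VXLAN
# encapsulation overhead from the MTU it detects on eth0.
eth0_mtu=1400
vxlan_overhead=50
flannel_mtu=$((eth0_mtu - vxlan_overhead))
echo "$flannel_mtu"
```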