Kubernetes is distributed, but the networking layer is not. The nodes of a Kubernetes cluster must be able to talk to each other directly and securely. For this reason, Kubernetes clusters are almost always deployed to a single, local subnet.
This is very limiting. For scenarios involving edge computing, hybrid/multi-cloud, and IoT, it would be easiest if you could deploy a single cluster with nodes that span these environments. Sadly, there are almost no resources that will show you how to do this.
Instead, operators are left with "multi-cluster" models, which are overkill in many scenarios and involve too much resource overhead.
In this blog post, we demonstrate a distributed Kubernetes cluster using k3s and Netmaker. A similar approach will work with other k8s distributions, provided you have a compatible CNI (Flannel, Calico, Canal). The commands may differ, but the approach is the same.
The Setup
All you need are a few virtual machines and a Netmaker server. The virtual machines can live anywhere: a data center, the cloud, on your laptop. Any of these will work. However, they do need to run Linux. For our example, they're running Ubuntu 20.04.
For the Netmaker server, if you don't have one, you can follow the quickstart guide in the README.
Once you have a Netmaker server and a few VMs with Ubuntu 20.04, you're ready to go.
1. Create the Netmaker Network
On your Netmaker server, create a network. We'll call it k8s for this demonstration and give it a subnet of 10.1.1.0/24. (It can be anything as long as it doesn't conflict with the pod or service CIDR of the yet-to-be-created cluster; by default, k3s uses 10.42.0.0/16 for pods and 10.43.0.0/16 for services.)

2. Add VMs to the network
On the "Access Key" tab, create an access key. Store the "Linux Install" command that gets generated, and run it on each Ubuntu VM. It will look something like this:
Check to make sure all the machines can ping each other over their WireGuard addresses before moving on. If you have 3 machines, they should have the addresses 10.1.1.1, 10.1.1.2, and 10.1.1.3. If you are unfamiliar with WireGuard, run "wg show" on any of the nodes. It should display the interface and the peers (byte transfer and handshakes are good signs; no transfer and no handshake usually mean something has gone wrong).
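For reference, healthy "wg show" output looks roughly like this (keys, ports, endpoints, and counters will differ on your machines):

interface: nm-k8s
  public key: <this-node-public-key>
  private key: (hidden)
  listening port: <port>

peer: <peer-public-key>
  endpoint: <peer-public-ip>:<port>
  allowed ips: 10.1.1.2/32
  latest handshake: 12 seconds ago
  transfer: 5.27 KiB received, 6.98 KiB sent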
3. Install the Control Plane Node
Choose a machine to act as the control plane. We're not going to bother with an HA setup for this demo, but if you're doing this in production, definitely set it up as HA.
Run the following command, replacing 10.1.1.1 with the private WireGuard address of your machine ("nm-k8s" corresponds to the name of your network. If you named your network something else, use that interface name instead):
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --node-ip 10.1.1.1 --node-external-ip 10.1.1.1 --flannel-iface nm-k8s" sh -
Check that the k3s service is running and the node reports Ready before proceeding.
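A quick sanity check (k3s bundles kubectl, so this works on a default install):

sudo systemctl status k3s --no-pager
sudo k3s kubectl get nodes -o wide

The node should report Ready, with its INTERNAL-IP on the WireGuard address (10.1.1.1 here).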
4. Install the K3S Worker Nodes
Assuming this was successful, all your other nodes can be "workers." From the control plane node, grab the node token value from /var/lib/rancher/k3s/server/node-token.
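The file is root-owned, so read it with sudo:

sudo cat /var/lib/rancher/k3s/server/node-token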
On each worker node, run the following, replacing 10.1.1.X with the private IP of the node, and NODE_TOKEN_VALUE with the value above. Again, this assumes the control plane is running on 10.1.1.1 and the interface is nm-k8s, so adjust accordingly.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --server https://10.1.1.1:6443 --token NODE_TOKEN_VALUE --node-ip 10.1.1.X --node-external-ip 10.1.1.X --flannel-iface nm-k8s" sh -
If all has gone well, within a couple minutes, you should be all set!
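To confirm, check the agent service on each worker and then list the nodes from the control plane:

sudo systemctl status k3s-agent --no-pager     # on each worker
sudo k3s kubectl get nodes -o wide             # on the control plane

Every node should show Ready, with an INTERNAL-IP on the 10.1.1.0/24 WireGuard network.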
Test it out
You will want to run some tests to make sure everything is up and running. Specifically, you need to make sure the pod and service networks are functioning. We have a "pingtest" deployment and an "nginx" deployment + service which you can deploy to your cluster to test (manifests linked below, followed by the commands to run). Once deployed, exec into a "pingtest" pod and see if you can ping the other "pingtest" pod IP addresses. Also, attempt to "wget" the index.html file from the nginx service using its service name (nginx.default.svc.cluster.local by default).
Pingtest: https://github.com/gravitl/netmaker/blob/master/kube/example/pingtest.yaml
Nginx: https://github.com/gravitl/netmaker/blob/master/kube/example/nginx-example.yaml
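A minimal test run looks like this, using kubectl (on a stock k3s install, "sudo k3s kubectl" works out of the box). The raw-file URLs are the raw.githubusercontent.com equivalents of the links above, and the pod names and IPs are placeholders; get the real ones from "kubectl get pods -o wide":

kubectl apply -f https://raw.githubusercontent.com/gravitl/netmaker/master/kube/example/pingtest.yaml
kubectl apply -f https://raw.githubusercontent.com/gravitl/netmaker/master/kube/example/nginx-example.yaml
kubectl get pods -o wide
kubectl exec -it <pingtest-pod-name> -- ping -c 3 <other-pingtest-pod-ip>
kubectl exec -it <pingtest-pod-name> -- wget -qO- nginx.default.svc.cluster.local

If the pings succeed between pods on different nodes and the wget returns the nginx welcome page, both the pod and service networks are healthy.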
If these tests succeed, congrats! You've deployed a truly distributed Kubernetes cluster.
Conclusion
While this was a simple demonstration, we actually created something quite powerful. The Kubernetes node subnet is a fundamental limiting factor for clusters: a cluster cannot spread beyond the subnet it is deployed into. This is why people reach for multi-cluster architectures when spanning environments. While multi-cluster is often the best approach, sometimes it is not! And now you have a way of running a single cluster across environments, using WireGuard and Netmaker.