
I am learning K8S using MicroK8S. I have a three-node cluster, each node having 16G of RAM. The cluster has entered HA mode automatically. The cluster sits on my home LAN.

Here are my nodes:

name      IP              colour  role
arran     192.168.50.251  yellow  leader
nikka     192.168.50.74   blue    worker
yamazaki  192.168.50.135  green   worker

Set-up

I have a web app running on a pod in the cluster. It responds on port 9090. Here is how I got it running.

I have an image on a development laptop that I turn into a tarball:

docker save k8s-workload > k8s-workload.docker.tar

I then send that tarball to the leader of the cluster:

scp k8s-workload.docker.tar 192.168.50.251:/home/myuser/

I then sideload this image into all nodes on the cluster:

root@arran:/home/myuser# microk8s images import < k8s-workload.docker.tar
Pushing OCI images to 192.168.50.251:25000
Pushing OCI images to 192.168.50.135:25000
Pushing OCI images to 192.168.50.74:25000
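Incidentally, the tarball hop can probably be skipped by streaming the image over SSH; a hedged sketch, assuming SSH access as `myuser` and that `microk8s images import` reads from stdin (as the redirect below suggests):

```shell
# Stream the image straight from the dev laptop into the cluster,
# avoiding the intermediate tarball on disk
docker save k8s-workload | ssh myuser@192.168.50.251 "microk8s images import"
```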

I then verify the MIME type and the checksum of the image, on every node, as I had some problems with that:

root@arran:/home/myuser# microk8s ctr images list | grep workload
docker.io/library/k8s-workload:latest   application/vnd.docker.distribution.manifest.v2+json    sha256:725b...582b 103.5 MiB linux/amd64

Finally I run the workload, ensuring that K8S does not try to pull the image (a pull is unnecessary, but the default policy is to attempt one anyway):

root@arran:/home/myuser# microk8s kubectl run k8s-workload --image=k8s-workload --image-pull-policy='Never' --port=9090
pod/k8s-workload created

I then confirm that this was successful, from the leader node:

root@arran:/home/myuser# microk8s kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
k8s-workload   1/1     Running   0          83m   10.1.134.216   yamazaki   <none>           <none>

Running the app

In order to access the web app from my development laptop, I start by exposing the app on just one node. The pod is running on the Yamazaki node, so initially I run this from that node:

root@yamazaki:/home/myuser# microk8s kubectl port-forward pod/k8s-workload 9090 --address='0.0.0.0'
Forwarding from 0.0.0.0:9090 -> 9090

This works fine.

Problem

I would like to access the app by making a request to any node in the cluster, and not just this one. Currently the app only runs on one node and I would like it to work even if I make a web request to another node.

I know that K8S has the internal networking to do what I want. For example, if I run the port-forward command on Arran (and kill the one on Yamazaki), the app will still work, even though the pod is running on Yamazaki only. But I can still only access the app via one IP (Arran's, where the port-forwarder is running).

Of course, I could do what I want by running the port-forwarder on an SSH session on every node. But I'd like to run something that survives after all SSH sessions are killed.
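As an aside, a port-forward can be detached from the SSH session with nohup. A sketch, assuming the same pod name as above (note that a port-forward still dies if the pod restarts, so this is a stopgap rather than a real fix):

```shell
# Keep the forwarder alive after the SSH session ends;
# logs go to a file so the process has nowhere to block on output
nohup microk8s kubectl port-forward pod/k8s-workload 9090 \
  --address='0.0.0.0' > /tmp/port-forward.log 2>&1 &
```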

Ideally I would like to do this with a console command, but I wonder if I will need a YAML manifest for this. From my research so far, I think I need a ClusterIP.
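For what it's worth, a ClusterIP Service is only reachable from inside the cluster, so on its own it would not make the app reachable from my LAN; a NodePort or LoadBalancer Service is closer to the goal. A minimal ClusterIP manifest would look like this (the selector assumes the `run=k8s-workload` label that `kubectl run` applies by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-workload-internal
spec:
  type: ClusterIP          # the default type; in-cluster access only
  ports:
    - port: 9090
      targetPort: 9090
  selector:
    run: k8s-workload      # label set automatically by `kubectl run`
```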


Update/research 1

I'm reading the K8S manual page on bare-metal load balancing. Part of the page recommended that the user apply a 646-line config file, which would be pretty counterproductive for a learning scenario.

This sample seems more sensible, but it is not clear how the LB is being instructed to run on all hosts.

Update/research 2

I also found this resource specifically for MicroK8S, which recommends an ingress addon. Unfortunately this requires that my workload be set up as a Service, and for now I only have a Pod, so I think this is out.

halfer

1 Answer


Let's use a NodePort Service. A Service is an abstract way to expose an application running on a set of Pods as a network service; a NodePort Service routes incoming traffic on a port of each node to your Service.

We can define it like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090
  selector:
    run: k8s-workload

Then we apply the manifest, and it should work:

microk8s kubectl apply -f my-service.yaml
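Once the Service exists, the app should answer on port 30090 of every node, because kube-proxy opens the NodePort on each node and routes traffic to the pod wherever it happens to run. A quick check from the development laptop (node IPs taken from the question), plus a hedged one-command alternative to writing the manifest:

```shell
# Any node's IP should now serve the app, regardless of where the pod runs
curl http://192.168.50.251:30090   # arran
curl http://192.168.50.74:30090    # nikka
curl http://192.168.50.135:30090   # yamazaki

# Imperative alternative: expose the existing pod directly; Kubernetes
# then picks a random NodePort in the 30000-32767 range instead of 30090
microk8s kubectl expose pod k8s-workload --type=NodePort --port=9090
```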

Saxtheowl