29

Currently I'm working on a small hobby project which I'll make open source once it's ready. This service is running on Google Container Engine. I chose Container Engine to avoid configuration hassle, because the costs are affordable, and to learn new things.

My pods are running fine and I created a service with type LoadBalancer to expose the service on ports 80 and 443. This works perfectly.

However, I discovered that for each LoadBalancer service, a new Google Compute Engine load balancer is created. This load balancer is pretty expensive and really overdone for a hobby project on a single instance.

To cut the costs I'm looking for a way to expose the ports without the load balancer.

What I've tried so far:

Is there a way to expose ports 80 and 443 for a single instance on Google Container Engine without a load balancer?

5 Answers

12

Yep, through externalIPs on the service. Example service I've used:

apiVersion: v1
kind: Service
metadata:
  name: bind
  labels:
    app: bind
    version: 3.0.0
spec:
  ports:
    - port: 53
      protocol: UDP
  selector:
    app: bind
    version: 3.0.0
  externalIPs:
    - a.b.c.d
    - a.b.c.e

Please be aware that the IPs listed in the config file must be the nodes' internal IPs on GCE.
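
If you're not sure which internal IPs to list there, they are the node addresses; a quick way to check (assuming kubectl is pointed at your cluster):

# The INTERNAL-IP column is what goes into externalIPs above
kubectl get nodes -o wide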

ConnorJC
4

In addition to ConnorJC's great and working answer: the same approach is also described in this question: Kubernetes - can I avoid using the GCE Load Balancer to reduce cost?

The "internalIp" refers to the compute instance's (a.k.a. the node's) internal IP (as seen in Google Cloud Platform -> Google Compute Engine -> VM Instances).

This comment gives a hint at why the internal and not the external IP should be configured.

Furthermore, after having configured the service for ports 80 and 443, I had to create a firewall rule allowing traffic to my instance node:

gcloud compute firewall-rules create your-name-for-this-fw-rule --allow tcp:80,tcp:443 --source-ranges=0.0.0.0/0

After this setup, I could access my service through http(s)://<the node's external IP>.
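
If you prefer the command line over the console, the node's external IP can also be looked up with gcloud; a sketch, where the instance name and zone are placeholders for your own:

# Look up the node's external (public) IP
gcloud compute instances describe your-node-name --zone=your-zone \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
# Then check that the service answers on it
curl -I http://<that external IP>/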

3

If you only have exactly one pod, you can use hostNetwork: true to achieve this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caddy
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <--------- pod uses the node's network namespace
      containers:
      - name: caddy
        image: your_image
        env:
        - name: STATIC_BACKEND # example env in my custom image
          value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected as environment variables.
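
For example, assuming a service named static was created before this pod started (which the STATIC_BACKEND env above implies), Kubernetes injects STATIC_SERVICE_HOST and STATIC_SERVICE_PORT into the container, so from inside it you can still reach the service like this:

# The service env vars point at the cluster IP; they only exist for
# services that were created before the pod started.
curl "http://$STATIC_SERVICE_HOST:$STATIC_SERVICE_PORT/"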

This solution is better than using a service's externalIP, as it bypasses kube-proxy and you will receive the correct source IP.

willwill
2

To synthesize @ConnorJC's and @derMikey's answers into exactly what worked for me:

Given a cluster node pool running on a Compute Engine instance:

# gcloud compute instances list
gce vm name: gke-my-app-cluster-pool-blah
internal ip: 10.123.0.1
external ip: 34.56.7.001 # will be publicly exposed

I made the service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  name: my-app-service
spec:
  clusterIP: 10.22.222.222
  externalIPs:
  - 10.123.0.1 # the instance internal ip
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-app
  type: ClusterIP

and then opened the firewall for all instances in the network, to traffic from any source IP:

gcloud compute firewall-rules create open-my-app --allow tcp:80,tcp:443 --source-ranges=0.0.0.0/0

and then my-app was accessible via the GCE instance's public IP 34.56.7.001 (not the cluster IP).
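
To sanity-check the result (assuming kubectl is pointed at this cluster), the node's internal IP should show up on the service, and the app should answer on the instance's public IP:

kubectl get service my-app-service   # EXTERNAL-IP column should show 10.123.0.1
curl -I http://34.56.7.001/          # the app, reached via the node's public IP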

micimize
0

I prefer not to use cloud load balancers until necessary, because of cost and vendor lock-in.

Instead I use this: https://kubernetes.github.io/ingress-nginx/deploy/

It's a pod that runs a load balancer for you. That page has GKE-specific installation notes.
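
Once that controller is installed, each app is exposed with an Ingress object instead of its own LoadBalancer service. A minimal sketch, where the host name and backend service name are made-up placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx        # route through the ingress-nginx controller
  rules:
  - host: example.com            # hypothetical host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service # hypothetical existing ClusterIP service
            port:
              number: 80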