
I have set up minikube with the Docker driver, but the problem is the same with the VirtualBox driver.

I'm on Fedora 39 and I followed the explanation here: https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/

nslookup hello-jane.test $(minikube ip)                         
Server:     192.168.49.2
Address:    192.168.49.2#53

Non-authoritative answer:
Name:    hello-jane.test
Address: 192.168.49.2
Name:    hello-jane.test
Address: 192.168.49.2

That works!

It works from within a container as well:

root@debian2:/# ping hello-jane.test
PING hello-jane.test (192.168.49.2) 56(84) bytes of data.
64 bytes from 192-168-49-2.kubernetes.default.svc.cluster.local (192.168.49.2): icmp_seq=1 ttl=64 time=0.015 ms

However, from the host, without specifying the cluster IP:

nslookup hello-jane.test  
Server:     127.0.0.53
Address:    127.0.0.53#53

** server can't find hello-jane.test: NXDOMAIN

Of course, the ping does not work either.

I guess it's because systemd-resolved may be interfering...

systemd-resolve --status                                                          
Global
         Protocols: LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (enp0s31f6)
    Current Scopes: none
         Protocols: -DefaultRoute LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 3 (wlp0s20f3)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
         Protocols: +DefaultRoute LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 192.168.15.153
       DNS Servers: 192.168.15.153

Link 4 (br-236f66aa6691)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 5 (docker0)
    Current Scopes: none
         Protocols: -DefaultRoute LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 16 (ipsec0)
    Current Scopes: LLMNR/IPv4 LLMNR/IPv6
         Protocols: -DefaultRoute LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 17 (vboxnet0)
    Current Scopes: none
         Protocols: -DefaultRoute LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 18 (br-288fd084c2f1)
    Current Scopes: LLMNR/IPv4 LLMNR/IPv6
         Protocols: -DefaultRoute LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 22 (veth9f99f5c)
    Current Scopes: LLMNR/IPv6
         Protocols: -DefaultRoute LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported

Its configuration is unchanged. But if I add the minikube IP as a DNS server in /etc/systemd/resolved.conf, it works! Of course, the other DNS names then no longer resolve.

I should mention that I skipped the "Linux OS with resolvconf" section, since Fedora uses systemd-resolved. Is that the reason why?

2 Answers


Configure CoreDNS to forward "." to your DNS server outside of Kubernetes, then change the host's DNS to point at CoreDNS using resolvconf. This allows the host to resolve the cluster.local addresses, while everything outside of Kubernetes still resolves via the "." forwarder.

You do this by editing the CoreDNS ConfigMap. In the example below, 10.254.24.4, 10.254.24.2 and 10.254.24.3 are the three CoreDNS pods from the deployment, one per node; 10.254.12.8 and 10.254.142.8 are the external DNS servers outside of Kubernetes.

Corefile: |
  . {
      bind 10.254.24.4 10.254.24.2 10.254.24.3
      log
      errors
      health 127.0.0.1:8082
      kubernetes cluster.local in-addr.arpa ip6.arpa {
         pods insecure
         fallthrough in-addr.arpa ip6.arpa
      }
      forward . 10.254.12.8 10.254.142.8
      cache 30
      loop
      reload
      loadbalance
  }

Another route would be to do the inverse: set your external DNS server to conditionally forward cluster.local back to CoreDNS. You then have both normal DNS and a way to resolve cluster.local. There are a number of ways to do this. One is to expose the CoreDNS service via type LoadBalancer; another, depending on the environment, is to forward to the pod addresses directly, assuming they are routable (which they are with a CNI such as Antrea on NSX, for example).
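As a sketch of that conditional-forwarding variant: with a dnsmasq-based external resolver, a single server= line sends cluster.local queries to CoreDNS while everything else keeps using the normal upstreams. The file path and the 10.96.0.10 address are assumptions; use whatever address CoreDNS is actually reachable at from the resolver.

# /etc/dnsmasq.d/k8s.conf  (hypothetical path)
# Forward cluster.local lookups to CoreDNS; all other names
# keep using the regular upstream servers.
server=/cluster.local/10.96.0.10

Reload dnsmasq afterwards (e.g. sudo systemctl restart dnsmasq) for the change to take effect.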

Another route is to configure systemd-resolved to have interface-specific DNS servers and suffixes, i.e. give the cluster interface the CoreDNS server with a cluster.local suffix, and the normal management NIC (or whatever you call it) the DNS external to Kubernetes.
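A minimal sketch of that per-interface setup with resolvectl (the interface names are taken from the question's output and, like the 10.96.0.10 CoreDNS address, are assumptions for illustration; the CoreDNS address must be routable from the host):

# Point the cluster-facing interface at CoreDNS, scoped to cluster.local only:
sudo resolvectl dns br-236f66aa6691 10.96.0.10
sudo resolvectl domain br-236f66aa6691 '~cluster.local'

# The uplink keeps its external DNS for everything else:
sudo resolvectl dns wlp0s20f3 192.168.15.153

Note that resolvectl settings are per-link and do not survive a reboot; a drop-in under /etc/systemd/resolved.conf.d (or the network manager's own configuration) makes them permanent.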

All three of those methods have worked for me before.

agill

Just for reference:

The docs cover your case now: https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/#installation

On Fedora, you are most likely using systemd-resolved. The docs recommend:

sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/minikube.conf << EOF
[Resolve]
DNS=$(minikube ip)
Domains=~test
EOF
sudo systemctl restart systemd-resolved
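To check that the drop-in took effect, one can look for the ~test routing domain and resolve a test name; exact output varies by system, but the name should now resolve without passing the minikube IP explicitly:

resolvectl status | grep -B1 -A1 'test'
nslookup hello-jane.test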
rriemann