NRE Labs Community

Explanation needed about networking, Ingress inside the platform

I’m struggling with Ingress in another thread and would like to get more details about the networking and Ingress setup in the current (selfmedicate) platform.

I’m mainly curious about the differences with respect to a “basic” k8s cluster obtained with Minikube.

AFAIU, Multus provides multiple network interfaces on pods, acting as a CNI meta-plugin that delegates to other CNI plugins.

The Ingress is set up using a manifest, “nginx-controller.yaml”, which seems to use an old version of nginx-ingress-controller (0.9.0-beta.5), whereas nowadays Minikube typically deploys versions >= 0.20.
Is this old version related to Multus, etc.?

Thanks in advance.

The ingress configuration has nothing to do with multus. And if you didn’t want to use multus, you really don’t have to know it exists. Multus operates transparently by defaulting to another CNI if you don’t add any annotations, which in this case is Weave. When a pod is started, the kubelet sends the network creation request to multus, which forwards it to weave as the default delegate CNI. As long as you haven’t added any network annotations (which is only done in lessons that use connections) then you would not even need to know that either multus or weave exist. Provided they’re both installed (which they are in SM) they should both just work.
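To illustrate the delegation described above: Multus only intervenes when a pod carries its network annotation; otherwise the request just flows through to the default delegate (Weave here). A rough sketch of an opted-in pod (the network name `lesson-net` is hypothetical, not something from the Antidote manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    # Multus acts only on pods carrying this annotation; it attaches the
    # named NetworkAttachmentDefinition *in addition to* the default
    # (Weave) interface. "lesson-net" is a made-up name for this sketch.
    k8s.v1.cni.cncf.io/networks: lesson-net
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

Without that annotation, the pod gets a single Weave-provided interface, which is why you can mostly ignore Multus.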

And the reason we’re using such an old version of the ingress controller isn’t for any really good reason, just that I haven’t had time to spend getting an updated deployment together, as the syntax for some things has changed in more recent versions. It is highly unlikely that the problems you’re running into are due to the version of the ingress controller.

OK, thanks for these details. That reassures me that it’s possible to mix Che and Antidote, then.

I just need to dig deeper into how the old-style and new-style Ingress configurations differ, and what needs to be adjusted. I’ll check your response in the other thread, then.

One more question: the VM has 2 interfaces, eth0 and eth1. Why? What’s the rationale behind having these? Would just one work? Could the cluster be bound to either one?

Alice Zhen found out that the following commands help make sure the minikube ingress can be bound to listen on ports 80 and 443 :ok_hand:
sudo minikube addons enable ingress
kubectl port-forward deployment.apps/ingress-nginx-controller -n kube-system --address 0.0.0.0 80:80 &
kubectl port-forward deployment.apps/ingress-nginx-controller -n kube-system --address 0.0.0.0 443:443 &
(note: `--address` needs an explicit bind address, and binding ports below 1024 may require root)

It also seems that by default minikube ip doesn’t reflect the public address and/or minikube won’t use it unless provided:
--extra-config kubelet.node-ip=
I’m currently trying to make a reliable patch for the startup scripts…
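For reference, that flag goes on the minikube start line. A sketch of what I mean (the placeholder is whichever public address the VM should advertise as its node IP; adjust to your setup):

```shell
# Hypothetical sketch: advertise the VM's public address as the node IP,
# so the cluster binds to the intended interface rather than the default.
minikube start --extra-config=kubelet.node-ip=<public-ip>
```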

I’ve found a better way, which is to use:
kubectl patch deployment nginx-ingress-controller -n kube-system --patch '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
kubectl patch configmap -n kube-system tcp-services --patch '{"data":{"443":"deployment.apps/ingress-nginx-controller:443"}}'
kubectl patch configmap -n kube-system tcp-services --patch '{"data":{"80":"deployment.apps/ingress-nginx-controller:80"}}'
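One caveat worth noting, hedged since behavior varies by controller version: per the ingress-nginx documentation, values in the tcp-services ConfigMap are expected to reference a Service as `<namespace>/<service-name>:<port>`, not a deployment. A minimal sketch of the documented shape (the service name is a placeholder, not from this thread):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  # Format: "<external-port>": "<namespace>/<service>:<service-port>"
  "443": "default/my-service:443"  # "my-service" is hypothetical
  "80": "default/my-service:80"
```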