NRE Labs Community

Mixing Eclipse Che and Antidote on same cluster

I’m trying to evaluate the compatibility of Eclipse Che and Antidote for running on the same k8s cluster.

I’ll try to document things in this thread (and/or encourage interested parties to do the same; hint: my student :wink: )

AFAIU, Eclipse Che provides authentication through Keycloak, as well as workspaces, which could be helpful if we want authenticated Antidote users to be able to save their work inside labs between sessions.

FYI, I’ve worked a bit more on the path to integrating these two environments on the same cluster.

So far, my understanding is that the main blocker is that Antidote is deployed over HTTP, so its Ingress config is pretty simple.
In comparison, Che integrates its apps over HTTPS, in particular for authentication against the Keycloak service.
This requires tuning the Ingress deployment to serve both HTTPS and HTTP (or only the former).
Also, I’m not sure about the addresses and domain to use during Che’s deployment. I will have to test more.
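To make the HTTPS side concrete, here is a sketch of what a TLS-terminating Ingress for Che might look like alongside Antidote's plain-HTTP one. The hostname, secret name, and service/port are assumptions on my part, not what chectl actually generates:

```yaml
# Hypothetical sketch: an Ingress terminating TLS for Che while Antidote
# stays on plain HTTP. Hostname, secret and backend names are assumed.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: che-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - che.192.168.99.100.nip.io     # assumed nip.io hostname
      secretName: che-tls               # assumed pre-created TLS secret
  rules:
    - host: che.192.168.99.100.nip.io
      http:
        paths:
          - path: /
            backend:
              serviceName: che-host     # assumed Che service name
              servicePort: 8080
```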

Otherwise, most of Che’s services are running inside the Vagrant selfmedicate VM, on the minikube (with the none driver) cluster:

  • postgres
  • keycloak
  • workspaces
  • che

  • PostgreSQL and the workspaces rely on PVs, and the PVCs are honored by the storage-provisioner using minikube’s default hostPath storage.
    So I’m quite close to having both apps running on the same VM… but not quite there yet :wink:
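For reference, the PVCs in question look roughly like this minimal claim; with no storageClassName set, minikube's storage-provisioner satisfies it from the default hostPath-backed class (the claim name and size here are illustrative, not Che's actual values):

```yaml
# Illustrative PVC: with no storageClassName, minikube's default
# "standard" (hostPath) class is used by the storage-provisioner addon.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # illustrative size
```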

Stay tuned.

Btw, it may be easier to deploy Antidote on a running Che VM… will have to think about this :wink:

From what I understand, the Ingress config is a bit different: Che deploys everything on TLS, and uses subhosts for four services, each on its respective subdomain.

Example:

Whereas for Antidote, it goes this way, with path-based rules on different ports:

  • acore-ingress : * -> /acore acore:8086
  • aweb : * -> / aweb:80
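The two rules above could be sketched in Ingress form like this (one combined Ingress for brevity; the actual selfmedicate manifests may split them differently):

```yaml
# Sketch of the two path rules listed above, in Ingress resource form
# (v1beta1 style, matching the k8s versions discussed in this thread).
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: antidote-ingress
spec:
  rules:
    - http:
        paths:
          - path: /acore          # API traffic
            backend:
              serviceName: acore
              servicePort: 8086
          - path: /               # everything else goes to the web UI
            backend:
              serviceName: aweb
              servicePort: 80
```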

So, I think both could mix rather well, provided I understand a bit more of the k8s Ingress magic (who said dark? ;-)…

It’s just a matter of figuring out how to mix selfmedicate.sh contents and chectl options, then :wink:

Stay tuned.

I’ve continued to do some testing.
It seems that the chectl installation was relying on a quite basic minikube network setup.
It worked with other VMs but not in selfmedicate’s Vagrant one.

I think this may be related to the presence of CNI + multus.

The Che apps need to connect to https://$(minikube ip).nip.io/… but it seems the default minikube ingress wouldn’t bind to that IP in selfmedicate.

In a classical minikube cluster, the VM’s IP ($(minikube ip)) is used for the NodePorts and as the default interface to which the cluster’s ingress binds ports 80 and 443:
docker ps reports:
2b5b000bd828 k8s.gcr.io/pause:3.1 “/pause” 10 minutes ago Up 10 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:18080->18080/tcp k8s_POD_nginx-ingress-controller-7f5749f54b-vtfnb_kube-system_8fb463b5-8d81-11ea-9906-5254008e1ea8_0

I guess with Multus, and the custom ingress in selfmedicate, the topology / setup is a bit different…

Need to figure out what’s different, and adjust.

I’m getting close, stay tuned :wink:

Following on my quest: if I understand correctly, the minikube ingress is configured, by default, to act as a LoadBalancer, which means that services can be bound to the externalIP of the cluster, i.e. the VM, and are thus accessible on port 80 of the VM from the outside.

From what I understand, the Ingress setup in selfmedicate doesn’t come with that LoadBalancer, so services on nodePorts must be reached on ports like 30001, 30010, etc.

I’m testing the addition of MetalLB to act as that LoadBalancer, which will hopefully manage the firewalling and such to allow services to be exposed on ports 80 or 443 of the external IP, eventually letting Che URLs on https://$(minikube ip).nip.io/ work.
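For the record, what I'm testing is roughly a layer-2 MetalLB configuration of this shape; the address range here is an assumption matching the VM's own subnet, not a vetted choice:

```yaml
# Assumed MetalLB layer-2 ConfigMap (pre-0.13 ConfigMap style).
# The address range is a guess aligned with the VM's libvirt subnet.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.121.240-192.168.121.250   # assumed free range
```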

Stay tuned again.

FWIW, I’ve set up a Vagrant VM (minikube with the none driver), only tested with libvirt/kvm, to test an Eclipse Che deployment (and see its Ingress work ;-):


I may now try to install Antidote in there :wink:

Following on my quest: if I understand correctly, the minikube ingress is configured, by default, to act as a LoadBalancer, which means that services can be bound to the externalIP of the cluster, i.e. the VM, and are thus accessible on port 80 of the VM from the outside.

What do you mean by minikube ingress? Is there a default ingress controller provided automatically by minikube these days? If so, what is it? In any case, read below for the ingress controller we’re using.

From what I understand, the Ingress setup in selfmedicate doesn’t come with that LoadBalancer, so services on nodePorts must be reached on ports like 30001, 30010, etc.

Selfmedicate deploys an ingress controller based on nginx. This allows our Ingress definitions to actually do something - without an ingress controller, ingresses do nothing.

FWIW, nearly all traffic in/out of selfmedicate goes through the ingress controller on port 30001. To be honest, this could easily be reconfigured to run in a mode that has direct access to the host networking, to run on more standard ports. I’m not sure why this is necessary for your efforts, but it could be done.

I am :-1: for the inclusion of metallb. I don’t see any reason why we would need it, especially at the cost of introducing another fragile dependency.

The “default ingress” of minikube is actually activated through “minikube addons enable ingress”, which activates a recent (0.26.x at the moment) version of the nginx-ingress-controller.

So it’s supposedly the same tool, nginx-ingress-controller, but in a more recent version, that comes “standard” with minikube.

The main difference I see is that the declaration of ingress rules may be done through “Ingress” resources, which seems to be the standard way nowadays, whereas in selfmedicate those were deployment options, somehow. I’m not yet sure how this mixes with Service specs, like NodePort vs LoadBalancer vs ClusterIP, etc. IIUC, it’s the LoadBalancer part that we’re missing, which “automagically” redirects traffic from the cluster’s external IP to the internal services, and that wouldn’t work on selfmedicate.

+1 for not needing MetalLB if it’s not necessary, but it’s a tool that often pops up in docs about this kind of Ingress configuration, for instance in https://www.eclipse.org/che/docs/che-7/running-che-locally/#installing-che-on-kind-using-chectl_running-che-locally. As KinD and a Vagrant VM seemed somehow related, that’s why I was investigating this option. I’m still so bad at networking that I may be confusing elements of topology/protocols… pardon my ignorance (and a nice opening for more curricula?)

I’ll continue to test how we could deploy antidote-core, webssh2 and stuff we need on port 443, in addition to what I now have in the aforementioned vagrant-minikube-eclipse-che setup. All these tests are just so slow to perform :wink:

The main difference I see is that the declaration of ingress rules may be done through “Ingress” resources, which seems to be the standard way nowadays, whereas in selfmedicate those were deployment options, somehow. I’m not yet sure how this mixes with Service specs, like NodePort vs LoadBalancer vs ClusterIP, etc. IIUC, it’s the LoadBalancer part that we’re missing, which “automagically” redirects traffic from the cluster’s external IP to the internal services, and that wouldn’t work on selfmedicate.

I think you’re conflating a few things. Ingress definitions specify rules that ingress controllers then enforce. This is all entirely separate from the activity of actually deploying the ingress controller in the first place, which is what you’re currently talking about. I think some reading on the basics of Services and Ingresses may help clear things up for you. In short, we’re not really doing anything special in Selfmedicate. There’s no custom or secret sauce here, we’re just using standard k8s objects to specify how traffic should get into the cluster. We do not need services of type LoadBalancer.
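To illustrate that point with standard objects only: a plain ClusterIP Service plus an Ingress rule referencing it is enough once an ingress controller is running. The names here are illustrative, not anything from selfmedicate:

```yaml
# Illustrative only: a ClusterIP Service and an Ingress rule referencing
# it. The ingress controller (nginx here) enforces the rule; without a
# running controller, the Ingress object does nothing by itself.
apiVersion: v1
kind: Service
metadata:
  name: example-app            # illustrative name
spec:
  type: ClusterIP              # the default; no LoadBalancer needed
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-app
spec:
  rules:
    - http:
        paths:
          - path: /example
            backend:
              serviceName: example-app
              servicePort: 80
```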

I’ll continue to test how we could deploy antidote-core, webssh2 and stuff we need on port 443,

What I would consider useful is a PR that:

  • updates the nginx ingress controller Deployment to use host networking. This would allow the pods running nginx to use the standard ports 80 and 443. In this case no Service would need to be defined.
  • serves webssh2 from behind nginx (and verifies this still works). The webssh2 Service will need its NodePort removed, and an Ingress will need to be created.
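The first bullet might amount to a patch along these lines on the ingress controller's pod spec; the Deployment and container names are assumptions, only hostNetwork and dnsPolicy are the point:

```yaml
# Sketch: enabling host networking on the ingress controller, so nginx
# binds ports 80/443 directly on the VM. Names here are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller      # assumed Deployment name
spec:
  template:
    spec:
      hostNetwork: true               # bind directly on the host
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working
      containers:
        - name: nginx-ingress-controller
          ports:
            - containerPort: 80
            - containerPort: 443
```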

I’m not sure about the LoadBalancer stuff… but then there’s something strange that I don’t understand. As I see it, selfmedicate already runs an ingress controller… but then, how come the Ingress definitions made during Che’s deployment won’t apply?

I fear this is a bit a difficult discussion without seeing stuff running, and being able to point at pods, services, etc. :-/

And I’m probably not experienced enough in k8s, for sure, to figure out the differences… :-/

I’ve made some progress. I’m now running both the Eclipse Che pods and acore and aweb on the same cluster, on a later revision of k8s (1.16.3), with the “standard” ingress (nginx-ingress-controller) as provided by minikube, over Multus + Weave.

So the platform seems fairly similar, except that I have setup the network interfaces of the Vagrant guest VM a bit differently.

I’m thus able to browse http://$(minikube ip).nip.io/ to see Antidote’s web interface :slight_smile:

But that won’t go past the initial static web page :-/

What happens now is that aweb renders correctly until I browse the lesson catalog, in which case the invocation of /acore/exp/lesson responds with 200, but an empty application/json payload :-/

Could this be related to the fact that I’m not browsing http://antidote-local:…/, but http://192.168.121.69.nip.io/acore/exp/lesson instead?

Thanks for your help.

My current state of Vagrant-ish is in https://gitlab.com/olberger/vagrant-minikube-eclipse-che/-/commits/antidote for the curious ones

I found the culprit. The Ingress definition for acore wouldn’t handle the rewrite properly, and rewrote everything to / instead of /exp/… :wink:

I’ve fixed this in https://gitlab.com/olberger/vagrant-minikube-eclipse-che/-/commit/79fb05637d897949f3735eeaaeefcc89fb3fc5de

Most probably a change in the default config or behaviour of nginx-ingress-controller between versions :-/
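For context, this matches a known breaking change: since nginx-ingress-controller 0.22, the rewrite-target annotation requires explicit capture groups, and the old bare style silently rewrites everything to the given target. A sketch of the capture-group style for the acore case (service/port taken from the rules earlier in this thread; the exact fix is in the commit above):

```yaml
# Since nginx-ingress-controller 0.22, rewrite-target needs capture
# groups; a bare "rewrite-target: /" sends every request to "/".
# With $2, /acore/exp/lesson is rewritten to /exp/lesson as intended.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: acore-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /acore(/|$)(.*)
            backend:
              serviceName: acore
              servicePort: 8086
```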

Everything now seems to be working. Che and Antidote are just peacefully running alongside. See https://gitlab.com/olberger/vagrant-minikube-eclipse-che/-/tree/antidote for the Vagrant setup.

Next steps will be to really integrate both environments more.

:ok_hand:

See the installation process on https://asciinema.org/a/334052

And finally, a demo of the running Eclipse Che: https://vimeo.com/422938817

The workspace in which I’m hacking in PHP could well be integrated with an Antidote lab in the future, if I’m able to pursue that goal :wink:

For the curious, what I have in mind with Antidote and Eclipse Che is part of my general attempt to evaluate such tools for teaching labs in the Cloud: https://www-public.imtbs-tsp.eu/~berger_o/weblog/2020/06/02/experimenting-on-distant-labs-and-labs-on-the-cloud/

And here’s, at last, a branch that allows running Che with the ingress more or less operational on the antidote-selfmedicate: https://github.com/olberger/antidote-selfmedicate/tree/che

I’ve had to pin minikube to v1.8.2 because of issues with the latest version when provisioning PVCs for Che workspaces.

At last :slight_smile:

And many thanks to Alice Zhen for her precious help.