
Setup OpenStack client
The OpenStack client is the primary command-line tool for interacting with OpenStack services. Some OpenStack-Helm deployment scripts use it to perform checks during deployment, so installing the OpenStack client on the developer's machine is a necessary step.
The script setup-client.sh can be used to set up the OpenStack client.
cd ~/osh/openstack-helm
./tools/deployment/common/setup-client.sh
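Once the script completes, you can optionally verify the client configuration. The following is a rough sketch: it assumes Keystone is already deployed and reachable, and that the script has written a clouds.yaml entry named openstack_helm (the name conventionally used by the OpenStack-Helm deployment scripts).

# Smoke test (assumes Keystone is deployed and the clouds.yaml
# entry is named openstack_helm)
export OS_CLOUD=openstack_helm
openstack token issue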
Keep in mind that the above script configures the OpenStack client to use internal Kubernetes FQDNs such as keystone.openstack.svc.cluster.local. To resolve these internal names, you have to configure the Kubernetes authoritative DNS server (CoreDNS) to work as a recursive resolver and then add its IP (10.96.0.10 by default) to /etc/resolv.conf. This only works when you access OpenStack services from one of the Kubernetes nodes, because IPs from the Kubernetes service network are routed only between Kubernetes nodes.
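For illustration, a minimal sketch of the node-side steps, assuming the default CoreDNS service IP of 10.96.0.10; adjust for resolver managers such as systemd-resolved, which may overwrite /etc/resolv.conf.

# Verify that CoreDNS answers for an internal FQDN (run on a Kubernetes node)
nslookup keystone.openstack.svc.cluster.local 10.96.0.10

# Prepend the CoreDNS service IP to the node's resolver configuration
sudo sed -i '1i nameserver 10.96.0.10' /etc/resolv.conf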
If you wish to access OpenStack services from outside the Kubernetes cluster, you need to expose the OpenStack Ingress controller on an IP address reachable from outside the cluster, typically with a load-balancer implementation such as MetalLB. In this scenario, you should also ensure that the service FQDNs resolve to that external IP address and create the necessary Ingress objects for those FQDNs.
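As a purely illustrative sketch, assume MetalLB has assigned the hypothetical external IP 172.24.128.100 to the Ingress controller service. On a client machine outside the cluster, name resolution could then be provided via /etc/hosts (a DNS record pointing at the same IP achieves the same result).

# 172.24.128.100 is a hypothetical external IP assigned by MetalLB
cat <<EOF | sudo tee -a /etc/hosts
172.24.128.100 keystone.openstack.svc.cluster.local
172.24.128.100 horizon.openstack.svc.cluster.local
EOF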