Merging all branches

This commit is contained in:
parent 85dc6a3f64
commit 1b514a153a

README.md (43 lines changed)
@@ -4,12 +4,12 @@ This repo contains the code for Airship HostConfig Application using Ansible Operator

## How to Run

## Approach 1

- If a Kubernetes setup is not available, please refer to the README.md in the kubernetes folder to bring up the Kubernetes setup. It uses Vagrant and VirtualBox to bring up 1 master and 2 worker node VMs.
+ If a Kubernetes setup is not available, please refer to the README.md in the kubernetes folder to bring up the Kubernetes setup. It uses Vagrant and VirtualBox to bring up 3 master and 5 worker node VMs.

After the VMs are up and running, connect to the master node:

```
- vagrant ssh k8-master
+ vagrant ssh k8-master-1
```

Navigate to the airship-host-config folder:

@@ -24,6 +24,8 @@ Execute the create_labels.sh file so that the Kubernetes nodes are labelled according

./create_labels.sh
```
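To confirm the labels were applied, a standard check (illustrative only, plain kubectl):

```
kubectl get nodes --show-labels
```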
Please note: as part of the tasks executed whenever we create a HostConfig CR object, we check for a "hello" file in the $HOME directory of the Ansible SSH user. This file is created as part of the ./setup.sh script; please feel free to comment out the task before building the image if it is not needed.

Execute the setup.sh script to build the Airship HostConfig Ansible Operator image, copy it to the worker nodes, and deploy the application on the Kubernetes setup as a Deployment. The script below configures the Airship HostConfig Ansible Operator to use "vagrant" as both username and password when it connects to the Kubernetes nodes. So when we create a HostConfig Kubernetes CR object, the application executes the hostconfig Ansible role on the Kubernetes nodes specified in the CR object by connecting with the "vagrant" username and password.

```
@@ -36,6 +38,19 @@ If you want to execute the ansible playbook in the hostconfig example with a different
./setup.sh <username> <password>
```

If you plan for the ansible-operator to use a username and private key when connecting to the Kubernetes nodes, you can use the available script, which creates the private and public keys, copies the public key to the Kubernetes nodes, creates the secret, and attaches the secret name as a node annotation.

```
./install_ssh_private_key.sh
```

To try your own custom keys or custom names, follow the commands below to generate the private and public keys. Use this private key and username to generate the Kubernetes secret. Once the secret is available, attach the secret name as an annotation to the Kubernetes node. Also copy the public key to the node.

```
ssh-keygen -q -t rsa -N '' -f <key_file_name>
ssh-copy-id -i <key_file_name> <username>@<node_ip>
kubectl create secret generic <secret_name> --from-literal=username=<username> --from-file=ssh_private_key=<key_file_name>
kubectl annotate node <node_name> secret=<secret_name>
```
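To verify the wiring before creating a CR, you can inspect the generated secret and the node annotation; a sketch using the same placeholder names as above:

```
kubectl get secret <secret_name> -o yaml
kubectl get node <node_name> -o jsonpath='{.metadata.annotations.secret}'
```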
## Approach 2

If a Kubernetes setup is already available, please follow the procedure below.

@@ -49,7 +64,7 @@ export KUBECONFIG=~/.kube/config

Clone the repository:

```
- git clone https://github.com/SirishaGopigiri/airship-host-config.git
+ git clone https://github.com/SirishaGopigiri/airship-host-config.git -b june_29
```

Navigate to the airship-host-config folder:

@@ -58,6 +73,8 @@ Navigate to airship-host-config folder

```
cd airship-host-config/airship-host-config/
```

Please note: as part of the tasks executed whenever we create a HostConfig CR object, we check for a "hello" file in the $HOME directory of the Ansible SSH user. This file is created as part of the ./setup.sh script; please feel free to comment out the task before building the image if it is not needed.

Execute the setup.sh script to build the Airship HostConfig Ansible Operator image, copy it to the worker nodes, and deploy the application on the Kubernetes setup as a Deployment. The script below configures the Airship HostConfig Ansible Operator to use "vagrant" as both username and password when it connects to the Kubernetes nodes. So when we create a HostConfig Kubernetes CR object, the application executes the hostconfig Ansible role on the Kubernetes nodes specified in the CR object by connecting with the "vagrant" username and password.

```
@@ -70,6 +87,20 @@ If you want to execute the ansible playbook in the hostconfig example with a different
./setup.sh <username> <password>
```

If you plan for the ansible-operator to use a username and private key when connecting to the Kubernetes nodes, you can use the available script, which creates the private and public keys, copies the public key to the Kubernetes nodes, creates the secret, and attaches the secret name as a node annotation.

```
./install_ssh_private_key.sh
```

To try your own custom keys or custom names, follow the commands below to generate the private and public keys. Use this private key and username to generate the Kubernetes secret. Once the secret is available, attach the secret name as an annotation to the Kubernetes node. Also copy the public key to the node.

```
ssh-keygen -q -t rsa -N '' -f <key_file_name>
ssh-copy-id -i <key_file_name> <username>@<node_ip>
kubectl create secret generic <secret_name> --from-literal=username=<username> --from-file=ssh_private_key=<key_file_name>
kubectl annotate node <node_name> secret=<secret_name>
```

## Run Examples

After setup.sh has executed successfully, navigate to demo_examples and execute the desired examples.

@@ -85,9 +116,9 @@ Executing examples

```
cd demo_examples
- kubectl apply -f example.yaml
- kubectl apply -f example1.yaml
- kubectl apply -f example2.yaml
+ kubectl apply -f example_host_groups.yaml
+ kubectl apply -f example_match_host_groups.yaml
+ kubectl apply -f example_parallel.yaml
```

Apart from the logs on the pod, when the hostconfig role executes we create a "tetsing" file on the Kubernetes nodes; please check the contents of that file, which record the time of execution of the hostconfig role by the HostConfig Ansible Operator pod.
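Once a CR object is applied, the `hc` shortname added to the CRD in this change makes a quick check possible, and the operator logs show the reconciliation; a sketch, assuming the Deployment name airship-host-config from deploy/operator.yaml:

```
kubectl get hc
kubectl logs deploy/airship-host-config -f
```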
airship-host-config/README.md (new file, 37 lines)
@@ -0,0 +1,37 @@

# Airship HostConfig Using Ansible Operator

Here we discuss the various variables that are used in the HostConfig CR object to control the execution flow on the Kubernetes nodes.

host_groups: Dictionary specifying the key/value labels of the Kubernetes nodes on which the playbook should be executed

sequential: When set to true, executes the host_groups labels sequentially

match_host_groups: When set to true, performs an AND operation on the host_group labels and executes the playbook on the hosts that match all the labels

max_hosts_parallel: Caps the number of hosts that are executed in each iteration

stop_on_failure: When set to true, stops the playbook execution on that host and subsequent hosts whenever a task fails on a node

max_failure_percentage: Sets the maximum percentage of hosts that are allowed to fail in every iteration

reexecute: Executes the playbook again on the previously successful hosts as well

ulimit, sysctl: Array objects specifying the ulimit and sysctl configuration to apply on the Kubernetes nodes

The demo_examples folder has some examples which can be used initially to play with the above variables. A combined sketch follows the list.

1. example_host_groups.yaml - Gives an example of how to use host_groups

2. example_sequential.yaml - In this example the host_groups specified are processed in sequence: in the first iteration the master nodes are executed, then the worker nodes

3. example_match_host_groups.yaml - In this example the playbook is executed on all the hosts matching "us-east-1a" zone and master role, "us-east-1a" and worker role, "us-east-1b" and master role, and "us-east-1b" and worker role. All the hosts matching the condition are executed in parallel.

4. example_sequential_match_host_groups.yaml - The same example as above, but the execution goes in sequence

5. example_parallel.yaml - In this example we execute 2 hosts in every iteration

6. example_stop_on_failure.yaml - This example shows that the execution stops whenever a task fails on any Kubernetes host

7. example_max_percentage.yaml - In this example the execution stops only when the hosts failing exceed 30% at a given iteration

8. example_sysctl_ulimit.yaml - In this example we configure the Kubernetes nodes with the values specified for ulimit and sysctl in the CR object
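Pulling the documented variables together, a minimal sketch of a CR exercising them (field names taken from the CRD in this change; the label key assumes the create_labels.sh labelling, and the CR name is hypothetical):

```
kubectl apply -f - <<EOF
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example-combined
spec:
  host_groups:
    - name: "kubernetes.io/role"
      values:
        - "master"
        - "worker"
  sequential: true
  max_hosts_parallel: 2
  stop_on_failure: false
  max_failure_percentage: 30
  config:
    sysctl:
      - name: "net.ipv6.route.gc_interval"
        value: "30"
EOF
```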
@@ -1,17 +1,22 @@
FROM quay.io/operator-framework/ansible-operator:v0.17.0

-USER root
-RUN dnf install openssh-clients -y
-RUN yum install -y wget && wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm && rpm -ivh epel-release-6-8.noarch.rpm && yum --enablerepo=epel -y install sshpass
-USER ansible-operator

COPY requirements.yml ${HOME}/requirements.yml
RUN ansible-galaxy collection install -r ${HOME}/requirements.yml \
 && chmod -R ug+rwx ${HOME}/.ansible
COPY build/ansible.cfg /etc/ansible/ansible.cfg
COPY watches.yaml ${HOME}/watches.yaml

+USER root
+RUN dnf install openssh-clients -y
+RUN yum install -y wget && wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm && rpm -ivh epel-release-6-8.noarch.rpm && yum --enablerepo=epel -y install sshpass
+USER ansible-operator

COPY roles/ ${HOME}/roles/
-COPY playbook.yaml ${HOME}/
+COPY playbooks/ ${HOME}/playbooks/
+COPY inventory/ ${HOME}/inventory/
+COPY plugins/ ${HOME}/plugins/
+# ansible-runner is unable to pick up custom callback plugins specified in any directory other than /usr/local/lib/python3.6/site-packages/ansible/plugins/callback
+# ansible-runner overrides the ANSIBLE_CALLBACK_PLUGINS environment variable
+# https://github.com/ansible/ansible-runner/blob/stable/1.3.x/ansible_runner/runner_config.py#L178
+COPY plugins/callback/hostconfig_k8_cr_status.py /usr/local/lib/python3.6/site-packages/ansible/plugins/callback/
+RUN mkdir ${HOME}/.ssh
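For reference, the molecule test-local converge playbook later in this change builds this image with plain docker (`docker build -f /build/build/Dockerfile -t {{ image }} /build`); run from the repository root that would look roughly like the following, with an illustrative tag:

```
docker build -f build/Dockerfile -t airship-host-config:testing .
```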
@@ -1,7 +1,8 @@
[defaults]
inventory_plugins = /opt/ansible/plugins/inventory
callback_plugins = /opt/ansible/plugins/callback
stdout_callback = yaml
-callback_whitelist = profile_tasks,timer
+callback_whitelist = profile_tasks,timer,hostconfig_k8_cr_status
module_utils = /opt/ansible/module_utils
roles_path = /opt/ansible/roles
library = /opt/ansible/library
@@ -1,13 +1,28 @@
#!/bin/bash

kubectl label node k8s-master kubernetes.io/role=master
kubectl label node k8s-master-1 kubernetes.io/role=master
kubectl label node k8s-master-2 kubernetes.io/role=master
kubectl label node k8s-master-3 kubernetes.io/role=master
kubectl label node k8s-node-1 kubernetes.io/role=worker
kubectl label node k8s-node-2 kubernetes.io/role=worker
kubectl label node k8s-node-3 kubernetes.io/role=worker
kubectl label node k8s-node-4 kubernetes.io/role=worker
kubectl label node k8s-node-5 kubernetes.io/role=worker

kubectl label node k8s-master topology.kubernetes.io/region=us-east
kubectl label node k8s-node-1 topology.kubernetes.io/region=us-west
kubectl label node k8s-master-1 topology.kubernetes.io/region=us-east
kubectl label node k8s-master-2 topology.kubernetes.io/region=us-west
kubectl label node k8s-master-3 topology.kubernetes.io/region=us-east
kubectl label node k8s-node-1 topology.kubernetes.io/region=us-east
kubectl label node k8s-node-2 topology.kubernetes.io/region=us-east
kubectl label node k8s-node-3 topology.kubernetes.io/region=us-east
kubectl label node k8s-node-4 topology.kubernetes.io/region=us-west
kubectl label node k8s-node-5 topology.kubernetes.io/region=us-west

kubectl label node k8s-master topology.kubernetes.io/zone=us-east-1a
kubectl label node k8s-node-1 topology.kubernetes.io/zone=us-east-1b
kubectl label node k8s-node-2 topology.kubernetes.io/zone=us-east-1c
kubectl label node k8s-master-1 topology.kubernetes.io/zone=us-east-1a
kubectl label node k8s-master-2 topology.kubernetes.io/zone=us-west-1a
kubectl label node k8s-master-3 topology.kubernetes.io/zone=us-east-1b
kubectl label node k8s-node-1 topology.kubernetes.io/zone=us-east-1a
kubectl label node k8s-node-2 topology.kubernetes.io/zone=us-east-1a
kubectl label node k8s-node-3 topology.kubernetes.io/zone=us-east-1b
kubectl label node k8s-node-4 topology.kubernetes.io/zone=us-west-1a
kubectl label node k8s-node-5 topology.kubernetes.io/zone=us-west-1a
@@ -1,6 +0,0 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example
spec:
  message: "Its a big world"
@@ -1,11 +0,0 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example3
spec:
  # Add fields here
  message: "Its a big world"
  host_groups:
    - "us-east"
    - "us-west"
  execution_order: true
@@ -1,12 +0,0 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example4
spec:
  # Add fields here
  message: "Its a big world"
  host_groups:
    - "worker"
    - "master"
  execution_strategy: 1
  execution_order: true
@@ -1,12 +0,0 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example5
spec:
  # Add fields here
  message: "Its a big world"
  host_groups:
    - "us-east-1a"
    - "us-east-1c"
    - "us-east-1b"
  execution_order: true
@@ -4,8 +4,7 @@ metadata:
  name: example1
spec:
  # Add fields here
-  message: "Its a big world"
  host_groups:
-    - "master"
-    - "worker"
-  execution_order: false
+    - name: "kubernetes.io/role"
+      values:
+        - "master"
@@ -0,0 +1,17 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example3
spec:
  # Add fields here
  host_groups:
    - name: "topology.kubernetes.io/zone"
      values:
        - "us-east-1a"
        - "us-east-1b"
    - name: "kubernetes.io/role"
      values:
        - "master"
        - "worker"
  sequential: false
  match_host_groups: true
@@ -0,0 +1,14 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example5
spec:
  # Add fields here
  host_groups:
    - name: "kubernetes.io/role"
      values:
        - "master"
        - "worker"
  sequential: true
  stop_on_failure: false
  max_failure_percentage: 30
airship-host-config/demo_examples/example_parallel.yaml (new file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example7
spec:
  # Add fields here
  host_groups:
    - name: "kubernetes.io/role"
      values:
        - "master"
        - "worker"
  max_hosts_parallel: 2
@@ -4,8 +4,9 @@ metadata:
  name: example2
spec:
  # Add fields here
-  message: "Its a big world"
  host_groups:
-    - "master"
-    - "worker"
-  execution_order: true
+    - name: "kubernetes.io/role"
+      values:
+        - "master"
+        - "worker"
+  sequential: true
@@ -0,0 +1,17 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example4
spec:
  # Add fields here
  host_groups:
    - name: "topology.kubernetes.io/zone"
      values:
        - "us-east-1a"
        - "us-east-1b"
    - name: "kubernetes.io/role"
      values:
        - "master"
        - "worker"
  sequential: true
  match_host_groups: true
@@ -0,0 +1,13 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example6
spec:
  # Add fields here
  host_groups:
    - name: "kubernetes.io/role"
      values:
        - "master"
        - "worker"
  sequential: true
  stop_on_failure: true
airship-host-config/demo_examples/example_sysctl_ulimit.yaml (new file, 23 lines)
@@ -0,0 +1,23 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example8
spec:
  # Add fields here
  host_groups:
    - name: "kubernetes.io/role"
      values:
        - "master"
  sequential: false
  reexecute: false
  config:
    sysctl:
      - name: "net.ipv6.route.gc_interval"
        value: "30"
      - name: "net.netfilter.nf_conntrack_frag6_timeout"
        value: "120"
    ulimit:
      - user: "sirisha"
        type: "hard"
        item: "cpu"
        value: "unlimited"
@@ -9,4 +9,4 @@ roleRef:
subjects:
- kind: ServiceAccount
  name: airship-host-config
-  namespace: default
+  namespace: default
@@ -9,6 +9,8 @@ spec:
  listKind: HostConfigList
  plural: hostconfigs
  singular: hostconfig
+  shortNames:
+  - hc
  scope: Namespaced
  subresources:
    status: {}
@@ -16,7 +18,84 @@ spec:
    openAPIV3Schema:
      type: object
      x-kubernetes-preserve-unknown-fields: true
      properties:
        spec:
          description: "HostConfig spec to perform hostconfig operations."
          type: object
          properties:
            host_groups:
              description: "Array of host_groups used to select hosts on which to perform host configuration."
              type: array
              items:
                type: object
                description: "Node labels given as key/value pairs. Values can be given as a list."
                properties:
                  name:
                    type: string
                    description: "Node label key for host selection."
                  values:
                    type: array
                    description: "Node label values for host selection."
                    items:
                      type: string
                required:
                - name
                - values
            match_host_groups:
              type: boolean
              description: "Set to true to perform an AND operation on all the host_groups specified."
            sequential:
              type: boolean
              description: "Set to true if the host_groups execution needs to happen in sequence."
            reexecute:
              type: boolean
              description: "Set to true if execution needs to happen on the successful nodes as well. Applicable only when at least one of the nodes fails; the execution then repeats for all the nodes."
            stop_on_failure:
              type: boolean
              description: "Set to true to stop the execution on the other nodes as well when any one node's configuration fails."
            max_hosts_parallel:
              type: integer
              description: "Integer stating the maximum number of hosts that can execute at the same time."
            max_failure_percentage:
              type: integer
              description: "Integer percentage stating the maximum percentage of hosts that can fail in every iteration before the execution stops."
            config:
              type: object
              description: "The configuration that needs to be applied on the targeted Kubernetes nodes."
              properties:
                ulimit:
                  description: "An array of ulimit configurations to be applied on the target nodes."
                  type: array
                  items:
                    type: object
                    properties:
                      user:
                        type: string
                      type:
                        type: string
                      item:
                        type: string
                      value:
                        type: string
                    required:
                    - user
                    - value
                    - type
                    - item
                sysctl:
                  description: "An array of sysctl configurations to be applied on the target nodes."
                  type: array
                  items:
                    type: object
                    properties:
                      name:
                        type: string
                      value:
                        type: string
                    required:
                    - name
                    - value
  versions:
  - name: v1alpha1
    served: true
    storage: true
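With the schema published, the API becomes discoverable from the cluster; a standard way to browse the fields (plain kubectl):

```
kubectl explain hostconfig.spec --recursive
```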
@@ -1,7 +0,0 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example-hostconfig
spec:
  # Add fields here
  message: "Its a big world"
@@ -17,13 +17,13 @@ spec:
      containers:
        - name: airship-host-config
          # Replace this with the built image name
-         image: "AIRSHIP_HOSTCONFIG_IMAGE"
-         imagePullPolicy: "PULL_POLICY"
-         securityContext:
-           privileged: true
+         image: "quay.io/sirishagopigiri/airship-host-config"
+         imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - mountPath: /tmp/ansible-operator/runner
              name: runner
+           - mountPath: /opt/ansible/data
+             name: data
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
@@ -35,6 +35,10 @@ spec:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "airship-host-config"
+           - name: ANSIBLE_FILTER_PLUGINS
+             value: /opt/ansible/plugins/filter
+           - name: ANSIBLE_FORKS
+             value: "100"
            - name: ANSIBLE_GATHERING
              value: explicit
            - name: ANSIBLE_INVENTORY
@@ -43,6 +47,10 @@ spec:
              value: "USERNAME"
            - name: PASS
              value: "PASSWORD"
+           - name: SECRET_NAMESPACE
+             value: "default"
      volumes:
        - name: runner
          emptyDir: {}
+       - name: data
+         emptyDir: {}
@@ -8,6 +8,8 @@ rules:
  - ""
  resources:
  - pods
+  - pods/exec
+  - pods/log
  - services
  - services/finalizers
  - endpoints
@@ -70,6 +72,7 @@ rules:
  - hostconfig.airshipit.org
  resources:
  - '*'
+  - inventories
  verbs:
  - create
  - delete
airship-host-config/install_ssh_private_key.sh (new executable file, 24 lines)
@@ -0,0 +1,24 @@
#!/bin/bash

hosts=(`kubectl get nodes -o wide | awk '{print $1}' | sed -e '1d'`)
hosts_ips=(`kubectl get nodes -o wide | awk '{print $6}' | sed -e '1d'`)

get_username_password(){
    if [ -z "$1" ]; then USERNAME="vagrant"; else USERNAME=$1; fi
    if [ -z "$2" ]; then PASSWORD="vagrant"; else PASSWORD=$2; fi
    echo $USERNAME $PASSWORD
}

copy_ssh_keys(){
    read USERNAME PASSWORD < <(get_username_password $1 $2)
    for i in "${!hosts[@]}"
    do
        printf 'Working on host %s with Index %s and having IP %s\n' "${hosts[i]}" "$i" "${hosts_ips[i]}"
        ssh-keygen -q -t rsa -N '' -f ${hosts[i]}
        sshpass -p $PASSWORD ssh-copy-id -o StrictHostKeyChecking=no -i ${hosts[i]} $USERNAME@${hosts_ips[i]}
        kubectl create secret generic ${hosts[i]} --from-literal=username=$USERNAME --from-file=ssh_private_key=${hosts[i]}
        kubectl annotate node ${hosts[i]} secret=${hosts[i]}
    done
}

copy_ssh_keys $1 $2
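Usage sketch: with no arguments the script falls back to vagrant/vagrant; custom credentials are positional. Note that it generates one key pair, secret, and annotation per node, each named after the node:

```
./install_ssh_private_key.sh <username> <password>
```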
@@ -4,6 +4,7 @@ import os
import sys
import argparse
import time
+import base64
import kubernetes.client
from kubernetes.client.rest import ApiException
import yaml
@@ -33,25 +34,62 @@ class KubeInventory(object):
    # Kube driven inventory
    def kube_inventory(self):
        self.inventory = {"group": {"hosts": [], "vars": {}}, "_meta": {"hostvars": {}}}
-        self.set_ssh_keys()
        self.get_nodes()

-    # Sets the ssh username and password using the pod environment variables
-    def set_ssh_keys(self):
-        self.inventory["group"]["vars"]["ansible_ssh_user"] = os.environ.get("USER") if "USER" in os.environ else "kubernetes"
-        if "PASS" in os.environ:
-            self.inventory["group"]["vars"]["ansible_ssh_pass"] = os.environ.get("PASS")
-        else:
-            self.inventory["group"]["vars"][
-                "ansible_ssh_private_key_file"
-            ] = "~/.ssh/id_rsa"
+    # Sets the ssh username and password using the secret name given in the label
+    def _set_ssh_keys(self, labels, node_internalip, node_name):
+        namespace = ""
+        if "SECRET_NAMESPACE" in os.environ:
+            namespace = os.environ.get("SECRET_NAMESPACE")
+        else:
+            namespace = "default"
+        if "secret" in labels.keys():
+            try:
+                secret_value = self.api_instance.read_namespaced_secret(labels["secret"], namespace)
+            except ApiException as e:
+                return False
+            if "username" in secret_value.data.keys():
+                username = (base64.b64decode(secret_value.data['username'])).decode("utf-8")
+                self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_user"] = username
+            elif "USER" in os.environ:
+                self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_user"] = os.environ.get("USER")
+            else:
+                self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_user"] = 'kubernetes'
+            if "password" in secret_value.data.keys():
+                password = (base64.b64decode(secret_value.data['password'])).decode("utf-8")
+                self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_pass"] = password
+            elif "ssh_private_key" in secret_value.data.keys():
+                private_key = (base64.b64decode(secret_value.data['ssh_private_key'])).decode("utf-8")
+                fileName = "/opt/ansible/.ssh/" + node_name
+                with open(os.open(fileName, os.O_CREAT | os.O_WRONLY, 0o644), 'w') as f:
+                    f.write(private_key)
+                    f.close()
+                os.chmod(fileName, 0o600)
+                self.inventory["_meta"]["hostvars"][node_internalip][
+                    "ansible_ssh_private_key_file"] = fileName
+            elif "PASS" in os.environ:
+                self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_pass"] = os.environ.get("PASS")
+            else:
+                self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_pass"] = 'kubernetes'
+        else:
+            return False
+        return True
+
+    # Sets default username and password from environment variables or some default username/password
+    def _set_default_ssh_keys(self, node_internalip):
+        if "USER" in os.environ:
+            self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_user"] = os.environ.get("USER")
+        else:
+            self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_user"] = 'kubernetes'
+        if "PASS" in os.environ:
+            self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_pass"] = os.environ.get("PASS")
+        else:
+            self.inventory["_meta"]["hostvars"][node_internalip]["ansible_ssh_pass"] = 'kubernetes'
+        return

    # Gets the Kubernetes nodes labels and annotations and builds the inventory
    # Also groups the Kubernetes nodes based on the labels and annotations
    def get_nodes(self):
        #label_selector = "kubernetes.io/role="+role

        try:
            nodes = self.api_instance.list_node().to_dict()[
                "items"
@@ -70,25 +108,31 @@ class KubeInventory(object):
                self.inventory["group"]["hosts"].append(node_internalip)

                self.inventory["_meta"]["hostvars"][node_internalip] = {}
+                node_name = node["metadata"]["name"]
+                self.inventory["_meta"]["hostvars"][node_internalip][
+                    "kube_node_name"] = node_name
+                if not self._set_ssh_keys(node["metadata"]["annotations"], node_internalip, node_name):
+                    self._set_default_ssh_keys(node_internalip)
+                # The annotations are not of interest, so they are not added to the ansible host groups;
+                # only the host variables are updated with the annotations
+                for key, value in node["metadata"]["annotations"].items():
+                    self.inventory["_meta"]["hostvars"][node_internalip][key] = value
                # Add groups based on labels and also update the host variables
                for key, value in node["metadata"]["labels"].items():
                    self.inventory["_meta"]["hostvars"][node_internalip][key] = value
                    if key in interested_labels_annotations:
                        if value not in self.inventory.keys():
                            self.inventory[value] = {"hosts": [], "vars": {}}
                        if node_internalip not in self.inventory[value]["hosts"]:
                            self.inventory[value]["hosts"].append(node_internalip)
+                        if key+'_'+value not in self.inventory.keys():
+                            self.inventory[key+'_'+value] = {"hosts": [], "vars": {}}
+                        if node_internalip not in self.inventory[key+'_'+value]["hosts"]:
+                            self.inventory[key+'_'+value]["hosts"].append(node_internalip)
                # Add groups based on node info and also update the host variables
                for key, value in node['status']['node_info'].items():
                    self.inventory["_meta"]["hostvars"][node_internalip][key] = value
                    if key in interested_labels_annotations:
                        if value not in self.inventory.keys():
                            self.inventory[value] = {"hosts": [], "vars": {}}
                        if node_internalip not in self.inventory[value]["hosts"]:
                            self.inventory[value]["hosts"].append(node_internalip)
-                self.inventory["_meta"]["hostvars"][node_internalip][
-                    "kube_node_name"
-                ] = node["metadata"]["name"]
+                        if key+'_'+value not in self.inventory.keys():
+                            self.inventory[key+'_'+value] = {"hosts": [], "vars": {}}
+                        if node_internalip not in self.inventory[key+'_'+value]["hosts"]:
+                            self.inventory[key+'_'+value]["hosts"].append(node_internalip)
        return

    def empty_inventory(self):
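Ansible dynamic inventory scripts are conventionally invoked with --list; assuming this script follows that convention (the argparse handling falls outside the hunks shown), the emitted structure would look roughly like this, with illustrative IPs:

```
python3 dynamic_inventory.py --list
# {
#   "group": {"hosts": ["10.0.0.11"], "vars": {}},
#   "master": {"hosts": ["10.0.0.11"], "vars": {}},
#   "kubernetes.io/role_master": {"hosts": ["10.0.0.11"], "vars": {}},
#   "_meta": {"hostvars": {"10.0.0.11": {"kube_node_name": "k8s-master", "ansible_ssh_user": "vagrant"}}}
# }
```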
@@ -1,24 +0,0 @@
---
- name: Converge
  hosts: localhost
  connection: local
  gather_facts: no
  collections:
    - community.kubernetes

  tasks:
    - name: Ensure operator image is set
      fail:
        msg: |
          You must specify the OPERATOR_IMAGE environment variable in order to run the
          'cluster' scenario
      when: not operator_image

    - name: Create the Operator Deployment
      k8s:
        namespace: '{{ namespace }}'
        definition: "{{ lookup('template', '/'.join([template_dir, 'operator.yaml.j2'])) }}"
        wait: yes
      vars:
        image: '{{ operator_image }}'
        pull_policy: '{{ operator_pull_policy }}'
@@ -1,6 +0,0 @@
---
- name: Create
  hosts: localhost
  connection: local
  gather_facts: false
  tasks: []
|
||||
---
|
||||
- name: Destroy
|
||||
hosts: localhost
|
||||
connection: local
|
||||
gather_facts: false
|
||||
no_log: "{{ molecule_no_log }}"
|
||||
collections:
|
||||
- community.kubernetes
|
||||
|
||||
tasks:
|
||||
- name: Delete namespace
|
||||
k8s:
|
||||
api_version: v1
|
||||
kind: Namespace
|
||||
name: '{{ namespace }}'
|
||||
state: absent
|
||||
wait: yes
|
||||
|
||||
- name: Delete RBAC resources
|
||||
k8s:
|
||||
definition: "{{ lookup('template', '/'.join([deploy_dir, item])) }}"
|
||||
namespace: '{{ namespace }}'
|
||||
state: absent
|
||||
wait: yes
|
||||
with_items:
|
||||
- role.yaml
|
||||
- role_binding.yaml
|
||||
- service_account.yaml
|
||||
|
||||
- name: Delete Custom Resource Definition
|
||||
k8s:
|
||||
definition: "{{ lookup('file', '/'.join([deploy_dir, 'crds/hostconfig.airshipit.org_hostconfigs_crd.yaml'])) }}"
|
||||
state: absent
|
||||
wait: yes
|
@@ -1,35 +0,0 @@
---
dependency:
  name: galaxy
driver:
  name: delegated
lint: |
  set -e
  yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" .
platforms:
  - name: cluster
    groups:
      - k8s
provisioner:
  name: ansible
  lint: |
    set -e
    ansible-lint
  inventory:
    group_vars:
      all:
        namespace: ${TEST_OPERATOR_NAMESPACE:-osdk-test}
    host_vars:
      localhost:
        ansible_python_interpreter: '{{ ansible_playbook_python }}'
        deploy_dir: ${MOLECULE_PROJECT_DIRECTORY}/deploy
        template_dir: ${MOLECULE_PROJECT_DIRECTORY}/molecule/templates
        operator_image: ${OPERATOR_IMAGE:-""}
        operator_pull_policy: ${OPERATOR_PULL_POLICY:-"Always"}
  env:
    K8S_AUTH_KUBECONFIG: ${KUBECONFIG:-"~/.kube/config"}
verifier:
  name: ansible
  lint: |
    set -e
    ansible-lint
@@ -1,31 +0,0 @@
---
- name: Prepare
  hosts: localhost
  connection: local
  gather_facts: false
  no_log: "{{ molecule_no_log }}"
  collections:
    - community.kubernetes

  vars:
    deploy_dir: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') }}/deploy"

  tasks:
    - name: Create Custom Resource Definition
      k8s:
        definition: "{{ lookup('file', '/'.join([deploy_dir, 'crds/hostconfig.airshipit.org_hostconfigs_crd.yaml'])) }}"

    - name: Create namespace
      k8s:
        api_version: v1
        kind: Namespace
        name: '{{ namespace }}'

    - name: Create RBAC resources
      k8s:
        definition: "{{ lookup('template', '/'.join([deploy_dir, item])) }}"
        namespace: '{{ namespace }}'
      with_items:
        - role.yaml
        - role_binding.yaml
        - service_account.yaml
@@ -1,35 +0,0 @@
---
# This is an example playbook to execute Ansible tests.
- name: Verify
  hosts: localhost
  connection: local
  gather_facts: no
  collections:
    - community.kubernetes

  vars:
    custom_resource: "{{ lookup('template', '/'.join([deploy_dir, 'crds/hostconfig.airshipit.org_v1alpha1_hostconfig_cr.yaml'])) | from_yaml }}"

  tasks:
    - name: Create the hostconfig.airshipit.org/v1alpha1.HostConfig and wait for reconciliation to complete
      k8s:
        state: present
        namespace: '{{ namespace }}'
        definition: '{{ custom_resource }}'
        wait: yes
        wait_timeout: 300
        wait_condition:
          type: Running
          reason: Successful
          status: "True"

    - name: Get Pods
      k8s_info:
        api_version: v1
        kind: Pod
        namespace: '{{ namespace }}'
      register: pods

    - name: Example assertion
      assert:
        that: (pods | length) > 0
@@ -1,6 +0,0 @@
---
- name: Converge
  hosts: localhost
  connection: local
  roles:
    - hostconfig
@@ -1,45 +0,0 @@
---
dependency:
  name: galaxy
driver:
  name: docker
lint: |
  set -e
  yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" .
platforms:
  - name: kind-default
    groups:
      - k8s
    image: bsycorp/kind:latest-${KUBE_VERSION:-1.17}
    privileged: True
    override_command: no
    exposed_ports:
      - 8443/tcp
      - 10080/tcp
    published_ports:
      - 0.0.0.0:${TEST_CLUSTER_PORT:-9443}:8443/tcp
    pre_build_image: yes
provisioner:
  name: ansible
  log: True
  lint: |
    set -e
    ansible-lint
  inventory:
    group_vars:
      all:
        namespace: ${TEST_OPERATOR_NAMESPACE:-osdk-test}
        kubeconfig_file: ${MOLECULE_EPHEMERAL_DIRECTORY}/kubeconfig
    host_vars:
      localhost:
        ansible_python_interpreter: '{{ ansible_playbook_python }}'
  env:
    K8S_AUTH_KUBECONFIG: ${MOLECULE_EPHEMERAL_DIRECTORY}/kubeconfig
    KUBECONFIG: ${MOLECULE_EPHEMERAL_DIRECTORY}/kubeconfig
    ANSIBLE_ROLES_PATH: ${MOLECULE_PROJECT_DIRECTORY}/roles
    KIND_PORT: '${TEST_CLUSTER_PORT:-9443}'
verifier:
  name: ansible
  lint: |
    set -e
    ansible-lint
@@ -1,27 +0,0 @@
---
- name: Prepare
  hosts: k8s
  gather_facts: no
  tasks:
    - name: Fetch the kubeconfig
      fetch:
        dest: '{{ kubeconfig_file }}'
        flat: yes
        src: /root/.kube/config

    - name: Change the kubeconfig port to the proper value
      replace:
        regexp: '8443'
        replace: "{{ lookup('env', 'KIND_PORT') }}"
        path: '{{ kubeconfig_file }}'
      delegate_to: localhost

    - name: Wait for the Kubernetes API to become available (this could take a minute)
      uri:
        url: "http://localhost:10080/kubernetes-ready"
        status_code: 200
        validate_certs: no
      register: result
      until: (result.status|default(-1)) == 200
      retries: 60
      delay: 5
@@ -1,18 +0,0 @@
---
- name: Verify
  hosts: localhost
  connection: local
  tasks:
    - name: Get all pods in {{ namespace }}
      k8s_info:
        api_version: v1
        kind: Pod
        namespace: '{{ namespace }}'
      register: pods

    - name: Output pods
      debug: var=pods

    - name: Example assertion
      assert:
        that: true
@@ -1,40 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: airship-host-config
spec:
  replicas: 1
  selector:
    matchLabels:
      name: airship-host-config
  template:
    metadata:
      labels:
        name: airship-host-config
    spec:
      serviceAccountName: airship-host-config
      containers:
        - name: airship-host-config
          # Replace this with the built image name
          image: "{{ image }}"
          imagePullPolicy: "{{ pull_policy }}"
          volumeMounts:
            - mountPath: /tmp/ansible-operator/runner
              name: runner
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "airship-host-config"
            - name: ANSIBLE_GATHERING
              value: explicit
      volumes:
        - name: runner
          emptyDir: {}
@@ -1,42 +0,0 @@
---
- name: Build Operator in Kubernetes docker container
  hosts: k8s
  collections:
    - community.kubernetes

  vars:
    image: hostconfig.airshipit.org/airship-host-config:testing

  tasks:
    # using command so we don't need to install any dependencies
    - name: Get existing image hash
      command: docker images -q {{ image }}
      register: prev_hash_raw
      changed_when: false

    - name: Build Operator Image
      command: docker build -f /build/build/Dockerfile -t {{ image }} /build
      register: build_cmd
      changed_when: not hash or (hash and hash not in cmd_out)
      vars:
        hash: '{{ prev_hash_raw.stdout }}'
        cmd_out: '{{ "".join(build_cmd.stdout_lines[-2:]) }}'

- name: Converge
  hosts: localhost
  connection: local
  collections:
    - community.kubernetes

  vars:
    image: hostconfig.airshipit.org/airship-host-config:testing
    operator_template: "{{ '/'.join([template_dir, 'operator.yaml.j2']) }}"

  tasks:
    - name: Create the Operator Deployment
      k8s:
        namespace: '{{ namespace }}'
        definition: "{{ lookup('template', operator_template) }}"
        wait: yes
      vars:
        pull_policy: Never
@@ -1,47 +0,0 @@
---
dependency:
  name: galaxy
driver:
  name: docker
lint: |
  set -e
  yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" .
platforms:
  - name: kind-test-local
    groups:
      - k8s
    image: bsycorp/kind:latest-${KUBE_VERSION:-1.17}
    privileged: true
    override_command: false
    exposed_ports:
      - 8443/tcp
      - 10080/tcp
    published_ports:
      - 0.0.0.0:${TEST_CLUSTER_PORT:-10443}:8443/tcp
    pre_build_image: true
    volumes:
      - ${MOLECULE_PROJECT_DIRECTORY}:/build:Z
provisioner:
  name: ansible
  log: true
  lint:
    name: ansible-lint
  inventory:
    group_vars:
      all:
        namespace: ${TEST_OPERATOR_NAMESPACE:-osdk-test}
        kubeconfig_file: ${MOLECULE_EPHEMERAL_DIRECTORY}/kubeconfig
    host_vars:
      localhost:
        ansible_python_interpreter: '{{ ansible_playbook_python }}'
        template_dir: ${MOLECULE_PROJECT_DIRECTORY}/molecule/templates
        deploy_dir: ${MOLECULE_PROJECT_DIRECTORY}/deploy
  env:
    K8S_AUTH_KUBECONFIG: ${MOLECULE_EPHEMERAL_DIRECTORY}/kubeconfig
    KUBECONFIG: ${MOLECULE_EPHEMERAL_DIRECTORY}/kubeconfig
    ANSIBLE_ROLES_PATH: ${MOLECULE_PROJECT_DIRECTORY}/roles
    KIND_PORT: '${TEST_CLUSTER_PORT:-10443}'
verifier:
  name: ansible
  lint:
    name: ansible-lint
@@ -1,3 +0,0 @@
---
- import_playbook: ../default/prepare.yml
- import_playbook: ../cluster/prepare.yml
@@ -1,2 +0,0 @@
---
- import_playbook: ../cluster/verify.yml
@@ -1,62 +0,0 @@
---
# playbook.yaml

# Ansible play to initialize custom variables.
# The block below executes only when execution_order is set to true,
# which tells Ansible to execute the host_groups sequentially.
- name: DISPLAY THE INVENTORY VARS
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Set Serial variable
      block:
        ## Calculates the serial variable based on the host_groups defined in the Kubernetes HostConfig CR object.
        ## Uses the custom host_config_serial plugin and returns a list of integers.
        ## These integer values correspond to the number of hosts in each host group given in the Kubernetes HostConfig CR object.
        ## If we have a 3 master and 5 worker node setup, and in the Kubernetes HostConfig CR object we pass the
        ## host_groups as master and worker, then using the host_config_serial plugin the variable returned
        ## would be the list [3, 5], so that all 3 masters execute in the first iteration and
        ## the 5 workers execute in the second iteration.
        ## This takes the groups parameter set by dynamic_inventory.py as an argument.
        - set_fact:
            host_config_serial_variable: "{{ host_groups|host_config_serial(groups) }}"
        ## This custom filter plugin is used to further break the host_config_serial variable into equal lengths
        ## as specified in the Kubernetes HostConfig CR object.
        ## If we have a 3 master and 5 worker node setup, and in the Kubernetes HostConfig CR object we pass the
        ## host_groups as master and worker with serial_strategy set to 2, then this custom filter returns
        ## the following list of integers, where the [3, 5] list is further split based on the
        ## serial_strategy (here 2):
        ## host_config_serial_variable is [2, 1, 2, 2, 1]
        ## This task is executed only when execution_strategy is defined in the HostConfig CR object.
        ## When executed it overrides the host_config_serial_variable value set by the previous task.
        ## This takes host_groups and groups as parameters.
        - set_fact:
            host_config_serial_variable: "{{ execution_strategy|host_config_serial_strategy(host_groups, groups) }}"
          when: execution_strategy is defined
        - debug:
            msg: "Serial Variable {{ host_config_serial_variable }}"
      when: execution_order is true and host_groups is defined

# The tasks below get executed when execution_order is set to true and the order of execution should be
# considered while executing.
# They use the host_config_serial_variable value set by the previous block and
# execute the number of hosts set in host_config_serial_variable at every iteration.
- name: Execute Roles based on hosts
  hosts: "{{ host_groups | default('all')}}"
  serial: "{{ hostvars['localhost']['host_config_serial_variable'] | default('100%') }}"
  gather_facts: no
  tasks:
    - import_role:
        name: hostconfig
      when: execution_order is true

# Executed when execution_order is set to false or not set.
# This is the default execution flow, where Ansible gets all the hosts available in host_groups
# and executes them in parallel.
- name: Execute Roles based on hosts
  hosts: "{{ host_groups | default('all')}}"
  gather_facts: no
  tasks:
    - import_role:
        name: hostconfig
      when: execution_order is undefined or execution_order is false
airship-host-config/playbooks/create_playbook.yaml (new file, 87 lines)
@@ -0,0 +1,87 @@
---
# playbook.yaml

# Ansible play to initialize custom variables.
# The blocks below help in setting the Ansible variables
# according to the CR object passed.
- name: DISPLAY THE INVENTORY VARS
  collections:
    - community.kubernetes
    - operator_sdk.util
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Set Local Variables
      block:
        - import_role:
            name: setvariables

# This play gets executed when stop_on_failure is undefined or set to false,
# stating that the playbook execution shouldn't stop even if tasks fail on some hosts.
# The tasks below use the hostconfig_serial_variable value set by the previous block and
# execute the number of hosts set in hostconfig_serial_variable at every iteration.
- name: Execute Roles based on hosts and based on the Failure condition
  collections:
    - community.kubernetes
    - operator_sdk.util
  hosts: "{{ hostvars['localhost']['hostconfig_host_groups'] | default('all')}}"
  serial: "{{ hostvars['localhost']['hostconfig_serial_variable'] | default('100%') }}"
  any_errors_fatal: "{{ stop_on_failure|default(false) }}"
  gather_facts: no
  tasks:
    - name: HostConfig Block
      block:
        - import_role:
            name: sysctl
          when: config.sysctl is defined
        - import_role:
            name: ulimit
          when: config.ulimit is defined
        - name: Update the file for success hosts
          local_action: lineinfile line={{ inventory_hostname }} create=yes dest=/opt/ansible/data/hostconfig/{{ meta.name }}/success_hosts
          throttle: 1
      rescue:
        - name: Update the file for Failed hosts
          local_action: lineinfile line={{ inventory_hostname }} create=yes dest=/opt/ansible/data/hostconfig/{{ meta.name }}/failed_hosts
          throttle: 1
      when: ((stop_on_failure is undefined or stop_on_failure is defined) and max_failure_percentage is undefined) or (stop_on_failure is true and max_failure_percentage is defined)

# The play below executes the hostconfig roles only when stop_on_failure is false
# and the max_failure_percentage variable is defined.
# The tasks below use the hostconfig_serial_variable value set by the previous block and
# execute the number of hosts set in hostconfig_serial_variable at every iteration.
- name: Execute Roles based on hosts and based on percentage of Failure
  hosts: "{{ hostvars['localhost']['hostconfig_host_groups'] | default('all')}}"
  serial: "{{ hostvars['localhost']['hostconfig_serial_variable'] | default('100%') }}"
  max_fail_percentage: "{{ hostvars['localhost']['max_failure_percentage'] }}"
  gather_facts: no
  tasks:
    - name: Max Percentage Block
      block:
        - import_role:
            name: sysctl
          when: config.sysctl is defined
        - import_role:
            name: ulimit
          when: config.ulimit is defined
      when: (stop_on_failure is false or stop_on_failure is undefined) and (max_failure_percentage is defined)

# Update the K8s CR status.
- name: Update CR Status
  collections:
    - community.kubernetes
    - operator_sdk.util
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Update CR Status
      block:
        - name: Write results to resource status
          k8s_status:
            api_version: hostconfig.airshipit.org/v1alpha1
            kind: HostConfig
            name: '{{ meta.name }}'
            namespace: '{{ meta.namespace }}'
            status:
              hostConfigStatus: "{{ hostConfigStatus }}"
          when: hostConfigStatus is defined
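The success/failure bookkeeping above lands on the operator pod's data volume, so it can be inspected directly; a sketch, assuming the Deployment name airship-host-config from deploy/operator.yaml and a CR named example5:

```
kubectl exec deploy/airship-host-config -- cat /opt/ansible/data/hostconfig/example5/success_hosts
```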
airship-host-config/playbooks/delete_playbook.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
---
- name: Delete LocalHosts
  hosts: localhost
  gather_facts: no
  tasks:
    - name: delete the files
      file:
        path: "/opt/ansible/data/hostconfig/{{ meta.name }}"
        state: absent
      register: output
airship-host-config/plugins/callback/hostconfig_k8_cr_status.py (new file, 112 lines)
@@ -0,0 +1,112 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = '''
    callback: hostconfig_k8_cr_status
    callback_type: aggregate
    requirements:
      - whitelist in configuration
    short_description: Aggregates task results for the HostConfig CR status
    version_added: "2.0"
    description:
      - This callback collects per-host, per-task results so they can be written to the HostConfig CR status.
'''

from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    """
    This callback module collects task results per Kubernetes host and
    accumulates them in the hostConfigStatus variable on localhost.
    """
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'hostconfig_k8_cr_status'
    CALLBACK_NEEDS_WHITELIST = True

    def __init__(self):
        super(CallbackModule, self).__init__()

    def v2_playbook_on_play_start(self, play):
        self.vm = play.get_variable_manager()
        self.skip_status_tasks = ["debug", "k8s_status", "local_action", "set_fact", "k8s_info", "lineinfile"]

    def runner_on_failed(self, host, result, ignore_errors=False):
        self.v2_runner_on_failed(result, ignore_errors=False)

    def runner_on_ok(self, host, res):
        self.v2_runner_on_ok(res)

    def v2_runner_on_failed(self, result, ignore_errors=False):
        self.set_host_config_status(result, True)
        return

    def v2_runner_on_ok(self, result):
        if result._task_fields["action"] in self.skip_status_tasks:
            return
        self.set_host_config_status(result)
        return

    def set_host_config_status(self, result, failed=False):
        hostname = result._host.name
        task_name = result.task_name
        task_result = result._result
        status = dict()
        hostConfigStatus = dict()
        host_vars = self.vm.get_vars()['hostvars'][hostname]
        k8_hostname = ''
        if 'kubernetes.io/hostname' in host_vars.keys():
            k8_hostname = host_vars['kubernetes.io/hostname']
        else:
            k8_hostname = hostname
        if 'hostConfigStatus' in self.vm.get_vars()['hostvars']['localhost'].keys():
            hostConfigStatus = self.vm.get_vars()['hostvars']['localhost']['hostConfigStatus']
        if k8_hostname not in hostConfigStatus.keys():
            hostConfigStatus[k8_hostname] = dict()
        if task_name in hostConfigStatus[k8_hostname].keys():
            status[task_name] = hostConfigStatus[k8_hostname][task_name]
        status[task_name] = dict()
        if 'stdout' in task_result.keys() and task_result['stdout'] != '':
            status[task_name]['stdout'] = task_result['stdout']
        if 'stderr' in task_result.keys() and task_result['stderr'] != '':
            status[task_name]['stderr'] = task_result['stderr']
        if 'msg' in task_result.keys() and task_result['msg'] != '':
            status['msg'] = task_result['msg'].replace('\n', ' ')
        if 'results' in task_result.keys() and len(task_result['results']) != 0:
            status[task_name]['results'] = list()
            for res in task_result['results']:
                stat = dict()
                if 'stdout' in res.keys() and res['stdout']:
                    stat['stdout'] = res['stdout']
                if 'stderr' in res.keys() and res['stderr']:
                    stat['stderr'] = res['stderr']
                if 'module_stdout' in res.keys() and res['module_stdout']:
                    stat['module_stdout'] = res['module_stdout']
                if 'module_stderr' in res.keys() and res['module_stderr']:
                    stat['module_stderr'] = res['module_stderr']
                if 'msg' in res.keys() and res['msg']:
                    stat['msg'] = res['msg'].replace('\n', ' ')
                if 'item' in res.keys() and res['item']:
                    stat['item'] = res['item']
                if res['failed']:
                    stat['status'] = "Failed"
                else:
                    stat['status'] = "Successful"
                    stat['stderr'] = ""
                    stat['module_stderr'] = ""
                if "msg" not in stat.keys():
                    stat['msg'] = ""
                status[task_name]['results'].append(stat)
        if failed:
            status[task_name]['status'] = "Failed"
        else:
            status[task_name]['status'] = "Successful"
            # As the k8s_status module merges the current and previous status, override any previous failure messages
            # https://github.com/fabianvf/ansible-k8s-status-module/blob/master/k8s_status.py#L322
            status[task_name]['stderr'] = ""
        if "msg" not in status[task_name].keys():
            status[task_name]['msg'] = ""
        hostConfigStatus[k8_hostname].update(status)
        self.vm.set_host_variable('localhost', 'hostConfigStatus', hostConfigStatus)
        self._display.display(str(status))
        return
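Since the callback accumulates hostConfigStatus and the playbook writes it through k8s_status, the per-task results end up on the CR's status subresource; to read them back (the CR name is a placeholder):

```
kubectl get hostconfig <name> -o jsonpath='{.status.hostConfigStatus}'
```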
@@ -1,27 +0,0 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json

# Calculates the number of hosts in each group.
# The groups of interest are defined using host_groups.
# Returns a list of integers.
def host_config_serial(host_groups, groups):
    serial_list = list()
    if type(host_groups) != list:
        return ''
    for i in host_groups:
        if i in groups.keys():
            serial_list.append(str(len(groups[i])))
    return str(serial_list)


class FilterModule(object):
    ''' HostConfig Serial plugin for ansible-operator '''

    def filters(self):
        return {
            'host_config_serial': host_config_serial
        }
@@ -1,30 +0,0 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json

# Further divides the host_config_serial variable into a new list
# so that in each iteration there will be no more than
# strategy (an int) hosts executing.
def host_config_serial_strategy(strategy, host_groups, groups):
    serial_list = list()
    if type(strategy) != int and type(host_groups) != list:
        return ''
    for i in host_groups:
        if i in groups.keys():
            length = len(groups[i])
            serial_list += int(length/strategy) * [strategy]
            if length%strategy != 0:
                serial_list.append(length%strategy)
    return str(serial_list)


class FilterModule(object):
    ''' HostConfig Serial Strategy plugin for ansible-operator to calculate the serial variable '''

    def filters(self):
        return {
            'host_config_serial_strategy': host_config_serial_strategy
        }
88
airship-host-config/plugins/filter/hostconfig_host_groups.py
Normal file
88
airship-host-config/plugins/filter/hostconfig_host_groups.py
Normal file
@ -0,0 +1,88 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import itertools
import os

# This plugin calculates the list of lists of hosts on which the playbook
# should be executed, in the sequence given by the host_groups variable.
# The AND and OR conditions on the host_groups variable are evaluated
# based on the match_host_groups variable.
# Returns: [[192.168.1.12, 192.168.1.11], [192.168.1.14], [192.168.1.5]]


def host_groups_get_keys(host_groups):
    keys = list()
    values = list()
    for hg in host_groups:
        keys.append(hg['name'])
        values.append(hg['values'])
    print(keys)
    print(values)
    return keys, values


def host_groups_combinations(host_groups):
    keys, values = host_groups_get_keys(host_groups)
    for instance in itertools.product(*values):
        yield dict(zip(keys, instance))


def removeSuccessHosts(hostGroups, hostConfigName):
    filename = '/opt/ansible/data/hostconfig/'+hostConfigName+'/success_hosts'
    print(filename)
    if os.path.isfile(filename):
        hosts = list()
        with open(filename) as f:
            hosts = [line.rstrip() for line in f]
        print(hosts)
        for host in hosts:
            for hostGroup in hostGroups:
                if host in hostGroup:
                    hostGroup.remove(host)
        print(hostGroups)
    return hostGroups


def hostconfig_host_groups(host_groups, groups, hostConfigName, match_host_groups, reexecute):
    host_groups_list = list()
    host_group_list = list()
    if type(host_groups) != list:
        return ''
    if match_host_groups:
        hgs_list = list()
        for host_group in host_groups_combinations(host_groups):
            hg = list()
            for k, v in host_group.items():
                hg.append(k+'_'+v)
            hgs_list.append(hg)
        for hgs in hgs_list:
            host_group = groups[hgs[0]]
            for i in range(1, len(hgs)):
                host_group = list(set(host_group) & set(groups[hgs[i]]))
            host_groups_list.append(host_group)
    else:
        for host_group in host_groups:
            for value in host_group["values"]:
                key = host_group["name"]
                hg = list()
                if key+'_'+value in groups.keys():
                    if not host_group_list:
                        hg = groups[key+'_'+value]
                        host_group_list = hg.copy()
                    else:
                        hg = list((set(groups[key+'_'+value])) - (set(host_group_list) & set(groups[key+'_'+value])))
                        host_group_list.extend(hg)
                    host_groups_list.append(hg)
                else:
                    return "Invalid Host Groups "+key+" and "+value
    if not reexecute:
        return str(removeSuccessHosts(host_groups_list, hostConfigName))
    return str(host_groups_list)


class FilterModule(object):
    ''' HostConfig Host Groups filter plugin for ansible-operator '''

    def filters(self):
        return {
            'hostconfig_host_groups': hostconfig_host_groups
        }
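To make the filter's contract concrete, a hedged sketch of its input and output (the inventory group names of the form `<name>_<value>` and the host names are assumptions for illustration; the real invocation appears in the setvariables role later in this diff):

```
# host_groups as passed through the CR:
host_groups:
  - name: kubernetes.io/role
    values:
      - master
      - worker
# With match_host_groups false (OR), assuming inventory groups
# kubernetes.io/role_master = [m1] and kubernetes.io/role_worker = [w1, w2]:
#   hostconfig_host_groups -> "[['m1'], ['w1', 'w2']]"
# With match_host_groups true (AND), only hosts present in every
# matching group survive the set intersection.
```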
@ -0,0 +1,25 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

# Converts the list of lists of hosts into a single flat list of hosts,
# which is what the ansible playbook accepts for execution
# Returns: [192.168.1.12, 192.168.1.11, 192.168.1.14, 192.168.1.5]


def hostconfig_host_groups_to_list(hostconfig_host_groups):
    host_groups_list = list()
    if type(hostconfig_host_groups) != list:
        return ''
    for hg in hostconfig_host_groups:
        host_groups_list.extend(hg)
    return str(host_groups_list)


class FilterModule(object):
    ''' HostConfig Host Groups to List filter plugin for ansible-operator '''

    def filters(self):
        return {
            'hostconfig_host_groups_to_list': hostconfig_host_groups_to_list
        }
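A quick sketch of the flattening (addresses hypothetical):

```
# Input:  [['192.168.1.12', '192.168.1.11'], ['192.168.1.14']]
# Output: "['192.168.1.12', '192.168.1.11', '192.168.1.14']"
```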
@ -0,0 +1,42 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

# Further divides the host_config_serial variable into a new list
# so that no more than max_hosts_parallel (an int) hosts execute
# in any one iteration.
# If we have 3 masters and 5 workers, the labels sent are masters and workers,
# and max_hosts_parallel is 2:
# Returns: [2, 2, 2, 2] if sequential is false
# Returns: [2, 1, 2, 2, 1] if sequential is true


def hostconfig_max_hosts_parallel(max_hosts_parallel, hostconfig_host_groups, sequential=False):
    parallel_list = list()
    if type(max_hosts_parallel) != int and type(hostconfig_host_groups) != list and type(sequential) != bool:
        return ''
    if sequential:
        for hg in hostconfig_host_groups:
            length = len(hg)
            parallel_list += int(length/max_hosts_parallel) * [max_hosts_parallel]
            if length % max_hosts_parallel != 0:
                parallel_list.append(length % max_hosts_parallel)
    else:
        hgs = list()
        for hg in hostconfig_host_groups:
            hgs.extend(hg)
        length = len(hgs)
        parallel_list += int(length/max_hosts_parallel) * [max_hosts_parallel]
        if length % max_hosts_parallel != 0:
            parallel_list.append(length % max_hosts_parallel)
    return str(parallel_list)


class FilterModule(object):
    ''' HostConfig Max Hosts in Parallel plugin for ansible-operator to calculate the ansible serial variable '''

    def filters(self):
        return {
            'hostconfig_max_hosts_parallel': hostconfig_max_hosts_parallel
        }
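Spelling out the arithmetic behind the docstring's example (3 masters, 5 workers, max_hosts_parallel of 2):

```
# sequential true:  per group  3 hosts -> [2, 1],  5 hosts -> [2, 2, 1]
#                   concatenated serial list: [2, 1, 2, 2, 1]
# sequential false: groups flattened to 8 hosts -> [2, 2, 2, 2]
```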
26
airship-host-config/plugins/filter/hostconfig_sequential.py
Normal file
@ -0,0 +1,26 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

# Calculates the number of hosts in each group;
# the groups of interest are given by the host_groups variable.
# Returns a list of integers, e.g. [2, 1, 3], based on the host_groups variable


def hostconfig_sequential(hostconfig_host_groups, groups):
    seq_list = list()
    if type(hostconfig_host_groups) != list:
        return ''
    for host_group in hostconfig_host_groups:
        if len(host_group) != 0:
            seq_list.append(len(host_group))
    return str(seq_list)


class FilterModule(object):
    ''' HostConfig Sequential plugin for ansible-operator '''

    def filters(self):
        return {
            'hostconfig_sequential': hostconfig_sequential
        }
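For example (host names hypothetical):

```
# Input:  [['m1', 'm2', 'm3'], ['w1', 'w2']]
# Output: "[3, 2]"  -- ansible runs 3 hosts in the first batch, then 2
```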
@ -1,5 +1,3 @@
---
collections:
  - name: community.kubernetes
    version: "<1.0.0"
  - community.kubernetes
  - operator_sdk.util
@ -1,43 +0,0 @@
Role Name
=========

A brief description of the role goes here.

Requirements
------------

Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance,
if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.

Role Variables
--------------

A description of the settable variables for this role should go here, including any variables that are in
defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables
that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well

Dependencies
------------

A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set
for other roles, or variables that are used from other roles.

Example Playbook
----------------

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for
users too:

    - hosts: servers
      roles:
         - { role: username.rolename, x: 42 }

License
-------

BSD

Author Information
------------------

An optional section for the role authors to include contact information, or a website (HTML is not allowed).
@ -1,3 +0,0 @@
---
# defaults file for hostconfig
message: "Hello"
@ -1,2 +0,0 @@
---
# handlers file for hostconfig
@ -1,64 +0,0 @@
---
galaxy_info:
  author: your name
  description: your description
  company: your company (optional)

  # If the issue tracker for your role is not on github, uncomment the
  # next line and provide a value
  # issue_tracker_url: http://example.com/issue/tracker

  # Some suggested licenses:
  # - BSD (default)
  # - MIT
  # - GPLv2
  # - GPLv3
  # - Apache
  # - CC-BY
  license: license (GPLv2, CC-BY, etc)

  min_ansible_version: 2.9

  # If this a Container Enabled role, provide the minimum Ansible Container version.
  # min_ansible_container_version:

  # Optionally specify the branch Galaxy will use when accessing the GitHub
  # repo for this role. During role install, if no tags are available,
  # Galaxy will use this branch. During import Galaxy will access files on
  # this branch. If Travis integration is configured, only notifications for this
  # branch will be accepted. Otherwise, in all cases, the repo's default branch
  # (usually master) will be used.
  #github_branch:

  #
  # Provide a list of supported platforms, and for each platform a list of versions.
  # If you don't wish to enumerate all versions for a particular platform, use 'all'.
  # To view available platforms and versions (or releases), visit:
  # https://galaxy.ansible.com/api/v1/platforms/
  #
  # platforms:
  # - name: Fedora
  #   versions:
  #   - all
  #   - 25
  # - name: SomePlatform
  #   versions:
  #   - all
  #   - 1.0
  #   - 7
  #   - 99.99

  galaxy_tags: []
  # List tags for your role here, one per line. A tag is a keyword that describes
  # and categorizes the role. Users find roles by searching for tags. Be sure to
  # remove the '[]' above, if you add tags to this list.
  #
  # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
  # Maximum 20 tags per role.

dependencies: []
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
# if you add dependencies to this list.
collections:
  - operator_sdk.util
  - community.kubernetes
@ -1,25 +0,0 @@
---
# tasks file for hostconfig
- name: Hello World
  debug:
    msg: "Hello world from {{ meta.name }} in the {{ meta.namespace }} namespace."

- name: Message
  debug:
    msg: "Message: {{ message }}"

- name: DISPLAY HOST DETAILS
  debug:
    msg: "And the kubernetes node name is {{ kube_node_name }}, architecture is {{ architecture }} and kernel version is {{ kernel_version }}"

- name: CREATING A FILE
  shell: "hostname > ~/testing; date >> ~/testing;cat ~/testing;sleep 5"
  register: output

- debug: msg={{ output.stdout }}

- name: ECHO HOSTNAME
  shell: hostname
  register: hostname

- debug: msg={{ hostname.stdout }}
@ -1,2 +0,0 @@
---
# vars file for hostconfig
74
airship-host-config/roles/setvariables/tasks/main.yml
Normal file
@ -0,0 +1,74 @@
---
# The below block of code helps in initializing the hosts and serial variables
# that are used by the ansible playbook to control sequential or parallel execution
- name: Host Groups
  block:
    - set_fact:
        reexecute: false
      when: reexecute is undefined
    - set_fact:
        match_host_groups: false
      when: match_host_groups is undefined
    - set_fact:
        sequential: false
      when: sequential is undefined
    # The hostconfig_host_groups custom filter plugin helps in computing the AND or OR
    # operation on the host_groups labels passed through the CR object.
    # The AND and OR operation is controlled using the match_host_groups variable.
    # The function returns a list of lists of hosts that need to be executed on
    # every iteration.
    # Returns: [[192.168.1.5, 192.168.1.3], [192.168.1.4]]
    - set_fact:
        hostconfig_host_groups: "{{ host_groups|hostconfig_host_groups(groups, meta.name, match_host_groups, reexecute) }}"
    - debug:
        msg: "Host Groups Variable {{ hostconfig_host_groups }}"
    # The hostconfig_sequential custom filter plugin helps in calculating the number of hosts
    # that need to be executed on every iteration. The plugin uses the match_host_groups
    # and sequential variables, based on which the calculation is done.
    # It returns a list of integers giving the number of hosts to be executed on each iteration
    # Returns: [3, 4, 2]
    - set_fact:
        hostconfig_serial_variable: "{{ hostconfig_host_groups|hostconfig_sequential(groups) }}"
      when: sequential is true
    # Further refining of the hostconfig_serial_variable is done using the
    # hostconfig_max_hosts_parallel filter plugin in case the sequential flow is selected.
    # Returns: [2, 1, 2, 2, 1] in case of sequential with 3 masters, 5 workers and a max_hosts_parallel of 2
    - set_fact:
        hostconfig_serial_variable: "{{ max_hosts_parallel|hostconfig_max_hosts_parallel(hostconfig_host_groups, sequential) }}"
      when: max_hosts_parallel is defined
    # The hostconfig_host_groups_to_list filter converts the hostconfig_host_groups variable
    # to a flat list of hosts, as accepted by the hosts field in ansible.
    # The conversion is done only here because the serial calculation is easier while the
    # variable is still a list of lists.
    # Returns: [192.168.1.5, 192.168.1.3, 192.168.1.4]
    - set_fact:
        hostconfig_host_groups: "{{ hostconfig_host_groups|hostconfig_host_groups_to_list }}"
    - debug:
        msg: "Host Groups Variable {{ hostconfig_host_groups }}"
    - debug:
        msg: "Serial Variable {{ hostconfig_serial_variable }}"
      when: hostconfig_serial_variable is defined
  when: host_groups is defined

- name: Serial Strategy for parallel
  block:
    # The hostconfig_max_hosts_parallel filter plugin sets the serial variable to a
    # fixed number of hosts for every execution in case of parallel execution.
    # Returns: [2, 2, 2, 2] in case of parallel with 3 masters, 5 workers and a max_hosts_parallel of 2
    - set_fact:
        hostconfig_serial_variable: "{{ max_hosts_parallel|hostconfig_max_hosts_parallel([groups['all']]) }}"
    - debug:
        msg: "{{ hostconfig_serial_variable }}"
  when: host_groups is undefined and max_hosts_parallel is defined

- name: Failure Testing
  block:
    # This block initializes a default value for the max_failure_percentage variable
    # so that the plays below are selected appropriately
    - set_fact:
        max_failure_percentage: "{{ max_failure_percentage }}"
      when: max_failure_percentage is defined
    # Please note we are just setting a default value for max_failure_percentage
    # so that we can check the conditions below
    - set_fact:
        max_failure_percentage: 100
      when: max_failure_percentage is undefined
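Putting these knobs together, a hedged sketch of a HostConfig CR that exercises this role (the group/version come from watches.yaml in this diff; treating these variables as spec fields is an assumption based on how ansible-operator maps CR spec keys to playbook variables):

```
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example-scheduling       # hypothetical name
spec:
  host_groups:                   # labels used to select nodes, see the filter plugins above
    - name: kubernetes.io/role
      values:
        - master
        - worker
  match_host_groups: false       # OR the label groups; true would AND them
  sequential: true               # run one host group per batch
  max_hosts_parallel: 2          # cap the hosts executed per batch
  reexecute: false               # skip hosts already recorded in success_hosts
```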
15
airship-host-config/roles/sysctl/tasks/main.yml
Normal file
@ -0,0 +1,15 @@
---
- name: sysctl configuration
  sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: yes
    state: present
    reload: yes
  with_items: "{{ config.sysctl }}"
  become: yes
  register: sysctl_output

- name: sysctl output
  debug: msg={{ sysctl_output }}
  when: sysctl_output is defined
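The `config.sysctl` list iterated above carries `name`/`value` pairs, for example (the particular kernel parameter is illustrative only):

```
config:
  sysctl:
    - name: net.ipv4.ip_forward
      value: "1"
```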
14
airship-host-config/roles/ulimit/tasks/main.yml
Normal file
@ -0,0 +1,14 @@
---
- name: ulimit configuration
  pam_limits:
    domain: "{{ item.user }}"
    limit_type: "{{ item.type }}"
    limit_item: "{{ item.item }}"
    value: "{{ item.value }}"
  with_items: "{{ config.ulimit }}"
  become: yes
  register: ulimit_output

- name: ulimit output
  debug: msg={{ ulimit_output }}
  when: ulimit_output is defined
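Likewise, a sketch of the `config.ulimit` entries consumed above (`user`/`type`/`item`/`value` mirror the task's keys; the values are illustrative):

```
config:
  ulimit:
    - user: "*"
      type: soft
      item: nofile
      value: "65536"
```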
@ -31,10 +31,12 @@ save_and_load_docker_image(){
    worker_node_ips=$(get_worker_ips)
    echo "Copying Image to following worker Nodes"
    echo $worker_node_ips
    touch $HOME/hello
    for i in $worker_node_ips
    do
        sshpass -p "vagrant" scp -o StrictHostKeyChecking=no $IMAGE_NAME vagrant@$i:~/.
        sshpass -p "vagrant" ssh vagrant@$i docker load -i $IMAGE_NAME
        sshpass -p "vagrant" ssh vagrant@$i touch hello
    done
}
@ -2,5 +2,7 @@
- version: v1alpha1
  group: hostconfig.airshipit.org
  kind: HostConfig
  # role: hostconfig
  playbook: /opt/ansible/playbook.yaml
  playbook: playbooks/create_playbook.yaml
  finalizer:
    name: finalizer.hostconfig.airshipit.org
    playbook: playbooks/delete_playbook.yaml
BIN
docs/CR_creation_flow.png
Normal file
Binary file not shown. (Size: 32 KiB)
BIN
docs/Deployment_Architecture.png
Normal file
Binary file not shown. (Size: 40 KiB)
112
docs/Overview.md
Normal file
@ -0,0 +1,112 @@
|
||||
## HostConfig Operator
|
||||
|
||||
|
||||
## Overview
|
||||
An ansible based operator for performing host configuration LCM operations on Kubernetes Nodes. It is built to perform the configuration on kubernetes nodes after the intial kubernetes setup is done on the nodes. It is managed by the cluster itself.
|
||||
|
||||
Current implementation have been tested with running one replica of the hostconfig operator deployment on one of the master node in the kubernetes setup.
|
||||
|
||||
Once the hostconfig operator is deployed and the corresponding CRD is created on the kubernetes cluster, we can then create the HostConfig CR objects to perform the required configuration on the nodes.
|
||||
|
||||
The host configuration on the kubernetes nodes is done by executing the appropriate ansible playbook on that Kubernetes node by the hostconfig operator pod.
|
||||
|
||||
|
||||
## Scope and Features
|
||||
* Perform host configuration LCM operations on Kubernetes hosts
|
||||
* LCM operations managed using HostConfig CR objects
|
||||
* Inventory built dynamically, at the time of playbook execution
|
||||
* Connects to hosts using the secrets associated with the nodes, which have the ssh keys associated in them.
|
||||
* Supports execution based on host-groups, which are built based out of labels associated with kubernetes nodes
|
||||
* Supports serial/parallel execution of configuration on hosts
|
||||
* Supports host selection with AND and OR operations of the labels mentioned in the host-groups of the CR object
|
||||
* Reconcile on failed nodes, based on reconcile period - feature available from ansible-operator
|
||||
* Current support is available to perform `sysctl` and `ulimit` operations on the kubernetes nodes
|
||||
* WIP: Display the status of each Hostconfig CR object as part of the `kubectl describe hostconfig <name>`
|
||||
|
||||
## Architecture
|
||||
|
||||

|
||||
|
||||
|
||||
Hostconfig operator will be running as a kubernetes deployment on the target kubernetes cluster.
|
||||
|
||||
**Hostconfig Operator Code**
|
||||
|
||||
The code base for the ansible operator is available at: https://github.com/SirishaGopigiri/airship-host-config/tree/integration
|
||||
|
||||
The repository also have vagrants scripts to build kubernetes cluster on the Vagrant VMs and then test the ansible-operator pod.
|
||||
|
||||
## Deployment and Host Configuration Flow
|
||||
|
||||
The hostconfig operator deployment sequence
|
||||
|
||||

|
||||
|
||||
Using operator pod to perform host configuration on kubernetes nodes
|
||||
|
||||

|
||||
|
||||
|
||||
## How to Deploy(On existing kubernetes cluster)
|
||||
|
||||
**Pre-requisite:**
|
||||
|
||||
1. The Kubernetes nodes should be labelled with any one of the below label to execute based on host-groups, if not labelled by default executes on all the nodes as no selection happens.
|
||||
Valid labels:
|
||||
* [`topology.kubernetes.io/region`](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
|
||||
* [`topology.kubernetes.io/zone`](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion)
|
||||
* `kubernetes.io/role`
|
||||
* [`kubernetes.io/hostname`](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname)
|
||||
* [`kubernetes.io/arch`](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch)
|
||||
* [`kubernetes.io/os`](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os)
|
||||
|
||||
2. **Operator pod connecting to Kubernetes Nodes:**
|
||||
|
||||
The kubernetes nodes should be annotated with secret name having the username and private key as part of the contents.
|
||||
2. **Operator pod connecting to Kubernetes Nodes:**
|
||||
|
||||
The kubernetes nodes should be annotated with secret name having the username and private key as part of the contents.
|
||||
|
||||
git clone the hostconfig repository
|
||||
|
||||
`git clone -b integration https://github.com/SirishaGopigiri/airship-host-config.git`
|
||||
|
||||
Move to airship-host-config directory
|
||||
|
||||
`cd airship-host-config/airship-host-config`
|
||||
|
||||
Create a HostConfig CRD
|
||||
|
||||
`kubectl create -f deploy/crds/hostconfig.airshipit.org_hostconfigs_crd.yaml`
|
||||
|
||||
Create hostconfig role, service account, role-binding and cluster-role-binding which is used to deploy and manage the operations done using the hostconfig operator pod
|
||||
|
||||
`kubectl create -f deploy/role.yaml`
|
||||
`kubectl create -f deploy/service_account.yaml`
|
||||
`kubectl create -f deploy/role_binding.yaml`
|
||||
`kubectl create -f deploy/cluster_role_binding.yaml`
|
||||
|
||||
Now deploy the hostconfig operator pod
|
||||
|
||||
`kubectl create -f deploy/operator.yaml`
|
||||
|
||||
Once the hostconfig operator pod is deployed, we can create the desired HostConfig CR with the required configuration. And this CR can be passed to the operator pod which performs the required operation.
|
||||
|
||||
Some example CRs are available in the demo_examples directory.
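For instance, a hedged sketch of a CR that applies a sysctl and a ulimit setting (the field names follow the sysctl and ulimit roles earlier in this diff; whether they sit under `spec.config` exactly as shown should be checked against the demo_examples):

```
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example-config          # hypothetical name
spec:
  host_groups:
    - name: kubernetes.io/role
      values:
        - worker
  config:
    sysctl:
      - name: net.ipv4.ip_forward
        value: "1"
    ulimit:
      - user: "*"
        type: soft
        item: nofile
        value: "65536"
```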
## Airshipctl integration

The hostconfig operator can be integrated with the airshipctl code by changing the manifests folder, which is used to build the target workload cluster.

For the proposed changes, please refer to the below patch set:

https://review.opendev.org/#/c/744098

To deploy the operator on the cluster using `airshipctl phase apply`, the function needs to be invoked from the corresponding kustomization.yaml file. This is WIP.

## References

1. https://docs.openshift.com/container-platform/4.1/applications/operator_sdk/osdk-ansible.html
2. https://github.com/operator-framework/operator-sdk
3. https://docs.ansible.com/ansible/latest/modules/sysctl_module.html
4. https://docs.ansible.com/ansible/latest/modules/pam_limits_module.html
BIN
docs/deployment_flow.png
Normal file
Binary file not shown. (Size: 20 KiB)
@ -11,7 +11,7 @@ A vagrant script for setting up a Kubernetes cluster using Kubeadm
Git clone the repo on the host machine which has vagrant and virtual box installed

```
git clone https://github.com/SirishaGopigiri/airship-host-config.git
git clone https://github.com/SirishaGopigiri/airship-host-config.git -b june_29
```

Navigate to the kubernetes folder
@ -20,7 +20,7 @@ Navigate to the kubernetes folder
cd airship-host-config/kubernetes/
```

Execute the following vagrant command to start a new Kubernetes cluster, this will start one master and two nodes:
Execute the following vagrant command to start a new Kubernetes cluster; this will start three master nodes and five worker nodes:

```
vagrant up
@ -28,7 +28,33 @@ vagrant up

You can also start individual machines with vagrant up k8s-master-1, vagrant up k8s-node-1 and vagrant up k8s-node-2

If more than two nodes are required, you can edit the servers array in the Vagrantfile
If you need more master nodes, you can edit the servers array in the Vagrantfile. Please change the name and the IP address for eth1.
```
servers = [
    {
        :name => "k8s-master-1",
        :type => "master",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.10",
        :mem => "2048",
        :cpu => "2"
    }
]
```
Also update the haproxy.cfg file to add more master servers.

```
balance roundrobin
server k8s-api-1 192.168.205.10:6443 check
server k8s-api-2 192.168.205.11:6443 check
server k8s-api-3 192.168.205.12:6443 check
server k8s-api-4 <ip:port> check
```


If more than five worker nodes are required, you can edit the servers array in the Vagrantfile. Please change the name and the IP address for eth1.

```
servers = [
@ -37,7 +63,7 @@ servers = [
        :type => "node",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.13",
        :eth1 => "192.168.205.14",
        :mem => "2048",
        :cpu => "2"
    }
90
kubernetes/Vagrantfile
vendored
@ -3,7 +3,16 @@
servers = [
    {
        :name => "k8s-master",
        :name => "k8s-lbhaproxy",
        :type => "lbhaproxy",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.13",
        :mem => "2048",
        :cpu => "2"
    },
    {
        :name => "k8s-master-1",
        :type => "master",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
@ -11,12 +20,30 @@ servers = [
        :mem => "2048",
        :cpu => "2"
    },
    {
        :name => "k8s-master-2",
        :type => "master-join",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.11",
        :mem => "2048",
        :cpu => "2"
    },
    {
        :name => "k8s-master-3",
        :type => "master-join",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.12",
        :mem => "2048",
        :cpu => "2"
    },
    {
        :name => "k8s-node-1",
        :type => "node",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.11",
        :eth1 => "192.168.205.14",
        :mem => "2048",
        :cpu => "2"
    },
@ -25,7 +52,34 @@ servers = [
        :type => "node",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.12",
        :eth1 => "192.168.205.15",
        :mem => "2048",
        :cpu => "2"
    },
    {
        :name => "k8s-node-3",
        :type => "node",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.16",
        :mem => "2048",
        :cpu => "2"
    },
    {
        :name => "k8s-node-4",
        :type => "node",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.17",
        :mem => "2048",
        :cpu => "2"
    },
    {
        :name => "k8s-node-5",
        :type => "node",
        :box => "ubuntu/xenial64",
        :box_version => "20180831.0.0",
        :eth1 => "192.168.205.18",
        :mem => "2048",
        :cpu => "2"
    }
@ -83,7 +137,7 @@ $configureMaster = <<-SCRIPT

    # install k8s master
    HOST_NAME=$(hostname -s)
    kubeadm init --apiserver-advertise-address=$IP_ADDR --apiserver-cert-extra-sans=$IP_ADDR --node-name $HOST_NAME --pod-network-cidr=172.16.0.0/16
    kubeadm init --apiserver-advertise-address=$IP_ADDR --apiserver-cert-extra-sans=$IP_ADDR --node-name $HOST_NAME --pod-network-cidr=172.16.0.0/16 --control-plane-endpoint "192.168.205.13:443" --upload-certs

    #copying credentials to regular user - vagrant
    sudo --user=vagrant mkdir -p /home/vagrant/.kube
@ -94,14 +148,29 @@ $configureMaster = <<-SCRIPT
    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl apply -f https://raw.githubusercontent.com/SirishaGopigiri/airship-host-config/master/kubernetes/calico/calico.yaml

    kubeadm init phase upload-certs --upload-certs > /etc/upload_cert
    kubeadm token create --print-join-command >> /etc/kubeadm_join_cmd.sh
    chmod +x /etc/kubeadm_join_cmd.sh

    cat /etc/kubeadm_join_cmd.sh > /etc/kubeadm_join_master.sh
    CERT=`tail -1 /etc/upload_cert`
    sed -i '$ s/$/ --control-plane --certificate-key '"$CERT"'/' /etc/kubeadm_join_master.sh

    # Install sshpass for the further docker image copy
    apt-get install -y sshpass

SCRIPT

$configureMasterJoin = <<-SCRIPT
    echo -e "\nThis is Master with Join Command:\n"
    apt-get install -y sshpass
    sshpass -p "vagrant" scp -o StrictHostKeyChecking=no vagrant@192.168.205.10:/etc/kubeadm_join_master.sh .
    IP_ADDR=`ifconfig enp0s8 | grep Mask | awk '{print $2}'| cut -f2 -d:`

    sed -i '$ s/$/ --apiserver-advertise-address '"$IP_ADDR"'/' kubeadm_join_master.sh
    sh ./kubeadm_join_master.sh
SCRIPT

$configureNode = <<-SCRIPT
    echo -e "\nThis is worker:\n"
    apt-get install -y sshpass
@ -123,21 +192,24 @@ Vagrant.configure("2") do |config|
    config.vm.provider "virtualbox" do |v|

        v.name = opts[:name]
        v.customize ["modifyvm", :id, "--groups", "/Ballerina Development"]
        v.customize ["modifyvm", :id, "--groups", "/Ballerina Development"]
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]

    end

    # we cannot use this because we can't install the docker version we want - https://github.com/hashicorp/vagrant/issues/4871
    # config.vm.provision "docker"

    config.vm.provision "shell", inline: $configureBox

    if opts[:type] == "master"
        config.vm.provision "shell", inline: $configureBox
        config.vm.provision "shell", inline: $configureMaster
        config.vm.provision "file", source: "../airship-host-config", destination: "/home/vagrant/airship-host-config/airship-host-config"
    elsif opts[:type] == "lbhaproxy"
        config.vm.provision "shell", :path => "haproxy.sh"
    elsif opts[:type] == "master-join"
        config.vm.provision "shell", inline: $configureBox
        config.vm.provision "shell", inline: $configureMasterJoin
    else
        config.vm.provision "shell", inline: $configureBox
        config.vm.provision "shell", inline: $configureNode
    end
73
kubernetes/haproxy.sh
Normal file
@ -0,0 +1,73 @@
#!/bin/bash

if [ ! -f /etc/haproxy/haproxy.cfg ]; then

    # Install haproxy
    sudo sed -i "/^[^#]*PasswordAuthentication[[:space:]]no/c\PasswordAuthentication yes" /etc/ssh/sshd_config
    sudo service sshd restart
    /usr/bin/apt-get -y install haproxy
    cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig

    # Configure haproxy
    cat > /etc/default/haproxy <<EOD
# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here.
#EXTRAOPTS="-de -m 16"
EOD

    cat > /etc/haproxy/haproxy.cfg <<EOD
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 1
    timeout http-request 10s
    timeout queue 20s
    timeout connect 5s
    timeout client 20s
    timeout server 20s
    timeout http-keep-alive 10s
    timeout check 10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend k8-apiserver
    bind *:443
    mode tcp
    option tcplog
    default_backend k8-apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend k8-apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server k8s-api-1 192.168.205.10:6443 check
    server k8s-api-2 192.168.205.11:6443 check
    server k8s-api-3 192.168.205.12:6443 check
EOD

    /usr/sbin/service haproxy restart
fi