Kubespray: Deploy a Production Ready Kubernetes Cluster

In this post, we will guide you through the process of leveraging Kubespray — a flexible, Ansible-based tool — to streamline the deployment of Kubernetes.

1. Installing Ansible

Ref: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible/ansible.md#installing-ansible

Requirements

  1. python3
  2. venv
  3. pip
  4. git

For Debian-based systems (Ubuntu):

Bash
sudo apt update
sudo apt install -y python3 python3-pip python3-venv git

For Red Hat-based systems (RHEL, Fedora):

Bash
sudo dnf update
sudo dnf install -y python3 python3-pip git

(On these distributions the venv module ships with the python3 package, so there is no separate python3-venv package to install.)

Deploy Ansible

Clone Kubespray repository:

Bash
git clone https://github.com/kubernetes-sigs/kubespray

Create Python virtual environment and install Ansible:

Bash
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
python3 -m venv $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements.txt
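
To confirm the environment is ready, you can check the Ansible version installed from the pinned requirements (the exact version depends on the Kubespray release you cloned):

Bash
ansible --version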

2. Building Cluster Inventory

Ref: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting_started/getting-started.md

Defining Cluster Nodes

This step determines which nodes will be part of the Kubernetes cluster. For a production-grade setup, it is recommended to have at least two control-plane (master) nodes and three or more etcd nodes, keeping the etcd count odd so the cluster can maintain quorum; for example, a three-member etcd cluster tolerates the loss of one member, while a five-member cluster tolerates two.

If resources are limited, the control-plane and etcd roles can be combined on the same nodes (e.g., three nodes serving as both control-plane and etcd nodes). At minimum, a Kubernetes cluster can even function with a single node acting as control plane, etcd, and worker.

In this example, we will deploy Kubernetes on a single node. This node will handle the control-plane, etcd, and worker functions all within the same instance.

Create a new inventory folder by copying the provided sample inventory folder. For example, name the new folder "mycluster".

Bash
cp -rfp inventory/sample inventory/mycluster

Update the inventory.ini file in the new folder to match the roles and addresses of your cluster nodes.

Bash
vi inventory/mycluster/inventory.ini

inventory/mycluster/inventory.ini
[kube_control_plane]
andi-vm ansible_host=192.168.0.77 ip=192.168.0.77 etcd_member_name=andi-vm

[etcd:children]
kube_control_plane

[kube_node]
andi-vm ansible_host=192.168.0.77 ip=192.168.0.77

For detailed instructions on customizing the inventory.ini file to suit your cluster’s topology, visit: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible/inventory.md.
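
For reference, a larger cluster simply lists more hosts under each group. The snippet below is a hypothetical three-node layout where every node runs the control-plane, etcd, and worker roles; the host names and IP addresses are placeholders, not part of this walkthrough:

inventory/mycluster/inventory.ini
[kube_control_plane]
node1 ansible_host=192.168.0.101 ip=192.168.0.101 etcd_member_name=node1
node2 ansible_host=192.168.0.102 ip=192.168.0.102 etcd_member_name=node2
node3 ansible_host=192.168.0.103 ip=192.168.0.103 etcd_member_name=node3

[etcd:children]
kube_control_plane

[kube_node]
node1 ansible_host=192.168.0.101 ip=192.168.0.101
node2 ansible_host=192.168.0.102 ip=192.168.0.102
node3 ansible_host=192.168.0.103 ip=192.168.0.103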

Customizing Installation Options

Update the parameters in the following files to customize your Kubernetes cluster installation:

Bash
vi inventory/mycluster/group_vars/all/all.yml
vi inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
vi inventory/mycluster/group_vars/k8s_cluster/addons.yml

Here are a few settings we believe are important to adjust, nice to have, or worth enabling:

inventory/mycluster/group_vars/all/all.yml
# Set the DNS servers to be used by Kubernetes containers
upstream_dns_servers:
  - 192.168.0.1
  - 1.1.1.1

# Ensure all cluster nodes have synchronized system time
ntp_enabled: true
ntp_manage_config: true
ntp_servers:
  - "0.pool.ntp.org iburst"
  - "1.pool.ntp.org iburst"
  - "2.pool.ntp.org iburst"
  - "3.pool.ntp.org iburst"

inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Specify the version of Kubernetes to install
kube_version: v1.31.4

# Decide on the network plugin to use. We recommend using the default plugin, 'calico'
kube_network_plugin: calico

# To attach multiple network interfaces to pods, enable Multus by setting this option to 'true'
kube_network_plugin_multus: true

# If you're using MetalLB or kube-vip with ARP enabled, set this option to 'true'
kube_proxy_strict_arp: true

# Enable this option (true) to automatically renew Kubernetes certificates
auto_renew_certificates: true
auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"

inventory/mycluster/group_vars/k8s_cluster/addons.yml
# Enable helm deployment
helm_enabled: true

# Enable local path storage provisioner deployment
local_path_provisioner_enabled: true
local_path_provisioner_namespace: "local-path"
local_path_provisioner_storage_class: "local-path"
local_path_provisioner_reclaim_policy: Delete
local_path_provisioner_claim_root: /data/

# Enable NGINX Ingress controller deployment
ingress_nginx_enabled: true

# Enable cert-manager deployment
cert_manager_enabled: true

# Enable MetalLB deployment
metallb_enabled: true
metallb_speaker_enabled: "{{ metallb_enabled }}"
metallb_namespace: "metallb-system"

# Enable krew plugin for kubectl
krew_enabled: true
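
Note that enabling MetalLB by itself is not enough: it also needs a pool of addresses it is allowed to assign. In recent Kubespray releases this is configured through the metallb_config variable in the same addons.yml; the snippet below is only a sketch with a placeholder layer 2 range, so check the commented example in your sample addons.yml for the exact structure your version expects:

inventory/mycluster/group_vars/k8s_cluster/addons.yml
# Example MetalLB layer 2 address pool (placeholder range - adjust to your network)
metallb_config:
  address_pools:
    primary:
      ip_range:
        - 192.168.0.200-192.168.0.220
      auto_assign: true
  layer2:
    - primary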

3. Deploying Kubernetes

Set Up Passwordless SSH

Create a private/public SSH key pair on your system:

Bash
ssh-keygen -t rsa

Copy the public key to each node that will be part of the Kubernetes cluster. Repeat this step for every node:

Bash
ssh-copy-id andi-vm
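
Before running the playbook, it is worth confirming that passwordless login actually works for each node (andi-vm is the example host from the inventory above):

Bash
ssh andi-vm hostname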

Ensure Sudo Access

Some Kubernetes installation tasks require sudo privileges. To add a user (e.g., andi) to the sudo group (Debian/Ubuntu) or the wheel group (RHEL/Fedora), use the appropriate command below:

For Debian-based systems (Ubuntu):

Bash
sudo usermod -aG sudo andi

For Red Hat-based systems (RHEL, Fedora):

Bash
sudo usermod -aG wheel andi
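
Optionally, you can grant the user passwordless sudo so that Ansible does not need a become password at all. This is a convenience rather than a requirement, and the username below is just the example account used in this post:

Bash
echo 'andi ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/andi
sudo chmod 0440 /etc/sudoers.d/andi

If you do this, the -K flag (and the BECOME password prompt) can be dropped from the ansible-playbook command in the next step.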

Install Kubernetes

Bash
ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b -K -v --private-key=~/.ssh/id_rsa

When prompted for the 'BECOME password' during the installation (triggered by the -K flag), enter your account password; -b tells Ansible to escalate to root on the nodes, and -v increases output verbosity.

4. Validation

From a control-plane node, check the status of the Kubernetes nodes:

Bash
sudo kubectl get nodes

List the status of all Kubernetes pods:

Bash
sudo kubectl get pods -A
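
If you enabled the optional addons above, you can also confirm their pods came up. The MetalLB and local path provisioner namespaces match the values set in addons.yml earlier, while ingress-nginx and cert-manager use what we understand to be the Kubespray defaults; adjust if your settings differ:

Bash
sudo kubectl get pods -n metallb-system
sudo kubectl get pods -n ingress-nginx
sudo kubectl get pods -n cert-manager
sudo kubectl get pods -n local-path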

To run kubectl commands without sudo, copy the admin kubeconfig to your home directory and take ownership of it:

Bash
sudo cp -rfp /root/.kube ~/
sudo chown -R $USER ~/.kube
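
After that, kubectl should work for your regular user without sudo:

Bash
kubectl get nodes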
