What is K8E

Kubernetes Easy (k8e) is a lightweight, extensible, enterprise Kubernetes distribution that lets users uniformly manage, secure, and deploy out-of-the-box Kubernetes clusters in enterprise environments.

Get started


The k8e 🚀 (pronounced "kuber easy") project builds on the upstream K3s codebase, removing its Edge/IoT features and extending it with enterprise features and best practices.

Great for:

  • CI
  • Development
  • Enterprise Deployment

For Users - Deploy an HA Kubernetes cluster with k8e

The list of prepared hosts (2-core vCPU / 4 GB RAM is the minimum) is as follows:

| Host Name | Configuration | Count | IP Address | Role |
| --- | --- | --- | --- | --- |
| k8e-test1 | 2-core vCPU / 8 GB RAM | 1 | 172.25.1.56 | master/etcd |
| k8e-test2 | 2-core vCPU / 8 GB RAM | 1 | 172.25.1.57 | master/etcd |
| k8e-test3 | 2-core vCPU / 8 GB RAM | 1 | 172.25.1.58 | master/etcd |
| k8e-test4 | 2-core vCPU / 8 GB RAM | 1 | 172.25.1.59 | agent |

The following ports need to be open on the hosts (see https://kubernetes.io/docs/reference/ports-and-protocols/):

Control plane

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |

Worker node(s)

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services | All |

k8e currently supports the Kubernetes 1.21 mainline release (LTS).

Deployment steps are as follows:

  1. Start the k8s cluster bootstrap process.
  • Note: the host must run a Linux kernel >= 4.9.17 to support eBPF networking.
  • Boot the first server (note: the first host's IP becomes the api-server IP).
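The kernel requirement above can be checked before running the installer. This is a generic sketch, not a k8e command; it compares versions with sort -V:

```shell
# Verify the running kernel meets the eBPF minimum (>= 4.9.17).
required="4.9.17"
current="$(uname -r | cut -d- -f1)"   # e.g. "5.15.0"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
    echo "kernel $current OK (>= $required)"
else
    echo "kernel $current is too old; eBPF networking needs >= $required" >&2
fi
```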

# Define environment variables:
# K8E_TOKEN        cluster token, used to authenticate joining nodes and transfer TLS certificates
# K8E_NODE_NAME    host name, used as the node's unique alias in etcd
# K8E_CLUSTER_INIT enable the built-in etcd for storage

curl -sfL https://getk8e.com/install.sh | K8E_TOKEN=ilovek8e INSTALL_K8E_EXEC="server --cluster-init --write-kubeconfig-mode=666" sh -

At this point, the standalone cluster is up and running, which is sufficient for a technical-verification environment. To build a highly available cluster, however, you need three master nodes, which can be added with the following steps. Because etcd reaches consensus by majority vote, a highly available cluster requires at least 3 etcd instances to keep the cluster state consistent.
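The "at least 3 etcd instances" requirement is plain majority arithmetic: a cluster of n members stays available only while floor(n/2)+1 of them are up. A quick sketch:

```shell
# Quorum math for etcd: a cluster of n members needs floor(n/2)+1 alive,
# so it tolerates floor((n-1)/2) failures. With 3 members you can lose 1;
# 2 members tolerate no failures, so they are no better than 1.
for n in 1 3 5; do
    echo "members=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( (n - 1) / 2 ))"
done
```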

The second master node is configured as follows:

curl -sfL https://getk8e.com/install.sh | K8E_TOKEN=ilovek8e K8E_URL=https://172.25.1.56:6443 INSTALL_K8E_EXEC="server --write-kubeconfig-mode=666" sh -s -

The third master node is configured as follows:

curl -sfL https://getk8e.com/install.sh | K8E_TOKEN=ilovek8e K8E_URL=https://172.25.1.56:6443 INSTALL_K8E_EXEC="server --write-kubeconfig-mode=666" sh -s -
  2. Add a workload node (agent).
curl -sfL https://getk8e.com/install.sh | K8E_TOKEN=ilovek8e K8E_URL=https://172.25.1.56:6443 sh -
  3. Initialize the network with the built-in Cilium installation tool.
export KUBECONFIG=/etc/k8e/k8e.yaml
cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 2
                  cilium-operator    Running: 1
Cluster Pods:     3/3 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.10.5: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.10.5: 1

Note: the three master nodes have a built-in LB. If the first master is shut down after the cluster starts, agent nodes automatically switch to the second one, so don't worry about the cluster being affected. For a management console that needs a stable api-server address, you can put nginx in front of the three masters' IPs; if you want a VIP, consider using kube-vip.
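As a sketch of the nginx option just mentioned, a TCP (stream) proxy over the three masters' api-server ports could look like this. The file name and upstream name are illustrative choices, not k8e conventions:

```shell
# Write an nginx stream config that balances TCP 6443 across the three
# masters from the host table above. Names here are illustrative only.
cat > k8e-apiserver.conf <<'EOF'
stream {
    upstream k8e_apiserver {
        server 172.25.1.56:6443;
        server 172.25.1.57:6443;
        server 172.25.1.58:6443;
    }
    server {
        listen 6443;
        proxy_pass k8e_apiserver;
    }
}
EOF
```

Because this proxies at the TCP layer, TLS passes through untouched and the api-server certificate still handles authentication; the `stream` block must sit at the top level of nginx.conf, not inside `http`.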

The default kubeconfig is placed in /etc/k8e/k8e.yaml.
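To use that kubeconfig from another machine, the server address usually needs rewriting to a reachable master IP (K3s-derived distributions typically generate it pointing at 127.0.0.1 — verify in your own file before copying this pattern). A minimal sketch against a stand-in file:

```shell
# Stand-in for /etc/k8e/k8e.yaml; assumes the generated file points at
# 127.0.0.1 as K3s does -- check your actual file first.
cat > k8e-demo.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Rewrite the loopback address to the first master's IP for remote use.
sed 's#https://127.0.0.1:6443#https://172.25.1.56:6443#' k8e-demo.yaml > k8e-remote.yaml
grep 'server:' k8e-remote.yaml
```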

Note: in the high-availability scenario the control plane's kube-apiserver needs a VIP entry point, and there is a catch here. Because k8e protects port 6443 with a TLS certificate by default, you must sign the certificate for the VIP when starting k8e so that management clients can reach the cluster through it. The configuration parameter is as follows.

# --tls-san value   (listener) Add additional hostname or IP as a Subject Alternative Name in the TLS cert
curl -sfL https://getk8e.com/install.sh | INSTALL_K8E_EXEC="server --tls-san 192.168.1.1" sh -
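To confirm a SAN actually ends up in a certificate, you can inspect it with openssl. The live check against port 6443 needs a running cluster, so this sketch generates a throwaway self-signed cert carrying the same SAN instead (requires OpenSSL 1.1.1+ for -addext/-ext):

```shell
# Create a throwaway self-signed cert carrying the VIP as a SAN.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout demo.key -out demo.crt -subj "/CN=k8e-demo" \
    -addext "subjectAltName=IP:192.168.1.1" 2>/dev/null

# Print the SAN extension; 192.168.1.1 should appear as an IP Address.
openssl x509 -in demo.crt -noout -ext subjectAltName

# Against a live cluster, the equivalent check would be:
#   openssl s_client -connect 192.168.1.1:6443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -ext subjectAltName
```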

Note: k8e ships with the nerdctl tool by default, an advanced management CLI for containerd and a drop-in replacement for docker, so you can continue to use docker-style commands to manage images:

# On a k8e node, try the command below
docker -n k8s.io images
REPOSITORY    TAG       IMAGE ID        CREATED        SIZE
nginx         latest    01c2e84120e8    2 hours ago    8.0 KiB

For Developers - Building && Installing

  1. Building k8e

Cloning this repo will be much faster if you do:

git clone --depth 1 https://github.com/xiaods/k8e.git

This repo includes all of Kubernetes' history, so --depth 1 avoids downloading most of it.

The k8e build process requires some autogenerated code and remote artifacts that are not checked in to version control. To prepare these resources for your build environment, run:

make generate

To build the full release binary, you may now run make, which will create ./dist/artifacts/k8e.

To build the binaries without running linting (i.e., if you have uncommitted changes):

SKIP_VALIDATE=true make
  2. Run the server in your local development environment.
sudo ./k8e check-config
sudo ./k8e server &
# Kubeconfig is written to /etc/k8e/k8e.yaml
export KUBECONFIG=/etc/k8e/k8e.yaml

# On a different node run the below. NODE_TOKEN comes from
# /var/lib/k8e/server/node-token on your server
sudo ./k8e agent --server https://myserver:6443 --token ${NODE_TOKEN}

# Query all nodes in the k8s cluster
sudo ./k8e kubectl get nodes

Contributing

Find out how to contribute to K8e. Contributing →

Help

Get help on K8e. FAQ →