Document
- 1: Overview
- 2: Preparation
- 3: Install with Script
- 4: Step-By-Step Install
- 5: Kubernetes
- 5.1: Fixed CVEs
- 5.1.1: CVE-2021-25741
- 5.1.2: CVE-2021-3121
- 5.1.3: CVE-2020-8559
- 5.1.4: CVE-2020-8558
- 5.1.5: CVE-2020-8552
- 5.1.6: CVE-2019-1002101
- 5.1.7: CVE-2019-11251
- 5.1.8: CVE-2019-11248
- 5.1.9: CVE-2019-11247
- 5.1.10: CVE-2019-11246
- 5.1.11: CVE-2019-11245
- 5.1.12: CVE-2019-11249
- 5.1.13: nokmem
- 5.2: Release log
- 5.2.1: v1.23
- 5.2.1.1: v1.23.5-lts.1
- 5.2.2: v1.22
- 5.2.2.1: v1.22.8-lts.1
- 5.2.3: v1.21
- 5.2.3.1: v1.21.11-lts.1
- 5.2.4: v1.20
- 5.2.4.1: v1.20.15-lts.2
- 5.2.5: v1.19
- 5.2.5.1: v1.19.16-lts.3
- 5.2.6: v1.18
- 5.2.6.1: v1.18.20-lts.1
- 5.2.6.2: v1.18.20-lts.2
- 5.2.7: v1.17
- 5.2.7.1: v1.17.17-lts.1
- 5.2.8: v1.16
- 5.2.8.1: v1.16.15-lts.1
- 5.2.9: v1.15
- 5.2.9.1: v1.15.12-lts.1
- 5.2.10: v1.14
- 5.2.10.1: v1.14.10-lts.1
- 5.2.11: v1.13
- 5.2.11.1: v1.13.12-lts.1
- 5.2.12: v1.12
- 5.2.12.1: v1.12.10-lts.1
- 5.2.13: v1.11
- 5.2.13.1: v1.11.10-lts.1
- 5.2.14: v1.10
- 5.2.14.1: v1.10.13-lts.1
- 6: Containerd
- 6.1: Fixed CVEs
- 6.1.1: CVE-2021-41103
- 6.2: Release log
- 6.2.1: v1.3
- 6.2.1.1: v1.3.10-lts.1
- 7: Developer Guide
- 7.1: Clone
- 7.2: Dependencies
1 - Overview
KLTS, short for Kubernetes Long Term Support, has a primary mission to provide free long-term maintenance support for early versions of Kubernetes.
One of the reasons to maintain early versions is that in a real production environment the latest release is not necessarily the best or the most stable. Normally, community maintenance of a particular Kubernetes version ends about one year after its initial release. For details see Kubernetes release cycle. After the community ends maintenance, KLTS continues to maintain that version for the next three years.
Why do most enterprises today choose to stay with early versions rather than rush to upgrade?
- Firstly, frequent upgrades carry more risk, and each upgrade must be fully verified. In the financial industry, the change cycle of a PaaS platform is usually relatively long, because once an updated version has a bug, it must either be rolled back or quickly patched by upgrading to yet another version, which causes unnecessary expense.
- Secondly, once an enterprise upgrades the Kubernetes core, some functional alternatives may not yet be fully ready for production, and incompatibility often occurs in the production environment.
- Finally, the Kubernetes community only supports upgrading one minor version at a time and does not support cross-version upgrades, and later upgrades often involve uncontrollable factors that may cause production problems.
Therefore, most enterprises today choose to stay with early versions and not rush to upgrade. But the Kubernetes community only maintains the most recent three to four releases, so how can you keep early versions safe from the CVE vulnerabilities and bugs that the community discovers from time to time? That’s where KLTS comes in! We provide free maintenance support for early versions for up to three years, actively fixing CVE vulnerabilities and critical bugs.
KLTS release cycle
Kubernetes versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version. For versions maintained by KLTS, this is followed by a suffix beginning with lts (for example, v1.18.20-lts.2). For convenience, people often use the first two digits x.y to describe a Kubernetes version.
Assuming that the latest Kubernetes released by the community is x.y, according to the Version Skew Policy, the community only maintains the latest three versions, and KLTS currently maintains nearly ten early versions starting from 1.10, as shown in the figure below.
When the Kubernetes community discovers new CVE vulnerabilities or bugs that may affect production, they may affect not only the versions the community is maintaining, but also early versions that have been discontinued yet are still in use by enterprises that cannot upgrade rashly; those early versions are maintained by the KLTS team. The current maintenance cycle of KLTS is as follows:
As shown above, the Kubernetes community usually maintains a given version for about one year, and KLTS then provides long-term maintenance for the next three years, until the code becomes incompatible, at which point maintenance of the corresponding version ends.
Bugs fixed by KLTS
Some high-priority CVEs or serious bugs in the production environment can create significant security risks. Because CVE security issues are the lifeblood of the cluster, KLTS fixes medium- and high-risk CVEs first, and then fixes major bugs, to guarantee stable operation of the production environment.
As an example, the CVE-2021-3121 vulnerability discovered in January 2021 has a CVSS score of 7.5. However, as of September 2021 the Kubernetes community:
- Only fixed four versions: 1.18, 1.19, 1.20, 1.21
- Announced that “all prior versions remain exposed and users should stop using them immediately”
- Denied requests to fix early versions
KLTS addresses this situation by diligently fixing eight earlier versions that were heavily affected by the CVE-2021-3121 vulnerability. No complaints, no demands!
- v1.17.17
- v1.16.15
- v1.15.12
- v1.14.10
- v1.13.12
- v1.12.10
- v1.11.10
- v1.10.13
If you feel that the KLTS team’s efforts are valuable and interesting to you, don’t hesitate to join the KLTS community to talk and contribute.
Welcome to talk and join
After careful maintenance and diligent support by developers, KLTS brings the following results to these early versions:
- Three-year maintenance period: KLTS will provide continuous maintenance for up to three years once the Kubernetes community aborts maintenance.
- Safe and stable: Minor version upgrades are safer and highly compatible, and this kind of progressive upgrade is more stable. For example, the new features in the latest version may be very attractive, but they may not yet meet production standards and may take a long time to adapt.
- Easy to install: Combined with domestic image acceleration, KLTS natively supports Kubeadm on CentOS, Ubuntu, and openSUSE, and also provides a one-click installation script.
- Open and transparent: KLTS is an open source project hosted on GitHub and the whole process is open.
- More in the roadmap: Long-term maintenance of Containerd and other components will be added later.
Here is a sincere invitation to developers. If you feel that the KLTS team’s contributions are valuable and earn your trust, you are welcome to join the KLTS community to talk and contribute. We look forward to any comments, suggestions, or solutions.
Kubernetes release cycle
The release cycles of recent Kubernetes versions are as follows:
Ver. | Initial date | EOL date |
---|---|---|
1.10 | 2018-03-27 | 2019-02-13 |
1.11 | 2018-07-28 | 2019-05-01 |
1.12 | 2018-09-28 | 2019-07-08 |
1.13 | 2018-12-04 | 2019-10-15 |
1.14 | 2019-03-25 | 2019-12-11 |
1.15 | 2019-07-20 | 2020-05-06 |
1.16 | 2019-09-18 | 2020-09-02 |
1.17 | 2019-12-08 | 2021-01-13 |
1.18 | 2020-03-25 | 2021-06-18 |
1.19 | 2020-08-26 | 2021-10-28 |
1.20 | 2020-12-08 | 2022-02-28 |
1.21 | 2021-04-08 | 2022-06-28 |
1.22 | 2021-08-04 | 2022-10-28 |
An initial date refers to the date of the first official release of a minor version, such as 1.10.0, 1.11.0, …, 1.22.0.
An EOL date refers to the date of the End Of Life (EOL) release, i.e., the community no longer maintains the version from then on. This is typically the final bug-fix release, about one year after the initial release.
2 - Preparation
This page introduces some preparations before installation. For example, it is required to install the kubeadm toolbox to get started. For information on how to create a cluster with kubeadm, see the Step-By-Step Install page.
Before you start
You should prepare or perform the following actions:
- A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.
- 2 GB or more of RAM per host (any less will leave little room for your apps.)
- 2 CPUs or more per host.
- Good network connectivity between all nodes in the cluster (either public or on-premise network.)
- The hostname, MAC address, and product_uuid shall be unique for every node. See here for more details.
- Open the required ports on the host. See Check required ports for more details.
- Disable the swap partition. You MUST disable the swap partition to keep the kubelet working properly.
Verify the MAC address and product_uuid
Verify the MAC address and product_uuid are unique for every node. Perform the following actions:
- Get the MAC address of the network interfaces by running the command ip link or ifconfig -a
- Check the product_uuid by running the command sudo cat /sys/class/dmi/id/product_uuid
Hardware devices are very likely to have unique addresses, although some virtual machines may use identical ones. Kubernetes uses these addresses to uniquely identify the nodes in the cluster. If these addresses are not unique to each node, the installation process may fail.
Check network adapters
If you have more than one network adapter and your Kubernetes components are not reachable via the default route, it is recommended to add IP route(s) so the Kubernetes cluster can set up proper connections via the appropriate adapter.
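For example, a minimal sketch of pinning cluster traffic to a second adapter with iproute2 (the subnet 10.0.0.0/16, gateway 10.0.0.1, and interface eth1 below are placeholders for your own environment):
# Route the cluster subnet through the second adapter (placeholder values).
sudo ip route add 10.0.0.0/16 via 10.0.0.1 dev eth1
# Verify which interface will be used to reach a given node IP.
ip route get 10.0.0.11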
Enable iptables to discover bridged traffic
Make sure that the br_netfilter module is loaded. You can check this by running lsmod | grep br_netfilter. To load it explicitly, run sudo modprobe br_netfilter.
For iptables on your Linux node to correctly discover bridged traffic, ensure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config file, for example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
For more details see the Network Plugin Requirements page.
Check required ports
This section lists the ports that need to be open on your nodes.
Control-plane node(s)
Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 6443* | Kubernetes API server | All |
TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
TCP | Inbound | 10250 | kubelet API | Self, Control plane |
TCP | Inbound | 10251 | kube-scheduler | Self |
TCP | Inbound | 10252 | kube-controller-manager | Self |
Worker node(s)
Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 10250 | kubelet API | Self, Control plane |
TCP | Inbound | 30000-32767 | NodePort Services | All |
Above is the default port range for NodePort Services.
Any port numbers marked with * are overridable, so you will need to ensure any custom ports you provide are also open.
Although etcd ports are included in control-plane nodes, you can also host your own etcd cluster externally or on custom ports.
The pod network plugin you use (see below) may also require certain ports to be open. Since this differs with each pod network plugin, see the documentation for the plugins about what port(s) they need.
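As a quick check that a required port is actually reachable, you can probe it with netcat from another machine; the sketch below assumes the API server is on its default port 6443 and <control-plane-ip> is a placeholder:
# Probe the Kubernetes API server port from another node.
nc -v <control-plane-ip> 6443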
Set a host name
Set a hostname for your host by using the following command:
hostnamectl set-hostname your-new-host-name
echo "127.0.0.1 $(hostname)" >> /etc/hosts
echo "::1 $(hostname)" >> /etc/hosts
Disable Swap
Run the following command to disable swap:
swapoff -a
If you want to disable swap permanently, edit the /etc/fstab file to comment out the swap mount.
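A minimal sketch of doing this with sed (assuming a standard fstab layout where the swap entry has swap in its filesystem-type field; back up the file first):
# Back up fstab, then comment out any swap entries so swap stays off after reboot.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab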
Disable Selinux
Run the following command to disable SELinux:
setenforce 0
If you want to disable SELinux permanently, edit /etc/sysconfig/selinux and replace SELINUX=enforcing with SELINUX=disabled.
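A minimal sketch of the permanent change with sed (assuming the default SELINUX=enforcing line; a reboot is required for it to take effect):
# Switch SELinux to disabled on the next boot.
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux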
Install Runtime
To run containers in Pods, Kubernetes uses a container runtime.
By default, Kubernetes uses the Container Runtime Interface (CRI) to interface with your chosen container runtime.
If you don’t specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning through a list of well-known Unix domain sockets. The following table lists container runtimes and their associated socket paths:
Runtime | Path to Unix domain socket |
---|---|
Docker | /var/run/dockershim.sock |
Containerd | /run/containerd/containerd.sock |
CRI-O | /var/run/crio/crio.sock |
If both Docker and Containerd are detected, Docker takes precedence. This is inevitable because Docker 18.09 ships with Containerd and both are detectable even if you only installed Docker. If any other two or more runtimes are detected, kubeadm exits with an error.
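If you prefer to be explicit rather than rely on auto-detection, kubeadm accepts a --cri-socket flag; a sketch selecting containerd using its default socket path from the table above:
# Explicitly tell kubeadm to use containerd instead of auto-detecting a runtime.
kubeadm init --cri-socket /run/containerd/containerd.sock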
By default, kubeadm uses Docker as the container runtime. The kubelet integrates with Docker through the built-in dockershim CRI implementation.
Run the following command to install Docker on a Red Hat-based distribution:
yum install docker
Run the following command to install Docker on a Debian-based distribution:
apt-get install docker.io
By default, Containerd only provides download packages for the amd64 architecture. If you are using a different architecture, you can install the containerd.io package from the official Docker repository. Instructions for setting up the Docker repository and installing the containerd.io package for your Linux distribution can be found in Installing the Docker Engine.
You can also install containerd manually from the official release binaries as follows:
VERSION=1.5.4
wget -c https://github.com/containerd/containerd/releases/download/v${VERSION}/containerd-${VERSION}-linux-amd64.tar.gz
tar xvf containerd-${VERSION}-linux-amd64.tar.gz -C /usr/local/
mkdir -p /etc/containerd/ && containerd config default > /etc/containerd/config.toml
wget -c -O /etc/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
systemctl start containerd && systemctl enable containerd
See container runtimes for more information.
3 - Install with Script
KLTS provides a script that automates the installation process.
Install
wget https://github.com/klts-io/klts/raw/main/install.sh
chmod +x install.sh
./install.sh
Usage: ./install.sh [OPTIONS]
-h, --help : Display this help and exit
--kubernetes-container-registry=ghcr.io/klts-io/kubernetes-lts : Kubernetes container registry
--kubernetes-version=1.18.20-lts.1 : Kubernetes version to install
--containerd-version=1.3.10-lts.0 : Containerd version to install
--runc-version=1.0.2-lts.0 : Runc version to install
--kubernetes-rpm-source=https://github.com/klts-io/kubernetes-lts/raw/rpm-v1.18.20-lts.2 : Kubernetes RPM source
--containerd-rpm-source=https://github.com/klts-io/containerd-lts/raw/rpm-v1.3.10-lts.0 : Containerd RPM source
--runc-rpm-source=https://github.com/klts-io/runc-lts/raw/rpm-v1.0.2-lts.0 : Runc RPM source
--others-rpm-source=https://github.com/klts-io/others/raw/rpm : Other RPM source
--kubernetes-deb-source=https://github.com/klts-io/kubernetes-lts/raw/deb-v1.18.20-lts.2 : Kubernetes DEB source
--containerd-deb-source=https://github.com/klts-io/containerd-lts/raw/deb-v1.3.10-lts.0 : Containerd DEB source
--runc-deb-source=https://github.com/klts-io/runc-lts/raw/deb-v1.0.2-lts.0 : Runc DEB source
--others-deb-source=https://github.com/klts-io/others/raw/deb : Other DEB source
--focus=enable-iptables-discover-bridged-traffic,disable-swap,disable-selinux,setup-source,install-kubernetes,install-containerd,install-runc,install-crictl,install-cniplugins,setup-crictl-config,setup-containerd-cni-config,setup-kubelet-config,setup-containerd-config,daemon-reload,start-containerd,status-containerd,enable-containerd,start-kubelet,status-kubelet,enable-kubelet,images-pull,control-plane-init,status-nodes,show-join-command : Focus on specific step
--skip='' : Skip on specific step
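For example, a sketch of a run that pins the versions shown in the defaults above and skips control-plane initialization on a worker node (adjust the versions and skipped steps to your needs):
# Install KLTS Kubernetes and containerd on a worker node without initializing a control plane.
./install.sh \
  --kubernetes-version=1.18.20-lts.2 \
  --containerd-version=1.3.10-lts.0 \
  --skip=control-plane-init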
4 - Step-By-Step Install
KLTS provides installation packages in both RPM and DEB formats. You can choose the installation method that suits your system.
Before installation, make sure you have completed the steps on the Preparation page.
Set the KLTS source package
Run the following commands to configure the KLTS package source on a Red Hat-based (RPM) distribution:
VERSION=1.18.20-lts.2
cat << EOF > /etc/yum.repos.d/klts.repo
[klts]
name=klts
baseurl=https://raw.githubusercontent.com/klts-io/kubernetes-lts/rpm-v${VERSION}/\$basearch/
enabled=1
gpgcheck=0
[klts-others]
name=klts-others
baseurl=https://raw.githubusercontent.com/klts-io/others/rpm/\$basearch/
enabled=1
gpgcheck=0
EOF
yum makecache
Run the following commands to configure the KLTS package source on a Debian-based (DEB) distribution:
VERSION=1.18.20-lts.2
cat << EOF > /etc/apt/sources.list.d/klts.list
deb [trusted=yes] https://raw.githubusercontent.com/klts-io/kubernetes-lts/deb-v${VERSION} stable main
deb [trusted=yes] https://raw.githubusercontent.com/klts-io/others/deb stable main
EOF
apt-get update
Install
Run the following command to install the packages on a Red Hat-based distribution:
yum install kubeadm kubelet kubectl
Run the following command to install the packages on a Debian-based distribution:
apt-get install kubeadm kubelet kubectl
Auto-start Kubelet on boot
Run the following command to make the kubelet start automatically on boot:
systemctl enable kubelet
Pull the dependency images
Run the following commands to pull the dependency images:
VERSION=1.18.20-lts.2
REPOS=ghcr.io/klts-io/kubernetes-lts
kubeadm config images pull --image-repository ${REPOS} --kubernetes-version v${VERSION}
All subsequent kubeadm operations need to include --image-repository and --kubernetes-version to explicitly specify the images.
Initialize the control plane node
Run the following code to initialize the control plane node:
VERSION=1.18.20-lts.2
REPOS=ghcr.io/klts-io/kubernetes-lts
kubeadm init --image-repository ${REPOS} --kubernetes-version v${VERSION}
For details see Create a cluster with kubeadm.
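After kubeadm init completes, a typical next step (a sketch following the standard kubeadm workflow, not something specific to KLTS) is to set up kubectl access and then install the pod network plugin of your choice:
# Point kubectl at the new cluster using the admin credentials generated by kubeadm.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# The node stays NotReady until a CNI network plugin is installed.
kubectl get nodes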
5 - Kubernetes
5.1 - Fixed CVEs
5.1.1 - CVE-2021-25741
Vulnerability details
This is a volume security issue related to permission access. A user can access files and directories outside the volume mount directory, including the host’s file system, through the subpath volume mounting method in the created container.
Scope
This vulnerability affects related behaviors of kubelet, and the issue is particularly serious for cluster administrators who may strictly restrict the creation of hostPath volumes.
CVSS scores
This vulnerability is rated as medium-risk with a CVSS score of 5.5.
Prevention
For users who do not want to upgrade the kubelet, there are two preventative measures (see the sketch after this list):
- Disable the VolumeSubpath feature gate for kubelet and kube-apiserver, and remove all pods that are using this feature.
- Use admission control to prevent users with low trust levels from running containers with root permission.
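A minimal sketch of the first measure, assuming you manage the component flags directly (the flag below uses the generic Kubernetes feature-gate syntax; how flags are set varies by cluster installation):
# Add to both the kube-apiserver and kubelet command lines, then restart the components.
--feature-gates=VolumeSubpath=false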
Fixed by official
- v1.22.2
- v1.21.5
- v1.20.11
- v1.19.15
Fixed by KLTS
- v1.18.20-lts.1 kubernetes/kubernetes#104253
- v1.17.17-lts.1 TODO
- v1.16.15-lts.1 TODO
- v1.15.12-lts.1 TODO
- v1.14.10-lts.1 TODO
- v1.13.12-lts.1 TODO
- v1.12.10-lts.1 TODO
- v1.11.10-lts.1 TODO
- v1.10.13-lts.1 TODO
5.1.2 - CVE-2021-3121
Vulnerability details
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
Scope
The Kubernetes system components have their own recovery mechanism to deal with crashes and will not interrupt service when encountering a maliciously submitted Protobuf message, so they are not within the scope affected by the vulnerability.
When a program in an application system receives and processes Protobuf messages, if the component does not have a recovery mechanism to deal with the crash, then it is within the scope of the vulnerability, and the service may be interrupted when such a malicious attack comes.
The Kubernetes community has tested and verified that the API Server is not affected by this vulnerability, but in order to protect you from the hidden risk of this security vulnerability, the community has upgraded the relevant Protobuf files.
Prevention
If you use automatically generated Protobuf messages in your application code and find that the relevant component exits due to the following exception, the vulnerability may exist.
panic: runtime error: index out of range [-9223372036854775804]
goroutine 1 [running]:
v1.(*MessageName).Unmarshal(0xc00006f1e8, 0xc0000281a8, 0xa, 0x10, 0xc00006f1b8, 0x1)
.../protofile.pb.go:250 +0xb86
If you use a component related to Protobuf messages, it is recommended to upgrade the Gogo Protobuf compiler to a fixed version (v1.3.2 or higher), and then regenerate the relevant Protobuf messages with the upgraded compiler, as sketched below.
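A minimal sketch for a Go module that depends on gogo/protobuf (the file name yourservice.proto and the gofast generator are illustrative; use whichever gogo generator your project already relies on):
# Bump the gogo/protobuf dependency to a fixed release.
go get github.com/gogo/protobuf@v1.3.2
go mod tidy
# Reinstall the code generator and regenerate your messages.
go install github.com/gogo/protobuf/protoc-gen-gofast@v1.3.2
protoc --gofast_out=. yourservice.proto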
Fixed by official
- v1.21.1
- v1.20.7
- v1.19.11
- v1.18.19
Fixed by KLTS
- v1.17.17-lts.1 kubernetes/kubernetes#101327
- v1.16.15-lts.1 kubernetes/kubernetes#101327
- v1.15.12-lts.1 kubernetes/kubernetes#101327
- v1.14.10-lts.1 kubernetes/kubernetes#101327
- v1.13.12-lts.1 kubernetes/kubernetes#101327
- v1.12.10-lts.1 kubernetes/kubernetes#101327
- v1.11.10-lts.1 kubernetes/kubernetes#101327
- v1.10.13-lts.1 kubernetes/kubernetes#101327
5.1.3 - CVE-2020-8559
Vulnerability details
This is a security vulnerability of the kube-apiserver component. An attacker can intercept certain upgrade requests sent to a node’s kubelet and forward the request to other target nodes using the original access credentials in the request, which could allow the attacker to escalate privileges from a node compromise to a full cluster compromise.
Scope
Since kube-apiserver allows the request to be propagated back to the source client in a proxied upgrade request, an attacker can intercept certain upgrade requests sent to a node’s kubelet and then use the original access credentials in the request to forward requests to other target nodes, resulting in a privilege escalation vulnerability on the attacked node.
CVSS scores
This vulnerability is rated as medium-risk with a CVSS score of 6.4. If multiple clusters share the same CA and authentication credentials, an attacker can use this vulnerability to attack other clusters; in that case it is a high-risk vulnerability.
Prevention
For cross-node attacks in the cluster, it is recommended that you take the following preventative measure (a sketch follows the list):
- Promptly revoke kubeconfig credentials that may have leaked, and follow the principle of least privilege to restrict unnecessary access to the pods/exec, pods/attach, pods/portforward, and proxy resources through RBAC permissions.
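A minimal sketch of the RBAC side, granting read-only pod access while deliberately omitting the exec/attach/portforward subresources (the role name restricted-pod-reader and the dev namespace are placeholders):
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: restricted-pod-reader
  namespace: dev
rules:
# Allow reading pods and logs, but grant nothing on pods/exec, pods/attach, pods/portforward, or proxy.
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
EOF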
Fixed by official
- v1.18.6
- v1.17.9
- v1.16.13
Fixed by KLTS
5.1.4 - CVE-2020-8558
Vulnerability details
The kube-proxy component was found to set the kernel parameter net.ipv4.conf.all.route_localnet=1 in both iptables and ipvs modes to allow local loopback access. An attacker may use a container sharing the host network, or bind and listen to a TCP/UDP service on 127.0.0.1 on a cluster node, to access the same LAN or an adjacent node on the layer-2 network and obtain interface information. If your service does not set the necessary security authentication, this may cause the risk of information leakage.
Scope
When an attacker has the capability of configuring the host network or can access a container instance with the CAP_NET_RAW capability, they can obtain socket service information by listening to 127.0.0.1 on the target node. If there is an exposed service on the target host that can be accessed via 127.0.0.1 and does not require any authentication, then that service’s information can be obtained by the attacker.
CVSS scores
- If the cluster API Server opens the unauthenticated port (default 8080), the attacker may obtain information about the API Server interface; the threat level is high-risk and the score is 8.8.
- If the cluster API Server keeps the unauthenticated port closed (the default), the threat level is medium-risk and the score is 5.4.
Prevention
It is recommended that you take the following preventative measures:
If a business container needs to use the host network mode and listen on an insecure port, you can mitigate this vulnerability by manually adding an iptables rule on the node.
Run the following command to configure an iptables rule in the cluster that denies non-local access traffic to 127.0.0.1:
iptables -I INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
If the cluster does not need to open the API Server insecure port, you can add --insecure-port=0 to the kube-apiserver command line to disable it.
If untrusted containers are running in the cluster, you can prohibit containers from enabling the CAP_NET_RAW capability by dropping it in the pod spec:
securityContext:
capabilities:
drop:
- "NET_RAW"
Use PodSecurityPolicy to restrict privileged deployments or containers sharing the host network. In addition, you can configure requiredDropCapabilities in the policy to force deployed containers to drop the CAP_NET_RAW capability, as sketched below.
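A minimal sketch of such a policy (PodSecurityPolicy still applies to the early versions KLTS maintains, although it was removed upstream in Kubernetes 1.25; the policy name restrict-net-raw is a placeholder and the remaining required fields use permissive defaults for brevity):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-net-raw
spec:
  # Disallow privileged pods and host networking, and force containers to drop NET_RAW.
  privileged: false
  hostNetwork: false
  requiredDropCapabilities:
  - NET_RAW
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - "*"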
Fixed by official
- v1.18.4
- v1.17.7
- v1.16.11
Fixed by KLTS
5.1.5 - CVE-2020-8552
Vulnerability details
This vulnerability may make the API Server vulnerable to a DoS (Denial of Service) attack caused by successful API requests.
Scope
The API Server is vulnerable to a denial of service attack via successful API requests.
Fixed by Official
- v1.17.3
- v1.16.7
- v1.15.10
Fixed by KLTS
5.1.6 - CVE-2019-1002101
Vulnerability details
This vulnerability may allow an attacker to modify or monitor any file in the directory that has the same name as the symbolic link header during the unpacking process of the kubectl cp command, thereby causing damage.
Scope
The kubectl cp command allows copying files between containers and the user machine. An attacker may implant a malicious tar package with a symbolic link header into the image or running container, and modify or monitor any file in the directory that has the same name as the symbolic link header during the unpacking process of the cp command, thereby causing damage.
Fixed by Official
- v1.14.1
- v1.13.6
- v1.12.8
- v1.11.10
Fixed by KLTS
5.1.7 - CVE-2019-11251
Vulnerability details
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
Scope
This vulnerability has a similar impact to CVE-2019-1002101, CVE-2019-11246, and CVE-2019-11249, which were found not long before.
The kubectl cp command allows copying files between containers and the user machine. To copy files from a container, Kubernetes runs tar inside the container to create a tar archive, copies it over the network, and kubectl unpacks it on the user’s machine.
If the tar binary in the container is malicious, it could run any code and output unexpected, malicious results. An attacker could use this to write files to any path (Path Traversal) on the user’s machine when kubectl cp is called.
Fixed by Official
- v1.15.4
- v1.14.7
- v1.13.11
Fixed by KLTS
5.1.8 - CVE-2019-11248
Vulnerability details
The debugging endpoint /debug/pprof is exposed over the unauthenticated Kubelet healthz port.
Scope
The Go pprof endpoint is exposed over the Kubelet’s healthz port. This debugging endpoint can potentially leak sensitive information such as internal Kubelet memory addresses and configuration, or be used for a limited denial of service. The issue is of medium severity, but it is not exposed by the default configuration.
Fixed by Official
- v1.14.4
- v1.13.8
- v1.12.10
Fixed by KLTS
5.1.9 - CVE-2019-11247
Vulnerability details
The API Server mistakenly allows access to a cluster-scoped custom resource.
Scope
The Kubernetes kube-apiserver mistakenly allows access to a cluster-scoped custom resource if the request is made as if the resource were namespaced.
Authorizations for a resource accessed in this manner are enforced using roles and role bindings within the namespace, meaning that a user with access only to a resource in one namespace could create, view, update, or delete the cluster-scoped resource (according to their namespace role privileges).
Fixed by Official
- v1.12.12
Fixed by KLTS
5.1.10 - CVE-2019-11246
Vulnerability details
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
Scope
This vulnerability has a similar impact to CVE-2019-1002101, which was found not long before.
The kubectl cp command allows copying files between containers and the user machine. To copy files from a container, Kubernetes runs tar inside the container to create a tar archive, copies it over the network, and kubectl unpacks it on the user’s machine.
If the tar binary in the container is malicious, it could run any code and output unexpected, malicious results. An attacker could use this to write files to any path (Path Traversal) on the user’s machine when kubectl cp is called.
Fixed by Official
- v1.14.3
- v1.13.7
- v1.12.10
Fixed by KLTS
5.1.11 - CVE-2019-11245
Vulnerability details
Containers for pods that do not specify an explicit runAsUser attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node. If the pod specified mustRunAsNonRoot: true, the kubelet will refuse to start the container as root. If the pod did not specify mustRunAsNonRoot: true, the kubelet will run the container as uid 0.
Scope
If a pod does not specify mustRunAsNonRoot: true, its containers may run as uid 0 (root) when they restart or when the image was previously pulled onto the node.
Prevention
Specify mustRunAsNonRoot: true for pods.
Fixed by Official
- v1.14.3
- v1.13.7
Fixed by KLTS
5.1.12 - CVE-2019-11249
Vulnerability details
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
Scope
This vulnerability has a similar impact to CVE-2019-1002101 and CVE-2019-11246, which were found not long before.
The kubectl cp command allows copying files between containers and the user machine. To copy files from a container, Kubernetes runs tar inside the container to create a tar archive, copies it over the network, and kubectl unpacks it on the user’s machine.
If the tar binary in the container is malicious, it could run any code and output unexpected, malicious results. An attacker could use this to write files to any path (Path Traversal) on the user’s machine when kubectl cp is called.
Fixed by Official
- v1.15.2
- v1.14.5
- v1.13.9
Fixed by KLTS
5.1.13 - nokmem
Bug details
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
Scope
When the node is used for a long time, it prompts an error that the remaining space is insufficient. The error message is as follows:
mkdir: cannot create directory '/sys/fs/cgroup/memory/8': No space left on device
The node disk space is sufficient but this error is still reported, and Pod creation always fails; this is a potential “time bomb”.
All environments that use early kernel versions and Kubernetes versions before 1.21 are affected. In runc 1.0.0-rc94 (opencontainers/runc#2840) it has been fixed (the kmem accounting code was removed).
Prevention
- Upgrade the system kernel
- Kubernetes 1.14 to 1.20: rebuild the kubelet with -tags=nokmem (see the sketch after this list)
- Kubernetes 1.14 or earlier: hard-code the fix; refer to nokmem.1.13.patch
- Kubernetes 1.21 or higher: not affected
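A minimal sketch of the rebuild, assuming the upstream Kubernetes source tree and its standard make-based build (the v1.18.20 tag below is only an example; check out the release you actually run):
# Check out the matching release and build the kubelet without kmem accounting.
git clone --branch v1.18.20 --depth 1 https://github.com/kubernetes/kubernetes
cd kubernetes
make WHAT=cmd/kubelet GOFLAGS="-tags=nokmem"
# The resulting binary is placed under _output/ (for example _output/local/bin/linux/amd64/kubelet).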
Fixed by KLTS
- v1.20.15-lts.1 nokmem.1.20.patch
- v1.19.16-lts.1 nokmem.1.20.patch
- v1.18.20-lts.1 nokmem.1.20.patch
- v1.17.17-lts.1 nokmem.1.20.patch
- v1.16.15-lts.1 nokmem.1.20.patch
- v1.15.12-lts.1 nokmem.1.20.patch
- v1.14.10-lts.1 nokmem.1.20.patch
- v1.13.12-lts.1 nokmem.1.13.patch
- v1.12.10-lts.1 nokmem.1.13.patch
- v1.11.10-lts.1 nokmem.1.13.patch
- v1.10.13-lts.1 nokmem.1.13.patch
5.2 - Release log
5.2.1 - v1.23
5.2.1.1 - v1.23.5-lts.1
This is the first fixed release by KLTS for v1.23.5.
Patches
- There are no fixes, just the CI processes running
5.2.2 - v1.22
5.2.2.1 - v1.22.8-lts.1
This is the first fixed release by KLTS for v1.22.8.
Patches
- There are no fixes, just the CI processes running
5.2.3 - v1.21
5.2.3.1 - v1.21.11-lts.1
This is the first fixed release by KLTS for v1.21.11.
Patches
- There are no fixes, just the CI processes running
5.2.4 - v1.20
5.2.4.1 - v1.20.15-lts.2
This is the second fixed release by KLTS for v1.20.15.
Patches
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.5 - v1.19
5.2.5.1 - v1.19.16-lts.3
This is the third fixed release by KLTS for v1.19.16.
Patches
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
- CVE-2020-8554
5.2.6 - v1.18
5.2.6.1 - v1.18.20-lts.1
This is the first fixed release by KLTS for v1.18.20.
Patches
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
- CVE-2021-25741
This is a volume security issue related to permission access. A user can access files and directories outside the volume mount directory, including the host’s file system, through the subpath volume mounting method in the created container.
5.2.6.2 - v1.18.20-lts.2
This is the second fixed release by KLTS for v1.18.20.
Patches
- Bugfix: reducing race risk in kubelet for missing KUBERNETES_SERVICE_HOST
5.2.7 - v1.17
5.2.7.1 - v1.17.17-lts.1
This is the first fixed release by KLTS for v1.17.17.
Patches
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.8 - v1.16
5.2.8.1 - v1.16.15-lts.1
This is the first fixed release by KLTS for v1.16.15.
Patches
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.9 - v1.15
5.2.9.1 - v1.15.12-lts.1
This is the first fixed release by KLTS for v1.15.12.
Patches
- CVE-2020-8558
The kube-proxy component was found to set the kernel parameter net.ipv4.conf.all.route_localnet=1 in both iptables and ipvs modes to allow local loopback access. An attacker may use a container sharing the host network, or bind and listen to a TCP/UDP service on 127.0.0.1 on a cluster node, to access the same LAN or an adjacent node on the layer-2 network and obtain interface information. If your service does not set the necessary security authentication, this may cause the risk of information leakage.
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.10 - v1.14
5.2.10.1 - v1.14.10-lts.1
This is the first fixed release by KLTS for v1.14.10.
Patches
- CVE-2020-8552
This vulnerability may make the API Server vulnerable to a DoS (Denial of Service) attack caused by successful API requests.
- CVE-2020-8558
The kube-proxy component was found to set the kernel parameter net.ipv4.conf.all.route_localnet=1 in both iptables and ipvs modes to allow local loopback access. An attacker may use a container sharing the host network, or bind and listen to a TCP/UDP service on 127.0.0.1 on a cluster node, to access the same LAN or an adjacent node on the layer-2 network and obtain interface information. If your service does not set the necessary security authentication, this may cause the risk of information leakage.
- CVE-2020-8559
This is a security vulnerability of the kube-apiserver component. An attacker can intercept certain upgrade requests sent to a node’s kubelet and forward the request to other target nodes using the original access credentials in the request, which could allow the attacker to escalate privileges from a node compromise to a full cluster compromise.
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.11 - v1.13
5.2.11.1 - v1.13.12-lts.1
This is the first fixed release by KLTS for v1.13.12.
Patches
- CVE-2020-8552
This vulnerability may make the API Server vulnerable to a DoS (Denial of Service) attack caused by successful API requests.
- CVE-2020-8558
The kube-proxy component was found to set the kernel parameter net.ipv4.conf.all.route_localnet=1 in both iptables and ipvs modes to allow local loopback access. An attacker may use a container sharing the host network, or bind and listen to a TCP/UDP service on 127.0.0.1 on a cluster node, to access the same LAN or an adjacent node on the layer-2 network and obtain interface information. If your service does not set the necessary security authentication, this may cause the risk of information leakage.
- TODO CVE-2020-8559
This is a security vulnerability of the kube-apiserver component. An attacker can intercept certain upgrade requests sent to a node’s kubelet and forward the request to other target nodes using the original access credentials in the request, which could allow the attacker to escalate privileges from a node compromise to a full cluster compromise.
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.12 - v1.12
5.2.12.1 - v1.12.10-lts.1
This is the first fixed release by KLTS for v1.12.10.
Patches
- CVE-2019-11245
Containers for pods that do not specify an explicit runAsUser attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node.
- CVE-2019-11247
The API Server mistakenly allows access to a cluster-scoped custom resource.
- CVE-2019-11249
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- CVE-2019-11251
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- CVE-2020-8552
This vulnerability may make the API Server vulnerable to a DoS (Denial of Service) attack caused by successful API requests.
- CVE-2020-8558
The kube-proxy component was found to set the kernel parameter net.ipv4.conf.all.route_localnet=1 in both iptables and ipvs modes to allow local loopback access. An attacker may use a container sharing the host network, or bind and listen to a TCP/UDP service on 127.0.0.1 on a cluster node, to access the same LAN or an adjacent node on the layer-2 network and obtain interface information. If your service does not set the necessary security authentication, this may cause the risk of information leakage.
- TODO CVE-2020-8559
This is a security vulnerability of the kube-apiserver component. An attacker can intercept certain upgrade requests sent to a node’s kubelet and forward the request to other target nodes using the original access credentials in the request, which could allow the attacker to escalate privileges from a node compromise to a full cluster compromise.
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.13 - v1.11
5.2.13.1 - v1.11.10-lts.1
This is the first fixed release by KLTS for v1.11.10.
Patches
- CVE-2019-11245
Containers for pods that do not specify an explicit runAsUser attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node.
- CVE-2019-11246
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- CVE-2019-11247
The API Server mistakenly allows access to a cluster-scoped custom resource.
- CVE-2019-11248
The debugging endpoint /debug/pprof is exposed over the unauthenticated Kubelet healthz port.
- CVE-2019-11249
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- CVE-2019-11251
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- CVE-2020-8552
This vulnerability may make the API Server vulnerable to a DoS (Denial of Service) attack caused by successful API requests.
- CVE-2020-8558
The kube-proxy component was found to set the kernel parameter net.ipv4.conf.all.route_localnet=1 in both iptables and ipvs modes to allow local loopback access. An attacker may use a container sharing the host network, or bind and listen to a TCP/UDP service on 127.0.0.1 on a cluster node, to access the same LAN or an adjacent node on the layer-2 network and obtain interface information. If your service does not set the necessary security authentication, this may cause the risk of information leakage.
- TODO CVE-2020-8559
This is a security vulnerability of the kube-apiserver component. An attacker can intercept certain upgrade requests sent to a node’s kubelet and forward the request to other target nodes using the original access credentials in the request, which could allow the attacker to escalate privileges from a node compromise to a full cluster compromise.
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
5.2.14 - v1.10
5.2.14.1 - v1.10.13-lts.1
This is the first fixed release by KLTS for v1.10.13.
Patches
- CVE-2019-11245
Containers for pods that do not specify an explicit runAsUser attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node.
- CVE-2019-1002101
This vulnerability may allow an attacker to modify or monitor any file in the directory that has the same name as the symbolic link header during the unpacking process of the kubectl cp command, thereby causing damage.
- CVE-2019-11246
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- TODO CVE-2019-11247
The API Server mistakenly allows access to a cluster-scoped custom resource.
- CVE-2019-11248
The debugging endpoint /debug/pprof is exposed over the unauthenticated Kubelet healthz port.
- CVE-2019-11249
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- CVE-2019-11251
This vulnerability may allow an attacker to use the kubectl cp command to write malicious files from the container tar package to any path on the host using Path Traversal. This process is limited only by the system permissions of the local user.
- CVE-2020-8552
This vulnerability may make the API Server vulnerable to a DoS (Denial of Service) attack caused by successful API requests.
- TODO CVE-2020-8558
The kube-proxy component was found to set the kernel parameter net.ipv4.conf.all.route_localnet=1 in both iptables and ipvs modes to allow local loopback access. An attacker may use a container sharing the host network, or bind and listen to a TCP/UDP service on 127.0.0.1 on a cluster node, to access the same LAN or an adjacent node on the layer-2 network and obtain interface information. If your service does not set the necessary security authentication, this may cause the risk of information leakage.
- TODO CVE-2020-8559
This is a security vulnerability of the kube-apiserver component. An attacker can intercept certain upgrade requests sent to a node’s kubelet and forward the request to other target nodes using the original access credentials in the request, which could allow the attacker to escalate privileges from a node compromise to a full cluster compromise.
- CVE-2021-3121
A program with this vulnerability may crash because of processing some messages that contain malicious Protobuf. If the version of Gogo Protobuf you are using is too low, this vulnerability may exist.
- nokmem
The node has sufficient disk space, but it keeps reporting that the disk is insufficient when creating a Pod.
6 - Containerd
6.1 - Fixed CVEs
6.1.1 - CVE-2021-41103
Vulnerability details
Container root directories and some plugins had insufficiently restricted permissions, allowing otherwise unprivileged Linux users to traverse directory contents and execute programs.
Scope
When containers included executable programs with extended permission bits (such as setuid), unprivileged Linux users could discover and execute those programs. When the UID of an unprivileged Linux user on the host collided with the file owner or group inside a container, the unprivileged Linux user on the host could discover, read, and modify those files.
CVSS scores
This vulnerability is rated as high-risk with a CVSS score of 7.2.
Prevention
Ensure that the login users of the cluster nodes are all trusted users, and restrict the access rights of untrusted users to the cluster nodes. Remove unnecessary extended permissions in the container bundles directory.
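A minimal sketch of auditing a node for such files, assuming containerd's default root and state directories:
# List files with setuid/setgid bits under containerd's directories.
sudo find /var/lib/containerd /run/containerd -type f -perm /6000 -ls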
Fixed by Official
- v1.4.11
- v1.5.7
Fixed by KLTS
6.2 - Release log
6.2.1 - v1.3
6.2.1.1 - v1.3.10-lts.1
This is the first fixed release by KLTS for Containerd v1.3.10.
Patches
- CVE-2021-41103
Container root directories and some plugins had insufficiently restricted permissions, allowing otherwise unprivileged Linux users to traverse directory contents and execute programs.
7 - Developer Guide
7.1 - Clone
This page describes how you can clone a Kubernetes master branch to your local computer.
Clone a single branch
Since the repository's branches are used as RPM and DEB package sources, cloning the whole repository directly would be very large, so you should clone only the master branch.
git clone --single-branch -b master https://github.com/klts-io/kubernetes-lts
7.2 - Dependencies
This page describes how you can install dependencies on different operating systems.
Install dependencies
Run the following commands to install dependencies on macOS:
brew install jq git python@3 # Install brew, See https://brew.sh/
pip3 install yq
Run the following commands to install dependencies on a Red Hat-based distribution:
yum install -y epel-release
yum install -y jq git python3
pip3 install yq
Run the following commands to install dependencies on a Debian-based distribution:
apt-get install -y jq git python3 python3-pip
pip3 install yq