kubeadm is the official Kubernetes tool for quickly bootstrapping a cluster. It is released in lockstep with every Kubernetes version, and each release adjusts some of the cluster-configuration practices, so experimenting with kubeadm is a good way to learn the current upstream best practices for cluster configuration.
I. Environment Preparation
1. Software Versions
Software | Version |
---|---|
kubernetes | v1.12.2 |
CentOS 7.5 | CentOS Linux release 7.5.1804 |
Docker | v18.06 |
flannel | 0.10.0 |
2. Node Plan
IP | Role | Hostname |
---|---|---|
172.18.8.200 | k8s master | master.wzlinux.com |
172.18.8.201 | k8s node01 | node01.wzlinux.com |
172.18.8.202 | k8s node02 | node02.wzlinux.com |
The node and network plan is summarized in the table above.
3. System Configuration
Disable the firewall.
systemctl stop firewalld
systemctl disable firewalld
Edit /etc/hosts and add the following entries.
172.18.8.200 master.wzlinux.com master
172.18.8.201 node01.wzlinux.com node01
172.18.8.202 node02.wzlinux.com node02
Disable SELinux.
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0
Turn off swap.
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
Configure the kernel forwarding parameters.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
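These sysctl keys only exist once the br_netfilter kernel module is loaded. If `sysctl --system` complains that the `net.bridge.bridge-nf-call-*` keys do not exist, loading the module first should take care of it; an optional check along these lines:

# load the bridge netfilter module so the bridge-nf-call sysctls become available
modprobe br_netfilter
# load it automatically on boot as well
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# verify that both values are now set to 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables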
Configure the Aliyun Kubernetes yum repository (a mirror usable inside China).
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
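If you want to confirm the new repository is usable before going further, a quick sanity check might look like this:

# rebuild the yum metadata cache and list the kubelet packages the mirror offers
yum makecache fast
yum list kubelet kubeadm kubectl --showduplicates | tail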
4. Install Docker
Both the master and the worker nodes need a container engine, so we install Docker on every node in advance.
Set up the Docker CE yum repository (Aliyun mirror).
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
Check which Docker versions are currently available in the repository.
[root@master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
Loaded plugins: fastestmirror
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com
Per the official compatibility recommendation for this Kubernetes release, we install Docker v18.06.
yum install docker-ce-18.06.1.ce -y
systemctl enable docker && systemctl start docker
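A quick optional check that the expected version was installed and that the daemon is running:

docker version
systemctl status docker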
Configure a domestic registry mirror (accelerator) to speed up image pulls.
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
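Note that `systemctl daemon-reload` by itself does not make dockerd re-read daemon.json; the mirror only takes effect after Docker is restarted. A quick way to restart and confirm:

sudo systemctl restart docker
# the mirror configured above should show up under "Registry Mirrors"
docker info | grep -A 1 "Registry Mirrors"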
II. Installing the Master Node
1. Software Installation
Control-plane components such as kube-apiserver, kube-controller-manager, and kube-scheduler run as containers, so on the master we need docker, kubeadm, kubelet, and kubectl installed.
yum install kubelet kubeadm kubectl -y
systemctl enable kubelet && systemctl start kubelet
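The command above installs whatever the newest packages in the repository are. Since the cluster we are building targets v1.12.2, it may be safer to pin the versions explicitly; on this repository that usually looks like the following (adjust the version strings if yum cannot resolve them):

yum install kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2 -y
systemctl enable kubelet && systemctl start kubelet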
2. Initialization
Google's image registry (k8s.gcr.io) is not reachable from mainland China. The workaround is to pull the images from another registry and re-tag them; make sure the versions you pull match your kubeadm version, v1.12.2 here. The shell script below does the pulling and re-tagging.
#!/bin/bash
kube_version=:v1.12.2
kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
addon_images=(etcd-amd64:3.2.24 coredns:1.2.2 pause-amd64:3.1)

# pull the control-plane images from the Aliyun mirror and re-tag them as k8s.gcr.io
for imageName in ${kube_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
done

# pull the add-on images (etcd, coredns, pause) and re-tag them as well
for imageName in ${addon_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

# drop the -amd64 suffix that kubeadm does not expect
docker tag k8s.gcr.io/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker image rm k8s.gcr.io/etcd-amd64:3.2.24
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker image rm k8s.gcr.io/pause-amd64:3.1
If you are not sure which image versions the script should use, you can run kubeadm init first, read the required versions from its error messages, and then pull exactly those images.
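kubeadm also has a subcommand that prints the images and tags a given release expects (available in recent versions, including v1.12), which saves the trial-and-error init:

kubeadm config images list --kubernetes-version v1.12.2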
After running the script, all the required images are available locally. Here we rely on a third-party mirror registry, but you could just as well host the images in your own private registry.
[root@master ~]# docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.12.2    15e9da1ca195   4 weeks ago     96.5MB
k8s.gcr.io/kube-apiserver            v1.12.2    51a9c329b7c5   4 weeks ago     194MB
k8s.gcr.io/kube-controller-manager   v1.12.2    15548c720a70   4 weeks ago     164MB
k8s.gcr.io/kube-scheduler            v1.12.2    d6d57c76136c   4 weeks ago     58.3MB
k8s.gcr.io/etcd                      3.2.24     3cab8e1b9802   2 months ago    220MB
k8s.gcr.io/coredns                   1.2.2      367cdc8433a4   3 months ago    39.2MB
k8s.gcr.io/pause                     3.1        da86e6ba6ca1   11 months ago   742kB
Next, run the initialization of the master node.
kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [172.18.8.200 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master.wzlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.8.200]
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 21.002639 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node namenode01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node namenode01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "namenode01" as an annotation
[bootstraptoken] using token: vdufzz.0748pbrqzcxmr71a
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.18.8.200:6443 --token vdufzz.0748pbrqzcxmr71a --discovery-token-ca-cert-hash sha256:39679c09d366c22d525f0db0c61793362409951b70086deba5388cab593ffbe2
At this point the initialization of the master node is complete.
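As the output above instructs, set up kubectl access for your regular user and check the node; it will stay NotReady until a pod network add-on (flannel, in our plan) is deployed in a later step:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# the master reports NotReady until the pod network add-on is installed
kubectl get nodes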
CentOS 7.5: Installing and Configuring a Kubernetes Cluster with kubeadm (Part 4)
Original article: http://blog.51cto.com/wzlinux/2322616