Kubernetes Cluster Deployment

Published 2023-09-06 02:29 | Editor: Mr. Gu (顾先生) | Keyword: kubernetes

Installation environment

172.19.2.49 (kube-apiserver, kube-controller-manager, kube-dns, kube-proxy, kubectl, etcd)

172.19.2.50 (kubectl, etcd, kube-proxy)

172.19.2.51 (kubectl, etcd, kube-proxy)

1. Create the CA certificate and keys, and set up the environment variables in advance

Run on 172.19.2.49:

mkdir -pv /root/local/bin
vim /root/local/bin/environment.sh

#!/usr/bin/bash
export PATH=/root/local/bin:$PATH
# Token used for TLS Bootstrapping; generate one with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="11d74483444fb57f6a1cc114ed715949"
# Preferably pick subnets the hosts do not already use for the service and Pod CIDRs.
# Service CIDR: not routable before deployment; reachable inside the cluster as IP:Port afterwards
SERVICE_CIDR="10.254.0.0/16"
# Pod CIDR (Cluster CIDR): not routable before deployment; routable afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"
# NodePort range
export NODE_PORT_RANGE="8400-9000"
# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://172.19.2.49:2379,https://172.19.2.50:2379,https://172.19.2.51:2379"
# etcd prefix for the flanneld network configuration
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"
# cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"
# cluster DNS domain
export CLUSTER_DNS_DOMAIN="cluster.local."
# name of the machine being deployed (any value that distinguishes the machines)
export NODE_NAME=etcd-host0
# IP of the machine being deployed
export NODE_IP=172.19.2.49
# IPs of all etcd cluster machines
export NODE_IPS="172.19.2.49 172.19.2.50 172.19.2.51"
# IPs and ports for etcd inter-cluster communication
export ETCD_NODES=etcd-host0=https://172.19.2.49:2380,etcd-host1=https://172.19.2.50:2380,etcd-host2=https://172.19.2.51:2380
# replace with the IP of any kubernetes master machine
export MASTER_IP=172.19.2.49
export KUBE_APISERVER="https://${MASTER_IP}:6443"

scp /root/local/bin/environment.sh app@172.19.2.50:/home/app
scp /root/local/bin/environment.sh app@172.19.2.51:/home/app
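
Before distributing the file, a quick sanity check that the variables load cleanly (a minimal sketch; the echoed variables are an arbitrary selection):

source /root/local/bin/environment.sh
echo "ETCD_ENDPOINTS=${ETCD_ENDPOINTS}"
echo "KUBE_APISERVER=${KUBE_APISERVER}"
echo "NODE_NAME=${NODE_NAME} NODE_IP=${NODE_IP}"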

Environment variable configuration on 172.19.2.50:

vim /home/app/environment.sh

The file is identical to the one above except for the per-host values:

export NODE_NAME=etcd-host1
export NODE_IP=172.19.2.50

Environment variable configuration on 172.19.2.51:

vim /home/app/environment.sh

Again identical except for the per-host values:

export NODE_NAME=etcd-host2
export NODE_IP=172.19.2.51

Run on all of 172.19.2.49, 172.19.2.50, and 172.19.2.51:

mv /home/app/environment.sh /root/local/bin/
chown root:root /root/local/bin/environment.sh
chmod 777 /root/local/bin/environment.sh
source /root/local/bin/environment.sh
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
cp cfssl_linux-amd64 /root/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
cp cfssljson_linux-amd64 /root/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
cp cfssl-certinfo_linux-amd64 /root/local/bin/cfssl-certinfo
export PATH=/root/local/bin:$PATH
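
A quick check that the cfssl toolchain is installed and on PATH:

which cfssl cfssljson cfssl-certinfo
cfssl version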

Run on 172.19.2.49:

mkdir ssl
cd ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
mkdir -pv /etc/kubernetes/ssl
cp ca* /etc/kubernetes/ssl
scp ca* root@172.19.2.50:/home/app/ca
scp ca* root@172.19.2.51:/home/app/ca

Run on 172.19.2.50 and 172.19.2.51:

chown -R root:root /home/app/ca
mkdir -pv /etc/kubernetes/ssl
cp /home/app/ca/ca* /etc/kubernetes/ssl
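
To confirm the CA landed intact on each node, openssl (assumed available on the hosts) can print the subject and validity window:

openssl x509 -in /etc/kubernetes/ssl/ca.pem -noout -subject -dates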

2. Deploy a highly available etcd cluster

Run on all of 172.19.2.49, 172.19.2.50, and 172.19.2.51:

source /root/local/bin/environment.sh
wget https://github.com/coreos/etcd/releases/download/v3.1.6/etcd-v3.1.6-linux-amd64.tar.gz
tar -xvf etcd-v3.1.6-linux-amd64.tar.gz
cp etcd-v3.1.6-linux-amd64/etcd* /root/local/bin
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "${NODE_IP}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
export PATH=/root/local/bin:$PATH
source /root/local/bin/environment.sh
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
mkdir -p /etc/etcd/ssl
mv etcd*.pem /etc/etcd/ssl
rm etcd.csr etcd-csr.json
mkdir -p /var/lib/etcd
cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/root/local/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
mv etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
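
Once etcd is running on all three machines, membership can be checked from any one of them; a sketch reusing the certificates created above:

ETCDCTL_API=3 /root/local/bin/etcdctl \
  --endpoints=https://${NODE_IP}:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  member list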

Verify the cluster on 172.19.2.49:

for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 /root/local/bin/etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/etcd/ssl/etcd.pem \
    --key=/etc/etcd/ssl/etcd-key.pem \
    endpoint health
done
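
With all three members up, each endpoint reports a line resembling the following (timings vary):

https://172.19.2.49:2379 is healthy: successfully committed proposal: took = 1.5ms
https://172.19.2.50:2379 is healthy: successfully committed proposal: took = 2.1ms
https://172.19.2.51:2379 is healthy: successfully committed proposal: took = 1.9ms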

3. Deploy the kubectl command-line tool

Run on all of 172.19.2.49, 172.19.2.50, and 172.19.2.51:

vim /root/local/bin/environment.sh

# replace with the IP of any kubernetes cluster master machine
export MASTER_IP=172.19.2.49
export KUBE_APISERVER="https://${MASTER_IP}:6443"

source /root/local/bin/environment.sh

Run on 172.19.2.49:

wget https://dl.k8s.io/v1.6.2/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /root/local/bin/
chmod a+x /root/local/bin/kube*
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
mv admin*.pem /etc/kubernetes/ssl/
rm admin.csr admin-csr.json
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
# set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
# set context parameters
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
# set the default context
kubectl config use-context kubernetes
cat ~/.kube/config
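
The kube-apiserver is not deployed until section 5, so kubectl cannot reach the cluster yet; the generated kubeconfig itself can still be inspected:

kubectl config view
kubectl config current-context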

Run on 172.19.2.50 and 172.19.2.51 (the two scp commands are issued from 172.19.2.49):

# from 172.19.2.49, copy the client tarball to the other two machines
scp kubernetes-client-linux-amd64.tar.gz app@172.19.2.50:/home/app
scp kubernetes-client-linux-amd64.tar.gz app@172.19.2.51:/home/app
# then, on 172.19.2.50 and 172.19.2.51
mv /home/app/kubernetes-client-linux-amd64.tar.gz /home/lvqingshan
chown root:root kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /root/local/bin/
chmod a+x /root/local/bin/kube*
mkdir ~/.kube/

Run on 172.19.2.49:

scp ~/.kube/config root@172.19.2.50:/home/app
scp ~/.kube/config root@172.19.2.51:/home/app

Run on 172.19.2.50 and 172.19.2.51:

mv /home/app/config ~/.kube/
chown root:root ~/.kube/config
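
A quick check on each node that the copied admin kubeconfig is picked up; the current context should be the one set on the master:

kubectl config current-context

Expected output: kubernetes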

4. Deploy the Flannel network

Run on 172.19.2.49:

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
ls flanneld*
flanneld.csr  flanneld-csr.json  flanneld-key.pem  flanneld.pem
scp flanneld* root@172.19.2.50:/home/app/
scp flanneld* root@172.19.2.51:/home/app/
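
The generated certificate can be inspected with cfssl-certinfo before distribution (prints the CN, usages, and validity):

cfssl-certinfo -cert flanneld.pem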

Run on 172.19.2.50 and 172.19.2.51:

mv /home/app/flanneld* .

Run on all of 172.19.2.49, 172.19.2.50, and 172.19.2.51:

mkdir -p /etc/flanneld/ssl
mv flanneld*.pem /etc/flanneld/ssl
rm flanneld.csr flanneld-csr.json

Run on 172.19.2.49 once (the etcd network configuration is written only once, on the master; the other nodes do not run it):

# write the flannel network configuration to etcd (once for the whole cluster)
/root/local/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.7.1/flannel-v0.7.1-linux-amd64.tar.gz
tar -xzvf flannel-v0.7.1-linux-amd64.tar.gz -C flannel
cp flannel/{flanneld,mk-docker-opts.sh} /root/local/bin
cat > flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/root/local/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/flanneld/ssl/flanneld.pem \\
  -etcd-keyfile=/etc/flanneld/ssl/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX}
ExecStartPost=/root/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
cp flanneld.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
journalctl -u flanneld | grep 'Lease acquired'
ifconfig flannel.1

# check the cluster Pod CIDR (/16)
/root/local/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config

Expected output:
{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

# list the allocated Pod subnets (/24)
/root/local/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets

Expected output:
/kubernetes/network/subnets/172.30.27.0-24

# show the flanneld listening IP and network parameters for a given Pod subnet
/root/local/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.27.0-24

Expected output:
{"PublicIP":"172.19.2.49","BackendType":"vxlan","BackendData":{"VtepMAC":"9a:7b:7e:6a:2e:0b"}}
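
mk-docker-opts.sh writes docker networking options derived from the acquired flannel lease into /run/flannel/docker; its contents should look roughly like this (the bip and mtu values depend on the lease, and the exact formatting may differ by flannel version):

cat /run/flannel/docker
DOCKER_NETWORK_OPTIONS=" --bip=172.30.27.1/24 --ip-masq=true --mtu=1450"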

Run on 172.19.2.49:

/root/local/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets

Expected output:

/kubernetes/network/subnets/172.30.27.0-24
/kubernetes/network/subnets/172.30.22.0-24
/kubernetes/network/subnets/172.30.38.0-24

Ping each of the addresses below from the other machines; note that a host pinging its own flannel address will not get a response.

172.30.27.1
172.30.22.1
172.30.38.1
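
A small loop makes the cross-node check quicker (run it on each machine; the ping to the machine's own flannel address is expected to fail):

for ip in 172.30.27.1 172.30.22.1 172.30.38.1; do ping -c 3 ${ip}; done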

5. Deploy the master node

The kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager.

Run on 172.19.2.49:

wget https://github.com/kubernetes/kubernetes/releases/download/v1.6.2/kubernetes.tar.gz
tar -xzvf kubernetes.tar.gz
cd kubernetes
./cluster/get-kube-binaries.sh
wget https://dl.k8s.io/v1.6.2/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /root/local/bin/
cd ../..
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "${MASTER_IP}",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
mkdir -p /etc/kubernetes/ssl/
mv kubernetes*.pem /etc/kubernetes/ssl/
rm kubernetes.csr kubernetes-csr.json
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
mv token.csv /etc/kubernetes/
cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/root/local/bin/kube-apiserver \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --advertise-address=${MASTER_IP} \\
  --bind-address=${MASTER_IP} \\
  --insecure-bind-address=${MASTER_IP} \\
  --authorization-mode=RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
  --kubelet-https=true \\
  --experimental-bootstrap-token-auth \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/lib/audit.log \\
  --event-ttl=1h \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
cp kube-apiserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/root/local/bin/kube-controller-manager \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_IP}:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
cp kube-controller-manager.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/root/local/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_IP}:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
cp kube-scheduler.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

Verify the health of the master components:

kubectl get componentstatuses
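
Healthy output resembles the following (one etcd line per member):

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}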

6. Deploy the Node nodes

A kubernetes Node runs the following components: flanneld, docker, kubelet, and kube-proxy.

Run on all of 172.19.2.49, 172.19.2.50, and 172.19.2.51:

# install docker
yum install docker-ce
rpm -qa | grep docker
docker-ce-selinux-17.03.1.ce-1.el7.centos.noarch
docker-ce-17.03.1.ce-1.el7.centos.x86_64
# edit the docker unit file and add the flannel environment file to the [Service] section
vim /etc/systemd/system/docker.service
[Service]
Type=notify
Environment=GOTRACEBACK=crash
EnvironmentFile=-/run/flannel/docker
systemctl daemon-reload
systemctl enable docker
systemctl start docker
docker version
# bind the bootstrap user to the node-bootstrapper role (cluster-wide; required for kubelet TLS bootstrapping)
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
wget https://dl.k8s.io/v1.6.2/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz
cp -r ./server/bin/{kube-proxy,kubelet} /root/local/bin/
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
mv bootstrap.kubeconfig /etc/kubernetes/
mkdir /var/lib/kubelet
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/root/local/bin/kubelet \\
  --address=${NODE_IP} \\
  --hostname-override=${NODE_IP} \\
  --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --require-kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster_dns=${CLUSTER_DNS_SVC_IP} \\
  --cluster_domain=${CLUSTER_DNS_DOMAIN} \\
  --hairpin-mode promiscuous-bridge \\
  --allow-privileged=true \\
  --serialize-image-pulls=false \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
cp kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
journalctl -e -u kubelet
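
After docker restarts with the flannel environment file, the docker0 bridge should sit inside this node's flannel /24; a quick check (the exact subnet depends on the lease):

ip addr show docker0
cat /run/flannel/docker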

Run on 172.19.2.49:

kubectl get csr
# approve using the NAME of the kubelet's CSR shown above; this admits the node into the cluster
kubectl certificate approve csr-1w6sj
kubectl get csr
kubectl get nodes
ls -l /etc/kubernetes/kubelet.kubeconfig
ls -l /etc/kubernetes/ssl/kubelet*
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
cp kube-proxy*.pem /etc/kubernetes/ssl/
rm kube-proxy.csr kube-proxy-csr.json
scp kube-proxy*.pem root@172.19.2.50:/home/lvqingshan
scp kube-proxy*.pem root@172.19.2.51:/home/lvqingshan
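
Once each node's CSR has been approved, the kubelets register and kubectl get nodes shows one line per node; because of --hostname-override the names are the NODE_IP values, for example:

NAME          STATUS    AGE       VERSION
172.19.2.49   Ready     1m        v1.6.2
172.19.2.50   Ready     1m        v1.6.2
172.19.2.51   Ready     1m        v1.6.2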

Run on 172.19.2.50 and 172.19.2.51:

mv /home/lvqingshan/kube-proxy*.pem /etc/kubernetes/ssl/

Run on all of 172.19.2.49, 172.19.2.50, and 172.19.2.51:
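
A minimal sketch of the kube-proxy setup this step would cover, following the same conventions as the kubelet unit above (the kubeconfig name, working directory, and flags here are illustrative, not taken from the original):

# build a kubeconfig around the kube-proxy certificate distributed above
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
mv kube-proxy.kubeconfig /etc/kubernetes/
mkdir -p /var/lib/kube-proxy
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/root/local/bin/kube-proxy \\
  --bind-address=${NODE_IP} \\
  --hostname-override=${NODE_IP} \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
cp kube-proxy.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy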
