Deploying Kubernetes 1.10.1

Published: 2023-09-06 01:50 | Editor: 林大明 | Keywords: kubernetes
Kubernetes components

Master components:

  • kube-apiserver: the Kubernetes API and the unified entry point of the cluster; it coordinates the other components and exposes its services over an HTTP API. All create/update/delete and watch operations on resource objects go through the API Server, which processes them before handing them on.
  • kube-controller-manager: handles routine background tasks in the cluster. Each resource has a corresponding controller, and the controller manager is responsible for managing those controllers.
  • kube-scheduler: selects a Node for each newly created Pod according to the scheduling algorithm.

Node components:

  • kubelet: the Master's agent on each Node. It manages the lifecycle of the containers running on that machine: creating containers, mounting Pod volumes, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
  • kube-proxy: implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.
  • docker or rocket/rkt: runs the containers.

Third-party services:

  • etcd: a distributed key-value store used to persist cluster state, such as Pod and Service object data.

K8S deployment

1. Plan the environment
2. Install Docker
3. Generate self-signed TLS certificates
4. Deploy the etcd cluster
5. Deploy the Flannel network
6. Create the Node kubeconfig files
7. Get the K8S binary package
8. Run the Master components
9. Run the Node components
10. Check cluster status


1 Environment planning

master: 192.168.1.107 (etcd01)
node1:  192.168.1.111 (etcd02)
node2:  192.168.1.14  (etcd03)

Service network: 10.10.10.0/24 (cluster DNS 10.10.10.2); Pod network: 172.17.0.0/16 (Flannel, vxlan backend).


2 Deploying Docker

On node1 and node2:

mkdir /data/docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum -y install docker-ce
docker version
systemctl enable docker.service
systemctl start docker.service
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{"graph": "/data/docker"}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
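A malformed /etc/docker/daemon.json will stop dockerd from starting after the restart, so it can be worth validating the file as JSON first. A minimal sketch, written against a scratch path (/tmp/docker-cfg) so the real config is untouched; it assumes python3 is available (on stock CentOS 7 the interpreter may be `python` instead):

```shell
# Write the same daemon config to a scratch location and validate it as JSON
# before copying it to /etc/docker/daemon.json and restarting Docker.
mkdir -p /tmp/docker-cfg
tee /tmp/docker-cfg/daemon.json <<-'EOF'
{"graph": "/data/docker"}
EOF
python3 -m json.tool /tmp/docker-cfg/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```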

3 Self-signed TLS certificates


On k8s-master, install the certificate tool cfssl:

mkdir /data/ssl -p
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
cd /data/ssl/

Create certificate.sh:

vim certificate.sh

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.1.107",
    "192.168.1.111",
    "192.168.1.14",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Change the following lines (the host IPs in server-csr.json, lines 50-52 of the script) to match your environment, then run the script:

50      "192.168.1.107",
51      "192.168.1.111",
52      "192.168.1.14",

The generated certificates:

admin.csr       admin-csr.json  admin.pem       admin-key.pem
ca.csr          ca-csr.json     ca-config.json  ca.pem          ca-key.pem
kube-proxy.csr  kube-proxy-csr.json  kube-proxy.pem  kube-proxy-key.pem
server.csr      server-csr.json      server.pem      server-key.pem
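To confirm that the generated server.pem really carries the hosts listed in server-csr.json, print its subjectAltName extension with openssl (cfssl-certinfo -cert server.pem works too). The sketch below uses a throwaway self-signed certificate as a stand-in for server.pem so it can be tried anywhere; the -addext flag assumes OpenSSL 1.1.1 or newer:

```shell
# Generate a throwaway self-signed cert with a SAN list (stand-in for server.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:127.0.0.1,DNS:kubernetes,DNS:kubernetes.default.svc.cluster.local"

# Print the IP and DNS entries embedded in the cert; for the real deployment,
# point -in at /opt/kubernetes/ssl/server.pem instead.
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```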

4 Deploying etcd

Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12

On all 3 nodes:

mkdir /data/etcd/
cd /data/etcd/
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
cd etcd-v3.2.12-linux-amd64/
cp etcd etcdctl /opt/kubernetes/bin/
cd /data/ssl
cp ca*pem server*pem /opt/kubernetes/ssl/
scp -r /opt/kubernetes/* 192.168.1.111:/opt/kubernetes
scp -r /opt/kubernetes/* 192.168.1.14:/opt/kubernetes
cd /data/etcd
vim etcd.sh

#!/bin/bash
ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

chmod +x etcd.sh

master:
./etcd.sh etcd01 192.168.1.107 etcd01=https://192.168.1.107:2380,etcd02=https://192.168.1.111:2380,etcd03=https://192.168.1.14:2380
node1:
./etcd.sh etcd02 192.168.1.111 etcd01=https://192.168.1.107:2380,etcd02=https://192.168.1.111:2380,etcd03=https://192.168.1.14:2380
node2:
./etcd.sh etcd03 192.168.1.14 etcd01=https://192.168.1.107:2380,etcd02=https://192.168.1.111:2380,etcd03=https://192.168.1.14:2380

tailf /var/log/messages
ps -ef | grep etcd
Check the cluster status:

cd /opt/kubernetes/ssl
/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379" cluster-health

member 21ab2aeb56731588 is healthy: got healthy result from https://192.168.1.111:2379
member 5997140dfeb3820d is healthy: got healthy result from https://192.168.1.14:2379
member 9a57e056c2e030b8 is healthy: got healthy result from https://192.168.1.107:2379
cluster is healthy

5 Deploying the Flannel network

Write the allocated subnet range into etcd for flanneld to use:

cd /opt/kubernetes/ssl
/opt/kubernetes/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379" \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Download the binary package:

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

Configure flanneld (on all three nodes):

mkdir /data/flanneld
cd /data/flanneld
tar xf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

vim flanneld.sh

#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

chmod +x flanneld.sh
./flanneld.sh https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379

cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.1.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.1.1/24 --ip-masq=false --mtu=1450"
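The hand-off between flanneld and Docker above goes through /run/flannel/subnet.env: mk-docker-opts.sh writes plain KEY=value lines, and docker.service pulls them in via EnvironmentFile so dockerd starts with the Flannel-assigned bridge settings. A self-contained sketch of that mechanism, using the sample values printed above:

```shell
# Reproduce the subnet.env hand-off with the sample values shown above
# (written to /tmp so the real /run/flannel/subnet.env is untouched).
cat > /tmp/subnet.env <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.1.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.1.1/24 --ip-masq=false --mtu=1450"
EOF

# Sourcing the file is equivalent to what systemd's EnvironmentFile does;
# dockerd then receives these flags through $DOCKER_NETWORK_OPTIONS.
. /tmp/subnet.env
echo "dockerd will get:${DOCKER_NETWORK_OPTIONS}"
```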

Check the configuration:

cd /opt/kubernetes/ssl
/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379" \
  ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.1.0-24
/coreos.com/network/subnets/172.17.66.0-24
/coreos.com/network/subnets/172.17.87.0-24

/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379" \
  get /coreos.com/network/subnets/172.17.1.0-24
{"PublicIP":"192.168.1.107","BackendType":"vxlan","BackendData":{"VtepMAC":"4a:e5:53:6d:4a:66"}}

netstat -antp | grep flanneld
tcp        0      0 192.168.1.107:1618      192.168.1.14:2379       ESTABLISHED 1760/flanneld
tcp        0      0 192.168.1.107:1620      192.168.1.14:2379       ESTABLISHED 1760/flanneld
tcp        0      0 192.168.1.107:1616      192.168.1.14:2379       ESTABLISHED 1760/flanneld

6 Creating the Node kubeconfig files

On the master node:

  • Create the TLS Bootstrapping Token
  • Create the kubelet kubeconfig
  • Create the kube-proxy kubeconfig
cd /data/ssl/
vim kubeconfig.sh          # change the IP on line 10

# Create the TLS Bootstrapping Token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://192.168.1.107:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

## The kubectl binary is inside kubernetes-server-linux-amd64.tar.gz, downloadable
## from the official site; the deployment steps below also need this package.
mv kubectl /usr/bin/
chmod +x /usr/bin/kubectl

sh kubeconfig.sh

kubeconfig.sh   kube-proxy-csr.json  kube-proxy.kubeconfig
kube-proxy.csr  kube-proxy-key.pem   kube-proxy.pem        bootstrap.kubeconfig
scp *kubeconfig root@192.168.1.111:/opt/kubernetes/cfg
scp *kubeconfig root@192.168.1.14:/opt/kubernetes/cfg
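The BOOTSTRAP_TOKEN used in kubeconfig.sh is nothing exotic: 16 random bytes rendered as 32 hex characters, which must match the first field of the token.csv that the apiserver later reads via --token-auth-file. A standalone sketch of that generation step (writing to /tmp/token.csv instead of the real file):

```shell
# Generate a bootstrap token exactly as kubeconfig.sh does: 16 random bytes,
# hex-encoded by od, spaces stripped -> 32 hex characters.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token:  ${BOOTSTRAP_TOKEN}"
echo "length: $(printf %s "${BOOTSTRAP_TOKEN}" | wc -c)"

# The same token becomes the first field of token.csv; user name, UID, and
# group must agree with the clusterrolebinding created later on the master.
printf '%s,kubelet-bootstrap,10001,"system:kubelet-bootstrap"\n' "${BOOTSTRAP_TOKEN}" > /tmp/token.csv
cat /tmp/token.csv
```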

7 Getting the K8S binary package

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1101
kubernetes-server-linux-amd64.tar.gz

Binaries needed on the master:

  • kubectl
  • kube-scheduler
  • kube-apiserver
  • kube-controller-manager

Binaries needed on the nodes:

  • kubelet
  • kube-proxy
vim apiserver.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"192.168.1.107"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=${ETCD_SERVERS} \
--insecure-bind-address=127.0.0.1 \
--bind-address=${MASTER_ADDRESS} \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=${MASTER_ADDRESS} \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
vim controller-manager.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=${MASTER_ADDRESS}:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
vim scheduler.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=${MASTER_ADDRESS}:8080 \
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

8 Running the Master components

mv kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin
chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
cp ssl/token.csv /opt/kubernetes/cfg/
ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem

./apiserver.sh 192.168.1.107 https://192.168.1.107:2379,https://192.168.1.111:2379,https://192.168.1.14:2379
./scheduler.sh 127.0.0.1
./controller-manager.sh 127.0.0.1

echo "export PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
source /etc/profile

Create the bootstrap user binding:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
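For reference, the imperative command above builds a ClusterRoleBinding object equivalent to the following YAML (a sketch; it could equally be applied with kubectl apply -f). It lets the kubelet-bootstrap user from token.csv submit certificate signing requests:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper   # built-in role that allows CSR submission
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap          # the user named in token.csv
```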

Check:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

9 Running the Node components

vim kubelet.sh

#!/bin/bash
NODE_ADDRESS=${1:-"192.168.1.111"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--address=${NODE_ADDRESS} \
--hostname-override=${NODE_ADDRESS} \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=${DNS_SERVER_IP} \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
vim proxy.sh

#!/bin/bash
NODE_ADDRESS=${1:-"192.168.1.111"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=${NODE_ADDRESS} --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

Repeat these steps on node1 and node2:

node1:
mv kubelet kube-proxy /opt/kubernetes/bin
chmod +x /opt/kubernetes/bin/* && chmod +x *.sh
./kubelet.sh 192.168.1.111 10.10.10.2
./proxy.sh 192.168.1.111

node2:
./kubelet.sh 192.168.1.14 10.10.10.2
./proxy.sh 192.168.1.14

On the master, approve the node CSRs:

kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY   2m        kubelet-bootstrap   Pending

kubectl certificate approve node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY
certificatesigningrequest "node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY" approved

kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-OBBWrBrJEDjmG2Cnu62ZGfRPfElYXbzrBOdwZoNP9GY   3m        kubelet-bootstrap   Approved,Issued

kubectl get node
NAME            STATUS     ROLES     AGE       VERSION
192.168.1.111   Ready      <none>    11m       v1.10.1
192.168.1.14    NotReady   <none>    8s        v1.10.1

10 Checking cluster status

kubectl get componentstatus
kubectl get node


Original article: http://blog.51cto.com/hequan/2106618
