Kubernetes + Etcd v1.7.0 + CA: distributed cluster deployment

Kubernetes 1.7.0 + flannel, deployed from binaries: kube-apiserver, kube-controller-manager and kube-scheduler are run locally from the binary files.

(1). Environment

k8s-master-1: 192.168.54.12
k8s-node1:    192.168.54.13
k8s-node2:    192.168.54.14

(2). Initialize the environment

# Set the hostname on each machine:
hostnamectl --static set-hostname <hostname>

192.168.54.12 - k8s-master-1
192.168.54.13 - k8s-node1
192.168.54.14 - k8s-node2

# Edit /etc/hosts so the hosts can reach each other by hostname
vi /etc/hosts
192.168.54.12 k8s-master-1
192.168.54.13 k8s-node1
192.168.54.14 k8s-node2

Creating the certificates

CloudFlare's PKI toolkit cfssl is used here to generate the Certificate Authority (CA) certificate and key files.

(1). Install cfssl

mkdir -p /opt/local/cfssl
cd /opt/local/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
chmod +x *

(2). Create the CA certificate configuration

mkdir /opt/ssl
cd /opt/ssl
/opt/local/cfssl/cfssl print-defaults config > config.json
/opt/local/cfssl/cfssl print-defaults csr > csr.json

# config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

# csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

(3). Generate the CA certificate and private key

cd /opt/ssl/
/opt/local/cfssl/cfssl gencert -initca csr.json | /opt/local/cfssl/cfssljson -bare ca

[root@k8s-master-1 ssl]# ls -lt
total 20
-rw-r--r-- 1 root root 1005 Jul 3 17:26 ca.csr
-rw------- 1 root root 1675 Jul 3 17:26 ca-key.pem
-rw-r--r-- 1 root root 1363 Jul 3 17:26 ca.pem
-rw-r--r-- 1 root root  210 Jul 3 17:24 csr.json
-rw-r--r-- 1 root root  292 Jul 3 17:23 config.json

(4). Distribute the certificates

# Create the certificate directory
mkdir -p /etc/kubernetes/ssl
# Copy all the files into it
cp * /etc/kubernetes/ssl
# The files must be copied to every k8s machine
scp * 192.168.54.13:/etc/kubernetes/ssl/
scp * 192.168.54.14:/etc/kubernetes/ssl/

The etcd cluster

etcd is a foundational component of the k8s cluster; for this setup mutual TLS authentication for etcd is not considered necessary.

(1). Install etcd

yum -y install etcd3

(2). Modify the etcd configuration

# etcd-1
# Edit the configuration file /etc/etcd/etcd.conf and set the following parameters:
mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf-bak
vi /etc/etcd/etcd.conf
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.54.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.54.12:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.54.12:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.54.12:2379"

# etcd-2
# Edit the configuration file /etc/etcd/etcd.conf and set the following parameters:
mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf-bak
vi /etc/etcd/etcd.conf
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.54.13:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.54.13:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.54.13:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.54.13:2379"

# etcd-3
# Edit the configuration file /etc/etcd/etcd.conf and set the following parameters:
mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf-bak
vi /etc/etcd/etcd.conf
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/etcd3.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.54.14:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.54.14:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.54.14:2379"
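The three etcd.conf files above differ only in the node name and host IP. As a minimal sketch (not part of the original walkthrough) assuming the same three name/IP pairs, the file can be rendered from a small script run on each node with that node's own name and IP, which reduces copy-paste mistakes:

#!/bin/bash
# Sketch: render /etc/etcd/etcd.conf for one node of the 3-node cluster above.
# Usage: ./gen-etcd-conf.sh etcd1 192.168.54.12
# Note: this overwrites /etc/etcd/etcd.conf, so back it up first as in the steps above.
NAME=$1   # etcd1 / etcd2 / etcd3
IP=$2     # this node's IP
CLUSTER="etcd1=http://192.168.54.12:2380,etcd2=http://192.168.54.13:2380,etcd3=http://192.168.54.14:2380"

cat > /etc/etcd/etcd.conf <<EOF
ETCD_NAME=${NAME}
ETCD_DATA_DIR="/var/lib/etcd/${NAME}.etcd"
ETCD_LISTEN_PEER_URLS="http://${IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${IP}:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${IP}:2380"
ETCD_INITIAL_CLUSTER="${CLUSTER}"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://${IP}:2379"
EOF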
Next, modify the etcd startup file /usr/lib/systemd/system/etcd.service:

sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service

(3). Start etcd

Start the etcd service on every node:

systemctl enable etcd
systemctl start etcd
systemctl status etcd

(4). Verify the etcd cluster state

Check the cluster health ("cluster is healthy" means success):

etcdctl cluster-health
member 4b622f1d4543c5f7 is healthy: got healthy result from http://192.168.54.13:2379
member 647542be2d7fdef3 is healthy: got healthy result from http://192.168.54.12:2379
member 83464a62a714c625 is healthy: got healthy result from http://192.168.54.14:2379

List the cluster members:

etcdctl member list

The flannel network

(1). Install flannel

Since this runs on an internal network, SSL authentication is not used for flannel; it is installed directly:

yum -y install flannel

Clean up any leftover docker networks (docker0, flannel0, and so on). Check with ifconfig and, if they exist, delete them to avoid unnecessary and hard-to-diagnose errors:

ip link delete docker0
....

(2). Configure flannel

Set the IP range flannel will use:

etcdctl --endpoint http://192.168.54.12:2379 set /flannel/network/config '{"Network":"10.233.0.0/16","SubnetLen":25,"Backend":{"Type":"vxlan","VNI":1}}'

Next, modify the flannel configuration file:

vim /etc/sysconfig/flanneld

# Old versions:
FLANNEL_ETCD="http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379"   # set to the etcd cluster addresses
FLANNEL_ETCD_KEY="/flannel/network/config"                                                     # set to match the /flannel/network config imported above
FLANNEL_OPTIONS="--iface=em1"                                                                  # set to this host's physical NIC name

# New versions:
FLANNEL_ETCD="http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379"   # set to the etcd cluster addresses
FLANNEL_ETCD_PREFIX="/flannel/network"                                                         # set to match the /flannel/network config imported above
FLANNEL_OPTIONS="--iface=em1"                                                                  # set to this host's physical NIC name

(3). Start flannel

systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

Installing docker

# Import the yum repository
# Install yum-config-manager
yum -y install yum-utils
# Add the repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Refresh the repo metadata
yum makecache
# Install
yum install docker-ce

(1). Change the docker configuration

# Edit the unit file
vi /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS $DOCKER_DNS_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

# Additional options
cat >> /usr/lib/systemd/system/docker.service.d/docker-options.conf << EOF
[Service]
Environment="DOCKER_OPTS=--insecure-registry=10.254.0.0/16 --graph=/opt/docker --registry-mirror=http://b438f72b.m.daocloud.io"
EOF

# Reload the configuration and start docker
systemctl daemon-reload
systemctl start docker

(2). Check the docker network

ifconfig

docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 10.233.19.1 netmask 255.255.255.128 broadcast 0.0.0.0
        ether 02:42:c1:2c:c5:be txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.54.12 netmask 255.255.255.0 broadcast 10.6.0.255
        inet6 fe80::d6ae:52ff:fed1:f0c9 prefixlen 64 scopeid 0x20<link>
        ether d4:ae:52:d1:f0:c9 txqueuelen 1000 (Ethernet)
        RX packets 16286600 bytes 1741928233 (1.6 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 15841272 bytes 1566357399 (1.4 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
        inet 10.233.19.0 netmask 255.255.255.255 broadcast 0.0.0.0
        inet6 fe80::d9:e2ff:fe46:9cdd prefixlen 64 scopeid 0x20<link>
        ether 02:d9:e2:46:9c:dd txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 26 overruns 0 carrier 0 collisions 0
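The point of the output above is that docker0 sits inside the /25 subnet flannel leased to this host (10.233.19.0/25 here). A small sketch of the same check, assuming flanneld writes its subnet file at the usual /run/flannel/subnet.env location (not a step from the original walkthrough):

#!/bin/bash
# Sketch: confirm docker0 landed inside the subnet flannel leased to this host.
source /run/flannel/subnet.env    # defines FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU
echo "flannel subnet for this host: ${FLANNEL_SUBNET}"
ip -4 addr show flannel.1 | awk '/inet /{print "flannel.1 address:           " $2}'
ip -4 addr show docker0   | awk '/inet /{print "docker0 address:             " $2}'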
Installing the kubectl tool

(1). On the master

# Install kubectl first
wget https://dl.k8s.io/v1.7.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/* /usr/local/bin/
chmod a+x /usr/local/bin/kube*

# Verify the installation
kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

(2). Create the admin certificate

kubectl talks to kube-apiserver's secure port, so a TLS certificate and key are required for this secure communication.

cd /opt/ssl/
vi admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Generate the admin certificate and private key
cd /opt/ssl/
/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/config.json -profile=kubernetes admin-csr.json | /opt/local/cfssl/cfssljson -bare admin

# Check the generated files
[root@k8s-master-1 ssl]# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

cp admin*.pem /etc/kubernetes/ssl/

(3). Configure the kubectl kubeconfig file

# Configure the kubernetes cluster
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.54.12:6443
# Configure client authentication
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
kubectl config use-context kubernetes

(4). Distribute the kubectl config file

# Distribute the kubeconfig configured above to the other machines
# Create the directory on the other servers first
mkdir /root/.kube
scp /root/.kube/config 192.168.54.13:/root/.kube/
scp /root/.kube/config 192.168.54.14:/root/.kube/

Deploying the Kubernetes master node

The master needs three components: kube-apiserver, kube-scheduler and kube-controller-manager.

(1). Install the components

# Download the release from github
cd /tmp
wget https://dl.k8s.io/v1.7.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

(2). Create the kubernetes certificate

cd /opt/ssl
vi kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.54.12",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

## The three IPs in the hosts field are: 127.0.0.1 (this host), 192.168.54.12 (the master's IP) and 10.254.0.1 (the kubernetes service IP, normally the first IP of the service network); once the cluster is up you can see it with kubectl get svc.

(3). Generate the kubernetes certificate and private key

/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/config.json -profile=kubernetes kubernetes-csr.json | /opt/local/cfssl/cfssljson -bare kubernetes

# Check the generated files
[root@k8s-master-1 ssl]# ls -lt kubernetes*
-rw-r--r-- 1 root root 1245 Jul 4 11:25 kubernetes.csr
-rw------- 1 root root 1679 Jul 4 11:25 kubernetes-key.pem
-rw-r--r-- 1 root root 1619 Jul 4 11:25 kubernetes.pem
-rw-r--r-- 1 root root  436 Jul 4 11:23 kubernetes-csr.json

# Copy them into place
cp -r kubernetes* /etc/kubernetes/ssl/

(4). Configure kube-apiserver

When kubelet starts for the first time it sends a TLS bootstrapping request to kube-apiserver, which checks whether the token in the request matches its configured token; if it does, kube-apiserver automatically generates a certificate and key for the kubelet.

# Generate a token
[root@k8s-master-1 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
11849e4f70904706ab3e631e70e6af0d

# Create the token.csv file
cd /opt/ssl
vi token.csv
11849e4f70904706ab3e631e70e6af0d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Copy it into place
cp token.csv /etc/kubernetes/
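The two manual steps above (generate a token, then paste it into token.csv) can also be done in one go. A minimal sketch, equivalent to the steps above except that the token value will differ from the sample shown:

# Sketch: generate a fresh bootstrap token and write token.csv in one step
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/token.csv
echo "bootstrap token: ${BOOTSTRAP_TOKEN}"

Whatever value ends up in token.csv must be reused later as the --token argument when creating bootstrap.kubeconfig on the nodes.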
bootstrap"#拷贝cptoken.csv/etc/kubernetes/(4).创建kube-apiserver.service文件一、开启了RBAC#自定义系统service文件一般存于/etc/systemd/system/下vi/etc/systemd/system/kube-apiserver.service[Unit]Description=kubernetesAPIServerDocumentation=https://github.com/GoogleCloudPlatform/kubernetesAfter=network.target[Service]User=rootExecStart=/usr/local/bin/kube-apiserver--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota--advertise-address=192.168.54.12--allow-privileged=true--apiserver-count=3--audit-log-maxage=30--audit-log-maxbackup=3--audit-log-maxsize=100--audit-log-path=/var/lib/audit.log--authorization-mode=RBAC--bind-address=192.168.54.12--client-ca-file=/etc/kubernetes/ssl/ca.pem--enable-swagger-ui=true--etcd-cafile=/etc/kubernetes/ssl/ca.pem--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem--etcd-servers=http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379--event-ttl=1h--kubelet-https=true--insecure-bind-address=192.168.54.12--runtime-config=rbac.authorization.k8s.io/v1alpha1--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem--service-cluster-ip-range=10.254.0.0/16--service-node-port-range=30000-32000--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem--experimental-bootstrap-token-auth--token-auth-file=/etc/kubernetes/token.csv--v=2Restart=on-failureRestartSec=5Type=notifyLimitNOFILE=65536[Install]WantedBy=multi-user.target二、关闭了RBAC#自定义系统service文件一般存于/etc/systemd/system/下vi/etc/systemd/system/kube-apiserver.service[Unit]Description=kubernetesAPIServerDocumentation=https://github.com/GoogleCloudPlatform/kubernetesAfter=network.target[Service]User=rootExecStart=/usr/local/bin/kube-apiserver--storage-backend=etcd2--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota--advertise-address=192.168.54.12--allow-privileged=true--apiserver-count=3--audit-log-maxage=30--audit-log-maxbackup=3--audit-log-maxsize=100--audit-log-path=/var/lib/audit.log--bind-address=192.168.54.12--client-ca-file=/etc/kubernetes/ssl/ca.pem--enable-swagger-ui=true--etcd-cafile=/etc/kubernetes/ssl/ca.pem--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem--etcd-servers=http://192.168.54.12:2379,http://192.168.54.13:2379,http://192.168.54.14:2379--event-ttl=1h--kubelet-https=true--insecure-bind-address=192.168.54.12--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem--service-cluster-ip-range=10.254.0.0/16--service-node-port-range=30000-32000--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem--experimental-bootstrap-token-auth--token-auth-file=/etc/kubernetes/token.csv--v=2Restart=on-failureRestartSec=5Type=notifyLimitNOFILE=65536[Install]WantedBy=multi-user.target#这里面要注意的是--service-node-port-range=30000-32000#这个地方是映射外部端口时的端口范围,随机映射也在这个范围内映射,指定映射端口必须也在这个范围内。(5).启动kube-apiserversystemctldaemon-reloadsystemctlenablekube-apiserversystemctlstartkube-apiserversystemctlstatuskube-apiserver--storage-backend=etcd2(6).配置kube-controller-manager#创建kube-controller-manager.service文件vi/etc/systemd/system/kube-controller-manager.service[Unit]Description=kubernetesControllerManagerDocumentation=https://github.com/GoogleCloudPlatform/kubernetes[Service]ExecStart=/usr/local/bin/kube-controller-manager--address=127.0.0.1--master=http://192.168.54.12:8080--allocate-node-cidrs=true--service-cluster-ip-range=10.254.0.0/16--
(7). Configure kube-controller-manager

# Create the kube-controller-manager.service file
vi /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://192.168.54.12:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-cidr=10.233.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

(8). Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

(9). Configure kube-scheduler

# Create the kube-scheduler.service file
vi /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://192.168.54.12:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

(10). Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

(11). Verify the master node

[root@k8s-master-1 opt]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

Deploying the Kubernetes node (start with 192.168.54.13)

The node needs the following components: docker, flannel, kubectl, kubelet and kube-proxy.

(1). Configure kubectl

wget https://dl.k8s.io/v1.7.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/* /usr/local/bin/
chmod a+x /usr/local/bin/kube*

# Verify the installation
kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

(2). Configure kubelet

When kubelet starts it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests. A quick sanity check for this binding and the bootstrap token is sketched after step (4) below.

# Create the binding for certificate requests
# The user is the one configured in the master's token.csv file
# This only needs to be created once, from a single node
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

(3). Download the binaries

cd /tmp
wget https://dl.k8s.io/v1.7.0/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/

(4). Create the kubelet kubeconfig file

# Configure the cluster
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.54.12:6443 --kubeconfig=bootstrap.kubeconfig
# Configure client authentication
kubectl config set-credentials kubelet-bootstrap --token=11849e4f70904706ab3e631e70e6af0d --kubeconfig=bootstrap.kubeconfig
# Configure the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
# Use it as the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Move the generated bootstrap.kubeconfig file into place
mv bootstrap.kubeconfig /etc/kubernetes/
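A small sanity-check sketch for the two prerequisites of TLS bootstrapping described in step (2) (not from the original walkthrough): the kubelet-bootstrap ClusterRoleBinding must exist, and the token embedded in the node's bootstrap.kubeconfig must match the one in the master's token.csv.

# On any machine with the admin kubeconfig: confirm the binding exists
kubectl get clusterrolebinding kubelet-bootstrap

# On the node: the token recorded for the kubelet-bootstrap user
grep 'token:' /etc/kubernetes/bootstrap.kubeconfig

# On the master: the token kube-apiserver was started with
cut -d, -f1 /etc/kubernetes/token.csv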
(5). Create the kubelet.service file

# Create the kubelet working directory
mkdir /var/lib/kubelet

vi /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --address=192.168.54.13 \
  --hostname-override=192.168.54.13 \
  --pod-infra-container-image=jicki/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --require-kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --cluster_dns=10.254.0.2 \
  --cluster_domain=cluster.local. \
  --hairpin-mode promiscuous-bridge \
  --allow-privileged=true \
  --serialize-image-pulls=false \
  --logtostderr=true \
  --v=2
ExecStopPost=/sbin/iptables -A INPUT -s 10.0.0.0/8 -p tcp --dport 4194 -j ACCEPT
ExecStopPost=/sbin/iptables -A INPUT -s 172.16.0.0/12 -p
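The unit file above breaks off before its remaining ExecStopPost rules and [Install] section. As a hedged sketch (an assumed continuation, not part of the original text) of how the TLS bootstrap flow from step (2) typically completes once the unit file is finished and installed: start the kubelet on the node, then approve its certificate signing request from the master.

# On the node: start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet

# On the master: the bootstrapping kubelet shows up as a pending CSR; approve it
kubectl get csr
kubectl certificate approve <csr-name>   # <csr-name> is a placeholder taken from the "get csr" output
kubectl get nodes                        # the node registers once its certificate has been issued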
