Kubernetes Cluster Deployment (Part 1)

There are several ways to deploy a K8s cluster: kubeadm, minikube, and binary packages. The first two are automated approaches that simplify deployment, but the automation hides many details and leaves you with little feel for the individual components, which is bad for learning. We therefore strongly recommend that beginners deploy from binary packages, and that is the method this article uses to build the Kubernetes cluster.

1. Architecture Topology

2. Environment Planning

Role      IP               Hostname  Components
master1   192.168.161.161  master1   etcd1, master components
master2   192.168.161.162  master2   etcd2, master components
node1     192.168.161.163  node1     kubelet, kube-proxy, docker, flannel
node2     192.168.161.164  node2     kubelet, kube-proxy, docker, flannel
  1. kube-apiserver: runs on the master nodes and accepts user requests.
  2. kube-scheduler: runs on the master nodes and handles resource scheduling, i.e., decides which node each pod is created on.
  3. kube-controller-manager: runs on the master nodes and bundles controllers such as the ReplicationManager, EndpointsController, NamespaceController, and NodeController.
  4. etcd: a distributed key-value store holding the resource-object state shared by the entire cluster.
  5. kubelet: runs on the worker nodes and maintains the pods running on its particular host.
  6. kube-proxy: runs on the worker nodes and acts as a service proxy.

A quick refresher on how a deployment request flows through these components:

① kubectl sends the deployment request to the API Server.

② The API Server notifies the Controller Manager to create a Deployment resource.

③ The Scheduler performs scheduling and assigns the two replica Pods to k8s-node1 and k8s-node2.

④ The kubelet on k8s-node1 and k8s-node2 creates and runs the Pods on its own node.
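To make this flow concrete, here is a hypothetical request you could issue once the cluster built below is running; the my-nginx name and nginx image are illustrative, not part of this deployment. In the Kubernetes generation shipped with CentOS 7, kubectl run creates a Deployment, and the Scheduler spreads its two replicas across the nodes:

kubectl run my-nginx --image=nginx --replicas=2
kubectl get pods -o wide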

3. Cluster Deployment

  • OS: CentOS 7.3
  • Disable the firewall:
systemctl disable firewalld
systemctl stop firewalld
  • Disable SELinux (a typical command sequence is sketched after this list)
  • Install and start NTP:
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
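The article does not spell out the SELinux step; a minimal sketch for CentOS 7, assuming the stock /etc/selinux/config layout: setenforce 0 stops enforcement immediately, and the sed edit keeps SELinux disabled across reboots.

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config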

Set up /etc/hosts on all four machines:

vim /etc/hosts
192.168.161.161 master1
192.168.161.162 master2
192.168.161.163 node1
192.168.161.164 node2
192.168.161.161 etcd
192.168.161.162 etcd
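A quick check that the names resolve as intended (not in the original): note that with two etcd lines in /etc/hosts, the resolver typically returns the first match, 192.168.161.161.

getent hosts master1 master2 node1 node2
getent hosts etcd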
3.1 Deploy the master
Install etcd
[root@master1 ~]# yum -y install etcd
Configure etcd

The default configuration file for yum-installed etcd is /etc/etcd/etcd.conf. The configuration for both nodes is shown below; note the differences between them.

2379 is the default client port; port 4001 is added as a backup to guard against port conflicts.

master1:

[root@master1 ~]# vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/test.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://master1:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://master1:2380,etcd2=http://master2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-baby"
ETCD_ADVERTISE_CLIENT_URLS="http://master1:2379,http://master1:4001"

master2:

[root@master2 ~]# vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/test.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://master2:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://master1:2380,etcd2=http://master2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-baby"
ETCD_ADVERTISE_CLIENT_URLS="http://master2:2379,http://master2:4001"
Parameter descriptions:

name: node name.
data-dir: the node's data storage directory.
listen-peer-urls: listen URLs used to communicate with the other nodes.
listen-client-urls: client-facing service addresses, e.g. http://ip:2379,http://127.0.0.1:2379; clients connect here to talk to etcd.
initial-advertise-peer-urls: this node's peer listen address, advertised to the other nodes in the cluster.
initial-cluster: information about every node in the cluster, in the form node1=http://ip1:2380,node2=http://ip2:2380,... Note that node1 here is the name given by --name, and ip1:2380 is the value given by --initial-advertise-peer-urls.
initial-cluster-state: new when creating a new cluster; existing when joining a cluster that already exists.
initial-cluster-token: the cluster-creation token, which must be unique per cluster. If you recreate a cluster, even with a configuration identical to the old one, a fresh cluster and node UUIDs are generated; without a distinct token, multiple clusters could conflict and cause unpredictable errors.
advertise-client-urls: this node's client listen addresses, advertised to the other nodes in the cluster.
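Once etcd is running on both members (started in the next step), you can cross-check these settings against what the cluster actually advertises:

etcdctl -C http://etcd:2379 member list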

After making these changes, start the etcd service on each node and verify the cluster's health:

Master1

[root@master1 etcd]# systemctl start etcd
[root@master1 etcd]# etcdctl -C http://etcd:2379 cluster-health
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy
[root@master1 etcd]# etcdctl -C http://etcd:4001 cluster-health
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy
[root@master1 etcd]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Master2

[root@master2 etcd]# systemctl start etcd
[root@master2 etcd]# etcdctl -C http://etcd:2379 cluster-health
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy
[root@master2 etcd]# etcdctl -C http://etcd:4001 cluster-health
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy
[root@master2 etcd]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
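cluster-health only proves membership; a small read/write smoke test (the key name is hypothetical) shows data replicating between the two members:

etcdctl -C http://master1:2379 set /smoke/test hello
etcdctl -C http://master2:2379 get /smoke/test
etcdctl -C http://master1:2379 rm /smoke/test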

Deploy the master components

Install Docker, enable it at boot, and start the service

Install the Docker service on both master1 and master2:

[root@master1 etcd]# yum install docker -y
[root@master1 etcd]# chkconfig docker on
[root@master1 etcd]# systemctl start docker.service
Install Kubernetes
yum install kubernetes -y

On the master VMs, three components need to run: the Kubernetes API Server, the Kubernetes Controller Manager, and the Kubernetes Scheduler.

First edit the /etc/kubernetes/apiserver file:

[root@master1 kubernetes]# vim apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""

Next edit the /etc/kubernetes/config file (in the final line, set masterX:8080 to match the local machine, master1 or master2):

[root@master1 ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://master1:8080"

After editing, start the services and enable them at boot:

systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl enable kube-scheduler
systemctl start kube-scheduler
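Two quick checks (assuming the insecure API port configured above) confirm the control plane is up:

curl http://master1:8080/version
kubectl -s http://master1:8080 get componentstatuses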

3.2 Deploy the nodes

Install Docker, enable it at boot, and start the service
yum install docker -y
chkconfig docker on
systemctl start docker.service
Install Kubernetes
yum install kubernetes -y

On the node VMs, two components need to run: kubelet and kube-proxy.

First edit the /etc/kubernetes/config file (note: the address configured here is the etcd hostname, which resolves to one of the master1/master2 addresses):

[root@node1 ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://etcd:8080"

Next edit the /etc/kubernetes/kubelet file (note: set --hostname-override= to the name of the node machine being configured):

[root@node1 ~]# vim /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://etcd:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""

After editing, start the services and enable them at boot:

systemctl enable kubelet
systemctl start kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
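Before returning to the master, a quick sanity check that both node services came up (not shown in the original):

systemctl is-active kubelet kube-proxy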

Check the cluster status

On either master, list the cluster's nodes and their status:

[root@master1 kubernetes]# kubectl get node
NAME      STATUS    AGE
node1     Ready     1m
node2     Ready     1m

At this point a Kubernetes cluster is up, but it cannot yet do useful work, because pod networking across the cluster still needs to be managed uniformly.

Create the flannel overlay network

Run the following on every master and node to install flannel:

yum install flannel -y

Edit the /etc/sysconfig/flanneld file on every master and node:

[root@master1 kubernetes]# vim /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

flannel stores its configuration in etcd to keep the configuration consistent across all flannel instances, so create the following entry in etcd:

etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

(The key /atomic.io/network/config corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if the two do not match, flanneld will fail at startup.)
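You can read the key back to confirm flannel will find it:

etcdctl -C http://etcd:2379 get /atomic.io/network/config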

Start the newly configured flannel, then restart Docker and the Kubernetes services in turn.

On the master VMs, run:

systemctl enable flanneld
systemctl start flanneld
service docker restart
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

On the node VMs, run:

systemctl enable flanneld
systemctl start flanneld
service docker restart
systemctl restart kubelet
systemctl restart kube-proxy

With that, the etcd cluster, flannel, and the Kubernetes cluster are all up and running on CentOS 7.
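To spot-check the overlay on any host (paths and prefix assume the configuration above): flanneld records its lease in /run/flannel/subnet.env, each node's subnet appears under the etcd prefix, and flannel0 should carry an address inside 10.0.0.0/16.

cat /run/flannel/subnet.env
etcdctl -C http://etcd:2379 ls /atomic.io/network/subnets
ip addr show flannel0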

Note:

flannel architecture overview

By default flannel uses port 8285 for UDP-encapsulated packets; the VxLAN backend uses port 8472.

So how does a network packet get from one container to another?

1. The container addresses the target container's IP directly; by default the packet leaves through the container's internal eth0.
2. The packet travels across the veth pair to vethXXX.
3. vethXXX is attached directly to the virtual switch docker0, so the packet is forwarded out through the docker0 bridge.
4. The routing table is consulted: packets for container IPs on other hosts are routed to the flannel0 virtual NIC, a point-to-point virtual device, and so reach the flanneld process listening on its other end.
5. flanneld, which maintains the inter-node routing table via etcd, wraps the original packet in a UDP layer and sends it out through the configured iface.
6. The packet crosses the host network to the target host.
7. There it travels up the stack to the transport layer and is handed to the flanneld process listening on port 8285.
8. The data is unwrapped and sent to the flannel0 virtual NIC.
9. The routing table is consulted again: the destination container's packet belongs to docker0.
10. docker0 finds the container attached to it and delivers the packet.
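Step 5's encapsulation is easy to observe on the wire; for example, on the sending host (the interface name eth0 is an assumption):

tcpdump -ni eth0 udp port 8285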

Original article: https://www.cnblogs.com/syf-com/p/9159186.html
