Deploying a Highly Available Kubernetes Cluster with kubespray (Containerized)

Published 2023-09-06 01:36 | Editor: 郭大石 | Keywords: kubernetes

I. Base environment

  • docker version 1.12.6
  • CentOS 7

1. Prepare the machines to deploy

IP              ROLE
172.30.33.89    k8s-registry-lb
172.30.33.90    k8s-master01-etcd01
172.30.33.91    k8s-master02-etcd02
172.30.33.92    k8s-master03-etcd03
172.30.33.93    k8s-node01-ingress01
172.30.33.94    k8s-node02-ingress02
172.30.33.31    ansible-client

2. Prepare the deployment machine, ansible-client

3. Prepare the required images. Because the upstream registries are blocked, the images can be downloaded from Baidu Cloud (the download link is in the original post).

IMAGE                                                             VERSION
quay.io/coreos/hyperkube                                          v1.6.7_coreos.0
quay.io/coreos/etcd                                               v3.1.10
calico/ctl                                                        v1.1.3
calico/node                                                       v2.4.1
calico/cni                                                        v1.10.0
calico/kube-policy-controller                                     v0.7.0
quay.io/calico/routereflector                                     v0.3.0
gcr.io/google_containers/kubernetes-dashboard-amd64               v1.6.3
gcr.io/google_containers/nginx-ingress-controller                 0.9.0-beta.11
gcr.io/google_containers/defaultbackend                           1.3
gcr.io/google_containers/cluster-proportional-autoscaler-amd64    1.1.1
gcr.io/google_containers/fluentd-elasticsearch                    1.22
gcr.io/google_containers/kibana                                   v4.6.1
gcr.io/google_containers/elasticsearch                            v2.4.1
gcr.io/google_containers/k8s-dns-sidecar-amd64                    1.14.4
gcr.io/google_containers/k8s-dns-kube-dns-amd64                   1.14.4
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64              1.14.4
andyshinn/dnsmasq                                                 2.72
nginx                                                             1.11.4-alpine
gcr.io/google_containers/heapster-grafana-amd64                   v4.4.1
gcr.io/google_containers/heapster-amd64                           v1.4.0
gcr.io/google_containers/heapster-influxdb-amd64                  v1.1.1
gcr.io/google_containers/pause-amd64                              3.0
lachlanevenson/k8s-helm                                           v2.2.2
gcr.io/kubernetes-helm/tiller                                     v2.2.2

4. Load all the downloaded images

```shell
# On ansible-client
$ IP=(172.30.33.89 172.30.33.90 172.30.33.91 172.30.33.92 172.30.33.93 172.30.33.94)
$ for i in ${IP[*]}; do scp -r kubespray_images_v1.6.7 $i:~/; done

# On every node being deployed
$ IMAGES=$(ls -l ~/kubespray_images_v1.6.7 | awk -F' ' '{ print $9 }')
$ for x in $IMAGES; do docker load -i kubespray_images_v1.6.7/$x; done
```
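The `ls | awk` pipeline above is fragile with unusual filenames. A glob-based sketch is more robust; the directory name matches the one used above, and the `echo` keeps it a dry run (drop it to actually load):

```shell
#!/bin/sh
# Iterate the image tarballs with a shell glob instead of parsing `ls`
# output. Prints the docker load command for each file found (dry run).
load_images() {
  dir="$1"
  for f in "$dir"/*; do
    [ -f "$f" ] || continue            # skip if the glob matched nothing
    echo "docker load -i $f"           # remove echo to execute for real
  done
}
```

Usage on a node: `load_images ~/kubespray_images_v1.6.7 | sh` once you are happy with the dry-run output.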

II. Building the cluster

1. Get the kubespray source

```shell
$ git clone https://github.com/kubernetes-incubator/kubespray.git
```

2. Edit the configuration file

```shell
$ vim ~/kubespray/inventory/group_vars/k8s-cluster.yml
```

```yaml
---
# Base OS to bootstrap the cluster on; supports ubuntu, coreos, centos, none
bootstrap_os: centos
# etcd data directory
etcd_data_dir: /var/lib/etcd
# where the kubernetes binaries will be installed
bin_dir: /usr/local/bin
# kubernetes configuration directory
kube_config_dir: /etc/kubernetes
# where the certificate/token generation scripts live
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
# kubernetes manifest directory
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
# kubernetes system namespace
system_namespace: kube-system
# log directory
kube_log_dir: "/var/log/kubernetes"
# kubernetes certificate directory
kube_cert_dir: "{{ kube_config_dir }}/ssl"
# kubernetes token directory
kube_token_dir: "{{ kube_config_dir }}/tokens"
# basic auth credentials directory
kube_users_dir: "{{ kube_config_dir }}/users"
# disable anonymous authorization
kube_api_anonymous_auth: false

## kubernetes version to install
kube_version: v1.6.7
# download cache location during install (needs at least 1G)
local_release_dir: "/tmp/releases"
# number of retries, e.g. for failed downloads
retry_stagger: 5
# certificate group
kube_cert_group: kube-cert
# cluster log level
kube_log_level: 2
# basic auth username/password for the api server over HTTP
kube_api_pwd: "test123"
kube_users:
  kube:
    pass: "{{kube_api_pwd}}"
    role: admin
  root:
    pass: "{{kube_api_pwd}}"
    role: admin

## toggle authentication (basic auth, static token auth)
#kube_oidc_auth: false
#kube_basic_auth: false
#kube_token_auth: false

## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
## To use OpenID you have to deploy additionally an OpenID Provider (e.g Dex, Keycloak, ...)
# kube_oidc_url: https:// ...
# kube_oidc_client_id: kubernetes
## Optional settings for OIDC
# kube_oidc_ca_file: {{ kube_cert_dir }}/ca.pem
# kube_oidc_username_claim: sub
# kube_oidc_groups_claim: groups

# network plugin (calico, weave or flannel)
kube_network_plugin: calico
# enable kubernetes network policies
enable_network_policy: false
# kubernetes service address range
kube_service_addresses: 10.233.0.0/18
# pod address range
kube_pods_subnet: 10.233.64.0/18
# per-node subnet prefix size
kube_network_node_prefix: 24
# api server listen address and ports
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443  # (https)
kube_apiserver_insecure_port: 8080  # (http)
# default dns suffix
cluster_name: cluster.local
# subdomains of the cluster domain resolved via /etc/resolv.conf for pods on host networking
ndots: 2
# DNS component: dnsmasq_kubedns or kubedns
dns_mode: dnsmasq_kubedns
# dns mode; can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# deploy netchecker to verify DNS and HTTP health
deploy_netchecker: false
# skydns service IP configuration
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
# docker storage directory
docker_daemon_graph: "/var/lib/docker"
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
## An obvious use case is allowing insecure-registry access
## to self hosted registries like so:
docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }} --iptables=false --storage-driver=devicemapper"
docker_bin_dir: "/usr/bin"
# deployment type for the containerized control plane (etcd/kubelet/secrets)
etcd_deployment_type: docker
kubelet_deployment_type: docker
cert_management: script
vault_deployment_type: docker
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# monitoring apps for k8s
efk_enabled: true
# Helm deployment
helm_enabled: false
```

3. Generate the cluster inventory

After setting the basic cluster parameters, you still need to generate an inventory file that specifies which servers to install on, how the master and node roles are distributed, and which machines host the etcd cluster.

```shell
# Define the cluster IPs
$ IP=(172.30.33.89 172.30.33.90 172.30.33.91 172.30.33.92 172.30.33.93 172.30.33.94)
# Generate the inventory with kubespray's bundled python script
$ CONFIG_FILE=~/kubespray/inventory/inventory.cfg python3 ~/kubespray/contrib/inventory_builder/inventory.py ${IP[*]}
```

The generated inventory looks like the following. It is best to add ansible_user=root to each host line; I did not specify it when I first set this up and the playbook failed.

```ini
node1 ansible_user=root ansible_host=172.30.33.90 ip=172.30.33.90
node2 ansible_user=root ansible_host=172.30.33.91 ip=172.30.33.91
node3 ansible_user=root ansible_host=172.30.33.92 ip=172.30.33.92
node4 ansible_user=root ansible_host=172.30.33.93 ip=172.30.33.93
node5 ansible_user=root ansible_host=172.30.33.94 ip=172.30.33.94

[kube-master]
node1
node2
node3

[kube-node]
node1
node2
node3
node4
node5

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]
```
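Before running the playbook, it is worth confirming the generated inventory actually contains the group headers kubespray expects. A small sanity-check sketch (this helper is not part of kubespray):

```shell
#!/bin/sh
# Check that an inventory file defines every group header kubespray needs.
# Prints "inventory ok" on success, names the first missing group otherwise.
check_inventory() {
  inv="$1"
  for group in kube-master kube-node etcd k8s-cluster:children; do
    grep -qF "[$group]" "$inv" || { echo "missing [$group]"; return 1; }
  done
  echo "inventory ok"
}
```

Usage: `check_inventory ~/kubespray/inventory/inventory.cfg`.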

5. Adjust the docker, efk, and etcd configuration

Modify the ansible roles for docker, efk, and etcd in advance. During deployment, ansible detects the docker version and tries to download the latest one; because of the GFW the download never completes and the run hangs there, so we make these changes up front. We also upgrade etcd: the default v3.0.6 is known to be unstable.

Modify the docker role and comment out the entire docker-installation section below:

```shell
vim ~/kubespray/roles/docker/tasks/main.yml
```

```yaml
# - name: ensure docker repository public key is installed
#   action: "{{ docker_repo_key_info.pkg_key }}"
#   args:
#     id: "{{item}}"
#     keyserver: "{{docker_repo_key_info.keyserver}}"
#     state: present
#   register: keyserver_task_result
#   until: keyserver_task_result|succeeded
#   retries: 4
#   delay: "{{ retry_stagger | random + 3 }}"
#   with_items: "{{ docker_repo_key_info.repo_keys }}"
#   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic)

# - name: ensure docker repository is enabled
#   action: "{{ docker_repo_info.pkg_repo }}"
#   args:
#     repo: "{{item}}"
#     state: present
#   with_items: "{{ docker_repo_info.repos }}"
#   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic) and (docker_repo_info.repos|length > 0)

# - name: Configure docker repository on RedHat/CentOS
#   template:
#     src: "rh_docker.repo.j2"
#     dest: "/etc/yum.repos.d/docker.repo"
#   when: ansible_distribution in ["CentOS","RedHat"] and not is_atomic

# - name: ensure docker packages are installed
#   action: "{{ docker_package_info.pkg_mgr }}"
#   args:
#     pkg: "{{item.name}}"
#     force: "{{item.force|default(omit)}}"
#     state: present
#   register: docker_task_result
#   until: docker_task_result|succeeded
#   retries: 4
#   delay: "{{ retry_stagger | random + 3 }}"
#   with_items: "{{ docker_package_info.pkgs }}"
#   notify: restart docker
#   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic) and (docker_package_info.pkgs|length > 0)

# If you installed docker yourself, also comment this out, unless the
# docker.service in the template actually works for you
#- name: Set docker systemd config
#  include: systemd.yml
```

Modify the efk config and comment out the KIBANA_BASE_URL block; otherwise kibana will be unreachable after efk is deployed:

```shell
vim ~/kubespray/roles/kubernetes-apps/efk/kibana/templates/kibana-deployment.yml.j2
```

```yaml
#          - name: "KIBANA_BASE_URL"
#            value: "{{ kibana_base_url }}"
```

Modify the download defaults to change the etcd and kubedns versions.

In 1.6.7, to use a newer calico node image, I added an extra variable, calico_node_version, myself:

```shell
vim ~/kubespray/roles/download/defaults/main.yml
```

```yaml
etcd_version: v3.1.10
calico_node_version: "v2.4.1"
kubedns_version: 1.14.4
calico_policy_version: "v0.7.0"
```

Note: if you change kubedns_version, you must also change kubedns_version in /root/kubespray/roles/kubernetes-apps/ansible/defaults/main.yml.
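To keep the two defaults files from drifting apart, the edit can be scripted. A hedged sketch (the helper name is my own; GNU sed's in-place editing is assumed, as shipped on CentOS 7):

```shell
#!/bin/sh
# Set kubedns_version to the same value in every file passed in, so the
# two kubespray defaults files always agree.
set_kubedns_version() {
  version="$1"; shift
  for f in "$@"; do
    sed -i "s/^kubedns_version:.*/kubedns_version: $version/" "$f"
  done
}
# usage:
# set_kubedns_version 1.14.4 \
#   ~/kubespray/roles/download/defaults/main.yml \
#   ~/kubespray/roles/kubernetes-apps/ansible/defaults/main.yml
```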

6. (Optional) Modify docker.service

```shell
# Skip this step if your docker.service has no MountFlags entry.
# Comment out the MountFlags=slave line in /usr/lib/systemd/system/docker.service
```
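The edit above can be done with a one-line sed. A sketch (helper name is mine; GNU sed -i assumed, as on CentOS 7):

```shell
#!/bin/sh
# Comment out any MountFlags line in the given systemd unit file.
disable_mountflags() {
  sed -i 's/^MountFlags=/#MountFlags=/' "$1"
}
# usage: disable_mountflags /usr/lib/systemd/system/docker.service
# then reload systemd and restart docker:
#   systemctl daemon-reload && systemctl restart docker
```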

7. Deploy with one command from ansible-client

```shell
$ ansible-playbook -i ~/kubespray/inventory/inventory.cfg cluster.yml -b -v --private-key=~/.ssh/id_rsa
```

After a successful deployment you can inspect the cluster state; the original post includes screenshots of the node and pod details here.
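Verifying the deployment comes down to running `kubectl get nodes` and `kubectl get pods --all-namespaces`. For scripted checks, a small helper (my own sketch, assuming kubectl's default table output where STATUS is the second column) can count the Ready nodes:

```shell
#!/bin/sh
# Count the nodes reporting Ready in `kubectl get nodes` output.
# Skips the header row and matches the STATUS column exactly.
count_ready_nodes() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}
# usage: kubectl get nodes | count_ready_nodes
```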

References:

https://kevinguo.me/2017/07/06/kubespray-deploy-kubernetes-1/#1%E5%87%86%E5%A4%87%E5%A5%BD%E8%A6%81%E9%83%A8%E7%BD%B2%E7%9A%84%E6%9C%BA%E5%99%A8


Original post: https://www.cnblogs.com/kevinX/p/8276238.html
