
# Shared storage for Kubernetes with GlusterFS + Heketi


[toc]

## Environment

| Hostname | OS | IP address | Role |
| --- | --- | --- | --- |
| ops-k8s-175 | ubuntu16.04 | 192.168.75.175 | k8s-master, glusterfs, heketi |
| ops-k8s-176 | ubuntu16.04 | 192.168.75.176 | k8s-node, glusterfs |
| ops-k8s-177 | ubuntu16.04 | 192.168.75.177 | k8s-node, glusterfs |
| ops-k8s-178 | ubuntu16.04 | 192.168.75.178 | k8s-node, glusterfs |

## GlusterFS configuration

### Installation

```bash
# On all nodes:
apt-get install glusterfs-server glusterfs-common glusterfs-client fuse
systemctl start glusterfs-server
systemctl enable glusterfs-server
# On 175:
gluster peer probe 192.168.75.176
gluster peer probe 192.168.75.177
gluster peer probe 192.168.75.178
```
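Before creating any volumes, it is worth confirming that all peers actually joined the trusted storage pool. A quick check, run on 175:

```bash
# Each of the other three nodes should show up as "Peer in Cluster (Connected)"
gluster peer status
```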

### Testing

#### Create a test volume

```bash
# Create the volume
gluster volume create test-volume replica 2 192.168.75.175:/home/glusterfs/data 192.168.75.176:/home/glusterfs/data force
# Start the volume
gluster volume start test-volume
# Mount it
mkdir -p /mnt/mytest
mount -t glusterfs 192.168.75.175:/test-volume /mnt/mytest
```
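To sanity-check the new volume, `gluster volume info` shows its type and brick layout, and a write through the mount point verifies the data path end to end. A minimal sketch, assuming the mount above succeeded:

```bash
# Show volume type, status and bricks
gluster volume info test-volume
# Write a test file through the glusterfs mount; it should land on both replicas
echo hello > /mnt/mytest/testfile
```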

#### Expand the test volume

```bash
# Add bricks to the volume
gluster volume add-brick test-volume 192.168.75.177:/home/glusterfs/data 192.168.75.178:/home/glusterfs/data force
```
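Adding bricks does not move existing data by itself; a rebalance is usually needed so that data spreads across the new bricks as well. A sketch:

```bash
# Redistribute existing data onto the newly added bricks
gluster volume rebalance test-volume start
# Watch progress
gluster volume rebalance test-volume status
```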

#### Delete the test volume

```bash
gluster volume stop test-volume
gluster volume delete test-volume
```
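Note that deleting the volume does not wipe the brick directories; GlusterFS leaves the data and its extended-attribute markers behind, and reusing the same path for a new volume will be rejected with an "already part of a volume" error. One way to clean up, assuming the brick paths used above:

```bash
# Run on each node that contributed a brick; removes leftover data and gluster metadata
rm -rf /home/glusterfs/data
```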

## Heketi configuration

### Deployment

#### Overview

Heketi's main job is to provide a standard REST API on top of GlusterFS; it is typically used to integrate GlusterFS with Kubernetes.

Heketi project: https://github.com/heketi/heketi

Download the Heketi release packages:
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-client-v5.0.1.linux.amd64.tar.gz
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz

#### Edit the Heketi configuration file

Edit the Heketi configuration file /etc/heketi/heketi.json as follows:

```
......
# Change the port to avoid conflicts
"port": "18080",
......
# Enable authentication
"use_auth": true,
......
# Change the admin user's key to adminkey
"key": "adminkey"
......
# Switch the executor to ssh and configure the credentials it needs. The key must
# allow passwordless SSH login to every machine in the cluster; use ssh-copy-id to
# copy the public key to each GlusterFS server (see "Configure the SSH key" below).
"executor": "ssh",
"sshexec": {
    "keyfile": "/etc/heketi/heketi_key",
    "user": "root",
    "port": "22",
    "fstab": "/etc/fstab"
},
......
# Location of the Heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Adjust the log level
"loglevel" : "warning"
```

Note that Heketi supports three executors: mock, ssh, and kubernetes. Use mock in test environments and ssh in production; the kubernetes executor is only for when GlusterFS itself runs as containers on Kubernetes. Here GlusterFS and Heketi are deployed independently, so we use ssh.

#### Configure the SSH key

Since Heketi was configured with the ssh executor above, the Heketi server must be able to connect to every GlusterFS node over SSH with key authentication to perform its management operations, so first generate an SSH key pair:

```bash
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 600 /etc/heketi/heketi_key.pub
# Distribute the SSH public key; only one node is shown here, repeat for every GlusterFS node
ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.75.175
# Verify that key-based SSH to the GlusterFS node works
ssh -i /etc/heketi/heketi_key root@192.168.75.175
```

#### Start Heketi

```bash
nohup heketi -config=/etc/heketi/heketi.json &
```
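A quick way to confirm the service is up is Heketi's /hello endpoint, on the port configured above:

```bash
# Should print: Hello from Heketi
curl http://192.168.75.175:18080/hello
```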

#### Production example

In my actual production setup, Heketi is managed with docker-compose rather than started by hand. A docker-compose configuration example:

version: "2"services: ?heketi: ???container_name: heketi ???image: dk-reg.op.douyuyuba.com/library/heketi:5 ???volumes: ?????- "/etc/heketi:/etc/heketi" ?????- "/var/lib/heketi:/var/lib/heketi" ?????- "/etc/localtime:/etc/localtime" ???network_mode: host

### Add GlusterFS to Heketi

#### Create a cluster

```bash
heketi-cli --user admin --server http://192.168.75.175:18080 --secret adminkey --json cluster create
{"id":"d102a74079dd79aceb3c70d6a7e8b7c4","nodes":[],"volumes":[]}
```

#### Add the four GlusterFS machines as nodes to the cluster

Because Heketi authentication is enabled, every heketi-cli invocation needs the full set of auth flags, which is tedious. Define an alias to avoid repeating them:

```bash
alias heketi-cli='heketi-cli --server "http://192.168.75.175:18080" --user "admin" --secret "adminkey"'
```
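With the alias in place, a bare subcommand is enough; for example, listing clusters should show the one created above:

```bash
heketi-cli cluster list
```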

Now add the nodes:

```bash
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.175 --storage-host-name 192.168.75.175 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.176 --storage-host-name 192.168.75.176 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.177 --storage-host-name 192.168.75.177 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.178 --storage-host-name 192.168.75.178 --zone 1
```
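Each `node add` returns a node id, which is needed when adding devices in the next step. The registered nodes can be checked (and their ids retrieved again) afterwards:

```bash
# List all registered node ids
heketi-cli node list
# Show the nodes attached to the cluster created above
heketi-cli cluster info d102a74079dd79aceb3c70d6a7e8b7c4
```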

Some write-ups note that when deploying on CentOS you must comment out `Defaults requiretty` in /etc/sudoers on every GlusterFS machine, otherwise adding the second node keeps failing; only after raising the log level does the log show sudo complaining about requiring a tty. Since I deployed directly on Ubuntu, I did not hit this problem, but if you run into it, try that fix.

#### Add devices

A special note here: Heketi currently only supports bare partitions or bare disks as devices; a device that already carries a filesystem cannot be added.

```bash
# The id passed to --node is the one returned when the node was added in the previous step.
# Only one example is shown; in a real setup, add every storage disk of every node.
heketi-cli --json device add --name="/dev/vda2" --node "c3638f57b5c5302c6f7cd5136c8fdc5e"
```
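Whether the device was accepted, and how much free space Heketi sees on it, can be checked against the node:

```bash
# The node id is the same one used in the device add above
heketi-cli node info c3638f57b5c5302c6f7cd5136c8fdc5e
```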

#### Production configuration via topology file

The steps above show how to manually create a cluster and then add nodes and devices one at a time. In actual production use, all of this can be done in one step from a configuration file.

Create a file /etc/heketi/topology-sample.json with the following content:

{ ???"clusters": [ ???????{ ???????????"nodes": [ ???????????????{ ???????????????????"node": { ???????????????????????"hostnames": { ???????????????????????????"manage": [ ???????????????????????????????"192.168.75.175" ???????????????????????????], ???????????????????????????"storage": [ ???????????????????????????????"192.168.75.175" ???????????????????????????] ???????????????????????}, ???????????????????????"zone": 1 ???????????????????}, ???????????????????"devices": [ ???????????????????????"/dev/vda2" ???????????????????] ???????????????}, ???????????????{ ???????????????????"node": { ???????????????????????"hostnames": { ???????????????????????????"manage": [ ???????????????????????????????"192.168.75.176" ???????????????????????????], ???????????????????????????"storage": [ ???????????????????????????????"192.168.75.176" ???????????????????????????] ???????????????????????}, ???????????????????????"zone": 1 ???????????????????}, ???????????????????"devices": [ ???????????????????????"/dev/vda2" ???????????????????] ???????????????}, ???????????????{ ???????????????????"node": { ???????????????????????"hostnames": { ???????????????????????????"manage": [ ???????????????????????????????"192.168.75.177" ???????????????????????????], ???????????????????????????"storage": [ ???????????????????????????????"192.168.75.177" ???????????????????????????] ???????????????????????}, ???????????????????????"zone": 1 ???????????????????}, ???????????????????"devices": [ ???????????????????????"/dev/vda2" ???????????????????] ???????????????}, ???????????????{ ???????????????????"node": { ???????????????????????"hostnames": { ???????????????????????????"manage": [ ???????????????????????????????"192.168.75.178" ???????????????????????????], ???????????????????????????"storage": [ ???????????????????????????????"192.168.75.178" ???????????????????????????] ???????????????????????}, ???????????????????????"zone": 1 ???????????????????}, ???????????????????"devices": [ ???????????????????????"/dev/vda2" ???????????????????] ???????????????} ??????????????????????????] ???????} ???]}

Load it:

```bash
heketi-cli topology load --json topology-sample.json
```
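`topology info` dumps the resulting clusters, nodes and devices, which is a convenient way to confirm the file was applied completely:

```bash
heketi-cli topology info
```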

#### Create a volume

This is only a test; in actual use, Kubernetes creates volumes automatically through PVCs.

If the volume being created is small, the call may fail with "No Space". To work around this, add `"brick_min_size_gb": 1` to heketi.json (the value is in GB):

...... ???"brick_min_size_gb" : 1, ???"db": "/var/lib/heketi/heketi.db"......

The requested size must be larger than brick_min_size_gb; if it is set to 1 the "min brick limit" error still occurs. Also, --replica must be greater than 1:

```bash
heketi-cli --json volume create --size 3 --replica 2
```

When first running the create, the following error was thrown:

```
Error: /usr/sbin/thin_check: execvp failed: No such file or directory
  WARNING: Integrity check of metadata for pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed.
  /usr/sbin/thin_check: execvp failed: No such file or directory
  Check of pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed (status:2). Manual repair required!
  Failed to activate thin pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e.
```

This is fixed by installing the thin-provisioning-tools package on all GlusterFS nodes:

```bash
apt-get -y install thin-provisioning-tools
```

A successful create returns output like the following:

```bash
heketi-cli --json volume create --size 3 --replica 2
{"size":3,"name":"vol_7fc61913851227ca2c1237b4c4d51997","durability":{"type":"replicate","replicate":{"replica":2},"disperse":{"data":4,"redundancy":2}},"snapshot":{"enable":false,"factor":1},"id":"7fc61913851227ca2c1237b4c4d51997","cluster":"dae1ab512dfad0001c3911850cecbd61","mount":{"glusterfs":{"hosts":["10.1.61.175","10.1.61.178"],"device":"10.1.61.175:vol_7fc61913851227ca2c1237b4c4d51997","options":{"backup-volfile-servers":"10.1.61.178"}}},"bricks":[{"id":"004f34fd4eb9e04ca3e1ca7cc1a2dd2c","path":"/var/lib/heketi/mounts/vg_d9fb2bec56cfdf73e21d612b1b3c1feb/brick_004f34fd4eb9e04ca3e1ca7cc1a2dd2c/brick","device":"d9fb2bec56cfdf73e21d612b1b3c1feb","node":"20d14c78691d9caef050b5dc78079947","volume":"7fc61913851227ca2c1237b4c4d51997","size":3145728},{"id":"2876e9a7574b0381dc0479aaa2b64d46","path":"/var/lib/heketi/mounts/vg_b7fd866d3ba90759d0226e26a790d71f/brick_2876e9a7574b0381dc0479aaa2b64d46/brick","device":"b7fd866d3ba90759d0226e26a790d71f","node":"9cddf0ac7899676c86cb135be16649f5","volume":"7fc61913851227ca2c1237b4c4d51997","size":3145728}]}
```
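The `mount` field of this output contains everything a client needs. As a sketch, the volume above could be mounted manually like this (the mount point /mnt/heketi-test is hypothetical, and the client must be able to reach the storage network):

```bash
mkdir -p /mnt/heketi-test
# Device and backup volfile server are taken from the "mount" field of the create output
mount -t glusterfs 10.1.61.175:vol_7fc61913851227ca2c1237b4c4d51997 /mnt/heketi-test
```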

## Configure Kubernetes to use GlusterFS

Reference: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

### Create a StorageClass

Create a storageclass-glusterfs.yaml file with the following content:

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.175:18080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
  volumetype: "replicate:2"
```

Apply it:

```bash
kubectl apply -f storageclass-glusterfs.yaml
```

This approach writes the user key in plain text into the StorageClass definition. The official recommendation is to store the key in a Secret instead. For example:

```yaml
# glusterfs-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: TFRTTkd6TlZJOEpjUndZNg==
type: kubernetes.io/glusterfs
---
# storageclass-glusterfs.yaml, modified to reference the secret:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.1.61.175:18080"
  clusterid: "dae1ab512dfad0001c3911850cecbd61"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
```
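Both files then need to be applied, the Secret before the StorageClass that references it:

```bash
kubectl create -f glusterfs-secret.yaml
kubectl apply -f storageclass-glusterfs.yaml
```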

For more detailed usage see: https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs

### Create a PVC

The content of glusterfs-pvc.yaml is as follows:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-mysql1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```

Create it:

```bash
kubectl create -f glusterfs-pvc.yaml
```
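If dynamic provisioning works, the PVC should switch to Bound shortly after creation:

```bash
# STATUS should read "Bound", with the dynamically provisioned volume attached
kubectl get pvc glusterfs-mysql1
```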

### Create a pod that uses the PVC

The content of mysql-deployment.yaml is as follows:

```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root123456
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: glusterfs-mysql-data
          mountPath: "/var/lib/mysql"
      volumes:
      - name: glusterfs-mysql-data
        persistentVolumeClaim:
          claimName: glusterfs-mysql1
```

Create it:

```bash
kubectl create -f /etc/kubernetes/mysql-deployment.yaml
```
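To confirm the pod came up and actually mounted the GlusterFS volume:

```bash
kubectl get pods -l name=mysql
# Inside the container, /var/lib/mysql should show up as a glusterfs mount
kubectl exec -it $(kubectl get pods -l name=mysql -o jsonpath='{.items[0].metadata.name}') -- df -h /var/lib/mysql
```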

Note that dynamic PVC provisioning is used here to create the GlusterFS-backed volume; PVs can also be created manually, see: http://rdc.hundsun.com/portal/article/826.html


Original article (Chinese): https://www.cnblogs.com/breezey/p/8849466.html
