GlusterFS is an open-source, scale-out file system. These examples show how to let containers use GlusterFS volumes.
The example assumes you have already set up a GlusterFS server cluster and have a running GlusterFS volume ready for use in containers.
Prerequisites
A Kubernetes cluster is already set up.
Installing the GlusterFS cluster
Environment
OS: CentOS 7.x
Two GlusterFS nodes: 192.168.22.21, 192.168.22.22
- Install GlusterFS
We install GlusterFS directly on the physical machines with yum. If you would rather run GlusterFS on Kubernetes, see:
https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md
Run the following on both GlusterFS nodes:

# Install the gluster repository first
$ yum install centos-release-gluster -y
# Install the glusterfs components
$ yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel
## Create the glusterd directory
$ mkdir /opt/glusterd
# Point glusterd at the new directory
$ sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
# Start glusterfs
$ systemctl start glusterd.service
# Enable start on boot
$ systemctl enable glusterd.service
# Check status
$ systemctl status glusterd.service
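Before moving on, it is worth confirming that glusterd is actually up on both nodes (a quick check, using this guide's hostnames):

# Run on both k8s-glusterfs-01 and k8s-glusterfs-02
$ systemctl is-active glusterd
$ glusterfs --version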
- Configure GlusterFS
[root@k8s-master-01 ~]# vi /etc/hosts
192.168.22.21 k8s-glusterfs-01
192.168.22.22 k8s-glusterfs-02
# If the firewall is enabled, open the GlusterFS management port
[root@k8s-master-01 ~]# iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
Create the storage directory
[root@k8s-master-01 ~]# mkdir /opt/gfs_data
Add the node to the cluster. The machine you run this command on does not need to probe itself:
[root@k8s-master-01 ~]# gluster peer probe k8s-glusterfs-02
Check the cluster status
[root@k8s-glusterfs-01 ~]# gluster peer status
Number of Peers: 1
Hostname: k8s-glusterfs-02
Uuid: b80f012b-cbb6-469f-b302-0722c058ad45
State: Peer in Cluster (Connected)
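Optionally, you can probe back from the second node so that the first node is recorded there by hostname rather than by IP (a common extra step, not required for this guide):

[root@k8s-glusterfs-02 ~]# gluster peer probe k8s-glusterfs-01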
3. **Configure the volume**

**GlusterFS volume modes**

1) Default mode, i.e. DHT, also called a distributed volume: files are distributed by hash across the server nodes. Command format:
gluster volume create test-volume server1:/exp1 server2:/exp2

2) Replicated mode, i.e. AFR: create the volume with replica x; each file is replicated to x nodes. Command format:
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2

3) Striped mode: create the volume with stripe x; each file is split into blocks stored across x nodes (similar to RAID 0). Command format:
gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2

4) Distributed striped mode (combined), requires at least 4 servers: stripe 2 across 4 nodes combines DHT with striping. Command format:
gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

5) Distributed replicated mode (combined), requires at least 4 servers: replica 2 across 4 nodes combines DHT with AFR. Command format:
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

6) Striped replicated mode (combined), requires at least 4 servers: stripe 2 replica 2 across 4 nodes combines striping with AFR. Command format:
gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

7) All three modes combined, requires at least 8 servers: stripe 2 replica 2, with every 4 nodes forming one group. Command format:
gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
# Create the GlusterFS volume:
[root@k8s-glusterfs-01 ~]# gluster volume create k8s-volume replica 2 k8s-glusterfs-01:/opt/gfs_data k8s-glusterfs-02:/opt/gfs_data force
volume create: k8s-volume: success: please start the volume to access data
Check the volume status
[root@k8s-glusterfs-01 ~]# gluster volume info
Volume Name: k8s-volume
Type: Replicate
Volume ID: 340d94ee-7c3d-451d-92c9-ad0e19d24b7d
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: k8s-glusterfs-01:/opt/gfs_data
Brick2: k8s-glusterfs-02:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
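Note that the volume is still in the Created state. As the create message above says, it must be started before any client (including Kubernetes) can mount it:

[root@k8s-glusterfs-01 ~]# gluster volume start k8s-volume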
4. **GlusterFS tuning**
Enable quotas on the specified volume
$ gluster volume quota k8s-volume enable
Set the quota limit on the specified volume
$ gluster volume quota k8s-volume limit-usage / 1TB
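To verify the quota, list the configured limits and current usage on the volume:

$ gluster volume quota k8s-volume list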
Set the cache size (default 32MB)
$ gluster volume set k8s-volume performance.cache-size 4GB
Set the number of io threads; too large a value can crash the process
$ gluster volume set k8s-volume performance.io-thread-count 16
Set the network ping timeout (default 42s)
$ gluster volume set k8s-volume network.ping-timeout 10
Set the write-behind buffer size (default 1MB)
$ gluster volume set k8s-volume performance.write-behind-window-size 1024MB
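Applied options appear under "Options Reconfigured" in the volume info; recent GlusterFS releases can also query a single option (a minimal check, assuming your version supports `volume get`):

$ gluster volume info k8s-volume
$ gluster volume get k8s-volume performance.cache-size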
# Using GlusterFS from a client: mounting the Gluster volume on a physical machine
yum install -y glusterfs glusterfs-fuse
mkdir -p /opt/gfsmnt
mount -t glusterfs k8s-glusterfs-01:k8s-volume /opt/gfsmnt/
## Check the mount with df:
df -h |grep k8s-volume
k8s-glusterfs-01:k8s-volume   46G  1.6G   44G   4% /opt/gfsmnt
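To make the client mount persist across reboots, an /etc/fstab entry along these lines can be added (a sketch, not in the original article; backup-volfile-servers lets the client fall back to the second node when fetching the volume file):

# /etc/fstab
k8s-glusterfs-01:/k8s-volume /opt/gfsmnt glusterfs defaults,_netdev,backup-volfile-servers=k8s-glusterfs-02 0 0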
# Configuring Kubernetes to use GlusterFS

The official documentation describes the procedure:
https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md

Note: the following steps can be run on any master in the Kubernetes cluster where kubectl works!

1. **First, create the GlusterFS endpoints definition in Kubernetes**

This is a snippet of glusterfs-endpoints.json:
"{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs-cluster"
},
"subsets": [
{
"addresses": [
{
"ip": "192.168.22.21"
}
],
"ports": [
{
"port": 20
}
]
},
{
"addresses": [
{
"ip": "192.168.22.22"
}
],
"ports": [
{
"port": 20
}
]
}
]
}
Note: the subsets field should be populated with the addresses of the nodes in the GlusterFS cluster. Any valid value (from 1 to 65535) can be provided in the port field.
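If you prefer YAML for consistency with the PV and PVC manifests later in this guide, the same Endpoints object can be written as follows (an equivalent sketch, not from the original article):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.22.21
    ports:
      - port: 20
  - addresses:
      - ip: 192.168.22.22
    ports:
      - port: 20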
## Create the endpoints:
[root@k8s-master-01 ~]# kubectl create -f glusterfs-endpoints.json
## Verify that the endpoints were created successfully
[root@k8s-master-01 ~]# kubectl get ep |grep glusterfs-cluster
glusterfs-cluster   192.168.22.21:20,192.168.22.22:20
2. **Configure the service**

We also need to create a service for these endpoints so that they persist. We add this service without a selector to tell Kubernetes that we want to add its endpoints manually.
[root@k8s-master-01 ]# cat glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 20}
    ]
  }
}
## Create the service
[root@k8s-master-01 ]# kubectl create -f glusterfs-service.json
## Check the service
[root@k8s-master-01 ]# kubectl get service | grep glusterfs-cluster
glusterfs-cluster   ClusterIP   10.68.114.26   <none>   20/TCP   6m
3. **Configure the PersistentVolume (PV)**

Create the glusterfs-pv.yaml file, specifying the storage capacity and access modes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false
Then execute:
# kubectl create -f glusterfs-pv.yaml
# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM   STORAGECLASS   REASON   AGE
pv001   10Gi       RWX            Retain           Bound
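The RECLAIM POLICY column shows Retain, which is the default for manually created PVs. To make it explicit, the field can be added to the spec (an optional variant of the glusterfs-pv.yaml above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false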
- Configure the PersistentVolumeClaim (PVC)
Create the glusterfs-pvc.yaml file, specifying the requested resource size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Execute:
# kubectl create -f glusterfs-pvc.yaml
# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001   Bound    pv001    10Gi       RWX                           1h
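Note that although the claim requested only 2Gi, it bound to the whole 10Gi PV: a PVC binds to a single PV whose capacity is at least the requested size. The binding details can be inspected with (an optional check, not in the original article):

# kubectl describe pvc pvc001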
- Deploy an application that mounts the PVC
As an example, create an nginx deployment and mount the PVC at /usr/share/nginx/html inside the container.
The nginx_deployment.yaml file is as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: storage001
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001
Execute:
# kubectl create -f nginx_deployment.yaml

Check whether nginx was deployed successfully:

# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dm-5fbdb54795-77f7v   1/1     Running   0          1h
nginx-dm-5fbdb54795-rnqwd   1/1     Running   0          1h

Check the mount:

# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- df -h |grep k8s-volume
192.168.22.21:k8s-volume   46G  1.6G   44G   4% /usr/share/nginx/html

Create a file:

# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- touch /usr/share/nginx/html/123.txt

Check the file attributes:

# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- ls -lt /usr/share/nginx/html/123.txt
-rw-r--r-- 1 root root 0 Jul 9 06:25 /usr/share/nginx/html/123.txt
Then go back to the GlusterFS servers' data directory /opt/gfs_data and check whether the 123.txt file is there:
## Check on 192.168.22.21:
[root@k8s-glusterfs-01 ~]# ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul 9 14:25 123.txt
## Check on 192.168.22.22:
[root@k8s-glusterfs-02 ~]# ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul 9 14:25 123.txt
Because the volume was created with replica 2, the file appears on both bricks. The deployment is now complete.
Kubernetes using GlusterFS for persistent storage
Original article: http://blog.51cto.com/passed/2139299