
Kubernetes Cloud Platform Management in Practice: Automatically Adding Pods to the Load Balancer (Part 7)


I. How to make the application reachable from outside the cluster

At this point the Pods cannot be reached from outside the cluster.

1. Create the Service (svc)

[root@k8s-master ~]# cat myweb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30001
  selector:
    app: myweb

[root@k8s-master ~]# kubectl create -f myweb-svc.yaml
service "nginx" created
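For readers new to Service fields, here is the same manifest annotated (a sketch only; targetPort is written out for clarity and is an addition of this note, since it simply defaults to port when omitted as in the original file):

apiVersion: v1
kind: Service
metadata:
  name: nginx            # name of the Service object
spec:
  type: NodePort         # expose the Service on a port of every node
  ports:
    - port: 80           # port on the Service's cluster IP
      targetPort: 80     # container port on the Pods (defaults to port if omitted)
      nodePort: 30001    # fixed port opened on every node
  selector:
    app: myweb           # Pods carrying this label become the Service's endpoints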

2. Check the Service status

[root@k8s-master ~]# kubectl get all
NAME       DESIRED   CURRENT   READY   AGE
rc/myweb   3         3         3       19m

NAME             CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes   10.254.0.1       <none>        443/TCP        22h
svc/nginx        10.254.205.175   <nodes>       80:30001/TCP   27s

NAME             READY   STATUS    RESTARTS   AGE
po/myweb-7m76h   1/1     Running   0          18m
po/myweb-kzq8c   1/1     Running   0          18m
po/myweb-mnf7x   1/1     Running   0          18m

3、被外界访问原理图
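In short: a client sends a request to any node's IP on port 30001; kube-proxy on that node forwards it to one of the Pods selected by app=myweb on port 80, no matter which node the Pod runs on. A quick external check could look like this (the node addresses are placeholders, not taken from the original post):

# run from a machine outside the cluster; substitute the real node IPs
curl http://<k8s-node1-ip>:30001
curl http://<k8s-node2-ip>:30001
# both requests should return the myweb page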

II. Why 30001?

1. Change it to 3000 and see whether that works

[root@k8s-master ~]# vim myweb-svc.yaml
# change nodePort to 3000
[root@k8s-master ~]# kubectl delete svc/nginx
service "nginx" deleted
[root@k8s-master ~]# kubectl create -f myweb-svc.yaml
The Service "nginx" is invalid: spec.ports[0].nodePort: Invalid value: 3000: provided port is not in the valid range. The range of valid ports is 30000-32767
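Note that 30000-32767 is only the kube-apiserver default; the allowed range is controlled by its --service-node-port-range flag. On the package-based installation this series appears to use, that would be set roughly as follows (the file path and variable name are assumptions about this particular setup):

# /etc/kubernetes/apiserver   (assumed location; restart kube-apiserver afterwards)
KUBE_API_ARGS="--service-node-port-range=3000-32767"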

2. Change the port back to 30001 and recreate the Service

[root@k8s-master ~]# vim myweb-svc.yaml
# change the nodePort back to a valid value
[root@k8s-master ~]# kubectl create -f myweb-svc.yaml
service "nginx" created
[root@k8s-master ~]# kubectl get all
NAME       DESIRED   CURRENT   READY   AGE
rc/myweb   3         3         3       25m

NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes   10.254.0.1      <none>        443/TCP        22h
svc/nginx        10.254.145.15   <nodes>       80:30027/TCP   8s

NAME             READY   STATUS    RESTARTS   AGE
po/myweb-7m76h   1/1     Running   0          24m
po/myweb-kzq8c   1/1     Running   0          24m
po/myweb-mnf7x   1/1     Running   0          25m

If nodePort is not specified, Kubernetes automatically assigns a free port from the 30000-32767 range; that is why the recreated Service above shows 30027 rather than a hand-picked value.
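In other words, the nodePort line can be dropped entirely and Kubernetes picks a free port in that range by itself. A minimal sketch of the shortened manifest:

# myweb-svc.yaml without a fixed nodePort; one is auto-assigned from 30000-32767
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: myweb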

III. Pods are added to the load balancer automatically

1. Scale the ReplicationController down to 1 replica

[root@k8s-master ~]# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
myweb-7m76h   1/1     Running   0          26m   172.16.10.2   k8s-node1
myweb-kzq8c   1/1     Running   0          26m   172.16.48.4   k8s-node2
myweb-mnf7x   1/1     Running   0          26m   172.16.48.2   k8s-node2

[root@k8s-master ~]# kubectl edit rc myweb
  replicas: 1
replicationcontroller "myweb" edited

[root@k8s-master ~]# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
myweb-mnf7x   1/1     Running   0          28m   172.16.48.2   k8s-node2

[root@k8s-master ~]# kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            <none>
Selector:          app=myweb
Type:              NodePort
IP:                10.254.145.15
Port:              <unset>  80/TCP
NodePort:          <unset>  30027/TCP
Endpoints:         172.16.48.2:80
Session Affinity:  None
No events.
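The Endpoints line in kubectl describe is the set of Pod addresses the Service load-balances across. The same information can be read directly from the Endpoints object; a quick check (command sketch, output not taken from the original post):

[root@k8s-master ~]# kubectl get endpoints nginx
# with replicas: 1 this lists the single address 172.16.48.2:80, matching the describe output above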

2. Scale the ReplicationController up to 5 replicas

[root@k8s-master ~]# kubectl edit rc myweb
  replicas: 5
replicationcontroller "myweb" edited

[root@k8s-master ~]# kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            <none>
Selector:          app=myweb
Type:              NodePort
IP:                10.254.145.15
Port:              <unset>  80/TCP
NodePort:          <unset>  30027/TCP
Endpoints:         172.16.10.2:80,172.16.10.3:80,172.16.48.2:80 + 1 more...
Session Affinity:  None
No events.

[root@k8s-master ~]# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
myweb-415zs   1/1     Running   0          9s    172.16.48.3   k8s-node2
myweb-7bw5f   1/1     Running   0          9s    172.16.10.3   k8s-node1
myweb-7kzh2   1/1     Running   0          9s    172.16.48.4   k8s-node2
myweb-j45xb   1/1     Running   0          9s    172.16.10.2   k8s-node1
myweb-mnf7x   1/1     Running   0          30m   172.16.48.2   k8s-node2
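Editing the ReplicationController by hand works, but the same change can be made non-interactively; a sketch using kubectl scale, equivalent to the edit above:

[root@k8s-master ~]# kubectl scale rc myweb --replicas=5
replicationcontroller "myweb" scaled

Either way, the Endpoints of the nginx Service track the Pods matched by the app=myweb selector automatically, so new replicas join the load balancer without any manual configuration.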

IV. Browser test screenshots

1. Web page served via node1 (screenshot in the original post)

2. Web page served via node2 (screenshot in the original post)


Original article: https://www.cnblogs.com/luoahong/p/10300385.html
