k8s csi-driver-nfs Dynamic Volume Expansion
1. Prerequisites
- A working Kubernetes cluster, version >= 1.18 (1.20+ recommended).
- An NFS server already deployed: shared directory /nfs_share/k8s/nfs-csi, server 172.16.4.60.
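For reference, the export on the NFS server might look like the following sketch of /etc/exports (the mount options shown are assumptions, not from this article; adjust them to your own security requirements):

```text
# /etc/exports on 172.16.4.60 (sketch; rw/sync/no_root_squash are assumed options)
/nfs_share/k8s/nfs-csi *(rw,sync,no_root_squash)
```

After editing, `exportfs -r` reloads the export table.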
2. Deploy the NFS CSI driver (csi-driver-nfs)
2.1 Clone the official repository and enter the directory
git clone https://github.com/kubernetes-csi/csi-driver-nfs.git
cd csi-driver-nfs/deploy
# ls
crd-csi-snapshot.yaml    csi-nfs-node.yaml             install-driver.sh              snapshotclass.yaml   v3.0.0  v4.1.0   v4.2.0  v4.5.0  v4.8.0
csi-nfs-controller.yaml  csi-snapshot-controller.yaml  rbac-csi-nfs.yaml              storageclass.yaml    v3.1.0  v4.10.0  v4.3.0  v4.6.0  v4.9.0
csi-nfs-driverinfo.yaml  example                       rbac-snapshot-controller.yaml  uninstall-driver.sh  v4.0.0  v4.11.0  v4.4.0  v4.7.0
- The deploy directory shows that the latest csi-driver-nfs version is v4.11.0, which is the one we will use. Enter that directory; its YAML files are listed below:
# cd /root/statefulset/csi-driver-nfs/deploy/v4.11.0
# ls
crd-csi-snapshot.yaml    csi-nfs-driverinfo.yaml  csi-snapshot-controller.yaml  rbac-snapshot-controller.yaml  storageclass.yaml
csi-nfs-controller.yaml  csi-nfs-node.yaml        rbac-csi-nfs.yaml             snapshotclass.yaml

2.2 Download the images
- Check which images csi-driver-nfs v4.11.0 needs:
# grep -w image *
csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/csi-provisioner:v5.2.0
csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/csi-resizer:v1.13.1
csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/livenessprobe:v2.15.0
csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/nfsplugin:v4.11.0
csi-nfs-node.yaml: image: registry.k8s.io/sig-storage/livenessprobe:v2.15.0
csi-nfs-node.yaml: image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
csi-nfs-node.yaml: image: registry.k8s.io/sig-storage/nfsplugin:v4.11.0
csi-snapshot-controller.yaml: image: registry.k8s.io/sig-storage/snapshot-controller:v8.2.0
- As shown, the csi-nfs-controller.yaml, csi-nfs-node.yaml, and csi-snapshot-controller.yaml files reference images.
- Since this is an air-gapped deployment, the images must be pulled on an internet-connected host and then pushed to the internal Harbor registry.
- Also, registry.k8s.io can be hard to reach; if you need the images, feel free to contact me.
docker pull registry.k8s.io/sig-storage/csi-resizer:v1.13.1
docker pull registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
docker pull registry.k8s.io/sig-storage/livenessprobe:v2.15.0
docker pull registry.k8s.io/sig-storage/nfsplugin:v4.11.0
docker pull registry.k8s.io/sig-storage/csi-provisioner:v5.2.0
docker pull registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
docker pull registry.k8s.io/sig-storage/snapshot-controller:v8.2.0
- Save the downloaded images as tar.gz archives, upload them to the internal host, and load them:
ls *.tar.gz | xargs -i docker load -i {}
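The export step on the internet-connected host can be sketched as follows (the `archive_name` naming convention is a hypothetical choice, not from the original article):

```shell
#!/bin/bash
# Export each pulled image to its own .tar.gz archive on the internet-connected host.
# Assumption: the seven images from step 2.2 have already been pulled locally.
images="
registry.k8s.io/sig-storage/csi-provisioner:v5.2.0
registry.k8s.io/sig-storage/csi-resizer:v1.13.1
registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
registry.k8s.io/sig-storage/livenessprobe:v2.15.0
registry.k8s.io/sig-storage/nfsplugin:v4.11.0
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
registry.k8s.io/sig-storage/snapshot-controller:v8.2.0
"

# e.g. .../nfsplugin:v4.11.0 -> nfsplugin_v4.11.0 (archive name without extension)
archive_name() { basename "$1" | tr ':' '_'; }

# Skip gracefully on hosts without docker (e.g. when dry-testing this script).
if command -v docker >/dev/null; then
  for img in $images; do
    docker save "$img" | gzip > "$(archive_name "$img").tar.gz"
  done
fi
```

The resulting archives are what the `docker load` loop above consumes on the internal host.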
- Re-tag the images and push them to the internal private Harbor. My Harbor address: 172.16.4.177:8090, project: k8s-csi-nfs.
docker tag registry.k8s.io/sig-storage/csi-resizer:v1.13.1 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-resizer:v1.13.1
docker push 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-resizer:v1.13.1
docker tag registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
docker push 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
docker tag registry.k8s.io/sig-storage/livenessprobe:v2.15.0 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/livenessprobe:v2.15.0
docker push 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/livenessprobe:v2.15.0
docker tag registry.k8s.io/sig-storage/nfsplugin:v4.11.0 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/nfsplugin:v4.11.0
docker push 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/nfsplugin:v4.11.0
docker tag registry.k8s.io/sig-storage/csi-provisioner:v5.2.0 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-provisioner:v5.2.0
docker push 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-provisioner:v5.2.0
docker tag registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
docker push 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
docker tag registry.k8s.io/sig-storage/snapshot-controller:v8.2.0 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/snapshot-controller:v8.2.0
docker push 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/snapshot-controller:v8.2.0

2.3 Change the image addresses in csi-nfs-controller.yaml, csi-nfs-node.yaml, and csi-snapshot-controller.yaml to the internal Harbor address
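Instead of hand-editing the three manifests, the registry prefix can be rewritten in one pass. A minimal sketch (assumptions: GNU sed with `-i`, run from the deploy/v4.11.0 directory, and the Harbor address used in this article; `rewrite_images` is a hypothetical helper name):

```shell
#!/bin/bash
# Prefix every registry.k8s.io image reference with the internal Harbor address.
harbor=172.16.4.177:8090/k8s-csi-nfs

rewrite_images() {   # usage: rewrite_images FILE...
  sed -i "s#image: registry.k8s.io/#image: $harbor/registry.k8s.io/#" "$@"
}

# rewrite_images csi-nfs-controller.yaml csi-nfs-node.yaml csi-snapshot-controller.yaml
```

Afterwards, `grep -w image *` should show only Harbor-prefixed images.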
# grep -w image *
csi-nfs-controller.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-provisioner:v5.2.0
csi-nfs-controller.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-resizer:v1.13.1
csi-nfs-controller.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
csi-nfs-controller.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/livenessprobe:v2.15.0
csi-nfs-controller.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/nfsplugin:v4.11.0
csi-nfs-node.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/livenessprobe:v2.15.0
csi-nfs-node.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
csi-nfs-node.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/nfsplugin:v4.11.0
csi-snapshot-controller.yaml: image: 172.16.4.177:8090/k8s-csi-nfs/registry.k8s.io/sig-storage/snapshot-controller:v8.2.0

2.4 Deploy: apply the rbac, driverinfo, controller, and node YAML files in order
kubectl apply -f rbac-csi-nfs.yaml
kubectl apply -f csi-nfs-driverinfo.yaml
kubectl apply -f csi-nfs-controller.yaml
kubectl apply -f csi-nfs-node.yaml

2.5 Check pod status
# kubectl -n kube-system get pod -o wide -l app=csi-nfs-node
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-node-4j8r6 3/3 Running 0 82m 172.16.4.89 node3 <none> <none>
csi-nfs-node-5w28w 3/3 Running 0 82m 172.16.4.86 node1 <none> <none>
csi-nfs-node-5z4vv 3/3 Running 0 82m 172.16.4.92 master2 <none> <none>
csi-nfs-node-gcbsn 3/3 Running 0 82m 172.16.4.85 master1 <none> <none>
csi-nfs-node-hpqvh 3/3 Running 0 82m 172.16.4.90 node4 <none> <none>
csi-nfs-node-zpsx6 3/3 Running 0 82m 172.16.4.87 node2 <none> <none>
# kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-controller-84f8ff88c5-x8c8b 5/5 Running 0 83m 172.16.4.92 master2 <none> <none>

3. Configure a StorageClass that supports dynamic expansion
- Edit storageclass-nfs.yaml and, following the inline comments, adjust it until it matches your environment.
- provisioner: nfs.csi.k8s.io in storageclass-nfs.yaml must match name: nfs.csi.k8s.io in csi-nfs-driverinfo.yaml.
- allowVolumeExpansion: true enables dynamic expansion; it must be set to true.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    # Mark this StorageClass as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
# Name of the CSIDriver
provisioner: nfs.csi.k8s.io
parameters:
  # NFS server
  server: 172.16.4.60
  # NFS export path
  share: /nfs_share/k8s/nfs-csi
# Whether PVC expansion is allowed: true to allow, false to deny
allowVolumeExpansion: true
# Reclaim policy: keep the PV and its data after the PVC is deleted (manual cleanup required)
reclaimPolicy: Retain
# Binding mode; Immediate (the default) binds right away
volumeBindingMode: Immediate
mountOptions:
  # NFS version (other mount options can be added here as well)
  - nfsvers=4.1

kubectl apply -f storageclass-nfs.yaml
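Besides the StatefulSet volumeClaimTemplates used later, a standalone PVC can consume this class directly. A minimal sketch (the name test-nfs-pvc is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc            # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: nfs-csi     # the class defined above
  resources:
    requests:
      storage: 1Gi
```

Since nfs-csi is the default class, storageClassName could even be omitted.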
- Check the StorageClass:
# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi (default) nfs.csi.k8s.io Retain Immediate true 69m

4. Test automatic creation of PVCs and PVs
4.1 Write the test YAML file
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busybox-test
spec:
  serviceName: "busybox-service"
  replicas: 3
  selector:
    matchLabels:
      app: busybox-test
  template:
    metadata:
      labels:
        app: busybox-test
    spec:
      containers:
      - name: busybox
        image: 172.16.4.177:8090/ltzx/busybox:latest  # replace with your private registry address
        command: ["/bin/sh", "-c", "sleep infinity"]  # keep the container running
        volumeMounts:
        - name: data
          mountPath: /mnt/storage  # test mount point
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-csi  # points at your NFS StorageClass
      resources:
        requests:
          storage: 2Gi

kubectl apply -f busybox-test.yaml

4.2 Check pod status
# kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-test-0 1/1 Running 0 13s
busybox-test-1 1/1 Running 0 9s
busybox-test-2 1/1 Running 0 5s

4.3 Verify that the PVCs and PVs were created automatically
- kubectl shows that the PVCs and PVs were created automatically:
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-busybox-test-0 Bound pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf 2Gi RWO nfs-csi 44s
data-busybox-test-1 Bound pvc-c84c5a63-bf69-4e50-ad2b-63193889d236 2Gi RWO nfs-csi 40s
data-busybox-test-2 Bound pvc-ae781199-fb84-4468-9dc8-69791c00793f 2Gi RWO nfs-csi 36s
# kubectl get pv |grep nfs-csi
pvc-ae781199-fb84-4468-9dc8-69791c00793f 2Gi RWO Retain Bound default/data-busybox-test-2 nfs-csi 45s
pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf 2Gi RWO Retain Bound default/data-busybox-test-0 nfs-csi 53s
pvc-c84c5a63-bf69-4e50-ad2b-63193889d236 2Gi RWO Retain Bound default/data-busybox-test-1 nfs-csi 49s
- On the NFS server (172.16.4.60), check that the PV directories were created:
# ls /nfs_share/k8s/nfs-csi/
pvc-ae781199-fb84-4468-9dc8-69791c00793f  pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf  pvc-c84c5a63-bf69-4e50-ad2b-63193889d236

5. Test dynamic expansion of PVCs and PVs
- Make sure the nfs-csi StorageClass has allowVolumeExpansion set to true:
kubectl get storageclass nfs-csi -o jsonpath='{.allowVolumeExpansion}'
# the output should be true

5.1 As shown in step 4, the busybox-test pods' PVCs and PVs are currently 2Gi each, i.e. each consumes 2Gi of NFS storage
5.2 Trigger dynamic PVC expansion from 2Gi to 3Gi
- Edit the PVC directly: change spec.resources.requests.storage from 2Gi to 3Gi, then save and exit.
kubectl edit pvc data-busybox-test-0
# check the PVC events
# kubectl describe pvc data-busybox-test-0
Name: data-busybox-test-0
Namespace: default
StorageClass: nfs-csi
Status: Bound
Volume: pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf
Labels: app=busybox-test
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:
Capacity: 3Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: busybox-test-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 22m nfs.csi.k8s.io_master2_1a93a622-c800-48ca-bbb0-458f20466424 External provisioner is provisioning volume for claim "default/data-busybox-test-0"
Normal ExternalProvisioning 22m persistentvolume-controller waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
Normal ProvisioningSucceeded 22m nfs.csi.k8s.io_master2_1a93a622-c800-48ca-bbb0-458f20466424 Successfully provisioned volume pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf
Warning ExternalExpanding 3s volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
Normal Resizing 3s external-resizer nfs.csi.k8s.io External resizer is resizing volume pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf
Normal VolumeResizeSuccessful 3s external-resizer nfs.csi.k8s.io Resize volume succeeded
- Or use the kubectl patch command:
kubectl patch pvc data-busybox-test-1 -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'
# check the PVC events
# kubectl describe pvc data-busybox-test-1
Name: data-busybox-test-1
Namespace: default
StorageClass: nfs-csi
Status: Bound
Volume: pvc-c84c5a63-bf69-4e50-ad2b-63193889d236
Labels: app=busybox-test
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:
Capacity: 3Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: busybox-test-1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 25m persistentvolume-controller waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
Normal Provisioning 25m nfs.csi.k8s.io_master2_1a93a622-c800-48ca-bbb0-458f20466424 External provisioner is provisioning volume for claim "default/data-busybox-test-1"
Normal ProvisioningSucceeded 25m nfs.csi.k8s.io_master2_1a93a622-c800-48ca-bbb0-458f20466424 Successfully provisioned volume pvc-c84c5a63-bf69-4e50-ad2b-63193889d236
Warning ExternalExpanding 4s volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
Normal Resizing 4s external-resizer nfs.csi.k8s.io External resizer is resizing volume pvc-c84c5a63-bf69-4e50-ad2b-63193889d236
Normal VolumeResizeSuccessful 4s external-resizer nfs.csi.k8s.io Resize volume succeeded
- Use kubectl patch to expand several PVCs at once. Avoid expanding more than 3 PVCs in a single batch, as problems may occur:
#!/bin/bash
for i in {0..2}; do
kubectl patch pvc data-busybox-test-$i -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'
done

5.3 Check PVC and PV status after expansion
- Two of the PVCs and their corresponding PVs are now 3Gi:
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-busybox-test-0 Bound pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf 3Gi RWO nfs-csi 27m
data-busybox-test-1 Bound pvc-c84c5a63-bf69-4e50-ad2b-63193889d236 3Gi RWO nfs-csi 27m
data-busybox-test-2 Bound pvc-ae781199-fb84-4468-9dc8-69791c00793f 2Gi RWO nfs-csi 26m
my-mysql-ss-0 Bound mysql-pv 10Gi RWO 6d23h
my-s-mysql-s-ss-0 Bound mysql-s-pv 10Gi RWO 6d23h
# kubectl get pv |grep nfs-csi
pvc-ae781199-fb84-4468-9dc8-69791c00793f 2Gi RWO Retain Bound default/data-busybox-test-2 nfs-csi 27m
pvc-c4505878-0d72-4955-a5b9-1f678b25b8cf 3Gi RWO Retain Bound default/data-busybox-test-0 nfs-csi 27m
pvc-c84c5a63-bf69-4e50-ad2b-63193889d236 3Gi RWO Retain Bound default/data-busybox-test-1 nfs-csi 27m

6. References
https://zhuanlan.zhihu.com/p/24321695061
This completes automatic creation of PVCs and PVs as well as their dynamic expansion!