Deploy a Ceph cluster quickly with Docker, then use that Ceph cluster as dynamically provisioned persistent storage for Kubernetes.
For a Kubernetes cluster to use Ceph, ceph-common must be installed on every Kubernetes node.
1. Create a storage pool for Kubernetes

```
# ceph osd pool create k8s 128
pool 'k8s' created
```
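The `128` above is the pool's placement-group count (`pg_num`). A common rule of thumb, which is an assumption of this sketch rather than something the original states, is roughly (OSD count × 100) / replica size, rounded up to the next power of two:

```shell
#!/bin/sh
# Rule-of-thumb placement-group sizing: (OSDs * 100) / replicas,
# rounded up to the next power of two. The OSD and replica counts
# below are illustrative assumptions, not values from this cluster.
osds=3
replicas=3
target=$(( osds * 100 / replicas ))   # 100 for these assumed values

pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))          # climb to the next power of two
done

echo "$pg_num"
```

For 3 OSDs and 3 replicas this lands on 128, matching the value used in the command above.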
2. Create a user

```
# ceph auth add client.k8s mon 'allow rx' osd 'allow rwx pool=k8s'
```
This restricts the client.k8s user to read/write access on the k8s pool only. Note that the execute (x) capability is required for the ceph commands to work. Check the result with `ceph auth list`:
```
client.k8s
    key: AQC3Hm5Zan9LDhAAXZHCdAF39bXcEwdpV6y/cA==
    caps: [mon] allow r
    caps: [osd] allow rw pool=k8s
```
Create an image in the k8s pool to test whether the client.k8s user can perform operations:
```
# rbd create k8s/foo --size 1G --id k8s
# rbd map k8s/foo --id k8s
/dev/rbd0
```
The client.k8s Ceph user can now operate on the k8s pool.
3. Add a Kubernetes secret for Ceph
```
# echo "$(ceph auth get-key client.k8s)" | base64
QVFDM0htNVphbjlMRGhBQVhaSENkQUYzOWJYY0V3ZHBWNnkvY0E9PQo=
```
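One pitfall worth noting: `echo` appends a newline, so the value above actually encodes the key plus a trailing newline (which is why it ends in `o=`). Whether a consumer tolerates that trailing newline varies, so encoding without it is the safer habit. A self-contained demonstration using the key shown above:

```shell
#!/bin/sh
# The Ceph key from the listing above. Ceph keys are themselves base64;
# the Kubernetes Secret needs them base64-encoded once more.
KEY='AQC3Hm5Zan9LDhAAXZHCdAF39bXcEwdpV6y/cA=='

with_nl=$(echo "$KEY" | base64)        # encodes the key + a trailing newline
no_nl=$(printf '%s' "$KEY" | base64)   # encodes the key alone

echo "$with_nl"   # ends in 'o=' -- the newline became part of the encoding
echo "$no_nl"     # ends in '==' and decodes back to exactly $KEY
```

Decoding `$no_nl` with `base64 -d` returns the key byte-for-byte, which is what the Secret's `data.key` field should ultimately carry.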
ceph-secret.yaml
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: "QVFDM0htNVphbjlMRGhBQVhaSENkQUYzOWJYY0V3ZHBWNnkvY0E9PQo="
```
The `type` line must be present.
```
# kubectl create -f ceph-secret.yaml
```
```
# kubectl get secret -n=kube-system | grep ceph
ceph-secret   kubernetes.io/rbd   1         1m
```
4. Create a StorageClass
ceph-rbd-storageclass.yaml
```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.30.30.215:6789,172.30.30.217:6789,172.30.30.219:6789
  adminId: k8s
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: k8s
  userId: k8s
  userSecretName: ceph-secret
```
On Kubernetes 1.6 and later, use `apiVersion: storage.k8s.io/v1`. Note that, per the Kubernetes documentation for the rbd provisioner, the secret named by `userSecretName` must exist in the same namespace as the PVCs that use the StorageClass, so a PVC outside kube-system needs a copy of ceph-secret in its own namespace.
```
# kubectl create -f ceph-rbd-storageclass.yaml
```
```
# kubectl get storageclass
NAME      TYPE
fast      kubernetes.io/rbd
```
5. Test
ceph-pvc.json
```json
{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "claim1",
        "annotations": {
            "volume.beta.kubernetes.io/storage-class": "fast"
        }
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "3Gi"
            }
        }
    }
}
```
```
# kubectl create -f ceph-pvc.json
# kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    Bound     pvc-28b66dcb-6c82-11e7-94da-02672b869d7f   3Gi        RWO           11m
```
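To actually consume the claim, mount it in a pod like any other PVC. A minimal sketch, in which the pod name, image, and mount path are illustrative rather than from the original:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-test        # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data       # the RBD-backed volume appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: claim1      # the PVC created above
```

Once the pod is scheduled, the dynamically provisioned RBD image is mapped on that node and mounted at /data inside the container.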
Ceph RBD can now be used as dynamically provisioned persistent storage for Kubernetes.