# kubespray: common deployment problems and optimizations
With the kubespray v2.16 release coming up, here is a summary of the problems I have run into while using kubespray, along with some optimization suggestions.

## Binary files

Upstream kubespray PR [#7561](https://github.com/kubernetes-sigs/kubespray/pull/7561) added a way to generate the list of required files and images from the kubespray source. Just run `bash generate_list.sh` in the repo's `contrib/offline` directory to produce a `files.list` and an `images.list`, and then download the dependent files and images from those lists:

```bash
$ cd contrib/offline
$ bash generate_list.sh
$ tree temp
temp
├── files.list
├── generate.sh
└── images.list
$ cat temp/files.list
https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
https://github.com/containerd/nerdctl/releases/download/v0.8.0/nerdctl-0.8.0-linux-amd64.tar.gz
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
https://github.com/containers/crun/releases/download/0.19/crun-0.19-linux-amd64
https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
https://github.com/kata-containers/runtime/releases/download/1.12.1/kata-static-1.12.1-x86_64.tar.xz
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
https://github.com/kubernetes-sigs/krew/releases/download/v0.4.1/krew.tar.gz
https://github.com/projectcalico/calico/archive/v3.17.4.tar.gz
https://github.com/projectcalico/calicoctl/releases/download/v3.17.4/calicoctl-linux-amd64
https://storage.googleapis.com/kubernetes-release/release/v1.20.6/bin/linux/amd64/kubeadm
https://storage.googleapis.com/kubernetes-release/release/v1.20.6/bin/linux/amd64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.20.6/bin/linux/amd64/kubelet
```

Then download them with wget:

```bash
$ wget -x -P temp/files -i temp/files.list
$ tree temp/files
temp/files
├── get.helm.sh
│   └── helm-v3.5.4-linux-amd64.tar.gz
├── github.com
│   ├── containerd
│   │   └── nerdctl
│   │       └── releases
│   │           └── download
│   │               └── v0.8.0
│   │                   └── nerdctl-0.8.0-linux-amd64.tar.gz
│   ├── containernetworking
│   │   └── plugins
│   │       └── releases
│   │           └── download
│   │               └── v0.9.1
│   │                   └── cni-plugins-linux-amd64-v0.9.1.tgz
│   ├── containers
│   │   └── crun
│   │       └── releases
│   │           └── download
│   │               └── 0.19
│   │                   └── crun-0.19-linux-amd64
│   ├── coreos
│   │   └── etcd
│   │       └── releases
│   │           └── download
│   │               └── v3.4.13
│   │                   └── etcd-v3.4.13-linux-amd64.tar.gz
│   ├── kata-containers
│   │   └── runtime
│   │       └── releases
│   │           └── download
│   │               └── 1.12.1
│   │                   └── kata-static-1.12.1-x86_64.tar.xz
│   ├── kubernetes-sigs
│   │   ├── cri-tools
│   │   │   └── releases
│   │   │       └── download
│   │   │           └── v1.20.0
│   │   │               └── crictl-v1.20.0-linux-amd64.tar.gz
│   │   └── krew
│   │       └── releases
│   │           └── download
│   │               └── v0.4.1
│   │                   └── krew.tar.gz
│   └── projectcalico
│       ├── calico
│       │   └── archive
│       │       └── v3.17.4.tar.gz
│       └── calicoctl
│           └── releases
│               └── download
│                   └── v3.17.4
│                       └── calicoctl-linux-amd64
└── storage.googleapis.com
    └── kubernetes-release
        └── release
            └── v1.20.6
                └── bin
                    └── linux
                        └── amd64
                            ├── kubeadm
                            ├── kubectl
                            └── kubelet
```

Keep this directory structure unchanged, upload the files to your own file server, and then adjust the download parameters by simply prepending the file server's URL. For example, my configuration:

```yaml
# Download URLs
download_url: "https://dl.k8s.li"
kubelet_download_url: "{{ download_url }}/storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
kubectl_download_url: "{{ download_url }}/storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
kubeadm_download_url: "{{ download_url }}/storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
etcd_download_url: "{{ download_url }}/github.com/coreos/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
cni_download_url: "{{ download_url }}/github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
calicoctl_download_url: "{{ download_url }}/github.com/projectcalico/calicoctl/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
calico_crds_download_url: "{{ download_url }}/github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
crictl_download_url: "{{ download_url }}/github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
helm_download_url: "{{ download_url }}/get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
crun_download_url: "{{ download_url }}/github.com/containers/crun/releases/download/{{ crun_version }}/crun-{{ crun_version }}-linux-{{ image_arch }}"
kata_containers_download_url: "{{ download_url }}/github.com/kata-containers/runtime/releases/download/{{ kata_containers_version }}/kata-static-{{ kata_containers_version }}-{{ ansible_architecture }}.tar.xz"
nerdctl_download_url: "{{ download_url }}/github.com/containerd/nerdctl/releases/download/v{{ nerdctl_version }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
```
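Any plain HTTP file server works for hosting the mirrored files, as long as the paths under `download_url` match the layout above (my own server is the `dl.k8s.li` shown in the config). A quick sketch for a local test; the port and the `<file-server>` placeholder are mine:

```bash
# Serve temp/files as a static tree; kubespray then uses
# download_url: "http://<file-server>:8080"
cd temp/files && python3 -m http.server 8080

# Sanity-check that one of the paths from files.list resolves on the mirror
curl -fsI "http://<file-server>:8080/get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz"
```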
`images.list` contains all of the images kubespray may use:

```bash
# cat temp/images.list
docker.io/amazon/aws-alb-ingress-controller:v1.1.9
docker.io/amazon/aws-ebs-csi-driver:v0.5.0
docker.io/cloudnativelabs/kube-router:v1.2.2
docker.io/integratedcloudnative/ovn4nfv-k8s-plugin:v1.1.0
docker.io/k8scloudprovider/cinder-csi-plugin:v1.20.0
docker.io/kubeovn/kube-ovn:v1.6.2
docker.io/kubernetesui/dashboard-amd64:v2.2.0
docker.io/kubernetesui/metrics-scraper:v1.0.6
docker.io/library/haproxy:2.3
docker.io/library/nginx:1.19
docker.io/library/registry:2.7.1
docker.io/nfvpe/multus:v3.7
docker.io/rancher/local-path-provisioner:v0.0.19
docker.io/weaveworks/weave-kube:2.8.1
docker.io/weaveworks/weave-npc:2.8.1
docker.io/xueshanf/install-socat:latest
k8s.gcr.io/addon-resizer:1.8.11
k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3
k8s.gcr.io/dns/k8s-dns-node-cache:1.17.1
k8s.gcr.io/ingress-nginx/controller:v0.43.0
k8s.gcr.io/kube-apiserver:v1.20.6
k8s.gcr.io/kube-controller-manager:v1.20.6
k8s.gcr.io/kube-proxy:v1.20.6
k8s.gcr.io/kube-registry-proxy:0.4
k8s.gcr.io/kube-scheduler:v1.20.6
k8s.gcr.io/metrics-server/metrics-server:v0.4.2
k8s.gcr.io/pause:3.3
quay.io/calico/cni:v3.17.4
quay.io/calico/kube-controllers:v3.17.4
quay.io/calico/node:v3.17.4
quay.io/calico/typha:v3.17.4
quay.io/cilium/cilium-init:2019-04-05
quay.io/cilium/cilium:v1.8.9
quay.io/cilium/operator:v1.8.9
quay.io/coreos/etcd:v3.4.13
quay.io/coreos/flannel:v0.13.0-amd64
quay.io/datawire/ambassador-operator:v1.2.9
quay.io/external_storage/cephfs-provisioner:v2.1.0-k8s1.11
quay.io/external_storage/local-volume-provisioner:v2.3.4
quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
quay.io/jetstack/cert-manager-cainjector:v1.0.4
quay.io/jetstack/cert-manager-controller:v1.0.4
quay.io/jetstack/cert-manager-webhook:v1.0.4
quay.io/k8scsi/csi-attacher:v2.2.0
quay.io/k8scsi/csi-node-driver-registrar:v1.3.0
quay.io/k8scsi/csi-provisioner:v1.6.0
quay.io/k8scsi/csi-resizer:v0.5.0
quay.io/k8scsi/csi-snapshotter:v2.1.1
quay.io/k8scsi/snapshot-controller:v2.0.1
quay.io/l23network/k8s-netchecker-agent:v1.0
quay.io/l23network/k8s-netchecker-server:v1.0
```

You can then use skopeo to sync the images into your own registry:

```bash
for image in $(cat temp/images.list); do skopeo copy docker://${image} docker://hub.k8s.li/${image#*/}; done
```
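A slightly more defensive variant of that loop is sketched below: it retries each image a few times and keeps going when one image fails instead of aborting the whole sync. The destination registry `hub.k8s.li` is just the example from the one-liner above.

```bash
#!/usr/bin/env bash
# Sync every image in images.list into the destination registry, retrying a few
# times per image and continuing with the next one on failure.
set -u
dst_registry="hub.k8s.li"   # example destination from the one-liner above

while read -r image; do
  [ -z "${image}" ] && continue
  target="${dst_registry}/${image#*/}"   # drop the source registry prefix
  for attempt in 1 2 3; do
    if skopeo copy "docker://${image}" "docker://${target}"; then
      break
    fi
    echo "attempt ${attempt} failed for ${image}" >&2
  done
done < temp/images.list
```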
Back when I wrote that script, it turned into a pile of hacky sed substitutions that were painful to write. Some variables embed ansible if/else expressions, which means the same branching logic also has to be re-implemented in shell, and it cannot span multiple lines. For example, the following has to be turned into a shell if/else:

```yaml
coredns_image_repo: "{{ kube_image_repo }}{{'/coredns/coredns' if (coredns_image_is_namespaced | bool) else '/coredns' }}"
coredns_image_tag: "{{ coredns_version if (coredns_image_is_namespaced | bool) else (coredns_version | regex_replace('^v', '')) }}"
```

```bash
# special handling for https://github.com/kubernetes-sigs/kubespray/pull/7570
sed -i 's#^coredns_image_repo=.*#coredns_image_repo=${kube_image_repo}$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n /coredns/coredns;else echo -n /coredns; fi)#' ${TEMP_DIR}/generate.sh
sed -i 's#^coredns_image_tag=.*#coredns_image_tag=$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n ${coredns_version};else echo -n ${coredns_version/v/}; fi)#' ${TEMP_DIR}/generate.sh
```

One trick I picked up along the way: in shell, `printf "%s\n%s\n" $v1 $v2 | sort --check=quiet --version-sort` is the simplest and most convenient way to compare two version numbers.
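Wrapped as a small helper, that trick looks like this (a sketch; the function name `version_le` is mine):

```bash
# Returns success (0) when version $1 <= version $2 in version-sort order.
version_le() {
  printf '%s\n%s\n' "$1" "$2" | sort --check=quiet --version-sort
}

# Example: v1.20.6 sorts before v1.21, so this prints the message.
version_le "v1.20.6" "v1.21" && echo "v1.20.6 <= v1.21"
```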
## Image registries

The approach above syncs the required images into your own registry from the image list, but for local development and testing this manual import is tedious. After reading the [Docker image acceleration tutorial](https://fuckcloudnative.io/posts/docker-registry-proxy/) and [How to set up a private registry mirror](https://www.chenshaowen.com/blog/how-to-run-a-private-registry-mirror.html), I realized the docker registry's pull-through proxy feature can be used to run the few mirror registries kubespray needs:

| origin | mirror |
| --- | --- |
| docker.io | hub.k8s.li |
| k8s.gcr.io | gcr.k8s.li |
| quay.io | quay.k8s.li |

The registry `config.yml` (blobs are stored in Aliyun OSS):

```yaml
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  oss:
    accesskeyid: xxxx         # Aliyun OSS accesskeyid
    accesskeysecret: xxxx     # Aliyun OSS accesskeysecret
    region: oss-cn-beijing    # region of the OSS bucket, e.g. oss-cn-beijing
    internal: false
    bucket: fileserver        # name of the storage bucket
    rootdirectory: /kubespray/registry   # storage path prefix
  delete:
    enabled: true
http:
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
```

The `docker-compose.yml` for the three pull-through caches:

```yaml
version: '3'
services:
  gcr-registry:
    image: registry:2
    container_name: gcr-registry
    restart: always
    volumes:
      - ./config.yml:/etc/docker/registry/config.yml
    ports:
      - 127.0.0.1:5001:5001
    environment:
      - REGISTRY_HTTP_ADDR=0.0.0.0:5001
      - REGISTRY_PROXY_REMOTEURL=https://k8s.gcr.io
  hub-registry:
    image: registry:2
    container_name: hub-registry
    restart: always
    volumes:
      - ./config.yml:/etc/docker/registry/config.yml
    ports:
      - 127.0.0.1:5002:5002
    environment:
      - REGISTRY_HTTP_ADDR=0.0.0.0:5002
      - REGISTRY_PROXY_REMOTEURL=https://docker.io
  quay-registry:
    image: registry:2
    container_name: quay-registry
    restart: always
    volumes:
      - ./config.yml:/etc/docker/registry/config.yml
    ports:
      - 127.0.0.1:5003:5003
    environment:
      - REGISTRY_HTTP_ADDR=0.0.0.0:5003
      - REGISTRY_PROXY_REMOTEURL=https://quay.io
```

And the nginx config that fronts the three registries:

```nginx
server {
    listen 443 ssl;
    listen [::]:443;
    server_name gcr.k8s.li;
    ssl_certificate     domain.crt;
    ssl_certificate_key domain.key;
    gzip_static on;
    client_max_body_size 100000m;
    if ($request_method !~* GET|HEAD) {
        return 403;
    }
    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:5001;
    }
}
server {
    listen 443 ssl;
    listen [::]:443;
    server_name hub.k8s.li;
    ssl_certificate     domain.crt;
    ssl_certificate_key domain.key;
    gzip_static on;
    client_max_body_size 100000m;
    if ($request_method !~* GET|HEAD) {
        return 403;
    }
    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:5002;
    }
}
server {
    listen 443 ssl;
    listen [::]:443;
    server_name quay.k8s.li;
    ssl_certificate     domain.crt;
    ssl_certificate_key domain.key;
    gzip_static on;
    client_max_body_size 100000m;
    if ($request_method !~* GET|HEAD) {
        return 403;
    }
    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:5003;
    }
}
```

The related configuration files are in the [registry-mirrors](https://github.com/muzi502/registry-mirrors) repo.
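To make kubespray pull through these mirrors instead of the upstream registries, the image repository variables can be overridden in the inventory. This is only a sketch under the assumption that your kubespray version (around v2.15/v2.16) exposes `gcr_image_repo`, `docker_image_repo` and `quay_image_repo`; verify the names against `roles/download/defaults` in your checkout.

```yaml
# e.g. in a group_vars file of your inventory (assumed variable names)
gcr_image_repo: "gcr.k8s.li"      # instead of k8s.gcr.io
docker_image_repo: "hub.k8s.li"   # instead of docker.io
quay_image_repo: "quay.k8s.li"    # instead of quay.io
```

The download task output later in this post shows image names such as `gcr.k8s.li/kube-controller-manager`, which is the effect of exactly this kind of override.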
## Optimizing the kubespray image size

The officially built kubespray v2.15.1 image is 1.41GB. For scenarios where a smaller image is preferable, a much smaller one can be built as follows.

First build a base image. Everything that rarely changes is packed into it, so it only needs rebuilding when those dependencies are updated. `Dockerfile.base`:

```dockerfile
FROM python:3.6-slim

ENV KUBE_VERSION v1.20.6

RUN apt update -y \
    && apt install -y \
       libssl-dev sshpass apt-transport-https jq moreutils vim iputils-ping \
       ca-certificates curl gnupg2 software-properties-common rsync wget tcpdump \
    && rm -rf /var/lib/apt/lists/* \
    && wget -q https://dl.k8s.io/$KUBE_VERSION/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl \
    && chmod a+x /usr/local/bin/kubectl

WORKDIR /kubespray
COPY . .

RUN python3 -m pip install -r requirements.txt
```

Then build the kubespray image itself, using the base image we just built in its FROM. Since the dependencies are already installed in the base image, this build only needs to copy the repo into `/kubespray`:

```dockerfile
FROM kubespray:v2.16.0-base-kube-v1.20.6
COPY . /kubespray
```

The resulting image is under 600MB, much smaller than before, and each build is also faster. The only cost is that when `requirements.txt` changes, the base image has to be rebuilt and the kubespray Dockerfile's FROM pointed at the new base image.

```
kubespray   v2.15.1                  73294562105a   1.41GB
kubespray   v2.16-kube-v1.20.6-1.0   80b735995e48   579MB
```
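For completeness, a sketch of the two-step build; the tags match the FROM line and the size comparison above, and `Dockerfile.base` / `Dockerfile` are the assumed file names:

```bash
# Step 1: build the base image; rebuild it only when requirements.txt or the apt packages change
docker build -f Dockerfile.base -t kubespray:v2.16.0-base-kube-v1.20.6 .

# Step 2: build the thin kubespray image, which only copies the repo on top of the base
docker build -t kubespray:v2.16-kube-v1.20.6-1.0 .
```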
kubespray does not ship a `.dockerignore` by default, which means the whole working directory gets copied into the image. The image's working directory ends up rather cluttered, which is not pretty when debugging inside the container. If that bothers you, add a `.dockerignore` like this to the repo:

```
.ansible-lint
.editorconfig
.git
.github
.gitignore
.gitlab-ci
.gitlab-ci.yml
.gitmodules
.markdownlint.yaml
.nojekyll
CNAME
CONTRIBUTING.md
Dockerfile
Makefile
OWNERS
README.md
RELEASE.md
SECURITY_CONTACTS
build
code-of-conduct.md
docs
index.html
logo
```

## Blocking pushes to the docker registry

A registry deployed with the plain docker registry image, like my hub.k8s.li, has no access control, so any client that can reach it can push images. That is a bit unsafe and worth hardening. Since pulls only use HTTP GET requests, nginx can reject methods such as POST and PUT, which effectively disables pushing. Add the following inside the nginx server block:

```nginx
server {
    if ($request_method !~* GET) {
        return 403;
    }
}
```

Pushing an image then fails with a 403 error:

```
$ docker pull hub.k8s.li/calico/node:v3.17.3
v3.17.3: Pulling from calico/node
282bf12aa8be: Pull complete
4ac1bb9354ad: Pull complete
Digest: sha256:3595a9a945a7ba346a12ee523fc7ae15ed35f1e6282b76bce7fec474d28d68bb
Status: Downloaded newer image for hub.k8s.li/calico/node:v3.17.3
$ docker push !$
$ docker push hub.k8s.li/calico/node:v3.17.3
The push refers to repository [hub.k8s.li/calico/node]
bc19ae092bb4: Preparing
94333d52d45d: Preparing
error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
```

What about when you do need to push? The docker registry is started bound to 127.0.0.1 rather than 0.0.0.0, so push images through localhost:5000 on the registry host itself.

## Self-signed registry certificates

If the registry uses a self-signed certificate, the playbook below adds the certificate to the trusted CA directory on every node, so images can be pulled without configuring `insecure-registries`. `add-registry-ca.yml`:

```yaml
---
- hosts: all
  gather_facts: False
  tasks:
    - name: Gen_certs | target ca-certificate store file
      set_fact:
        ca_cert_path: |-
          {% if ansible_os_family == "Debian" -%}
          /usr/local/share/ca-certificates/registry-ca.crt
          {%- elif ansible_os_family == "RedHat" -%}
          /etc/pki/ca-trust/source/anchors/registry-ca.crt
          {%- elif ansible_os_family in ["Flatcar Container Linux by Kinvolk"] -%}
          /etc/ssl/certs/registry-ca.pem
          {%- elif ansible_os_family == "Suse" -%}
          /etc/pki/trust/anchors/registry-ca.pem
          {%- elif ansible_os_family == "ClearLinux" -%}
          /usr/share/ca-certs/registry-ca.pem
          {%- endif %}
      tags:
        - facts

    - name: Gen_certs | add CA to trusted CA dir
      copy:
        src: "{{ registry_cert_path }}"
        dest: "{{ ca_cert_path }}"
      register: registry_ca_cert

    - name: Gen_certs | update ca-certificates (Debian/Ubuntu/SUSE/Flatcar)  # noqa 503
      command: update-ca-certificates
      when: registry_ca_cert.changed and ansible_os_family in ["Debian", "Flatcar Container Linux by Kinvolk", "Suse"]

    - name: Gen_certs | update ca-certificates (RedHat)  # noqa 503
      command: update-ca-trust extract
      when: registry_ca_cert.changed and ansible_os_family == "RedHat"

    - name: Gen_certs | update ca-certificates (ClearLinux)  # noqa 503
      command: clrtrust add "{{ ca_cert_path }}"
      when: registry_ca_cert.changed and ansible_os_family == "ClearLinux"
```

Put the self-signed registry certificate somewhere local, then run the playbook with `registry_cert_path` pointing to it:

```
$ ansible-playbook -i deploy/inventory -e registry_cert_path=/kubespray/registry_ca.pem add-registry-ca.yml

PLAY [all] ************************************************************************************
Thursday 29 April 2021  08:18:25 +0000 (0:00:00.077)       0:00:00.077 ********

TASK [Gen_certs | target ca-certificate store file] *******************************************
ok: [kube-control-2]
ok: [kube-control-3]
ok: [kube-control-1]
ok: [kube-node-1]
Thursday 29 April 2021  08:18:25 +0000 (0:00:00.389)       0:00:00.467 ********

TASK [Gen_certs | add CA to trusted CA dir] ***************************************************
changed: [kube-control-2]
changed: [kube-control-3]
changed: [kube-control-1]
changed: [kube-node-1]
Thursday 29 April 2021  08:18:29 +0000 (0:00:04.433)       0:00:04.901 ********
Thursday 29 April 2021  08:18:30 +0000 (0:00:00.358)       0:00:05.259 ********

TASK [Gen_certs | update ca-certificates (RedHat)] ********************************************
changed: [kube-control-1]
changed: [kube-control-3]
changed: [kube-control-2]
changed: [kube-node-1]
Thursday 29 April 2021  08:18:33 +0000 (0:00:02.938)       0:00:08.197 ********

PLAY RECAP ************************************************************************************
kube-control-1  : ok=3  changed=2  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
kube-control-2  : ok=3  changed=2  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
kube-control-3  : ok=3  changed=2  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
kube-node-1     : ok=3  changed=2  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
Thursday 29 April 2021  08:18:33 +0000 (0:00:00.355)       0:00:08.553 ********
================================================================
Gen_certs | add CA to trusted CA dir ------------------------------------------------- 4.43s
Gen_certs | update ca-certificates (RedHat) ------------------------------------------ 2.94s
Gen_certs | target ca-certificate store file ----------------------------------------- 0.39s
Gen_certs | update ca-certificates (Debian/Ubuntu/SUSE/Flatcar) ----------------------- 0.36s
Gen_certs | update ca-certificates (ClearLinux) --------------------------------------- 0.36s
```

## containerd fails to load the CNI config, leaving nodes NotReady

This happens intermittently; restarting containerd fixes it, and I have not yet tracked down the root cause.

```
$ ansible all -i deploy/inventory -m service -a "name=containerd state=restarted"
```

## Optimizing deployment speed

Kubespray has a task dedicated to downloading the images needed for deployment. Because it runs against all nodes, it also pulls images onto nodes that will never use them; for example kube-apiserver, kube-controller-manager and kube-scheduler are pulled on worker nodes as well, which makes the download task fairly slow.

```
TASK [download : set_container_facts | Display the name of the image being processed] ********
ok: [kube-control-3] => {
    "msg": "gcr.k8s.li/kube-controller-manager"
}
ok: [kube-control-2] => {
    "msg": "gcr.k8s.li/kube-controller-manager"
}
ok: [kube-control-1] => {
    "msg": "gcr.k8s.li/kube-controller-manager"
}
ok: [kube-node-1] => {
    "msg": "gcr.k8s.li/kube-controller-manager"
}
ok: [kube-control-3] => {
    "msg": "gcr.k8s.li/kube-scheduler"
}
ok: [kube-control-2] => {
    "msg": "gcr.k8s.li/kube-scheduler"
}
ok: [kube-control-1] => {
    "msg": "gcr.k8s.li/kube-scheduler"
}
ok: [kube-node-1] => {
    "msg": "gcr.k8s.li/kube-scheduler"
}
```

Setting `download_container: false` disables this download-container task, so each node only pulls the images it actually needs when pods start, which saves some deployment time.

## Enabling addons

The addons kubespray supports out of the box are listed below; by default they are disabled with `false`.

```yaml
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: false

# Addons which can be enabled
helm_enabled: false
krew_enabled: false
registry_enabled: false
metrics_server_enabled: false
enable_network_policy: true
local_path_provisioner_enabled: false
local_volume_provisioner_enabled: false
local_volume_provisioner_directory_mode: 0700
cinder_csi_enabled: false
aws_ebs_csi_enabled: false
azure_csi_enabled: false
gcp_pd_csi_enabled: false
vsphere_csi_enabled: false
persistent_volumes_enabled: false
cephfs_provisioner_enabled: false
rbd_provisioner_enabled: false
ingress_nginx_enabled: false
ingress_ambassador_enabled: false
ingress_alb_enabled: false
cert_manager_enabled: false
expand_persistent_volumes: false
metallb_enabled: false
# containerd official CLI tool
nerdctl_enabled: false
```

To enable particular addons for a deployment, turn them on in the `group_vars/k8s_cluster/addons.yml` file of your own inventory directory, e.g. `inventory/sample/group_vars/k8s_cluster/addons.yml`, as shown in the sketch below.
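As a minimal sketch, enabling a few commonly used addons in that file could look like this; which ones you want depends entirely on your workloads, and all of the keys come from the default list above:

```yaml
# inventory/<your-cluster>/group_vars/k8s_cluster/addons.yml
helm_enabled: true                    # install Helm
metrics_server_enabled: true          # install metrics-server
ingress_nginx_enabled: true           # nginx ingress controller
local_path_provisioner_enabled: true  # simple dynamic PVs backed by local node paths
```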
adouk · 2023-02-12 23:46