(11) Cluster Setup: Two Ways to Deploy a Kubernetes Cluster in Production

Two Approaches

  • kubeadm

    Kubeadm is a tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
    Documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
  • Binary packages

    Recommended: download the official release binaries and deploy each component by hand to assemble the Kubernetes cluster.
    Download: https://github.com/kubernetes/kubernetes/releases

Recommended Machine Specifications

(kubeadm's preflight checks expect at least 2 CPUs and 2 GB of RAM per node; size master nodes more generously in production.)

Installation Preparation

  • Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld

  • Disable swap

    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    echo "vm.swappiness=0" >> /etc/sysctl.conf
    sysctl -p
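
    As a sanity check, the sed expression above can be exercised on a scratch copy before touching the real /etc/fstab (the sample entries below are made up for illustration):

    ```shell
    # Build a scratch fstab with one swap entry (illustrative content)
    printf '%s\n' '/dev/sda1 / ext4 defaults 0 1' '/dev/sda2 swap swap defaults 0 0' > /tmp/fstab.demo

    # Same expression as above: comment out any line mentioning swap
    sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

    cat /tmp/fstab.demo   # the swap line is now prefixed with '#'
    ```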
  • Set the hostname

    hostnamectl set-hostname xxx
  • Hostname-to-IP resolution

    cat >> /etc/hosts << EOF
    192.168.10.10 ha1
    192.168.10.11 ha2
    192.168.10.12 k8s-master1
    192.168.10.13 k8s-master2
    192.168.10.14 k8s-master3
    192.168.10.15 k8s-worker1
    EOF
  • Synchronize system time

    # Install the NTP client
    yum -y install ntpdate

    # Schedule a recurring time sync via cron
    crontab -e
    0 */1 * * * ntpdate time1.aliyun.com
  • Install Docker/kubeadm/kubelet [all nodes]

    The default container runtime (CRI) for Kubernetes here is Docker, so install Docker first.

    • Install Docker

      wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
      yum -y install docker-ce
      systemctl enable docker && systemctl start docker
    • Configure an image download accelerator (registry mirror)

      cat > /etc/docker/daemon.json << EOF
      {
        "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
      }
      EOF
      systemctl restart docker
      docker info
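
      One gotcha here: daemon.json must be strict JSON, and a typo leaves Docker unable to restart. A quick pre-flight is to run the file through a JSON parser; a sketch against a scratch path (/tmp/daemon.json standing in for /etc/docker/daemon.json):

      ```shell
      cat > /tmp/daemon.json << 'EOF'
      {
        "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
      }
      EOF

      # Exits non-zero with a parse error if the JSON is malformed
      python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
      ```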
    • Add the Alibaba Cloud YUM repository

      cat > /etc/yum.repos.d/kubernetes.repo << EOF
      [kubernetes]
      name=Kubernetes
      baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
    • Install kubeadm, kubelet (the only k8s component not managed as a container), and kubectl

      Versions update frequently, so pin a specific version:

      yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
      systemctl enable kubelet

Deploying the Kubernetes Master

References:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node

Run on 192.168.31.61 (the Master):

kubeadm init \
  --apiserver-advertise-address=192.168.31.61 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all
  • --apiserver-advertise-address: the address the cluster advertises

  • --image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so point at the Alibaba Cloud mirror instead

  • --kubernetes-version: the K8s version, matching what was installed above

  • --service-cidr: the cluster-internal virtual network, the unified entry point for accessing Pods

  • --pod-network-cidr: the Pod network; must match the CNI component's YAML deployed below. Alternatively, drive the init with a config file:

    vim kubeadm.conf

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.19.0
    imageRepository: registry.aliyuncs.com/google_containers
    networking:
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12

    kubeadm init --config kubeadm.conf --ignore-preflight-errors=all

    Copy the kubeconfig credentials that kubectl uses to connect to k8s into its default path:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
What kubeadm init does
  1. [preflight] Environment checks and image pulls (kubeadm config images pull)

  2. [certs] Generate k8s and etcd certificates under /etc/kubernetes/pki

  3. [kubeconfig] Generate kubeconfig files

  4. [kubelet-start] Generate the kubelet configuration file

  5. [control-plane] Deploy the control-plane components as containers (kubectl get pods -n kube-system)

  6. [etcd] Deploy the etcd database as a container

  7. [upload-config] [kubelet] [upload-certs] Upload the configuration into k8s

  8. [mark-control-plane] Label the control-plane node with node-role.kubernetes.io/master='' and taint it with node-role.kubernetes.io/master:NoSchedule

  9. [bootstrap-token] Automatically issue certificates to kubelets

  10. [addons] Deploy the CoreDNS and kube-proxy add-ons

Resetting the kubeadm environment

Reset k8s:

kubeadm reset

List all deployed namespaces:

kubectl get namespaces

Joining Kubernetes Nodes

Run on 192.168.31.62/63 (the Nodes).

To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:

kubeadm join 192.168.31.61:6443 --token esce21.q6hetwm8si29qxwn \
--discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5

The default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created:

kubeadm token create
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
# → 63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924
kubeadm join 192.168.31.61:6443 --token nuja6n.o3jrhsffigs9swnu --discovery-token-ca-cert-hash \
sha256:63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924

Or generate the full join command in one step:

kubeadm token create --print-join-command
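
The openssl pipeline derives the --discovery-token-ca-cert-hash from the cluster CA's public key. A runnable sketch that reproduces it against a throwaway self-signed certificate (standing in for /etc/kubernetes/pki/ca.crt, so no live cluster is assumed):

```shell
# Generate a throwaway cert; on a real master use /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Hash the DER-encoded public key, exactly the form kubeadm join expects
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:${hash}"
```

The result is always 64 hex characters, prefixed with sha256: when passed to kubeadm join.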

Deploying the Container Network (CNI)

Official network components

Note: deploy only one of the following; Calico is recommended.

Calico is a pure layer-3 data-center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.
On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) responsible for data forwarding, and each vRouter uses BGP to propagate the routes of the workloads running on it throughout the Calico network. The Calico project also implements Kubernetes network policy, providing ACL functionality.

Calico tutorial:

wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, edit the Pod network it defines (CALICO_IPV4POOL_CIDR) to match what was passed to kubeadm init earlier, then apply the manifest:

kubectl apply -f calico.yaml
kubectl get pods -n kube-system
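
The CALICO_IPV4POOL_CIDR edit can be scripted with sed instead of done by hand. A sketch against a small excerpt of the manifest (the two lines below mimic the commented-out block in calico.yaml; exact indentation varies by Calico version, so check your downloaded file):

```shell
# Excerpt mimicking the commented-out CIDR block in calico.yaml
cat > /tmp/calico-snippet.yaml << 'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Uncomment the block and point it at the kubeadm --pod-network-cidr value
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
  /tmp/calico-snippet.yaml

cat /tmp/calico-snippet.yaml
```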

Testing the Kubernetes Cluster

  • Verify that a Pod runs

  • Verify Pod-to-Pod network connectivity

  • Verify DNS resolution

Create a Pod in the Kubernetes cluster and verify that it runs normally:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Access it at: http://NodeIP:NodePort
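
The NodePort is the 3xxxx number after the colon in the PORT(S) column of kubectl get svc. A sketch of extracting it with awk, run here against canned output since no live cluster is assumed (the service line below is illustrative):

```shell
# Canned output standing in for: kubectl get svc nginx
cat > /tmp/svc.txt << 'EOF'
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.96.120.15   <none>        80:31234/TCP   1m
EOF

# PORT(S) is field 5; split "80:31234/TCP" on ':' and '/' to get the NodePort
awk '/NodePort/ { split($5, a, "[:/]"); print a[2] }' /tmp/svc.txt   # prints 31234
```

On a real cluster, pipe kubectl get svc nginx into the same awk expression.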

Deploying the Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster; change its Service to type NodePort to expose it externally:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Access it at: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

# List all namespaces
kubectl get namespaces
# List the secrets holding user tokens
kubectl get secret -n kubernetes-dashboard
# Create the user
kubectl create serviceaccount dashboard-admin -n kube-system
# Grant it cluster-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve the user's token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token printed above.