Contents

1. The three official deployment methods
2. Kubernetes platform environment planning
3. Self-signed SSL certificates
4. Etcd database cluster deployment
5. Installing Docker on the Node machines
6. Flannel container cluster network deployment
7. Deploying the Master components
8. Deploying the Node components
9. Deploying a test example
10. Deploying the Web UI (Dashboard)
11. Deploying the in-cluster DNS service (CoreDNS)
The Three Official Deployment Methods

minikube

Minikube is a tool that quickly runs a single-node Kubernetes cluster locally. It is intended for users who want to try out Kubernetes or use it for day-to-day development. Deployment guide: https://kubernetes.io/docs/setup/minikube/

kubeadm

Kubeadm is also a tool; it provides the kubeadm init and kubeadm join commands for quickly deploying a Kubernetes cluster. Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary packages

Recommended. Download the released binary packages from the official site and deploy each component by hand to build up a Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases
Kubernetes Platform Environment Planning

Single-Master cluster architecture diagram

Multi-Master cluster architecture diagram
Self-Signed SSL Certificates

Component | Certificates used |
---|---|
etcd | ca.pem, server.pem, server-key.pem |
flannel | ca.pem, server.pem, server-key.pem |
kube-apiserver | ca.pem, server.pem, server-key.pem |
kubelet | ca.pem, ca-key.pem |
kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
kubectl | ca.pem, admin.pem, admin-key.pem |
Etcd Database Cluster Deployment

About etcd

etcd is an open-source project started by the CoreOS team in June 2013. Its goal is to build a highly available distributed key-value store. Internally, etcd uses the Raft protocol for consensus, and it is implemented in Go.

As a service-discovery system, etcd has the following characteristics:

- Simple: easy to install and configure, and it exposes an HTTP API that is easy to work with
- Secure: supports SSL certificate verification
- Fast: according to the official benchmarks, a single instance supports more than 2k reads per second
- Reliable: uses the Raft algorithm to keep distributed data available and consistent
The three pillars of etcd

1. A strongly consistent, highly available store for service directories.
Built on the Raft algorithm, etcd is by nature exactly such a strongly consistent, highly available service store.
2. A mechanism for registering services and tracking their health.
Users register services in etcd and attach a TTL to each registration key, refreshing it periodically as a heartbeat so the service's health can be monitored.
3. A mechanism for finding and connecting to services.
Services registered under a given etcd prefix can be looked up under that same prefix. To guarantee connectivity, an etcd instance in proxy mode can be run on every service machine, so that any service that can reach the etcd cluster can reach the others.
Etcd deployment

Binary package download:

https://github.com/etcd-io/etcd/releases
Checking cluster status:

```bash
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.0.x:2379,https://192.168.0.x:2379,https://192.168.0.x:2379" \
cluster-health
```
Installing Docker on the Nodes

Worked example

Environment:

Host | Software to install |
---|---|
master (192.168.142.129/24) | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
node01 (192.168.142.130/24) | kubelet, kube-proxy, docker, flannel, etcd |
node02 (192.168.142.131/24) | kubelet, kube-proxy, docker, flannel, etcd |
The Kubernetes binaries can be downloaded from the official Kubernetes release page; the etcd binaries are at https://github.com/etcd-io/etcd/releases

Copy the downloaded archives into the k8s directory that will be created on the CentOS 7 machine below.
1. Etcd Database Cluster Deployment

On the master:

```bash
mkdir k8s
cd k8s/
mkdir etcd-cert
mv etcd-cert.sh etcd-cert
```
Write a script, cfssl.sh, to download the official cfssl binaries:

```bash
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
```

Run the script to download the cfssl binaries:

```bash
bash cfssl.sh
```
- cfssl: generates the certificates
- cfssljson: turns cfssl's JSON output into certificate files
- cfssl-certinfo: displays certificate information

```bash
cd etcd-cert/
```
Define the CA config:

```bash
cat > ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
```
Create the CA certificate signing request:

```bash
cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
```
Generate the CA certificate, producing ca-key.pem and ca.pem:

```bash
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```
Define the server certificate covering the three etcd nodes, so they can authenticate to one another:

```bash
cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.142.129",
        "192.168.142.130",
        "192.168.142.131"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
```
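The heredocs above are easy to mistype, and cfssl's error messages for malformed JSON can be cryptic. A quick sanity check before signing — a small helper of my own (assumes python3 is on the PATH; any JSON linter works just as well):

```shell
# validate_json FILE — exits non-zero unless FILE is well-formed JSON.
# Uses python3's stdlib json.tool, so no extra packages are needed.
validate_json() {
    python3 -m json.tool "$1" >/dev/null
}
```

Typical use: `for f in ca-config.json ca-csr.json server-csr.json; do validate_json "$f" && echo "$f ok"; done`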
Generate the etcd server certificate, producing server-key.pem and server.pem:

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
Unpack the etcd binary package:

```bash
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
```

Create directories for the config file, binaries, and certificates, and move the binaries into place:

```bash
mkdir -p /opt/etcd/{cfg,bin,ssl}
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
```

Copy the certificates:

```bash
cp etcd-cert/*.pem /opt/etcd/ssl/
```
Start the first etcd node; it will appear to hang while waiting for the other nodes to join (run from the k8s directory):

```bash
bash etcd.sh etcd01 192.168.142.129 etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380
```
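The etcd.sh script invoked here is not reproduced in the original post. As a rough sketch of the config-generating part it has to contain (a hypothetical reconstruction — the real script also renders /usr/lib/systemd/system/etcd.service and starts the service; field names follow the config file shown for node01 below):

```shell
# generate_etcd_cfg NAME IP PEERS OUTFILE — hypothetical helper mirroring
# what etcd.sh must write to /opt/etcd/cfg/etcd for the given node.
generate_etcd_cfg() {
    local name=$1 ip=$2 peers=$3 out=$4
    cat > "$out" <<EOF
#[Member]
ETCD_NAME="${name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
ETCD_INITIAL_CLUSTER="${name}=https://${ip}:2380,${peers}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}
```

For the master above this would be called as `generate_etcd_cfg etcd01 192.168.142.129 "etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380" /opt/etcd/cfg/etcd`.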
In another session you can see that the etcd process has started:

```bash
ps -ef | grep etcd
```

Copy the certificates to the other nodes:

```bash
scp -r /opt/etcd/ root@192.168.142.130:/opt/
scp -r /opt/etcd/ root@192.168.142.131:/opt/
```

Copy the systemd startup unit to the other nodes:

```bash
scp /usr/lib/systemd/system/etcd.service root@192.168.142.130:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.142.131:/usr/lib/systemd/system/
```
On node01

Edit the etcd config file:

```bash
vim /opt/etcd/cfg/etcd
```

Change the node name and the addresses:

```bash
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.142.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.142.130:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.142.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.142.130:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.142.129:2380,etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

Start the service:

```bash
systemctl start etcd
systemctl status etcd
```
On node02

Edit the etcd config file:

```bash
vim /opt/etcd/cfg/etcd
```

Change the node name and the addresses:

```bash
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.142.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.142.131:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.142.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.142.131:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.142.129:2380,etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

Start the service:

```bash
systemctl start etcd
systemctl status etcd
```
Back on the master, check cluster health (run from the k8s/etcd-cert/ directory):

```bash
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" cluster-health
member 3eae9a550e2e3ec is healthy: got healthy result from https://192.168.142.129:2379
member 26cd4dcf17bc5cbd is healthy: got healthy result from https://192.168.142.130:2379
member 2fcd2df8a9411750 is healthy: got healthy result from https://192.168.142.131:2379
cluster is healthy
```
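When scripting this check (for monitoring, or as a gate before continuing the deployment), it helps to fail hard unless every member reports healthy. A small helper of my own that just scans the cluster-health output shown above:

```shell
# all_members_healthy — reads `etcdctl cluster-health` output on stdin and
# succeeds only if the final summary says the cluster is healthy and no
# member line reports otherwise.
all_members_healthy() {
    local out
    out=$(cat)
    printf '%s\n' "$out" | grep -q '^cluster is healthy$' || return 1
    ! printf '%s\n' "$out" | grep -q 'is unhealthy'
}
```

Usage: `/opt/etcd/bin/etcdctl ... cluster-health | all_members_healthy && echo "etcd cluster OK"`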
2. Installing Docker on the Node Machines

```bash
# Install the dependencies
yum install yum-utils device-mapper-persistent-data lvm2 -y
# Add the Aliyun mirror repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE
yum install -y docker-ce
# Stop the firewall and put SELinux into permissive mode
systemctl stop firewalld.service
setenforce 0
# Start Docker and enable it at boot
systemctl start docker.service
systemctl enable docker.service
# Check that the Docker processes are running
ps aux | grep docker
# Reload the systemd daemon
systemctl daemon-reload
# Restart the service
systemctl restart docker
```
3. Flannel Container Cluster Network Deployment

On the master, write the subnet range to be allocated into etcd for flannel to use (run from the k8s/etcd-cert/ directory):

```bash
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend": {"Type":"vxlan"}}'
```

View what was written:

```bash
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" get /coreos.com/network/config
```
Copy the flannel package to all the nodes (flannel only needs to be deployed on the nodes):

```bash
cd /root/k8s
scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.142.130:/root
scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.142.131:/root
```

On every node, unpack the archive:

```bash
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
```

Create the k8s working directories and move the binaries into place:

```bash
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
```
Create the startup script flannel.sh:

```bash
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
```
Enable the flannel network:

```bash
bash flannel.sh https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379
```
Configure Docker to pick up the flannel settings:

```bash
vim /usr/lib/systemd/system/docker.service
```

```bash
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# insert the following line before ExecStart (around line 14):
EnvironmentFile=/run/flannel/subnet.env
# and reference its $DOCKER_NETWORK_OPTIONS variable in ExecStart:
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
```
Inspect the network parameters flannel generated:

```bash
cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.15.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
# Note: bip sets the container subnet Docker uses at startup
DOCKER_NETWORK_OPTIONS="--bip=172.17.15.1/24 --ip-masq=false --mtu=1450"
```
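The translation performed by mk-docker-opts.sh can be sketched roughly as follows (a simplified stand-in of my own, not the real script): flanneld writes its lease as FLANNEL_SUBNET, FLANNEL_MTU, and FLANNEL_IPMASQ variables, and the matching dockerd flags are derived from them. In particular, Docker's --ip-masq is set to false when flannel itself is masquerading (the --ip-masq flag passed to flanneld above), so the two don't double-NAT.

```shell
# make_docker_opts FILE — simplified sketch of what mk-docker-opts.sh derives
# from flannel's lease variables. FILE holds FLANNEL_SUBNET / FLANNEL_MTU /
# FLANNEL_IPMASQ assignments as written by flanneld.
make_docker_opts() {
    . "$1"
    local ipmasq="true"
    # If flannel already masquerades, Docker must not do it again.
    [ "${FLANNEL_IPMASQ}" = "true" ] && ipmasq="false"
    echo "--bip=${FLANNEL_SUBNET} --ip-masq=${ipmasq} --mtu=${FLANNEL_MTU}"
}
```

Fed the lease behind the sample above, this reproduces the DOCKER_NETWORK_OPTIONS line shown.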
Restart the Docker service:

```bash
systemctl daemon-reload
systemctl restart docker
```
View the flannel network interfaces:

```bash
[root@localhost ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.56.1  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:74:32:33:e3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.130  netmask 255.255.255.0  broadcast 192.168.142.255
        inet6 fe80::8cb8:16f4:91a1:28d5  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:04:f1:1f  txqueuelen 1000  (Ethernet)
        RX packets 436817  bytes 153162687 (146.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 375079  bytes 47462997 (45.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::249c:c8ff:fec0:4baf  prefixlen 64  scopeid 0x20<link>
        ether 26:9c:c8:c0:4b:af  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 1915  bytes 117267 (114.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1915  bytes 117267 (114.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:61:63:f2  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
Test pinging the docker0 address of the other node from inside a container, to prove that flannel provides cross-host routing:

```bash
docker run -it centos:7 /bin/bash
yum install net-tools -y
```
Check the flannel network inside the container:

```bash
[root@5f9a65565b53 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.2  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:ac:11:38:02  txqueuelen 0  (Ethernet)
        RX packets 15632  bytes 13894772 (13.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7987  bytes 435819 (425.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
Then ping the centos:7 container running on the other node:

```bash
[root@f1e937618b50 /]# ping 172.17.15.2
PING 172.17.15.2 (172.17.15.2) 56(84) bytes of data.
64 bytes from 172.17.15.2: icmp_seq=1 ttl=62 time=0.420 ms
64 bytes from 172.17.15.2: icmp_seq=2 ttl=62 time=0.302 ms
64 bytes from 172.17.15.2: icmp_seq=3 ttl=62 time=0.420 ms
64 bytes from 172.17.15.2: icmp_seq=4 ttl=62 time=0.364 ms
64 bytes from 172.17.15.2: icmp_seq=5 ttl=62 time=0.114 ms
```
To be continued...

Source: http://blog.51cto.com/14449521/2468337