Introduction
CentOS now ships Kubernetes in its repositories, so this installation is done with yum.
Environment
Machines and environment
This lab uses three machines: one master and two nodes.
The OS is CentOS 7.4, with firewalld, SELinux, and iptables all disabled.
The master runs four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
Each node runs three components: kube-proxy, kubelet, and flannel.
kube-apiserver: runs on the master; accepts user requests.
kube-scheduler: runs on the master; handles resource scheduling, i.e. decides which node each pod is placed on.
kube-controller-manager: runs on the master; hosts controllers such as the ReplicationManager, EndpointsController, NamespaceController, and NodeController.
etcd: a distributed key-value store holding the resource-object state shared by the whole cluster.
kubelet: runs on each node; maintains the pods running on that particular host.
kube-proxy: runs on each node; acts as a service proxy.
Installation and deployment
The clocks on the three machines must stay consistent, so install NTP (run the following commands on all three machines):
- # yum -y install ntp
- # systemctl start ntpd
- # systemctl enable ntpd
- # ntpdate time1.aliyun.com
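After ntpdate, the three clocks should agree to within a few seconds. A minimal sketch for cross-checking two epoch timestamps (the `drift_ok` helper and its 5-second tolerance are my own, not part of any package):

```shell
# drift_ok A B [TOL]: succeed when epoch timestamps A and B differ by at most
# TOL seconds (default 5). Helper name and tolerance are illustrative choices.
drift_ok() {
    a=$1; b=$2; tol=${3:-5}
    d=$(( a - b ))
    [ "$d" -lt 0 ] && d=$(( -d ))
    [ "$d" -le "$tol" ]
}

# Example: compare this host's clock with the master's over ssh:
#   drift_ok "$(date -u +%s)" "$(ssh 192.168.56.200 date -u +%s)" && echo in-sync
```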
Master deployment
Install the master packages
yum -y install kubernetes-master etcd
Configure etcd on the master
- [root@localhost ~]# cat /etc/etcd/etcd.conf
- #[Member]
- #ETCD_CORS=""
- ETCD_DATA_DIR="/var/lib/etcd/default.etcd" # the node's data directory: node ID, cluster ID, initial cluster config, and snapshot files; WAL files also live here unless wal-dir is set
- #ETCD_WAL_DIR="" # the node's WAL directory; if set, WAL files are stored separately from the other data files
- ETCD_LISTEN_PEER_URLS="http://192.168.56.200:2380" # listen URL for communication with the other cluster members
- ETCD_LISTEN_CLIENT_URLS="http://192.168.56.200:2379,http://127.0.0.1:2379" # client URLs, i.e. where the service is reachable
- ETCD_MAX_SNAPSHOTS="5"
- #ETCD_MAX_WALS="5"
- ETCD_NAME="etcd1" # node name
- #ETCD_SNAPSHOT_COUNT="100000"
- #ETCD_HEARTBEAT_INTERVAL="100"
- #ETCD_ELECTION_TIMEOUT="1000"
- #ETCD_QUOTA_BACKEND_BYTES="0"
- #ETCD_MAX_REQUEST_BYTES="1572864"
- #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
- #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
- #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
- #
- #[Clustering]
- ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.200:2380" # peer URL advertised to the other cluster members
- ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.200:2379"
- #ETCD_DISCOVERY=""
- #ETCD_DISCOVERY_FALLBACK="proxy"
- #ETCD_DISCOVERY_PROXY=""
- #ETCD_DISCOVERY_SRV=""
- ETCD_INITIAL_CLUSTER="etcd1=http://192.168.56.200:2380,etcd2=http://192.168.56.201:2380,etcd3=http://192.168.56.202:2380" # all members of the cluster
- #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # cluster token (ID)
- #ETCD_INITIAL_CLUSTER_STATE="new"
- #ETCD_STRICT_RECONFIG_CHECK="true"
- #ETCD_ENABLE_V2="true"
- #
- #[Proxy]
- #ETCD_PROXY="off"
- #ETCD_PROXY_FAILURE_WAIT="5000"
- #ETCD_PROXY_REFRESH_INTERVAL="30000"
- #ETCD_PROXY_DIAL_TIMEOUT="1000"
- #ETCD_PROXY_WRITE_TIMEOUT="5000"
- #ETCD_PROXY_READ_TIMEOUT="0"
- #
- #[Security]
- #ETCD_CERT_FILE=""
- #ETCD_KEY_FILE=""
- #ETCD_CLIENT_CERT_AUTH="false"
- #ETCD_TRUSTED_CA_FILE=""
- #ETCD_AUTO_TLS="false"
- #ETCD_PEER_CERT_FILE=""
- #ETCD_PEER_KEY_FILE=""
- #ETCD_PEER_CLIENT_CERT_AUTH="false"
- #ETCD_PEER_TRUSTED_CA_FILE=""
- #ETCD_PEER_AUTO_TLS="false"
- #
- #[Logging]
- #ETCD_DEBUG="false"
- #ETCD_LOG_PACKAGE_LEVELS=""
- #ETCD_LOG_OUTPUT="default"
- #
- #[Unsafe]
- #ETCD_FORCE_NEW_CLUSTER="false"
- #
- #[Version]
- #ETCD_VERSION="false"
- #ETCD_AUTO_COMPACTION_RETENTION="0"
- #
- #[Profiling]
- #ETCD_ENABLE_PPROF="false"
- #ETCD_METRICS="basic"
- #
- #[Auth]
- #ETCD_AUTH_TOKEN="simple"
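The file above is mostly commented-out defaults. To review only the settings that will actually take effect, a small grep helper can be used (the `active_settings` function name is my own):

```shell
# active_settings FILE: print only the uncommented KEY="value" lines of an
# etcd-style config file, skipping comments and section markers.
active_settings() {
    grep -E '^[A-Za-z_]+=' "$1"
}

# Example: active_settings /etc/etcd/etcd.conf
```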
Configure kube-apiserver
- [root@localhost ~]# cat /etc/kubernetes/apiserver
- ###
- # kubernetes system config
- #
- # The following values are used to configure the kube-apiserver
- #
- # The address on the local server to listen to.
- #KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
- KUBE_API_ADDRESS="--address=0.0.0.0"
- # The port on the local server to listen on.
- KUBE_API_PORT="--port=8080"
- # Port minions listen on
- KUBELET_PORT="--kubelet-port=10250"
- # Comma separated list of nodes in the etcd cluster
- KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.200:2379,http://192.168.56.201:2379,http://192.168.56.202:2379"
- # Address range to use for services
- KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
- # default admission control policies
- #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
- KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
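These KUBE_* variables are not flags kube-apiserver reads by itself; the systemd unit sources this file (plus /etc/kubernetes/config) via EnvironmentFile= and expands the variables onto the daemon's command line. A simplified sketch of that expansion (the `apiserver_flags` function is my own, not the real unit file):

```shell
# apiserver_flags FILE: source an apiserver config file the way the systemd
# unit's EnvironmentFile= does, then print the flags that end up on the
# kube-apiserver command line. A sketch of the mechanism only.
apiserver_flags() {
    . "$1"
    printf '%s\n' "${KUBE_API_ADDRESS:-} ${KUBE_API_PORT:-} ${KUBELET_PORT:-} ${KUBE_ETCD_SERVERS:-} ${KUBE_SERVICE_ADDRESSES:-} ${KUBE_ADMISSION_CONTROL:-}"
}

# Example: apiserver_flags /etc/kubernetes/apiserver
```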
Configure controller-manager (optional; not configured in this lab)
- # Add your own!
- #KUBE_CONTROLLER_MANAGER_ARGS=""
- KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"
Configure the shared config file
- [root@localhost ~]# cat /etc/kubernetes/config
- ###
- # kubernetes system config
- #
- # The following values are used to configure various aspects of all
- # kubernetes services, including
- #
- # kube-apiserver.service
- # kube-controller-manager.service
- # kube-scheduler.service
- # kubelet.service
- # kube-proxy.service
- # logging to stderr means we get it in the systemd journal
- KUBE_LOGTOSTDERR="--logtostderr=true"
- # journal message level, 0 is debug
- KUBE_LOG_LEVEL="--v=0"
- # Should this cluster be allowed to run privileged docker containers
- KUBE_ALLOW_PRIV="--allow-privileged=false"
- # How the controller-manager, scheduler, and proxy find the apiserver
- KUBE_MASTER="--master=http://127.0.0.1:8080"
If port 8080 is already taken, another port can be used.
Start the services
systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager
systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager
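After starting them, it is worth confirming that all four units actually came up. A convenience loop (the `check_master_services` helper is my own; SYSTEMCTL is overridable so the loop can be dry-run off-box):

```shell
# check_master_services: print the state of each master component as reported
# by systemctl is-active; prints "unknown" when systemctl is unavailable.
SYSTEMCTL=${SYSTEMCTL:-systemctl}
check_master_services() {
    for s in etcd kube-apiserver kube-scheduler kube-controller-manager; do
        printf '%-28s %s\n' "$s" "$("$SYSTEMCTL" is-active "$s" 2>/dev/null || echo unknown)"
    done
}

# Example on the master: check_master_services
```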
etcd network configuration
Define the network configuration in etcd; the flannel service on each node pulls this configuration. Note that the key prefix used here must match FLANNEL_ETCD_PREFIX in the flannel configuration later on.
- etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
- # the coreos.com directory name here can be anything you choose
Or put it in a script:
- [root@localhost ~]# cat etc.sh
- etcdctl mkdir /atomic.io/network
- etcdctl mk /atomic.io/network/config "{ \"Network\": \"172.17.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\"} }"
Create the network that flannel allocates from
- # run this only against etcd on the master
- etcdctl mk /coreos.com/network/config '{"Network":"10.1.0.0/16"}'
- # to recreate it, delete the old key first:
- etcdctl rm /coreos.com/network/ --recursive
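flanneld will fail to start if the value stored under the config key is not valid JSON, so it is worth checking the string before writing it. A small sketch (the `valid_json` helper is my own and uses python3's json.tool, which may not be present on a stock CentOS 7 box):

```shell
# valid_json STRING: succeed when STRING parses as JSON.
valid_json() {
    printf '%s' "$1" | python3 -m json.tool >/dev/null 2>&1
}

cfg='{ "Network": "172.17.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
valid_json "$cfg" && echo "config is valid JSON"

# Then write it under whichever prefix you chose:
#   etcdctl mk /atomic.io/network/config "$cfg"
```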
Node deployment
yum -y install kubernetes-node etcd flannel docker
Edit the node config
The configuration is the same on every node.
- [root@localhost ~]# cat /etc/kubernetes/config
- ###
- # kubernetes system config
- #
- # The following values are used to configure various aspects of all
- # kubernetes services, including
- #
- # kube-apiserver.service
- # kube-controller-manager.service
- # kube-scheduler.service
- # kubelet.service
- # kube-proxy.service
- # logging to stderr means we get it in the systemd journal
- KUBE_LOGTOSTDERR="--logtostderr=true"
- # journal message level, 0 is debug
- KUBE_LOG_LEVEL="--v=0"
- # Should this cluster be allowed to run privileged docker containers
- KUBE_ALLOW_PRIV="--allow-privileged=false"
- # How the controller-manager, scheduler, and proxy find the apiserver
- KUBE_MASTER="--master=http://192.168.56.200:8080"
kubelet
- [root@localhost ~]# cat /etc/kubernetes/kubelet
- ###
- # kubernetes kubelet (minion) config
- # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
- KUBELET_ADDRESS="--address=127.0.0.1"
- # The port for the info server to serve on
- KUBELET_PORT="--port=10250"
- # You may leave this blank to use the actual hostname
- KUBELET_HOSTNAME="--hostname-override=192.168.56.201"
- # location of the api-server
- KUBELET_API_SERVER="--api-servers=http://192.168.56.200:8080"
- # pod infrastructure container
- KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
- # Add your own!
- KUBELET_ARGS=""
- # e.g. to use a different pod-infra image:
- #KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"
Edit flannel
Point flannel at the etcd service by editing /etc/sysconfig/flanneld.
- [root@localhost ~]# cat /etc/sysconfig/flanneld
- # Flanneld configuration options
- # etcd url location. Point this to the server where etcd runs
- FLANNEL_ETCD_ENDPOINTS="http://192.168.56.200:2379"
- # etcd config key. This is the configuration key that flannel queries
- # For address range assignment
- FLANNEL_ETCD_PREFIX="/atomic.io/network" # this prefix must be identical to the key prefix written into etcd above
- # Any additional options that you want to pass
- #FLANNEL_OPTIONS=""
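A quick way to confirm the prefix matches the key used with etcdctl is to extract it from the file (the `flannel_prefix` helper is my own):

```shell
# flannel_prefix FILE: print the value of FLANNEL_ETCD_PREFIX from a flanneld
# sysconfig file, ignoring any trailing comment on the line.
flannel_prefix() {
    sed -n 's/^FLANNEL_ETCD_PREFIX="\([^"]*\)".*/\1/p' "$1"
}

# Example:
#   [ "$(flannel_prefix /etc/sysconfig/flanneld)" = "/atomic.io/network" ] \
#       && echo "prefix matches"
```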
Start the services
systemctl restart flanneld docker
systemctl start kubelet kube-proxy
systemctl enable flanneld kubelet kube-proxy
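Once flanneld is up, it records the per-node allocation in a subnet file (typically /run/flannel/subnet.env, with FLANNEL_NETWORK, FLANNEL_SUBNET, and FLANNEL_MTU entries) that docker's unit picks up. A helper (my own) to print the subnet assigned to a node:

```shell
# flannel_subnet FILE: print the FLANNEL_SUBNET value from a flannel
# subnet.env file, i.e. the address range this node allocates pod IPs from.
flannel_subnet() {
    sed -n 's/^FLANNEL_SUBNET=//p' "$1"
}

# Example: flannel_subnet /run/flannel/subnet.env
```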
Checking the network interfaces now shows both a docker0 and a flannel.1 device:
- [root@localhost ~]# ifconfig
- docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
-     inet 172.30.31.1 netmask 255.255.255.0 broadcast 0.0.0.0
-     inet6 fe80::42:ddff:fe03:591c prefixlen 64 scopeid 0x20<link>
-     ether 02:42:dd:03:59:1c txqueuelen 0 (Ethernet)
-     RX packets 69 bytes 4596 (4.4 KiB)
-     RX errors 0 dropped 0 overruns 0 frame 0
-     TX packets 8 bytes 648 (648.0 B)
-     TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
- eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
-     inet 192.168.56.201 netmask 255.255.255.0 broadcast 192.168.56.255
-     inet6 fe80::d203:ac67:53b0:897b prefixlen 64 scopeid 0x20<link>
-     inet6 fe80::30c1:3975:3246:cc1f prefixlen 64 scopeid 0x20<link>
-     ether 00:0c:29:f1:de:cb txqueuelen 1000 (Ethernet)
-     RX packets 810383 bytes 126454887 (120.5 MiB)
-     RX errors 0 dropped 0 overruns 0 frame 0
-     TX packets 796437 bytes 163368198 (155.8 MiB)
-     TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
- flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
-     inet 172.30.31.0 netmask 255.255.255.255 broadcast 0.0.0.0
-     inet6 fe80::54ee:7aff:fe11:ba95 prefixlen 64 scopeid 0x20<link>
-     ether 56:ee:7a:11:ba:95 txqueuelen 0 (Ethernet)
-     RX packets 0 bytes 0 (0.0 B)
-     RX errors 0 dropped 0 overruns 0 frame 0
-     TX packets 0 bytes 0 (0.0 B)
-     TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
- lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
-     inet 127.0.0.1 netmask 255.0.0.0
-     inet6 ::1 prefixlen 128 scopeid 0x10<host>
-     loop txqueuelen 1 (Local Loopback)
-     RX packets 1719 bytes 89584 (87.4 KiB)
-     RX errors 0 dropped 0 overruns 0 frame 0
-     TX packets 1719 bytes 89584 (87.4 KiB)
-     TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Verification
- [root@localhost ~]# kubectl get node
- NAME STATUS AGE
- 192.168.56.201 Ready 4d
- 192.168.56.202 Ready 4d
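To script the same check, the output can be filtered for nodes whose STATUS column reads Ready (the `ready_nodes` helper is my own):

```shell
# ready_nodes: count Ready nodes from `kubectl get node` output on stdin,
# skipping the header row and matching on the second (STATUS) column.
ready_nodes() {
    awk 'NR > 1 && $2 == "Ready"' | wc -l
}

# Example: kubectl get node | ready_nodes    (expect 2 for this lab)
```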
Source: http://blog.51cto.com/sgk2011/2106775