Build a high-performance, highly available ES architecture by separating Elasticsearch's data, ingest, and master roles
Contents
▪ Purpose
▪ Architecture
▪ Step overview
▪ Elasticsearch-data deployment
▪ Elasticsearch-ingest deployment
▪ Elasticsearch-master deployment
Purpose
The first article, "EFK Tutorial - Quick Start Guide", covered installing and deploying EFK with a three-node ES architecture in which every server carried the master, ingest, and data roles at once.
This article separates the roles and deploys three nodes per role, maximizing performance while preserving high availability.
Elasticsearch master nodes: coordinate the cluster; ordinary servers are sufficient
Elasticsearch ingest nodes: preprocess incoming data; deploy on servers with strong compute
Elasticsearch data nodes: store the data; deploy on servers with strong storage performance
If you cannot find "EFK Tutorial - Quick Start Guide", search a mainstream search engine for:
小慢哥 EFK 教程 快速入门指南
or
小慢哥 EFK 教程 基于多节点 ES 的 EFK 安装部署配置
Architecture
Server configuration
Note: this architecture is an extension of the earlier article "EFK Tutorial - Quick Start Guide", so complete that deployment first
Step overview
1⃣ Deploy 3 data nodes and join them to the existing cluster
2⃣ Deploy 3 ingest nodes and join them to the existing cluster
3⃣ Migrate the existing ES indices onto the data nodes
4⃣ Convert the original ES nodes into master-only nodes
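Between each of these steps it is worth waiting until the cluster reports green before continuing. A minimal parsing sketch (the sample health line below is illustrative, not captured from a real cluster):

```shell
# extract the status column from one _cat/health line
# (columns: epoch timestamp cluster status node.total node.data ...)
health_status() { echo "$1" | awk '{print $4}'; }

# illustrative sample line, not real cluster output
sample="1570000000 12:00:00 my-application green 9 6 20 10 0 0 0 0 - 100.0%"
health_status "$sample"    # prints "green"
```

In practice you would loop on the live endpoint, e.g. `until [ "$(health_status "$(curl -s http://192.168.1.31:9200/_cat/health)")" = "green" ]; do sleep 5; done`.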
Elasticsearch-data deployment
The basic Elasticsearch architecture is already in place; now add three storage nodes to the cluster, with the master and ingest roles disabled.
Elasticsearch-data installation: run the same steps on all 3 servers
- tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
- mv elasticsearch-7.3.2 /opt/elasticsearch
- useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin
- mkdir -p /opt/logs/elasticsearch
- chown elasticsearch.elasticsearch /opt/elasticsearch -R
- chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R
- # the data disk must be writable by the elasticsearch user
- chown elasticsearch.elasticsearch /data/SAS -R
- # the number of VMAs (virtual memory areas) a process may own must exceed 262144, otherwise Elasticsearch fails with: max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]
- echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
- sysctl -p
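A quick way to sanity-check the limit before starting Elasticsearch (a sketch; the 262144 threshold comes from the error message quoted above):

```shell
# succeeds if the given vm.max_map_count value meets the 262144 minimum
check_map_count() { [ "$1" -ge 262144 ]; }

check_map_count 65535  && echo "65535 ok"  || echo "65535 too low"
check_map_count 655350 && echo "655350 ok" || echo "655350 too low"
```

On a live host, feed it the real value: `check_map_count "$(cat /proc/sys/vm/max_map_count)"`.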
Elasticsearch-data configuration
- 192.168.1.51 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.51
- # data disk location; separate multiple paths with ","
- path.data: /data/SAS
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.51
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # disable the master role
- node.master: false
- # disable the ingest role
- node.ingest: false
- # enable the data role
- node.data: true
- 192.168.1.52 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.52
- # data disk location; separate multiple paths with ","
- path.data: /data/SAS
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.52
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # disable the master role
- node.master: false
- # disable the ingest role
- node.ingest: false
- # enable the data role
- node.data: true
- 192.168.1.53 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.53
- # data disk location; separate multiple paths with ","
- path.data: /data/SAS
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.53
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # disable the master role
- node.master: false
- # disable the ingest role
- node.ingest: false
- # enable the data role
- node.data: true
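The three files above differ only in node.name and network.host, so they can be generated rather than typed three times. A small generator sketch (the output directory is a local temporary one for illustration; each file would then be copied to the matching host):

```shell
# generate one data-node config per IP; only node.name/network.host vary
outdir=$(mktemp -d)
for ip in 192.168.1.51 192.168.1.52 192.168.1.53; do
  cat > "$outdir/elasticsearch-$ip.yml" <<EOF
cluster.name: my-application
node.name: $ip
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch
network.host: $ip
discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
EOF
done
ls "$outdir"
```

Each generated file goes to the corresponding host as /opt/elasticsearch/config/elasticsearch.yml.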
Starting Elasticsearch-data
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
Elasticsearch cluster status
curl "http://192.168.1.31:9200/_cat/health?v"
Elasticsearch-data status
curl "http://192.168.1.31:9200/_cat/nodes?v"
Elasticsearch-data field reference
- status: green # cluster health
- node.total: 6 # 6 machines make up the cluster
- node.data: 6 # 6 nodes hold data
- node.role: d # data role only
- node.role: i # ingest role only
- node.role: m # master role only
- node.role: mid # all of the master, ingest, and data roles
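The role column can also be checked programmatically. A sketch over an illustrative sample (not real cluster output; only the ip and node.role columns are shown):

```shell
# count data-only nodes from _cat/nodes?v-style output (header row skipped)
nodes='ip           node.role
192.168.1.51 d
192.168.1.52 d
192.168.1.53 d
192.168.1.31 mid'
echo "$nodes" | awk 'NR>1 && $2=="d" {n++} END {print n " data-only nodes"}'
# prints "3 data-only nodes"
```

The same pattern with `$2=="mid"` counts nodes that still carry all three roles.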
Elasticsearch-ingest deployment
Now add three ingest nodes to the cluster, with the master and data roles disabled.
Elasticsearch-ingest installation: run the same steps on all 3 ES servers
- tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
- mv elasticsearch-7.3.2 /opt/elasticsearch
- useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin
- mkdir -p /opt/logs/elasticsearch
- chown elasticsearch.elasticsearch /opt/elasticsearch -R
- chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R
- # the number of VMAs (virtual memory areas) a process may own must exceed 262144, otherwise Elasticsearch fails with: max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]
- echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
- sysctl -p
Elasticsearch-ingest configuration
- 192.168.1.41 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.41
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.41
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # disable the master role
- node.master: false
- # enable the ingest role
- node.ingest: true
- # disable the data role
- node.data: false
- 192.168.1.42 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.42
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.42
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # disable the master role
- node.master: false
- # enable the ingest role
- node.ingest: true
- # disable the data role
- node.data: false
- 192.168.1.43 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.43
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.43
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # disable the master role
- node.master: false
- # enable the ingest role
- node.ingest: true
- # disable the data role
- node.data: false
Starting Elasticsearch-ingest
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
Elasticsearch cluster status
curl "http://192.168.1.31:9200/_cat/health?v"
Elasticsearch-ingest status
curl "http://192.168.1.31:9200/_cat/nodes?v"
Elasticsearch-ingest field reference
- status: green # cluster health
- node.total: 9 # 9 machines make up the cluster
- node.data: 6 # 6 nodes hold data
- node.role: d # data role only
- node.role: i # ingest role only
- node.role: m # master role only
- node.role: mid # all of the master, ingest, and data roles
Elasticsearch-master deployment
First, the 3 ES nodes deployed in the previous article "EFK Tutorial - Quick Start Guide" (192.168.1.31, 192.168.1.32, 192.168.1.33) will be converted to master-only nodes, so the index data on them must first be migrated onto the data nodes deployed above.
1⃣ Index migration: do not skip this step — relocate the existing indices onto the data nodes
- curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
- {
- "index.routing.allocation.include._ip": "192.168.1.51,192.168.1.52,192.168.1.53"
- }'
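The same settings body can be produced for any target IP list; a helper sketch (`alloc_body` is a hypothetical name for this article, not an Elasticsearch command):

```shell
# print the _settings body that pins shard allocation to the given IPs
alloc_body() { printf '{"index.routing.allocation.include._ip": "%s"}' "$1"; }

alloc_body "192.168.1.51,192.168.1.52,192.168.1.53"
```

Usage would look like `curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d "$(alloc_body "192.168.1.51,192.168.1.52,192.168.1.53")"`.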
2⃣ Confirm where the indices now live: verify that no index remains on 192.168.1.31, 192.168.1.32, or 192.168.1.33
curl "http://192.168.1.31:9200/_cat/shards?h=n"
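Instead of eyeballing that list, you can count how many shard rows still name the old nodes. A sketch over an illustrative sample (node names equal IPs here because node.name was set to the IP):

```shell
# _cat/shards?h=n prints one node name per shard; count rows on the old masters
shards='192.168.1.51
192.168.1.52
192.168.1.53
192.168.1.51'
leftover=$(echo "$shards" | grep -cE '^192\.168\.1\.3[123]$') || true
echo "$leftover shards still on the old nodes"
# prints "0 shards still on the old nodes"
```

Against the live cluster, replace the sample with `curl -s "http://192.168.1.31:9200/_cat/shards?h=n"`; migration is complete once the count reaches 0.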
Elasticsearch-master configuration
Note: modify the configuration and restart the process one node at a time, and confirm each node has come back successfully before moving on to the next. How to restart: in the previous article "EFK Tutorial - Quick Start Guide" the process was started in the foreground, so press Ctrl-C to stop it and start it again with the command below
- sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
- 192.168.1.31 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.31
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.31
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # enable the master role
- node.master: true
- # disable the ingest role
- node.ingest: false
- # disable the data role
- node.data: false
- 192.168.1.32 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.32
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.32
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # enable the master role
- node.master: true
- # disable the ingest role
- node.ingest: false
- # disable the data role
- node.data: false
- 192.168.1.33 /opt/elasticsearch/config/elasticsearch.yml
- cluster.name: my-application
- node.name: 192.168.1.33
- path.logs: /opt/logs/elasticsearch
- network.host: 192.168.1.33
- discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- # enable the master role
- node.master: true
- # disable the ingest role
- node.ingest: false
- # disable the data role
- node.data: false
Elasticsearch cluster status
curl "http://192.168.1.31:9200/_cat/health?v"
Elasticsearch-master status
curl "http://192.168.1.31:9200/_cat/nodes?v"
At this point, once no server shows "mid" under node.role, the role separation has completed successfully.
Source: https://www.cnblogs.com/fzxiaomange/p/efk-mid-seprate.html