JoyLau's Blog

JoyLau's notes and thoughts on technology

Background

When you deploy a group of services with docker stack, Docker places them on the cluster's nodes according to each node's available resources; as a user you have no say in that placement. How can we take control of it?

Environment

  1. docker 1.13.0+
  2. compose version 3+

deploy mode

  1. replicated: the default mode. You can set a custom number of replicas for the service, but this mode cannot choose which nodes the service lands on:
deploy:
  mode: replicated
  replicas: 2
  2. global: run exactly one replica of the service on every node:
deploy:
  mode: global

node labels

This approach attaches labels to nodes and then references those labels in the YAML file to decide which nodes a service is deployed to (a consolidated sketch follows the list):

  1. docker node ls: list the nodes
  2. docker node update --label-add role=service-1 nodeId: add the label role=service-1 to node nodeId; labels are key-value pairs, like entries of a map
  3. docker node inspect nodeId: view a node's labels
  4. docker node update --label-rm role nodeId: remove a label by its key
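
Putting the list above together, a minimal end-to-end sketch (the node name node-1 is a placeholder for whatever docker node ls shows):

# pick the target node from the list
docker node ls
# attach the label (a key=value pair)
docker node update --label-add role=service-1 node-1
# verify it took effect
docker node inspect --format '{{ .Spec.Labels }}' node-1
# remove it later by key
docker node update --label-rm role node-1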

Deploying with docker service

docker service create \
  --name nginx \
  --constraint 'node.labels.role == service-1' \
  nginx

Deploying with docker stack

deploy:
  placement:
    constraints:
      - node.labels.role == service-2
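
For context, a minimal complete stack file showing where the placement block sits; this is a hedged sketch, with the image and label values as placeholders:

version: "3"
services:
  web:
    image: nginx:latest
    deploy:
      mode: replicated
      replicas: 2
      placement:
        constraints:
          - node.labels.role == service-2

Deploy it with docker stack deploy -c docker-compose.yml <stack-name>.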

When several constraints are given, they are combined with AND; constraints can match node labels as well as engine labels (see the note on engine labels after the examples).
For example:

deploy:
  placement:
    constraints: [node.role == manager]
deploy:
  placement:
    constraints:
      - node.role == manager
      - engine.labels.operatingsystem == ubuntu 14.04
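
Node labels are managed with docker node update as above; engine labels, by contrast, are set on the Docker daemon itself. A hedged sketch of /etc/docker/daemon.json (the operatingsystem value is just an example; restart the daemon afterwards):

{
  "labels": ["operatingsystem=ubuntu 14.04"]
}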

Environment

  1. docker 18.09

Notes

  1. The setup in this article spans several physical machines; if you are just testing, or only have one machine, you can use docker-machine to create multiple Docker hosts
  2. For example, create a Docker host named worker: docker-machine create -d virtualbox worker
  3. Then log into the host you just created: docker-machine ssh worker
  4. From there on, treat it as a standalone machine and run the following steps

Steps

  1. Initialize the swarm cluster: docker swarm init --advertise-addr 34.0.7.183
    1. On machines with more than one NIC, pass the IP explicitly with --advertise-addr
    2. The node created this way is a manager node by default
  2. Join the swarm that was just created:
docker swarm join --token SWMTKN-1-1o1yfsquxasw7c7ah4t7lmd4i89i62u74tutzhtcbgb7wx6csc-1hf4tjv9oz9vpo937955mi0z2 34.0.7.183:2377

If you forget the cluster's join token, run docker swarm join-token worker or docker swarm join-token manager on a manager node to print the command for joining the cluster.

  3. List the cluster's nodes: docker node list

Service deployment

  1. Single service: docker service create --name nginx -p 80:80 --replicas 4 <image>
    The command above deploys 4 replicas of nginx; with 2 hosts in the cluster, 2 replicas land on each host.

  2. Multiple services: use a YML config file; for the full syntax see https://docs.docker.com/compose/compose-file/. A deploy sketch follows.
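
A minimal, hedged sketch of deploying and inspecting a stack from such a file (the stack name demo is arbitrary):

docker stack deploy -c docker-compose.yml demo   # create or update the stack
docker stack services demo                       # list its services
docker stack ps demo                             # list its tasks and their nodes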

Commands

docker swarm

docker swarm init: initialize a swarm cluster
docker swarm join-token worker: show the worker join token
docker swarm join-token manager: show the manager join token
docker swarm join: join a cluster

docker stack

docker stack deploy: deploy a new stack or update an existing one
docker stack ls: list stacks
docker stack ps: list the tasks in a stack
docker stack rm: remove a stack
docker stack services: list the services in a stack
docker stack down: alias of rm; removes a stack (volume data is not deleted)

docker node

docker node ls: list all cluster nodes
docker node rm: remove a node (-f to force)
docker node inspect: show a node's details
docker node demote: demote a node from manager to worker
docker node promote: promote a node from worker to manager
docker node update: update a node
docker node ps: list the tasks running on a node

docker service

docker service create: deploy a service
docker service inspect: show a service's details
docker service logs: view a service's logs
docker service ls: list all services
docker service rm: remove a service (-f to force)
docker service scale: set the number of replicas of a service
docker service update: update a service

docker machine

docker-machine create: create a Docker host (commonly with -d virtualbox)
docker-machine ls: list all Docker hosts
docker-machine ssh: SSH into a host to run commands
docker-machine env: show the environment variables needed to connect to a host
docker-machine inspect: print detailed information about a host
docker-machine kill: kill a host
docker-machine restart: restart a host
docker-machine rm: remove a host
docker-machine scp: copy files between hosts
docker-machine start: start a host
docker-machine status: show a host's status
docker-machine stop: stop a host

Swarm cluster visualization tools

portainer: a very capable tool that can monitor the local machine as well as remote servers or whole clusters; for a remote Docker host, the daemon must be listening on port 2375 (see the daemon.json sketch after the compose file below)

https://www.portainer.io/installation/

version: '3'
services:
  portainer:
    image: 34.0.7.183:5000/joylau/portainer:latest
    container_name: portainer
    ports:
      - 80:9000
    restart: always
    volumes:
      - /home/liufa/portainer/data:/data
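
To expose a remote daemon on port 2375, one common approach is to add a hosts entry to /etc/docker/daemon.json; a hedged sketch (plain TCP with no TLS, so only use it on a trusted network; on systemd distributions the -H flag in docker.service may conflict and need removing):

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}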

@Valid and @Validated

  1. Both the @Valid and @Validated annotations are used for field validation
  2. @Valid lives in javax.validation.Valid; @Validated lives in org.springframework.validation.annotation.Validated
  3. @Validated is Spring's wrapper around @Valid, used by Spring's validation mechanism; @Valid offers no grouping capability

A special use of @Validated

Sometimes an entity needs to be validated differently in different situations; for example, an entity's id is not needed on insert but is required on update:

public class Attachment {
    @Id
    @NotBlank(message = "id can not be blank!", groups = {All.class, Update.class})
    private String id;

    @NotBlank(message = "fileName can not be blank!", groups = {All.class})
    private String fileName;

    @NotBlank(message = "filePath can not be blank!", groups = {All.class})
    private String filePath;

    @Field
    private byte[] data;

    @NotBlank(message = "metaData can not be empty!", groups = {All.class})
    private String metaData;

    @NotBlank(message = "uploadTime can not be blank!", groups = {All.class})
    private String uploadTime;

    public Attachment(@NotBlank(message = "id can not be blank!", groups = {All.class, Update.class}) String id) {
        this.id = id;
    }

    public interface All {
    }

    public interface Update {
    }
}

Validating individual groups:

/**
 * Add an attachment
 */
@PostMapping("addAttachment")
public MessageBody addAttachment(@RequestParam("file") final MultipartFile multipartFile,
                                 @Validated(Attachment.All.class) Attachment attachment,
                                 BindingResult results) {
    return attachmentApiService.addAttachment(multipartFile, attachment, results);
}

/**
 * Update a single attachment
 */
@PostMapping("updateAttachment")
public MessageBody updateAttachment(@RequestParam(value = "file", required = false) final MultipartFile multipartFile,
                                    @Validated(Attachment.Update.class) Attachment attachment) {
    return attachmentApiService.updateAttachment(multipartFile, attachment);
}

Usage notes

  1. A constraint annotation with no groups assigned is validated every time by default
  2. When @Validated carries no groups attribute, it validates only the constraint attributes that have no group
  3. When @Validated names specific groups, it validates only the constraints assigned to those groups
  4. When one handler method validates several model objects, each needs its own result object, as shown below:
@RequestMapping("/addPeople")
public @ResponseBody String addPeople(@Validated People p, BindingResult result,
                                      @Validated Person p2, BindingResult result2) {
}

Error message

Some careless operations left a batch of containers in the Dead state:

CONTAINER ID        IMAGE                                                  COMMAND                  CREATED             STATUS              PORTS               NAMES
c21c993c5107        34.0.7.183:5000/joylau/traffic-service:2.1.7           "java -Djava.secur..."   2 weeks ago         Dead                                    traffic-service
dfbd1cdb31c2        34.0.7.183:5000/joylau/traffic-service-admin:1.2.1     "java -Djava.secur..."   2 weeks ago         Dead                                    traffic-service-admin
8778a28ab120        34.0.7.183:5000/joylau/traffic-service-data:2.0.4      "java -Djava.secur..."   2 weeks ago         Dead                                    traffic-service-data
65a3885e08b5        34.0.7.183:5000/joylau/traffic-service-node:1.2.3      "/bin/sh -c './nod..."   2 weeks ago         Dead                                    traffic-service-node
90700440e1df        34.0.7.183:5000/joylau/traffic-service-server:1.2.1    "java -Djava.secur..."   2 weeks ago         Dead                                    traffic-service-server

Removing such a container fails with an error:

# docker rm c21c993c5107
Error response from daemon: Driver overlay2 failed to remove root filesystem c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64: remove /var/lib/docker/overlay2/099974dbeef827a3bbd932b7b36502763482ae8df25bd80f61a288b71b0ab810/merged: device or resource busy

Solution

Grep for the string that follows "filesystem" in the error message:

# grep c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo

which produces output like this:

/proc/28032/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28033/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28034/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28035/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28036/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28037/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28038/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28039/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28040/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28041/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28042/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28043/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28044/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28045/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28046/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28047/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28048/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k

The number between /proc/ and /mountinfo is a PID; kill those processes and the container can be removed.

A one-liner lists all of the PIDs for batch processing:

grep c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo | awk '{print substr($1,7,5)}'

then kill them:

grep c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo | awk '{print substr($1,7,5)}' | xargs kill -9

print is awk's main command for emitting selected content:

$0 is the entire current line
$1 is the line's first field, fields being separated by whitespace
substr($1,7,5) takes 5 characters of the first field, starting at the 7th character
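
Note that substr($1,7,5) only works while the PIDs happen to be 5 digits long; splitting on / is a sturdier sketch of the same idea:

# field 3 of /proc/<pid>/mountinfo:... is the PID, whatever its length
grep c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo | awk -F'/' '{print $3}' | sort -u | xargs kill -9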

After that, docker rm the container again.

Solved.

Environment

  • elasticsearch 6.4.3

Example

Let's tokenize the following sentence with the ik analyzer:

POST http://34.0.7.184:9200/_analyze/

{
  "analyzer": "ik_smart",
  "text": "关于加快建设合肥地铁七号线的通知说明"
}

The tokenization result:

{
  "tokens": [
    { "token": "关于", "start_offset": 0,  "end_offset": 2,  "type": "CN_WORD", "position": 0 },
    { "token": "加快", "start_offset": 2,  "end_offset": 4,  "type": "CN_WORD", "position": 1 },
    { "token": "建设", "start_offset": 4,  "end_offset": 6,  "type": "CN_WORD", "position": 2 },
    { "token": "合肥", "start_offset": 6,  "end_offset": 8,  "type": "CN_WORD", "position": 3 },
    { "token": "地铁", "start_offset": 8,  "end_offset": 10, "type": "CN_WORD", "position": 4 },
    { "token": "七号", "start_offset": 10, "end_offset": 12, "type": "CN_WORD", "position": 5 },
    { "token": "线",   "start_offset": 12, "end_offset": 13, "type": "CN_CHAR", "position": 6 },
    { "token": "的",   "start_offset": 13, "end_offset": 14, "type": "CN_CHAR", "position": 7 },
    { "token": "通知", "start_offset": 14, "end_offset": 16, "type": "CN_WORD", "position": 8 },
    { "token": "说明", "start_offset": 16, "end_offset": 18, "type": "CN_WORD", "position": 9 }
  ]
}
  • If the field's analyzer is ik_smart (or both analyzer and search_analyzer are ik_smart), then every word of the phrase is searchable; you can also enable highlighting to inspect the matches

  • If analyzer is ik_smart but search_analyzer is standard, words such as 通知, 说明 and 七号 get no hits, while 线 does. The reason: standard splits Chinese input character by character, so a query for 通知 becomes the tokens 通 and 知, neither of which exists in the ik-built index, whereas the single-character query 线 matches the indexed CN_CHAR token 线 directly

POST http://34.0.7.184:9200/attachment_libs/_search

{
  "query": {
    "multi_match": {
      "query": "关于",
      "fields": [
        "fileName^1.0"
      ],
      "type": "best_fields",
      "operator": "OR",
      "slop": 0,
      "prefix_length": 0,
      "max_expansions": 50,
      "zero_terms_query": "NONE",
      "auto_generate_synonyms_phrase_query": true,
      "fuzzy_transpositions": true,
      "boost": 1
    }
  },
  "_source": {
    "includes": [
      "fileName"
    ],
    "excludes": [
      "data"
    ]
  },
  "highlight": {
    "pre_tags": [
      "<span style = 'color:red'>"
    ],
    "post_tags": [
      "</span>"
    ],
    "fields": {
      "*": {}
    }
  }
}

The response is:

{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}

while searching for 线 returns:

{
  "took": 5,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 0.2876821,
    "hits": [
      {
        "_index": "attachment_libs",
        "_type": "attachment_info",
        "_id": "fd45d5be-c314-488a-99d3-041acc015377",
        "_score": 0.2876821,
        "_source": {
          "fileName": "关于加快建设合肥地铁七号线的通知说明"
        },
        "highlight": {
          "fileName": [
            "关于加快建设合肥地铁七号<span style = 'color:red'>线</span>的通知说明"
          ]
        }
      }
    ]
  }
}

Summary

  • Analyzers are used in two situations: at index time, a text field is tokenized and the tokens are written into the inverted index; and at query time, the input for a text query is tokenized first and the tokens are then looked up in the inverted index
  • If you want different analyzers at index time and at query time, Elasticsearch supports that too: just add the search_analyzer parameter on the field (a mapping sketch follows this list)
    1. At index time, ES only checks whether the field defines analyzer; if so, that analyzer is used, otherwise the ES default
    2. At query time, ES first checks whether the field defines search_analyzer; if not, it falls back to analyzer, and only if that is also missing does it use the ES default
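
A hedged mapping sketch for the scenario above (the index, type and field names are taken from the earlier search example; ik_smart assumes the IK plugin is installed):

PUT attachment_libs
{
  "mappings": {
    "attachment_info": {
      "properties": {
        "fileName": {
          "type": "text",
          "analyzer": "ik_smart",
          "search_analyzer": "ik_smart"
        }
      }
    }
  }
}

Changing search_analyzer to standard here reproduces the mismatch demonstrated above.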

.env

PRIVATE_REPO=34.0.7.183:5000
ES_VERSION=6.4.3
ELASTICSEARCH_CLUSTER_DIR=/Users/joylau/dev/idea-project/dev-app/es-doc-office/elasticsearch-cluster

docker-compose.yml

version: '2.2'
services:
  node-0:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-0
    ports:
      - 9200:9200
      - 9300:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-0:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-0:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-0
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-1:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-1
    restart: always
    ports:
      - 9201:9200
      - 9301:9300
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-1:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-1:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-1
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "discovery.zen.ping.unicast.hosts=node-0"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-2:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-2
    ports:
      - 9202:9200
      - 9302:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-2:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-2:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-2
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "discovery.zen.ping.unicast.hosts=node-0,node-1"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-3:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-3
    ports:
      - 9203:9200
      - 9303:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-3:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-3:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-3
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "discovery.zen.ping.unicast.hosts=node-0,node-1,node-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-4:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-4
    ports:
      - 9204:9200
      - 9304:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-4:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-4:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-4
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "discovery.zen.ping.unicast.hosts=node-0,node-1,node-3"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
networks:
  esnet:

Pitfalls

  1. Permissions on the mounted log and data directories
  2. The vm.max_map_count kernel setting
  3. On macOS, mind Docker's configured memory limit

env.init

#!/usr/bin/env bash
mkdir -p /home/liufa/es-data/data/{node-0,node-1,node-2,node-3,node-4} && echo es-data directory created success || echo es-data directory created failure && \
mkdir -p /home/liufa/es-data/logs/{node-0,node-1,node-2,node-3,node-4} && echo es-logs directory created success || echo es-logs directory created failure && \
groupadd elasticsearch && \
useradd elasticsearch -g elasticsearch && \
chown -R elasticsearch:elasticsearch /home/liufa/es-data/* && \
chmod -R 777 /home/liufa/es-data/* && \
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf && \
sysctl -p

Versions

  1. spring boot : 2.1.2.RELEASE
  2. spring-data-elasticsearch :3.1.4.RELEASE
  3. elasticsearch: 6.4.3

Problem

Spring Data Elasticsearch is used to connect to elasticsearch, with the following configuration:

spring:
  data:
    elasticsearch:
      cluster-name: docker-cluster
      cluster-nodes: 192.168.10.68:9300

Ports 9300 and 9200 of elasticsearch were both confirmed to be fine and connectable; a quick check is sketched below.
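
One quick way to confirm this from the application host, as a sketch:

# 9200 (HTTP) should answer with cluster info as JSON
curl http://192.168.10.68:9200
# 9300 (transport) should at least accept a TCP connection
nc -vz 192.168.10.68 9300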

Yet starting the project produced the following error:

2019-01-16 17:17:35.376  INFO 36410 --- [           main] o.elasticsearch.plugins.PluginsService   : no modules loaded
2019-01-16 17:17:35.378  INFO 36410 --- [           main] o.elasticsearch.plugins.PluginsService   : loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
2019-01-16 17:17:35.378  INFO 36410 --- [           main] o.elasticsearch.plugins.PluginsService   : loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2019-01-16 17:17:35.378  INFO 36410 --- [           main] o.elasticsearch.plugins.PluginsService   : loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2019-01-16 17:17:35.378  INFO 36410 --- [           main] o.elasticsearch.plugins.PluginsService   : loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
2019-01-16 17:17:35.378  INFO 36410 --- [           main] o.elasticsearch.plugins.PluginsService   : loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019-01-16 17:17:36.045  INFO 36410 --- [           main] o.s.d.e.c.TransportClientFactoryBean     : Adding transport node : 192.168.10.68:9300
2019-01-16 17:17:36.740  INFO 36410 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-01-16 17:17:36.987  INFO 36410 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 15 endpoint(s) beneath base path '/actuator'
2019-01-16 17:17:37.041  INFO 36410 --- [           main] org.xnio                                 : XNIO version 3.3.8.Final
2019-01-16 17:17:37.049  INFO 36410 --- [           main] org.xnio.nio                             : XNIO NIO Implementation Version 3.3.8.Final
2019-01-16 17:17:37.091  INFO 36410 --- [           main] o.s.b.w.e.u.UndertowServletWebServer     : Undertow started on port(s) 8080 (http) with context path ''
2019-01-16 17:17:37.094  INFO 36410 --- [           main] cn.joylau.code.EsDocOfficeApplication    : Started EsDocOfficeApplication in 3.517 seconds (JVM running for 4.124)
2019-01-16 17:17:37.641  INFO 36410 --- [on(4)-127.0.0.1] io.undertow.servlet                      : Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-01-16 17:17:37.641  INFO 36410 --- [on(4)-127.0.0.1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2019-01-16 17:17:37.660  INFO 36410 --- [on(4)-127.0.0.1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 19 ms
2019-01-16 17:17:37.704  WARN 36410 --- [on(5)-127.0.0.1] s.b.a.e.ElasticsearchRestHealthIndicator : Elasticsearch health check failed

java.net.ConnectException: Connection refused
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:943) ~[elasticsearch-rest-client-6.4.3.jar:6.4.3]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227) ~[elasticsearch-rest-client-6.4.3.jar:6.4.3]
at org.springframework.boot.actuate.elasticsearch.ElasticsearchRestHealthIndicator.doHealthCheck(ElasticsearchRestHealthIndicator.java:61) ~[spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.actuate.health.AbstractHealthIndicator.health(AbstractHealthIndicator.java:84) ~[spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.actuate.health.CompositeHealthIndicator.health(CompositeHealthIndicator.java:98) [spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpoint.health(HealthEndpoint.java:50) [spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_131]
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:246) [spring-core-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.boot.actuate.endpoint.invoke.reflect.ReflectiveOperationInvoker.invoke(ReflectiveOperationInvoker.java:76) [spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.actuate.endpoint.annotation.AbstractDiscoveredOperation.invoke(AbstractDiscoveredOperation.java:61) [spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.actuate.endpoint.jmx.EndpointMBean.invoke(EndpointMBean.java:126) [spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.actuate.endpoint.jmx.EndpointMBean.invoke(EndpointMBean.java:99) [spring-boot-actuator-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819) [na:1.8.0_131]
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) [na:1.8.0_131]
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468) [na:1.8.0_131]
at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76) [na:1.8.0_131]
at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309) [na:1.8.0_131]
at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401) [na:1.8.0_131]
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829) [na:1.8.0_131]
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_131]
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:346) [na:1.8.0_131]
at sun.rmi.transport.Transport$1.run(Transport.java:200) [na:1.8.0_131]
at sun.rmi.transport.Transport$1.run(Transport.java:197) [na:1.8.0_131]
at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_131]
at sun.rmi.transport.Transport.serviceCall(Transport.java:196) [na:1.8.0_131]
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568) [na:1.8.0_131]
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826) [na:1.8.0_131]
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683) [na:1.8.0_131]
at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_131]
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_131]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_131]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:171) ~[httpcore-nio-4.4.10.jar:4.4.10]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:145) ~[httpcore-nio-4.4.10.jar:4.4.10]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) ~[httpcore-nio-4.4.10.jar:4.4.10]
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221) ~[httpasyncclient-4.1.4.jar:4.1.4]
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.4.jar:4.1.4]
... 1 common frames omitted


Connection refused???

The elasticsearch health check could not run, which reminded me that actuator endpoint health monitoring was enabled.

Debugging traced the returned data to the following code, in the ElasticsearchRestHealthIndicator class:

@Override
protected void doHealthCheck(Health.Builder builder) throws Exception {
    Response response = this.client
            .performRequest(new Request("GET", "/_cluster/health/"));
    StatusLine statusLine = response.getStatusLine();
    if (statusLine.getStatusCode() != HttpStatus.SC_OK) {
        builder.down();
        builder.withDetail("statusCode", statusLine.getStatusCode());
        builder.withDetail("reasonPhrase", statusLine.getReasonPhrase());
        return;
    }
    try (InputStream inputStream = response.getEntity().getContent()) {
        doHealthCheck(builder,
                StreamUtils.copyToString(inputStream, StandardCharsets.UTF_8));
    }
}

new Request("GET", "/_cluster/health/") is exactly elasticsearch's health request, but there is no host or port in sight.

A packet capture showed the request going to 127.0.0.1:9200,

so it had to be a Spring Boot default.

The fix

Open spring-boot-autoconfigure-2.1.2.RELEASE.jar, find the elasticsearch configuration under org.springframework.boot.autoconfigure.elasticsearch, then the RestClientProperties class, whose source reads:

@ConfigurationProperties(prefix = "spring.elasticsearch.rest")
public class RestClientProperties {

    /**
     * Comma-separated list of the Elasticsearch instances to use.
     */
    private List<String> uris = new ArrayList<>(
            Collections.singletonList("http://localhost:9200"));

    /**
     * Credentials username.
     */
    private String username;

    /**
     * Credentials password.
     */
    private String password;

    public List<String> getUris() {
        return this.uris;
    }

    public void setUris(List<String> uris) {
        this.uris = uris;
    }

    public String getUsername() {
        return this.username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getPassword() {
        return this.password;
    }

    public void setPassword(String password) {
        this.password = password;
    }

}

Collections.singletonList("http://localhost:9200"): no doubt about it, this is where the error originates.

Following the trail, just configure uris under the spring.elasticsearch.rest prefix.

So the configuration becomes:

spring:
  data:
    elasticsearch:
      cluster-name: docker-cluster
      cluster-nodes: 192.168.10.68:9300
  elasticsearch:
    rest:
      uris: ["http://192.168.10.68:9200"]

With multiple nodes in the cluster, list them all.

On startup, the error is gone.

There is one more way around it, though not a good one: turn off actuator's health check for elasticsearch:

management:
  health:
    elasticsearch:
      enabled: false

  1. Per the official docs, the gradle configuration is:
compile ('com.dangdang:elastic-job-lite-core:2.1.5')

compile ('com.dangdang:elastic-job-lite-spring:2.1.5')
  2. With that in place and the sample code written, the application could never connect to zookeeper and failed with the following error:
***************************
APPLICATION FAILED TO START
***************************

Description:

An attempt was made to call the method org.apache.curator.framework.api.CreateBuilder.creatingParentsIfNeeded()Lorg/apache/curator/framework/api/ProtectACLCreateModePathAndBytesable; but it does not exist. Its class, org.apache.curator.framework.api.CreateBuilder, is available from the following locations:

jar:file:/Users/joylau/.gradle/caches/modules-2/files-2.1/org.apache.curator/curator-framework/4.0.1/3da85d2bda41cb43dc18c089820b67d12ba38826/curator-framework-4.0.1.jar!/org/apache/curator/framework/api/CreateBuilder.class

It was loaded from the following location:

file:/Users/joylau/.gradle/caches/modules-2/files-2.1/org.apache.curator/curator-framework/4.0.1/3da85d2bda41cb43dc18c089820b67d12ba38826/curator-framework-4.0.1.jar


Action:

Correct the classpath of your application so that it contains a single, compatible version of org.apache.curator.framework.api.CreateBuilder
  3. At first I suspected my zookeeper setup, but other tools could connect to it without trouble

  4. Then I suspected the zookeeper version; com.dangdang:elastic-job-common-core:2.1.5 turned out to depend on org.apache.zookeeper:zookeeper:3.5.3-beta

  5. So I used docker to stand up a single-node zookeeper of that 3.5.3-beta version

  6. Same problem as before

  7. Hunting this down took a long time

  8. Eventually I cloned the official demo and ran it; the demo depends on just one artifact, com.dangdang:elastic-job-lite-core:2.1.5

  9. The demo had no trouble connecting to zookeeper

  10. Comparing the two projects revealed different dependency versions

Comparison (screenshot of the two dependency trees)

  11. The demo's org.apache.curator:curator-framework and org.apache.curator:curator-recipes are both 2.10.0, while my project pulled in the latest version on gradle, 4.0.1; the two zookeeper versions differ accordingly, 3.4.6 versus 3.5.3-beta

  12. So that was the problem

  13. The fix:

compile ('com.dangdang:elastic-job-lite-core:2.1.5')

compile ('com.dangdang:elastic-job-lite-spring:2.1.5')

compile ('org.apache.curator:curator-framework:2.10.0')

compile ('org.apache.curator:curator-recipes:2.10.0')
  14. Declaring 2.10.0 manually pins the version

  15. Problem solved. But why does gradle cause this? Why does gradle pick the newest version when resolving transitive dependencies? I haven't figured that out yet

  16. Once I work it out, or make some headway, I'll update this post.
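
For what it's worth, Gradle's default conflict resolution picks the highest requested version of a module (unlike Maven's nearest-wins rule), which is why curator 4.0.1 won over 2.10.0. An alternative, hedged sketch that pins the versions through a resolution strategy instead of direct dependency declarations:

configurations.all {
    resolutionStrategy {
        // force the curator versions elastic-job 2.1.5 was built against
        force 'org.apache.curator:curator-framework:2.10.0',
              'org.apache.curator:curator-recipes:2.10.0'
    }
}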
