JoyLau's Blog

JoyLau's notes and thoughts on technology

Before configuration

Classes imported via the @ alias cannot be clicked through to, and are not recognized by the IDE.

Configuration

Add a webpack.config.js file in the project root:

/**
 * Not a real webpack config; it exists only so that WebStorm / IntelliJ IDEA
 * can resolve the @ alias for code navigation.
 */

module.exports = {
  resolve: {
    alias: {
      '@': require('path').resolve(__dirname, 'src'), // eslint-disable-line
    },
  },
};

Then, in IDEA, go to Preferences -> Languages & Frameworks -> JavaScript -> Webpack and point it to the webpack.config.js in the project root.

Done.

Why there are no Windows build notes

Because the official site already provides prebuilt exe packages: double-click one and it extracts itself to a directory. iOS and Android builds are offered as well.
This post therefore focuses on installing under CentOS, Ubuntu and macOS, for which no prebuilt packages are provided.

Prerequisites

  1. GCC 4.4.x or later
  2. CMake 2.8.7 or higher
  3. Git
  4. GTK+2.x or higher, including headers (libgtk2.0-dev)
  5. pkg-config
  6. Python 2.6 or later and Numpy 1.5 or later with developer packages (python-dev, python-numpy)
  7. ffmpeg or libav development packages: libavcodec-dev, libavformat-dev, libswscale-dev
  8. [optional] libtbb2 libtbb-dev
  9. [optional] libdc1394 2.x
  10. [optional] libjpeg-dev, libpng-dev, libtiff-dev, libjasper-dev, libdc1394-22-dev
  11. [optional] CUDA Toolkit 6.5 or higher

Steps

  1. Install the common development tool chain. On CentOS: yum groupinstall "Development Tools"; on Ubuntu: apt-get install build-essential
  2. Install the build dependencies: cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
  3. mkdir opencv4; cd opencv4
  4. git clone https://github.com/opencv/opencv.git
  5. git clone https://github.com/opencv/opencv_contrib.git
  6. cd opencv
  7. mkdir build
  8. cd build
  9. cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local .. (if that fails, drop the space after -D: cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..)
  10. make -j7 # runs 7 jobs in parallel
  11. Generate the documentation: cd ~/opencv/build/doc/; make -j7 doxygen
  12. make install
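
Putting the steps together: a non-interactive run on Ubuntu might look like the sketch below (package names follow the list above; on CentOS swap in the yum equivalents).

sudo apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config \
    libavcodec-dev libavformat-dev libswscale-dev

mkdir opencv4 && cd opencv4
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

cd opencv && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j7          # 7 parallel jobs; match your core count
sudo make install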

Prebuilt packages

  1. CentOS 7: http://cloud.joylau.cn:1194/s/kUoNelmj1SX810K or https://pan.baidu.com/s/1qaZ-TbF0xP0DxaEJKbdt-A (extraction code: jkir)
  2. Ubuntu 16.04: http://cloud.joylau.cn:1194/s/TsNRKwxJhM0v0HE or https://pan.baidu.com/s/1ha6nATLrSt5WPL1iQlmWSg (extraction code: gduu)
  3. opencv-410.jar for calling from Java: //s3.joylau.cn:9000/blog/opencv-410.jar

On macOS

  1. Install Xcode from the App Store; once installed, open Xcode and agree to the license
  2. Install Homebrew
  3. Install the required dependencies: Python 3, CMake and Qt 5
brew install python3
brew install cmake
brew install qt5
  4. Set up the environment and build:
mkdir ~/opencv4
cd ~/opencv4   # work inside the new directory so the clones land there
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

# variable definitions
cwd=$(pwd)
cvVersion="master"
QT5PATH=/usr/local/Cellar/qt/5.12.2

rm -rf opencv/build
rm -rf opencv_contrib/build

# Create directory for installation
mkdir -p installation/OpenCV-"$cvVersion"

sudo -H pip3 install -U pip numpy
# Install virtual environment
sudo -H python3 -m pip install virtualenv virtualenvwrapper
VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3
echo "VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3" >> ~/.bash_profile
echo "# Virtual Environment Wrapper" >> ~/.bash_profile
echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.bash_profile
cd $cwd
source /usr/local/bin/virtualenvwrapper.sh

############ For Python 3 ############
# macOS itself ships with Python 2.7 and some system tools depend on it,
# so to leave the system environment untouched we build inside a
# dedicated Python 3 virtual environment.
mkvirtualenv OpenCV-"$cvVersion"-py3 -p python3
workon OpenCV-"$cvVersion"-py3

# now install python libraries within this virtual environment
pip install cmake numpy scipy matplotlib scikit-image scikit-learn ipython dlib

# quit virtual environment
deactivate
######################################

cd opencv
mkdir build
cd build

cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=$cwd/installation/OpenCV-"$cvVersion" \
      -D INSTALL_C_EXAMPLES=ON \
      -D INSTALL_PYTHON_EXAMPLES=ON \
      -D WITH_TBB=ON \
      -D WITH_V4L=ON \
      -D OPENCV_SKIP_PYTHON_LOADER=ON \
      -D CMAKE_PREFIX_PATH=$QT5PATH \
      -D CMAKE_MODULE_PATH="$QT5PATH"/lib/cmake \
      -D OPENCV_PYTHON3_INSTALL_PATH=~/.virtualenvs/OpenCV-"$cvVersion"-py3/lib/python3.7/site-packages \
      -D WITH_QT=ON \
      -D WITH_OPENGL=ON \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -D BUILD_EXAMPLES=ON ..

make -j$(sysctl -n hw.physicalcpu)
make install

  5. The cmake step ends with a summary like this:
--   OpenCV modules:
-- To be built: aruco bgsegm bioinspired calib3d ccalib core cvv datasets dnn dnn_objdetect dpm face features2d flann freetype fuzzy gapi hfs highgui img_hash imgcodecs imgproc java java_bindings_generator line_descriptor ml objdetect optflow phase_unwrapping photo plot python2 python3 python_bindings_generator quality reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
-- Disabled: world
-- Disabled by dependency: -
-- Unavailable: cnn_3dobj cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev hdf js matlab ovis sfm viz
-- Applications: tests perf_tests examples apps
-- Documentation: NO
-- Non-free algorithms: NO
--
-- GUI:
-- QT: YES (ver 5.12.2)
-- QT OpenGL support: YES (Qt5::OpenGL 5.12.2)
-- Cocoa: YES
-- OpenGL support: YES (/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/OpenGL.framework /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/OpenGL.framework)
-- VTK support: NO
--
-- Media I/O:
-- ZLib: build (ver 1.2.11)
-- JPEG: build-libjpeg-turbo (ver 2.0.2-62)
-- WEBP: build (ver encoder: 0x020e)
-- PNG: build (ver 1.6.36)
-- TIFF: build (ver 42 - 4.0.10)
-- JPEG 2000: build (ver 1.900.1)
-- OpenEXR: build (ver 1.7.1)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
-- PFM: YES
--
-- Video I/O:
-- DC1394: NO
-- FFMPEG: YES
-- avcodec: YES (58.35.100)
-- avformat: YES (58.20.100)
-- avutil: YES (56.22.100)
-- swscale: YES (5.3.100)
-- avresample: YES (4.0.0)
-- GStreamer: NO
-- AVFoundation: YES
-- v4l/v4l2: NO
--
-- Parallel framework: GCD
--
-- Trace: YES (with Intel ITT)
--
-- Other third-party libraries:
-- Intel IPP: 2019.0.0 Gold [2019.0.0]
-- at: /Users/joylau/opencv4/opencv/build/3rdparty/ippicv/ippicv_mac/icv
-- Intel IPP IW: sources (2019.0.0)
-- at: /Users/joylau/opencv4/opencv/build/3rdparty/ippicv/ippicv_mac/iw
-- Lapack: YES (/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/Accelerate.framework)
-- Eigen: NO
-- Custom HAL: NO
-- Protobuf: build (3.5.1)
--
-- OpenCL: YES (no extra features)
-- Include path: NO
-- Link libraries: -framework OpenCL
--
-- Python 2:
-- Interpreter: /usr/bin/python2.7 (ver 2.7.10)
-- Libraries: /usr/lib/libpython2.7.dylib (ver 2.7.10)
-- numpy: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include (ver 1.8.0rc1)
-- install path: lib/python2.7/site-packages
--
-- Python 3:
-- Interpreter: /usr/local/bin/python3 (ver 3.7.2)
-- Libraries: /usr/local/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib (ver 3.7.2)
-- numpy: /usr/local/lib/python3.7/site-packages/numpy/core/include (ver 1.16.2)
-- install path: /Users/joylau/.virtualenvs/OpenCV-master-py3/lib/python3.7/site-packages
--
-- Python (for build): /usr/bin/python2.7
--
-- Java:
-- ant: /Users/joylau/dev/apache-ant-1.10.5/bin/ant (ver 1.10.5)
-- JNI: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/JavaVM.framework/Headers /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/JavaVM.framework/Headers /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/JavaVM.framework/Headers
-- Java wrappers: YES
-- Java tests: YES
--
-- Install to: /Users/joylau/opencv4/installation/OpenCV-master
-- -----------------------------------------------------------------
--
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/joylau/opencv4/opencv/build
  6. Prebuilt package: http://cloud.joylau.cn:1194/s/6GMLl09ZAYNAUMU or https://pan.baidu.com/s/1YBxUD_vB1zKOcxHeAtn6Xw (extraction code: twsq)

Problems encountered

Fixing a too-old CMake on CentOS

  1. The version installed from yum is too old; uninstall it first: yum remove cmake

  2. Download the Linux binary tarball from cmake.org, then:
    cd /opt
    tar zxvf cmake-3.10.2-Linux-x86_64.tar.gz

  3. vim /etc/profile
    export CMAKE_HOME=/opt/cmake-3.10.2-Linux-x86_64
    export PATH=$PATH:$CMAKE_HOME/bin

  4. source /etc/profile
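
As one sketch of the whole swap (the download URL follows cmake.org's usual file layout; verify it before running):

yum remove -y cmake
cd /opt
# assumed download location -- check https://cmake.org/download/ for the current link
curl -LO https://cmake.org/files/v3.10/cmake-3.10.2-Linux-x86_64.tar.gz
tar zxvf cmake-3.10.2-Linux-x86_64.tar.gz
echo 'export CMAKE_HOME=/opt/cmake-3.10.2-Linux-x86_64' >> /etc/profile
echo 'export PATH=$PATH:$CMAKE_HOME/bin' >> /etc/profile
source /etc/profile
cmake --version    # should now report 3.10.2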

opencv-410.jar was not generated

Java:                          
-- ant: /bin/ant (ver 1.9.4)
-- JNI: /usr/lib/jvm/java-1.8.0-openjdk/include /usr/lib/jvm/java-1.8.0-openjdk/include/linux /usr/lib/jvm/java-1.8.0-openjdk/include
-- Java wrappers: YES
-- Java tests: NO

The Java wrappers require ant; once it is installed (and cmake is re-run), the bindings build and Java can call OpenCV.
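
A minimal fix, assuming CentOS with a JDK already present (on Ubuntu: apt-get install ant), and the build directory from the steps above:

yum install -y ant
# re-run the configure step so cmake picks ant up, then rebuild
cd ~/opencv4/opencv/build
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j7
# the summary should now show "Java wrappers: YES" and the jar lands in build/bin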

Usage in IDEA and a Spring Boot project

  1. Download the opencv-410.jar package and add it to the project:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    compileOnly 'org.projectlombok:lombok'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'

    compile fileTree(dir: 'libs', include: ['*.jar'])
}
  2. Configure the native library path with VM options: -Djava.library.path=/home/joylau/opencv4/opencv/build/lib

vm options

On macOS the path is: -Djava.library.path=/Users/joylau/opencv4/installation/OpenCV-master/share/java/opencv4
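
Outside the IDE the same flag goes on the java command line; the jar name here is hypothetical:

java -Djava.library.path=/home/joylau/opencv4/opencv/build/lib \
     -jar opencv-test-app.jar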

  3. Load the native library:
import org.opencv.core.Core;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class OpencvTestApplication {

    public static void main(String[] args) {
        // load the OpenCV native library before any OpenCV class is used
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        System.out.println(Core.VERSION);
        SpringApplication.run(OpencvTestApplication.class, args);
    }
}
  4. Face detection demo:
private static void testFace() {
    // 1. Load one of the face-detection cascade files that ship with OpenCV
    CascadeClassifier faceDetector = new CascadeClassifier(
            "/home/joylau/opencv4/opencv/data/haarcascades/haarcascade_frontalface_alt.xml");
    // 2. Read the test image
    Mat image = Imgcodecs.imread("/home/joylau/图片/image-test-4.jpg");
    // 3. Run the detector; matches are collected into a MatOfRect
    MatOfRect faces = new MatOfRect();
    faceDetector.detectMultiScale(image, faces);
    // 4. The matches come back as an array of Rect
    Rect[] rects = faces.toArray();
    System.out.println("Detected " + rects.length + " face(s)");
    // 5. Draw a box and a label for every detected face
    for (int i = 0; i < rects.length; i++) {
        Imgproc.rectangle(image, new Point(rects[i].x, rects[i].y),
                new Point(rects[i].x + rects[i].width, rects[i].y + rects[i].height),
                new Scalar(0, 0, 255));
        Imgproc.putText(image, "face-" + i, new Point(rects[i].x, rects[i].y),
                Imgproc.FONT_HERSHEY_SIMPLEX, 1.0, new Scalar(0, 255, 0), 1, Imgproc.LINE_AA, false);
    }
    // 6. Show the result
    HighGui.imshow("Face detection", image);
    HighGui.waitKey(0);
}

test_face

Note: image from Weibo

  5. Edge detection demo:
private static void testContours() {
    // 1. Read the source image
    Mat src = Imgcodecs.imread("/home/joylau/图片/image-test.jpg");
    // 2. Convert to grayscale
    Mat gray = new Mat();
    Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGB2GRAY);
    // 3. Edge detection
    Mat edges = new Mat();
    Imgproc.Canny(gray, edges, 200, 500, 3, false);
    // 4. Find the contours
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(edges, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);
    // 5. Draw the contours
    for (int i = 0, len = contours.size(); i < len; i++) {
        Imgproc.drawContours(src, contours, i, new Scalar(0, 255, 0), 1, Imgproc.LINE_AA);
    }
    HighGui.imshow("Contour detection", src);
    HighGui.waitKey(0);
}

test_source
test_contours

  6. Real-time face detection:
/**
 * OpenCV 4.0.0: real-time face detection from the default camera.
 */
public static void videoFace() {
    VideoCapture capture = new VideoCapture(0);
    Mat image = new Mat();
    int key = 0;
    if (capture.isOpened()) {
        do {
            capture.read(image);
            HighGui.imshow("Face detection (live)", getFace(image));
            key = HighGui.waitKey(1);
        } while (key != 27); // 27 = ESC
    }
}

/**
 * OpenCV 4.0.0 face detection.
 * @param image the Mat to process (a single video frame)
 * @return the processed frame
 */
public static Mat getFace(Mat image) {
    // 1. Load one of the face-detection cascade files that ship with OpenCV
    CascadeClassifier faceDetector = new CascadeClassifier(
            "/Users/joylau/opencv4/opencv/data/haarcascades/haarcascade_frontalface_alt.xml");
    // 2. The matches are collected into a MatOfRect
    MatOfRect faces = new MatOfRect();
    // 3. Run the detector
    faceDetector.detectMultiScale(image, faces);
    Rect[] rects = faces.toArray();
    log.info("Detected " + rects.length + " face(s)");
    // 4. Draw a box and a label around every detected face
    for (Rect rect : rects) {
        Imgproc.rectangle(image, new Point(rect.x, rect.y),
                new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(0, 255, 0));
        Imgproc.putText(image, "Human", new Point(rect.x, rect.y),
                Imgproc.FONT_HERSHEY_SIMPLEX, 2.0, new Scalar(0, 255, 0), 1, Imgproc.LINE_AA, false);
        //Mat dst = image.clone();
        //Imgproc.resize(image, image, new Size(300, 300));
    }
    return image;
}

Background

When you deploy a group of services with docker stack, Docker places them on the cluster's nodes according to each node's resources, and as a user you have no say in that placement. How can we control it?

Environment

  1. docker 1.13.0+
  2. compose version 3+

deploy mode

  1. replicated: the default mode. You can set the number of replicas for a service, but this mode cannot decide which nodes the service lands on:
deploy:
  mode: replicated
  replicas: 2
  2. global: deploys exactly one replica of the service on every node:
deploy:
  mode: global

node labels

This approach attaches labels to nodes, then references those labels in the yaml file to decide which nodes a service may be deployed to. The list below walks through the commands; a combined example follows it.

  1. docker node ls: list the nodes
  2. docker node update --label-add role=service-1 nodeId: add the label role=service-1 to node nodeId; labels are map-style key=value pairs
  3. docker node inspect nodeId: view the node's labels
  4. docker node update --label-rm role=service-1 nodeId: remove the label
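
A quick end-to-end check of the labeling workflow (nodeId stands for a real node ID or hostname from docker node ls):

docker node update --label-add role=service-1 nodeId
# print only the labels instead of the full inspect output
docker node inspect --format '{{ json .Spec.Labels }}' nodeId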

Deploying with docker service

docker service create \
  --name nginx \
  --constraint 'node.labels.role == service-1' \
  nginx

Deploying with docker stack

deploy:
  placement:
    constraints:
      - node.labels.role == service-2

When multiple constraints are given, they are ANDed together; constraints can match both node labels and engine labels.
For example:

deploy:
  placement:
    constraints: [node.role == manager]
deploy:
  placement:
    constraints:
      - node.role == manager
      - engine.labels.operatingsystem == ubuntu 14.04

Environment

  1. docker 18.09

Notes

  1. The setup below uses several physical machines. If you are just testing, or only have one machine, you can use docker-machine to create multiple Docker hosts (see the sketch after this list)
  2. For example, create a Docker host named worker: docker-machine create -d virtualbox worker
  3. Then enter the host you just created: docker-machine ssh worker
  4. From there, treat it as an independent machine and run the steps below
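
For example, a throwaway two-host test cluster could be sketched like this (host names are arbitrary):

docker-machine create -d virtualbox manager
docker-machine create -d virtualbox worker
docker-machine ssh manager    # inside: run docker swarm init as in the steps below
docker-machine ssh worker     # inside: run the printed docker swarm join command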

Steps

  1. Initialize the swarm cluster: docker swarm init --advertise-addr 34.0.7.183
    1. On machines with multiple NICs, specify the IP address with --advertise-addr
    2. The node created this way is a manager node by default
  2. Join the swarm cluster you just created:
docker swarm join --token SWMTKN-1-1o1yfsquxasw7c7ah4t7lmd4i89i62u74tutzhtcbgb7wx6csc-1hf4tjv9oz9vpo937955mi0z2 34.0.7.183:2377

If you forget the cluster's join token, docker swarm join-token worker (or manager) prints the command for joining the cluster.

  3. List the cluster nodes: docker node ls

Service deployment

  1. Single service: docker service create --name nginx -p 80:80 --replicas 4 nginx
    The command above deploys 4 replicas of nginx; with 2 hosts in the cluster, each host ends up running 2 of them.

  2. Multiple services: use a yml file; see https://docs.docker.com/compose/compose-file/ for the syntax. The deploy command itself is sketched below.
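
For reference, deploying a compose file as a stack is a single command; the stack name traffic here is hypothetical:

docker stack deploy -c docker-compose.yml traffic
docker stack services traffic    # verify the services came up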

Command reference

docker swarm

docker swarm init: initialize a cluster
docker swarm join-token worker: show the worker join token
docker swarm join-token manager: show the manager join token
docker swarm join: join a cluster

docker stack

docker stack deploy: deploy a new stack or update an existing one
docker stack ls: list existing stacks
docker stack ps: list the tasks in a stack
docker stack rm: remove a stack
docker stack services: list the services in a stack
docker stack down: take a stack down (does not delete data)

docker node

docker node ls: list all cluster nodes
docker node rm: remove a node (-f to force)
docker node inspect: show node details
docker node demote: demote a node from manager to worker
docker node promote: promote a node from worker to manager
docker node update: update a node
docker node ps: list the tasks running on a node

docker service

docker service create: deploy a service
docker service inspect: show service details
docker service logs: view the logs of a service
docker service ls: list all services
docker service rm: remove a service (-f to force)
docker service scale: set the number of replicas of a service
docker service update: update a service

docker machine

docker-machine create: create a Docker host (commonly with -d virtualbox)
docker-machine ls: list all Docker hosts
docker-machine ssh: SSH into a host to run commands
docker-machine env: show the environment variables needed to connect to a host
docker-machine inspect: show more information about a host
docker-machine kill: kill a host
docker-machine restart: restart a host
docker-machine rm: remove a host
docker-machine scp: copy files between hosts
docker-machine start: start a host
docker-machine status: show a host's status
docker-machine stop: stop a host

Visualizing swarm cluster nodes

portainer: a powerful tool that can monitor the local machine as well as remote hosts or clusters; a remote Docker host must expose the Docker API on port 2375.

https://www.portainer.io/installation/

version: '3'
services:
  portainer:
    image: 34.0.7.183:5000/joylau/portainer:latest
    container_name: portainer
    ports:
      - 80:9000
    restart: always
    volumes:
      - /home/liufa/portainer/data:/data

@Valid and @Validated

  1. Both @Valid and @Validated are used for field validation
  2. @Valid lives in javax.validation; @Validated lives in org.springframework.validation.annotation
  3. @Validated is Spring's wrapper around @Valid, used by Spring's validation machinery; @Valid does not support validation groups

Special usage of @Validated

Sometimes one entity class needs several validation modes. For example, the entity's id is not needed when creating a record but is required when updating one:

public class Attachment {
    @Id
    @NotBlank(message = "id can not be blank!", groups = {All.class, Update.class})
    private String id;

    @NotBlank(message = "fileName can not be blank!", groups = {All.class})
    private String fileName;

    @NotBlank(message = "filePath can not be blank!", groups = {All.class})
    private String filePath;

    @Field
    private byte[] data;

    @NotBlank(message = "metaData can not be empty!", groups = {All.class})
    private String metaData;

    @NotBlank(message = "uploadTime can not be blank!", groups = {All.class})
    private String uploadTime;

    public Attachment(@NotBlank(message = "id can not be blank!", groups = {All.class, Update.class}) String id) {
        this.id = id;
    }

    public interface All {
    }

    public interface Update {
    }
}

Validating specific groups

/**
 * Add an attachment; only constraints in the All group are validated.
 */
@PostMapping("addAttachment")
public MessageBody addAttachment(@RequestParam("file") final MultipartFile multipartFile,
                                 @Validated(Attachment.All.class) Attachment attachment,
                                 BindingResult results) {
    return attachmentApiService.addAttachment(multipartFile, attachment, results);
}

/**
 * Update a single attachment; only constraints in the Update group are validated.
 */
@PostMapping("updateAttachment")
public MessageBody updateAttachment(@RequestParam(value = "file", required = false) final MultipartFile multipartFile,
                                    @Validated(Attachment.Update.class) Attachment attachment) {
    return attachmentApiService.updateAttachment(multipartFile, attachment);
}

Usage notes

  1. A constraint annotation with no groups assigned is validated every time
  2. @Validated without a groups attribute validates only the constraints that have no group
  3. @Validated with specific groups validates only the constraints assigned to those groups
  4. When one handler method takes several model objects, each needs its own BindingResult, as shown below
@RequestMapping("/addPeople")
public @ResponseBody String addPeople(@Validated People p, BindingResult result, @Validated Person p2, BindingResult result2) {
}

The error

Some incorrect operations left these containers in the Dead state:

CONTAINER ID   IMAGE                                                 COMMAND                  CREATED       STATUS   PORTS   NAMES
c21c993c5107   34.0.7.183:5000/joylau/traffic-service:2.1.7          "java -Djava.secur..."   2 weeks ago   Dead             traffic-service
dfbd1cdb31c2   34.0.7.183:5000/joylau/traffic-service-admin:1.2.1    "java -Djava.secur..."   2 weeks ago   Dead             traffic-service-admin
8778a28ab120   34.0.7.183:5000/joylau/traffic-service-data:2.0.4     "java -Djava.secur..."   2 weeks ago   Dead             traffic-service-data
65a3885e08b5   34.0.7.183:5000/joylau/traffic-service-node:1.2.3     "/bin/sh -c './nod..."   2 weeks ago   Dead             traffic-service-node
90700440e1df   34.0.7.183:5000/joylau/traffic-service-server:1.2.1   "java -Djava.secur..."   2 weeks ago   Dead             traffic-service-server

Trying to remove such a container fails:

# docker rm c21c993c5107
Error response from daemon: Driver overlay2 failed to remove root filesystem c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64: remove /var/lib/docker/overlay2/099974dbeef827a3bbd932b7b36502763482ae8df25bd80f61a288b71b0ab810/merged: device or resource busy

Solution

Grep /proc for the string that follows "root filesystem" in the error message:

# grep c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo

which gives output like this:

/proc/28032/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28033/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28034/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28035/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28036/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28037/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28038/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28039/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28040/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28041/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28042/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28043/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28044/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28045/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28046/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28047/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k
/proc/28048/mountinfo:973 957 0:164 / /var/lib/docker/containers/c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64/shm rw,nosuid,nodev,noexec,relatime shared:189 - tmpfs shm rw,size=65536k

The number between /proc/ and /mountinfo is a PID; kill those processes.

A one-liner batch script to list all the PIDs:

grep c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo | awk '{print substr($1,7,5)}'

then kill them:

grep c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo | awk '{print substr($1,7,5)}' | xargs kill -9

print is awk's main command for emitting output:

$0 is the entire current line
$1 is the first whitespace-separated field of the line
substr($1,7,5) takes 5 characters of the first field, starting at character 7 (which is where the PID begins in /proc/NNNNN/mountinfo)
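
Note that substr($1,7,5) silently truncates PIDs longer than five digits. A variant that tolerates any PID length uses grep -l to print just the matching file names and splits the path on /:

grep -l c21c993c51073f41653aa7fd37dbfd232f8439ca79fd4315a410d0b41d8b0e64 /proc/*/mountinfo \
  | awk -F/ '{print $3}' \
  | xargs -r kill -9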

Then docker rm the container.

Problem solved.

Environment

  • elasticsearch 6.4.3

Example

Let's tokenize the following sentence with the ik analyzer:

POST http://34.0.7.184:9200/_analyze/

{
  "analyzer": "ik_smart",
  "text": "关于加快建设合肥地铁七号线的通知说明"
}
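
Equivalently, with curl:

curl -X POST 'http://34.0.7.184:9200/_analyze' \
     -H 'Content-Type: application/json' \
     -d '{"analyzer": "ik_smart", "text": "关于加快建设合肥地铁七号线的通知说明"}'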

The tokenization result:

{
  "tokens": [
    { "token": "关于", "start_offset": 0,  "end_offset": 2,  "type": "CN_WORD", "position": 0 },
    { "token": "加快", "start_offset": 2,  "end_offset": 4,  "type": "CN_WORD", "position": 1 },
    { "token": "建设", "start_offset": 4,  "end_offset": 6,  "type": "CN_WORD", "position": 2 },
    { "token": "合肥", "start_offset": 6,  "end_offset": 8,  "type": "CN_WORD", "position": 3 },
    { "token": "地铁", "start_offset": 8,  "end_offset": 10, "type": "CN_WORD", "position": 4 },
    { "token": "七号", "start_offset": 10, "end_offset": 12, "type": "CN_WORD", "position": 5 },
    { "token": "线",   "start_offset": 12, "end_offset": 13, "type": "CN_CHAR", "position": 6 },
    { "token": "的",   "start_offset": 13, "end_offset": 14, "type": "CN_CHAR", "position": 7 },
    { "token": "通知", "start_offset": 14, "end_offset": 16, "type": "CN_WORD", "position": 8 },
    { "token": "说明", "start_offset": 16, "end_offset": 18, "type": "CN_WORD", "position": 9 }
  ]
}
  • If the field's analyzer is ik_smart (or both analyzer and search_analyzer are ik_smart), then every token of the phrase is searchable; turn on highlighting to see the matches clearly.

  • If analyzer is ik but search_analyzer is standard, words like 通知, 说明 and 七号 find nothing, while single characters like 线 do. The reason: standard splits the query into single characters, which only match the single-character tokens (such as 线) that ik put in the index.

POST http://34.0.7.184:9200/attachment_libs/_search

{
  "query": {
    "multi_match": {
      "query": "关于",
      "fields": [
        "fileName^1.0"
      ],
      "type": "best_fields",
      "operator": "OR",
      "slop": 0,
      "prefix_length": 0,
      "max_expansions": 50,
      "zero_terms_query": "NONE",
      "auto_generate_synonyms_phrase_query": true,
      "fuzzy_transpositions": true,
      "boost": 1
    }
  },
  "_source": {
    "includes": [
      "fileName"
    ],
    "excludes": [
      "data"
    ]
  },
  "highlight": {
    "pre_tags": [
      "<span style = 'color:red'>"
    ],
    "post_tags": [
      "</span>"
    ],
    "fields": {
      "*": {}
    }
  }
}

The search above (for 关于) returns:

{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": [ ]
  }
}

whereas searching for 线 returns:

{
  "took": 5,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 0.2876821,
    "hits": [
      {
        "_index": "attachment_libs",
        "_type": "attachment_info",
        "_id": "fd45d5be-c314-488a-99d3-041acc015377",
        "_score": 0.2876821,
        "_source": {
          "fileName": "关于加快建设合肥地铁七号线的通知说明"
        },
        "highlight": {
          "fileName": [
            "关于加快建设合肥地铁七号<span style = 'color:red'>线</span>的通知说明"
          ]
        }
      }
    ]
  }
}

Summary

  • An analyzer is used in two situations: when a document is indexed, its text fields are tokenized before being written to the inverted index; and at query time, the query string for a text field is tokenized before the inverted index is searched
  • If you want indexing and querying to use different analyzers, Elasticsearch supports that too: just add a search_analyzer parameter to the field (a sketch of such a mapping follows this list)
    1. At index time, ES only checks whether the field defines analyzer; if defined it is used, otherwise the ES default applies
    2. At query time, ES first looks for search_analyzer; if absent it falls back to analyzer, and only if that is absent too does it use the ES default
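
A sketch of such a mapping, assuming the attachment_libs index from the examples above, with fileName indexed by ik_max_word and searched with ik_smart (the analyzer pairing is illustrative, not taken from the original setup):

curl -X PUT 'http://34.0.7.184:9200/attachment_libs' \
     -H 'Content-Type: application/json' \
     -d '{
       "mappings": {
         "attachment_info": {
           "properties": {
             "fileName": {
               "type": "text",
               "analyzer": "ik_max_word",
               "search_analyzer": "ik_smart"
             }
           }
         }
       }
     }'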

.env

PRIVATE_REPO=34.0.7.183:5000
ES_VERSION=6.4.3
ELASTICSEARCH_CLUSTER_DIR=/Users/joylau/dev/idea-project/dev-app/es-doc-office/elasticsearch-cluster

docker-compose.yml

version: '2.2'
services:
  node-0:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-0
    ports:
      - 9200:9200
      - 9300:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-0:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-0:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-0
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-1:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-1
    restart: always
    ports:
      - 9201:9200
      - 9301:9300
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-1:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-1:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-1
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "discovery.zen.ping.unicast.hosts=node-0"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-2:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-2
    ports:
      - 9202:9200
      - 9302:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-2:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-2:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-2
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      # there is no node called "master" in this file; seed from existing nodes
      - "discovery.zen.ping.unicast.hosts=node-0,node-1"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-3:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-3
    ports:
      - 9203:9200
      - 9303:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-3:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-3:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-3
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "discovery.zen.ping.unicast.hosts=node-0,node-1,node-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  node-4:
    image: ${PRIVATE_REPO}/joylau/es-doc:${ES_VERSION}
    container_name: node-4
    ports:
      - 9204:9200
      - 9304:9300
    restart: always
    volumes:
      - ${ELASTICSEARCH_CLUSTER_DIR}/data/node-4:/usr/share/elasticsearch/data
      - ${ELASTICSEARCH_CLUSTER_DIR}/logs/node-4:/usr/share/elasticsearch/logs
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=es-doc-office
      - node.name=node-4
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "discovery.zen.ping.unicast.hosts=node-0,node-1,node-3"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
networks:
  esnet:

Pitfalls

  1. Permissions on the mounted data and log directories
  2. The vm.max_map_count kernel setting
  3. On macOS, make sure Docker's memory limit is set high enough

env.init

#!/usr/bin/env bash
mkdir -p /home/liufa/es-data/data/{node-0,node-1,node-2,node-3,node-4} && echo es-data directory created success || echo es-data directory created failure && \
mkdir -p /home/liufa/es-data/logs/{node-0,node-1,node-2,node-3,node-4} && echo es-logs directory created success || echo es-logs directory created failure && \
groupadd elasticsearch && \
useradd elasticsearch -g elasticsearch && \
chown -R elasticsearch:elasticsearch /home/liufa/es-data/* && \
chmod -R 777 /home/liufa/es-data/* && \
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf && \
sysctl -p