✅ Gitlab: source code repository
✅ Gitlab-runner: CI/CD runner for GitLab
✅ Code-server: VS Code in the browser
✅ Registry: private Docker image registry
✅ Portainer: container management UI
✅ Logstash: log collection pipeline
✅ Elasticsearch: distributed search engine
✅ Filebeat: log shipping agent
✅ Kibana: data visualization UI
✅ Redis: distributed data store
✅ Nginx: port forwarding and reverse proxy
✅ Elasticsearch-HQ: ES cluster management
✅ MySQL: database
✅ RabbitMQ: message queue
✅ Extensible to Redis, ES, and RabbitMQ clusters
- 8001: Gitlab root:hb123456
- 8003: Code-server root:hb123456
- 8004: Registry
- 9000: Portainer
- 5000: Logstash TCP input.
- 9200: Elasticsearch elastic:changeme
- 9300: Elasticsearch TCP transport
- 5601: Kibana elastic:changeme
- 6379: Redis
- 5001: ES-HQ
- 5672: RabbitMQ
- 3306: MySQL
- 1. ELK architecture
- 2. Basic workflow
- 3. Install Docker
- 4. Install JDK
- 5. Install Redis
- 6. Install Filebeat
- 7. Install Logstash
- 8. Install Elasticsearch
- 9. Install Kibana
- 10. Install Nginx
- 11. Install MySQL
- 12. Install Gitlab-runner
- 13. ES plugin installation
- 14. Elasticsearch-SQL
- 15. Final verification
- 16. Redis cluster deployment
- 17. Notes and caveats
- 18. References
- Elasticsearch is a distributed search and analytics engine designed above all to be stable, horizontally scalable, and easy to manage.
- Logstash is a flexible pipeline for collecting, transforming, and forwarding data.
- Kibana is a data visualization platform that turns data into rich, interactive charts.
- Combining the three (collection and processing, storage and analysis, and visualization) gives you ELK.
- Filebeat (or a Logstash shipper) collects log entries and pushes them to Redis.
- Redis acts as a message queue here, buffering entries so logs are not lost if Elasticsearch is temporarily unavailable; under normal operation entries are forwarded to ES almost immediately.
- Logstash reads log entries from Redis and sends them to Elasticsearch.
- Elasticsearch stores the logs and makes them searchable.
- Kibana is the visualization front end for Elasticsearch.
- Once deployed, the services are exposed through Nginx port forwarding and reverse proxying.
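The pipeline above (Filebeat → Redis → Logstash → ES) can be sketched as a minimal Logstash config. The hosts, the Redis list key `filebeat`, and the index pattern are placeholder assumptions, not values from this deployment; adjust them before use.

```shell
# Write a minimal Logstash pipeline: read events from a Redis list (the buffer
# queue described above) and ship them to Elasticsearch. Host addresses, the
# list key, and the index name are illustrative placeholders.
cat > /tmp/logstash-redis.conf <<'EOF'
input {
  redis {
    host      => "127.0.0.1"   # Redis acting as the log buffer
    port      => 6379
    data_type => "list"
    key       => "filebeat"    # list that Filebeat pushes to (assumed)
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-nginx-%{+YYYY-MM-dd}"  # daily index (placeholder name)
  }
}
EOF
```

Start it with `/usr/local/logstash/bin/logstash -f /tmp/logstash-redis.conf` once Redis and ES are up.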
yum install docker-ce docker-ce-cli containerd.io docker-compose
systemctl stop docker
rm -rf /var/lib/docker
vim /etc/sysconfig/docker-storage # change the Docker storage/install path
vim /usr/lib/systemd/system/docker.service
vim /etc/sysconfig/docker
chown -R root:root docker
systemctl daemon-reload
systemctl restart docker
vi /etc/yum.repos.d/centos.repo
# paste in the contents of base.repo
es_version=7.9.0
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum install -y filebeat
yum install -y redis
yum install -y logstash
yum install -y elasticsearch
yum install -y kibana
yum install -y nginx httpd-tools
java -version
# check where XXX was installed
rpm -ql XXX
# or download the latest Java JDK from https://www.oracle.com/java/technologies/javase-downloads.html
## redis-5.0.4 is used here; pick a version that suits your environment
redis_version=redis-5.0.4
wget http://download.redis.io/releases/${redis_version}.tar.gz
tar -zxf $redis_version.tar.gz -C /usr/local
mv /usr/local/$redis_version/ /usr/local/redis
cd /usr/local/redis
make MALLOC=libc
make
make install
PATH=/usr/local/redis/src:$PATH
redis-server redis.conf &
# change the password
# sed -i "s/# requirepass foobared/requirepass 123456/g" redis.conf
# sudo service redis restart
# check the Redis listening port
netstat -lnp|grep redis
# verify node role
redis-cli INFO|grep role
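The commented-out `sed` above can be exercised safely against a copy of `redis.conf` first, so the substitution pattern is verified before the live config is touched. The password `123456` is the example value from the comment; use a real secret in production.

```shell
# Work on a copy of redis.conf; if it is not present, synthesize the one
# line the sed pattern targets so the demonstration still runs.
cp /usr/local/redis/redis.conf /tmp/redis.conf 2>/dev/null || \
  printf '# requirepass foobared\n' > /tmp/redis.conf
# uncomment the directive and set the (example) password
sed -i 's/# requirepass foobared/requirepass 123456/' /tmp/redis.conf
grep '^requirepass' /tmp/redis.conf
```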
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${es_version}-linux-x86_64.tar.gz
tar -xzvf filebeat-${es_version}-linux-x86_64.tar.gz
mv filebeat-${es_version}-linux-x86_64 /usr/local/filebeat
cd /usr/local/filebeat
# edit filebeat.yml
./filebeat setup
./filebeat -e
wget https://artifacts.elastic.co/downloads/logstash/logstash-${es_version}.zip
unzip logstash-${es_version}.zip
mv logstash-${es_version} /usr/local/logstash
/usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
/usr/local/logstash/bin/logstash -f ./logstash.conf
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${es_version}-linux-x86_64.tar.gz
tar -xzvf elasticsearch-${es_version}-linux-x86_64.tar.gz
mv elasticsearch-${es_version}-linux-x86_64 /usr/local/elasticsearch
# blank out existing (non-comment) entries in limits.conf before appending fresh ones
cat /etc/security/limits.conf | grep -v "#" | while read line
do
sed -i "s/${line}/ /" /etc/security/limits.conf
done
echo 'root soft memlock unlimited' >> /etc/security/limits.conf
echo 'root hard memlock unlimited' >> /etc/security/limits.conf
echo 'root soft nofile 65536' >> /etc/security/limits.conf
echo 'root hard nofile 65536' >> /etc/security/limits.conf
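The loop above blanks every existing non-comment entry and then re-appends the four settings, which is destructive and not idempotent. A gentler variant appends a line only when it is missing; sketched here on a scratch file rather than the real `/etc/security/limits.conf`.

```shell
# Idempotent append: each setting is added at most once, so re-running the
# script does not duplicate entries. Demonstrated on a scratch file.
cfg=/tmp/limits-demo.conf
: > "$cfg"
append_once() {
  # -x matches the whole line, -F disables regex interpretation
  grep -qxF "$1" "$cfg" || echo "$1" >> "$cfg"
}
append_once 'root soft memlock unlimited'
append_once 'root hard memlock unlimited'
append_once 'root soft nofile 65536'
append_once 'root hard nofile 65536'
append_once 'root soft nofile 65536'  # repeat call is a no-op
```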
# stop any elasticsearch process that may already be running
ps -aux | grep elasticsearch | grep -v "grep" | awk '{print $2}' | xargs kill -9
cd /usr/local/elasticsearch/config/
# interactive input required; proceed carefully
read -p "Input elasticsearch ip:" es_ip
sed -i "s/network.host: 192.168.0.1/network.host: $es_ip/" elasticsearch.yml
sed -i "s/ping.unicast.hosts: \[.*\]/ping.unicast.hosts: \[\"$es_ip:9300\"\]/" elasticsearch.yml
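The two `sed` substitutions above can be sanity-checked non-interactively against a sample `elasticsearch.yml` before editing the real one. The address `10.0.0.5` and the sample file contents are placeholders.

```shell
# Rehearse the config edits on a throwaway elasticsearch.yml.
es_ip=10.0.0.5   # placeholder node address
cat > /tmp/elasticsearch.yml <<'EOF'
network.host: 192.168.0.1
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
EOF
# same patterns as used against the real config above
sed -i "s/network.host: 192.168.0.1/network.host: $es_ip/" /tmp/elasticsearch.yml
sed -i "s/ping.unicast.hosts: \[.*\]/ping.unicast.hosts: \[\"$es_ip:9300\"\]/" /tmp/elasticsearch.yml
cat /tmp/elasticsearch.yml
```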
/usr/local/elasticsearch/bin/elasticsearch -d
curl -i -XGET 'localhost:9200/_count?pretty'
netstat -antp |grep 9200
curl http://127.0.0.1:9200/
wget https://artifacts.elastic.co/downloads/kibana/kibana-${es_version}-linux-x86_64.tar.gz
tar xzvf kibana-${es_version}-linux-x86_64.tar.gz
mv kibana-${es_version}-linux-x86_64 /usr/local/kibana
# edit kibana.yml
# install screen so kibana can run in the background (optional; any other backgrounding method works)
yum -y install screen
screen ./bin/kibana
#./bin/kibana
netstat -antp |grep 5601
curl localhost:5601
## download and unpack the source tarball
nginx_version=1.14.0
wget -c https://nginx.org/download/nginx-${nginx_version}.tar.gz
tar zxf nginx-${nginx_version}.tar.gz
cd nginx-${nginx_version}
## build and install
./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'
make
make install
## configure nginx
vi /etc/nginx/nginx.conf
# include /etc/nginx/conf.d/*.conf
# start the Nginx service
nginx -t
nginx -s reload
sudo systemctl enable nginx
sudo systemctl start nginx
open http://IP:5601
wget https://repo.mysql.com//mysql80-community-release-el8-1.noarch.rpm
yum localinstall mysql80-community-release-el8-1.noarch.rpm -y
yum -y install mysql-community-server
# MariaDB is an alternative to mysql-community-server; install one or the other:
# yum install -y mariadb-server
# systemctl start mariadb.service
# systemctl enable mariadb.service
systemctl start mysqld
systemctl enable mysqld
systemctl daemon-reload
# manual initialization is only needed for tarball installs:
# ./mysqld --defaults-file=/etc/my.cnf --basedir=/usr/local/mysql/ --datadir=/data/mysql/ --user=mysql --initialize
# service mysql start
cat /var/log/mysqld.log | grep password # find the generated root password
mysql -uroot -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your-password'; # change the password
mysql> exit; # log out and log back in
# grant remote access
mysql -uroot -p
mysql> create user 'root'@'%' identified with mysql_native_password by 'your-password';
mysql> grant all privileges on *.* to 'root'@'%' with grant option;
mysql> flush privileges;
mysql> exit;
# open port 3306 in the firewall
systemctl start firewalld
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload
# set the default encoding to UTF-8: edit /etc/my.cnf and add under [mysqld]:
character_set_server=utf8
init_connect='SET NAMES utf8'
# restart mysql after saving
systemctl restart mysqld
mysql> show variables like '%character%'; # check the encoding
# on Alibaba Cloud, also open port 3306 in the security group rules
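The `my.cnf` edit above can be scripted rather than done by hand. The sketch below inserts the two settings right after the `[mysqld]` header, shown on a scratch copy with placeholder contents (the `a` append command is a GNU sed feature).

```shell
# Create a minimal sample my.cnf, then insert the UTF-8 settings under [mysqld].
cat > /tmp/my.cnf <<'EOF'
[mysqld]
datadir=/var/lib/mysql
EOF
# appended in reverse so they end up in the documented order
sed -i "/^\[mysqld\]/a init_connect='SET NAMES utf8'" /tmp/my.cnf
sed -i '/^\[mysqld\]/a character_set_server=utf8' /tmp/my.cnf
cat /tmp/my.cnf
```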
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
sudo chmod +x /usr/local/bin/gitlab-runner
# optional: install docker (required if you choose the docker executor; a registry mirror/accelerator helps)
curl -sSL https://get.docker.com/ | sh
# create an unprivileged user to run gitlab-runner, then install and start it
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
On first use, register the runner from inside the container:
[root@localhost ~]# gitlab-runner register # register the runner with GitLab
Runtime platform arch=amd64 os=linux pid=12351 revision=d0b76032 version=12.0.2
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://server.muguayun.top:20080/
Please enter the gitlab-ci token for this runner:
yx6oVQYxrLxczyazysF9
Please enter the gitlab-ci description for this runner:
[localhost.localdomain]: canon_runner
Please enter the gitlab-ci tags for this runner (comma separated):
gitlab-runner-01
Registering runner... succeeded runner=yx6oVQYx
Please enter the executor: docker, docker-ssh, ssh, docker-ssh+machine, parallels, shell, virtualbox, docker+machine, kubernetes:
# choose the executor (shell is recommended for flexibility). ⚠️ If you pick docker here, each build spins up a fresh docker image inside the runner and the script commands execute inside that image.
docker
Please enter the default Docker image (e.g. ruby:2.6):
centos:7
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
[root@localhost ~]# gitlab-runner restart # restart gitlab-runner
[root@localhost ~]# gitlab-runner list # list configured runners
Runtime platform arch=amd64 os=linux pid=12379 revision=d0b76032 version=12.0.2
Listing configured runners ConfigFile=/etc/gitlab-runner/config.toml
gitlab-runner-01 Executor=docker Token=KAsxjVbByKauYnNMSKHY URL=http://192.168.31.130/
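With the runner registered, a minimal `.gitlab-ci.yml` will get a pipeline running. The stage and script below are illustrative placeholders; the `gitlab-runner-01` tag matches the tag entered during registration above.

```shell
# Write a minimal CI definition that routes jobs to the registered runner.
# (Written to /tmp here; in a real project this file sits at the repo root.)
cat > /tmp/.gitlab-ci.yml <<'EOF'
stages:
  - build

build-job:
  stage: build
  tags:
    - gitlab-runner-01   # tag of the runner registered above
  script:
    - echo "building..."
EOF
```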
GitLab registration
# Git global setup
git config --global user.name "cbd"
git config --global user.email "2829969299@qq.com"
# Push an existing folder
git init
git remote add origin ssh://git@abc.git
git add .
git commit -m "Initial commit"
git push -u origin master
Notes:
- The runner executes jobs as the gitlab-runner user by default.
- So for anything that needs SSH access, add the gitlab-runner user's public key to the target deployment server for passwordless login.
- The /home/gitlab-runner directory permissions must be changed to 777 manually.
- The runner's config file lives at /etc/gitlab-runner/config.toml
- ssh-keygen -t rsa -C "your.email@example.com" -b 4096
- HQ: monitor and manage the ES cluster and run queries from a web UI; supports SQL-to-DSL translation
docker run -p 5000:5000 elastichq/elasticsearch-hq
http://es_user:es_password@es_ip:es_port
- ik: Chinese word segmentation (analysis) plugin
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v${es_version}/elasticsearch-analysis-ik-${es_version}.zip
- SQL: query Elasticsearch using SQL syntax
./bin/elasticsearch-plugin install https://github.com/NLPchina/elasticsearch-sql/releases/download/${es_version}.0/elasticsearch-sql-${es_version}.0.zip
- Cerebro: view ES cluster heap, CPU, memory, and disk usage.
# note: Cerebro is versioned independently of Elasticsearch (e.g. 0.9.2), so use its own version variable
cerebro_version=0.9.2
wget https://github.com/lmenezes/cerebro/releases/download/v${cerebro_version}/cerebro-${cerebro_version}.tgz
tar xzf cerebro-${cerebro_version}.tgz
# start on a chosen port
cerebro-${cerebro_version}/bin/cerebro -Dhttp.port=8088
bin/logstash-plugin install logstash-input-jdbc
bin/logstash-plugin install logstash-output-elasticsearch
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.zip --no-check-certificate
unzip mysql-connector-java-5.1.46.zip
# the zip contains the MySQL JDBC driver jar, used for logstash's jdbc_driver_library parameter
Edit the nginx config and add the following inside the http block:
log_format json '{"@timestamp":"$time_iso8601",'
'"@version":"1",'
'"client":"$remote_addr",'
'"url":"$uri",'
'"status":"$status",'
'"domain":"$host",'
'"host":"$server_addr",'
'"size":"$body_bytes_sent",'
'"responsetime":"$request_time",'
'"referer":"$http_referer",'
'"ua":"$http_user_agent"'
'}';
# switch access_log to the json format defined above
access_log logs/elk.access.log json;
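A quick way to confirm the `log_format` above actually emits valid JSON is to fill the nginx variables with sample values and parse the result. This assumes `python3` is available on the host; the sample values are invented.

```shell
# One log line as the format above would render it, with placeholder values.
sample='{"@timestamp":"2024-01-01T00:00:00+08:00","@version":"1","client":"127.0.0.1","url":"/","status":"200","domain":"example.com","host":"10.0.0.1","size":"612","responsetime":"0.001","referer":"-","ua":"curl/7.61.1"}'
# json.load raises (non-zero exit) on malformed input
echo "$sample" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'
```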
Try it out: logstash -f /etc/logstash/conf.d/full.conf
Try it out: logstash -f /etc/logstash/conf.d/redis-out.conf
Because ES keeps logs indefinitely, old indices need periodic cleanup; the command below deletes the indices dated $n days ago:
curl -X DELETE "http://xx.xx.com:9200/logstash-*-$(date +%Y-%m-%d -d "-$n days")"
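The deletion command interpolates a computed date; computing the target name separately makes it visible before anything is deleted. `n` is the retention period in days, and `xx.xx.com` stays a placeholder host. GNU `date -d` is assumed.

```shell
# Build the index pattern for logs from n days ago, then inspect it before
# deleting anything.
n=30
old_index="logstash-*-$(date +%Y-%m-%d -d "-$n days")"
echo "$old_index"   # e.g. logstash-*-2024-01-01
# curl -X DELETE "http://xx.xx.com:9200/${old_index}"   # uncomment to actually delete
```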
Server IP | Redis port | Sentinel port | Role | NIC name | Netmask bits |
---|---|---|---|---|---|
172.21.0.9 | 6379 | 26379 | master | eno16777984 | 24 |
172.21.0.11 | 6379 | 26379 | slave | eno16777984 | 24 |
cd /etc/nginx/stream.d
cat test-redis.conf
upstream testproxy {
server 172.21.0.9:6379;
server 172.21.0.11:6379 backup;
}
server {
listen 56379;
proxy_pass testproxy;
access_log /var/log/changsha-rpa/myservic.log proxy;
}
## reload nginx gracefully
nginx -t
nginx -s reload
# install Redis
# enable start on boot
chkconfig --add redis
chkconfig redis on
## configure the Redis master
appendonly yes
## configure the Redis slave
slaveof 172.21.0.9 6379
## start the Redis services. Note: always start the Master first, then the Slave!
# verify node roles: redis-cli INFO|grep role
# verify that the master accepts reads and writes while the slave is read-only
The following sentinel setup is configured identically on both the master and slave nodes
## create the sentinel data directory
mkdir -p /var/redis/redis-sentinel
cat sentinel.conf
port 26379
sentinel announce-ip "172.21.0.9"
sentinel monitor mymaster 172.21.0.9 6379 2
sentinel known-slave mymaster 172.21.0.11 6379
dir "/var/redis/redis-sentinel"
logfile "/var/log/redis_sentinel.log"
protected-mode no
daemonize yes
sentinel deny-scripts-reconfig yes
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 15000
# sentinel auth-pass mymaster 123456
sentinel config-epoch mymaster 1
sentinel leader-epoch mymaster 1
sentinel current-epoch 1
Start the sentinels in order (master first, then slave): redis-sentinel ./sentinel.conf &
Check sentinel status: redis-cli -p 26379 INFO Sentinel
Testing
- Clients must now connect through the sentinel port
- Master/slave failover: if the master dies, a slave is promoted; when the old master comes back it rejoins as a slave
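The failover check described above boils down to reading the `role` field out of `redis-cli INFO` output. The sketch below parses canned output so it runs without a live Redis; in practice, feed it `redis-cli -p 26379 INFO` (or the data port) instead.

```shell
# Sample INFO replication output standing in for a live redis-cli call.
info_output='# Replication
role:master
connected_slaves:1'
# extract the value after "role:"
role=$(printf '%s\n' "$info_output" | awk -F: '/^role:/{print $2}')
echo "current role: $role"
```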
- If you use the registry, every server needs the configuration below. Replace the host with your actual server, and restart the docker service after editing.
vim /etc/docker/daemon.json
{
  "insecure-registries": [
    "server.abc.com:8004"
  ]
}
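JSON does not allow comments, so `daemon.json` must stay strictly JSON; it is worth parsing it before restarting Docker, since a malformed file prevents the daemon from starting. The registry host is a placeholder and `python3` is assumed available.

```shell
# Write the daemon.json shown above (to /tmp for this demonstration) and
# verify it parses as JSON before it ever reaches /etc/docker/.
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries": ["server.abc.com:8004"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid"
```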
- fix permissions on filebeat/config/filebeat.yml:
chmod go-w filebeat.yml
chown root:root filebeat.yml  # if necessary
- elasticsearch/data must be writable:
chmod 777 elasticsearch/data
Create an index pattern via the Kibana API:
$ curl -XPOST -D- 'http://localhost:5601/api/saved_objects/index-pattern' \
-H 'Content-Type: application/json' \
-H 'kbn-version: 7.8.0' \
-u elastic:changeme \
-d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'
Commit a container's changes to a new image name:
docker commit -p container image_name:tag
docker commit -p 02821380a8c5 code-server-3.1.1:ningboyinhang_image
DSL
{
"query":{
"bool":{
"must":[
],
"must_not":[
],
"should":[
]
}
},
"aggs":{
"my_agg":{
"terms":{
"field":"user",
"size":10
}
}
},
"highlight":{
"pre_tags":[
"<em>"
],
"post_tags":[
"</em>"
],
"fields":{
"body":{
"number_of_fragments":1,
"fragment_size":20
},
"title":{
}
}
},
"size":20,
"from":100,
"_source":[
"title",
"id"
],
"sort":[
{
"_id":{
"order":"desc"
}
}
]
}
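The DSL skeleton above is plain JSON, so it can be checked for well-formedness before being POSTed to `_search`. The sketch below validates a condensed subset of it; the cluster URL is a local placeholder and `python3` is assumed available.

```shell
# Save a condensed version of the query skeleton and confirm it parses.
cat > /tmp/query.json <<'EOF'
{"query":{"bool":{"must":[],"must_not":[],"should":[]}},
 "size":20,"from":100,"sort":[{"_id":{"order":"desc"}}]}
EOF
python3 -m json.tool /tmp/query.json > /dev/null && echo "query is valid JSON"
# against a live cluster:
# curl -s -H 'Content-Type: application/json' -d @/tmp/query.json \
#   'http://localhost:9200/logstash-*/_search'
```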
Deploy Sentry
docker run -d --name sentry-redis --restart=always redis
docker run -d --name sentry-postgres -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=sentry --restart=always postgres
key=`docker run --rm sentry config generate-secret-key`
docker run -it --rm -e SENTRY_SECRET_KEY="${key}" --link sentry-postgres:postgres --link sentry-redis:redis sentry upgrade
docker run -d -p 9000:9000 --name my-sentry -e SENTRY_SECRET_KEY="${key}" --link sentry-redis:redis --link sentry-postgres:postgres --restart=always sentry
docker run -d --name sentry-cron -e SENTRY_SECRET_KEY="${key}" --link sentry-postgres:postgres --link sentry-redis:redis sentry run cron
docker run -d --name sentry-worker-1 -e SENTRY_SECRET_KEY="${key}" --link sentry-postgres:postgres --link sentry-redis:redis sentry run worker