CentOS 7: Building a Log Collection Pipeline with Elasticsearch, Logstash, Kibana, and Filebeat (Detailed Guide)
1. Prerequisites
(1) Download the elasticsearch, logstash, kibana, and filebeat packages and upload all four archives to /opt/elk.
(2) Adjust system parameters and create an elk user (Elasticsearch must be started as a non-root user).
Make sure the system has enough resources to start Elasticsearch.
Set kernel parameters:

vi /etc/sysctl.conf

# add the following parameter
vm.max_map_count=655360

Run the following command to make the change take effect:

sysctl -p
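To confirm the kernel setting actually took effect, you can read it back (a quick optional check, not part of the original steps):

sysctl vm.max_map_count
# expected output: vm.max_map_count = 655360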
Set resource limits:

vi /etc/security/limits.conf

# add/modify the following lines
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 131072
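These limits only apply to new login sessions. After logging back in as the elk user, an optional sanity check with ulimit should reflect the soft limits set above:

ulimit -n    # max open files, should print 65536
ulimit -u    # max user processes, should print 65536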
Set per-user resource limits:

vi /etc/security/limits.d/20-nproc.conf

# set the limit for the elk user
elk soft nproc 65536
Add the startup user and set permissions:

groupadd elk            # create the elk group
useradd -g elk elk      # create the elk user and add it to the elk group

# change the owner of the installation directory
chown -R elk:elk /opt/elk
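Optionally, verify that the ownership change was applied under /opt/elk:

ls -l /opt/elk
# every entry should now show elk elk as owner and group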
2. Edit the Elasticsearch configuration files
vim jvm.options

# Adjust the JVM heap. By default Elasticsearch starts with 2 GB of heap;
# set a value appropriate for your machine.
-Xms256m
-Xmx256m
vim elasticsearch.yml

# ---------------------------------- Cluster -----------------------------------
cluster.name: es-server
# ------------------------------------ Node ------------------------------------
node.name: node-1
node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# ---------------------------------- Network -----------------------------------
network.host: 192.168.42.112
http.port: 9200
3. Edit the Kibana configuration file
vim kibana.yml

server.port: 5601
server.host: "192.168.42.112"
elasticsearch.url: "http://192.168.42.112:9200"
kibana.index: ".kibana"
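Note: elasticsearch.url is the setting name used by Kibana 6.x. If you are on Kibana 7.x or later, the equivalent setting is elasticsearch.hosts, which takes an array, for example:

elasticsearch.hosts: ["http://192.168.42.112:9200"]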
4. Edit the Logstash configuration files
vim jvm.options

# Adjust the JVM heap (the default heap Logstash starts with is larger than
# this demo needs); set a value appropriate for your machine.
-Xms256m
-Xmx256m

cd config
vim dev.conf
input {
  beats {
    host => "192.168.42.112"
    port => "5044"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => ["datetime"]
  }
  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => "192.168.42.112:9200"
    index => "access_log"
  }
  stdout { codec => rubydebug }
}
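For reference, %{COMBINEDAPACHELOG} matches access-log lines in the Apache "combined" format, roughly like the made-up example below; grok extracts fields such as clientip and timestamp from it, which the date and geoip filters above then consume:

203.0.113.10 - - [14/Nov/2018:10:27:10 +0800] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/start.html" "Mozilla/5.0"

Tomcat only produces this format if its access log valve is set to the combined pattern, so adjust the grok pattern to whatever your logs actually look like.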
5. Edit the Filebeat configuration file
vim filebeat.yml

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/elk/apache-tomcat-9.0.13/logs/*.log
    #- c:\programdata\elasticsearch\logs\*

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.42.112:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
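Before starting Filebeat, recent Filebeat versions can validate the configuration and check connectivity to the Logstash output for you (optional, but it catches typos early):

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml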
6. Start Elasticsearch, Kibana, Filebeat, and Logstash in order

Start Elasticsearch (as the elk user, from the Elasticsearch directory):

su elk
./bin/elasticsearch -d
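Once Elasticsearch is up, a curl against the HTTP host and port configured above should return a small JSON banner containing the cluster name and version:

curl http://192.168.42.112:9200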
Start Kibana:

./bin/kibana
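./bin/kibana runs in the foreground. If you want it to survive closing the shell, one common option (not from the original article) is to run it through nohup:

nohup ./bin/kibana > kibana.log 2>&1 &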
Start Filebeat:

./filebeat -e -c filebeat.yml -d "publish"
Start Logstash.

Test whether your configuration file is correct (this parses the configuration file and reports any errors):

bin/logstash -f dev.conf --config.test_and_exit

Then start Logstash with automatic config reloading enabled, so you do not have to stop and restart it every time you change the configuration file:

bin/logstash -f dev.conf --config.reload.automatic
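After Filebeat has shipped a few log lines through Logstash, the access_log index should appear in Elasticsearch; an optional check with the _cat API confirms it:

curl 'http://192.168.42.112:9200/_cat/indices?v'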
7. View the collected logs in Kibana
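Open Kibana at http://192.168.42.112:5601, go to Management > Index Patterns (the Kibana 6.x location), create an index pattern matching access_log, and browse the documents under Discover. You can also pull a sample document straight from Elasticsearch to confirm data is flowing:

curl 'http://192.168.42.112:9200/access_log/_search?pretty&size=1'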
Source: blog.csdn.net, author: 血煞风雨城2018. Copyright belongs to the original author; please contact the author before reposting.
Original link: blog.csdn.net/qq_31905135/article/details/84138652