Elasticsearch + Logstash + Kibana (6.7.1) Installation and Deployment
In this walkthrough, all three components (Elasticsearch, Logstash, and Kibana) are installed on a single virtual machine for personal learning and testing.
I. Deploying Elasticsearch
1. Download the installation package
Download it from the official Elastic download site and select the Elasticsearch component.
2. Create an es user
Elasticsearch cannot be started as the root user, so create a dedicated user and group:
[root@s133061 elasticsearch-6.7.1]# useradd es
[root@s133061 elasticsearch-6.7.1]# passwd es
Changing password for user es.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
3. Upload and extract the installation package
[root@s133061 elk]# pwd
/hadoop/elk
[root@s133061 elk]# ls
elasticsearch-6.7.1.tar.gz kibana-6.7.1-linux-x86_64.tar.gz logstash-6.7.1.tar.gz
[root@s133061 elk]# tar -xvf elasticsearch-6.7.1.tar.gz -C /home/es/
4. Change ownership of the Elasticsearch directory to the es user and group
[root@s133061 elk]# cd /home/es/
[root@s133061 es]# ls
elasticsearch-6.7.1
[root@s133061 es]# chown -R es:es elasticsearch-6.7.1
[root@s133061 es]# ll
total 0
drwxr-xr-x 8 es es 143 Apr 3 2019 elasticsearch-6.7.1
5. Modify the configuration file
[root@s133061 es]# cd elasticsearch-6.7.1/
[root@s133061 elasticsearch-6.7.1]# ls
bin config lib LICENSE.txt logs modules NOTICE.txt plugins README.textile
[root@s133061 elasticsearch-6.7.1]# cd config/
[root@s133061 config]# ls
elasticsearch.yml jvm.options log4j2.properties role_mapping.yml roles.yml users users_roles
Switch to the es user, then create the data and log directories for Elasticsearch first:
[es@s133061 elasticsearch-6.7.1]$ mkdir -p data/es
[es@s133061 elasticsearch-6.7.1]$ mkdir -p data/logs/es
Next, edit the configuration file:
[es@s133061 elasticsearch-6.7.1]$ pwd
/home/es/elasticsearch-6.7.1
[es@s133061 elasticsearch-6.7.1]$ cd config/
[es@s133061 config]$ vim elasticsearch.yml
Modify the following settings:
path.data: /home/es/elasticsearch-6.7.1/data/es
path.logs: /home/es/elasticsearch-6.7.1/data/logs/es
network.host: 10.241.133.61
http.port: 9200
# The two settings below enable cross-origin (CORS) requests
http.cors.enabled: true
http.cors.allow-origin: '*'
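Optionally, you can also adjust the JVM heap in config/jvm.options. The values below are only a sketch for a small learning VM (my assumption, not part of the original setup); keep -Xms and -Xmx equal:
-Xms512m
-Xmx512m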
6. Adjust Linux system resource limits
The errors below (items a to d) appeared when I first tried to start Elasticsearch after the configuration changes above, together with the fix for each. It is recommended to apply all of them before starting Elasticsearch; a consolidated limits.conf example follows after item d.
a.max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
Edit /etc/security/limits.conf and add the following entries to raise the maximum number of open file descriptors for the es user's processes to at least 65536:
es soft nofile 65536
es hard nofile 65536
Switch to the es user and check the current limits with the following two commands:
ulimit -Hn
ulimit -Sn
Note: the new limits only take effect after the user logs out and logs back in.
b. max number of threads [3818] for user [es] is too low, increase to at least [4096]
The maximum number of threads is too low. Edit /etc/security/limits.conf and add:
es - nproc 4096
# or
es soft nproc 4096
es hard nproc 4096
Switch to the es user and check the current maximum thread limits with the following two commands:
ulimit -Hu
ulimit -Su
Note: the new limits only take effect after the user logs out and logs back in.
c. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Edit /etc/sysctl.conf and append the following line at the end:
vm.max_map_count=262144
Then run sysctl -p to apply the change.
d. memory locking requested for elasticsearch process but memory is not locked
Edit /etc/security/limits.conf and add:
* soft memlock unlimited
* hard memlock unlimited
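For reference, this is what the combined additions to /etc/security/limits.conf look like after applying fixes a, b, and d (a consolidated sketch based on the values above; adjust them to your own environment):
# /etc/security/limits.conf -- entries for the es user
es    soft    nofile     65536
es    hard    nofile     65536
es    soft    nproc      4096
es    hard    nproc      4096
*     soft    memlock    unlimited
*     hard    memlock    unlimited
After logging out and back in as es, the values can be confirmed with ulimit -Hn / ulimit -Sn (open files), ulimit -Hu / ulimit -Su (threads) and sysctl vm.max_map_count.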
7. Start Elasticsearch
[es@s133061 elasticsearch-6.7.1]$ cd bin/
[es@s133061 bin]$ nohup ./elasticsearch &
[1] 23324
[es@s133061 bin]$ nohup: ignoring input and appending output to ‘nohup.out’
Open the URL http://10.241.133.61:9200/ in a browser.
If a response like the following appears, Elasticsearch started successfully:
{
  "name" : "GRQFxQq",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "j916An9yQ9GSJ6eA5qt_-w",
  "version" : {
    "number" : "6.7.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "2f32220",
    "build_date" : "2019-04-02T15:59:27.961366Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
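Cluster health can also be checked from the command line; the call below is a minimal sketch using the node address configured earlier:
curl "http://10.241.133.61:9200/_cluster/health?pretty"
A status of green or yellow means the node is serving requests; yellow is normal for a single node because replica shards cannot be allocated.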
8. Stop command
netstat -ntlp | grep 9200 | awk '{print $7}' | awk -F '/' '{print $1}' | xargs kill -9
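Note that kill -9 terminates the process immediately. A gentler alternative (my suggestion, not from the original write-up) is to send the default SIGTERM so Elasticsearch can shut down cleanly:
netstat -ntlp | grep 9200 | awk '{print $7}' | awk -F '/' '{print $1}' | xargs kill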
II. Deploying Logstash
1. Download the installation package
Download it from the official Elastic download site and select the Logstash component.
2. Extract the archive
[root@s133061 elk]# pwd
/hadoop/elk
[root@s133061 elk]# tar -xvf logstash-6.7.1.tar.gz
[root@s133061 elk]# ls
elasticsearch-6.7.1.tar.gz kibana-6.7.1-linux-x86_64.tar.gz logstash-6.7.1 logstash-6.7.1.tar.gz
[root@s133061 elk]# cd logstash-6.7.1/
[root@s133061 logstash-6.7.1]# ls
bin config CONTRIBUTORS data Gemfile Gemfile.lock lib LICENSE.txt logstash-core logstash-core-plugin-api modules NOTICE.TXT tools vendor x-pack
[root@s133061 logstash-6.7.1]# mkdir zhaoyd
[root@s133061 logstash-6.7.1]# cd zhaoyd
3. Create an index in Elasticsearch
Create the index:
[root@s133061 zhaoyd]# curl -XPUT http://10.241.133.61:9200/zydtest
{"acknowledged":true,"shards_acknowledged":true,"index":"zydtest"}
Query the newly created index:
[root@s133061 zhaoyd]# curl -XGET "http://10.241.133.61:9200/zydtest/_search?pretty"
{
  "took" : 85,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}
4. Test loading data from Logstash into Elasticsearch
Prepare the test data:
[root@s133061 zhaoyd]# cat log.log
2020-12-14T10:50:58.000+00:00 INFO this is a test! 10.241.133.13
Create the Logstash pipeline configuration:
[root@s133061 zhaoyd]# cat test.conf
input {
  file {
    path => "/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    # Writing the DATA capture back into "message" turns that field into an array
    # (the original line plus the captured text), as seen in the output below.
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} %{DATA:message} %{IP:address}" }
  }
}
output {
  elasticsearch {
    hosts => "10.241.133.61:9200"
    index => "zydtest"
  }
  stdout {}
}
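Before starting the pipeline, Logstash can validate the configuration without running it by adding the --config.test_and_exit flag:
./bin/logstash -f zhaoyd/test.conf --config.test_and_exit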
Run the import job to load the contents of log.log into Elasticsearch:
[root@s133061 logstash-6.7.1]# pwd
/hadoop/elk/logstash-6.7.1
[root@s133061 logstash-6.7.1]# ./bin/logstash -f zhaoyd/test.conf
Sending Logstash logs to /hadoop/elk/logstash-6.7.1/logs which is now configured via log4j2.properties
[2020-12-16T16:36:34,297][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-12-16T16:36:34,319][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.7.1"}
[2020-12-16T16:36:44,171][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2020-12-16T16:36:44,792][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.241.133.61:9200/]}}
[2020-12-16T16:36:45,063][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.241.133.61:9200/"}
[2020-12-16T16:36:45,147][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2020-12-16T16:36:45,152][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2020-12-16T16:36:45,195][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.241.133.61:9200"]}
[2020-12-16T16:36:45,217][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2020-12-16T16:36:45,315][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-12-16T16:36:45,429][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2020-12-16T16:36:46,042][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/hadoop/elk/logstash-6.7.1/data/plugins/inputs/file/.sincedb_a1dc436238800e668fd2df0a021e5d89", :path=>["/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"]}
[2020-12-16T16:36:46,106][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1fe96bd7 run>"}
[2020-12-16T16:36:46,197][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-16T16:36:46,204][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-12-16T16:36:46,834][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
/hadoop/elk/logstash-6.7.1/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
  "message" => [
    [0] "2020-12-14T10:50:58.000+00:00 INFO this is a test! 10.241.133.13",
    [1] "this is a test!"
  ],
  "@timestamp" => 2020-12-16T08:36:47.058Z,
  "host" => "s133061",
  "log-level" => "INFO",
  "address" => "10.241.133.13",
  "@version" => "1",
  "timestamp" => "2020-12-14T10:50:58.000+00:00",
  "path" => "/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"
}
The import job above completed successfully. Next, query the index data:
[root@s133061 logstash-6.7.1]# curl -XGET "http://10.241.133.61:9200/zydtest/_search?pretty"
{
  "took" : 25,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "zydtest",
        "_type" : "doc",
        "_id" : "ZR6xanYBLtVoEIHt0n-X",
        "_score" : 1.0,
        "_source" : {
          "message" : [
            "2020-12-14T10:50:58.000+00:00 INFO this is a test! 10.241.133.13",
            "this is a test!"
          ],
          "@timestamp" : "2020-12-16T08:36:47.058Z",
          "host" : "s133061",
          "log-level" : "INFO",
          "address" : "10.241.133.13",
          "@version" : "1",
          "timestamp" : "2020-12-14T10:50:58.000+00:00",
          "path" : "/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"
        }
      }
    ]
  }
}
The data was imported as expected, which confirms the Logstash installation works.
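If you only need to confirm the document count rather than inspect the contents, the count API is a lighter-weight check against the same index:
curl -XGET "http://10.241.133.61:9200/zydtest/_count?pretty"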
III. Deploying Kibana
1. Download the installation package
Download it from the official Elastic download site and select the Kibana component.
2. Extract the archive
[root@s133061 elk]# pwd
/hadoop/elk
[root@s133061 elk]# tar -xvf kibana-6.7.1-linux-x86_64.tar.gz
[root@s133061 elk]# mv kibana-6.7.1-linux-x86_64 kibana-6.7.1
3. Configure kibana.yml
[root@s133061 elk]# cd kibana-6.7.1/
[root@s133061 kibana-6.7.1]# ls
bin built_assets config data LICENSE.txt node node_modules NOTICE.txt optimize package.json plugins README.txt src target webpackShims
[root@s133061 kibana-6.7.1]# cd config/
[root@s133061 config]# vim kibana.yml
# Uncomment and modify the following settings
server.port: 5601
server.host: "10.241.133.61"   # allow remote access
elasticsearch.hosts: ["http://10.241.133.61:9200"]   # point this at your own Elasticsearch address
kibana.index: ".kibana"
4. Start Kibana
[root@s133061 bin]# pwd
/hadoop/elk/kibana-6.7.1/bin
[root@s133061 bin]# nohup ./kibana &