Big Data Apache Druid (Part 6): Druid Streaming Data Loading
Druid Streaming Data Loading
I. Integrating Druid with Kafka
1. Loading Kafka Data via the Web UI
Druid can also integrate with Kafka, reading data directly from a Kafka topic for OLAP analysis in Druid. The steps are as follows:
- Start Kafka and create a topic in Kafka
```shell
# Create a Kafka topic
[root@node1 bin]# ./kafka-topics.sh --zookeeper node3:2181,node4:2181,node5:2181 --create --topic druid-topic --partitions 3 --replication-factor 3

# Produce one record into the new topic, to make the data easy for Druid to parse later
[root@node1 bin]# ./kafka-console-producer.sh --topic druid-topic --broker-list node1:9092,node2:9092,node3:9092
>{"data_dt":"2021-07-01T08:13:23.000Z","uid":"uid001","loc":"北京","item":"衣服","amount":"100"}
```
- Go to the Druid home page and load the Kafka data

Open the Druid home page at http://node5:8888 and click the "Load data" tab:
![](https://ask.qcloudimg.com/http-save/1159019/312c155ca35685900498b55cd64e0b82.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/032e70839b4c5d92b85a517e20ad8dad.png?imageView2/2/w/1620)
Fill in the Kafka server and topic, then click "Parse data":
![](https://ask.qcloudimg.com/http-save/1159019/63ef215efce2b95b11168b9ba4171a7a.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/89c8eec80c55b0831d916b1da09b9852.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/c3e8907c1e561f912bfe44dff45a9396.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/19fe0ab89a6384a540e8aa441c34203e.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/dcc8371d1b431ad3f4a1fb7b592a9f0b.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/c9c8365029e3e1c65597940dd7df5538.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/90f06fc221be46cd949a9762fcf85acf.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/af1175d392ca4f56f8a139d3d7a9a92a.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/eefeff0764ae464c3691ddacc1f2b093.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/d4a0ac3dc4dcc8e6122c134452865e82.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/a119f3963d2f1f5d442dbe66aaeb6baa.png?imageView2/2/w/1620)
2. Querying Data in Druid
Click "Query" and write SQL; querying the DataSource "druid-topic" returns the following:
![](https://ask.qcloudimg.com/http-save/1159019/4cd79715063b58cb0f92c98f9d715646.png?imageView2/2/w/1620)
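The same query can also be issued over HTTP against Druid's SQL endpoint instead of the web console. A minimal sketch, assuming the Router is reachable at node5:8888:

```shell
# POST a SQL query to the Druid SQL API through the Router
curl -X POST http://node5:8888/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{"query": "SELECT * FROM \"druid-topic\" LIMIT 10"}'
```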
Write the following additional records into the Kafka topic druid-topic:
{"data_dt":"2021-07-01T08:20:13.000Z","uid":"uid001","loc":"北京","item":"手机","amount":"200"}
{"data_dt":"2021-07-01T09:24:46.000Z","uid":"uid002","loc":"上海","item":"书籍","amount":"300"}
{"data_dt":"2021-07-01T09:43:42.000Z","uid":"uid002","loc":"上海","item":"书籍","amount":"400"}
{"data_dt":"2021-07-01T09:53:42.000Z","uid":"uid002","loc":"上海","item":"书籍","amount":"500"}
{"data_dt":"2021-07-01T12:19:52.000Z","uid":"uid003","loc":"天津","item":"水果","amount":"600"}
{"data_dt":"2021-07-01T14:53:13.000Z","uid":"uid004","loc":"广州","item":"生鲜","amount":"700"}
{"data_dt":"2021-07-01T15:51:45.000Z","uid":"uid005","loc":"深圳","item":"手机","amount":"800"}
{"data_dt":"2021-07-01T17:21:21.000Z","uid":"uid006","loc":"杭州","item":"电脑","amount":"900"}
{"data_dt":"2021-07-01T20:26:53.000Z","uid":"uid007","loc":"湖南","item":"水果","amount":"1000"}
{"data_dt":"2021-07-01T09:38:11.000Z","uid":"uid008","loc":"山东","item":"书籍","amount":"1100"}
![](https://ask.qcloudimg.com/http-save/1159019/5fbf4bf5c139cc85f6de319b09342857.png?imageView2/2/w/1620)
Run an aggregate query:

```sql
select loc,item,sum(amount) as total_amount from "druid-topic" group by loc,item
```
![](https://ask.qcloudimg.com/http-save/1159019/078d83eaa43538e324cff6270e68d0d3.png?imageView2/2/w/1620)
3. Deleting Druid Data
To delete Druid data, first stop the real-time ingestion task under Ingestion:
![](https://ask.qcloudimg.com/http-save/1159019/506bfc7ef4e46df969e41e621aad60ad.png?imageView2/2/w/1620)
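Stopping the task can also be done through the Overlord's supervisor API rather than the console. A sketch, assuming the Overlord runs on node3:8081 and the supervisor id matches the datasource name:

```shell
# Gracefully terminate the supervisor and its ingestion tasks
curl -X POST http://node3:8081/druid/indexer/v1/supervisor/druid-topic/terminate
```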
Then, under Datasources, mark all segments as unused, and after that permanently delete the corresponding data:
![](https://ask.qcloudimg.com/http-save/1159019/bb3cf992a9bd9b728b50f00442e326de.png?imageView2/2/w/1620)
![](https://ask.qcloudimg.com/http-save/1159019/04f09d20f7962de39e0ae1fbaac24c02.png?imageView2/2/w/1620)
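These two console actions map to HTTP calls as well: the Coordinator can mark all of a datasource's segments unused, and a kill task submitted to the Overlord then deletes the unused segments permanently. A sketch, assuming node3:8081 hosts both services, with an example interval that covers the test data:

```shell
# Mark all segments of the datasource as unused
curl -X DELETE http://node3:8081/druid/coordinator/v1/datasources/druid-topic

# Submit a kill task to permanently remove the unused segments in the interval
curl -X POST http://node3:8081/druid/indexer/v1/task \
  -H 'Content-Type: application/json' \
  -d '{"type":"kill","dataSource":"druid-topic","interval":"2021-07-01/2021-07-02"}'
```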
4. Loading Kafka Data via POST
Because Druid has already ingested the Kafka topic "druid-topic", stopping the real-time Kafka reading task under Druid supervisors leaves the offsets at which this datasource stopped reading the topic stored in the MySQL table "druid.druid_datasource". If a real-time task with the same datasource name reading the same Kafka topic is later submitted via POST, it resumes from those stored offsets. So, to consume the Kafka data from the beginning, delete the row for this datasource from "druid.druid_datasource" in MySQL:
![](https://ask.qcloudimg.com/http-save/1159019/d8d871aeda3f6a1824d2d534c74c1f7a.png?imageView2/2/w/1620)
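A minimal sketch of that cleanup via the mysql CLI, assuming the metadata store uses the database and table named above (the dataSource column name is an assumption; check your metadata-store schema):

```shell
# Remove the stored Kafka offsets so a new supervisor can start from the earliest offset
mysql -u root -p -e "DELETE FROM druid.druid_datasource WHERE dataSource = 'druid-topic';"
```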
Prepare the JSON configuration and use Postman to submit the Kafka loading task. The configuration is as follows:
```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "type": "kafka",
      "consumerProperties": {
        "bootstrap.servers": "node1:9092,node2:9092,node3:9092"
      },
      "topic": "druid-topic",
      "inputFormat": {
        "type": "json"
      },
      "useEarliestOffset": true
    },
    "tuningConfig": {
      "type": "kafka"
    },
    "dataSchema": {
      "dataSource": "druid-topic",
      "timestampSpec": {
        "column": "data_dt",
        "format": "iso"
      },
      "dimensionsSpec": {
        "dimensions": [
          {
            "type": "long",
            "name": "amount"
          },
          "item",
          "loc",
          "uid"
        ]
      },
      "granularitySpec": {
        "queryGranularity": "none",
        "rollup": false,
        "segmentGranularity": "day"
      }
    }
  }
}
```
Open Postman and send a POST request to http://node3:8081/druid/indexer/v1/supervisor with the JSON configuration above as the raw request body. Once it executes, the corresponding supervisor and datasource appear in the Druid console.
![](https://ask.qcloudimg.com/http-save/1159019/13ad39f8e086b922ba3434cdb27ea701.png?imageView2/2/w/1620)
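Equivalently, the same spec can be submitted without Postman using curl, assuming the JSON above is saved locally as kafka-supervisor.json (a file name chosen here for illustration):

```shell
# Submit the supervisor spec to the Overlord
curl -X POST http://node3:8081/druid/indexer/v1/supervisor \
  -H 'Content-Type: application/json' \
  -d @kafka-supervisor.json
```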