Flume Study Notes: Flume NG + Kafka Integration


Integrating a Flume NG cluster with a Kafka cluster:

Modify the Flume configuration file (flume-kafka-server.conf) so that the sink writes to Kafka.

hadoop1:

# set agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# avro source receiving events from the other nodes
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop1
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = hadoop1
a1.sources.r1.channels = c1

# set sink to kafka
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = ScalaTopic
a1.sinks.k1.brokerList = hadoop1:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1

hadoop2:

# set agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# avro source receiving events from the other nodes
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop2
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = hadoop2
a1.sources.r1.channels = c1

# set sink to kafka
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = ScalaTopic
a1.sinks.k1.brokerList = hadoop2:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1
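Both collector configs above receive events over Avro on port 52020 from a client agent. The flume-client.conf used in step 4 of the cluster test below is not included in the original post; the following is a minimal sketch, assuming an exec source tailing a hypothetical log file at /usr/local/flume/log/test.log and a failover sink group across the two collectors (the source command and file path are assumptions):

# set agent name (matches --name agent1 in the test below)
agent1.sources = r1
agent1.channels = c1
agent1.sinks = k1 k2
agent1.sinkgroups = g1

# set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

# tail a local log file (path is an assumption)
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /usr/local/flume/log/test.log
agent1.sources.r1.channels = c1

# forward events over Avro to the two collectors
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop1
agent1.sinks.k1.port = 52020
agent1.sinks.k1.channel = c1

agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop2
agent1.sinks.k2.port = 52020
agent1.sinks.k2.channel = c1

# fail over between the two collectors, preferring hadoop1
agent1.sinkgroups.g1.sinks = k1 k2
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 5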

Cluster test:

  1. Start ZooKeeper (hadoop1, hadoop2, hadoop3).
  2. Start the Kafka server and a consumer (hadoop1, hadoop2); see the example commands after this list.
  3. Start the Flume server (hadoop1, hadoop2): flume-ng agent --conf conf --conf-file /usr/local/flume/conf/flume-kafka-server.conf --name a1 -Dflume.root.logger=INFO,console
  4. Start the Flume client (hadoop3): flume-ng agent --conf conf --conf-file /usr/local/flume/conf/flume-client.conf --name agent1 -Dflume.root.logger=INFO,console
  5. Append a log record on hadoop3 (see the example commands after this list).
  6. Once the Kafka consumer receives the record, the test has passed.
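The original post does not show the exact commands for steps 2 and 5. A minimal sketch, assuming a Kafka version of that era whose console consumer takes --zookeeper (newer releases use --bootstrap-server instead) and the hypothetical log path from the client config above:

# step 2: console consumer on the topic the Flume sinks write to (run on hadoop1)
kafka-console-consumer.sh --zookeeper hadoop1:2181 --topic ScalaTopic

# step 5: append a test record to the tailed log file on hadoop3 (path is an assumption)
echo "hello flume kafka" >> /usr/local/flume/log/test.log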

(The original post shows screenshots here: the record being appended on hadoop3, and the Kafka consumer on hadoop1 receiving it.)

With the test passed, Flume and Kafka are now integrated, and a Flume + Kafka + Spark Streaming real-time log analysis system follows naturally from this foundation (see the sketch below).
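As a pointer toward that system (not part of the original post), here is a minimal Spark Streaming sketch in Scala that consumes ScalaTopic from the brokers configured above, assuming the spark-streaming-kafka-0-8 integration that matches this era of Kafka:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object ScalaTopicStreaming {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("FlumeKafkaSparkStreaming")
    val ssc = new StreamingContext(conf, Seconds(5))

    // read directly from the brokers the Flume Kafka sinks write to
    val kafkaParams = Map("metadata.broker.list" -> "hadoop1:9092,hadoop2:9092")
    val topics = Set("ScalaTopic")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // each record is (key, value); count words in the incoming log lines
    stream.map(_._2)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}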


Original article: http://www.cnblogs.com/AK47Sonic/p/7440197.html
