Requirement: two log servers, A and B, produce logs in real time; the main log types are access.log, nginx.log, and Web.log. The task:
Collect access.log, nginx.log, and Web.log from machines A and B, aggregate them on machine C, and then write them to HDFS under the following directory layout:
/source/logs/access/<date>/**
/source/logs/nginx/<date>/**
/source/logs/Web/<date>/**
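The key mechanism (shown in full in the configuration files below): a static interceptor on each collector stamps every event with a `type` header, and the HDFS sink on the aggregator expands that header, plus the event timestamp, in its output path. An excerpt:

```properties
# On A/B: tag every event from this source with its log type
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = type
a1.sources.r1.interceptors.i1.value = access

# On C: route events by the type header and the event date
a1.sinks.k1.hdfs.path = hdfs://myha01/source/logs/%{type}/%Y%m%d
```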
Scenario analysis:
Plan:
- hadoop01 (web01):
  - source: exec (tail -F access.log, nginx.log, Web.log)
  - channel: memory
  - sink: avro
- hadoop02 (web02):
  - source: exec (tail -F access.log, nginx.log, Web.log)
  - channel: memory
  - sink: avro
- hadoop03 (data aggregation):
  - source: avro
  - channel: memory
  - sink: hdfs
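Before wiring anything up, the files the exec sources will tail must exist on both collectors. A minimal sketch, using the /home/hadoop/flume_data directory from the configs below:

```bash
# Run on hadoop01 and hadoop02: create the directory and the
# (possibly empty) log files that the exec sources will tail.
mkdir -p /home/hadoop/flume_data
touch /home/hadoop/flume_data/access.log \
      /home/hadoop/flume_data/nginx.log \
      /home/hadoop/flume_data/Web.log
```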
Configuration files:
exec_source_avro_sink.properties (on hadoop01 and hadoop02):

```properties
# Name the agent's components
a1.sources = r1 r2 r3
a1.sinks = k1
a1.channels = c1

# r1: tail access.log and tag events with type=access
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/flume_data/access.log
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = type
a1.sources.r1.interceptors.i1.value = access

# r2: tail nginx.log and tag events with type=nginx
a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /home/hadoop/flume_data/nginx.log
a1.sources.r2.interceptors = i2
a1.sources.r2.interceptors.i2.type = static
a1.sources.r2.interceptors.i2.key = type
a1.sources.r2.interceptors.i2.value = nginx

# r3: tail Web.log and tag events with type=Web
a1.sources.r3.type = exec
a1.sources.r3.command = tail -F /home/hadoop/flume_data/Web.log
a1.sources.r3.interceptors = i3
a1.sources.r3.interceptors.i3.type = static
a1.sources.r3.interceptors.i3.key = type
a1.sources.r3.interceptors.i3.value = Web

# Describe the sink: forward to the aggregator on hadoop03
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop03
a1.sinks.k1.port = 41414

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 10000

# Bind the sources and sink to the channel
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sources.r3.channels = c1
a1.sinks.k1.channel = c1
```
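The same collection config is deployed unchanged to hadoop01 and hadoop02. Before starting the collectors it can help to confirm that the aggregator's avro port is reachable; a sketch, assuming `nc` (netcat) is installed:

```bash
# Optional sanity check from hadoop01/hadoop02, once the
# hadoop03 agent is running: probe the avro listener.
nc -z hadoop03 41414 && echo "avro port open"
```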
avro_source_hdfs_sink.properties (on hadoop03):

```properties
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Define the source: listen for avro traffic from hadoop01/hadoop02
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414

# Add a timestamp interceptor so the time escapes in the sink path resolve
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

# Define the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 10000

# Define the sink: write to HDFS, routed by the type header and the date
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://myha01/source/logs/%{type}/%Y%m%d
a1.sinks.k1.hdfs.filePrefix = events
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text

# Use the local time when resolving the time escapes in the path
a1.sinks.k1.hdfs.useLocalTimeStamp = true

# Do not roll files by event count
a1.sinks.k1.hdfs.rollCount = 0
# Roll files every 30 seconds
a1.sinks.k1.hdfs.rollInterval = 30
# Roll files once they reach 10 MB
a1.sinks.k1.hdfs.rollSize = 10485760

# Number of events written to HDFS per batch
a1.sinks.k1.hdfs.batchSize = 20
# Number of threads flume uses for HDFS operations (open, write, etc.)
a1.sinks.k1.hdfs.threadsPoolSize = 10
# Timeout (ms) for HDFS operations
a1.sinks.k1.hdfs.callTimeout = 30000

# Wire source, channel, and sink together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```
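With both files in place, an event tagged type=access and timestamped on, say, 2019-03-20 lands under /source/logs/access/20190320/, matching the required layout. The result can be inspected from any HDFS client:

```bash
# Illustrative: list one per-type, per-day directory the sink creates.
# The date below is an example; substitute the actual run date.
hdfs dfs -ls /source/logs/access/20190320
```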
Test:
On hadoop01 and hadoop02, the data files access.log, nginx.log, and Web.log sit under /home/hadoop/flume_data (the directory the exec sources tail).

First start the flume agent on hadoop03 (storage):

```bash
flume-ng agent -c conf -f avro_source_hdfs_sink.properties -n a1 -Dflume.root.logger=DEBUG,console
```

Then start the flume agents on hadoop01 and hadoop02 (collection):

```bash
flume-ng agent -c conf -f exec_source_avro_sink.properties -n a1 -Dflume.root.logger=DEBUG,console
```
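To verify the pipeline end to end, append a few lines on the collectors and check HDFS after the 30-second roll interval; a sketch, reusing the paths from the configs:

```bash
# On hadoop01/hadoop02: append test lines so the exec sources
# have something to ship.
echo "test access $(date)" >> /home/hadoop/flume_data/access.log
echo "test nginx $(date)"  >> /home/hadoop/flume_data/nginx.log
echo "test web $(date)"    >> /home/hadoop/flume_data/Web.log

# From any HDFS client: confirm events landed in the expected
# per-type, per-date directories.
hdfs dfs -ls -R /source/logs
```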
Source: http://www.bubuko.com/infodetail-2924147.html