1. Collecting a directory into HDFS
Requirement: a particular directory on a server keeps receiving new files; whenever a new file appears, it must be collected into HDFS.
Based on this requirement, define the following three components:
Source — monitors a directory for new files: spooldir
Sink — writes to the HDFS file system: hdfs sink
Channel — the buffer between source and sink: either a file channel or a memory channel will work
Configuration file:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
## Note: never drop a file into the monitored directory with a name that has already been used
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/logs
a1.sources.r1.fileHeader = true
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 20
a1.sinks.k1.hdfs.rollCount = 5
a1.sinks.k1.hdfs.batchSize = 1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# File type of the generated files; the default is SequenceFile, while DataStream writes plain text
a1.sinks.k1.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start the agent to begin collecting:
bin/flume-ng agent -c ./conf -f ./conf/spool-hdfs.conf -n a1 -Dflume.root.logger=INFO,console
capacity: the maximum number of events the channel can hold
transactionCapacity: the maximum number of events taken from the source, or delivered to the sink, in a single transaction
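Since the spooldir source picks up files as soon as they appear (and errors out on duplicate names), a safe way to feed it is to write each file elsewhere and then mv it into place. A minimal sketch, using /tmp/logs in place of /root/logs (an assumed stand-in path):

```shell
# Stand-in spool directory; the agent config above uses /root/logs.
SPOOL_DIR=/tmp/logs
mkdir -p "$SPOOL_DIR"

# Write the file outside the spool dir first, then mv it in, so the
# source never reads a half-written file. Use a unique name: the
# spooldir source fails if a filename is reused.
TS=$(date +%s%N)
echo "hello flume" > "/tmp/access-$TS.log"
mv "/tmp/access-$TS.log" "$SPOOL_DIR/"

ls "$SPOOL_DIR"
```

Once the agent has fully consumed a file, it renames it in place with a .COMPLETED suffix.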
2. Collecting an appended file into HDFS
Requirement: a business system writes its log with log4j, and the log file keeps growing; the data appended to the log file must be collected into HDFS in real time.
Based on this requirement, define the following three components:
Source — monitors a file for appended content: exec 'tail -F file'
Sink — writes to the HDFS file system: hdfs sink
Channel — the buffer between source and sink: either a file channel or a memory channel will work
Configuration file:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /root/logs/test.log
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/tailout/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 20
a1.sinks.k1.hdfs.rollCount = 5
a1.sinks.k1.hdfs.batchSize = 1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# File type of the generated files; the default is SequenceFile, while DataStream writes plain text
a1.sinks.k1.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
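To see the exec source in action without a real log4j application, you can append lines to the tailed file by hand. A minimal sketch, using /tmp/test.log in place of /root/logs/test.log (an assumed stand-in path):

```shell
# Simulate an application appending to the log that `tail -F` follows.
LOG=/tmp/test.log
: > "$LOG"                                 # start from an empty file

for i in 1 2 3; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') INFO handled request $i" >> "$LOG"
done

wc -l < "$LOG"
```

tail -F (unlike tail -f) keeps following the file across rotations, which is why it is the usual choice here. Note, however, that the exec source offers no delivery guarantee: if the agent dies, events read but not yet committed to the channel are lost.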
Parameter reference:
rollInterval
  Default: 30
  How long (in seconds) the HDFS sink waits before rolling the temporary file into the final target file;
  0 disables time-based rolling.
  Note: "rolling" means the HDFS sink renames the temporary file to its final target name and opens a new temporary file for writing.
rollSize
  Default: 1024
  Roll the temporary file into the target file once it reaches this size (in bytes);
  0 disables size-based rolling.
rollCount
  Default: 10
  Roll the temporary file into the target file once it has received this many events;
  0 disables count-based rolling.
round
  Default: false
  Whether to round down the timestamp used in the path, i.e. truncate it to a boundary.
roundValue
  Default: 1
  The value the timestamp is rounded down to a multiple of.
roundUnit
  Default: second
  The unit of the rounding: second, minute, or hour.
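The tiny roll settings used above (rollInterval = 3, rollSize = 20, rollCount = 5) are demo values and would produce a flood of small HDFS files. A more production-like sketch rolls on size alone, with the other two triggers disabled; the exact size (here 128 MB, near a common HDFS block size) is an assumption to tune for your cluster:

```
# Roll only by size; 0 disables the time and count triggers
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0
```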
Example:
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/%S
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
At 2015-10-16 17:38:59, hdfs.path resolves to:
/flume/events/15-10-16/1730/00
Because the timestamp is rounded down to the nearest 10 minutes, a new directory is created every 10 minutes.
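The rounding itself is plain integer arithmetic: the minute is truncated to the highest multiple of roundValue that does not exceed it. A small sketch of the computation for 17:38:

```shell
# round=true, roundValue=10, roundUnit=minute: truncate the minute
# down to a multiple of 10 before it is substituted into hdfs.path.
H=17; M=38
ROUNDED_M=$(( M - M % 10 ))                # 38 -> 30
printf '/flume/events/15-10-16/%02d%02d/00\n' "$H" "$ROUNDED_M"
# prints /flume/events/15-10-16/1730/00
```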