Testing flume + kafka + spark-streaming

Note: know exactly which Flume version you are running; configuration parameter names differ between versions. CDH 5.7.2 ships Flume 1.6.0, which is what this post uses.

System environment

VM: VMware Workstation 10.0
Nodes: master (3 GB RAM), slave1 (1 GB RAM)
CDH version: 5.7.2
OS: CentOS 6.8
Flume version: 1.6.0
Kafka version: 0.10.0
Spark version: 1.6.0

Flume configuration on master (the node data flows into)

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = master
a1.sources.r1.port = 4545

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = sparkstreaming
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.keep-alive = 10
a1.channels.c1.capacity = 100000
a1.channels.c1.transactionCapacity = 100000
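
Before wiring up slave1, it is worth confirming that the avro source on master actually accepts events. A minimal sanity check, using the avro-client mode built into flume-ng (the input file path here is just a placeholder), looks like this:

# Send the contents of a local file to the avro source listening on master:4545
bin/flume-ng avro-client --conf conf -H master -p 4545 -F /tmp/sample.txt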

Flume configuration on slave1 (the node data flows out of)

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
a1.sources.r1.spoolDir = /opt/cloudera/parcels/CDH/lib/flume-ng/logs
#a1.sources.r1.fileHeader = true

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 4545
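
One detail of the spooling directory source that is easy to miss: once the agent has fully consumed a file it renames it in place rather than deleting it, with a .COMPLETED suffix by default. After a successful run the directory should look something like this:

# Files already ingested by the spooldir source carry a .COMPLETED suffix
ls /opt/cloudera/parcels/CDH/lib/flume-ng/logs
# test.log.COMPLETED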

Step-by-step

Note: it is best to run each step in its own terminal.
Before starting any of these steps, bring up ZooKeeper first, then Kafka.

# Step 1: on master, create the Kafka topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sparkstreaming
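
Optionally, confirm the topic exists and check its partition assignment before going further; it saves debugging later:

# Optional: verify the topic was created
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic sparkstreaming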

# Step 2: start Flume on master
bin/flume-ng agent --conf /opt/cloudera/parcels/CDH/lib/flume-ng/conf/ -f /opt/cloudera/parcels/CDH/lib/flume-ng/conf/flume-flume-kafka.conf -Dflume.root.logger=INFO,console -n a1
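
If the agent comes up cleanly, the avro source should be listening on port 4545; a quick check from another terminal on master:

# The avro source should appear as a listener on port 4545
netstat -tlnp | grep 4545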

# Step 3: start Flume on slave1 (start the master agent first, so the avro sink here has a listener to connect to)
bin/flume-ng agent --conf /opt/cloudera/parcels/CDH/lib/flume-ng/conf/ -f /opt/cloudera/parcels/CDH/lib/flume-ng/conf/flume-flume.conf -Dflume.root.logger=INFO,console -n a1


# Step 4: on slave1, use a script to feed data into the spooling directory
for((i=1;i<=1000;i++));  
do 
  sleep 2;  
  echo "hello world hello world liujm  tljsdkjflsakd hello world hello" >> /opt/cloudera/parcels/CDH/lib/flume-ng/logs/test.log;  
done  
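
One caveat about this script: the spooling directory source expects every file to be complete and immutable once it lands in spoolDir, and Flume 1.6 will log an error and stop processing a file that keeps growing after it has been picked up, which is exactly what appending to test.log does. A safer variant of the same loop, sketched below under the assumption that the staging directory sits on the same filesystem (so mv is atomic), writes each record to a fresh file and only then moves it in:

# Stage each record outside the spool dir, then move the finished file in
mkdir -p /tmp/spool-staging
for((i=1;i<=1000;i++));
do
  sleep 2;
  echo "hello world hello world liujm  tljsdkjflsakd hello world hello" > /tmp/spool-staging/test_${i}.log;
  mv /tmp/spool-staging/test_${i}.log /opt/cloudera/parcels/CDH/lib/flume-ng/logs/;
done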

# Step 5: on master, start a Kafka console consumer
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic sparkstreaming

# Step 6: on master, start spark-streaming (using the example that ships with Spark) [run from SPARK_HOME]
bin/run-example streaming.DirectKafkaWordCount localhost:9092 sparkstreaming
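
Both arguments accept comma-separated lists, as the usage string in the appendix shows, so the same command scales to several brokers and topics (the host and topic names below are hypothetical):

# Brokers and topics are both comma-separated lists
bin/run-example streaming.DirectKafkaWordCount master:9092,slave1:9092 topic1,topic2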

If the terminal from step 6 shows output like the following, the configuration test has succeeded.

[Screenshot: output in the console-consumer terminal]
[Screenshot: spark-streaming word-count output]

Judging from the output, the end-to-end delay is around 20 seconds; most of that is likely Flume batching and spool-directory polling rather than the network itself.

Appendix: the official spark-streaming example with Kafka as the data source

streaming.DirectKafkaWordCount.scala

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
package org.apache.spark.examples.streaming

import kafka.serializer.StringDecoder

import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka._
import org.apache.spark.SparkConf

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 * Usage: DirectKafkaWordCount <brokers> <topics>
 *   <brokers> is a list of one or more Kafka brokers
 *   <topics> is a list of one or more kafka topics to consume from
 *
 * Example:
 *    $ bin/run-example streaming.DirectKafkaWordCount broker1-host:port,broker2-host:port \
 *    topic1,topic2
 */
object DirectKafkaWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println(s"""
        |Usage: DirectKafkaWordCount <brokers> <topics>
        |  <brokers> is a list of one or more Kafka brokers
        |  <topics> is a list of one or more kafka topics to consume from
        |
        """.stripMargin)
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(brokers, topics) = args

    // Create context with 2 second batch interval
    val sparkConf = new SparkConf().setAppName("DirectKafkaWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(2))

    // Create direct kafka stream with brokers and topics
    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)

    // Get the lines, split them into words, count the words and print
    val lines = messages.map(_._2)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1L)).reduceByKey(_ + _)
    wordCounts.print()

    // Start the computation
    ssc.start()
    ssc.awaitTermination()
  }
}
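
For reference, run-example is a thin wrapper around spark-submit, so the same class can be launched directly. The examples jar path below is an assumption and varies by distribution; adjust it to wherever your Spark installation keeps it:

# Equivalent spark-submit call; the examples jar path is distribution-specific
bin/spark-submit --class org.apache.spark.examples.streaming.DirectKafkaWordCount \
  lib/spark-examples-1.6.0-hadoop2.6.0.jar localhost:9092 sparkstreaming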
