1. Using a Socket data source as the example, we trace Receiver startup through a WordCount computation. The code is as follows:
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }
    val sparkConf = new SparkConf().setAppName("NetworkWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(1))
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
2. ssc.socketTextStream calls the socketStream method, which creates a new SocketInputDStream instance. SocketInputDStream extends ReceiverInputDStream and implements the getReceiver method, which instantiates a SocketReceiver; SocketReceiver extends the Receiver class.
SocketReceiver mainly implements the onStart method, which starts a thread that calls the receive method. The receive method contains the actual data-receiving logic: it reads data from the Socket and wraps it into an Iterator, storing each record as it arrives.
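A condensed sketch of these two classes, based on the Spark Streaming source (class and method names match the real code, but error handling and restart logic are omitted):

  import java.io.InputStream
  import java.net.Socket
  import scala.reflect.ClassTag
  import org.apache.spark.storage.StorageLevel
  import org.apache.spark.streaming.StreamingContext
  import org.apache.spark.streaming.dstream.ReceiverInputDStream
  import org.apache.spark.streaming.receiver.Receiver

  private[streaming] class SocketInputDStream[T: ClassTag](
      _ssc: StreamingContext,
      host: String,
      port: Int,
      bytesToObjects: InputStream => Iterator[T],
      storageLevel: StorageLevel
    ) extends ReceiverInputDStream[T](_ssc) {

    def getReceiver(): Receiver[T] = {
      new SocketReceiver(host, port, bytesToObjects, storageLevel)
    }
  }

  private[streaming] class SocketReceiver[T](
      host: String,
      port: Int,
      bytesToObjects: InputStream => Iterator[T],
      storageLevel: StorageLevel
    ) extends Receiver[T](storageLevel) {

    def onStart() {
      // Start a daemon thread so onStart() returns immediately, as the
      // Receiver contract requires; the thread does the actual receiving.
      new Thread("Socket Receiver") {
        setDaemon(true)
        override def run() { receive() }
      }.start()
    }

    def onStop() {
      // Nothing to do here: the receive() loop exits once isStopped() is true.
    }

    // Connect to the socket, wrap the input stream in an Iterator,
    // and hand each record to the supervisor via store().
    def receive() {
      val socket = new Socket(host, port)
      val iterator = bytesToObjects(socket.getInputStream())
      while (!isStopped && iterator.hasNext) {
        store(iterator.next())
      }
      socket.close()
    }
  }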
3. Receiver startup begins at ssc.start(), i.e., StreamingContext's start method. Go straight to the scheduler.start() line: it invokes JobScheduler's start method, where receiverTracker.start() calls ReceiverTracker's start method, which in turn calls launchReceivers(). A condensed sketch of this call chain follows, after which we look at launchReceivers() itself.
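The chain, condensed from the Spark source (state checks and logging omitted; only the lines relevant to the chain are kept):

  // StreamingContext.start(), simplified: start the JobScheduler
  scheduler.start()

  // JobScheduler.start(), simplified: create and start the ReceiverTracker
  receiverTracker = new ReceiverTracker(ssc)
  receiverTracker.start()

  // ReceiverTracker.start(), simplified: set up the RPC endpoint,
  // then launch all receivers
  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
  }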
The launchReceivers() code is as follows:
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })
  runDummySparkJob()
  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}
3.1 First look at receiverInputStreams, which is declared when ReceiverTracker is instantiated:

  private val receiverInputStreams = ssc.graph.getReceiverInputStreams()

In val rcvr = nis.getReceiver(), rcvr is a subclass of Receiver — in our example, the SocketReceiver examined above. The map returns the collection receivers, because an application may have more than one receiver (as shown below).
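A hypothetical example of how multiple receivers arise (host names are made up): every socketTextStream call registers one ReceiverInputDStream with the graph, so getReceiverInputStreams() — and hence receivers — can contain several entries.

  // Hypothetical: two socket sources => two ReceiverInputDStreams => two receivers
  val lines1 = ssc.socketTextStream("host1", 9999)
  val lines2 = ssc.socketTextStream("host2", 9999)
  val allLines = lines1.union(lines2)  // processing continues on the union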
3.2 runDummySparkJob(), as the name suggests, runs a trivial sample job. Its purpose is to ensure that all executors have registered before receivers are scheduled, so that the receivers do not all end up on the same node. The code is just a small test job:
private def runDummySparkJob(): Unit = {
  if (!ssc.sparkContext.isLocal) {
    ssc.sparkContext.makeRDD(1 to 50, 50).map(x => (x, 1)).reduceByKey(_ + _, 20).collect()
  }
  assert(getExecutors.nonEmpty)
}
3.3 Look at the last line, endpoint.send(StartAllReceivers(receivers)): it sends a message to ReceiverTrackerEndpoint, which was assigned in ReceiverTracker's start method (see the sketch above).
3.4 Look at the message handler in ReceiverTrackerEndpoint; the code is as follows:
case StartAllReceivers(receivers) =>
  val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
  for (receiver <- receivers) {
    val executors = scheduledLocations(receiver.streamId)
    updateReceiverScheduledExecutors(receiver.streamId, executors)
    receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
    startReceiver(receiver, executors)
  }
The line val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors) computes, for every receiver, the candidate executors it may run on; the loop then records that scheduling and starts each receiver on its assigned executors. An illustration of the result is sketched below.
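A sketch with hypothetical hosts and executor IDs (these scheduler classes are internal to Spark, so this only shows what the result conceptually looks like):

  // Hypothetical result of scheduleReceivers for two receivers / two executors:
  // the policy tries to spread receivers evenly across the executors.
  // scheduledLocations: Map[Int, Seq[TaskLocation]] =
  //   Map(0 -> Seq(ExecutorCacheTaskLocation("host1", "1")),
  //       1 -> Seq(ExecutorCacheTaskLocation("host2", "2")))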
Next comes the key line, startReceiver(receiver, executors); the code is as follows:
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // It's okay to start when trackerState is Initialized or Started
    !(isTrackerStopping || isTrackerStopped)
  }

  val receiverId = receiver.streamId
  if (!shouldStartReceiver) {
    onReceiverJobFinish(receiverId)
    return
  }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf =
    new SerializableConfiguration(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException("Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started")
}
4. Now look in detail at what the startReceiver method does.
4.1 Look at the definition of startReceiverFunc: it is the function executed by the job's action. It first checks that the iterator contains data, then takes the first (and only) element, which is the Receiver itself. This is a remarkable trick: the Receiver is wrapped up as the data of an RDD and shipped to an executor, where the job runs it. A minimal illustration of the same pattern is sketched below.
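A minimal standalone sketch of the "ship an object as RDD data, then run it on an executor" pattern (hypothetical payload and function, not Spark source):

  // Wrap an arbitrary object in a one-partition RDD and run a function
  // on it on an executor via submitJob -- the same pattern used above.
  val payload = sc.makeRDD(Seq("run-me-on-an-executor"), 1)
  val runOnExecutor: Iterator[String] => Unit =
    it => println(s"Executor received: ${it.next()}")
  val future = sc.submitJob[String, Unit, Unit](
    payload, runOnExecutor, Seq(0), (_, _) => (), ())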
  val supervisor = new ReceiverSupervisorImpl(
    receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
  supervisor.start()
4.2 The receiver is passed into ReceiverSupervisorImpl and its start method is called; start calls startReceiver, which invokes the receiver's onStart() method — the data-receiving entry point described earlier. A condensed sketch of this chain follows.
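Condensed from ReceiverSupervisor in the Spark source (logging and exception handling omitted):

  def start() {
    onStart()         // ReceiverSupervisorImpl hook, e.g. starting its BlockGenerators
    startReceiver()   // then start the receiver itself
  }

  def startReceiver(): Unit = synchronized {
    if (onReceiverStart()) {   // register the receiver with the ReceiverTracker
      receiverState = Started
      receiver.onStart()       // e.g. SocketReceiver.onStart starts its thread
    } else {
      // The driver refused to start this receiver
      stop("Registered unsuccessfully because Driver refused to start receiver", None)
    }
  }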
4.3 With the action function defined, look again at receiverRDD: it is created via ssc.sc.makeRDD(Seq(receiver), 1) or ssc.sc.makeRDD(Seq(receiver -> preferredLocations)) — a single-partition RDD whose only record is the Receiver, optionally carrying preferred locations.
4.4 Finally, submitJob submits the RDD[Receiver] to the cluster. Note that each receiver gets its own job, so the failure of one receiver's job does not affect the rest of the application; after a job fails (or completes while the tracker is still running), the endpoint re-sends self.send(RestartReceiver(receiver)), which resubmits the job and keeps the receiver reliable. This design is worth learning from; the restart handler is sketched below.
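Condensed from the RestartReceiver handler in ReceiverTrackerEndpoint (tracking-info updates simplified):

  case RestartReceiver(receiver) =>
    // Reuse the previous scheduling if those executors are still alive;
    // otherwise ask the policy to reschedule this single receiver.
    val oldScheduledExecutors = getStoredScheduledExecutors(receiver.streamId)
    val scheduledLocations =
      if (oldScheduledExecutors.nonEmpty) {
        oldScheduledExecutors
      } else {
        schedulingPolicy.rescheduleReceiver(
          receiver.streamId, receiver.preferredLocation,
          receiverTrackingInfos, getExecutors)
      }
    startReceiver(receiver, scheduledLocations)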
Note: if anything above is incorrect, corrections are welcome.