RDD
What is an RDD?
An RDD (Resilient Distributed Dataset) is the most basic data abstraction in Spark. In code it is an abstract class; it represents an immutable, partitionable collection whose elements can be computed in parallel.
Properties of an RDD

A set of partitions (Partition), the basic units that make up the dataset;
A function for computing each partition;
Dependencies on other RDDs;
A Partitioner, i.e. the RDD's partitioning function;
A list storing the preferred location(s) for accessing each Partition.
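These five properties map directly onto members of the RDD abstract class. Below is an abridged sketch of the corresponding signatures in org.apache.spark.rdd.RDD (paraphrased rather than verbatim; exact modifiers and defaults vary across Spark versions):
import org.apache.spark.{Dependency, Partition, Partitioner, TaskContext}

abstract class RDD[T] {
  // 1. the set of partitions that make up the dataset
  protected def getPartitions: Array[Partition]
  // 2. how to compute the data of one partition
  def compute(split: Partition, context: TaskContext): Iterator[T]
  // 3. dependencies on parent RDDs
  protected def getDependencies: Seq[Dependency[_]]
  // 4. optional partitioner, only meaningful for key-value RDDs
  val partitioner: Option[Partitioner] = None
  // 5. preferred locations for computing each partition
  protected def getPreferredLocations(split: Partition): Seq[String] = Nil
}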
Characteristics of RDDs
An RDD represents a read-only, partitioned dataset. The only way to "change" an RDD is through a transformation, which derives a new RDD from an existing one; the new RDD carries all the information needed to derive it from other RDDs. RDDs therefore depend on one another, and execution proceeds lazily along this lineage. If the lineage chain grows long, it can be cut by persisting an RDD.
Partitioning
Logically an RDD is partitioned, and the data of each partition exists only abstractly; at computation time a compute function produces each partition's data. If the RDD was built from an existing file system, compute reads the data from that file system; if the RDD was derived from another RDD, compute applies the transformation logic to the parent RDD's data.
Read-only
Transforming one RDD into another is done through a rich set of operators, no longer limited to writing just map and reduce as in MapReduce, as shown in the figure below.

RDD operators come in two classes: transformations, which turn one RDD into another and build up the RDD lineage, and actions, which trigger the actual computation of an RDD, either returning a result or saving the RDD to a file system.
Dependencies
RDDs are derived from one another through operators, and each new RDD carries the information needed to derive it from its parents. This lineage relationship maintained between RDDs is also called a dependency. As shown in the figure below, dependencies come in two kinds: narrow dependencies, where partitions correspond one-to-one between parent and child, and wide dependencies, where every partition of the downstream RDD depends on every partition of the upstream (parent) RDD, a many-to-many relationship.
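The two kinds of dependencies can be observed in the shell through an RDD's dependencies field: a narrow transformation such as map yields a OneToOneDependency, while a shuffling transformation such as groupBy yields a ShuffleDependency. A minimal sketch (object hashes abbreviated; exact output varies by Spark version):
scala> val rdd = sc.parallelize(1 to 4)
scala> rdd.map(_ * 2).dependencies
res0: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.OneToOneDependency@...)
scala> rdd.groupBy(_ % 2).dependencies
res1: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@...)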
緩存
If the same RDD is used multiple times in an application, it can be cached. Only the first computation materializes its partitions from the lineage; any later use of the RDD reads straight from the cache instead of recomputing along the lineage, which speeds up reuse. As shown in the figure below, RDD-1 goes through a series of transformations to RDD-n, which is saved to HDFS. Along the way RDD-1 has an intermediate result; if that result is cached in memory, the subsequent transformation of RDD-1 into RDD-m no longer needs to recompute RDD-0.
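A minimal caching sketch (cache() is shorthand for persist() at the default MEMORY_ONLY storage level; nothing is materialized until the first action runs):
scala> val rdd = sc.textFile("hdfs://hadoop102:9000/RELEASE").map(_.length)
scala> rdd.cache()   // mark for caching; nothing is computed yet
scala> rdd.count()   // first action: computes via the lineage and fills the cache
scala> rdd.count()   // second action: served from the cache, no recomputation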

CheckPoint
RDD lineage naturally provides fault tolerance: if a partition's data fails or is lost, it can be rebuilt from the lineage. For long-running iterative applications, however, the lineage between RDDs grows longer and longer as iterations proceed, and a failure late in the process forces recovery through a very long lineage, which inevitably hurts performance. For this reason RDDs support checkpointing, which saves the data to persistent storage and cuts the earlier lineage: a checkpointed RDD no longer needs to know its parent RDDs, because it can read its data back from the checkpoint.
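A minimal checkpointing sketch (the HDFS directory below is a placeholder; the checkpoint is actually written when the first action runs, so it is common to cache() the RDD first to avoid computing it twice):
scala> sc.setCheckpointDir("hdfs://hadoop102:9000/checkpoint")   // placeholder directory
scala> val rdd = sc.parallelize(1 to 100).map(_ * 2)
scala> rdd.cache()        // optional: avoids recomputing when the checkpoint is written
scala> rdd.checkpoint()   // mark the RDD for checkpointing
scala> rdd.count()        // the action triggers both the job and the checkpoint write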
RDD Programming
In Spark, an RDD is represented as an object, and transformations are method calls on that object. After defining an RDD through a series of transformations, you invoke an action to trigger its computation; an action either returns a result to the application (count, collect, etc.) or saves data to a storage system (saveAsTextFile, etc.). Computation only happens when an action is encountered (lazy evaluation), which lets the runtime pipeline multiple transformations together.
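A small sketch of lazy evaluation: defining the map below returns immediately without touching the data, and only the action runs a job (in local mode the executor-side println output appears in the same console):
scala> val doubled = sc.parallelize(1 to 4).map { x => println(s"computing $x"); x * 2 }
doubled: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at map at <console>:24
scala> doubled.collect()   // only now does "computing ..." print, once per element
res0: Array[Int] = Array(2, 4, 6, 8)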
To use Spark, a developer writes a Driver program, which is submitted to the cluster to schedule work on the Workers, as shown in the figure below. The Driver defines one or more RDDs and invokes actions on them; the Workers execute the RDD partition computation tasks.


Creating RDDs
RDDs can be created in three ways in Spark: from a collection, from external storage, or from another RDD.
Note that creating from a collection and creating from external storage are the key methods; creating from another RDD is only for awareness.
Creating from a collection
To create an RDD from a collection, Spark provides two main functions:
parallelize and makeRDD
- Creating from a collection with parallelize()
scala> val rdd = sc.parallelize(Array(1,2,3,4,5,6,7,8))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24
- Creating from a collection with makeRDD()
scala> val rdd1 = sc.makeRDD(Array(1,2,3,4,5,6,7,8))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[1] at makeRDD at <console>:24
Creating from a dataset in an external storage system
This covers the local file system and every data source Hadoop supports, such as HDFS, Cassandra, and HBase.
scala> val rdd2= sc.textFile("hdfs://hadoop102:9000/RELEASE")
rdd2: org.apache.spark.rdd.RDD[String] = hdfs://hadoop102:9000/RELEASE MapPartitionsRDD[4] at textFile at <console>:24
RDD Transformations (key material)
Overall, RDD transformations divide into Value types and Key-Value types (with a group of dual-Value operators that combine two RDDs).
Value types
map(func) (important)
Applies a function to every element of the source RDD, producing a new mapped RDD. For example, to output every number in an Array multiplied by 2, you would use map:
// create an RDD from 1 to 10
scala> sc.makeRDD(1 to 10)
res0: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:25
// multiply each element by 2
scala> res0.map(_*2)
res1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at map at <console>:27
// collect and print
scala> res1.collect()
res2: Array[Int] = Array(2, 4, 6, 8, 10, 12, 14, 16, 18, 20)
mapPartitions(func)
Similar to map, but runs independently on each slice (partition) of the RDD, so when running on an RDD of type T, func must have type Iterator[T] => Iterator[U]. Given N elements in M partitions, the function passed to map is invoked N times, while the function passed to mapPartitions is invoked only M times, each call processing one entire partition. Using the same requirement as above:
// create an RDD from 1 to 10
scala> sc.makeRDD(1 to 10)
res0: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:25
// multiply each element by 2
scala> res0.mapPartitions(x=>x.map(_*2))
res1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at mapPartitions at <console>:27
// collect and print
scala> res1.collect()
res2: Array[Int] = Array(2, 4, 6, 8, 10, 12, 14, 16, 18, 20)
mapPartitionsWithIndex(func)
Similar to mapPartitions, but func takes an extra integer parameter carrying the index of the partition, so when running on an RDD of type T, func must have type (Int, Iterator[T]) => Iterator[U].
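No example is given above, so here is a minimal sketch that tags each element with the index of the partition holding it (with 1 to 8 spread over 2 partitions, elements 1 through 4 land in partition 0 and 5 through 8 in partition 1):
scala> val rdd = sc.parallelize(1 to 8, 2)
scala> rdd.mapPartitionsWithIndex((index, items) => items.map(x => (index, x))).collect()
res0: Array[(Int, Int)] = Array((0,1), (0,2), (0,3), (0,4), (1,5), (1,6), (1,7), (1,8))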
flatMap(func) (important)
Similar to map, but each input element can be mapped to 0 or more output elements (so func should return a sequence rather than a single element).
The difference between flatMap and map:
import org.apache.spark.{SparkConf, SparkContext}

object MapAndFlatMap {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("map_flatMap_demo").setMaster("local"))
    val arrayRDD = sc.parallelize(Array("a_b", "c_d", "e_f"))
    arrayRDD.foreach(println) // output 1

    arrayRDD.map(string => {
      string.split("_")
    }).foreach(x => {
      println(x.mkString(",")) // output 2
    })

    arrayRDD.flatMap(string => {
      string.split("_")
    }).foreach(x => {
      println(x.mkString(",")) // output 3
    })
  }
}
The printed results: output 1 is the original strings (a_b, c_d, e_f); output 2 prints one line per array (a,b then c,d then e,f); output 3 prints the individual letters a through f, one per line.
Comparing output 2 with output 3, the conclusion is easy to draw:
after map, the RDD's value is Array(Array("a","b"), Array("c","d"), Array("e","f"))
after flatMap, the RDD's value is Array("a","b","c","d","e","f")
In short, flatMap flattens the sequences it returns and merges all of their elements into a single collection.
glom
Turns each partition into an array, producing a new RDD of type RDD[Array[T]].
scala> val rdd = sc.parallelize(1 to 16,4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[65] at parallelize at <console>:24
scala> rdd.glom().collect()
res25: Array[Array[Int]] = Array(Array(1, 2, 3, 4), Array(5, 6, 7, 8), Array(9, 10, 11, 12), Array(13, 14, 15, 16))
groupBy(func) (important)
Groups the elements by the return value of the supplied function; values that map to the same key are placed into one iterator.
scala> val rdd = sc.parallelize(1 to 4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[65] at parallelize at <console>:24
scala> val group = rdd.groupBy(_%2)
group: org.apache.spark.rdd.RDD[(Int, Iterable[Int])] = ShuffledRDD[2] at groupBy at <console>:26
scala> group.collect
res0: Array[(Int, Iterable[Int])] = Array((0,CompactBuffer(2, 4)), (1,CompactBuffer(1, 3)))
In this example, a sequence from 1 to 4 is created; the numbers divisible by 2 go into one group and the rest into another, so the grouping key is the value modulo 2.
filter(func) (important)
Filters the RDD: returns a new RDD made of the input elements for which func returns true. For example, create an RDD of strings, then filter out a new RDD containing the strings with the substring "xiao":
scala> var sourceFilter = sc.parallelize(Array("xiaoming","xiaojiang","xiaohe","dazhi"))
sourceFilter: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[10] at parallelize at <console>:24
scala> sourceFilter.collect()
res9: Array[String] = Array(xiaoming, xiaojiang, xiaohe, dazhi)
scala> val filter = sourceFilter.filter(_.contains("xiao"))
filter: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[11] at filter at <console>:26
scala> filter.collect()
res10: Array[String] = Array(xiaoming, xiaojiang, xiaohe)
sample(withReplacement, fraction, seed)
Randomly samples the data using the given seed. fraction is the expected fraction of the data to sample (not an exact count); withReplacement chooses sampling with replacement (true) or without (false); seed seeds the random number generator.
// create an RDD
scala> val rdd = sc.parallelize(1 to 10)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[20] at parallelize at <console>:24
// print it
scala> rdd.collect()
res15: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
// sample with replacement
scala> var sample1 = rdd.sample(true,0.4,2)
sample1: org.apache.spark.rdd.RDD[Int] = PartitionwiseSampledRDD[21] at sample at <console>:26
// print the with-replacement sample
scala> sample1.collect()
res16: Array[Int] = Array(1, 2, 2, 7, 7, 8, 9)
// sample without replacement and print the result
scala> var sample2 = rdd.sample(false,0.2,3)
sample2: org.apache.spark.rdd.RDD[Int] = PartitionwiseSampledRDD[22] at sample at <console>:26
scala> sample2.collect()
res17: Array[Int] = Array(1, 9)
distinct([numTasks])
Returns a new RDD with the duplicates removed from the source RDD. The number of parallel tasks defaults to the parallelism of the source RDD, but an optional numTasks parameter can be passed to change it.
// create an RDD
scala> val distinctRdd = sc.parallelize(List(1,2,1,5,2,9,6,1))
distinctRdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[34] at parallelize at <console>:24
// deduplicate (parallelism unspecified)
scala> val unionRDD = distinctRdd.distinct()
unionRDD: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[37] at distinct at <console>:26
// print the deduplicated RDD
scala> unionRDD.collect()
res20: Array[Int] = Array(1, 9, 5, 6, 2)
// deduplicate with parallelism 2
scala> val unionRDD = distinctRdd.distinct(2)
unionRDD: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[40] at distinct at <console>:26
// print the deduplicated RDD
scala> unionRDD.collect()
res21: Array[Int] = Array(6, 2, 1, 9, 5)
coalesce(numPartitions)
Shrinks the number of partitions; useful for running a small dataset more efficiently after filtering down a large one.
// create an RDD
scala> val rdd = sc.parallelize(1 to 16,4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[54] at parallelize at <console>:24
// check the RDD's partition count
scala> rdd.partitions.size
res20: Int = 4
// coalesce the RDD to fewer partitions
scala> val coalesceRDD = rdd.coalesce(3)
coalesceRDD: org.apache.spark.rdd.RDD[Int] = CoalescedRDD[55] at coalesce at <console>:26
// check the new RDD's partition count
scala> coalesceRDD.partitions.size
res21: Int = 3
repartition(numPartitions)
Reshuffles all the data randomly over the network into the given number of partitions.
// create an RDD
scala> val rdd = sc.parallelize(1 to 16,4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[56] at parallelize at <console>:24
// check the RDD's partition count
scala> rdd.partitions.size
res22: Int = 4
// repartition the RDD
scala> val rerdd = rdd.repartition(2)
rerdd: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[60] at repartition at <console>:26
// check the new RDD's partition count
scala> rerdd.partitions.size
res23: Int = 2
The difference between coalesce and repartition
coalesce can repartition with or without a shuffle, controlled by its shuffle: Boolean parameter (default false).
repartition actually calls coalesce with the shuffle enabled. Its source:
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
  coalesce(numPartitions, shuffle = true)
}
sortBy(func, [ascending], [numTasks]) (important)
First applies func to the data, then sorts by comparing the processed results; ascending order by default.
// create an RDD
scala> val rdd = sc.parallelize(List(2,1,3,4))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[21] at parallelize at <console>:24
// sort by the elements themselves
scala> rdd.sortBy(x => x).collect()
res11: Array[Int] = Array(1, 2, 3, 4)
// sort by the remainder modulo 3
scala> rdd.sortBy(x => x%3).collect()
res12: Array[Int] = Array(3, 4, 1, 2)
Dual-Value types (operations on two RDDs)
union(otherDataset)
Returns a new RDD that is the union of the source RDD and the argument RDD.
// create the first RDD
scala> val rdd1 = sc.parallelize(1 to 5)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[23] at parallelize at <console>:24
// create the second RDD
scala> val rdd2 = sc.parallelize(5 to 10)
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[24] at parallelize at <console>:24
// compute the union of the two RDDs
scala> val rdd3 = rdd1.union(rdd2)
rdd3: org.apache.spark.rdd.RDD[Int] = UnionRDD[25] at union at <console>:28
// print the union
scala> rdd3.collect()
res18: Array[Int] = Array(1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10)
subtract(otherDataset)
Computes a set difference: the elements of the source RDD that also appear in the argument RDD are removed, and the remaining elements of the source RDD are kept.
// create the first RDD
scala> val rdd = sc.parallelize(3 to 8)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[70] at parallelize at <console>:24
// create the second RDD
scala> val rdd1 = sc.parallelize(1 to 5)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[71] at parallelize at <console>:24
// compute and print the difference of the first RDD minus the second
scala> rdd.subtract(rdd1).collect()
res27: Array[Int] = Array(8, 6, 7)
intersection(otherDataset)
Returns a new RDD that is the intersection of the source RDD and the argument RDD.
// create the first RDD
scala> val rdd1 = sc.parallelize(1 to 7)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[26] at parallelize at <console>:24
// create the second RDD
scala> val rdd2 = sc.parallelize(5 to 10)
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[27] at parallelize at <console>:24
// compute the intersection of the two RDDs
scala> val rdd3 = rdd1.intersection(rdd2)
rdd3: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[33] at intersection at <console>:28
// print the result
scala> rdd3.collect()
res19: Array[Int] = Array(5, 6, 7)
zip(otherDataset)
Zips two RDDs together into an RDD of key/value pairs. The two RDDs are assumed to have the same number of partitions and the same number of elements per partition; otherwise an exception is thrown.
// create the first RDD
scala> val rdd1 = sc.parallelize(Array(1,2,3),3)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[1] at parallelize at <console>:24
// create the second RDD (same partition count as the first)
scala> val rdd2 = sc.parallelize(Array("a","b","c"),3)
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[2] at parallelize at <console>:24
// zip the first RDD with the second and print
scala> rdd1.zip(rdd2).collect
res1: Array[(Int, String)] = Array((1,a), (2,b), (3,c))
// zip the second RDD with the first and print
scala> rdd2.zip(rdd1).collect
res2: Array[(String, Int)] = Array((a,1), (b,2), (c,3))
// create a third RDD (partition count differs from the first two)
scala> val rdd3 = sc.parallelize(Array("a","b","c"),2)
rdd3: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[5] at parallelize at <console>:24
// zipping the first RDD with the third throws an exception
scala> rdd1.zip(rdd3).collect
java.lang.IllegalArgumentException: Can't zip RDDs with unequal numbers of partitions: List(3, 2)
at org.apache.spark.rdd.ZippedPartitionsBaseRDD.getPartitions(ZippedPartitionsRDD.scala:57)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
... 48 elided
Key-Value types
partitionBy
Partitions a pair RDD with the given Partitioner. If the RDD's existing partitioner is the same as the requested one, no repartitioning happens; otherwise a ShuffledRDD is produced, i.e. a shuffle occurs.
// create an RDD
scala> val rdd = sc.parallelize(Array((1,"aaa"),(2,"bbb"),(3,"ccc"),(4,"ddd")),4)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[44] at parallelize at <console>:24
// check the RDD's partition count
scala> rdd.partitions.size
res24: Int = 4
// repartition the RDD
scala> var rdd2 = rdd.partitionBy(new org.apache.spark.HashPartitioner(2))
rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[45] at partitionBy at <console>:26
// check the new RDD's partition count
scala> rdd2.partitions.size
res25: Int = 2
groupByKey
Purpose: groupByKey also operates key by key, but it only gathers the values for each key into a single sequence.
// create a pair RDD
scala> val words = Array("one", "two", "two", "three", "three", "three")
words: Array[String] = Array(one, two, two, three, three, three)
scala> val wordPairsRDD = sc.parallelize(words).map(word => (word, 1))
wordPairsRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[4] at map at <console>:26
// gather the values with the same key into one sequence
scala> val group = wordPairsRDD.groupByKey()
group: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = ShuffledRDD[5] at groupByKey at <console>:28
// print the result
scala> group.collect()
res1: Array[(String, Iterable[Int])] = Array((two,CompactBuffer(1, 1)), (one,CompactBuffer(1)), (three,CompactBuffer(1, 1, 1)))
// sum the values for each key
scala> group.map(t => (t._1, t._2.sum))
res2: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[6] at map at <console>:31
// print the result
scala> res2.collect()
res3: Array[(String, Int)] = Array((two,2), (one,1), (three,3))
reduceByKey(func, [numTasks])
Called on an RDD of (K,V) pairs, returns an RDD of (K,V) pairs in which the values for each key are aggregated with the given reduce function. The number of reduce tasks can be set through the optional second parameter.
// create a pair RDD
scala> val rdd = sc.parallelize(List(("female",1),("male",5),("female",5),("male",2)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[46] at parallelize at <console>:24
// sum the values with the same key
scala> val reduce = rdd.reduceByKey((x,y) => x+y)
reduce: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[47] at reduceByKey at <console>:26
// print the result
scala> reduce.collect()
res29: Array[(String, Int)] = Array((female,6), (male,7))
The difference between reduceByKey and groupByKey
reduceByKey: aggregates by key, with a combine (map-side pre-aggregation) step before the shuffle; the result is an RDD[(K, V)].
groupByKey: groups by key and shuffles directly, with no pre-aggregation; see the word-count sketch below.
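A quick word-count sketch showing that the two produce the same result (the ordering of the collected output may differ); reduceByKey is usually preferred because the map-side combine shrinks the shuffle:
scala> val pairs = sc.parallelize(Array("one", "two", "two", "three")).map((_, 1))
scala> pairs.reduceByKey(_ + _).collect()
res0: Array[(String, Int)] = Array((two,2), (one,1), (three,1))
scala> pairs.groupByKey().map(t => (t._1, t._2.sum)).collect()
res1: Array[(String, Int)] = Array((two,2), (one,1), (three,1))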
aggregateByKey
On an RDD of key-value pairs, groups the values by key and merges them in two stages. Within each partition, each value for a key is folded into the zero value using the seq function. Then, across partitions, the per-partition results for each key are merged with the combine function (the first two values are combined, the result is combined with the next value, and so on), and each key with its final combined result is emitted as a new key-value pair.
(1) zeroValue: the initial value given to each key in each partition;
(2) seqOp: the function used within each partition to iteratively fold values into the accumulator, starting from the initial value;
(3) combOp: the function used to merge the per-partition results.

// create a pair RDD
scala> val rdd = sc.parallelize(List(("a",3),("a",2),("c",4),("b",3),("c",6),("c",8)),2)
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24
// take the max value per key within each partition, then sum those maxima across partitions
scala> val agg = rdd.aggregateByKey(0)(math.max(_,_),_+_)
agg: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[1] at aggregateByKey at <console>:26
// print the result
scala> agg.collect()
res0: Array[(String, Int)] = Array((b,3), (a,3), (c,12))
foldByKey
Purpose: a simplified form of aggregateByKey in which seqOp and combOp are the same function.
// create a pair RDD
scala> val rdd = sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(3,6),(3,8)),3)
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[91] at parallelize at <console>:24
// sum the values with the same key
scala> val agg = rdd.foldByKey(0)(_+_)
agg: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[92] at foldByKey at <console>:26
// print the result
scala> agg.collect()
res61: Array[(Int, Int)] = Array((3,14), (1,9), (2,3))
combineByKey[C]
Combines the values that share the same key K into a single aggregated result, whose type C may differ from the value type.

// create a pair RDD
scala> val input = sc.parallelize(Array(("a", 88), ("b", 95), ("a", 91), ("b", 93), ("a", 95), ("b", 98)),2)
input: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[52] at parallelize at <console>:26
// sum the values for each key while counting the key's occurrences, keeping both in a tuple
scala> val combine = input.combineByKey((_,1),(acc:(Int,Int),v)=>(acc._1+v,acc._2+1),(acc1:(Int,Int),acc2:(Int,Int))=>(acc1._1+acc2._1,acc1._2+acc2._2))
combine: org.apache.spark.rdd.RDD[(String, (Int, Int))] = ShuffledRDD[5] at combineByKey at <console>:28
// print the combined result
scala> combine.collect
res5: Array[(String, (Int, Int))] = Array((b,(286,3)), (a,(274,3)))
// compute the average per key
scala> val result = combine.map{case (key,value) => (key,value._1/value._2.toDouble)}
result: org.apache.spark.rdd.RDD[(String, Double)] = MapPartitionsRDD[54] at map at <console>:30
// print the result
scala> result.collect()
res33: Array[(String, Double)] = Array((b,95.33333333333333), (a,91.33333333333333))
sortByKey([ascending], [numTasks])
Called on an RDD of (K,V) pairs where K implements the Ordered trait; returns an RDD of (K,V) pairs sorted by key.
// create a pair RDD
scala> val rdd = sc.parallelize(Array((3,"aa"),(6,"cc"),(2,"bb"),(1,"dd")))
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[14] at parallelize at <console>:24
// sort by key, ascending
scala> rdd.sortByKey(true).collect()
res9: Array[(Int, String)] = Array((1,dd), (2,bb), (3,aa), (6,cc))
// sort by key, descending
scala> rdd.sortByKey(false).collect()
res10: Array[(Int, String)] = Array((6,cc), (3,aa), (2,bb), (1,dd))
mapValues
For an RDD of (K,V) pairs, operates on the values only, leaving the keys untouched.
// create a pair RDD
scala> val rdd3 = sc.parallelize(Array((1,"a"),(1,"d"),(2,"b"),(3,"c")))
rdd3: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[67] at parallelize at <console>:24
// append the string "|||" to each value
scala> rdd3.mapValues(_+"|||").collect()
res26: Array[(Int, String)] = Array((1,a|||), (1,d|||), (2,b|||), (3,c|||))
join(otherDataset, [numTasks])
Called on RDDs of types (K,V) and (K,W); returns an RDD of (K,(V,W)) pairs containing all pairs of elements for each matching key.
// create the first pair RDD
scala> val rdd = sc.parallelize(Array((1,"a"),(2,"b"),(3,"c")))
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[32] at parallelize at <console>:24
// create the second pair RDD
scala> val rdd1 = sc.parallelize(Array((1,4),(2,5),(3,6)))
rdd1: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[33] at parallelize at <console>:24
// join and print the result
scala> rdd.join(rdd1).collect()
res13: Array[(Int, (String, Int))] = Array((1,(a,4)), (2,(b,5)), (3,(c,6)))
cogroup(otherDataset, [numTasks])
Called on RDDs of types (K,V) and (K,W); returns an RDD of type (K,(Iterable<V>,Iterable<W>)).
// create the first pair RDD
scala> val rdd = sc.parallelize(Array((1,"a"),(2,"b"),(3,"c")))
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[37] at parallelize at <console>:24
// create the second pair RDD
scala> val rdd1 = sc.parallelize(Array((1,4),(2,5),(3,6)))
rdd1: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[38] at parallelize at <console>:24
// cogroup the two RDDs and print the result
scala> rdd.cogroup(rdd1).collect()
res14: Array[(Int, (Iterable[String], Iterable[Int]))] = Array((1,(CompactBuffer(a),CompactBuffer(4))), (2,(CompactBuffer(b),CompactBuffer(5))), (3,(CompactBuffer(c),CompactBuffer(6))))