
A few Spark examples


Count the high-frequency characters in 《西游记》 (Journey to the West) and 《红楼梦》 (Dream of the Red Chamber):

scala> spark.read.textFile("/Users/bluejoe/testdata/xiyou.txt").map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", "")).flatMap(_.split("")).groupBy("value").count.sort($"count".desc).take(40)
res19: Array[org.apache.spark.sql.Row] = Array([道,10023], [不,7984], [了,7144], [一,7079], [那,6934], [我,6575], [是,5907], [行,5474], [来,5431], [他,5297], [个,5206], [你,5086], [的,4971], [者,4887], [有,3909], [大,3603], [得,3514], [这,3481], [去,3377], [上,3260], [老,3204], [三,3072], [见,3031], [在,2987], [人,2985], [子,2763], [僧,2751], [也,2706], [里,2629], [下,2613], [师,2438], [着,2273], [只,2234], [又,2227], [妖,2210], [八,2196], [之,2184], [说,2126], [王,2124], [天,2086])

scala> spark.read.textFile("/Users/bluejoe/testdata/honglou.txt").map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", "")).flatMap(_.split("")).groupBy("value").count.sort($"count".desc).take(40)
res20: Array[org.apache.spark.sql.Row] = Array([了,21157], [的,15603], [不,14957], [一,12106], [来,11404], [道,11029], [人,10493], [是,10099], [说,9801], [我,9137], [这,7797], [他,7712], [你,7118], [着,6172], [去,6165], [儿,6071], [也,6064], [玉,6023], [有,5958], [宝,5789], [个,5647], [子,5566], [又,5205], [贾,5193], [里,5134], [那,4891], [们,4886], [见,4788], [只,4662], [太,4287], [便,4062], [好,4026], [在,3990], [笑,3945], [家,3910], [上,3886], [么,3676], [得,3577], [大,3557], [姐,3435])
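In these pipelines, [\\x00-\\xff] strips the whole Latin-1 range (ASCII letters, digits and punctuation), the remaining alternatives strip common fullwidth punctuation, split("") then yields one row per character, and groupBy("value") groups on the single column that spark.read.textFile's Dataset[String] exposes. For running the same thing outside spark-shell, here is a minimal self-contained sketch; the file path, object name and app name are placeholders, not from the original:

import org.apache.spark.sql.SparkSession

object CharFreq {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("CharFreq").master("local[*]").getOrCreate()
    import spark.implicits._  // encoders for map/flatMap and the $"..." syntax; spark-shell imports this automatically

    spark.read.textFile("/path/to/text.txt")                        // hypothetical path
      .map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", ""))  // keep only CJK characters
      .flatMap(_.split(""))                                         // one row per character
      .groupBy("value").count()                                     // "value" is the Dataset[String] column name
      .sort($"count".desc)
      .take(40)
      .foreach(println)

    spark.stop()
  }
}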

Next, count the frequency of two-character sequences (bigrams):

scala> spark.read.textFile("/Users/bluejoe/testdata/xiyou.txt").map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", "")).flatMap(s => s.zip(s.drop(1)).map(t => "" + t._1 + t._2)).groupBy("value").count.sort($"count".desc).take(40)
res18: Array[org.apache.spark.sql.Row] = Array([行者,4189], [八戒,1747], [者道,1641], [师父,1546], [三藏,1287], [一个,1149], [大圣,1047], [唐僧,972], [道你,838], [沙僧,780], [和尚,732], [笑道,718], [怎么,707], [那里,707], [我们,685], [不知,665], [道我,637], [菩萨,623], [长老,612], [妖精,604], [老孙,563], [戒道,559], [两个,533], [了一,484], [什么,478], [——,468], [不是,467], [国王,455], [见那,451], [藏道,435], [那怪,434], [道师,434], [道这,434], [呆子,417], [徒弟,408], [只见,403], [也不,382], [僧道,377], [那妖,356], [小妖,348])

scala> val xx = spark.read.textFile("/Users/bluejoe/testdata/honglou.txt").map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", "")).flatMap(s => s.zip(s.drop(1)).map(t => "" + t._1 + t._2)).groupBy("value").count.sort($"count".desc).take(40)
xx: Array[org.apache.spark.sql.Row] = Array([宝玉,3963], [笑道,2454], [太太,1986], [了一,1900], [什么,1833], [凤姐,1731], [贾母,1684], [一个,1532], [夫人,1457], [也不,1446], [黛玉,1372], [道你,1287], [我们,1220], [那里,1174], [袭人,1151], [姑娘,1125], [道我,1120], [去了,1095], [宝钗,1081], [不知,1076], [王夫,1076], [起来,1054], [听了,1050], [出来,1044], [来了,1042], [怎么,1029], [你们,1014], [如今,1004], [丫头,993], [知道,982], [说道,975], [老太,972], [贾政,946], [这里,935], [道这,903], [他们,895], [说着,894], [不是,891], [众人,875], [奶奶,852])
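The expression s.zip(s.drop(1)) pairs each character with its successor, so a line of n characters yields n-1 overlapping bigrams. An equivalent formulation uses sliding(2); a sketch under the same spark-shell assumptions as above (note that sliding also emits a one-character window on one-character lines, hence the length filter):

val bigrams = spark.read.textFile("/path/to/text.txt")             // hypothetical path
  .map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", ""))
  .flatMap(_.sliding(2))        // overlapping windows of two consecutive characters
  .filter(_.length == 2)        // drop the short window produced by one-character lines
  .groupBy("value").count()
  .sort($"count".desc)
bigrams.show(40)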

Now compare the top-40 bigrams of each 20-chapter segment against this whole-book top 40:

scala> Seq(20,40,60,80,100,120).map(num=>20-spark.read.textFile("/Users/bluejoe/testdata/honglou"+num+".txt").map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", "")).flatMap(s => s.zip(s.drop(1)).map(t => "" + t._1 + t._2)).groupBy("value").count.sort($"count".desc).take(40).map(_(0)).toSet.intersect(xx.map(_(0)).toSet).size)
res17: Seq[Int] = List(-10, -12, -16, -11, -11, -14)  
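Each value here is 20 minus the size of the intersection, so -10 means that 30 of that segment's top-40 bigrams also appear in the whole-book top 40 (xx); the overlap stays high, 30 to 36 out of 40, across all six segments. A sketch that prints the overlap directly, reusing xx from above (the honglouNN.txt files are assumed to hold consecutive 20-chapter slices, as the original paths suggest):

val top40 = xx.map(_.getString(0)).toSet
Seq(20, 40, 60, 80, 100, 120).foreach { num =>
  val seg = spark.read.textFile("/Users/bluejoe/testdata/honglou" + num + ".txt")
    .map(_.replaceAll("[\\x00-\\xff]|,|。|:|\\.|“|”|?|!| ", ""))
    .flatMap(s => s.zip(s.drop(1)).map(t => "" + t._1 + t._2))  // same bigram extraction as above
    .groupBy("value").count().sort($"count".desc)
    .take(40).map(_.getString(0)).toSet                          // this segment's top-40 bigrams
  println(s"honglou$num: overlap with whole-book top 40 = ${seg.intersect(top40).size}/40")
}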