Background: extract data from Hive, build a linear regression model, and make predictions.
Goal: extract Hive data and run predictions on it.

1. Data Extraction
This walkthrough uses test data on a Spark environment configured through YARN. The extraction works the same way as on a standalone Spark cluster; alternatively, you can prepare a txt file on HDFS and read from that.
import os
import sys

# When running this file directly, adjust sys.path to avoid import errors later.
BASE_DIR = os.path.dirname(os.path.dirname(os.getcwd()))
sys.path.insert(0, os.path.join(BASE_DIR))

# Pin the Python interpreter; with multiple versions installed,
# leaving this unspecified is likely to cause errors.
PYSPARK_PYTHON = "/opt/anaconda3/envs/pythonOnYarn/bin/python3.8"
os.environ["PYSPARK_PYTHON"] = PYSPARK_PYTHON
os.environ["PYSPARK_DRIVER_PYTHON"] = PYSPARK_PYTHON
os.environ["HADOOP_USER_NAME"] = "hdfs"

from pythonOnYarn.test import SparkSessionBase

class TestPythonOnYarn(SparkSessionBase):
    SPARK_APP_NAME = "testPythonOnYarn"
    SPARK_URL = "yarn"
    ENABLE_HIVE_SUPPORT = True

    def __init__(self):
        self.spark = self._create_spark_session()

oa = TestPythonOnYarn()
df = oa.spark.sql("select * from test.u_data limit 10")
A sample of the data read into df is shown below.
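The SparkSessionBase helper imported above is not shown in this post; its exact contents are an assumption, but a minimal sketch along these lines (using the standard SparkSession builder API) would support the subclass above:

```python
class SparkSessionBase:
    # Defaults overridden by subclasses (assumed structure, not the original file).
    SPARK_APP_NAME = None
    SPARK_URL = "local[*]"
    ENABLE_HIVE_SUPPORT = False

    def _create_spark_session(self):
        # Import lazily so the class can be defined even where pyspark is absent.
        from pyspark.sql import SparkSession
        builder = (SparkSession.builder
                   .appName(self.SPARK_APP_NAME)
                   .master(self.SPARK_URL))
        if self.ENABLE_HIVE_SUPPORT:
            builder = builder.enableHiveSupport()
        return builder.getOrCreate()
```

Subclasses only need to override the three class attributes, exactly as TestPythonOnYarn does above.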

2. The Non-Vector Data Format Problem
First, try handling it the same way as a pandas DataFrame: concatenate the required independent variables.
from pyspark.sql.functions import split, concat_ws

df_ws = df.withColumn("s", concat_ws(",", df["userid"], df["movieid"], df["rating"]))
df_ws = df_ws.withColumn("features", split(df_ws["s"], ","))
# Keep userid so it can serve as the label column below.
vhouse_df = df_ws.select(["userid", "unixtime", "features"])
vhouse_df.show(3)
The processed data looks like this:
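The trap here can be seen without a cluster: concat_ws followed by split turns every value into a string, so the resulting column has type array&lt;string&gt; rather than a numeric vector. A plain-Python illustration of the same transformation (the sample values are made up, in the MovieLens u_data layout):

```python
# One hypothetical sample row.
row = {"userid": 196, "movieid": 242, "rating": 3}

# Mimic concat_ws(',', ...): join the columns into a single string.
s = ",".join(str(row[c]) for c in ("userid", "movieid", "rating"))

# Mimic split(s, ','): the elements come back as strings, not numbers.
features = s.split(",")
print(features)  # ['196', '242', '3']
assert all(isinstance(v, str) for v in features)
```

This is exactly the array&lt;string&gt; that the model will reject in the next step.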

Feed this data directly into the LinearRegression model from pyspark.ml.regression:
from pyspark.ml.regression import LinearRegression

lin_reg = LinearRegression(featuresCol="features", labelCol="userid", predictionCol="prediction")
lin_reg_model = lin_reg.fit(vhouse_df)
Running this directly raises an error: IllegalArgumentException: requirement failed: Column features must be of type class org.apache.spark.ml.linalg.VectorUDT:struct&lt;type:tinyint,size:int,indices:array&lt;int&gt;,values:array&lt;double&gt;&gt; but was actually class org.apache.spark.sql.types.ArrayType:array&lt;string&gt;.
3. Converting the Data to Vector Format
Some searching shows that the error above occurs because the 'features' column must be in vector format before the model can use it. So let's store the features in a Vector: build an RDD of Rows and convert it into a DataFrame.
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors

# Convert the feature columns into a dense vector and cast all values to floats.
def f(x):
    rel = {}
    rel["features"] = Vectors.dense(float(x[0]), float(x[1]), float(x[2]))
    rel["label"] = float(x[3])
    return rel

df1 = df.rdd.map(lambda p: Row(**f(p))).toDF()
df1.show()
Sample of the processed data:

Alternatively, use VectorAssembler; the effect is the same.
from pyspark.ml.feature import VectorAssembler

vec = VectorAssembler(inputCols=["userid", "movieid", "rating"], outputCol="features")
features_df = vec.transform(df)
features_df.show()

Feeding the data back into the model, it now runs without errors.
# Split the data: 70% for training, 30% for testing.
splits = df1.randomSplit([0.7, 0.3])
train_df = splits[0]
test_df = splits[1]

from pyspark.ml.regression import LinearRegression

lin_reg = LinearRegression(featuresCol="features", labelCol="label", predictionCol="prediction")
lin_reg_model = lin_reg.fit(train_df)
4. Building and Training the Linear Regression Model
Evaluate the linear regression model's performance on the training data.
from pyspark.ml.regression import LinearRegression

lin_reg = LinearRegression(labelCol="label")
lr_model = lin_reg.fit(train_df)
lr_model.coefficients   # fitted weights, one per input feature
lr_model.intercept      # fitted bias term
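The coefficients and intercept together define the fitted linear model. With feature vector x, weight vector w (the coefficients), and intercept b, the prediction is:

```latex
\hat{y} = \mathbf{w}^{\top}\mathbf{x} + b = \sum_{j=1}^{d} w_j x_j + b
```

Here d is the number of features in the vector column (three in this example: userid, movieid, rating).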

Evaluate the model on the test data:
test_p = lr_model.evaluate(test_df)
test_p.r2                 # coefficient of determination
test_p.meanSquaredError   # mean squared error
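Both metrics reported above are standard. As a sanity check, here is what r2 and meanSquaredError compute, written out in plain Python (an illustrative reimplementation, not the pyspark source):

```python
def mean_squared_error(y_true, y_pred):
    # Average of the squared residuals.
    n = len(y_true)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n

def r2_score(y_true, y_pred):
    # 1 - (residual sum of squares / total sum of squares).
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

print(mean_squared_error([1, 2, 3], [2, 2, 2]))  # 0.666...
print(r2_score([1, 2, 3], [2, 2, 2]))            # 0.0
```

An r2 of 1.0 means perfect prediction; an r2 of 0.0 means the model does no better than always predicting the mean of the labels, as the second example shows.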

5. Predicting Values
# Predict: transform() appends a 'prediction' column to the DataFrame.
predictions = lr_model.transform(test_df)
print(predictions.collect())  # collect() returns a list of Row objects
predictions.show()            # show() prints the table itself and returns None
