Article Recommendation System | Part 3: Collecting User Behavior Data

Recommended reading:
Article Recommendation System | Part 1: Recommendation Pipeline Design
Article Recommendation System | Part 2: Syncing Business Data

In the previous article we finished syncing the business data. The other indispensable input for a recommendation system is user behavior data: it is the bedrock the system is built on, and even the cleverest cook cannot make a meal without rice. Our next step, then, is to sync user behavior data into the recommendation system's database.

In the article recommendation system, user behaviors include exposure, click, dwell, collect, share, and so on. We therefore define a behavior record with the following fields: event time (actionTime), reading time (readTime), channel ID (channelId), event name (action), user ID (userId), article ID (articleId), and algorithm ID (algorithmCombine). Records are encoded as JSON, as shown below:

# Exposure event parameters
{"actionTime":"2019-04-10 18:15:35","readTime":"","channelId":0,"param":{"action": "exposure", "userId": "2", "articleId": "[18577, 14299]", "algorithmCombine": "C2"}}

# Parameters for actions triggered on an article
{"actionTime":"2019-04-10 18:15:36","readTime":"","channelId":18,"param":{"action": "click", "userId": "2", "articleId": "18577", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:38","readTime":"1621","channelId":18,"param":{"action": "read", "userId": "2", "articleId": "18577", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:39","readTime":"","channelId":18,"param":{"action": "click", "userId": "1", "articleId": "14299", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:39","readTime":"","channelId":18,"param":{"action": "click", "userId": "2", "articleId": "14299", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:41","readTime":"914","channelId":18,"param":{"action": "read", "userId": "2", "articleId": "14299", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:47","readTime":"7256","channelId":18,"param":{"action": "read", "userId": "1", "articleId": "14299", "algorithmCombine": "C2"}}

Offline User Behavior Data

Because user behavior data is produced at large scale, it is usually loaded once a day for offline computation. First, create the user behavior database profile and the behavior table user_action in Hive, partition the table by date, and use the JSON SerDe so rows can be parsed directly from the log format:

-- Create the user behavior database
create database if not exists profile comment "user action" location '/user/hive/warehouse/profile.db/';
-- Create the user behavior table
create table user_action
(
    actionTime STRING comment "user action time",
    readTime   STRING comment "user reading time",
    channelId  INT comment "article channel id",
    param MAP<STRING, STRING> comment "action parameters"
)
    COMMENT "user primitive action"
    PARTITIONED BY (dt STRING) -- partitioned by date
    ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' -- parse the JSON log format
    LOCATION '/user/hive/warehouse/profile.db/user_action';
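
Once a day's partition is loaded, the param map can be queried with Hive's map-access syntax. As a quick illustration, here is a sketch using PySpark with Hive support enabled (an assumption about the compute environment; the same SQL runs unchanged in the Hive CLI):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("user-action-demo") \
    .enableHiveSupport() \
    .getOrCreate()

# Count clicks per article in one day's partition
spark.sql("""
    SELECT param['articleId'] AS articleId, COUNT(*) AS clicks
    FROM profile.user_action
    WHERE dt = '2019-04-10' AND param['action'] = 'click'
    GROUP BY param['articleId']
""").show()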

User behavior data is typically written to log files on the application servers. We can have Flume tail the log file and deliver the behavior data into the HDFS directory backing Hive's user_action table. The Flume configuration is as follows:

a1.sources = s1
a1.sinks = k1
a1.channels = c1

a1.sources.s1.channels= c1
a1.sources.s1.type = exec
a1.sources.s1.command = tail -F /root/logs/userClick.log
a1.sources.s1.interceptors=i1 i2
a1.sources.s1.interceptors.i1.type=regex_filter
a1.sources.s1.interceptors.i1.regex=\\{.*\\}
a1.sources.s1.interceptors.i2.type=timestamp

# c1
a1.channels.c1.type=memory
a1.channels.c1.capacity=30000
a1.channels.c1.transactionCapacity=1000

# k1
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.path=hdfs://192.168.19.137:9000/user/hive/warehouse/profile.db/user_action/%Y-%m-%d
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=Text
a1.sinks.k1.hdfs.rollInterval=0
a1.sinks.k1.hdfs.rollSize=10240
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.idleTimeout=60

Write the Flume startup script collect_click.sh:

#!/usr/bin/env bash

export JAVA_HOME=/root/bigdata/jdk
export HADOOP_HOME=/root/bigdata/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

/root/bigdata/flume/bin/flume-ng agent -c /root/bigdata/flume/conf -f /root/bigdata/flume/conf/collect_click.conf -Dflume.root.logger=INFO,console -name a1

Flume creates the dated directories automatically, but each new directory must be attached to a Hive partition manually before its data can be queried:

alter table user_action add partition (dt='2019-11-11') location "/user/hive/warehouse/profile.db/user_action/2019-11-11/";
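
Rather than typing this by hand every day, the step can be scheduled. Below is a minimal sketch (the helper is hypothetical, not part of the original setup) that builds the statement for a given day and runs it through the hive CLI:

import subprocess
from datetime import date

def add_partition(dt: str) -> None:
    """Attach the Flume-generated directory for day dt as a Hive partition."""
    sql = ("ALTER TABLE profile.user_action "
           f"ADD IF NOT EXISTS PARTITION (dt='{dt}') "
           f"LOCATION '/user/hive/warehouse/profile.db/user_action/{dt}/'")
    subprocess.run(["hive", "-e", sql], check=True)

add_partition(date.today().strftime("%Y-%m-%d"))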

Real-Time User Behavior Data

To improve recommendation freshness, we also collect user behavior in real time for online computation. Here Flume ships the log data to Kafka, and online compute tasks read the real-time behavior stream from Kafka. First, start ZooKeeper as a daemon:

/root/bigdata/kafka/bin/zookeeper-server-start.sh -daemon /root/bigdata/kafka/config/zookeeper.properties

Then start Kafka:

/root/bigdata/kafka/bin/kafka-server-start.sh /root/bigdata/kafka/config/server.properties

# Start a console producer
/root/bigdata/kafka/bin/kafka-console-producer.sh --broker-list 192.168.19.137:9092 --sync --topic click-trace
# Start a console consumer
/root/bigdata/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.19.137:9092 --topic click-trace

Next, modify the Flume collection configuration by adding a channel c2 and a sink k2 that deliver the same log stream to Kafka:

a1.sources = s1
a1.sinks = k1 k2
a1.channels = c1 c2

a1.sources.s1.channels= c1 c2
a1.sources.s1.type = exec
a1.sources.s1.command = tail -F /root/logs/userClick.log
a1.sources.s1.interceptors=i1 i2
a1.sources.s1.interceptors.i1.type=regex_filter
a1.sources.s1.interceptors.i1.regex=\\{.*\\}
a1.sources.s1.interceptors.i2.type=timestamp

# c1
a1.channels.c1.type=memory
a1.channels.c1.capacity=30000
a1.channels.c1.transactionCapacity=1000

# c2
a1.channels.c2.type=memory
a1.channels.c2.capacity=30000
a1.channels.c2.transactionCapacity=1000

# k1
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.path=hdfs://192.168.19.137:9000/user/hive/warehouse/profile.db/user_action/%Y-%m-%d
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=Text
a1.sinks.k1.hdfs.rollInterval=0
a1.sinks.k1.hdfs.rollSize=10240
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.idleTimeout=60

# k2
a1.sinks.k2.channel=c2
a1.sinks.k2.type=org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k2.kafka.bootstrap.servers=192.168.19.137:9092
a1.sinks.k2.kafka.topic=click-trace
a1.sinks.k2.flumeBatchSize=20
a1.sinks.k2.kafka.producer.acks=1

Write the Kafka startup script start_kafka.sh:

#!/usr/bin/env bash
# Start ZooKeeper as a daemon
/root/bigdata/kafka/bin/zookeeper-server-start.sh -daemon /root/bigdata/kafka/config/zookeeper.properties
# Start Kafka in the background so the topic below can be created
/root/bigdata/kafka/bin/kafka-server-start.sh /root/bigdata/kafka/config/server.properties &
sleep 10
# Create the click-trace topic (the command fails harmlessly if it already exists)
/root/bigdata/kafka/bin/kafka-topics.sh --zookeeper 192.168.19.137:2181 --create --replication-factor 1 --topic click-trace --partitions 1
# Stay attached to the Kafka process so it can be supervised
wait

Process Management

We use Supervisor for process management, so that a process is restarted automatically when it exits abnormally. The Flume program is configured as follows:

[program:collect-click]
command=/bin/bash /root/toutiao_project/scripts/collect_click.sh
user=root
autorestart=true
redirect_stderr=true
stdout_logfile=/root/logs/collect.log
loglevel=info
stopsignal=KILL
stopasgroup=true
killasgroup=true

The Kafka program is configured as follows:

[program:kafka]
command=/bin/bash /root/toutiao_project/scripts/start_kafka.sh
user=root
autorestart=true
redirect_stderr=true
stdout_logfile=/root/logs/kafka.log
loglevel=info
stopsignal=KILL
stopasgroup=true
killasgroup=true
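
The group commands used later (toutiao:*) assume the two programs are also declared as a group in the Supervisor configuration; if yours does not define one yet, a section like the following (the group name toutiao is an assumption carried over from those commands) does it:

[group:toutiao]
programs=collect-click,kafka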

Start Supervisor:

supervisord -c /etc/supervisord.conf

Start a Kafka consumer, then write a log line into the application server's log file; the consumer should receive the real-time behavior data:

# Start a Kafka console consumer
/root/bigdata/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.19.137:9092 --topic click-trace

# Append a log line to the application server log
echo '{"actionTime":"2019-04-10 21:04:39","readTime":"","channelId":18,"param":{"action": "click", "userId": "2", "articleId": "14299", "algorithmCombine": "C2"}}' >> userClick.log

# The consumer receives the log line
{"actionTime":"2019-04-10 21:04:39","readTime":"","channelId":18,"param":{"action": "click", "userId": "2", "articleId": "14299", "algorithmCombine": "C2"}}

Commonly used Supervisor commands are listed below:

supervisorctl

> status              # show the status of all programs
> start apscheduler   # start the single program apscheduler
> stop toutiao:*      # stop all programs in the toutiao group
> start toutiao:*     # start all programs in the toutiao group
> restart toutiao:*   # restart all programs in the toutiao group
> update              # reload any program whose configuration has changed

