Pseudo-Distributed Hadoop Deployment on Mac

Welcome to follow me on GitHub and on Jianshu.

This article configures Hadoop in single-machine pseudo-distributed mode, which is good for learning how Hadoop works; real workloads still need a Hadoop cluster. The machine runs macOS.


SSH

Enable Remote Login on this machine, under System Preferences -> Sharing.


Log in to the local machine:

?  ~ ssh localhost
Last login: Sun Sep  3 08:02:53 2017

By default you are prompted for a password. Copy your existing SSH public key into this machine's authorized keys so you don't have to type the password every time:

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
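
If you don't already have a key pair under ~/.ssh, generate one first. A minimal sketch, assuming an RSA key with an empty passphrase is acceptable for local use:

# Generate an RSA key pair with an empty passphrase (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Restrict permissions on the authorized keys file so sshd will accept it
chmod 600 ~/.ssh/authorized_keys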

Homebrew

Use Homebrew to install Hadoop; it sets a number of system parameters by default, saving you from editing them by hand. See the Homebrew website.

Run the install command:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

If the installation fails, the uninstall command is:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall)"

If /Library/Caches/ has insufficient permissions, you will see errors like:

==> Cleaning up /Library/Caches/Homebrew...
==> Migrating /Library/Caches/Homebrew to /Users/wangchenlong/Library/Caches/Homebrew...
==> Deleting /Library/Caches/Homebrew...
Warning: Failed to delete /Library/Caches/Homebrew.

Change the ownership of the folders to the current user:

sudo chown -R $USER /usr/local/*
sudo chown -R $USER /Users/wangchenlong/Library/*

Installation succeeded:

?  ~ brew --version
Homebrew 1.3.1
Homebrew/homebrew-core (git revision 1278; last commit 2017-09-02)

Changing these user permissions broke pip's Python interpreter, so pip had to be reinstalled. The error:

pip installation /usr/local/opt/python/bin/python2.7: 
bad interpreter: No such file or directory

Fix pip by reinstalling it:

curl https://bootstrap.pypa.io/ez_setup.py -o - | sudo python
sudo easy_install pip

Installing Hadoop

Install Hadoop with brew:

?  ~ brew install hadoop
==> Downloading https://www.apache.org/dyn/closer.cgi?path=hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz
==> Best Mirror http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz
######################################################################## 100.0%
==> Caveats
In Hadoop's config file:
  /usr/local/opt/hadoop/libexec/etc/hadoop/hadoop-env.sh,
  /usr/local/opt/hadoop/libexec/etc/hadoop/mapred-env.sh and
  /usr/local/opt/hadoop/libexec/etc/hadoop/yarn-env.sh
$JAVA_HOME has been set to be the output of:
  /usr/libexec/java_home
==> Summary
🍺  /usr/local/Cellar/hadoop/2.8.1: 25,233 files, 2.1GB, built in 1 minute 1 second

Hadoop is installed at: /usr/local/Cellar/hadoop/2.8.1

Show the owners of the directories and make sure they belong to the current user:

ls -lad /usr/local /usr/local/Cellar

For convenience, set Hadoop's environment variable HADOOP_HOME. My shell is oh-my-zsh, whose default configuration file is .zshrc; just add the environment variable at the end of the file:

#Hadoop Settings
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.8.1/libexec
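
Homebrew usually links the hadoop, hdfs, and yarn wrappers into /usr/local/bin, but if they are not found on your PATH you can optionally append Hadoop's bin and sbin directories as well (an optional addition, not part of the original setup):

# Optional: make the hadoop/hdfs/yarn commands and the sbin scripts resolvable from anywhere
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin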

Test it:

?  ~ echo $HADOOP_HOME
/usr/local/Cellar/hadoop/2.8.1/libexec
?  ~

Run sudo vi /etc/hosts and add an address mapping for this machine:

127.0.0.1 bd01.wangchenlong.org
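
A quick way to verify that the mapping works (a simple check, assuming the entry was saved):

# Should resolve to 127.0.0.1 and receive a reply
ping -c 1 bd01.wangchenlong.org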

HDFS Configuration

Note that after a reboot all Hadoop services are stopped and must be started again.

Enter Hadoop's configuration directory:

?  ~ cd ${HADOOP_HOME}/etc/hadoop
?  hadoop git:(master) ? pwd
/usr/local/Cellar/hadoop/2.8.1/libexec/etc/hadoop
?  hadoop git:(master) ? ls
capacity-scheduler.xml     kms-env.sh
configuration.xsl          kms-log4j.properties
container-executor.cfg     kms-site.xml
core-site.xml              log4j.properties
hadoop-env.sh              mapred-env.sh
hadoop-metrics.properties  mapred-queues.xml.template
hadoop-metrics2.properties mapred-site.xml
hadoop-policy.xml          mapred-site.xml.template
hdfs-site.xml              slaves
httpfs-env.sh              ssl-client.xml.example
httpfs-log4j.properties    ssl-server.xml.example
httpfs-signature.secret    yarn-env.sh
httpfs-site.xml            yarn-site.xml
kms-acls.xml
?  hadoop git:(master) ?

To configure HDFS, edit the core-site.xml file:

?  hadoop git:(master) ? vi core-site.xml
?  hadoop git:(master) ? open core-site.xml

Add two properties: hadoop.tmp.dir is the base directory for HDFS's temporary files, and fs.default.name specifies the URI (host and port) used to access HDFS. If the temporary files lived under /tmp they would be wiped on every reboot, so create a permanent directory, ./Hadoop/opt/data/tmp, with mkdir ahead of time (see the command after the configuration below).

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <!-- 永久存儲(chǔ)的tmp文件. -->
        <value>/Users/wangchenlong/Hadoop/opt/data/tmp/hadoop-${user.name}</value>
        <description>A base for other temporary directories.</description>
    </property> 
    <property>
        <name>fs.default.name</name>
        <value>hdfs://bd01.wangchenlong.org:8020</value>
    </property>
</configuration>
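
For example, the permanent temporary directory can be created beforehand like this (a minimal sketch; the path mirrors the hadoop.tmp.dir value above, and Hadoop appends the hadoop-${user.name} part itself):

# Create the base directory referenced by hadoop.tmp.dir
mkdir -p /Users/wangchenlong/Hadoop/opt/data/tmp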

Edit hdfs-site.xml. dfs.replication is the number of copies HDFS keeps of each block; on a single machine, one copy is enough.

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Format HDFS:

hdfs namenode -format

If a dfs folder has been created under the temporary directory ./Hadoop/opt/data/tmp/hadoop-wangchenlong/, the format succeeded. Check the dfs/name/current folder:

-rw-r--r--  1 wangchenlong  staff  218 Sep  3 14:36 VERSION
-rw-r--r--  1 wangchenlong  staff  329 Sep  3 14:36 fsimage_0000000000000000000
-rw-r--r--  1 wangchenlong  staff   62 Sep  3 14:36 fsimage_0000000000000000000.md5
-rw-r--r--  1 wangchenlong  staff    2 Sep  3 14:36 seen_txid

fsimage is the persisted image of the filesystem metadata, fsimage.md5 is its checksum, and seen_txid records the latest transaction ID.

In VERSION, namespaceID is the NameNode's unique ID, and clusterID is the cluster ID; the NameNode and DataNodes must share the same clusterID to belong to the same cluster.

namespaceID=742879587
clusterID=CID-679809f6-9f1f-4d6c-a35f-ac519ad543c2
cTime=1504420584572
storageType=NAME_NODE
blockpoolID=BP-1330604992-169.254.59.178-1504420584572
layoutVersion=-63

JPS (Java Virtual Machine Process Status Tool) is a JDK command that lists the PIDs of all running Java processes; it can also be used to check Hadoop's processes.

?  ~ jps
767 Jps
?  ~

Hadoop's control scripts all live in sbin. Use hadoop-daemon.sh to start the NameNode:

?  ~ cd ${HADOOP_HOME}/sbin
?  sbin git:(master) ? ls
distribute-exclude.sh   refresh-namenodes.sh    stop-all.sh
hadoop-daemon.sh        slaves.sh               stop-balancer.sh
hadoop-daemons.sh       start-all.sh            stop-dfs.sh
hdfs-config.sh          start-balancer.sh       stop-secure-dns.sh
httpfs.sh               start-dfs.sh            stop-yarn.sh
kms.sh                  start-secure-dns.sh     yarn-daemon.sh
mr-jobhistory-daemon.sh start-yarn.sh           yarn-daemons.sh
?  sbin git:(master) ? ./hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/Cellar/hadoop/2.8.1/libexec/logs/hadoop-wangchenlong-namenode-wangchenlong.local.out
?  sbin git:(master) ? jps
840 NameNode
907 Jps
?  sbin git:(master) ?

Start the DataNode. If the DataNode fails to start, the usual cause is that the NameNode has been formatted more than once, which leaves the namespaceID in VERSION inconsistent; delete the hadoop-${user.name} folder and format again.

?  sbin git:(master) ? ./hadoop-daemon.sh start datanode
starting datanode, logging to /usr/local/Cellar/hadoop/2.8.1/libexec/logs/hadoop-wangchenlong-datanode-wangchenlong.local.out
?  sbin git:(master) ? jps
840 NameNode
984 Jps

Start the SecondaryNameNode:

?  sbin git:(master) ? ./hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /usr/local/Cellar/hadoop/2.8.1/libexec/logs/hadoop-wangchenlong-secondarynamenode-wangchenlong.local.out
?  sbin git:(master) ? jps
1105 SecondaryNameNode
1139 Jps
840 NameNode

In the end, the NameNode, DataNode, and SecondaryNameNode processes should all be present:

?  sbin git:(master) ? jps
1105 SecondaryNameNode
2484 DataNode
840 NameNode
2522 Jps

On HDFS, create a folder demo-test and upload an arbitrary file, wcl.txt:

?  sbin git:(master) ? hdfs dfs -mkdir /demo-test
?  sbin git:(master) ? hdfs dfs -ls /
Found 1 items
drwxr-xr-x   - wangchenlong supergroup          0 2017-09-03 15:19 /demo-test
?  ~ hdfs dfs -put wcl.txt /demo-test
?  ~ hdfs dfs -ls /demo-test
Found 1 items
-rw-r--r--   1 wangchenlong supergroup         23 2017-09-03 15:21 /demo-test/wcl.txt
?  ~
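
To confirm that the file's contents actually landed on HDFS, you can read it back (a quick check, assuming wcl.txt is a small text file):

# Print the uploaded file's contents from HDFS
hdfs dfs -cat /demo-test/wcl.txt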

Commands to shut down HDFS:

./hadoop-daemon.sh stop namenode
./hadoop-daemon.sh stop secondarynamenode
./hadoop-daemon.sh stop datanode

Commands to start HDFS:

cd $HADOOP_HOME/sbin
./hadoop-daemon.sh start namenode
./hadoop-daemon.sh start secondarynamenode
./hadoop-daemon.sh start datanode
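
Alternatively, the sbin listing above also contains start-dfs.sh and stop-dfs.sh, which start or stop the NameNode, SecondaryNameNode, and DataNode in one go (a convenience not used in this walkthrough):

# Start all HDFS daemons at once
./start-dfs.sh
# Stop all HDFS daemons at once
./stop-dfs.sh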

Configuring Yarn

Enter the configuration directory and copy the MapReduce template:

cd ${HADOOP_HOME}/etc/hadoop
cp mapred-site.xml.template mapred-site.xml

Edit mapred-site.xml and set MapReduce's framework to Yarn:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Configure Yarn's properties in yarn-site.xml. yarn.nodemanager.aux-services configures the NodeManager's auxiliary service; setting it to mapreduce_shuffle lets the NodeManager serve MapReduce's shuffle phase. yarn.resourcemanager.hostname is the hostname of the ResourceManager, set to this machine's address.

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>bd01.wangchenlong.org</value>
    </property>
</configuration>

Start the NodeManager and ResourceManager: enter the sbin directory, run the yarn-daemon.sh script, and finally check with jps that they started.

?  sbin git:(master) ? ./yarn-daemon.sh start nodemanager
starting nodemanager, logging to /usr/local/Cellar/hadoop/2.8.1/libexec/logs/yarn-wangchenlong-nodemanager-wangchenlong.local.out
?  sbin git:(master) ? ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /usr/local/Cellar/hadoop/2.8.1/libexec/logs/yarn-wangchenlong-resourcemanager-wangchenlong.local.out
?  sbin git:(master) ? jps
3028 SecondaryNameNode
2966 DataNode
3736 ResourceManager
2905 NameNode
3769 Jps
3661 NodeManager
?  sbin git:(master) ?

Yarn's web UI listens on port 8088; visit http://bd01.wangchenlong.org:8088/


Running an MR Job

MR stores its data on HDFS and runs on the Yarn framework. Let's run an MR job: word count.

In ./Hadoop/opt/data, create a text file wc.input:

hadoop mapreduce hive
hbase spark storm
sqoop hadoop hive
spark hadoop

Upload the file to the demo-test folder on HDFS:

?  data hdfs dfs -put wc.input /demo-test
?  data hdfs dfs -ls /demo-test
Found 2 items
-rw-r--r--   1 wangchenlong supergroup         71 2017-09-03 16:05 /demo-test/wc.input
-rw-r--r--   1 wangchenlong supergroup         23 2017-09-03 15:21 /demo-test/wcl.txt
?  data

Enter the examples folder under Hadoop's share directory:

?  data cd ${HADOOP_HOME}/share/hadoop/mapreduce
?  mapreduce git:(master) ? ls
hadoop-mapreduce-client-app-2.8.1.jar             hadoop-mapreduce-client-jobclient-2.8.1-tests.jar lib
hadoop-mapreduce-client-common-2.8.1.jar          hadoop-mapreduce-client-jobclient-2.8.1.jar       lib-examples
hadoop-mapreduce-client-core-2.8.1.jar            hadoop-mapreduce-client-shuffle-2.8.1.jar         sources
hadoop-mapreduce-client-hs-2.8.1.jar              hadoop-mapreduce-examples-2.8.1.jar
hadoop-mapreduce-client-hs-plugins-2.8.1.jar      jdiff
?  mapreduce git:(master) ?

Run the MR job:

yarn jar hadoop-mapreduce-examples-2.8.1.jar wordcount /demo-test/wc.input /demo-output/
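
Note that MapReduce refuses to run if the output directory already exists, so remove /demo-output before re-running the job (a usage note, not part of the original walkthrough):

# Delete the previous output directory before re-running the job
hdfs dfs -rm -r /demo-output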

Check the output:

?  mapreduce git:(master) ? hdfs dfs -cat /demo-output/part-r-00000
17/09/03 16:12:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop  3
hbase   1
hive    2
mapreduce   1
spark   2
sqoop   1
storm   1
?  mapreduce git:(master) ?

Start Yarn's history server:

cd ${HADOOP_HOME}/sbin
./mr-jobhistory-daemon.sh start historyserver
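
With the history server running, finished jobs can be browsed in its web UI, which listens on port 19888 by default (assuming the default mapreduce.jobhistory.webapp.address):

# Open the JobHistory web UI (default port 19888)
open http://bd01.wangchenlong.org:19888/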

Stop commands summary:

./mr-jobhistory-daemon.sh stop historyserver
./yarn-daemon.sh stop nodemanager
./yarn-daemon.sh stop resourcemanager

Start commands summary:

./yarn-daemon.sh start nodemanager
./yarn-daemon.sh start resourcemanager
./mr-jobhistory-daemon.sh start historyserver
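
As with HDFS, sbin also provides start-yarn.sh and stop-yarn.sh, which handle the ResourceManager and NodeManager together (an alternative to the per-daemon commands above):

# Start the ResourceManager and NodeManager together
./start-yarn.sh
# Stop them together
./stop-yarn.sh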

In the MR job's execution log, the link after "The url to track the job" may not be reachable; replace localhost with the corresponding host to open it.

INFO client.RMProxy: Connecting to ResourceManager at bd01.wangchenlong.org/127.0.0.1:8032
INFO input.FileInputFormat: Total input files to process : 1
INFO mapreduce.JobSubmitter: number of splits:1
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1504434568655_0001
INFO impl.YarnClientImpl: Submitted application application_1504434568655_0001
INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1504434568655_0001/
INFO mapreduce.Job: Running job: job_1504434568655_0001
17/09/03 18:30:13 INFO mapreduce.Job: Job job_1504434568655_0001 running in uber mode : false
17/09/03 18:30:13 INFO mapreduce.Job:  map 0% reduce 0%
17/09/03 18:30:32 INFO mapreduce.Job:  map 100% reduce 0%
17/09/03 18:30:52 INFO mapreduce.Job:  map 100% reduce 100%
17/09/03 18:30:53 INFO mapreduce.Job: Job job_1504434568655_0001 completed successfully

OK, that's all! Enjoy it!
