Source repository: https://github.com/bigbeef
Personal blog: http://blog.cppba.com
1. Install the JDK
2. Configure passwordless SSH login
1. ssh-keygen -t rsa //press Enter through every prompt
//copy id_rsa.pub to the node machine (the node is this same machine here, so this step can be skipped)
2. scp ~/.ssh/id_rsa.pub root@127.0.0.1:~/.ssh
3. Switch to the node machine:
4. cd /root/.ssh
//generate authorized_keys
5. cat id_rsa.pub >> authorized_keys
//scp authorized_keys back to the Master (same machine here, so this step can be skipped)
6. scp ~/.ssh/authorized_keys root@127.0.0.1:~/.ssh
//then, on every machine, set the .ssh/ directory permissions to 700 and authorized_keys to 600
7.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
8. ssh root@127.0.0.1 //verify: login should succeed without a password prompt
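On a single machine the whole sequence collapses to generate, append, and chmod. As a sanity check of those mechanics, this sketch runs the same steps against a throwaway directory (the paths are illustrative; on a real node you would operate on ~/.ssh):

```shell
# Run the generate/append/chmod flow in a scratch directory
# (illustrative paths; on a real node this is ~/.ssh).
tmp=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$tmp/id_rsa" -q      # -N "" = empty passphrase, no prompts
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"  # authorize our own public key
chmod 700 "$tmp"
chmod 600 "$tmp/authorized_keys"
stat -c '%a %n' "$tmp/authorized_keys"           # should report mode 600
rm -rf "$tmp"
```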
3. Install Hadoop
1. Download the Hadoop tarball from the official site (hadoop-2.7.3.tar.gz here)
2. Extract it
tar -zxvf hadoop-2.7.3.tar.gz
3. Edit the Hadoop configuration files
cd /opt/hadoop-2.7.3/etc/hadoop
(1). Configure hadoop-env.sh
# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/jdk1.8.0_121 (substitute your own JDK path)
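If you are unsure where your JDK lives, one common trick (assuming `java` is on the PATH and GNU `readlink -f` is available, as on most Linux distributions) is to resolve the symlink chain behind the `java` binary:

```shell
# Resolve the real location of the java binary, then strip the
# trailing /bin/java to get a value suitable for JAVA_HOME.
java_bin=$(readlink -f "$(command -v java)")
dirname "$(dirname "$java_bin")"
```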
(2). Edit core-site.xml
vi core-site.xml
<configuration>
<!-- address of the HDFS NameNode (the master) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
<description>The HDFS URI: filesystem://namenode-host:port</description>
</property>
<!-- directory where Hadoop stores its runtime data; despite the property name, this is not disposable temp data -->
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/tmp</value>
<description>local Hadoop working directory on the NameNode</description>
</property>
</configuration>
(3). Edit hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>replication factor; the default is 3, and it should not exceed the number of DataNodes (1 here for a single node)</description>
</property>
</configuration>
(4). Edit mapred-site.xml
mv mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
<!-- run MapReduce on the YARN framework -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
(5). Configure yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>127.0.0.1</value>
</property>
<!-- the auxiliary service NodeManagers use to serve shuffle data to MapReduce -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
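A typo in any of these XML files makes the daemons die at startup with a parse error, so it is worth checking well-formedness before starting anything. A minimal sketch, assuming python3 is installed and the install path from above:

```shell
# Parse each *-site.xml under the Hadoop conf dir; minidom raises on bad XML.
for f in /opt/hadoop-2.7.3/etc/hadoop/*-site.xml; do
  if python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$f" 2>/dev/null; then
    echo "OK:  $f"
  else
    echo "BAD: $f"
  fi
done
```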
4. Start Hadoop
cd /opt/hadoop-2.7.3
(1) Format the NameNode
bin/hdfs namenode -format
(2) Start the NameNode and DataNode daemons
sbin/start-dfs.sh
(3) Start the ResourceManager and NodeManager daemons
sbin/start-yarn.sh
(4) Check the daemons with the jps command; you should see NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager

4. Verify the web UIs
127.0.0.1:50070
127.0.0.1:8088
If both pages load, the setup succeeded.