1. Install Shell
Reference: https://jingyan.baidu.com/article/19192ad8d2bdcde53e5707da.html
2. Install Xftp
Reference: https://jingyan.baidu.com/article/624e74590fea4f34e9ba5a74.html
3. Install the JDK
JDK 8 is required; JDK 12 and other newer versions will not work.
JDK download page: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

cp jdk-8u201-linux-x64.rpm /opt/ # copy the JDK package to /opt
cd /opt/ # change into /opt
rpm -ivh jdk-8u201-linux-x64.rpm # install the JDK
# edit the hosts file
vi /etc/hosts
# add this entry
192.168.192.129 bigdata
# Esc : wq to save and exit
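To confirm the entry is picked up, the mapping can be checked without touching DNS. A self-contained sketch (a temp file stands in for /etc/hosts; IP and hostname are the ones used in this guide):

```shell
# look up a hostname in a hosts-format file (temp copy stands in for /etc/hosts)
f=$(mktemp)
printf '127.0.0.1 localhost\n192.168.192.129 bigdata\n' > "$f"
awk '$2 == "bigdata" { print $1 }' "$f"   # prints 192.168.192.129
rm -f "$f"
```

On the real machine, `getent hosts bigdata` performs the same lookup through the system resolver.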

# edit the profile
vi /etc/profile
# append the following at the end of the file
JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
HADOOP_HOME=/opt/hadoop-3.1.2
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
export JAVA_HOME HADOOP_HOME PATH
# Esc : wq to save and exit

# reload the profile
source /etc/profile
# verify the configuration (prints nothing if it is not set)
echo $JAVA_HOME
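Case matters in these lines: the Java binaries live in `$JAVA_HOME/bin`, lowercase. A self-contained check of how the profile lines assemble PATH (paths are the ones assumed by this guide; runs in any POSIX shell):

```shell
# assemble PATH exactly as the profile lines above do, then verify the result
JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
HADOOP_HOME=/opt/hadoop-3.1.2
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
echo "$JAVA_HOME"
case "$PATH" in
  "$JAVA_HOME/bin:$HADOOP_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH wrong" ;;
esac
```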

4. Configure passwordless SSH login
cd
ssh-keygen -t rsa
# generates the public and private key pair
# press Enter three times; do not set a passphrase
# change into the .ssh directory
cd .ssh/
cat id_rsa.pub >> authorized_keys
chmod 644 authorized_keys
# verify that it works
ssh bigdata
# type yes to confirm
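sshd is picky about mode bits: if `ssh bigdata` still prompts for a password, permissions on ~/.ssh are the usual suspect. A self-contained sketch of the expected modes (a temp directory stands in for ~/.ssh):

```shell
# demonstrate the permission bits sshd expects on ~/.ssh and authorized_keys
d=$(mktemp -d)
touch "$d/authorized_keys"
chmod 700 "$d"                   # the .ssh directory: owner-only
chmod 644 "$d/authorized_keys"   # world-readable, writable only by owner
stat -c '%a' "$d"                # prints 700
stat -c '%a' "$d/authorized_keys"  # prints 644
rm -rf "$d"
```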
5. Install and configure Hadoop
cd /opt/
tar zxf hadoop-3.1.2.tar.gz
cd /opt/hadoop-3.1.2/etc/hadoop/
vi core-site.xml
# insert between <configuration> and </configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://bigdata:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop-3.1.2/current/tmp</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>4320</value>
</property>
# Esc : wq to save and exit
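A typo in a property name fails silently, so it is worth reading a value back out of the file after editing. A self-contained sketch (a here-document stands in for /opt/hadoop-3.1.2/etc/hadoop/core-site.xml):

```shell
# pull a property value back out of a *-site.xml to double-check the edit
f=$(mktemp)
cat > "$f" <<'EOF'
<property>
<name>fs.defaultFS</name>
<value>hdfs://bigdata:9000</value>
</property>
EOF
# print the line after the matching <name>, stripped of its tags
grep -A1 '<name>fs.defaultFS</name>' "$f" | tail -1 | sed 's/<[^>]*>//g'
rm -f "$f"
```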

vi hdfs-site.xml
# insert between <configuration> and </configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoop-3.1.2/current/namenode/data</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop-3.1.2/current/datanode/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions.superusergroup</name>
<value>staff</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.http.address</name>
<value>0.0.0.0:50070</value>
</property>
# Esc : wq to save and exit
vi yarn-site.xml
# insert between <configuration> and </configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>bigdata</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>bigdata:18040</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>bigdata:18030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>bigdata:18025</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>bigdata:18141</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>bigdata:18088</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>86400</value>
</property>
<property>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>86400</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/tmp/logs</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
</property>
# Esc : wq to save and exit
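Long hand-edited XML files are easy to break with an unclosed <property> block, and the daemons will refuse to start on malformed XML. A quick tag count catches the mistake (a here-document stands in for the real file under /opt/hadoop-3.1.2/etc/hadoop/):

```shell
# sanity check: every <property> needs a matching </property>
f=$(mktemp)
cat > "$f" <<'EOF'
<property>
<name>yarn.resourcemanager.hostname</name>
<value>bigdata</value>
</property>
EOF
open=$(grep -c '<property>' "$f")
close=$(grep -c '</property>' "$f")
if [ "$open" -eq "$close" ]; then
  echo balanced
else
  echo "unbalanced: $open open, $close close"
fi
rm -f "$f"
```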
vi mapred-site.xml
# insert between <configuration> and </configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>bigdata:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>bigdata:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>bigdata:19888</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/jobhistory/done</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/jobhistory/done_intermediate</value>
</property>
<property>
<name>mapreduce.job.ubertask.enable</name>
<value>true</value>
</property>
<property>
<name>mapred.job.tracker.http.address</name>
<value>0.0.0.0:50030</value>
</property>
<property>
<name>mapred.task.tracker.http.address</name>
<value>0.0.0.0:50060</value>
</property>
# Esc : wq to save and exit
vi workers # in Hadoop 3.x this file is named workers (it was slaves in Hadoop 2.x)
# add the hostname of this node
bigdata
# Esc : wq to save and exit
vi hadoop-env.sh
#export JAVA_HOME=
# change it to
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64/
# Esc : wq to save and exit
6. Format HDFS and start Hadoop
cd
hdfs namenode -format
/opt/hadoop-3.1.2/sbin/start-all.sh
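The daemons take a few seconds to come up after start-all.sh, so an immediate check can fail spuriously. A small polling helper avoids that; the demo below uses `true`/`false` as stand-in commands, and the `jps | grep -q NameNode` shown in the comment is the kind of check you would poll in practice:

```shell
# retry a command up to N times, one second apart; succeeds as soon as the command does
wait_for() {
  cmd=$1
  tries=$2
  i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$cmd"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# demo with stand-ins; in practice: wait_for 'jps | grep -q NameNode' 30
wait_for true 3 && echo "came up"
wait_for false 2 || echo "gave up"
```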
7. Turn off the firewall
yum -y install iptables-services
service iptables stop # stop the firewall for this session
systemctl disable firewalld.service # disable the firewall permanently; takes effect after a reboot, optional
service iptables status # check the firewall status

8. Check Hadoop's status
jps
# if the following processes are listed, the cluster is up:
# NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, Jps

# open in a browser to verify
192.168.192.129:50070

9. FAQ
1. The browser cannot open port 50070
Reference: https://blog.csdn.net/zxz547388910/article/details/86468925
If 50070 still cannot be opened:
① Delete /opt/hadoop-3.1.2/current/tmp # destructive, use with caution!
② hdfs namenode -format # reformats HDFS, use with caution!
③ hadoop-daemon.sh start namenode # start the NameNode
④ netstat -ntlp # check whether anything is listening on 50070
⑤ start-all.sh
2. Undefined daemon-user errors
① ERROR: Attempting to launch hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting launch.
② ERROR: Attempting to launch yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting launch.
ERROR: Attempting to launch yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting launch.
Reference: https://blog.csdn.net/u013725455/article/details/70147331
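These errors mean Hadoop 3.x refuses to launch daemons as root unless the daemon users are declared. One common fix, assuming you really do run everything as root as this guide does, is to append the following to /opt/hadoop-3.1.2/etc/hadoop/hadoop-env.sh (adding them at the top of sbin/start-dfs.sh and sbin/start-yarn.sh also works):

```shell
# tell Hadoop 3.x which user runs each daemon (root here, matching this guide)
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```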
③ Misnamed configuration variable
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER
Reference: https://blog.csdn.net/weixin_38763887/article/details/79157652
④ Error: start-all.sh: command not found
Run it as: sh start-all.sh or ./start-all.sh
⑤ Common problems
Reference: http://www.cnblogs.com/dimg/p/9790448.html
⑥ Other notes:
To change the hostname: sudo hostnamectl set-hostname <newhostname>