Environment preparation: three nodes
bigdata01 192.168.182.100
bigdata02 192.168.182.101
bigdata03 192.168.182.102
Set the hostname on each machine (the hostname command takes effect immediately; editing /etc/hostname makes it persistent across reboots)
[root@bigdata01 ~]# hostname bigdata01
[root@bigdata01 ~]# vi /etc/hostname
bigdata01
Edit the hosts file on every machine
[root@bigdata01 ~]# vi /etc/hosts
192.168.182.100 bigdata01
192.168.182.101 bigdata02
192.168.182.102 bigdata03
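The three entries can be scripted and sanity-checked; the sketch below works on a temporary copy rather than the real /etc/hosts, so it is safe to run anywhere (the IPs are the ones assumed above).

```shell
# Sketch: append the three cluster entries to a hosts file -- a temp copy
# here, so the check does not touch the real /etc/hosts.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.182.100 bigdata01
192.168.182.101 bigdata02
192.168.182.102 bigdata03
EOF
# Confirm each hostname appears exactly once (prints 1 three times).
for h in bigdata01 bigdata02 bigdata03; do
  grep -c " $h$" "$HOSTS_FILE"
done
```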
Turn off the firewall, both for the current session (stop) and permanently (disable)
systemctl stop firewalld
systemctl disable firewalld
Passwordless SSH login
First run the following commands on bigdata01 to generate a key pair; the public key is then copied to the two worker nodes
[root@bigdata01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8J8RDun4bklmx9T45SRsKAu7FvP2HqtriYUqUqF1q4 root@bigdata01
The key's randomart image is:
+---[RSA 2048]----+
| o . |
| o o o . |
| o.. = o o |
| +o* o * o |
|..=.= B S = |
|.o.o o B = . |
|o.o . +.o . |
|.E.o.=...o |
| .o+=*.. |
+----[SHA256]-----+
[root@bigdata01 ~]# ll ~/.ssh/
total 12
-rw-------. 1 root root 1679 Apr 7 16:39 id_rsa
-rw-r--r--. 1 root root 396 Apr 7 16:39 id_rsa.pub
[root@bigdata01 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@bigdata01 ~]# scp ~/.ssh/authorized_keys bigdata02:~/
[root@bigdata01 ~]# scp ~/.ssh/authorized_keys bigdata03:~/
Then run the following on bigdata02 and bigdata03
[root@bigdata02 ~]# cat ~/authorized_keys >> ~/.ssh/authorized_keys
[root@bigdata03 ~]# cat ~/authorized_keys >> ~/.ssh/authorized_keys
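The copy and append steps can also be expressed as one loop over the worker hostnames. The sketch below only echoes the commands it would run, so it works without the cluster; on a real cluster, drop the echo (or use ssh-copy-id, which automates the same thing).

```shell
# Sketch: the same copy/append steps as a loop over the worker hostnames.
# Echo-only for illustration -- remove the echo to actually run them.
for host in bigdata02 bigdata03; do
  echo "scp ~/.ssh/authorized_keys $host:~/"
  echo "ssh $host 'cat ~/authorized_keys >> ~/.ssh/authorized_keys'"
done
```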
Time synchronization across cluster nodes
Install ntpdate
[root@bigdata01 ~]# yum install -y ntpdate
Confirm it runs correctly
[root@bigdata01 ~]# ntpdate -u ntp.sjtu.edu.cn
7 Apr 21:21:01 ntpdate[5447]: step time server 185.255.55.20 offset 6.252298 sec
Configure a cron job
[root@bigdata01 ~]# vi /etc/crontab
* * * * * root /usr/sbin/ntpdate -u ntp.sjtu.edu.cn
Repeat the steps above on bigdata02 and bigdata03
Install JDK 1.8
(omitted)
Install Hadoop
First upload the Hadoop tarball to the /data/soft directory and extract it
[root@bigdata01 soft]# tar -zxvf hadoop-3.2.0.tar.gz
Update the environment variables
[root@bigdata01 hadoop-3.2.0]# vi /etc/profile
.......
export JAVA_HOME=/data/soft/jdk1.8
export HADOOP_HOME=/data/soft/hadoop-3.2.0
export PATH=.:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$PATH
[root@bigdata01 hadoop-3.2.0]# source /etc/profile
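To confirm the variables took effect, a quick PATH check helps. The sketch below re-exports the values from above (rather than sourcing /etc/profile) so it is self-contained; the paths are the ones assumed in this section.

```shell
# Sketch: re-export the variables from this section, then confirm the
# Hadoop bin directory actually landed on PATH.
export JAVA_HOME=/data/soft/jdk1.8
export HADOOP_HOME=/data/soft/hadoop-3.2.0
export PATH=.:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$PATH
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop on PATH" ;;
  *) echo "hadoop missing from PATH" ;;
esac
```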
Change to the directory containing the configuration files
[root@bigdata01 hadoop-3.2.0]# cd etc/hadoop/
Edit the hadoop-env.sh file
[root@bigdata01 hadoop]# vi hadoop-env.sh
.......
export JAVA_HOME=/data/soft/jdk1.8
export HADOOP_LOG_DIR=/data/hadoop_repo/logs/hadoop
Edit the core-site.xml file
[root@bigdata01 hadoop]# vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop_repo</value>
  </property>
</configuration>
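A malformed XML file is a common cause of failed startups, so a quick well-formedness check is worthwhile. The sketch below parses a copy of the fragment above and prints fs.defaultFS back out; it assumes python3 is available on the node.

```shell
# Sketch: write the core-site.xml fragment to a temp file, parse it, and
# print the fs.defaultFS value (assumes python3 is installed).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop_repo</value>
  </property>
</configuration>
EOF
python3 - "$CONF" <<'EOF'
import sys
import xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
props = {p.findtext('name'): p.findtext('value') for p in root.findall('property')}
print(props['fs.defaultFS'])
EOF
```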
Edit the hdfs-site.xml file: set the HDFS replication factor to 2 (at most 2, since the cluster has only two worker nodes) and specify which node hosts the SecondaryNameNode process
[root@bigdata01 hadoop]# vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>bigdata01:50090</value>
  </property>
</configuration>
Edit mapred-site.xml to set the resource scheduling framework used by MapReduce
[root@bigdata01 hadoop]# vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Edit yarn-site.xml to set the auxiliary services YARN runs, the environment-variable whitelist, and the ResourceManager hostname
[root@bigdata01 hadoop]# vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>bigdata01</value>
  </property>
</configuration>
Edit the workers file and add the hostname of every worker node, one per line
[root@bigdata01 hadoop]# vi workers
bigdata02
bigdata03
Edit the start-dfs.sh and stop-dfs.sh scripts, adding the following lines near the top of each file (required when running the daemons as root)
[root@bigdata01 hadoop]# cd /data/soft/hadoop-3.2.0/sbin
[root@bigdata01 sbin]# vi start-dfs.sh
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
[root@bigdata01 sbin]# vi stop-dfs.sh
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Edit the start-yarn.sh and stop-yarn.sh scripts, adding the following lines near the top of each file
[root@bigdata01 sbin]# vi start-yarn.sh
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
[root@bigdata01 sbin]# vi stop-yarn.sh
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Copy the configured Hadoop package from bigdata01 to the other two worker nodes
[root@bigdata01 sbin]# cd /data/soft/
[root@bigdata01 soft]# scp -rq hadoop-3.2.0 bigdata02:/data/soft/
[root@bigdata01 soft]# scp -rq hadoop-3.2.0 bigdata03:/data/soft/
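The two scp commands can also be driven from the workers file, so the node list lives in one place. The sketch below is echo-only and uses a temporary stand-in for etc/hadoop/workers, so it runs anywhere; on a real cluster, point it at the actual file and drop the echo.

```shell
# Sketch: read worker hostnames from a file and emit one scp command per
# node. A temp copy stands in for the real etc/hadoop/workers file.
WORKERS=$(mktemp)
printf 'bigdata02\nbigdata03\n' > "$WORKERS"
while read -r host; do
  echo "scp -rq hadoop-3.2.0 $host:/data/soft/"
done < "$WORKERS"
```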
Format HDFS on the bigdata01 node
[root@bigdata01 soft]# cd /data/soft/hadoop-3.2.0
[root@bigdata01 hadoop-3.2.0]# bin/hdfs namenode -format
Start the cluster
Run the following command on the bigdata01 node
[root@bigdata01 hadoop-3.2.0]# sbin/start-all.sh
Starting namenodes on [bigdata01]
Last login: Tue Apr 7 21:03:21 CST 2020 from 192.168.182.1 on pts/2
Starting datanodes
Last login: Tue Apr 7 22:15:51 CST 2020 on pts/1
bigdata02: WARNING: /data/hadoop_repo/logs/hadoop does not exist. Creating.
bigdata03: WARNING: /data/hadoop_repo/logs/hadoop does not exist. Creating.
Starting secondary namenodes [bigdata01]
Last login: Tue Apr 7 22:15:53 CST 2020 on pts/1
Starting resourcemanager
Last login: Tue Apr 7 22:15:58 CST 2020 on pts/1
Starting nodemanagers
Last login: Tue Apr 7 22:16:04 CST 2020 on pts/1
Verify the cluster
Run the jps command on each of the three machines; the process lists should look like the following.
On the bigdata01 node
[root@bigdata01 hadoop-3.2.0]# jps
6128 NameNode
6621 ResourceManager
6382 SecondaryNameNode
On the bigdata02 node
[root@bigdata02 ~]# jps
2385 NodeManager
2276 DataNode
On the bigdata03 node
[root@bigdata03 ~]# jps
2326 NodeManager
2217 DataNode
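A small script can check a jps listing for the expected daemons instead of eyeballing it. The sketch below uses a hard-coded sample (the master's output above) so it can be demonstrated without a live cluster; on a real node, replace the sample with JPS_OUT=$(jps).

```shell
# Sketch: scan a jps listing for the daemons expected on the master node.
# The listing is a hard-coded sample from the transcript above.
JPS_OUT='6128 NameNode
6621 ResourceManager
6382 SecondaryNameNode'
for d in NameNode ResourceManager SecondaryNameNode; do
  echo "$JPS_OUT" | grep -q " $d$" && echo "$d running"
done
```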