- Prepare a new virtual machine with its network configured in NAT mode
- Configure a static IP
See the note on configuring a static IP on CentOS 7
- Change the hostname
See the note on changing the hostname on Linux
- Install the JDK
See the note on installing Java on CentOS 7
- Disable the firewall
See the note on disabling the firewall on CentOS 7
- Download the Hadoop tarball
Here we use hadoop-3.2.1.tar.gz
Place it under the /tmp directory
- Extract it into the /usr/local/hadoop/ directory
mkdir /usr/local/hadoop
tar -zxvf /tmp/hadoop-3.2.1.tar.gz -C /usr/local/hadoop
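One detail worth knowing about the two commands above: `tar -C` extracts into a target directory that must already exist, which is why the `mkdir` comes first. A minimal sketch of this behavior, using a throwaway dummy tarball rather than the real distribution:

```shell
set -e
tmp=$(mktemp -d)
# build a tiny stand-in tarball with the same layout as hadoop-3.2.1.tar.gz
mkdir -p "$tmp/hadoop-3.2.1/etc/hadoop"
echo demo > "$tmp/hadoop-3.2.1/etc/hadoop/core-site.xml"
tar -czf "$tmp/hadoop.tar.gz" -C "$tmp" hadoop-3.2.1
# the target of -C must exist before extracting (hence the mkdir step above)
mkdir -p "$tmp/usr/local/hadoop"
tar -zxf "$tmp/hadoop.tar.gz" -C "$tmp/usr/local/hadoop"
extracted=$(ls "$tmp/usr/local/hadoop/hadoop-3.2.1/etc/hadoop")
echo "$extracted"
```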
- Modify the configuration files
cd /usr/local/hadoop/hadoop-3.2.1/etc/hadoop
- Modify core-site.xml
vi core-site.xml
Set it to the following:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node01:9000</value>
<!-- node01 is this machine's hostname -->
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/hadoop/pseudo</value>
</property>
</configuration>
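Before moving on, the fs.defaultFS value can be sanity-checked from the shell. A rough sketch against a temp copy of the config, using plain grep rather than Hadoop's own tooling:

```shell
conf_dir=$(mktemp -d)
# temp copy of the core-site.xml written above (node01 is the assumed hostname)
cat > "$conf_dir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/pseudo</value>
  </property>
</configuration>
EOF
# pull the value that follows the fs.defaultFS property name
defaultfs=$(grep -A1 'fs.defaultFS' "$conf_dir/core-site.xml" | grep -o 'hdfs://[^<]*')
echo "$defaultfs"
```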
- Modify hdfs-site.xml
vi hdfs-site.xml
Set it to the following:
<configuration>
<property>
<name>dfs.replication</name>
<!-- replication factor -->
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<!-- Secondary NameNode address -->
<value>node01:9868</value>
</property>
</configuration>
- Configure the worker node list
vi workers
Change localhost to node01
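The same one-line edit can be done non-interactively with sed instead of vi; a sketch against a temp stand-in for etc/hadoop/workers (node01 is the assumed hostname from earlier):

```shell
w=$(mktemp)
echo localhost > "$w"              # default contents of the workers file
sed -i 's/^localhost$/node01/' "$w"
worker=$(head -n1 "$w")
echo "$worker"
```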
- Configure JAVA_HOME
vi hadoop-env.sh
Add the following line:
export JAVA_HOME=/usr/local/java/jdk1.8.0_251
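Appending by hand works, but the append can be made idempotent so rerunning the setup does not duplicate the line. A sketch using a temp file as a stand-in for hadoop-env.sh (the JDK path is the one used above):

```shell
env_file=$(mktemp)   # stand-in for etc/hadoop/hadoop-env.sh
line='export JAVA_HOME=/usr/local/java/jdk1.8.0_251'
# -qxF: quiet, whole-line, fixed-string match; append only when missing
grep -qxF "$line" "$env_file" || echo "$line" >> "$env_file"
grep -qxF "$line" "$env_file" || echo "$line" >> "$env_file"   # second run is a no-op
count=$(grep -cxF "$line" "$env_file")
echo "$count"
```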
- Modify several scripts under the sbin directory so that Hadoop can be started as the root user
cd /usr/local/hadoop/hadoop-3.2.1/sbin
Edit start-dfs.sh and stop-dfs.sh and add the following parameters:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Edit start-yarn.sh and stop-yarn.sh and add the following parameters:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
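If you prefer to script these edits, the variables can be inserted right after the shebang line. A sketch using a placeholder file standing in for start-dfs.sh (the real script's contents differ, of course):

```shell
script=$(mktemp)
printf '#!/usr/bin/env bash\necho start-dfs placeholder\n' > "$script"   # stand-in script
vars='HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root'
# keep line 1 (the shebang), then the variables, then the rest of the script
{ head -n1 "$script"; printf '%s\n' "$vars"; tail -n +2 "$script"; } > "$script.new"
mv "$script.new" "$script"
nvars=$(grep -c '_USER=' "$script")
echo "$nvars"
```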
- Set up passwordless SSH login
First run ssh localhost and check whether a password is required; if not, skip this step
If a password is required, configure it as follows:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
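The three commands above can be rehearsed against a throwaway directory instead of ~/.ssh, which is handy for checking that the key files and the 0600 permission come out right (assumes ssh-keygen is installed):

```shell
sshdir=$(mktemp -d)                 # throwaway stand-in for ~/.ssh
ssh-keygen -t rsa -P '' -f "$sshdir/id_rsa" -q
cat "$sshdir/id_rsa.pub" >> "$sshdir/authorized_keys"
chmod 0600 "$sshdir/authorized_keys"
perms=$(stat -c '%a' "$sshdir/authorized_keys")
echo "$perms"
```

sshd refuses keys in an authorized_keys file that is writable by others, which is why the chmod matters.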
- Format the filesystem
cd /usr/local/hadoop/hadoop-3.2.1
bin/hdfs namenode -format
- Start Hadoop
cd /usr/local/hadoop/hadoop-3.2.1
sbin/start-dfs.sh
Once it is up, verify with jps:
[root@hadoop01 hadoop-3.2.1]# jps
11459 Jps
10981 NameNode
11144 DataNode
11343 SecondaryNameNode
- Open the NameNode web UI in a browser:
http://<this machine's IP>:9870/
- Make the HDFS directories required to execute MapReduce jobs:
bin/hdfs dfs -mkdir -p /user/root
Check that it was created successfully:
bin/hdfs dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2020-04-25 17:10 /user
- Copy the input files into the distributed filesystem:
bin/hdfs dfs -mkdir input
bin/hdfs dfs -put etc/hadoop/*.xml input
- Run some of the examples provided:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar grep input output 'dfs[a-z.]+'
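The example job greps for the regex dfs[a-z.]+ across the uploaded XML files and counts the matches. The same regex can be tried locally with plain grep to get a feel for what the job will find (the sample file contents below are made up for illustration):

```shell
d=$(mktemp -d)
# fabricated sample standing in for the etc/hadoop/*.xml files put into HDFS
printf '<name>dfs.replication</name>\n<name>dfs.namenode.rpc-address</name>\n' > "$d/hdfs-site.xml"
# -oE prints each match of the extended regex on its own line
matches=$(grep -oE 'dfs[a-z.]+' "$d"/*.xml | wc -l)
echo "$matches"
```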
- Examine the output files:
Copy the output files from the distributed filesystem to the local filesystem and examine them:
bin/hdfs dfs -get output output
cat output/*
or
View the output files on the distributed filesystem:
bin/hdfs dfs -cat output/*
- Stop Hadoop
sbin/stop-dfs.sh
- Configuring YARN on a single node
Modify the following configuration files:
etc/hadoop/mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>
etc/hadoop/yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
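As with core-site.xml earlier, the critical value here can be grepped out as a quick sanity check; a sketch against a temp copy of yarn-site.xml:

```shell
d=$(mktemp -d)
# temp copy of the aux-services property from the yarn-site.xml above
cat > "$d/yarn-site.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF
# take the line after the property name and strip the leading <value> tag
aux=$(grep -A1 'aux-services' "$d/yarn-site.xml" | grep -o '<value>[^<]*' | cut -c8-)
echo "$aux"
```

Without the mapreduce_shuffle aux service, MapReduce jobs submitted to YARN cannot move map output to the reducers.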
- Start ResourceManager daemon and NodeManager daemon:
sbin/start-yarn.sh
- Open the ResourceManager web UI in a browser:
http://<this machine's IP>:8088/
- Run a MapReduce job.
- Stop YARN
sbin/stop-yarn.sh