1. Purpose
Notes from self-study on building a Spark cluster framework and running the accompanying experiments.
2. Prerequisites
- VMware 15 Pro
- CentOS 7
- JDK 1.8
- Hadoop 2.7.2
- SecureCRT version 8.5
- Scala 2.12.7
- Spark 2.3.1
- Zookeeper 3.4.10
- HBase 2.0.2
- Hive 2.3.4
三、安裝過(guò)程
3.1 在虛擬機(jī)中安裝CentOS7
3.1.1 虛擬機(jī)設(shè)置
打開(kāi)VMware15Pro,并創(chuàng)建虛擬機(jī)。
選擇典型安裝。
設(shè)定稍后安裝本地已下載好的Centos7系統(tǒng)。
3.1.2 安裝Linux系統(tǒng)
載入CentOS7安裝文件。
開(kāi)啟此虛擬機(jī),系統(tǒng)文件自動(dòng)導(dǎo)入。
CentOS7系統(tǒng)安裝設(shè)置。
考慮到默認(rèn)安裝軟件選擇是“最小安裝”,該方式安裝后需要手動(dòng)添加資源較多,將其更替為“GNOME桌面”。
用戶(hù)設(shè)置,為了避免后期hadoop集群環(huán)境搭建時(shí)候反復(fù)切換權(quán)限用戶(hù),可以選擇只建立root賬戶(hù)。
完成安裝。
3.2 JAVA環(huán)境
3.2.1 卸載Linux自帶的jdk
查看系統(tǒng)自帶的jdk
[root@master ~]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
查詢(xún)系統(tǒng)自帶的java文件,根據(jù)不同的系統(tǒng)版本,輸入rpm -qa | grep jdk或者rpm -qa | grep java
[root@master ~]# rpm -qa | grep jdk
java-1.7.0-openjdk-headless-1.7.0.171-2.6.13.2.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.161-2.b14.el7.x86_64
java-1.7.0-openjdk-1.7.0.171-2.6.13.2.el7.x86_64
java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64
copy-jdk-configs-3.3-2.el7.noarch
Remove every package except the noarch one, using rpm -e --nodeps <package-name>:
[root@master ~]# rpm -e --nodeps java-1.7.0-openjdk-headless-1.7.0.171-2.6.13.2.el7.x86_64
[root@master ~]# rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.161-2.b14.el7.x86_64
[root@master ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.171-2.6.13.2.el7.x86_64
[root@master ~]# rpm -e --nodeps java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64
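The four removals above can also be scripted. This dry-run sketch filters the rpm -qa output shown earlier and prints an rpm -e --nodeps command for every package except the noarch configuration package:

```shell
# Dry run: print a removal command for each non-noarch JDK package.
# The package list is copied from the rpm -qa output above.
printf '%s\n' \
  java-1.7.0-openjdk-headless-1.7.0.171-2.6.13.2.el7.x86_64 \
  java-1.8.0-openjdk-headless-1.8.0.161-2.b14.el7.x86_64 \
  java-1.7.0-openjdk-1.7.0.171-2.6.13.2.el7.x86_64 \
  java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64 \
  copy-jdk-configs-3.3-2.el7.noarch |
  grep -v noarch |
  while read -r pkg; do
    echo rpm -e --nodeps "$pkg"   # drop the echo to actually uninstall
  done
```

Remove the echo once the printed commands look right.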
查看是否已經(jīng)刪除完畢
[root@master ~]# java -version
bash: /usr/bin/java: No such file or directory
3.2.2 下載并安裝最新版本的jdk
jdk下載可分成兩種情況:
A.在虛擬機(jī)中借助自帶的火狐瀏覽器,將jdk文件下載到虛擬機(jī)中。
默認(rèn)下載到Linux系統(tǒng)的下載文件中。
B.將jdk直接下載到本地windows系統(tǒng),然后通過(guò)SecureCRT等工具導(dǎo)入虛擬機(jī)中,本次試驗(yàn)采用該法。
[root@master ~]# rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
Transferring jdk-8u181-linux-x64.tar.gz...
100% 181295 KB 36259 KB/sec 00:00:05 0 Errors
由于本機(jī)直接root用戶(hù)登錄,通過(guò)rz命令后jdk載入到/root/Home路徑。
將idk安裝包轉(zhuǎn)移到系統(tǒng)文件中,可以通過(guò)madir命令,也可以直接定位到安裝文件然后手動(dòng)轉(zhuǎn)移并修改jdk路徑,本次試驗(yàn)首先在opt文件下新建一個(gè)java文件,然后將jdk放入/opt/java路徑下。
通過(guò)tar -zxvf jdk-8u181-linux-x64.tar.gz命令解壓安裝包。
[root@master ~]# cd /opt/java
[root@master java]# tar -zxvf jdk-8u181-linux-x64.tar.gz
3.2.3 環(huán)境變量設(shè)置
通過(guò)vi /etc/profile或者vim /etc/profile進(jìn)入profile文件的編輯狀態(tài)(vim相關(guān)編輯命令請(qǐng)自行百度),也可直接在Linux系統(tǒng)下直接進(jìn)入/etc/profile路徑進(jìn)行操作。最后,將以下內(nèi)容復(fù)制到profile文件的最后。
#java environment
export JAVA_HOME=/opt/java/jdk1.8.0_181
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin
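After appending these lines, a quick sanity check (a minimal sketch, assuming the paths configured above) confirms that JAVA_HOME and PATH agree:

```shell
# Verify that PATH actually contains JAVA_HOME/bin after the edit
# (same values as in the profile snippet above).
JAVA_HOME=/opt/java/jdk1.8.0_181
PATH=$PATH:$JAVA_HOME/bin
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "PATH contains JAVA_HOME/bin" ;;
  *)                    echo "PATH is missing JAVA_HOME/bin" ;;
esac
```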
Run source /etc/profile to apply the changes, then run java -version again to confirm the installation:
[root@master ~]# source /etc/profile
[root@master ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
3.3 Passwordless SSH login
3.3.1 Preparation
Check that SSH is installed; most Linux distributions install it by default.
[root@master ~]# rpm -qa |grep ssh
openssh-clients-7.4p1-16.el7.x86_64
libssh2-1.4.3-10.el7_2.1.x86_64
openssh-7.4p1-16.el7.x86_64
openssh-server-7.4p1-16.el7.x86_64
Edit /etc/hosts with vi /etc/hosts to map each machine name to its IP (IP address first, then hostname):
192.168.31.237 master
192.168.31.238 slave1
192.168.31.239 slave2
3.3.2 設(shè)置免密登陸
生成公鑰與私鑰。
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in y.
Your public key has been saved in y.pub.
The key fingerprint is:
SHA256:+cCJUbTOrw0ON9gjKK7D5rsdNRcWlrNFXxpZpDY2jM4 root@slave2
The key's randomart image is:
+---[RSA 2048]----+
| +=. .++ |
| .+.o+.= |
| .o=. X |
| .B+oo o |
| o..SE |
| ..oo + |
|. ... + * o |
|.+... = * |
|+*+. o . |
+----[SHA256]-----+
[root@master ~]#
合并公鑰到authorized_keys文件,在master服務(wù)器,進(jìn)入/root/.ssh目錄,通過(guò)SSH命令合并。
[root@master ~]# cd /root/.ssh
[root@master .ssh]# cat id_rsa.pub >> authorized_keys
[root@master .ssh]# ssh root@192.168.31.238 cat ~/.ssh/id_rsa.pub >> authorized_keys
[root@master .ssh]# ssh root@192.168.31.239 cat ~/.ssh/id_rsa.pub >> authorized_keys
把master服務(wù)器的authorized_keys、known_hosts復(fù)制到slave服務(wù)器的/root/.ssh目錄。
scp -r /root/.ssh/authorized_keys root@192.168.31.238:/root/.ssh/
scp -r /root/.ssh/known_hosts root@192.168.31.238:/root/.ssh/
scp -r /root/.ssh/authorized_keys root@192.168.31.239:/root/.ssh/
scp -r /root/.ssh/known_hosts root@192.168.31.239:/root/.ssh/
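An equivalent, shorter route uses ssh-copy-id, which appends the local public key to the remote authorized_keys in one step. The sketch below prints the commands as a dry run (slave IPs as in the hosts table above); remove the echo to execute:

```shell
# Dry run: one ssh-copy-id per slave replaces the manual cat/scp steps.
for host in 192.168.31.238 192.168.31.239; do
  echo ssh-copy-id -i /root/.ssh/id_rsa.pub root@"$host"
done
```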
驗(yàn)證是否可以免密登陸其他機(jī)器。
[root@master ~]# ssh slave1
Last login: Mon Oct 1 16:43:06 2018
[root@slave1 ~]# ssh master
Last login: Mon Oct 1 16:43:58 2018 from slave1
[root@master ~]# ssh slave2
Last login: Mon Oct 1 16:43:33 2018
Bug
How to fix a virtual machine that cannot reach the external network?
[root@master ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
# no IP address was assigned
inet6 fe80::20c:29ff:fe72:641f prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:72:64:1f txqueuelen 1000 (Ethernet)
RX packets 12335 bytes 1908583 (1.8 MiB)
RX errors 0 dropped 868 overruns 0 frame 0
TX packets 11 bytes 828 (828.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:cb:c7:a8 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@master ~]# service network start
Restarting network (via systemctl): Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.
[FAILED]
[root@master ~]# systemctl status network.service
● network.service - LSB: Bring up/down networking
Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2018-12-05 16:59:04 CST; 1min 7s ago
Docs: man:systemd-sysv-generator(8)
Process: 4546 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)
Dec 05 16:59:04 master network[4546]: RTNETLINK answers: File exists
Dec 05 16:59:04 master network[4546]: RTNETLINK answers: File exists
Dec 05 16:59:04 master network[4546]: RTNETLINK answers: File exists
Dec 05 16:59:04 master network[4546]: RTNETLINK answers: File exists
Dec 05 16:59:04 master network[4546]: RTNETLINK answers: File exists
Dec 05 16:59:04 master network[4546]: RTNETLINK answers: File exists
Dec 05 16:59:04 master systemd[1]: network.service: control process exited, code...=1
Dec 05 16:59:04 master systemd[1]: Failed to start LSB: Bring up/down networking.
Dec 05 16:59:04 master systemd[1]: Unit network.service entered failed state.
Dec 05 16:59:04 master systemd[1]: network.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]# tail -f /var/log/messages
Dec 5 16:59:04 master network: RTNETLINK answers: File exists
Dec 5 16:59:04 master network: RTNETLINK answers: File exists
Dec 5 16:59:04 master systemd: network.service: control process exited, code=exited status=1
Dec 5 16:59:04 master systemd: Failed to start LSB: Bring up/down networking.
Dec 5 16:59:04 master systemd: Unit network.service entered failed state.
Dec 5 16:59:04 master systemd: network.service failed.
Dec 5 17:00:01 master systemd: Started Session 10 of user root.
Dec 5 17:00:01 master systemd: Starting Session 10 of user root.
Dec 5 17:01:01 master systemd: Started Session 11 of user root.
Dec 5 17:01:01 master systemd: Starting Session 11 of user root.
[root@master ~]# cat /var/log/messages | grep network
Dec 5 14:09:20 master kernel: drop_monitor: Initializing network drop monitor service
Dec 5 14:09:43 master systemd: Starting Import network configuration from initramfs...
Dec 5 14:09:43 master systemd: Started Import network configuration from initramfs.
Dec 5 14:10:01 master systemd: Starting LSB: Bring up/down networking...
Dec 5 14:10:08 master network: Bringing up loopback interface: [ OK ]
Dec 5 14:10:09 master network: Bringing up interface ens33: ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Error, some other host (70:85:C2:03:8E:AF) already uses address 192.168.31.237.
Dec 5 14:10:09 master /etc/sysconfig/network-scripts/ifup-eth: Error, some other host (70:85:C2:03:8E:AF) already uses address 192.168.31.237.
Dec 5 14:10:09 master network: [FAILED]
Dec 5 14:10:09 master systemd: network.service: control process exited, code=exited status=1
Dec 5 14:10:09 master systemd: Failed to start LSB: Bring up/down networking.
Dec 5 14:10:09 master systemd: Unit network.service entered failed state.
Dec 5 14:10:09 master systemd: network.service failed.
Dec 5 14:11:46 master pulseaudio: GetManagedObjects() failed: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
Solution
The fix depends on the specific cause; the main options are:
# 01 Edit the ifcfg-ens33 file (many online guides insist on renaming ens33 to eth0, which is unnecessary)
[root@master ~]# cd /etc/sysconfig/network-scripts
[root@master network-scripts]# ls
ifcfg-ens33 ifdown-isdn ifup ifup-plip ifup-tunnel
ifcfg-lo ifdown-post ifup-aliases ifup-plusb ifup-wireless
ifdown ifdown-ppp ifup-bnep ifup-post init.ipv6-global
ifdown-bnep ifdown-routes ifup-eth ifup-ppp network-functions
ifdown-eth ifdown-sit ifup-ib ifup-routes network-functions-ipv6
ifdown-ib ifdown-Team ifup-ippp ifup-sit
ifdown-ippp ifdown-TeamPort ifup-ipv6 ifup-Team
ifdown-ipv6 ifdown-tunnel ifup-isdn ifup-TeamPort
[root@master network-scripts]# vi ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
#設(shè)置靜態(tài)IP
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="cecb46d8-4d6e-4678-b2f4-445b9f09c73d"
DEVICE="ens33"
#開(kāi)機(jī)自啟
ONBOOT="yes"
IPADDR=192.168.31.237
NETMASK=255.255.255.0
GATEWAY=192.168.31.1
DNS1=192.168.31.1
# 02 考慮到當(dāng)前IP被占用的情況,設(shè)置新的靜態(tài)IP地址,包括/etc/hosts和/etc/sysconfig/network-scripts/ifcfg-ens33
[root@master ~]# vi /etc/hostname
[root@master ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
[root@master ~]# service network restart
Restarting network (via systemctl): [ OK ]
# 03 關(guān)閉NetworkManager管理套件
[root@master ~]# systemctl stop NetworkManager
[root@master ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
[root@master ~]# systemctl restart network
# 通過(guò)上述方式最終成功解決
[root@master ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.31.237 netmask 255.255.255.0 broadcast 192.168.31.255
inet6 fe80::20c:29ff:fe72:641f prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:72:64:1f txqueuelen 1000 (Ethernet)
RX packets 341 bytes 32414 (31.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 61 bytes 7540 (7.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 2 bytes 108 (108.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 108 (108.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:cb:c7:a8 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
3.4 Installing Hadoop 2.7.2 and configuring the cluster
3.4.1 Hadoop installation
As with the JDK, upload the archive and extract it under /opt/hadoop.
Configure the Hadoop environment variables:
[root@master ~]# vim /etc/profile
export HADOOP_HOME=/opt/hadoop/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
:x
[root@master ~]# source /etc/profile
驗(yàn)證是否完成安裝。
[root@master ~]# hadoop version
Hadoop 2.7.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41
Compiled by jenkins on 2016-01-26T00:08Z
Compiled with protoc 2.5.0
From source with checksum d0fda26633fa762bff87ec759ebe689c
This command was run using /opt/hadoop/hadoop-2.7.2/share/hadoop/common/hadoop-common-2.7.2.jar
3.4.2 Distributed cluster configuration
Under /opt/hadoop, create the data directories tmp, dfs, dfs/data, and dfs/name.
Enter the Hadoop configuration directory:
[root@master ~]# cd /opt/hadoop/hadoop-2.7.2/etc/hadoop
[root@master hadoop]# ls
capacity-scheduler.xml httpfs-env.sh mapred-env.sh
configuration.xsl httpfs-log4j.properties mapred-queues.xml.template
container-executor.cfg httpfs-signature.secret mapred-site.xml.template
core-site.xml httpfs-site.xml slaves
hadoop-env.cmd kms-acls.xml ssl-client.xml.example
hadoop-env.sh kms-env.sh ssl-server.xml.example
hadoop-metrics2.properties kms-log4j.properties yarn-env.cmd
hadoop-metrics.properties kms-site.xml yarn-env.sh
hadoop-policy.xml log4j.properties yarn-site.xml
hdfs-site.xml mapred-env.cmd
Configure core-site.xml:
vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131702</value>
</property>
</configuration>
Configure hdfs-site.xml:
vi hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
Configure mapred-site.xml. The distribution ships only a template, so copy it first:
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>http://master:9001</value>
</property>
</configuration>
Configure yarn-site.xml:
vi yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
</configuration>
Set JAVA_HOME in hadoop-env.sh and yarn-env.sh.
[root@master hadoop]# vi hadoop-env.sh
[root@master hadoop]# vi yarn-env.sh
Configure slaves, adding the two slave nodes:
# remove the default localhost entry
slave1
slave2
通過(guò)scp將master服務(wù)器上配置好的Hadoop復(fù)制到各個(gè)節(jié)點(diǎn)對(duì)應(yīng)位置上。
[root@master hadoop]# scp -r /opt/hadoop 192.168.10.132:/opt/
[root@master hadoop]# scp -r /opt/hadoop 192.168.10.133:/opt/
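Because the slaves already resolve by hostname, the two copies can also be expressed as a loop (printed as a dry run; remove the echo to execute):

```shell
# Dry run: copy the configured Hadoop tree to each slave by hostname.
for host in slave1 slave2; do
  echo scp -r /opt/hadoop root@"$host":/opt/
done
```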
3.4.3 啟動(dòng)hadoop
從master服務(wù)器上進(jìn)行hadoop文件目錄,并初始化。
[root@master ~]# cd /opt/hadoop/hadoop-2.7.2
[root@master hadoop-2.7.2]# bin/hdfs namenode -format
啟動(dòng)/終止命令
sbin/start-dfs.sh
sbin/start-yarn.sh
sbin/stop-dfs.sh
sbin/stop-yarn.sh
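The four scripts above can be wrapped in two helper functions, a sketch assuming the HADOOP_HOME path used throughout this guide:

```shell
# Helpers: bring the Hadoop stack up or down in the right order.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop/hadoop-2.7.2}
start_hadoop() { "$HADOOP_HOME/sbin/start-dfs.sh" && "$HADOOP_HOME/sbin/start-yarn.sh"; }
stop_hadoop()  { "$HADOOP_HOME/sbin/stop-yarn.sh" && "$HADOOP_HOME/sbin/stop-dfs.sh"; }
```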
輸入jps查看相關(guān)信息。
- master
[root@master hadoop-2.7.2]# jps
8976 Jps
8710 ResourceManager
8559 SecondaryNameNode
- slave
[root@slave1 ~]# jps
4945 Jps
3703 DataNode
4778 NodeManager
3.5 Spark安裝及環(huán)境配置
3.5.1 Scala安裝
3.5.2 Spark安裝
3.5.3 Spark啟動(dòng)
關(guān)閉/開(kāi)啟 防火墻。
# 開(kāi)啟防火墻
[root@master ~]# systemctl start firewalld.service
# 關(guān)閉防火墻
[root@master ~]# systemctl stop firewalld.service
# 開(kāi)啟開(kāi)機(jī)啟動(dòng)
[root@master ~]# systemctl enable firewalld.service
# 關(guān)閉開(kāi)機(jī)啟動(dòng)
[root@master ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
啟動(dòng)Hadoop節(jié)點(diǎn)。
[root@master ~]# cd /opt/hadoop/hadoop-2.7.2/
[root@master hadoop-2.7.2]# sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-master.out
slave1: starting datanode, logging to /opt/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave1.out
slave2: starting datanode, logging to /opt/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.7.2/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /opt/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave2.out
slave1: starting nodemanager, logging to /opt/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave1.out
[root@master hadoop-2.7.2]# jps
3648 SecondaryNameNode
4099 Jps
3801 ResourceManager
啟動(dòng)Spark。
[root@master hadoop-2.7.2]# cd /opt/spark/spark-2.3.1-bin-hadoop2.7
[root@master spark-2.3.1-bin-hadoop2.7]# sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/spark/spark-2.3.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/spark-2.3.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/spark-2.3.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.out
Spark集群測(cè)試(master節(jié)點(diǎn))。