Search for an image
[root@localhost /]# docker search hadoop
There is no official image, so I chose the singularities/hadoop image.
[root@localhost /]# docker pull singularities/hadoop
List the local images:
[root@localhost /]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/singularities/hadoop latest e213c9ae1b36 3 months ago 1.19 GB
Create the docker-compose.yml file
[root@localhost /]# vim docker-compose.yml
Contents of docker-compose.yml:
version: "2"
services:
  namenode:
    image: singularities/hadoop
    command: start-hadoop namenode
    hostname: namenode
    environment:
      HDFS_USER: hdfsuser
    ports:
      - "8020:8020"
      - "14000:14000"
      - "50070:50070"
      - "50075:50075"
      - "10020:10020"
      - "13562:13562"
      - "19888:19888"
  datanode:
    image: singularities/hadoop
    command: start-hadoop datanode namenode
    environment:
      HDFS_USER: hdfsuser
    links:
      - namenode
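Note that this compose file keeps all HDFS data inside the containers, so the data is lost when a container is recreated. If you want persistence, a `volumes` entry can be added. A hedged sketch only: the container-side path below is an assumption, not something this image documents here; check the image's hdfs-site.xml (`dfs.namenode.name.dir` / `dfs.datanode.data.dir`) for the real locations before using it.

```yaml
# Optional sketch: persist NameNode metadata on the host.
# /opt/hdfs is an assumed container path -- verify it against the
# image's hdfs-site.xml before relying on this.
services:
  namenode:
    volumes:
      - ./namenode-data:/opt/hdfs
```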
HDFS_USER is the name of the HDFS account. It must be created manually; how to create it is explained below.
Run:
[root@localhost hadoop]# docker-compose up -d
Creating network "hadoop_default" with the default driver
Creating hadoop_namenode_1 ... done
Creating hadoop_datanode_1 ... done
Scale out to three datanodes
[root@localhost hadoop]# docker-compose scale datanode=3
WARNING: The scale command is deprecated. Use the up command with the --scale flag instead.
Starting hadoop_datanode_1 ... done
Creating hadoop_datanode_2 ... done
Creating hadoop_datanode_3 ... done
List the containers to check:
[root@localhost hadoop]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19f9685e286f singularities/hadoop "start-hadoop data..." 48 seconds ago Up 46 seconds 8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp hadoop_datanode_3
e96b395f56e3 singularities/hadoop "start-hadoop data..." 48 seconds ago Up 46 seconds 8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp hadoop_datanode_2
5a26b1069dbb singularities/hadoop "start-hadoop data..." 8 minutes ago Up 8 minutes 8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp hadoop_datanode_1
a8656de09ecc singularities/hadoop "start-hadoop name..." 8 minutes ago Up 8 minutes 0.0.0.0:8020->8020/tcp, 0.0.0.0:10020->10020/tcp, 0.0.0.0:13562->13562/tcp, 0.0.0.0:14000->14000/tcp, 9000/tcp, 50010/tcp, 0.0.0.0:19888->19888/tcp, 0.0.0.0:50070->50070/tcp, 50020/tcp, 50090/tcp, 50470/tcp, 0.0.0.0:50075->50075/tcp, 50475/tcp hadoop_namenode_1
Open a browser to view the result:

[Screenshot: 1568803464(1).png]
Create the HDFS system account
[root@localhost /]# adduser hdfsuser
File maintenance
File maintenance requires entering a datanode container first.
Enter the datanode's Docker container:
[root@iZ2ze82xifgiw8sbzpte9tZ ~]# docker exec -it <container-id> bash
1. Create a directory
>hadoop fs -mkdir /hdfs    # create an hdfs directory under the root
2. List a directory
>hadoop fs -ls /    # list the files under the root directory
3. Create nested directories
>hadoop fs -mkdir -p /hdfs/d1/d2
4. List directories recursively
>hadoop fs -ls -R /
5. Upload a local file to HDFS
>echo "hello hdfs" >>local.txt
>hadoop fs -put local.txt /hdfs/d1/d2
6. View the contents of a file in HDFS
>hadoop fs -cat /hdfs/d1/d2/local.txt
hello hdfs
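The same file operations are also exposed over Hadoop's WebHDFS REST API, which these containers serve through the NameNode web port (50070) mapped in docker-compose.yml. A minimal sketch of building the request URLs; the host, port, and user name here are assumptions taken from the setup above, not values the image guarantees:

```python
# Build WebHDFS v1 request URLs for common HDFS operations.
# Host/port/user are assumptions based on the docker-compose setup above.
def webhdfs_url(path, op, host="localhost", port=50070, user="hdfsuser"):
    """Return the WebHDFS URL for operation `op` applied to HDFS `path`."""
    return (f"http://{host}:{port}/webhdfs/v1{path}"
            f"?op={op}&user.name={user}")

# Equivalent of `hadoop fs -ls /` (sent as an HTTP GET):
print(webhdfs_url("/", "LISTSTATUS"))
# -> http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfsuser

# Equivalent of `hadoop fs -mkdir -p /hdfs/d1/d2` (sent as an HTTP PUT):
print(webhdfs_url("/hdfs/d1/d2", "MKDIRS"))
```

This is handy when scripting against the cluster without a local Hadoop client installed.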