# hdfs dfs // equivalent to hadoop fs
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -ls -R /
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -mkdir -p /user/ubuntu/hadoop
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -ls -R /
drwxr-xr-x - ubuntu supergroup 0 2018-10-07 21:27 /user
drwxr-xr-x - ubuntu supergroup 0 2018-10-07 21:27 /user/ubuntu
drwxr-xr-x - ubuntu supergroup 0 2018-10-07 21:27 /user/ubuntu/hadoop
# View help for the -put option
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -help put
-put [-f] [-p] [-l] <localsrc> ... <dst> :
Copy files from the local file system into fs. Copying fails if the file already
exists, unless the -f flag is given.
Flags:
-p Preserves access and modification times, ownership and the mode.
-f Overwrites the destination if it already exists.
-l Allow DataNode to lazily persist the file to disk. Forces
replication factor of 1. This flag will result in reduced
durability. Use with care.
# View help for the -copyFromLocal option
ubuntu@s0:/soft/hadoop/logs$ hdfs dfs -help copyFromLocal
-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst> :
Identical to the -put command.
# Upload the local file index.html to the HDFS directory /user/ubuntu/hadoop
ubuntu@s0:~$ hdfs dfs -put index.html /user/ubuntu/hadoop
# Download to the local file system
ubuntu@s0:~$ hdfs dfs -get /user/ubuntu/hadoop/index.html a.html
# Delete a directory recursively
ubuntu@s0:~$ hdfs dfs -rm -r -f /user/ubuntu/hadoop
18/10/07 21:44:00 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /user/ubuntu/hadoop
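The INFO line shows the trash is disabled here (deletion interval = 0 minutes), so the directory is removed immediately. When fs.trash.interval is non-zero, deleted files go to the trash first; the standard -skipTrash flag of -rm bypasses it:
hdfs dfs -rm -r -f -skipTrash /user/ubuntu/hadoop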
For a Java program to recognize Hadoop's hdfs:// URL scheme, extra work is needed:
pass an FsUrlStreamHandlerFactory instance to
the setURLStreamHandlerFactory method of java.net.URL (callable at most once per JVM).
Disk seek time is about 10 ms and the disk transfer rate is about 100 MB/s, so roughly 100 MB can be read in one second; rounded to a binary power that is 128 MB. The HDFS block size is sized by this rule.
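The arithmetic behind the rule: keeping the seek overhead to about 1% of the transfer time means a block should take roughly 1 s to read, and 1 s × 100 MB/s = 100 MB, rounded up to 128 MB (2^27 bytes). A sketch of setting it in hdfs-site.xml (dfs.blocksize is the standard property; 134217728 bytes = 128 MB is already the default in Hadoop 2.x):
<property>
<name>dfs.blocksize</name>
<value>134217728</value>
</property>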
# Combinations of the include (white) and exclude (black) lists
include //dfs.hosts
exclude //dfs.hosts.exclude
include   exclude   Interpretation
no        no        Node may not connect
no        yes       Node may not connect
yes       no        Node may connect
yes       yes       Node may connect and will be decommissioned
# 節(jié)點的服役和退役(hdfs)
1. List the new node's hostname in dfs.include.txt (the whitelist), a file in a local directory on the NameNode (NN):
[s0:/soft/hadoop/etc/dfs.include.txt]
s1
s2
s3
s4
2. Add this property to hdfs-site.xml:
<property>
<name>dfs.hosts</name>
<value>/soft/hadoop/etc/dfs.include.txt</value>
</property>
3. Refresh the nodes on the NN:
hdfs dfsadmin -refreshNodes
4. Add the node's hostname to the slaves file:
s1
s2
s3
s4 // newly added
5. Start the DataNode on the new node by itself:
[s4]
hadoop-daemon.sh start datanode
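To confirm the new DataNode has registered with the NN, a quick check (not part of the original steps) is:
hdfs dfsadmin -report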
[Decommissioning]
1. Add the hostname of the node to be retired to the blacklist; do not update the whitelist yet:
[/soft/hadoop/etc/dfs.hosts.exclude.txt]
s4
2. Configure hdfs-site.xml:
<property>
<name>dfs.hosts.exclude</name>
<value>/soft/hadoop/etc/dfs.hosts.exclude.txt</value>
</property>
3. Refresh the nodes on the NN:
hdfs dfsadmin -refreshNodes
4. Check the web UI; the node's state shows Decommission In Progress while its blocks are re-replicated to other nodes.
5. When all of the retiring nodes report Decommissioned, the data migration is complete.
6. Remove the node from the whitelist and refresh the nodes:
[s0:/soft/hadoop/etc/dfs.include.txt]
hdfs dfsadmin -refreshNodes
7. Remove the decommissioned node from the slaves file.
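Once a node reports Decommissioned, its DataNode process can be stopped on the retired host with the same daemon script used in step 5:
[s4]
hadoop-daemon.sh stop datanode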
# 節(jié)點的服役和退役(yarn)
1.在dfs.include文件中包含新節(jié)點名稱,該文件在nn的本地目錄。
白名單
[s0:/soft/hadoop/etc/dfs.include.txt]
s1
s2
s3
s4
2. Add this property to yarn-site.xml:
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value>/soft/hadoop/etc/dfs.include.txt</value>
</property>
3. Refresh the nodes on the RM:
yarn rmadmin -refreshNodes
4. Add the node's hostname to the slaves file:
s1
s2
s3
s4 // newly added
5. Start the NodeManager on the new node by itself:
[s4]
yarn-daemon.sh start nodemanager
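To confirm the new NodeManager has registered with the RM, a quick check (not part of the original steps) is:
yarn node -list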
[Decommissioning]
1. Add the hostname of the node to be retired to the blacklist; do not update the whitelist yet:
[/soft/hadoop/etc/dfs.hosts.exclude.txt]
s4
2. Configure yarn-site.xml:
<property>
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value>/soft/hadoop/etc/dfs.hosts.exclude.txt</value>
</property>
3. Refresh the nodes on the RM:
yarn rmadmin -refreshNodes
4. Check the web UI; the node shows as being decommissioned.
5. When all of the retiring nodes report DECOMMISSIONED, the RM no longer schedules containers on them (unlike HDFS, there is no block data to migrate).
6. Remove the node from the whitelist and refresh the nodes:
[s0:/soft/hadoop/etc/dfs.include.txt]
yarn rmadmin -refreshNodes
7. Remove the decommissioned node from the slaves file.
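Once decommissioned, the NodeManager process can be stopped on the retired host, mirroring the HDFS case:
[s4]
yarn-daemon.sh stop nodemanager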