Hadoop Learning Notes (2): HDFS

HDFS Design Goals

From the previous post we have already seen what HDFS actually is and how it provides high reliability through its multi-replica mechanism. Its design goals can be summarized as follows:

  • A very large distributed file system
  • Runs on ordinary, inexpensive hardware
  • Easy to scale, providing users with a file storage service of solid performance

The Architecture of HDFS

Let's walk through the basic architecture of HDFS using the official documentation (http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html):

Introduction

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project. HDFS is part of the Apache Hadoop Core project. The project URL is http://hadoop.apache.org/.

This paragraph is the basic introduction to HDFS: the Hadoop Distributed File System is a distributed file system designed to run on inexpensive hardware. It shares many similarities with existing distributed file systems, but it is the differences that matter. HDFS is highly fault-tolerant and can be deployed on cheap machines. It provides high-throughput access to application data, which makes it well suited to applications with large data sets. HDFS relaxes a few POSIX requirements in order to support streaming access. It was originally built as infrastructure for the Apache Nutch search engine, and is now part of the Apache Hadoop Core project.

Assumptions and Goals

Hardware Failure

Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system’s data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

Hardware failure: hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of servers, each storing part of the file system's data. With so many components, each having a non-trivial probability of failing, some part of HDFS is effectively always broken. Detecting faults and recovering from them quickly and automatically is therefore a core architectural goal of HDFS.

Streaming Data Access

Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications that are targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.

Streaming data access: applications that run on HDFS need streaming access to their data sets. These are not general-purpose applications running on general-purpose file systems. HDFS is designed for batch processing rather than interactive use; the emphasis is on high throughput of data access, not low latency. POSIX imposes many hard requirements that HDFS applications do not need, so POSIX semantics have been traded away in a few key areas to increase data throughput.

Large Data Sets

Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.

Large data sets: applications that run on HDFS work with large data sets. A typical file in HDFS is gigabytes to terabytes in size, so HDFS is tuned to support large files. It should provide high aggregate data bandwidth, scale to hundreds of nodes in a single cluster, and support tens of millions of files in a single instance.

Simple Coherency Model

HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed except for appends and truncates. Appending the content to the end of the files is supported but cannot be updated at arbitrary point. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model.

Simple coherency model: HDFS applications need a write-once-read-many access model for files. Once a file is created, written, and closed it need not change, except for appends and truncates. Appending to the end of a file is supported, but updating it at an arbitrary position is not. This assumption simplifies data coherency issues and enables high-throughput access; a MapReduce job or a web crawler fits this model perfectly.

“Moving Computation is Cheaper than Moving Data”

A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

Moving computation is cheaper than moving data: a computation is much more efficient when it executes near the data it operates on, especially when the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation to where the data lives than to move the data to where the application is running, and HDFS provides interfaces that let applications do exactly that.

Portability Across Heterogeneous Hardware and Software Platforms

HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

Portability across heterogeneous hardware and software platforms: HDFS is designed to be easily portable from one platform to another, which helps it be widely adopted as the platform of choice for a large set of applications.

NameNode and DataNodes

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system’s clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.

HDFS uses a master/slave architecture. An HDFS cluster contains a single NameNode, a master server that manages the file system namespace and regulates client access to files. In addition there are a number of DataNodes, usually one per node in the cluster, which manage the storage attached to the nodes they run on. HDFS exposes a file system namespace and lets user data be stored in files. Internally, a file is split into one or more blocks, and these blocks are stored on a set of DataNodes. The NameNode performs namespace operations such as opening, closing, and renaming files and directories, and it determines the mapping of blocks to DataNodes. The DataNodes serve read and write requests from the file system's clients, and they create, delete, and replicate blocks on instruction from the NameNode.

The NameNode and DataNode are pieces of software designed to run on commodity machines. These machines typically run a GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine but in a real deployment that is rarely the case.

The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run a GNU/Linux operating system. HDFS is built in Java, so any machine that supports Java can run the NameNode or DataNode software; using such a highly portable language means HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software, while each of the other machines in the cluster runs one DataNode instance. The architecture does not preclude running multiple DataNodes on the same machine, but in real deployments that is rarely the case.

The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.

Having a single NameNode in the cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata, and the system is designed in such a way that user data never flows through the NameNode.

(Figure: HDFS architecture)

The above covers the HDFS architecture and the NameNode and DataNodes; let's summarize:

  1. Architecture: HDFS uses a master/slaves design, i.e. one master (NameNode) with multiple slaves (DataNodes).
  2. Internally, a file is split into blocks. If the block size is 128 MB, a 130 MB file is split into two blocks: one of 128 MB and one of 2 MB.
  3. NameNode
    • Responds to client requests.
    • Manages the metadata. What is metadata? From the diagram, a client that wants to access a file in HDFS first sends a metadata request (metadata ops) to the NameNode, which holds the file's metadata: the file name, its replication factor, and which DataNodes each block lives on. Guided by this metadata, the client then reads the blocks from the corresponding DataNodes (block ops).
  4. DataNode
    • Stores the data blocks that make up users' files.
    • Periodically sends heartbeats to the NameNode, reporting its own health and all of its block information. As the manager, the NameNode needs to know exactly what each node it manages stores and what state that node is in; if a DataNode reports that it has a problem and can no longer store data, the NameNode will stop assigning new storage tasks to it. DataNodes therefore report regularly so that the NameNode can manage them.
  5. Deployment
    • A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. In other words: 1 NameNode + N DataNodes.
    • The architecture does not preclude running multiple DataNodes on the same machine but in a real deployment that is rarely the case. Running the NameNode and a DataNode on the same node is possible, but not recommended.
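To make the block-splitting rule in point 2 concrete, here is a small Python sketch. This is a toy illustration of the arithmetic only, not HDFS source code; the function name is my own:

```python
# Toy illustration of HDFS block splitting (not actual HDFS code).
# A file is cut into fixed-size blocks; only the last block may be smaller.

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, a common default for dfs.blocksize
MB = 1024 * 1024

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes (in bytes) of the blocks a file would occupy."""
    full, rest = divmod(file_size, block_size)
    return [block_size] * full + ([rest] if rest else [])

# A 130 MB file becomes one 128 MB block plus one 2 MB block.
print([b // MB for b in split_into_blocks(130 * MB)])  # → [128, 2]
```

Note the edge case: a file that is an exact multiple of the block size produces no trailing partial block.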

The File System Namespace

HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS supports user quotas and access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features.

HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside them. The namespace hierarchy is similar to most existing file systems: you can create and remove files, move a file from one directory to another, or rename a file. HDFS supports user quotas and access permissions. It does not support hard links or soft links, although the architecture does not preclude implementing these features.

The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.

The NameNode maintains the file system namespace, and any change to the namespace or its properties is recorded by it. An application can specify how many replicas of a file HDFS should maintain; the number of copies is called the file's replication factor, and this information is stored by the NameNode.
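As a rough sketch of the idea, here is a toy per-file metadata record pairing a replication factor with a block list. The structure and names are invented for illustration; they are not the real NameNode internals or FsImage format:

```python
# Toy model of per-file metadata kept by the NameNode (invented structure,
# not real HDFS internals): path, replication factor, and block IDs.
from dataclasses import dataclass, field

@dataclass
class FileMeta:
    path: str
    replication: int = 3                             # replication factor
    blocks: list = field(default_factory=list)       # block IDs

namespace = {}  # path -> FileMeta, a stand-in for the namespace tree

def create_file(path, replication=3):
    # The replication factor can be specified at file creation time...
    meta = FileMeta(path, replication)
    namespace[path] = meta
    return meta

def set_replication(path, replication):
    # ...and changed later, matching the behavior described above.
    namespace[path].replication = replication

create_file("/hello.txt", replication=1)
set_replication("/hello.txt", 3)
print(namespace["/hello.txt"].replication)  # → 3
```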

Data Replication

HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file.

All blocks in a file except the last block are the same size, while users can start a new block without filling out the last block to the configured block size after the support for variable length block was added to append and hsync.

An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once (except for appends and truncates) and have strictly one writer at any time.

The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.

HDFS is designed to reliably store very large files across the machines of a large cluster. It stores each file as a sequence of blocks, and the blocks of a file are replicated for fault tolerance. The block size and the replication factor are configurable per file.

All blocks of a file except the last are the same size. Say the block size is 128 MB: a file's size is usually not an exact multiple of 128 MB, so when the file is split, every block but the last is exactly 128 MB and the last one is at most 128 MB. (Since variable-length block support was added to append and hsync, users can also start a new block without filling the last one up to the configured block size.)

An application can specify the number of replicas of a file. The replication factor can be set at file creation time and changed later. Files in HDFS are write-once (except for appends and truncates), and there is strictly one writer at any time; concurrent writes are not supported.

The NameNode makes all decisions regarding block replication. It periodically receives a Heartbeat and a Blockreport from each DataNode in the cluster. Receiving a Heartbeat means the DataNode is functioning properly; a Blockreport contains a list of all blocks on that DataNode.
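The heartbeat idea can be sketched as a toy model: a NameNode that considers a DataNode dead if no heartbeat arrives within a timeout, and records each Blockreport as the full block set for that node. All class and method names here are invented for illustration; this is not the real HDFS protocol:

```python
# Toy model of heartbeat-based liveness tracking (invented, not real HDFS).
class ToyNameNode:
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_heartbeat = {}  # datanode id -> time of last heartbeat
        self.block_map = {}       # datanode id -> set of block IDs it holds

    def heartbeat(self, datanode, now):
        # Receipt of a heartbeat implies the DataNode is functioning.
        self.last_heartbeat[datanode] = now

    def block_report(self, datanode, blocks):
        # A block report lists every block stored on that DataNode.
        self.block_map[datanode] = set(blocks)

    def live_datanodes(self, now):
        # Nodes silent for longer than the timeout are treated as dead.
        return {dn for dn, t in self.last_heartbeat.items()
                if now - t <= self.timeout}

nn = ToyNameNode(timeout=30.0)
nn.heartbeat("dn1", now=0.0)
nn.heartbeat("dn2", now=0.0)
nn.block_report("dn1", {"blk_1", "blk_2"})
nn.heartbeat("dn1", now=40.0)          # dn2 stays silent past the timeout
print(nn.live_datanodes(now=40.0))     # → {'dn1'}
```

In the real system, losing a DataNode this way is what triggers re-replication of its blocks elsewhere, since the NameNode knows from the Blockreports which blocks have fallen below their replication factor.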


Installing HDFS

I won't go over the installation in detail, since there are plenty of setup tutorials online, but I do want to mention a few problems I ran into while configuring it:

  1. When editing the Hadoop configuration files, if you copied them from the web, double-check that the XML tags are properly matched. In my copy a </value> had become /value>, which broke the configuration.

  2. Some older tutorials start Hadoop with the ./sbin/start-all.sh command. If that doesn't work, use ./sbin/start-dfs.sh to start the NameNode, DataNodes, and SecondaryNameNode, then run start-yarn.sh.

  3. After running those two commands Hadoop is up; jps should show processes like the following:

    11296 NodeManager
    11344 Jps
    11061 SecondaryNameNode
    10838 NameNode
    10938 DataNode
    11194 ResourceManager
    

    If jps does not show the NameNode, DataNode, and so on, the cause is usually a misconfigured configuration file or an occupied port; either way, the logs will tell you. They live in the logs folder under the Hadoop installation directory and record the errors raised while the NameNode, DataNode, etc. were starting up. If a port is already taken, change the port number in the configuration files; if you are not sure which ports the NameNode, DataNode, and so on use, look up how each is configured.

Here is a setup tutorial I'd recommend: http://www.yiibai.com/hadoop/hadoop_enviornment_setup.html

Common HDFS Shell Commands

The HDFS shell provides a set of commands for accessing and manipulating data on HDFS and on the local file system.

If you are familiar with Linux, there is really nothing hard here; the usage is almost identical.

The standard form of these commands is hadoop fs -[command]. Take ls, for example: hadoop fs -ls / lists all files under the root directory.

put

Usage: hadoop fs -put <local source> <destination directory>

Copies one or more source paths from the local file system to the target file system. In other words, it uploads local files to HDFS: one argument is the local source path, the other is the destination path in HDFS.

Example:

hadoop fs -put hello.txt / puts the hello.txt file into the root directory.

wangsheng@MacPro[10:18:56]:~/desktop (つ??ω??)つcat hello.txt 
hello java
hello text
hello hadoop
hello wangsheng
wangsheng@MacPro[10:19:05]:~/desktop (つ??ω??)つhadoop fs -put hello.txt /
17/10/15 10:19:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
wangsheng@MacPro[10:19:23]:~/desktop (つ??ω??)つhadoop fs -ls /
17/10/15 10:19:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   1 wangsheng supergroup         51 2017-10-15 10:19 /hello.txt

ls

Usage: hadoop fs -ls <directory/file>

For a file, returns the file's information.
For a directory, returns a listing of its immediate children; use the -R option to list recursively.
Example:

hadoop fs -ls / shows the immediate children of the root directory.

hadoop fs -ls -R /, as in Linux, recursively lists everything under the root directory.

wangsheng@MacPro[10:58:11]:~/desktop (つ??ω??)つhadoop fs -ls /
17/10/15 10:58:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   1 wangsheng supergroup         51 2017-10-15 10:19 /hello.txt
drwxr-xr-x   - wangsheng supergroup          0 2017-10-15 10:58 /test
wangsheng@MacPro[10:58:17]:~/desktop (つ??ω??)つhadoop fs -ls -R /
17/10/15 10:58:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-rw-r--r--   1 wangsheng supergroup         51 2017-10-15 10:19 /hello.txt
drwxr-xr-x   - wangsheng supergroup          0 2017-10-15 10:58 /test
drwxr-xr-x   - wangsheng supergroup          0 2017-10-15 10:58 /test/a

mkdir

Usage: hadoop fs -mkdir <directory path>

Creates a directory, just as in Linux; use the -p option to create parent directories as needed.

Example:

hadoop fs -mkdir /test creates a test directory under the root.

hadoop fs -mkdir -p /data/a creates the /data/a path under the root, parents included.

wangsheng@MacPro[11:36:42]:~/desktop (つ??ω??)つhadoop fs -mkdir -p /data/a 
17/10/15 11:37:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
wangsheng@MacPro[11:37:05]:~/desktop (つ??ω??)つhadoop fs -ls -R /
17/10/15 11:37:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
drwxr-xr-x   - wangsheng supergroup          0 2017-10-15 11:37 /data
drwxr-xr-x   - wangsheng supergroup          0 2017-10-15 11:37 /data/a
-rw-r--r--   1 wangsheng supergroup         51 2017-10-15 10:19 /hello.txt
drwxr-xr-x   - wangsheng supergroup          0 2017-10-15 10:58 /test
drwxr-xr-x   - wangsheng supergroup          0 2017-10-15 10:58 /test/a

rm

Usage: hadoop fs -rm <directory/file>

Deletes the specified files. To delete a directory and everything in it, add the -R option for a recursive delete.
Example:

hadoop fs -rm /hello.txt deletes the hello.txt file.

hadoop fs -rm -R /data/a recursively deletes the /data/a directory.

wangsheng@MacPro[11:41:06]:~/desktop (つ??ω??)つhadoop fs -rm /hello.txt 
17/10/15 11:41:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Deleted /hello.txt
wangsheng@MacPro[11:41:27]:~/desktop (つ??ω??)つhadoop fs -rm -R /data/a
17/10/15 11:41:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Deleted /data/a

get

Usage: hadoop fs -get <source path> [<local destination>]

Downloads a file from HDFS to the local file system.

Example:

hadoop fs -get /hello.txt copies hello.txt to the current local directory.

wangsheng@MacPro[11:43:46]:~/desktop (つ??ω??)つls
161208082042352.png                                  
app       
wangsheng@MacPro[11:43:48]:~/desktop (つ??ω??)つhadoop fs -get /hello.txt
17/10/15 11:43:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
wangsheng@MacPro[11:43:56]:~/desktop (つ??ω??)つls
161208082042352.png                                               
hello.txt                   
app

For more commands, please visit: http://hadoop.apache.org/docs/r1.0.4/cn/hdfs_shell.html

References

Hadoop official documentation translation — HDFS Architecture: http://www.linuxidc.com/Linux/2016-12/138027.htm
