2020-10-28 HDFS Block Data Transfer Encryption

Operating system: CentOS Linux release 7.4.1708 (Core)

Software: jdk-8u201-linux-x64.tar.gz, hadoop-2.7.7.tar.gz

Installation:

Log in to 192.168.1.17

hostnamectl set-hostname node1;mkdir /opt/namenode;mkdir /opt/datanode


vi /etc/hosts

192.168.1.17  node1

192.168.1.18  node2
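
Name resolution can be sanity-checked once both hosts are up (this assumes node2 at 192.168.1.18 is already reachable on the network):

ping -c 1 node1

ping -c 1 node2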


tar -zxvf jdk-8u201-linux-x64.tar.gz;mv jdk1.8.0_201/ /opt/jdk

tar -zxvf hadoop-2.7.7.tar.gz;mv hadoop-2.7.7/ /opt/hadoop


vi ~/.bashrc

export JAVA_HOME=/opt/jdk

export HADOOP_HOME=/opt/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source ~/.bashrc
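
To confirm the environment variables took effect, a quick check with the standard JDK and Hadoop commands:

java -version

hadoop version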

------------------------------------

Log in to 192.168.1.18

hostnamectl set-hostname node2;mkdir /opt/namenode;mkdir /opt/datanode


vi /etc/hosts

192.168.1.17  node1

192.168.1.18  node2


tar -zxvf jdk-8u201-linux-x64.tar.gz;mv jdk1.8.0_201/ /opt/jdk


tar -zxvf hadoop-2.7.7.tar.gz;mv hadoop-2.7.7/ /opt/hadoop


vi ~/.bashrc

export JAVA_HOME=/opt/jdk

export HADOOP_HOME=/opt/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source ~/.bashrc

-----------------------------------

Edit the core-site.xml configuration

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.1.17:9000</value>
    </property>
</configuration>
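
With the tarball extracted to /opt/hadoop as above, this file sits at /opt/hadoop/etc/hadoop/core-site.xml (the default Hadoop 2.7.7 layout), and the same content is needed on both node1 and node2:

vi /opt/hadoop/etc/hadoop/core-site.xml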

Edit the hdfs-site.xml configuration (likewise on both nodes)

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/datanode</value>
    </property>
    <property>
        <name>dfs.encrypt.data.transfer</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.encrypt.data.transfer.algorithm</name>
        <value>3des</value>
    </property>
    <property>
        <name>dfs.encrypt.data.transfer.cipher.suites</name>
        <value>AES/CTR/NoPadding</value>
    </property>
    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
    </property>
</configuration>
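
How these settings interact: dfs.encrypt.data.transfer turns on encryption of the DataNode data transfer protocol; with dfs.encrypt.data.transfer.cipher.suites set to AES/CTR/NoPadding, AES is used for the block streams themselves, and the 3des algorithm configured above should only come into play during the initial key exchange. To double-check which values a node actually loaded, hdfs getconf can be used, for example:

hdfs getconf -confKey dfs.encrypt.data.transfer

hdfs getconf -confKey dfs.encrypt.data.transfer.cipher.suites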

-------------------------------

Log in to 192.168.1.17
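
On a brand-new cluster the NameNode metadata directory normally has to be formatted before the first start (note: this wipes anything already in /opt/namenode, so only run it on a fresh setup):

hdfs namenode -format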

hadoop-daemon.sh start namenode;hadoop-daemon.sh start datanode
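
If both daemons started, jps (shipped with the JDK) should list a NameNode and a DataNode process on node1:

jps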

Log in to 192.168.1.18

hadoop-daemon.sh start datanode
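
Once the second DataNode is up, the cluster view can be checked from either node; a healthy setup should report two live datanodes:

hdfs dfsadmin -report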

-------------------------------

Test

hadoop fs -mkdir -p /user/linzw

hadoop fs -put anaconda-ks.cfg /user/linzw
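
To confirm the upload succeeded:

hadoop fs -ls /user/linzw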


Note:

If dfs.block.access.token.enable = true is not set, the following error occurs (data transfer encryption relies on the block access token machinery for its encryption keys, so tokens must be enabled for it to work). After adding this setting, data can be written normally.

20/10/28 21:29:12 INFO hdfs.DFSClient: Exception in createBlockOutputStream

java.io.IOException: Connection reset by peer

at sun.nio.ch.FileDispatcherImpl.read0(Native Method)

at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)

at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)

at sun.nio.ch.IOUtil.read(IOUtil.java:197)

at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)

at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)

at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)

at java.io.FilterInputStream.read(FilterInputStream.java:83)

at java.io.FilterInputStream.read(FilterInputStream.java:83)

at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1480)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1400)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)

20/10/28 21:29:12 INFO hdfs.DFSClient: Abandoning BP-304062021-127.0.0.1-1603891241969:blk_1073741825_1001

20/10/28 21:29:12 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.1.17:50010,DS-d431ec75-6a89-42b2-b782-7a2d224873d8,DISK]

20/10/28 21:29:12 INFO hdfs.DFSClient: Exception in createBlockOutputStream

java.io.IOException: Connection reset by peer

at sun.nio.ch.FileDispatcherImpl.read0(Native Method)

at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)

at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)

at sun.nio.ch.IOUtil.read(IOUtil.java:197)

at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)

at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)

at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)

at java.io.FilterInputStream.read(FilterInputStream.java:83)

at java.io.FilterInputStream.read(FilterInputStream.java:83)

at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2292)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1480)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1400)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)

20/10/28 21:29:12 INFO hdfs.DFSClient: Abandoning BP-304062021-127.0.0.1-1603891241969:blk_1073741826_1002

20/10/28 21:29:12 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.1.18:50010,DS-168fefa8-5ff0-4fcb-82be-ecd9f81c5861,DISK]

20/10/28 21:29:12 WARN hdfs.DFSClient: DataStreamer Exception

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/linzw/anaconda-ks.cfg._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.

at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1620)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3135)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3059)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:422)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)

at org.apache.hadoop.ipc.Client.call(Client.java:1413)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)

at com.sun.proxy.$Proxy10.addBlock(Unknown Source)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

at com.sun.proxy.$Proxy11.addBlock(Unknown Source)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1603)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1388)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)

put: File /user/linzw/anaconda-ks.cfg._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
