5. Accessing files via the HDFS API

The client throws: org.apache.hadoop.security.AccessControlException: Permission denied: user=zhangsan, access=WRITE, inode="/input/fd.txt":root:supergroup:drwxr-xr-x
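One common workaround, which the configuration at the end of this post also uses, is to run the client as the owner of the target directory. A minimal sketch — "root" here matches the inode owner shown in the exception message:

```java
// Run the HDFS client as "root", the owner of /input in the message above.
// This must be set before the Configuration / FileSystem is created.
System.setProperty("HADOOP_USER_NAME", "root");
```

Alternatively, the directory's permissions can be relaxed on the cluster side (e.g. with `hdfs dfs -chmod`) so the actual client user may write to it.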

In my case, the key to the problem was the following error message.

There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

It means that your HDFS client could not connect to your datanode on port 50010. Since you connected to the HDFS namenode, you could get the datanode's status, but your HDFS client still failed to connect to the datanode itself.

(In HDFS, a namenode manages the file directory tree and the datanodes. When an HDFS client connects to a namenode, the namenode resolves the target file path and returns the addresses of the datanodes that hold the data. The client then communicates with those datanodes directly. You can check those datanode addresses with netstat, because the client will try to reach the datanodes at exactly the addresses the namenode reported.)
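To check whether the client can actually reach a datanode's transfer port, a quick TCP probe helps. A sketch, assuming the default 2.x datanode transfer port 50010 — the hostname passed in is a placeholder for your own datanode address:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Probe whether a TCP connection to the datanode's transfer port succeeds.
public class DatanodePortCheck {
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // refused, timed out, or unresolvable host
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "datanode-host"; // placeholder
        System.out.println(canConnect(host, 50010, 3000)
                ? "port reachable" : "port unreachable");
    }
}
```

If the probe fails while the namenode connection works, the problem is network-level (firewall, private IPs, missing hostname mapping), not HDFS itself.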

I solved the problem by:

opening port 50010 in the firewall,

adding the property "dfs.client.use.datanode.hostname" = "true",

adding the datanode's hostname to the hosts file on my client PC.
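The third fix means mapping the datanode's hostname to an address the client can actually reach. A hypothetical hosts-file entry — both the IP and the hostname below are placeholders for your own setup:

```
# /etc/hosts on the client machine
# (on Windows: C:\Windows\System32\drivers\etc\hosts)
203.0.113.10   datanode-1
```

Combined with dfs.client.use.datanode.hostname=true, the client then resolves the datanode by this name instead of using the cluster-internal IP reported by the namenode.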

The VPC must also be configured correctly.

Finally:

// ConfConst is this project's constants class for fs.defaultFS and dfs.replication
Configuration conf = new Configuration();
conf.set("fs.defaultFS", ConfConst.fs_defaultFS);
conf.set("dfs.replication", ConfConst.dfs_replication);
// make the client connect to datanodes by hostname, not by internal IP
conf.set("dfs.client.use.datanode.hostname", "true");
// run as "root", the owner of /input, so the write is permitted
System.setProperty("HADOOP_USER_NAME", "root");
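Putting it together, a write that previously triggered the AccessControlException might look like the following sketch. The namenode URI and the file paths are placeholders, substituted here for the ConfConst constants above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: "hdfs://namenode:9000" and the paths are placeholders.
public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        // must run before FileSystem.get() picks up the user identity
        System.setProperty("HADOOP_USER_NAME", "root");
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000");
        conf.set("dfs.client.use.datanode.hostname", "true");
        try (FileSystem fs = FileSystem.get(conf)) {
            // copy a local file into HDFS; this is the call that fails with
            // AccessControlException if the effective user may not write to /input
            fs.copyFromLocalFile(new Path("fd.txt"), new Path("/input/fd.txt"));
        }
    }
}
```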

