
Java: granting permissions on an HDFS file and all of its parent directories

Tags: Java, Scala, Hadoop, HDFS, Hadoop2

I have the following files in HDFS 2:

/a
  /b
    /c
      /f1.txt
      /f2.txt
I want to change the permissions of f1.txt and f2.txt to 644, e.g.:

hadoop fs -chmod 644 /a/b/c/*.txt

However, to actually grant access to these files, I also need to change the permissions of /b and /c to 755 (+x) so that the directories containing them can be traversed. Note: I don't own /a, and it is already world-readable.

Is there a single hadoop fs command that lets me do this? What about Java/Scala code?

You can do this with ACLs:

Give a user read, write, and execute access:

hdfs dfs -setfacl -m -R user:UserName:rwx /a/b/c/f1.txt
To view the permissions on a file, use getfacl:

hdfs dfs -getfacl -R hdfs://somehost:8020/a/b/c/f1.txt

setfacl

Usage: hdfs dfs -setfacl [-R] [-b | -k | -m | -x <acl_spec> <path>] | [--set <acl_spec> <path>]

Sets Access Control Lists (ACLs) of files and directories.

Options:

-b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
-k: Remove the default ACL.
-R: Apply operations to all files and directories recursively.
-m: Modify ACL. New entries are added to the ACL, and existing entries are retained.
-x: Remove specified ACL entries. Other ACL entries are retained.
--set: Fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.
acl_spec: Comma separated list of ACL entries.
path: File or directory to modify.
Examples:

hdfs dfs -setfacl -m user:hadoop:rw- /file
hdfs dfs -setfacl -x user:hadoop /file
hdfs dfs -setfacl -b /file
hdfs dfs -setfacl -k /dir
hdfs dfs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
hdfs dfs -setfacl -R -m user:hadoop:r-x /dir
hdfs dfs -setfacl -m default:user:hadoop:r-x /dir
Exit Code:

Returns 0 on success and non-zero on error.
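The question also asks about Java/Scala. The same ACL change can be made programmatically; below is a minimal Java sketch (an illustration, not part of the original answer) using the FileSystem ACL methods available since Hadoop 2.4. UserName is the placeholder user from the command above, and fs.defaultFS is assumed to be configured for the cluster (e.g. hdfs://somehost:8020).

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class GrantAclExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // Equivalent of: hdfs dfs -setfacl -m user:UserName:rwx /a/b/c/f1.txt
            AclEntry entry = new AclEntry.Builder()
                    .setScope(AclEntryScope.ACCESS)
                    .setType(AclEntryType.USER)
                    .setName("UserName")          // placeholder user, as in the command above
                    .setPermission(FsAction.ALL)  // rwx
                    .build();
            fs.modifyAclEntries(new Path("/a/b/c/f1.txt"), Collections.singletonList(entry));

            // Equivalent of: hdfs dfs -getfacl /a/b/c/f1.txt
            System.out.println(fs.getAclStatus(new Path("/a/b/c/f1.txt")));
        }
    }
}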
Use the -R (recursive) option. It grants permissions to all the files present in the directory:

hadoop fs -chmod -R 755 /a/b/c/
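For the Java/Scala side of the question, here is a minimal Java sketch of the equivalent chmod calls through the FileSystem API (an illustration, not part of the original answer). It assumes the layout from the question, setting 644 on the text files and 755 on /a/b and /a/b/c; /a is skipped because the question says it is already world-readable.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ChmodExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // 644 on the text files (equivalent of: hadoop fs -chmod 644 /a/b/c/*.txt)
            FileStatus[] matches = fs.globStatus(new Path("/a/b/c/*.txt"));
            if (matches != null) {
                for (FileStatus status : matches) {
                    fs.setPermission(status.getPath(), new FsPermission((short) 0644));
                }
            }
            // 755 (+x) on the enclosing directories so they can be traversed
            fs.setPermission(new Path("/a/b"), new FsPermission((short) 0755));
            fs.setPermission(new Path("/a/b/c"), new FsPermission((short) 0755));
        }
    }
}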

This only works if ACLs are enabled in HDFS: dfs.namenode.acls.enabled defaults to false.