
Hadoop HDFS Shell command rollup

Posted on 7/5/2019 3:29:44 PM
FS Shell

The File System (FS) Shell is invoked in the form bin/hadoop fs <args>. All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme from the configuration is used. An HDFS file or directory such as /parent/child can be written as hdfs://namenode:namenodeport/parent/child, or simply as /parent/child (assuming your configuration's default is namenode:namenodeport). Most FS Shell commands behave like their Unix counterparts; differences are noted in the sections below. Error information goes to stderr; all other output goes to stdout.



cat
How to use: hadoop fs -cat URI [URI ...]

Output the contents of the specified files to stdout.

Example:

hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
hadoop fs -cat file:///file3 /user/hadoop/file4
Return value:
Successfully returns 0, fails to return -1.

chgrp
How to use: hadoop fs -chgrp [-R] GROUP URI [URI ...]

Change the group to which the file belongs. Using -R will make the change recursively under the directory structure. The user of the command must be the owner of the file or a superuser. For more information, see the HDFS Permissions User Guide.
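A quick sketch of both forms (the group name and paths here are hypothetical):

```shell
# Assign the group "hadoopusers" to a single file
hadoop fs -chgrp hadoopusers /user/hadoop/file1

# Recursively change the group of everything under a directory
hadoop fs -chgrp -R hadoopusers /user/hadoop/dir1
```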

chmod
How to use: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]

Change the permissions of a file. Using -R will make the change recursively under the directory structure. The user of the command must be the owner of the file or a superuser. For more information, see the HDFS Permissions User Guide.
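Both octal and symbolic modes are accepted, as the usage line above shows. A sketch with hypothetical paths:

```shell
# Octal mode: read/write for the owner, read-only for group and others
hadoop fs -chmod 644 /user/hadoop/file1

# Symbolic mode, applied recursively: add execute permission for the owner
hadoop fs -chmod -R u+x /user/hadoop/dir1
```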

chown
How to use: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]

Change the owner of the file. Using -R will make the change recursively under the directory structure. The user of the command must be a superuser. For more information, see the HDFS Permissions User Guide.
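A sketch of the owner-only and owner:group forms (user, group, and paths are hypothetical):

```shell
# Change only the owner of a file
hadoop fs -chown alice /user/hadoop/file1

# Change owner and group recursively for a directory tree
hadoop fs -chown -R alice:hadoopusers /user/hadoop/dir1
```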

copyFromLocal
How to use: hadoop fs -copyFromLocal <localsrc> URI

Similar to the put command, except that the source path must refer to a local file.
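For example (local and HDFS paths are hypothetical):

```shell
# Copy a local file into HDFS
hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile
```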

copyToLocal
How to use: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>

Similar to the get command, except that the target path is a local file.
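For example (paths are hypothetical):

```shell
# Copy a file out of HDFS into the local file system
hadoop fs -copyToLocal /user/hadoop/hadoopfile localfile
```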

cp
How to use: hadoop fs -cp URI [URI ...] <dest>

Copy files from the source path to the destination path. This command allows for multiple source paths, in which case the destination path must be a directory.
Example:

hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Return value:

Successfully returns 0, fails to return -1.

du
How to use: hadoop fs -du URI [URI ...]

Displays the sizes of the files contained in a directory, or the size of a single file when a file is specified.
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1
Return value:
Successfully returns 0, fails to return -1.

dus
How to use: hadoop fs -dus <args>

Displays a summary of file sizes.
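For example, to show the total size of a directory rather than a per-file listing (path hypothetical):

```shell
hadoop fs -dus /user/hadoop/dir1
```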

expunge

How to use: hadoop fs -expunge

Empty the recycle bin. Please refer to the HDFS design documentation for more information on the characteristics of the recycle bin.
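The command takes no arguments:

```shell
# Permanently remove everything currently in the HDFS trash
hadoop fs -expunge
```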

get

How to use: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to your local file system. You can use the -ignorecrc option to copy files that failed CRC verification. Use the -crc option to copy the file along with the CRC information.

Example:

hadoop fs -get /user/hadoop/file localfile
hadoop fs -get hdfs://host:port/user/hadoop/file localfile
Return value:

Successfully returns 0, fails to return -1.

getmerge
How to use: hadoop fs -getmerge <src> <localdst> [addnl]

Takes a source directory and a destination file as input, and concatenates all the files in the source directory into the local destination file. addnl is optional and specifies that a newline be appended after each file.
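For example (directory and output file names are hypothetical):

```shell
# Merge every file under /user/hadoop/dir1 into one local file,
# appending a newline after each file's content (addnl)
hadoop fs -getmerge /user/hadoop/dir1 ./merged.txt addnl
```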

ls
How to use: hadoop fs -ls <args>

If it is a file, file information is returned in the following format:
filename <number of replicas> filesize modification_date modification_time permissions userid groupid
If it is a directory, it returns a list of its immediate children, as in Unix. A directory is listed as:
dirname <dir> modification_date modification_time permissions userid groupid
Example:
hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile
Return value:
Successfully returns 0, fails to return -1.

lsr

How to use: hadoop fs -lsr <args>
The recursive version of the ls command. Similar to ls -R in Unix.
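For example, to list an entire directory tree (path hypothetical):

```shell
hadoop fs -lsr /user/hadoop
```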

mkdir
How to use: hadoop fs -mkdir <paths>
Accept the URI specified by the path as a parameter to create these directories. It behaves like Unix's mkdir -p, which creates parent directories at all levels in the path.

Example:

hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir
Return value:

Successfully returns 0, fails to return -1.

moveFromLocal

How to use: hadoop fs -moveFromLocal <src> <dst>

Outputs a "not implemented" message.

mv
How to use: hadoop fs -mv URI [URI ...] <dest>

Move files from the source path to the destination path. This command allows for multiple source paths, in which case the destination path must be a directory. Moving files between different file systems is not allowed.
Example:

hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1
Return value:

Successfully returns 0, fails to return -1.

put
How to use: hadoop fs -put <localsrc> ... <dst>

Copy one or more source paths from the local file system to the destination file system. Reading input from stdin and writing it to the destination file system is also supported.
hadoop fs -put localfile /user/hadoop/hadoopfile
hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile
hadoop fs -put - hdfs://host:port/hadoop/hadoopfile
Read input from standard input.
Return value:

Successfully returns 0, fails to return -1.

rm
How to use: hadoop fs -rm URI [URI ...]

Delete the specified files. Only files and empty directories are deleted; refer to the rmr command for recursive deletion.
Example:

hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir
Return value:

Successfully returns 0, fails to return -1.

rmr
How to use: hadoop fs -rmr URI [URI ...]

The recursive version of delete: removes the specified directories together with all of their contents.
Example:

hadoop fs -rmr /user/hadoop/dir
hadoop fs -rmr hdfs://host:port/user/hadoop/dir
Return value:

Successfully returns 0, fails to return -1.

setrep
How to use: hadoop fs -setrep [-R] <path>

Change the replication factor of a file. The -R option recursively changes the replication factor of all files in a directory.

Example:

hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Return value:

Successfully returns 0, fails to return -1.

stat
How to use: hadoop fs -stat URI [URI ...]

Returns statistics for the specified path.

Example:

hadoop fs -stat path
Return value:
Successfully returns 0, fails to return -1.

tail
How to use: hadoop fs -tail [-f] URI

Output the last 1KB of the file to stdout. The -f option is supported and behaves as in Unix.

Example:

hadoop fs -tail pathname
Return value:
Successfully returns 0, fails to return -1.

test
How to use: hadoop fs -test -[ezd] URI

Options:
-e Check if the file exists. Returns 0 if present.
-z Check if the file is 0 bytes. If yes, returns 0.
-d Returns 1 if the path is a directory, otherwise 0.
Example:

hadoop fs -test -e filename
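Because -test reports its result through the command's exit status, it is typically combined with a shell conditional. A sketch (paths hypothetical):

```shell
# -test sets the shell exit status; check it with $?
hadoop fs -test -e /user/hadoop/file1
if [ $? -eq 0 ]; then
  echo "file exists"
fi

# -z succeeds (exit 0) when the file is zero-length
hadoop fs -test -z /user/hadoop/file1
```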

text
How to use: hadoop fs -text <src>
Output the source file as text format. Allowed formats are zip and TextRecordInputStream.
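For example, to print a compressed file as readable text (file name hypothetical):

```shell
hadoop fs -text /user/hadoop/data.zip
```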

touchz

How to use: hadoop fs -touchz URI [URI ...]
Create an empty file with 0 bytes.

Example:

hadoop fs -touchz pathname
Return value:
Successfully returns 0, fails to return -1.
