

[Source] Master the Java API interface access of HDFS

Posted on 7/5/2019 1:35:05 PM
HDFS is designed primarily to store massive amounts of data, that is, very large numbers of files, including files that are terabytes in size. HDFS provides two interfaces for operating on its files: the shell interface and the Java API interface. Which DataNode each block is placed on is transparent to the developer.

1. Get the file system
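
A minimal sketch of obtaining a FileSystem handle with the Hadoop Java API; the NameNode URI (hdfs://localhost:9000) and the user name (hadoop) are placeholders and should be replaced with your own cluster settings.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class GetFileSystem {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the NameNode as the given HDFS user (URI and user are placeholders).
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf, "hadoop")) {
            System.out.println("Connected to: " + fs.getUri());
        }
    }
}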

2. Create a directory
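
A minimal sketch of creating a directory, reusing the same placeholder connection settings as in step 1; mkdirs() also creates any missing parent directories.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            // Create /demo/input (and any missing parents); returns true on success.
            boolean created = fs.mkdirs(new Path("/demo/input"));
            System.out.println("created: " + created);
        }
    }
}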


3. Delete a file or directory
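
A minimal sketch of deleting a path; the second argument of delete() enables recursive deletion, which is required for non-empty directories. The path is a placeholder.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            // true = delete recursively; required if the path is a non-empty directory.
            boolean deleted = fs.delete(new Path("/demo/input"), true);
            System.out.println("deleted: " + deleted);
        }
    }
}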


4. List the files in a directory using a filter
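
A minimal sketch of listing a directory through a PathFilter; the filter here keeps only .txt files, which is just an illustrative choice, and the directory path is a placeholder.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListWithFilterDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            // PathFilter has a single accept(Path) method, so a lambda can be used.
            FileStatus[] statuses = fs.listStatus(new Path("/demo"),
                    path -> path.getName().endsWith(".txt"));
            for (FileStatus status : statuses) {
                System.out.println(status.getPath());
            }
        }
    }
}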


5. Upload a file to HDFS
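
A minimal sketch of uploading a local file with copyFromLocalFile(); both the local and HDFS paths are placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            // Copy a local file into HDFS; the local source is left in place.
            fs.copyFromLocalFile(new Path("/tmp/data.txt"), new Path("/demo/data.txt"));
        }
    }
}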

6. Download a file from HDFS
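
A minimal sketch of downloading a file with copyToLocalFile(); the paths are placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DownloadDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            // delSrc=false keeps the HDFS copy; useRawLocalFileSystem=true skips writing a local .crc file.
            fs.copyToLocalFile(false, new Path("/demo/data.txt"), new Path("/tmp/data-copy.txt"), true);
        }
    }
}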


7. Get the DataNode information of the HDFS cluster
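
A minimal sketch of listing the cluster's DataNodes, assuming the FileSystem is backed by HDFS and can therefore be cast to DistributedFileSystem; the connection settings remain placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ClusterNodesDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            // getDataNodeStats() is specific to DistributedFileSystem.
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            for (DatanodeInfo node : dfs.getDataNodeStats()) {
                System.out.println(node.getHostName());
            }
        }
    }
}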


8. Find the location of a file in the HDFS cluster
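
A minimal sketch of querying the block locations of a file; each BlockLocation reports the offset, length, and the hosts holding the block's replicas. The file path is a placeholder.

import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            FileStatus status = fs.getFileStatus(new Path("/demo/data.txt"));
            // One BlockLocation per block over the requested byte range (here, the whole file).
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + Arrays.toString(block.getHosts()));
            }
        }
    }
}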


9. Rename a file
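
A minimal sketch of renaming a file with rename(), which also works for moving a file to another directory; the paths are placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            // Returns true if the rename succeeded.
            boolean renamed = fs.rename(new Path("/demo/data.txt"), new Path("/demo/data-renamed.txt"));
            System.out.println("renamed: " + renamed);
        }
    }
}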


10. Determine whether a directory exists
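
A minimal sketch of checking whether a directory exists; exists() matches files as well, so isDirectory() is checked too. The path is a placeholder.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirExistsDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration(), "hadoop")) {
            Path dir = new Path("/demo");
            // exists() covers both files and directories; isDirectory() narrows it to directories.
            boolean isDir = fs.exists(dir) && fs.getFileStatus(dir).isDirectory();
            System.out.println(dir + " is a directory: " + isDir);
        }
    }
}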


(End)




