
HDFS Java Client: Creating, Reading, Updating, and Deleting HDFS Files


Step 1: Add the dependencies to pom.xml

```xml
...
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.2.0</version>
    <exclusions>
        <exclusion>
            <artifactId>jdk.tools</artifactId>
            <groupId>jdk.tools</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.2.0</version>
</dependency>
...
```

Step 2: Copy the configuration files 'hdfs-site.xml' and 'core-site.xml' into the project's classpath.

Step 3: Test code
```java
package cjkjcn.demo.hadoop.hdfs;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.LinkedList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * @author junhong
 *
 * 2017-05-18
 */
public class HDFSDao {

    private static Configuration conf = new Configuration();
    private FileSystem hdfs;
    final static String ROOT_PATH = "/user";

    public HDFSDao() {
        conf.addResource("hdfs-site.xml");
        conf.addResource("core-site.xml");
        try {
            hdfs = FileSystem.get(conf); // initialize the HDFS client
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("param size=" + conf.size());
    }

    /**
     * Check whether the test path exists.
     */
    public void scanFiles() {
        try {
            Path path = new Path(ROOT_PATH);
            System.out.println(hdfs.exists(path));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Read a text file line by line.
     * @param file path of the file to read
     * @return the lines of the file
     */
    public List<String> lines(String file) {
        List<String> list = new LinkedList<>();
        Path f = new Path(file);
        // try-with-resources closes the stream, which the original version forgot to do
        try (FSDataInputStream input = hdfs.open(f);
             BufferedReader reader = new BufferedReader(new InputStreamReader(input))) {
            String line;
            while ((line = reader.readLine()) != null) {
                list.add(line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return list;
    }
}
```
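Step 2 copies 'hdfs-site.xml' and 'core-site.xml' onto the classpath so that `FileSystem.get(conf)` can resolve the `hdfs://` scheme and find the NameNode. As a rough sketch (not from the original post, and the hostname/port are placeholders for your own cluster), a minimal core-site.xml contains little more than the default filesystem URI:

```xml
<configuration>
    <!-- Address of the NameNode; replace namenode-host:8020 with your cluster's values -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode-host:8020</value>
    </property>
</configuration>
```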


Note:

1) If the following dependency is missing,

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.2.0</version>
</dependency>
```

you will get the following error:

java.io.IOException: No FileSystem for scheme: hdfs

2) When testing methods that write files or create directories, you may run into permission problems such as "Permission denied for test".
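The permission warning in 2) applies to write-style operations, which the HDFSDao above does not yet implement. A hedged sketch of what such helpers could look like (the method names `writeFile` and `makeDir` are my own, not from the original post; `FileSystem.create` and `FileSystem.mkdirs` are standard Hadoop FileSystem API calls, and `org.apache.hadoop.fs.FSDataOutputStream` must be imported; this requires a live cluster plus the dependencies above, so it is not standalone-runnable):

```java
/**
 * Hypothetical write helpers for HDFSDao; assumes the hdfs field from the class above.
 */
public void writeFile(String file, String content) {
    Path f = new Path(file);
    try (FSDataOutputStream out = hdfs.create(f, true)) { // true = overwrite if it exists
        out.write(content.getBytes("UTF-8"));
    } catch (IOException e) {
        // an AccessControlException ("Permission denied") can surface here
        e.printStackTrace();
    }
}

public boolean makeDir(String dir) {
    try {
        return hdfs.mkdirs(new Path(dir)); // needs write permission on the parent directory
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
}
```

If these fail with "Permission denied", either run the client as the owning HDFS user (for example by setting the HADOOP_USER_NAME environment variable) or relax the permissions on the target directory with `hdfs dfs -chmod`.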
