
Hadoop distcp fails with a Permission denied error

The Hadoop distcp command copies files from one HDFS filesystem to another, for example:

$ bin/hadoop distcp -overwrite hdfs://123.123.23.111:9000/hsd/t_url hdfs://123.123.23.156:9000/data/t_url
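
Besides -overwrite, distcp has a few other commonly used options: -update copies only files that are missing or changed on the target, and -m limits the number of concurrent map tasks. A variant of the command above might look like this (shown only for reference; the rest of this post uses the -overwrite form):

$ bin/hadoop distcp -update -m 10 hdfs://123.123.23.111:9000/hsd/t_url hdfs://123.123.23.156:9000/data/t_url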
Under normal circumstances the output looks something like this:
Java HotSpot(TM) 64-Bit Server VM warning: Insufficient space for shared memory file:
   /tmp/hsperfdata_hugetable/16744
Try using the -Djava.io.tmpdir= option to select an alternate temp location.
15/04/29 20:35:07 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[hdfs://192.168.34.135:9000/zyx/t_url], targetPath=hdfs://192.168.34.156:9000/data/t_url, targetPathExists=false, preserveRawXattrs=false}
15/04/29 20:35:07 INFO client.RMProxy: Connecting to ResourceManager at compute-23-06.local/192.168.34.135:8032
15/04/29 20:35:08 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
15/04/29 20:35:08 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
15/04/29 20:35:08 WARN conf.Configuration: bad conf file: element not <property>
15/04/29 20:35:08 WARN conf.Configuration: bad conf file: element not <property>
15/04/29 20:35:08 INFO client.RMProxy: Connecting to ResourceManager at compute-23-06.local/192.168.34.135:8032
15/04/29 20:35:09 INFO mapreduce.JobSubmitter: number of splits:21
15/04/29 20:35:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429262156603_0032
15/04/29 20:35:10 INFO impl.YarnClientImpl: Submitted application application_1429262156603_0032
15/04/29 20:35:10 INFO mapreduce.Job: The url to track the job: http://compute-23-06.local:8088/proxy/application_1429262156603_0032/
15/04/29 20:35:10 INFO tools.DistCp: DistCp job-id: job_1429262156603_0032
15/04/29 20:35:10 INFO mapreduce.Job: Running job: job_1429262156603_0032
15/04/29 20:35:21 INFO mapreduce.Job: Job job_1429262156603_0032 running in uber mode : false
15/04/29 20:35:21 INFO mapreduce.Job:  map 0% reduce 0%
15/04/29 20:35:32 INFO mapreduce.Job:  map 10% reduce 0%
15/04/29 20:35:33 INFO mapreduce.Job:  map 18% reduce 0%
15/04/29 20:35:34 INFO mapreduce.Job:  map 25% reduce 0%
……

In my case, however, the job failed with a Permission denied error:

Error: java.io.IOException: File copy failed: hdfs://192.168.34.135:9000/zyx/t_url/000031_0 --> hdfs://192.168.34.156:9000/data/000031_0
	at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:284)
	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:252)
	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://192.168.34.135:9000/zyx/t_url/000031_0 to hdfs://192.168.34.156:9000/data/000031_0
	at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
	at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:280)
	... 10 more
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hugetable, access=WRITE, inode="/data":root:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5584)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5566)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5540)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2282)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2235)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2188)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:505)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
The stack trace makes the cause obvious: insufficient permissions. The target directory inode is "/data":root:supergroup:drwxr-xr-x, and the job is running as user hugetable, who is neither the owner (root) nor a member of the owning group (supergroup). The last three characters of drwxr-xr-x are the permissions for all other users, and they only grant r (read) and x (execute), not w (write). You can confirm this with hadoop fs -ls /: the user simply has no write permission on the target HDFS directory.
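
For example, listing the root of the destination cluster shows the mode, owner, and group of /data (the output below is illustrative, reconstructed from the error message):

$ hadoop fs -ls /
drwxr-xr-x   - root supergroup          0 2015-04-29 19:00 /data

Because distcp is submitted as hugetable, any attempt to create files under /data is rejected by the NameNode's permission check.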


You can fix this by changing the permissions with hadoop fs -chmod:

$ hadoop fs -chmod 777 /lgh
$ hadoop fs -chmod 777 /data
These commands grant mode 777 (read, write, and execute for the owner, the group, and everyone else) on the HDFS directories /lgh and /data. After they finish, run hadoop fs -ls / again and you can see the new permissions have been applied.
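
Granting 777 is the quickest fix, but it opens the directory to every user. A narrower alternative, if you have superuser access on the destination cluster, is to hand ownership of the target directory to the user that submits the distcp job instead, for example:

$ hadoop fs -chown -R hugetable:supergroup /data

The username hugetable here is taken from the error message above; substitute whatever user actually runs the distcp job.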