
Hive data processing and HDFS file operations

A note before we start:

My original plan was to use Hive to call a Python script and run some statistics on the MovieLens data, but the final step of invoking the script failed and I could not track down the cause. So instead I am writing up the experience from the rest of the process in detail; it should be quite useful for newcomers.

The script-invocation statement and the resulting error are included below. The problem is most likely in the script itself; I will update this post once I find the cause (or someone points it out to me).

Once again, the examples use Hive and the MovieLens data set.

1. First, open the Hive CLI and create the base table (mine is called JJW); a sketch of the DDL is given below.
(screenshot)
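The base-table DDL itself only appears in the screenshot. A minimal sketch of what it likely looks like, inferred from the column names used later (userid, movieid, ratingid, unixtime) and the tab-separated layout of u.data; the exact types are an assumption:

CREATE TABLE JJW (
  userid INT,
  movieid INT,
  ratingid INT,
  unixtime STRING)   -- kept as STRING, matching how it appears in the error log below
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';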
2. In your local Linux working directory, download the data set and unpack it. My directory is /opt/jskp/jinjiwei:

wget http://files.grouplens.org/datasets/movielens/ml-100k.zip

(screenshots)
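The unpacking itself is only shown in the screenshots; it is presumably just a standard unzip, roughly:

cd /opt/jskp/jinjiwei
unzip ml-100k.zip    # creates the ml-100k/ directory that contains u.data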
3. Create your own working directory on HDFS with hdfs dfs -mkdir followed by the directory name (mine is JJW), as sketched below.
(screenshot)
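A minimal version of that command, assuming the directory sits directly under the HDFS root (which matches the /JJW path used in step 4):

hdfs dfs -mkdir /JJW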
4. Upload the locally unpacked files to HDFS:

hdfs dfs -put /opt/jskp/jinjiwei/ml-100k /JJW    (/JJW is the target HDFS directory)

Check the upload result on HDFS:
(screenshot)
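The check in the screenshot is presumably just a listing of the uploaded directory, along these lines:

hdfs dfs -ls /JJW/ml-100k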
5. Load the u.data file from the ml-100k directory into the base table JJW created earlier:
(screenshot)
As you can see, my first attempt used the local path and failed; the second attempt used the HDFS path and succeeded (a reconstruction of the working statement is sketched below). Verifying the load result:
(screenshot)
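The statement that eventually worked is only visible in the screenshot; reconstructed from the description above, it was presumably something like this (INPATH without LOCAL reads from HDFS and moves the file into the table's warehouse directory):

LOAD DATA INPATH '/JJW/ml-100k/u.data' INTO TABLE JJW;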
Some simple statistics can then be run directly in Hive, for example:
(screenshot)
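The actual query is only in the screenshot; one plausible example of such a quick check, counting how many rows each rating value has (column names taken from the base table):

SELECT ratingid, COUNT(*) AS cnt
FROM JJW
GROUP BY ratingid;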
6. Create the child table JJW_new, which will receive the data from the base table JJW (since calling the Python script did not work for me, I simply import the data directly in step 9):

CREATE TABLE JJW_new (
  userid INT,
  movieid INT,
  rating INT,
  weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

(screenshot)
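A quick way to confirm the table came out as intended (a standard check, not shown in the original screenshots):

DESCRIBE JJW_new;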
7. Write the Python script; all it does is convert the unix timestamp into a weekday number:

import sys
import datetime

# Hive streams each input row to stdin as a tab-separated line.
for line in sys.stdin:
  line = line.strip()
  userid, movieid, rating, unixtime = line.split('\t')
  # Convert the unix timestamp to an ISO weekday number (1 = Monday .. 7 = Sunday).
  weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
  # Emit the row back to Hive, again tab-separated.
  print '\t'.join([userid, movieid, rating, str(weekday)])

8. Add the local Python script as shown below; the path is the local absolute path:
(screenshot)
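The statement in the screenshot is presumably an ADD FILE along these lines; the directory is the working directory from step 2 and the file name matches the USING clause in step 10, both of which are assumptions:

ADD FILE /opt/jskp/jinjiwei/weekday_mapper.py;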
9. In the end, the approach that does not call the Python script (this just copies the raw unixtime value into the weekday column):

INSERT OVERWRITE TABLE JJW_new
SELECT
  userid, movieid, ratingid, unixtime
FROM JJW;

Verification:
(screenshot)
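As an aside, if only the weekday conversion is needed it can also be done with Hive built-ins instead of an external script. A sketch, assuming the 'u' (ISO day-of-week) pattern is supported by the runtime's SimpleDateFormat, which requires Java 7 or newer:

INSERT OVERWRITE TABLE JJW_new
SELECT
  userid, movieid, ratingid,
  CAST(from_unixtime(CAST(unixtime AS BIGINT), 'u') AS INT) AS weekday
FROM JJW;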
10. The approach that does call the script:

INSERT OVERWRITE TABLE JJW_new
SELECT
  TRANSFORM (userid, movieid, ratingid, unixtime)
  USING 'python weekday_mapper.py'
  AS (userid, movieid, rating, weekday)
FROM JJW;

The error it produces:

hive> INSERT OVERWRITE TABLE JJW_new
    > SELECT
    >   TRANSFORM (userid, movieid, ratingid, unixtime)
    >   USING 'python tansform.py'
    >   AS (userid, movieid, rating, weekday) 
    > FROM JJW;
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1526968712310_2578, Tracking URL = http://hm:8088/proxy/application_1526968712310_2578/
Kill Command = /opt/software/hadoop/hadoop-2.6.4/bin/hadoop job  -kill job_1526968712310_2578
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2018-06-28 13:00:12,907 Stage-1 map = 0%,  reduce = 0%
2018-06-28 13:00:42,417 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_1526968712310_2578 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1526968712310_2578_m_000000 (and more) from job job_1526968712310_2578

Task with the most failures(4): 
-----
Task ID:
  task_1526968712310_2578_m_000000

URL:
  http://hm:8088/taskdetails.jsp?jobid=job_1526968712310_2578&tipid=task_1526968712310_2578_m_000000
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"userid":47,"movieid":324,"ratingid":3,"unixtime":"879439078"}
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:195)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"userid":47,"movieid":324,"ratingid":3,"unixtime":"879439078"}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:550)
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
    ... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20001]: An error occurred while reading or writing to your custom script. It may have crashed with an error.
    at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:410)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
    ... 9 more
Caused by: java.io.IOException: Stream closed
    at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:433)
    at java.io.OutputStream.write(OutputStream.java:116)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.hive.ql.exec.TextRecordWriter.write(TextRecordWriter.java:53)
    at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:378)
    ... 15 more


FAILED: Execution Error, return code 20001 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. An error occurred while reading or writing to your custom script. It may have crashed with an error.
MapReduce Jobs Launched: 
Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
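The "Stream closed" message generally means the child process exited before Hive finished feeding it rows, so the underlying error is in the script itself. One way to narrow it down is to run the script by hand against a few raw rows; if it crashes on plain input, the problem is unrelated to Hive (the paths assume the local ml-100k directory from step 2):

cd /opt/jskp/jinjiwei
head -5 ml-100k/u.data | python weekday_mapper.py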

11. A summary of the commands used:

Load local data into a Hive table (OVERWRITE replaces the existing table contents):

LOAD DATA LOCAL INPATH '/opt/jskp/jinjiwei/ml-100k/u.data' OVERWRITE INTO TABLE jjw;

Load data into a Hive table without overwriting (no OVERWRITE keyword, so rows are appended; note that INPATH without LOCAL treats the path as an HDFS path):

LOAD DATA INPATH '/opt/jskp/jinjiwei/ml-100k/u.data' INTO TABLE testkv;

Upload local data to HDFS:

hdfs dfs -put /opt/jskp/jinjiwei/ml-100k.zip /JJW

Modify a file stored on HDFS:

Fetch it:
hdfs dfs -get JJW/transform.py
Edit it:
vi transform.py
Put it back (the -f flag overwrites the existing file):
hdfs dfs -put -f test.txt yourHdfsPath/test.txt

In general hadoop dfs and hdfs dfs are interchangeable, and the arguments that follow are mostly the familiar Linux-style commands, e.g. hdfs dfs -ls, hdfs dfs -mkdir, and so on.
Hive can also interact with Linux: inside the Hive CLI you can run shell commands by prefixing them with !, e.g. !ls, !find, etc.
Some other commands can be found here (not written by me): common HDFS file operation commands.