
Hadoop: java.lang.OutOfMemoryError: Java heap space

       Recently, with the rise of big data, Hadoop, implemented in Java, has become the front-runner in the data field; HDFS, MapReduce, and Hive have all become buzzwords. Hadoop itself has spawned an entire big-data ecosystem around it, and it is also tied to cloud computing, which could hardly be any hotter right now.

       So, as a programmer, you have no choice but to keep learning new technologies to protect your livelihood, which is genuinely hard work. So I started working with Hadoop, and errors were inevitable.

       Following the guidance in "Hadoop Beginner's Guide" to set up the environment, I found that Hadoop now ships a .deb package, which saves a lot of extra work: just double-click it in the Ubuntu Software Center to install.

        After the installation finished, I tried running the first example program:

        hadoop jar hadoop-examples-1.2.1.jar pi 4 1. Unfortunately, the following error appeared:

user@host:/usr/share/hadoop$ sudo hadoop jar hadoop-examples-1.2.1.jar pi 4 1
Number of Maps  = 4
Samples per Map = 1
15/04/17 21:54:44 INFO util.NativeCodeLoader: Loaded the native-hadoop library
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Starting Job
15/04/17 21:54:44 INFO mapred.FileInputFormat: Total input paths to process : 4
15/04/17 21:54:44 INFO mapred.JobClient: Running job: job_local1032904958_0001
15/04/17 21:54:44 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/17 21:54:44 INFO mapred.LocalJobRunner: Starting task: attempt_local1032904958_0001_m_000000_0
15/04/17 21:54:44 INFO util.ProcessTree: setsid exited with exit code 0
15/04/17 21:54:44 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@...
15/04/17 21:54:44 INFO mapred.MapTask: Processing split: file:/usr/share/hadoop/PiEstimator_TMP_3_141592654/in/part2:0+118
15/04/17 21:54:44 INFO mapred.MapTask: numReduceTasks: 1
15/04/17 21:54:45 INFO mapred.MapTask: io.sort.mb = 100
15/04/17 21:54:45 INFO mapred.LocalJobRunner: Starting task: attempt_local1032904958_0001_m_000001_0
15/04/17 21:54:45 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@...
15/04/17 21:54:45 INFO mapred.MapTask: Processing split: file:/usr/share/hadoop/PiEstimator_TMP_3_141592654/in/part1:0+118
15/04/17 21:54:45 INFO mapred.MapTask: numReduceTasks: 1
15/04/17 21:54:45 INFO mapred.MapTask: io.sort.mb = 100
15/04/17 21:54:45 INFO mapred.LocalJobRunner: Starting task: attempt_local1032904958_0001_m_000002_0
15/04/17 21:54:45 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@...
15/04/17 21:54:45 INFO mapred.MapTask: Processing split: file:/usr/share/hadoop/PiEstimator_TMP_3_141592654/in/part0:0+118
15/04/17 21:54:45 INFO mapred.MapTask: numReduceTasks: 1
15/04/17 21:54:45 INFO mapred.MapTask: io.sort.mb = 100
15/04/17 21:54:45 INFO mapred.LocalJobRunner: Starting task: attempt_local1032904958_0001_m_000003_0
15/04/17 21:54:45 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@...
15/04/17 21:54:45 INFO mapred.MapTask: Processing split: file:/usr/share/hadoop/PiEstimator_TMP_3_141592654/in/part3:0+118
15/04/17 21:54:45 INFO mapred.MapTask: numReduceTasks: 1
15/04/17 21:54:45 INFO mapred.MapTask: io.sort.mb = 100
15/04/17 21:54:45 INFO mapred.LocalJobRunner: Map task executor complete.
15/04/17 21:54:45 WARN mapred.LocalJobRunner: job_local1032904958_0001
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.OutOfMemoryError: Java heap space
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:954)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:422)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:744)
15/04/17 21:54:45 INFO mapred.JobClient:  map 0% reduce 0%
15/04/17 21:54:45 INFO mapred.JobClient: Job complete: job_local1032904958_0001
15/04/17 21:54:45 INFO mapred.JobClient: Counters: 0
15/04/17 21:54:45 INFO mapred.JobClient: Job Failed: NA
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
	at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
	at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
user@host:/usr/share/hadoop$ sudo hadoop jar hadoop-examples-1.2.1.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
15/04/17 21:54:51 INFO util.NativeCodeLoader: Loaded the native-hadoop library
Wrote input for Map #0
Starting Job
15/04/17 21:54:51 INFO mapred.FileInputFormat: Total input paths to process : 1
15/04/17 21:54:51 INFO mapred.JobClient: Running job: job_local406287877_0001
15/04/17 21:54:52 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/17 21:54:52 INFO mapred.LocalJobRunner: Starting task: attempt_local406287877_0001_m_000000_0
15/04/17 21:54:52 INFO util.ProcessTree: setsid exited with exit code 0
15/04/17 21:54:52 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@...
15/04/17 21:54:52 INFO mapred.MapTask: Processing split: file:/usr/share/hadoop/PiEstimator_TMP_3_141592654/in/part0:0+118
15/04/17 21:54:52 INFO mapred.MapTask: numReduceTasks: 1
15/04/17 21:54:52 INFO mapred.MapTask: io.sort.mb = 100
15/04/17 21:54:52 INFO mapred.LocalJobRunner: Map task executor complete.
15/04/17 21:54:52 WARN mapred.LocalJobRunner: job_local406287877_0001
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.OutOfMemoryError: Java heap space
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:954)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:422)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:744)
15/04/17 21:54:52 INFO mapred.JobClient:  map 0% reduce 0%
15/04/17 21:54:52 INFO mapred.JobClient: Job complete: job_local406287877_0001
15/04/17 21:54:52 INFO mapred.JobClient: Counters: 0
15/04/17 21:54:52 INFO mapred.JobClient: Job Failed: NA
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
	at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
	at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:160)

        I consulted countless references with no luck. After much searching and debugging, I found that the key was that /etc/hadoop/hadoop-env.sh allocated too little memory, which caused the out-of-memory error. The modified file is pasted below:
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0   # modified
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=100   # modified
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options. Empty by default.
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true $HADOOP_CLIENT_OPTS"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT $HADOOP_NAMENODE_OPTS"
HADOOP_JOBTRACKER_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA $HADOOP_JOBTRACKER_OPTS"
HADOOP_TASKTRACKER_OPTS="-Dhadoop.security.logger=ERROR,console -Dmapred.audit.logger=ERROR,console $HADOOP_TASKTRACKER_OPTS"
HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,DRFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT $HADOOP_SECONDARYNAMENODE_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx200m $HADOOP_CLIENT_OPTS"   # modified
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=

# Where log files are stored. $HADOOP_HOME/logs by default.
export HADOOP_LOG_DIR=/var/log/hadoop/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=/var/log/hadoop/

# The directory where pid files are stored. /tmp by default.
export HADOOP_PID_DIR=/var/run/hadoop
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER

        After making these changes, run source hadoop-env.sh so that they take effect immediately.
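The numbers in the log above explain the failure: io.sort.mb = 100 means MapTask$MapOutputBuffer (the constructor in the stack trace) tries to allocate a roughly 100 MB sort buffer up front, so the client JVM needs headroom above that; the stock hadoop-env.sh typically ships -Xmx128m in HADOOP_CLIENT_OPTS, which is too tight, while -Xmx200m leaves room. A minimal Java sketch of that arithmetic (heapSufficient and its 50% headroom factor are illustrative assumptions, not Hadoop code):

```java
// Sketch: why -Xmx200m works while a stock -Xmx128m fails when
// io.sort.mb = 100.
public class HeapCheck {

    // Returns true when maxHeapBytes leaves enough room for a sort
    // buffer of sortMb megabytes plus ~50% headroom for the rest of
    // the task (the headroom factor is an assumption for illustration).
    static boolean heapSufficient(long maxHeapBytes, int sortMb) {
        long bufferBytes = (long) sortMb * 1024 * 1024;
        return maxHeapBytes >= bufferBytes + bufferBytes / 2;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // Stock client heap: 128 MB < 150 MB needed -> OOME as in the log.
        System.out.println("128m ok? " + heapSufficient(128 * mb, 100));
        // After the fix: 200 MB >= 150 MB needed -> the pi job runs.
        System.out.println("200m ok? " + heapSufficient(200 * mb, 100));
        // What the current JVM actually has available:
        System.out.println("this JVM: " + (Runtime.getRuntime().maxMemory() / mb) + " MB");
    }
}
```

The same reasoning suggests an alternative when you cannot raise the heap: lowering io.sort.mb in the job configuration shrinks the buffer instead.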
