
IMF Legendary Action: Spark master fails to start (insufficient memory, resolved): "failed to map 715849728 bytes for committing reserved memory"

[root@master hadoop]#cd /usr/local
[root@master local]#ls
apache-hive-1.2.1  hadoop-2.6.0   lib           share
bin                IMFdatatest    libexec       spark-1.0.0-bin-hadoop1
derby.log          IMFlinuxshell  metastore_db  spark-1.6.0-bin-hadoop2.6
etc                include        sbin          src
games              jdk1.7.0_79    scala-2.10.4
hadoop-1.2.1       jdk1.8.0_65    setup_tools


[root@master local]#cd hadoop-2.6.0
[root@master hadoop-2.6.0]#ls
bin  file:    lib      LICENSE.txt  NOTICE.txt  sbin   testdata
etc  include  libexec  logs         README.txt  share  tmp
[root@master hadoop-2.6.0]#cd sbin
[root@master sbin]#ls
                         refresh-namenodes.sh  stop-all.sh
derby.log                slaves.sh             stop-balancer.sh
distribute-exclude.sh    start-all.cmd         stop-dfs.cmd
hadoop-daemon.sh         start-all.sh          stop-dfs.sh
hadoop-daemons.sh        start-balancer.sh     stop-secure-dns.sh
hdfs-config.cmd          start-dfs.cmd         stop-yarn.cmd
hdfs-config.sh           start-dfs.sh          stop-yarn.sh
httpfs.sh                start-secure-dns.sh   yarn-daemon.sh
kms.sh                   start-yarn.cmd        yarn-daemons.sh
metastore_db             start-yarn.sh
mr-jobhistory-daemon.sh  stop-all.cmd


[root@master sbin]#start-dfs.sh
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
16/03/13 08:06:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Master]
Master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
localhost: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-master.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
16/03/13 08:06:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable


[root@master sbin]#jps
2481 NameNode
2851 Jps
2571 DataNode
2718 SecondaryNameNode
[root@master sbin]#cd ..
[root@master hadoop-2.6.0]#cd ..
[root@master local]#ls
apache-hive-1.2.1  hadoop-2.6.0   lib           share
bin                IMFdatatest    libexec       spark-1.0.0-bin-hadoop1
derby.log          IMFlinuxshell  metastore_db  spark-1.6.0-bin-hadoop2.6
etc                include        sbin          src
games              jdk1.7.0_79    scala-2.10.4
hadoop-1.2.1       jdk1.8.0_65    setup_tools
[root@master local]#cd spark-1.6.0-bin-hadoop2.6
[root@master spark-1.6.0-bin-hadoop2.6]#ls
bin          data      IMF2016  licenses  python     RELEASE
CHANGES.txt  ec2       lib      logs      R          sbin
conf         examples  LICENSE  NOTICE    README.md  work
[root@master spark-1.6.0-bin-hadoop2.6]#cd sbin
[root@master sbin]#ls
derby.log                       start-slave.sh
metastore_db                    start-slaves.sh
slaves.sh                       start-thriftserver.sh
spark-config.sh                 stop-all.sh
spark-daemon.sh                 stop-history-server.sh
spark-daemons.sh                stop-master.sh
start-all.sh                    stop-mesos-dispatcher.sh
start-history-server.sh         stop-mesos-shuffle-service.sh
start-master.sh                 stop-shuffle-service.sh
start-mesos-dispatcher.sh       stop-slave.sh
start-mesos-shuffle-service.sh  stop-slaves.sh
start-shuffle-service.sh        stop-thriftserver.sh


[root@master sbin]#start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
failed to launch org.apache.spark.deploy.master.Master:
  # An error report file with more information is saved as:
  # /usr/local/spark-1.6.0-bin-hadoop2.6/sbin/hs_err_pid2878.log

full log in /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost:   # An error report file with more information is saved as:
localhost:   # /usr/local/spark-1.6.0-bin-hadoop2.6/hs_err_pid2933.log
localhost: full log in /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
[root@master sbin]#jps
2481 NameNode
2571 DataNode
2956 Jps
2718 SecondaryNameNode
[root@master sbin]#stop-all.sh
localhost: no org.apache.spark.deploy.worker.Worker to stop
no org.apache.spark.deploy.master.Master to stop
[root@master sbin]#start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
failed to launch org.apache.spark.deploy.master.Master:
  # An error report file with more information is saved as:
  # /usr/local/spark-1.6.0-bin-hadoop2.6/sbin/hs_err_pid3017.log
full log in /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost:   # An error report file with more information is saved as:
localhost:   # /usr/local/spark-1.6.0-bin-hadoop2.6/hs_err_pid3074.log
localhost: full log in /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
[root@master sbin]#ls
derby.log           spark-config.sh          start-master.sh                 start-slaves.sh         stop-mesos-dispatcher.sh       stop-thriftserver.sh
hs_err_pid2878.log  spark-daemon.sh          start-mesos-dispatcher.sh       start-thriftserver.sh   stop-mesos-shuffle-service.sh
hs_err_pid3017.log  spark-daemons.sh         start-mesos-shuffle-service.sh  stop-all.sh             stop-shuffle-service.sh
metastore_db        start-all.sh             start-shuffle-service.sh        stop-history-server.sh  stop-slave.sh
slaves.sh           start-history-server.sh  start-slave.sh                  stop-master.sh          stop-slaves.sh
[root@master sbin]#start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
failed to launch org.apache.spark.deploy.master.Master:
  # An error report file with more information is saved as:
  # /usr/local/spark-1.6.0-bin-hadoop2.6/sbin/hs_err_pid3116.log
full log in /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost:   # An error report file with more information is saved as:
localhost:   # /usr/local/spark-1.6.0-bin-hadoop2.6/hs_err_pid3173.log
localhost: full log in /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
[root@master sbin]#cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out

Spark Command: /usr/local/jdk1.8.0_65/bin/java -cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop-2.6.0/etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://192.168.2.100:7077
========================================
Java HotSpot(TM) Client VM warning: INFO: os::commit_memory(0x8a950000, 715849728, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 715849728 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/spark-1.6.0-bin-hadoop2.6/hs_err_pid3173.log
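A quick check on the failed allocation size. The interpretation below is an assumption, not stated in the log: with -Xms equal to -Xmx the JVM commits the whole heap at startup, and under the HotSpot default NewRatio=2 the tenured generation is two thirds of it, which is almost exactly the size that failed:

```shell
# The failed mmap is nearly two thirds of the -Xms1g/-Xmx1g heap, consistent
# with the tenured generation (HotSpot default NewRatio=2) being committed up
# front because -Xms equals -Xmx. (Interpretation, not stated in the log.)
request=715849728                      # bytes, from the os::commit_memory line
heap=$((1024 * 1024 * 1024))           # -Xms1g / -Xmx1g
echo "request: $((request / 1024)) kB of $((heap / 1024)) kB heap"
# ratio in thousandths: two thirds of the heap would be 666-667
echo "ratio (x1000): $((request * 1000 / heap))"
```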
[root@master sbin]#cat /usr/local/spark-1.6.0-bin-hadoop2.6/hs_err_pid3173.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 715849728 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2627), pid=3173, tid=3077725040
#
# JRE version:  (8.0_65-b17) (build )
# Java VM: Java HotSpot(TM) Client VM (25.65-b01 mixed mode, sharing linux-x86 )
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#

---------------  T H R E A D  ---------------

Current thread (0xb7506800):  JavaThread "Unknown thread" [_thread_in_vm, id=3192, stack(0xb76d5000,0xb7726000)]

Stack: [0xb76d5000,0xb7726000],  sp=0xb7724ab0,  free space=318k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x5b70cf]  VMError::report_and_die()+0x16f
V  [libjvm.so+0x20a8a5]  report_vm_out_of_memory(char const*, int, unsigned int, VMErrorType, char const*)+0x55
V  [libjvm.so+0x49b8f6]  os::Linux::commit_memory_impl(char*, unsigned int, bool)+0xe6
V  [libjvm.so+0x49bdc8]  os::pd_commit_memory(char*, unsigned int, unsigned int, bool)+0x28
V  [libjvm.so+0x495cec]  os::commit_memory(char*, unsigned int, unsigned int, bool)+0x2c
V  [libjvm.so+0x5b316c]  VirtualSpace::expand_by(unsigned int, bool)+0x1ec
V  [libjvm.so+0x5b3c0d]  VirtualSpace::initialize(ReservedSpace, unsigned int)+0xbd
V  [libjvm.so+0x29ae02]  CardGeneration::CardGeneration(ReservedSpace, unsigned int, int, GenRemSet*)+0xa2
V  [libjvm.so+0x56e301]  TenuredGeneration::TenuredGeneration(ReservedSpace, unsigned int, int, GenRemSet*)+0x41
V  [libjvm.so+0x29be25]  GenerationSpec::init(ReservedSpace, int, GenRemSet*)+0x295
V  [libjvm.so+0x28e550]  GenCollectedHeap::initialize()+0x170
V  [libjvm.so+0x58635d]  Universe::initialize_heap()+0xcd
V  [libjvm.so+0x586693]  universe_init()+0x23
V  [libjvm.so+0x2be645]  init_globals()+0x55
V  [libjvm.so+0x57745b]  Threads::create_vm(JavaVMInitArgs*, bool*)+0x24b
V  [libjvm.so+0x338d1c]  JNI_CreateJavaVM+0x5c
C  [libjli.so+0x6e28]  JavaMain+0x98
C  [libpthread.so.0+0x6a49]
C  [libc.so.6+0xe2aee]  clone+0x5e


---------------  P R O C E S S  ---------------

Java Threads: ( => current thread )

Other Threads:

=>0xb7506800 (exited) JavaThread "Unknown thread" [_thread_in_vm, id=3192, stack(0xb76d5000,0xb7726000)]

VM state:not at safepoint (not fully initialized)

VM Mutex/Monitor currently owned by a thread: None

GC Heap History (0 events):
No events

Deoptimization events (0 events):
No events

Internal exceptions (0 events):
No events

Events (0 events):
No events


Dynamic libraries:
00157000-002e7000 r-xp 00000000 08:02 41770      /lib/libc-2.12.so
002e7000-002e8000 ---p 00190000 08:02 41770      /lib/libc-2.12.so
002e8000-002ea000 r--p 00190000 08:02 41770      /lib/libc-2.12.so
002ea000-002eb000 rw-p 00192000 08:02 41770      /lib/libc-2.12.so
002eb000-002ee000 rw-p 00000000 00:00 0
002ee000-00307000 r-xp 00000000 08:02 43471      /usr/local/jdk1.8.0_65/jre/lib/i386/libzip.so
00307000-00308000 rw-p 00019000 08:02 43471      /usr/local/jdk1.8.0_65/jre/lib/i386/libzip.so
0045c000-00468000 r-xp 00000000 08:02 9168       /lib/libnss_files-2.12.so
00468000-00469000 r--p 0000b000 08:02 9168       /lib/libnss_files-2.12.so
00469000-0046a000 rw-p 0000c000 08:02 9168       /lib/libnss_files-2.12.so
0085f000-0087d000 r-xp 00000000 08:02 41766      /lib/ld-2.12.so
0087d000-0087e000 r--p 0001d000 08:02 41766      /lib/ld-2.12.so
0087e000-0087f000 rw-p 0001e000 08:02 41766      /lib/ld-2.12.so
008b7000-008cb000 r-xp 00000000 08:02 31145      /usr/local/jdk1.8.0_65/lib/i386/jli/libjli.so
008cb000-008cc000 rw-p 00014000 08:02 31145      /usr/local/jdk1.8.0_65/lib/i386/jli/libjli.so
00a1e000-00a21000 r-xp 00000000 08:02 41781      /lib/libdl-2.12.so
00a21000-00a22000 r--p 00002000 08:02 41781      /lib/libdl-2.12.so
00a22000-00a23000 rw-p 00003000 08:02 41781      /lib/libdl-2.12.so
00a25000-00a3c000 r-xp 00000000 08:02 41771      /lib/libpthread-2.12.so
00a3c000-00a3d000 r--p 00016000 08:02 41771      /lib/libpthread-2.12.so
00a3d000-00a3e000 rw-p 00017000 08:02 41771      /lib/libpthread-2.12.so
00a3e000-00a40000 rw-p 00000000 00:00 0
00b08000-00b2c000 r-xp 00000000 08:02 43466      /usr/local/jdk1.8.0_65/jre/lib/i386/libjava.so
00b2c000-00b2d000 rw-p 00023000 08:02 43466      /usr/local/jdk1.8.0_65/jre/lib/i386/libjava.so
00b2d000-00b55000 r-xp 00000000 08:02 41784      /lib/libm-2.12.so
00b55000-00b56000 r--p 00027000 08:02 41784      /lib/libm-2.12.so
00b56000-00b57000 rw-p 00028000 08:02 41784      /lib/libm-2.12.so
00b59000-00b60000 r-xp 00000000 08:02 41772      /lib/librt-2.12.so
00b60000-00b61000 r--p 00006000 08:02 41772      /lib/librt-2.12.so
00b61000-00b62000 rw-p 00007000 08:02 41772      /lib/librt-2.12.so
00d6d000-00d78000 r-xp 00000000 08:02 43420      /usr/local/jdk1.8.0_65/jre/lib/i386/libverify.so
00d78000-00d79000 rw-p 0000b000 08:02 43420      /usr/local/jdk1.8.0_65/jre/lib/i386/libverify.so
00daf000-00db0000 r-xp 00000000 00:00 0          [vdso]
00db0000-01400000 r-xp 00000000 08:02 43442      /usr/local/jdk1.8.0_65/jre/lib/i386/client/libjvm.so
01400000-01426000 rw-p 00650000 08:02 43442      /usr/local/jdk1.8.0_65/jre/lib/i386/client/libjvm.so
01426000-01848000 rw-p 00000000 00:00 0
08048000-08049000 r-xp 00000000 08:02 43248      /usr/local/jdk1.8.0_65/bin/java
08049000-0804a000 rw-p 00000000 08:02 43248      /usr/local/jdk1.8.0_65/bin/java
09a16000-09a37000 rw-p 00000000 00:00 0          [heap]
751ff000-752aa000 rw-p 00000000 00:00 0
752aa000-753ff000 ---p 00000000 00:00 0
753ff000-8a950000 rw-p 00000000 00:00 0
b5388000-b5500000 rw-p 00000000 00:00 0
b5500000-b5528000 rwxp 00000000 00:00 0
b5528000-b7500000 ---p 00000000 00:00 0
b7500000-b7522000 rw-p 00000000 00:00 0
b7522000-b7600000 ---p 00000000 00:00 0
b7655000-b7656000 rw-p 00000000 00:00 0
b7656000-b76d5000 ---p 00000000 00:00 0
b76d5000-b76d8000 ---p 00000000 00:00 0
b76d8000-b7728000 rw-p 00000000 00:00 0
b772a000-b7732000 rw-s 00000000 08:02 392492     /tmp/hsperfdata_root/3173
b7732000-b7733000 rw-p 00000000 00:00 0
b7733000-b7734000 r--p 00000000 00:00 0
b7734000-b7735000 rw-p 00000000 00:00 0
bff73000-bff74000 rwxp 00000000 00:00 0
bffb2000-bffc7000 rw-p 00000000 00:00 0          [stack]

VM Arguments:
jvm_args: -Xms1g -Xmx1g
java_command: org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://192.168.2.100:7077
java_class_path (initial): /usr/local/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop-2.6.0/etc/hadoop/
Launcher Type: SUN_STANDARD

Environment Variables:
JAVA_HOME=/usr/local/jdk1.8.0_65
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/hadoop-2.6.0/sbin:/usr/local/hadoop-2.6.0/bin
SHELL=/bin/bash

Signal Handlers:
SIGSEGV: [libjvm.so+0x5b7be0], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGBUS: [libjvm.so+0x5b7be0], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGFPE: [libjvm.so+0x4985d0], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGPIPE: [libjvm.so+0x4985d0], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGXFSZ: [libjvm.so+0x4985d0], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGILL: [libjvm.so+0x4985d0], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGUSR1: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGUSR2: [libjvm.so+0x499c50], sa_mask[0]=00000000000000000000000000000000, sa_flags=SA_RESTART|SA_SIGINFO
SIGHUP: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGINT: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGTERM: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGQUIT: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none


---------------  S Y S T E M  ---------------

OS:CentOS release 6.4 (Final)

uname:Linux 2.6.32-358.el6.i686 #1 SMP Thu Feb 21 21:50:49 UTC 2013 i686
libc:glibc 2.12 NPTL 2.12
rlimit: STACK 10240k, CORE 0k, NPROC 7930, NOFILE 4096, AS infinity
load average:0.38 0.69 0.36

/proc/meminfo:
MemTotal:        1030680 kB
MemFree:          420636 kB
Buffers:           48928 kB
Cached:           197544 kB
SwapCached:            0 kB
Active:           412472 kB
Inactive:         141728 kB
Active(anon):     307956 kB
Inactive(anon):     1100 kB
Active(file):     104516 kB
Inactive(file):   140628 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:        141256 kB
HighFree:            280 kB
LowTotal:         889424 kB
LowFree:          420356 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:               228 kB
Writeback:             0 kB
AnonPages:        307756 kB
Mapped:            51740 kB
Shmem:              1332 kB
Slab:              40724 kB
SReclaimable:       9608 kB
SUnreclaim:        31116 kB
KernelStack:        2368 kB
PageTables:         4300 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      515340 kB
Committed_AS:    1205500 kB
VmallocTotal:     122880 kB
VmallocUsed:        4432 kB
VmallocChunk:     104908 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10232 kB
DirectMap2M:      897024 kB


CPU:total 1 (2 cores per cpu, 2 threads per core) family 6 model 42 stepping 7, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, clmul, ht, tsc, tscinvbit

/proc/cpuinfo:
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz
stepping        : 7
cpu MHz         : 2294.829
cache size      : 3072 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss nx rdtscp constant_tsc up arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt xsave avx hypervisor ida arat epb xsaveopt pln pts dts
bogomips        : 4589.65
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

Memory: 4k page, physical 1030680k(420636k free), swap 0k(0k free)

vm_info: Java HotSpot(TM) Client VM (25.65-b01) for linux-x86 JRE (1.8.0_65-b17), built on Oct  6 2015 15:39:23 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)

time: Sun Mar 13 08:08:45 2016
elapsed time: 0 seconds (0d 0h 0m 0s)
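The /proc/meminfo section of the crash log already shows why the kernel refused the mapping. A self-contained sketch with the figures copied from the log (free-memory arithmetic is only a heuristic, since Buffers/Cached are reclaimable and kernel overcommit policy also plays a role, but here the gap is decisive):

```shell
# Compare the failed 715849728-byte commit against the MemFree/SwapFree
# figures quoted in the crash log above (hardcoded here from that log).
request_kb=$((715849728 / 1024))       # 699072 kB requested
mem_free_kb=420636                     # MemFree from the log
swap_free_kb=0                         # SwapFree from the log (SwapTotal: 0 kB)
available_kb=$((mem_free_kb + swap_free_kb))
echo "need ${request_kb} kB, have ${available_kb} kB free (RAM + swap)"
if [ "$request_kb" -gt "$available_kb" ]; then
    echo "request cannot be backed, so mmap fails with errno=12 (ENOMEM)"
fi
```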

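The crash log pins down the cause: a 32-bit HotSpot client VM was asked to commit a full 1 GiB heap (-Xms1g -Xmx1g) on a machine with about 1 GB of physical RAM and no swap (SwapTotal: 0 kB). The straightforward fix is to cap the daemon and worker memory in conf/spark-env.sh before restarting. A minimal sketch (SPARK_DAEMON_MEMORY and SPARK_WORKER_MEMORY are standard Spark 1.x settings; the 512m values are an assumption sized for this 1 GB VM, and the scratch path keeps the sketch runnable outside the cluster):

```shell
# Append memory caps to spark-env.sh. SPARK_CONF defaults to a scratch copy
# here; on the real machine point it at
# /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-env.sh
SPARK_CONF="${SPARK_CONF:-/tmp/spark-env.sh}"
cat >> "$SPARK_CONF" <<'EOF'
# 1 GB RAM, no swap: keep the master/worker daemons well under the 1g default
export SPARK_DAEMON_MEMORY=512m
export SPARK_WORKER_MEMORY=512m
EOF
grep 'SPARK_.*_MEMORY' "$SPARK_CONF"
```

After saving the real file, run sbin/stop-all.sh and sbin/start-all.sh again. Adding swap space to the VM, the first item on the hs_err "Possible solutions" list, works as well, on its own or in combination.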
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss nx rdtscp constant_tsc up arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt xsave avx hypervisor ida arat epb xsaveopt pln pts dts
bogomips        : 4589.65
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

Memory: 4k page, physical 1030680k(420636k free), swap 0k(0k free)

vm_info: Java HotSpot(TM) Client VM (25.65-b01) for linux-x86 JRE (1.8.0_65-b17), built on Oct  6 2015 15:39:23 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)

time: Sun Mar 13 08:08:45 2016
elapsed time: 0 seconds (0d 0h 0m 0s)
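The crash log above tells the whole story: the Worker JVM was launched with `-Xms1g -Xmx1g` on a 32-bit CentOS guest with roughly 1 GB of RAM and no swap, and `/proc/meminfo` shows `Committed_AS` (1205500 kB) already far beyond `CommitLimit` (515340 kB), so the mmap for the 715849728-byte heap region cannot be satisfied. A quick way to spot this condition before digging through an hs_err file (a sketch; the field names come from `/proc/meminfo` on Linux):

```shell
# Compare what the kernel is willing to commit with what is already committed.
# On a swapless 1 GB box, a JVM asking for -Xms1g up front can fail to map
# its heap even though `free` still shows some memory available.
grep -E '^(MemTotal|SwapTotal|CommitLimit|Committed_AS)' /proc/meminfo
```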

[[email protected] sbin]#jps
2481 NameNode
3226 Jps
2571 DataNode
2718 SecondaryNameNode
[[email protected] sbin]#free
             total       used       free     shared    buffers     cached
Mem:       1030680     606468     424212          0      48944     197556
-/+ buffers/cache:     359968     670712
Swap:            0          0          0


[[email protected] sbin]#swapon -s
Filename                                Type            Size    Used    Priority
[[email protected] sbin]#df -hal
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              13G  4.9G  7.0G  42% /
proc                     0     0     0   -  /proc
sysfs                    0     0     0   -  /sys
devpts                   0     0     0   -  /dev/pts
tmpfs                 504M   72K  504M   1% /dev/shm
/dev/sda1             291M   32M  245M  12% /boot
none                     0     0     0   -  /proc/sys/fs/binfmt_misc
vmware-vmblock           0     0     0   -  /var/run/vmblock-fuse


[[email protected] sbin]#dd if=/dev/zero of=/swapfile bs=1024 count=512k

^C385101+0 records in
385101+0 records out
394343424 bytes (394 MB) copied, 37.2297 s, 10.6 MB/s

[[email protected] sbin]#^C
[[email protected] sbin]#dd if=/dev/zero of=/swapfile bs=1024 count=512k
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 112.917 s, 4.8 MB/s
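The interrupted first `dd` run above shows why `bs` and `count` matter: `bs=1024 count=512k` writes 524288 blocks of 1 KiB, i.e. exactly 512 MiB of zeros. A minimal sketch of the same step against a scratch path (`/tmp/swapfile.demo` is a stand-in for illustration, not the `/swapfile` used above):

```shell
# Write a zero-filled file; size = bs * count = 1024 B * 524288 = 512 MiB.
dd if=/dev/zero of=/tmp/swapfile.demo bs=1024 count=512k
# fallocate -l 512M would be much faster, but swapon can reject sparse
# files on some kernels, so dd from /dev/zero is the safe, portable choice.
ls -lh /tmp/swapfile.demo
```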


[[email protected] sbin]#mkswap /swapfile
mkswap: /swapfile: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 524284 KiB
no label, UUID=40a83a3e-6d0c-41bf-b2c8-8e83aecfcb65


[[email protected] sbin]#swapon /swapfile
[[email protected] sbin]#swapon -s
Filename                                Type            Size    Used    Priority
/swapfile                               file            524280  0       -1
[[email protected] sbin]#vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Sep  9 04:49:54 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=6e428aef-f3cc-424a-bf48-e66a1a3e15fe /                       ext4    defaults        1 1
UUID=6379983d-c153-49a1-b689-e6184afa679d /boot                   ext4    defaults        1 2
UUID=2ebaf8ae-1ece-46a4-a73f-a06260c5f1dc swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0


#/dev/sda4   /disk4 ext2 defaults 0 0

"/etc/fstab" 20L, 909C written
[[email protected] sbin]#chown root:root /swapfile
[[email protected] sbin]#chmod 0600 /swapfile
[[email protected] sbin]#swapon -s
Filename                                Type            Size    Used    Priority
/swapfile                               file            524280  0       -1
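`swapon` activates the file only for the current boot; the fstab listing above shows the file was opened in vi but no `/swapfile` line was actually added, so the swap will be gone after a reboot. To make it permanent, one would typically append an entry like this to `/etc/fstab` (values mirror the file created above):

```
# /etc/fstab -- make the new swap file survive reboots
/swapfile    swap    swap    defaults    0 0
```

Note also that `chmod 0600` is best done *before* `swapon`: a world-readable swap file can leak memory contents, and newer kernels warn about insecure permissions.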


[[email protected] sbin]#pwd
/usr/local/spark-1.6.0-bin-hadoop2.6/sbin
[[email protected] sbin]#start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
failed to launch org.apache.spark.deploy.master.Master:
full log in /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
[[email protected] sbin]#start-all.sh
org.apache.spark.deploy.master.Master running as process 3299.  Stop it first.
localhost: org.apache.spark.deploy.worker.Worker running as process 3355.  Stop it first.
[[email protected] sbin]#jps
2481 NameNode
3299 Master
3463 Jps
2571 DataNode
3355 Worker
2718 SecondaryNameNode
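Adding swap gets the daemons running, but on a 1 GB VM an equally valid fix is to shrink their heaps so `-Xms1g` is never requested in the first place. A sketch for `conf/spark-env.sh` (the 512m values are illustrative choices for this box, not settings from the original session):

```shell
# conf/spark-env.sh -- cap daemon heap on a small VM
export SPARK_DAEMON_MEMORY=512m   # heap for the standalone Master/Worker daemons (default 1g)
export SPARK_WORKER_MEMORY=512m   # total memory a Worker may hand out to executors
```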
