OutOfMemoryError while running a jar submitted to the cluster via the spark-submit script

sparkCore
Feb 1, 2018

The specific exception reported is as follows:

INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)

INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms

INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)

INFO cluster.ClusterTaskSetManager: Serialized task 1.0:36 as 30618515 bytes in 449 ms

INFO cluster.ClusterTaskSetManager: Starting task 1.0:32 as TID 35 on executor 7: Salve4.Hadoop (NODE_LOCAL)

Uncaught error from thread [spark-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[spark]

java.lang.OutOfMemoryError: Java heap space

Answer

Answered by 铁木真

A Java heap space error in a Spark cluster like this is likely caused by driver-memory being set too small, so the driver runs out of heap.

Suggested fix: increase the driver-memory size, for example:

spark-1.6.1/bin/spark-submit \
  --class "xx.class" \
  --driver-memory 12g \
  --master local[*] \
  target/scala-2.10/simple-project_2.10-1.0.jar
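
The same value can also be passed as a generic Spark property instead of the dedicated flag. A minimal sketch, reusing the same placeholder class and jar as above; note that spark.driver.memory must be set before the driver JVM starts, so it only takes effect on the command line or in spark-defaults.conf, not from SparkConf inside the application:

# Equivalent to --driver-memory, expressed as a --conf property
spark-1.6.1/bin/spark-submit \
  --class "xx.class" \
  --conf spark.driver.memory=12g \
  --master local[*] \
  target/scala-2.10/simple-project_2.10-1.0.jar

# Or set it once for every submission in conf/spark-defaults.conf:
# spark.driver.memory    12g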


After this change the job submitted successfully.