Sqoop import from MySQL into Hive -- fails with an error
The table has a few million rows. How can this be fixed? Increasing the number of map tasks did not help. The error and job log are below.
Error: GC overhead limit exceeded
17/05/27 15:47:25 INFO mapreduce.Job: Task Id : attempt_149555519_0057_m_000000_1, Status : FAILED
Error: GC overhead limit exceeded
17/05/27 15:48:16 INFO mapreduce.Job: Task Id : attempt_149555519_0057_m_000000_2, Status : FAILED
Error: GC overhead limit exceeded
17/05/27 15:49:10 INFO mapreduce.Job: map 100% reduce 0%
17/05/27 15:49:10 INFO mapreduce.Job: Job job_149555519_0057 failed with state FAILED due to: Task failed task_149555519_0057_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/05/27 15:49:10 INFO mapreduce.Job: Counters: 11
	Job Counters
		Failed map tasks=4
		Launched map tasks=4
		Other local map tasks=4
		Total time spent by all maps in occupied slots (ms)=197894
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=197894
		Total vcore-seconds taken by all map tasks=197894
		Total megabyte-seconds taken by all map tasks=202643456
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
17/05/27 15:49:10 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
17/05/27 15:49:10 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 214.2624 seconds (0 bytes/sec)
17/05/27 15:49:10 INFO mapreduce.ImportJobBase: Retrieved 0 records.
17/05/27 15:49:10 ERROR tool.ImportTool: Error during import: Import job failed
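"GC overhead limit exceeded" in the failing map attempts suggests the mapper JVM heap is too small for the rows being read from JDBC, so raising the map container/heap size and keeping the JDBC fetch size small are the usual first things to try. Below is a rough sketch of such a command, assuming Sqoop 1.4.x on Hadoop 2.x with the MySQL Connector/J driver; the host, database, user, table, and split column names are placeholders, and the memory values are only illustrative:

# Raise the mapper container and JVM heap (illustrative values),
# and keep the JDBC fetch size small so rows are streamed rather than buffered.
sqoop import \
  -Dmapreduce.map.memory.mb=4096 \
  -Dmapreduce.map.java.opts=-Xmx3686m \
  --connect jdbc:mysql://db-host:3306/mydb \
  --username myuser -P \
  --table my_table \
  --split-by id \
  -m 4 \
  --fetch-size 1000 \
  --hive-import \
  --hive-table my_table

If a larger heap alone does not help, --direct (which uses mysqldump instead of JDBC) is another commonly suggested workaround for MySQL imports, and choosing --split-by on an evenly distributed numeric column helps keep any single mapper from receiving most of the rows.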