Resolving "Current usage: 35.5 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used"


1. We had just set up a cluster at work, and I ran the wordcount example to check that it worked. It failed with the error below (on my own machine, running the same version, the error does not occur):
[root@S1PA124 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
14/08/20 09:51:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/20 09:51:35 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/08/20 09:51:36 INFO input.FileInputFormat: Total input paths to process : 1
14/08/20 09:51:36 INFO mapreduce.JobSubmitter: number of splits:1
14/08/20 09:51:36 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/08/20 09:51:36 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/08/20 09:51:36 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/08/20 09:51:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1408499127545_0001
14/08/20 09:51:37 INFO impl.YarnClientImpl: Submitted application application_1408499127545_0001 to ResourceManager at /0.0.0.0:8032
14/08/20 09:51:37 INFO mapreduce.Job: The url to track the job: http://S1PA124:8088/proxy/application_1408499127545_0001/
14/08/20 09:51:37 INFO mapreduce.Job: Running job: job_1408499127545_0001
14/08/20 09:51:44 INFO mapreduce.Job: Job job_1408499127545_0001 running in uber mode : false
14/08/20 09:51:44 INFO mapreduce.Job:  map 0% reduce 0%
14/08/20 09:51:49 INFO mapreduce.Job:  map 100% reduce 0%
14/08/20 09:51:54 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_0, Status : FAILED
Container [pid=26042,containerID=container_1408499127545_0001_01_000003] is running beyond virtual memory limits. Current usage: 35.5 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000003 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 26047 26042 26042 26042 (java) 36 3 17963216896 8801 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_0 3 
        |- 26042 25026 26042 26042 (bash) 0 0 65409024 276 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_0 3 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003/stderr  

Container killed on request. Exit code is 143

14/08/20 09:52:00 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_1, Status : FAILED
Container [pid=26111,containerID=container_1408499127545_0001_01_000004] is running beyond virtual memory limits. Current usage: 100.3 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000004 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 26116 26111 26111 26111 (java) 275 8 18016677888 25393 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_1 4 
        |- 26111 25026 26111 26111 (bash) 0 0 65409024 275 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_1 4 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004/stderr  

Container killed on request. Exit code is 143

14/08/20 09:52:06 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_2, Status : FAILED
Container [pid=26185,containerID=container_1408499127545_0001_01_000005] is running beyond virtual memory limits. Current usage: 100.4 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000005 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 26190 26185 26185 26185 (java) 271 7 18025807872 25414 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_2 5 
        |- 26185 25026 26185 26185 (bash) 0 0 65409024 276 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_2 5 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005/stderr  

Container killed on request. Exit code is 143

14/08/20 09:52:13 INFO mapreduce.Job:  map 100% reduce 100%
14/08/20 09:52:13 INFO mapreduce.Job: Job job_1408499127545_0001 failed with state FAILED due to: Task failed task_1408499127545_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

14/08/20 09:52:13 INFO mapreduce.Job: Counters: 32
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=80425
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=895
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=3
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Job Counters 
                Failed reduce tasks=4
                Launched map tasks=1
                Launched reduce tasks=4
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3082
                Total time spent by all reduces in occupied slots (ms)=11065
        Map-Reduce Framework
                Map input records=56
                Map output records=56
                Map output bytes=1023
                Map output materialized bytes=1141
                Input split bytes=96
                Combine input records=56
                Combine output records=56
                Spilled Records=56
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=25
                CPU time spent (ms)=680
                Physical memory (bytes) snapshot=253157376
                Virtual memory (bytes) snapshot=18103181312
                Total committed heap usage (bytes)=1011875840
        File Input Format Counters 
                Bytes Read=799
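
The decisive line in the log above is "16.8 GB of 2.1 GB virtual memory used. Killing container." In Hadoop 2.x the NodeManager caps each container's virtual memory at its physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio (default 2.1), so a 1 GB reduce container may use at most 2.1 GB of virtual address space; here the reducer's JVM mapped 16.8 GB and was killed (exit code 143 corresponds to SIGTERM). As an alternative to the fix described in section 3 below, a commonly used workaround is to relax or disable this check in yarn-site.xml. The sketch below only illustrates the standard property names; it is not what was done in this post.

<!-- yarn-site.xml: illustrative workaround, not the fix applied in this post -->
<property>
        <!-- disable the NodeManager's virtual-memory check entirely -->
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>
<property>
        <!-- or keep the check but raise the virtual-to-physical ratio (default is 2.1) -->
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
</property>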
2. The mapred-site.xml configuration file contains the following:

         
<configuration>
        <property>
                <name>mapreduce.cluster.local.dir</name>
                <value>/root/install/hadoop/mapred/local</value>
        </property>
        <property>
                <name>mapreduce.cluster.system.dir</name>
                <value>/root/install/hadoop/mapred/system</value>
        </property>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>S1PA124:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>S1PA124:19888</value>
        </property>
</configuration>

3. Solution
I commented out the few lines in the mapred-site.xml configuration file that set the JVM and memory options, restarted the cluster, and the problem went away. I have not yet had time to track down the exact root cause, but it is clearly related to how JVM memory is allocated on this machine.
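
For reference, a more explicit fix than removing the settings is to size the container and the JVM heap consistently. The post does not show which lines were commented out, so the sketch below uses the standard Hadoop 2.x property names with purely illustrative values; the point is that the -Xmx heap must stay comfortably below the container's physical memory request.

<!-- mapred-site.xml: illustrative sizing, values are examples only -->
<property>
        <name>mapreduce.map.memory.mb</name>
        <value>1024</value>        <!-- physical memory requested per map container -->
</property>
<property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx768m</value>    <!-- map JVM heap kept below the container size -->
</property>
<property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>1024</value>        <!-- physical memory requested per reduce container -->
</property>
<property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx768m</value>    <!-- reduce JVM heap kept below the container size -->
</property>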