Hadoop map-reduce operation is failing on writing output

openbas2 · Apr 3, 2012 · Viewed 8.8k times

I am finally able to start a map-reduce job on Hadoop (running on a single Debian machine). However, the map-reduce job always fails with the following error:

hadoopmachine@debian:~$ ./hadoop-1.0.1/bin/hadoop jar hadooptest/main.jar nl.mydomain.hadoop.debian.test.Main /user/hadoopmachine/input /user/hadoopmachine/output
Warning: $HADOOP_HOME is deprecated.

12/04/03 07:29:35 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
****hdfs://localhost:9000/user/hadoopmachine/input
12/04/03 07:29:35 INFO input.FileInputFormat: Total input paths to process : 1
12/04/03 07:29:35 INFO mapred.JobClient: Running job: job_201204030722_0002
12/04/03 07:29:36 INFO mapred.JobClient:  map 0% reduce 0%
12/04/03 07:29:41 INFO mapred.JobClient: Task Id : attempt_201204030722_0002_m_000002_0, Status : FAILED
Error initializing attempt_201204030722_0002_m_000002_0:
ENOENT: No such file or directory
at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:692)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:647)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
at org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:239)
at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:196)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1226)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1201)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404)
at java.lang.Thread.run(Thread.java:636)

12/04/03 07:29:41 WARN mapred.JobClient: Error reading task outputhttp://localhost:50060/tasklog?plaintext=true&attemptid=attempt_201204030722_0002_m_000002_0&filter=stdout
12/04/03 07:29:41 WARN mapred.JobClient: Error reading task outputhttp://localhost:50060/tasklog?plaintext=true&attemptid=attempt_201204030722_0002_m_000002_0&filter=stderr

Unfortunately, it only says "ENOENT: No such file or directory", without saying which directory it is actually trying to access. Pinging localhost works, and the input directory does exist. The jar location is also correct.
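(To be concrete, a listing along these lines is how I'd confirm the input directory on HDFS:)

hadoopmachine@debian:~$ ./hadoop-1.0.1/bin/hadoop fs -ls /user/hadoopmachine/input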

Can anybody give me a pointer on how to fix this error, or how to find out which file Hadoop is trying to access?
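(One generic way I can think of to catch the exact path, sketched here: trace file syscalls on the TaskTracker while the job runs and filter for ENOENT. PID 4249 is taken from the ps output further down.)

# show failing file syscalls of the TaskTracker and its children
sudo strace -f -e trace=file -p 4249 2>&1 | grep ENOENT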

I found several similar problems on the Hadoop mailing list, but no responses to any of them...

Thanks!

P.S. The config for mapred.local.dir looks like this (in mapred-site.xml):

<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoopmachine/hadoop_data/mapred</value>
  <final>true</final>
</property>
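(As far as I understand, this directory and its parents need to exist and be writable by the user the TaskTracker runs as; a quick sanity check would be something like:)

ls -ld /home/hadoopmachine/hadoop_data /home/hadoopmachine/hadoop_data/mapred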

As requested, the output of ps auxww | grep TaskTracker is:

1000      4249  2.2  0.8 1181992 30176 ?       Sl   12:09   0:00
/usr/lib/jvm/java-6-openjdk/bin/java -Dproc_tasktracker -Xmx1000m -Dhadoop.log.dir=/home/hadoopmachine/hadoop-1.0.1/libexec/../logs
-Dhadoop.log.file=hadoop-hadoopmachine-tasktracker-debian.log -Dhadoop.home.dir=/home/hadoopmachine/hadoop-1.0.1/libexec/.. 
-Dhadoop.id.str=hadoopmachine -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender
-Djava.library.path=/home/hadoopmachine/hadoop-1.0.1/libexec/../lib/native/Linux-i386-32 
-Dhadoop.policy.file=hadoop-policy.xml -classpath [omitted very long list of jars] org.apache.hadoop.mapred.TaskTracker

Answer

Chris White · Apr 3, 2012

From the job tracker, identify which Hadoop node the task executed on. SSH to that node and find the location of the hadoop.log.dir directory (check the mapred-site.xml for that node) - my guess is that the Hadoop user does not have the correct permissions to create sub-directories in this folder.
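Since this is a single-machine setup, one shortcut (just a sketch, not the only way) is to read hadoop.log.dir straight off the running TaskTracker's command line:

ps auxww | grep TaskTracker | tr ' ' '\n' | grep '^-Dhadoop.log.dir='
# prints: -Dhadoop.log.dir=/home/hadoopmachine/hadoop-1.0.1/libexec/../logs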

The actual folder it's trying to create lies under ${hadoop.log.dir}/userlogs - check that this folder has the correct permissions.

In your case, looking at the ps output, I'm guessing this is the folder whose permissions you need to examine:

/home/hadoopmachine/hadoop-1.0.1/libexec/../logs
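If the permissions there are wrong, a fix along these lines should sort it out (libexec/.. simply resolves to hadoop-1.0.1/logs, and the TaskTracker runs as user hadoopmachine, per -Dhadoop.id.str in the ps output):

# check the current owner/permissions of the log tree
ls -ld /home/hadoopmachine/hadoop-1.0.1/logs /home/hadoopmachine/hadoop-1.0.1/logs/userlogs
# make it writable by the TaskTracker user
sudo chown -R hadoopmachine /home/hadoopmachine/hadoop-1.0.1/logs
sudo chmod -R u+rwX /home/hadoopmachine/hadoop-1.0.1/logs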