Hi!
I keep getting "Exceeded MAX_FAILED_UNIQUE_FETCHES;" in the reduce phase, even though I've tried all the solutions I could find online. Please help me; I have a project presentation in three hours and my solution doesn't scale.
I have one master acting as both NameNode and JobTracker (172.16.8.3) and three workers (172.16.8.{11, 12, 13}).
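For reference, a conf/masters and conf/slaves matching this topology would look like the following (just a sketch assuming the standard Hadoop 1.x conf layout; note that masters lists the SecondaryNameNode host, not the JobTracker):

// conf/masters (on 172.16.8.3)
172.16.8.3

// conf/slaves (on 172.16.8.3)
172.16.8.11
172.16.8.12
172.16.8.13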
Here are the corresponding configuration files:
//////// 172.16.8.3 ////////////////////
// core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
        <description>Hadoop Data Store</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://172.16.8.3:54310/</value>
    </property>
</configuration>
// mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>172.16.8.3:54311</value>
    </property>
</configuration>
//////// 172.16.8.11 ////////////////
// core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
        <description>Hadoop Data Store</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://172.16.8.3:54310/</value>
    </property>
</configuration>
// mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>172.16.8.3:54311</value>
    </property>
</configuration>
/////// 172.16.8.12 //////////////
// core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
        <description>Hadoop Data Store</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://172.16.8.3:54310/</value>
    </property>
</configuration>
// mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>172.16.8.3:54311</value>
    </property>
</configuration>
///////// 172.16.8.13 ////////
// core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
        <description>Hadoop Data Store</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://172.16.8.3:54310/</value>
    </property>
</configuration>
// mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>172.16.8.3:54311</value>
    </property>
</configuration>
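From what I understand, this error means the reducers keep failing to fetch map output from the TaskTrackers over HTTP, so connectivity between the nodes is one thing I can verify. A sketch of the checks I can run (assuming nc is available; 50060 is the default Hadoop 1.x TaskTracker HTTP port used by the shuffle):

# from each worker: can I reach the master's NameNode and JobTracker ports?
nc -zv 172.16.8.3 54310
nc -zv 172.16.8.3 54311

# between workers: every reducer must be able to fetch map output from every
# TaskTracker's HTTP port (default 50060), so check each worker from the others
nc -zv 172.16.8.11 50060
nc -zv 172.16.8.12 50060
nc -zv 172.16.8.13 50060

If any of these fail, or if hostnames don't resolve consistently in /etc/hosts on all four machines, could that explain the failed fetches?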