Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable

I am trying to run a map/reduce job in Java. Below are my files:

WordCount.java

package counter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public int run(String[] arg0) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path("counterinput"));
        // Erase previous run output (if any)
        FileSystem.get(conf).delete(new Path("counteroutput"), true);
        FileOutputFormat.setOutputPath(job, new Path("counteroutput"));

        job.waitForCompletion(true);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }
}

WordCountMapper.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends
        Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // OutputCollector and Reporter are old-API (org.apache.hadoop.mapred) types,
    // while the class extends the new org.apache.hadoop.mapreduce.Mapper.
    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException, InterruptedException {
        System.out.println("hi");
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
        }
    }
}

WordCountReducer.java

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    // Old-API reduce(key, Iterator, OutputCollector, Reporter) parameter list here too.
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException, InterruptedException {
        System.out.println("hello");
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}

I am getting the following error:

13/06/23 23:13:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
13/06/23 23:13:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.JobClient: Running job: job_local_0001
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.MapTask: io.sort.mb = 100
13/06/23 23:13:26 INFO mapred.MapTask: data buffer = 79691776/99614720
13/06/23 23:13:26 INFO mapred.MapTask: record buffer = 262144/327680
13/06/23 23:13:26 WARN mapred.LocalJobRunner: job_local_0001
java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:845)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:541)
    at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
    at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
13/06/23 23:13:27 INFO mapred.JobClient:  map 0% reduce 0%
13/06/23 23:13:27 INFO mapred.JobClient: Job complete: job_local_0001
13/06/23 23:13:27 INFO mapred.JobClient: Counters: 0

I think it is not able to find my Mapper and Reducer classes. Although I wired them up in the main class,
it seems to be picking up the default Mapper and Reducer classes; see the sketch below.
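
For comparison, here is my understanding of the shape the new API expects: a minimal sketch only, reusing the names from my mapper above. As far as I can tell, the new-API Mapper.run() only invokes a method with the exact signature map(KEYIN, VALUEIN, Context), so a map method with any other parameter list never overrides it, and the inherited identity map writes the LongWritable input key straight to the output, which would match the error above.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // New-API hook: map(KEYIN, VALUEIN, Context). @Override makes the compiler
    // reject a signature that does not actually override anything.
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one); // emits a Text key, matching setOutputKeyClass
        }
    }
}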

Original author: Neil | 2013-06-23