MySQL Mahout/Hadoop: SQL to SequenceFile


I have started using Mahout for clustering, but I am having trouble converting a MySQL SQL dump into a Mahout-compatible SequenceFile. I am using the code shown below.

SQL sample

(1, 318145, '[running with jentopia, sotm]', '2011-04-27 21:47:16'),
(2, 318138, '[fonts, textile, valentines day]', '2011-04-27 21:47:16'),
...
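
For reference, here is a minimal sketch (not from the original post) of how rows like the ones above could be parsed and written straight into a SequenceFile of Text key/value pairs, which is roughly the shape Mahout's seq2sparse step expects (row id as the key, the tag text as the value). The class name DumpToSequenceFile, the regex, and the output path tags.seq are illustrative assumptions; with a default Configuration this writes to the local filesystem.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class DumpToSequenceFile {
        // matches rows such as: (1, 318145, '[running with jentopia, sotm]', '2011-04-27 21:47:16'),
        private static final Pattern ROW =
            Pattern.compile("\\((\\d+),\\s*(\\d+),\\s*'\\[(.*?)\\]',\\s*'[^']*'\\)");

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path out = new Path("tags.seq");   // hypothetical output path

            SequenceFile.Writer writer =
                SequenceFile.createWriter(fs, conf, out, Text.class, Text.class);
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    Matcher m = ROW.matcher(line);
                    while (m.find()) {
                        // key = row id, value = the tag list, e.g. "running with jentopia, sotm"
                        writer.append(new Text(m.group(1)), new Text(m.group(3)));
                    }
                }
            } finally {
                in.close();
                writer.close();
            }
        }
    }

Writing the file directly like this sidesteps MapReduce entirely; the MapReduce version the question is actually about follows.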
Java

    import java.io.File;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    File url = new File(inputFile);   // local copy of the input file (not used below)

    // create the Hadoop configuration
    Configuration conf = new Configuration();

    // set up a job that rewrites the text input as a SequenceFile
    Job job = new Job(conf);
    job.setJobName("Convert Text");
    job.setJarByClass(Mapper.class);

    // identity mapper; with zero reduce tasks the reducer is never run
    job.setMapperClass(Mapper.class);
    job.setReducerClass(Reducer.class);

    job.setNumReduceTasks(0);

    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);

    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    job.setInputFormatClass(TextInputFormat.class);

    TextInputFormat.addInputPath(job, new Path(inputFile));
    SequenceFileOutputFormat.setOutputPath(job, new Path(SequenceFileCreator.SEQUENCE_FOLDER_PATH));

    // submit and wait for completion
    job.waitForCompletion(true);

Thanks

Comments:

What is your specific question? Are you getting an error/exception? That also isn't the code that tries to create the SequenceFile.

@Carlos, you really should show the code you are asking about!

This starts a job that takes the file as input and writes out a SequenceFile representation of it. I believe his file is only available locally; he has to upload it to HDFS first.

@Thomas - that sounds like a very solid guess about what the problem is.
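
Regarding the comment that the file is only available locally: a small sketch, under the assumption that the dump really does sit on the local disk, of copying it into HDFS before the job above is submitted. The class name UploadDump and the HDFS target path /user/hadoop/dump.sql are made up for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class UploadDump {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // hypothetical paths: args[0] = local dump file, HDFS target below
            Path local = new Path(args[0]);
            Path remote = new Path("/user/hadoop/dump.sql");

            // copy the local MySQL dump into HDFS so the MapReduce job can read it
            fs.copyFromLocalFile(local, remote);

            // the conversion job should then be given the HDFS path as its input, e.g.
            // TextInputFormat.addInputPath(job, remote);
        }
    }

The same thing can be done from the shell with `hadoop fs -put dump.sql /user/hadoop/` before launching the job.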