How to read a CSV file in Map/Reduce?
I have a large CSV file, about 6 GB in size and comma-separated. Below is my mapper function:
@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    String[] tokens = value.toString().split(",");
    String crimeType = tokens[5].trim(); // column #5 is the crime type in the CSV file, serving as the key
    // int year = Integer.parseInt(tokens[17].trim()); // the year when the crime happened
    int year = 2010;
    CrimeTypeKey crimeTypeYearKey = new CrimeTypeKey(crimeType, year);
    context.write(crimeTypeYearKey, ONE);
}
As you can see, I am using ".split" to break up each line (or is it each column?) into fields. I would like to know how OpenCSV could be used here instead. Could you please give me an example? Many thanks.

In an efficient way, probably not. Is there a particular reason you want to use OpenCSV?
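If you do want to try it anyway, here is a minimal sketch of a mapper that hands the line splitting off to OpenCSV's CSVParser instead of String.split. The CrimeTypeKey class is assumed to be the same custom key type used in the question, the column indices are kept from the original code, and the import path assumes OpenCSV 3.x or later (older releases use the au.com.bytecode.opencsv package instead of com.opencsv):

import java.io.IOException;

import com.opencsv.CSVParser;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch only: CrimeTypeKey is the custom WritableComparable from the question.
public class CrimeMapper extends Mapper<LongWritable, Text, CrimeTypeKey, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);

    // One parser per mapper instance; map() is called sequentially, so this is safe.
    private final CSVParser parser = new CSVParser();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // parseLine() respects quoted fields and embedded commas,
        // which a plain String.split(",") does not.
        String[] tokens = parser.parseLine(value.toString());
        String crimeType = tokens[5].trim(); // column #5 is the crime type, as in the question
        int year = 2010;                     // placeholder year, kept from the original code
        context.write(new CrimeTypeKey(crimeType, year), ONE);
    }
}

Note that this only helps with quoting and embedded commas inside a single physical line. TextInputFormat still hands the mapper one line at a time, so CSV records containing embedded newlines inside quoted fields would be split across map() calls; handling those correctly requires a custom InputFormat/RecordReader.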