native snappy library not available: this version of libhadoop was built without snappy support
I am getting the above error when using MLUtils saveAsLibSVMFile. I tried the various approaches below, but none of them worked.
/*
conf.set("spark.io.compression.codec","org.apache.spark.io.LZFCompressionCodec")
*/
/*
conf.set("spark.executor.extraClassPath","/usr/hdp/current/hadoop-client/lib/snappy-java-*.jar")
conf.set("spark.driver.extraClassPath","/usr/hdp/current/hadoop-client/lib/snappy-java-*.jar")
conf.set("spark.executor.extraLibraryPath","/usr/hdp/2.3.4.0-3485/hadoop/lib/native")
conf.set("spark.driver.extraLibraryPath","/usr/hdp/2.3.4.0-3485/hadoop/lib/native")
*/
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress","true")
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type",CompressionType.BLOCK.toString)
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec","org.apache.hadoop.io.compress.BZip2Codec")
sc.hadoopConfiguration.set("mapreduce.map.output.compress","true")
sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec","org.apache.hadoop.io.compress.BZip2Codec")
I also tried passing /usr/hdp//hadoop/lib/native/ as a parameter to the spark-submit job (on the command line).
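For reference, a minimal sketch of how the native library path is typically supplied at submit time rather than inside the application. Note that `spark.driver.extraLibraryPath` (and its `--driver-library-path` equivalent) must be set before the driver JVM starts, so setting it via `conf.set(...)` inside the already-running application, as in the commented-out code above, has no effect on the driver. The class name `com.example.MyApp`, the jar name, and the exact native-library path are placeholders and must match the actual HDP installation:

```shell
# Hypothetical spark-submit invocation; adjust class, jar, and paths
# to the actual application and installed HDP version.
spark-submit \
  --class com.example.MyApp \
  --driver-library-path /usr/hdp/current/hadoop-client/lib/native \
  --conf spark.executor.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native \
  my-app.jar
```

Whether libhadoop can actually find the snappy native library can be checked independently with `hadoop checknative -a`; if it reports `snappy: false`, no Spark-side path setting will help, because the installed libhadoop itself was built without snappy support, as the error message says.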