Java: decompressing column data with a Hive UDF

Context: decompressing column data in a Hive UDF evaluate() method.

Exception:

Failed with exception java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public static org.apache.hadoop.io.Text Test.UDFDecompressor.evaluate(java.lang.String) on object Test.UDFDecompressor@1008df1e of class Test.UDFDecompressor with arguments {x… (mangled binary bytes) …} of size 1

Source code:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.InflaterInputStream;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaStringObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public class Decompress extends UDF {
    public static String evaluate(String data1) throws IOException, DataFormatException {
        ByteArrayInputStream bao = new ByteArrayInputStream(data1.getBytes());
        InflaterInputStream iis = new InflaterInputStream(bao);
        String out = "";
        byte[] bt = new byte[1024];
        int len = -1;
        while ((len = iis.read(bt)) != -1) {
            out += new String(Arrays.copyOf(bt, len));
        }
        JavaStringObjectInspector stringInspector;
        stringInspector = PrimitiveObjectInspectorFactory.javaStringObjectInspector;
        String ip = stringInspector.getPrimitiveJavaObject(out);

        //return new String(ip.getBytes(Charset.forName("UTF-8")));
        //return new String(ip.getBytes(Charset.forName("UTF-8")));
        return ip;
    }
}
I have tried several variants of decompression using the GZIP and zlib Java APIs, but I hit the same error every time. Can anyone help me find the mistake and suggest the correct way to decompress column data with a Hive UDF?

Thanks in advance.

Welcome to Stack Overflow! This answer may well be a better way to do what the original poster was trying to do, but they specifically asked for an explanation of what they did wrong, and that part is missing. Without it they won't learn anything for next time, which is a big part of Stack Overflow. Could you add some notes on what they did wrong and why you recommend this approach?
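
To spell out what went wrong in the question's version: zlib output is arbitrary binary data, but the evaluate(String) signature forces the column bytes through Java's character decoding, and data1.getBytes() then encodes them back using the platform charset. Any byte sequence that is not valid text is silently replaced along the way, so the InflaterInputStream receives a corrupted stream; the mangled {x…} in the exception message is exactly the zlib header after that corruption. A minimal sketch of the effect (the class name RoundTripDemo and the sample payload are illustrative, not from the original post):

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.Deflater;

public class RoundTripDemo {
    public static void main(String[] args) {
        // Compress a small payload with zlib (Deflater).
        byte[] original = "hello hive".getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] buf = new byte[256];
        int n = deflater.deflate(buf);
        byte[] compressed = Arrays.copyOf(buf, n);

        // Round-trip the compressed bytes through a String, which is
        // effectively what evaluate(String) plus data1.getBytes() does.
        byte[] roundTripped = new String(compressed).getBytes();

        // The bytes typically no longer match (invalid sequences were
        // replaced during decoding), so inflation fails downstream.
        System.out.println(Arrays.equals(compressed, roundTripped)); // usually false
    }
}

The version below therefore keeps the data binary end to end: it accepts the column as BytesWritable and returns a Text, so nothing is ever decoded as characters.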
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.InflaterInputStream;

public class Decompress extends UDF {

    // Reused across rows to avoid allocating a new Text per call.
    private final Text r = new Text();

    public Text evaluate(BytesWritable bw) throws IOException {
        // getBytes() returns the backing buffer, which may be longer than
        // the actual data, so limit the stream to getLength() bytes.
        ByteArrayInputStream zipped =
                new ByteArrayInputStream(bw.getBytes(), 0, bw.getLength());
        InflaterInputStream inflater = new InflaterInputStream(zipped);

        // Inflate into a byte buffer; never convert through String,
        // which would corrupt the binary zlib stream.
        ByteArrayOutputStream unzipped = new ByteArrayOutputStream();
        byte[] bt = new byte[1024];
        int len;
        while ((len = inflater.read(bt)) != -1) {
            unzipped.write(bt, 0, len);
        }

        r.clear();
        r.set(unzipped.toByteArray());
        return r;
    }
}
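
As a quick sanity check, here is a local round trip through the UDF (the class name DecompressTest and the sample value are illustrative; in Hive itself the column needs to reach the UDF as BINARY, e.g. after unbase64(), for the BytesWritable overload to bind):

import java.util.Arrays;
import java.util.zip.Deflater;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;

public class DecompressTest {
    public static void main(String[] args) throws Exception {
        // Compress a sample value with zlib, as the column data would be.
        byte[] original = "some column value".getBytes("UTF-8");
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] buf = new byte[1024];
        int n = deflater.deflate(buf);

        // Wrap the compressed bytes the way Hive hands a BINARY column
        // to the UDF, then run evaluate().
        BytesWritable bw = new BytesWritable(Arrays.copyOf(buf, n));
        Text out = new Decompress().evaluate(bw);
        System.out.println(out); // prints: some column value
    }
}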