Hadoop: OOM when upserting 90,000 rows with Phoenix for HBase

Tags: hadoop, hbase, upsert, phoenix

Command:

./jsvc64/jsvc64 -pidfile ./log/jsvc.pid -outfile ./log/out.txt -errfile ./log/error.txt -Xmx512m -Djava.util.Arrays.useLegacyMergeSort=true -cp ./tools/lib/:./tools/ com.g2us.hbase.cmdlog.monitor.CmdLogHbase

SQL:

UPSERT INTO CMDLOG_20130818 (game, roleid, otime, logtype, passport, subgame, cmid, exception, moreinfo, pname_0, pname_1, pname_2) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

When upserting 90,000 rows, the following exception is thrown.

How can I solve this problem?

Exception in thread "Thread-0" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.lang.reflect.Method.copy(Method.java:143)
    at java.lang.reflect.ReflectAccess.copyMethod(ReflectAccess.java:118)
    at sun.reflect.ReflectionFactory.copyMethod(ReflectionFactory.java:282)
    at java.lang.Class.copyMethods(Class.java:2748)
    at java.lang.Class.getMethods(Class.java:1410)
    at org.apache.hadoop.hbase.ipc.Invocation.<init>(Invocation.java:67)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
    at $Proxy8.getClosestRowBefore(Unknown Source)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1019)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:885)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:846)
    at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:271)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:211)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:160)
    at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:54)
    at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:133)
    at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
    at org.apache.hadoop.hbase.client.HConnectionManager.execute(HConnectionManager.java:383)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:130)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:105)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:947)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1002)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:889)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:846)
    at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:271)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:263)
    at com.salesforce.phoenix.query.HTableFactory$HTableFactoryImpl.getTable(HTableFactory.java:60)
    at com.salesforce.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:133)
    at com.salesforce.phoenix.execute.MutationState.commit(MutationState.java:227)
    at com.salesforce.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:244)
    at com.g2us.hbase.phoenix.HBaseHelper.executeUpdate(HBaseHelper.java:62)
    at com.g2us.hbase.cmdlog.io.BaseLogPoster.upsertRow(BaseLogPoster.java:153)
I found the problem and solved it.

The problem was that preStat was defined as a class field, so it was never closed across repeated executeQuery() calls, and the leaked statements eventually caused the OutOfMemoryError.

Buggy code:

public class F{
PreparedStatement preStat = null;  // class field: never closed, so one statement leaks per query

public ResultSet executeQuery(String sql, Object... args) throws Exception {
    ResultSet rsResultSet = null;
    Connection conn = null;
    Statement stat = null;
    try {

        conn = HBaseUtility.getConnection();
        preStat = conn.prepareStatement(sql);
        if (args != null) {
            for (int i = 0; i < args.length; i++) {
                preStat.setObject(i + 1, args[i]);
            }
        }
        rsResultSet = preStat.executeQuery();
    } catch (Exception e) {
        dispos(conn, stat);
        Log.error(Log.DB, "queryerror|", e);
        throw new RuntimeException("hbase query error");
    } finally {
        HBaseUtility.release(conn);
    }
    return rsResultSet;
}

}
Fixed code:

public class F{
public ResultSet executeQuery(String sql, Object... args) throws Exception {
    ResultSet rsResultSet = null;
    Connection conn = null;
    Statement stat = null;
    try {
        PreparedStatement preStat = null;  // now a local variable, closed after every query
        conn = HBaseUtility.getConnection();
        preStat = conn.prepareStatement(sql);
        if (args != null) {
            for (int i = 0; i < args.length; i++) {
                preStat.setObject(i + 1, args[i]);
            }
        }
        rsResultSet = preStat.executeQuery();
        preStat.close();  // must be closed to release client resources
    } catch (Exception e) {
        dispos(conn, stat);
        Log.error(Log.DB, "queryerror|", e);
        throw new RuntimeException("hbase query error");
    } finally {
        HBaseUtility.release(conn);
    }
    return rsResultSet;
}

}
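The root cause is easy to reproduce without HBase at all. The sketch below uses a hypothetical `Stmt` class standing in for `PreparedStatement` to show how a never-closed statement accumulates one live object per call, and how try-with-resources (Java 7+) guarantees `close()` runs even when an exception is thrown:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LeakDemo {
    // Counts currently-open statements; a real driver holds native/RPC
    // resources per open statement in the same way.
    static final AtomicInteger open = new AtomicInteger();

    // Hypothetical stand-in for java.sql.PreparedStatement.
    static class Stmt implements AutoCloseable {
        Stmt() { open.incrementAndGet(); }
        @Override public void close() { open.decrementAndGet(); }
    }

    // Buggy pattern: statement created but never closed.
    static void queryLeaky() {
        Stmt s = new Stmt();
        // ... execute query, return result ... (s is abandoned)
    }

    // Fixed pattern: try-with-resources closes the statement on every path.
    static void querySafe() {
        try (Stmt s = new Stmt()) {
            // ... execute query, consume results ...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 90000; i++) queryLeaky();
        System.out.println("leaked=" + open.get());  // 90000 live statements
        open.set(0);
        for (int i = 0; i < 90000; i++) querySafe();
        System.out.println("leaked=" + open.get());  // 0
    }
}
```

With a real Phoenix connection the same pattern applies: `try (PreparedStatement ps = conn.prepareStatement(sql)) { ... }`, with the ResultSet consumed inside the try block, since closing a statement also closes its ResultSet.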