Java: how to use RowDeletingIterator

I'm trying to use a RowDeletingIterator to delete entries from an Accumulo table, but when I scan the table afterwards, the entries are not deleted. Suppose the table tableName contains an entry with row key row_id; here is what I'm doing:

Connector connection = new ZooKeeperInstance(instance, zookeepers).getConnector(username, new PasswordToken(password));
String tableName = "tableName";
connection.tableOperations().create(tableName);
connection.tableOperations().attachIterator(tableName, new IteratorSetting(1, RowDeletingIterator.class));

// Write record with row key "row_id"
String row_id = "row_id";
Text colf = new Text("");
Text colq = new Text("data");
ColumnVisibility colv = new ColumnVisibility("");
BatchWriter writer = connection.createBatchWriter(tableName, new BatchWriterConfig()
            .setMaxMemory(memBuf)
            .setMaxLatency(timeout, TimeUnit.MILLISECONDS)
            .setMaxWriteThreads(numThreads));
Mutation mutation = new Mutation(row_id);
mutation.put(colf, colq, colv, System.nanoTime(), new Value("stuff".getBytes()));
writer.addMutation(mutation);
writer.close();

... //More work takes place

// Delete the record with row key "row_id"
BatchWriter batchWriter = connection.createBatchWriter(tableName, new BatchWriterConfig()
            .setMaxMemory(memBuf)
            .setMaxLatency(timeout, TimeUnit.MILLISECONDS)
            .setMaxWriteThreads(numThreads));
mutation = new Mutation(row_id);
mutation.put(new Text(), new Text(), new ColumnVisibility(), RowDeletingIterator.DELETE_ROW_VALUE);
batchWriter.addMutation(mutation);
batchWriter.close();

It turns out the problem was the System.nanoTime() timestamp in the first insert. Removing it, so that the initial insert becomes:

mutation.put(colf, colq, colv, new Value("stuff".getBytes()));
the RowDeletingIterator works correctly. When a mutation omits the timestamp, the tablet server assigns the current epoch time in milliseconds; System.nanoTime() measures elapsed time from an arbitrary origin rather than the epoch, so the original entry could carry a timestamp far larger than the one on the DEL_ROW marker, and the iterator, which only suppresses entries whose timestamps are at or before the marker's, left it visible.
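The timestamp mismatch can be sketched in plain Java. This is a minimal illustration of the rule the RowDeletingIterator applies (an entry is suppressed only when its timestamp is at or before the DEL_ROW marker's), not Accumulo's actual implementation; the nanoTime-magnitude value is hypothetical, chosen only to show the scale difference against epoch milliseconds:

```java
public class TimestampCheck {
    public static void main(String[] args) {
        // Hypothetical System.nanoTime()-style value: elapsed nanoseconds from
        // an arbitrary origin, here ~10^15 (roughly 11 days of uptime).
        long dataTs = 1_000_000_000_000_000L;
        // Epoch milliseconds, like the timestamp assigned server-side when a
        // mutation omits one (~1.7 * 10^12 at the time of writing).
        long deleteTs = System.currentTimeMillis();
        // The iterator's rule: suppress only entries at or below the marker.
        boolean suppressed = dataTs <= deleteTs;
        System.out.println("entry suppressed: " + suppressed); // prints "entry suppressed: false"
    }
}
```

With the timestamp argument dropped from put(), the data entry and the later delete marker are both stamped in epoch milliseconds, so the marker sorts at or after the data and the row is suppressed as expected.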