
Example of spinning up an HBase mock-style test for integration testing in Scala


I am trying to find an example of how to start an HBase server in a mock or integration-test style, so that I can test code locally in my IDE. I have tried fake-hbase and hbase-testing-utility, and I get errors, especially when trying to start the cluster. See below for the exception I receive when running the following code:

 val hbaseTestUtil = new HBaseTestingUtility(conf)
 hbaseTestUtil.startMiniCluster(3)
The error is as follows:

16/03/14 12:29:00 WARN datanode.DataNode: IOException in BlockReceiver.run(): 
java.io.IOException: Failed to move meta file for ReplicaBeingWritten, blk_1073741825_1001, RBW
  getNumBytes()     = 7
  getBytesOnDisk()  = 7
  getVisibleLength()= 7
  getVolume()       = C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current
  getBlockFile()    = C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw\blk_1073741825
  bytesAcked=7
  bytesOnDisk=7 from C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw\blk_1073741825_1001.meta to C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\finalized\subdir0\subdir0\blk_1073741825_1001.meta
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:615)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addBlock(BlockPoolSlice.java:250)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlock(FsVolumeImpl.java:229)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1119)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1100)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.finalizeBlock(BlockReceiver.java:1293)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1233)
at java.lang.Thread.run(Thread.java:745)
Caused by: 3: The system cannot find the path specified.

at org.apache.hadoop.io.nativeio.NativeIO.renameTo0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:830)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:613)
... 7 more
16/03/14 12:29:00 INFO datanode.DataNode: Starting CheckDiskError Thread

Does anyone have an example of doing this in Scala?

I don't think this is an HBase problem but a Windows one. That path is very close to the Windows maximum path length, so it probably cannot create the file. Try to shorten the path somehow.

Looks like that's it, Martin... thanks. Do you know how to specify a better base directory to use with HBaseTestingUtility?
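One way to attack the path-length problem (a sketch, not a confirmed fix for this exact setup): root the mini cluster's test data in a very short directory instead of the default `target/test-data` under the project folder. The property name `test.build.data.basedirectory` is the base-directory key used by HBase's `HBaseCommonTestingUtility` in the versions I have seen; verify it against your HBase release. The helper object and the overhead constant below are hypothetical, with the overhead estimated from the failing paths in the log above:

```scala
// Sketch: choose a short base directory for HBaseTestingUtility so the
// deepest mini-DFS paths stay under the classic Windows MAX_PATH limit.
object ShortBaseDir {
  // Classic Windows path limit in characters.
  val MaxPath = 260

  // Rough extra depth the mini-DFS layout adds beneath the base directory
  // (test-data/<uuid>/dfscluster_<uuid>/dfs/data/dataN/current/BP-.../
  // current/rbw/blk_..._....meta), estimated from the error log above.
  val MiniDfsOverhead = 170

  // True if a mini cluster rooted at `base` should fit the limit.
  def fitsWindowsLimit(base: String): Boolean =
    base.length + MiniDfsOverhead < MaxPath

  // Set the base-directory property that HBaseCommonTestingUtility reads
  // (assumed name; check your HBase version) before creating the utility.
  def configure(base: String): Unit = {
    require(fitsWindowsLimit(base), s"base dir likely too long: $base")
    System.setProperty("test.build.data.basedirectory", base)
  }
}
```

With this in place, calling something like `ShortBaseDir.configure("C:/hb")` before `new HBaseTestingUtility(conf)` should keep the finalized block paths short enough for the rename in `FsDatasetImpl.moveBlockFiles` to succeed.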