
JanusGraph with OneTimeBulkLoader in hadoop-gremlin raises "Graph does not support adding vertices"


My goal: bulk-load local data into JanusGraph with SparkGraphComputer, then build a mixed index on HBase and Elasticsearch.

My problem:

Caused by: java.lang.UnsupportedOperationException: Graph does not support adding vertices
    at org.apache.tinkerpop.gremlin.structure.Graph$Exceptions.vertexAdditionsNotSupported(Graph.java:1133)
    at org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph.addVertex(HadoopGraph.java:187)
    at org.apache.tinkerpop.gremlin.process.traversal.step.map.AddVertexStartStep.processNextStart(AddVertexStartStep.java:91)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:128)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:38)
    at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:200)
    at org.apache.tinkerpop.gremlin.process.computer.bulkloading.OneTimeBulkLoader.getOrCreateVertex(OneTimeBulkLoader.java:49)
    at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.executeInternal(BulkLoaderVertexProgram.java:210)
    at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.execute(BulkLoaderVertexProgram.java:197)
    at org.apache.tinkerpop.gremlin.spark.process.computer.SparkExecutor.lambda$null$4(SparkExecutor.java:118)
    at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils$3.next(IteratorUtils.java:247)
    at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    ... 3 more
Dependencies:

janusgraph-all-0.3.1 janusgraph-es-0.3.1 hadoop-gremlin-3.3.3
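For reference, these artifacts correspond to roughly the following Maven coordinates (a sketch; the group IDs are assumed from the standard JanusGraph and TinkerPop releases, and `janusgraph-all` already bundles `janusgraph-es`, so listing both may be redundant):

```xml
<!-- Assumed coordinates for the versions listed above -->
<dependency>
  <groupId>org.janusgraph</groupId>
  <artifactId>janusgraph-all</artifactId>
  <version>0.3.1</version>
</dependency>
<dependency>
  <groupId>org.janusgraph</groupId>
  <artifactId>janusgraph-es</artifactId>
  <version>0.3.1</version>
</dependency>
<dependency>
  <groupId>org.apache.tinkerpop</groupId>
  <artifactId>hadoop-gremlin</artifactId>
  <version>3.3.3</version>
</dependency>
```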

Here are the configurations:

  • janusgraph-hbase-es.properties

    storage.backend=hbase
    gremlin.graph=XXX.XXX.XXX.gremlin.hadoop.structure.HadoopGraph
    storage.hostname=<ip>
    storage.hbase.table=hadoop-test-3
    storage.batch-loading=true
    schema.default = none
    cache.db-cache = true
    cache.db-cache-clean-wait = 20
    cache.db-cache-time = 180000
    cache.db-cache-size = 0.5
    index.search.backend=elasticsearch
    index.search.hostname=<ip>
    index.search.index-name=hadoop_test_3
    
    gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
    gremlin.hadoop.graphReader=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONInputFormat
    gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONOutputFormat
    gremlin.hadoop.inputLocation=data/tinkerpop-modern.json
    gremlin.hadoop.outputLocation=output
    gremlin.hadoop.jarsInDistributedCache=true
    
    giraph.minWorkers=2
    giraph.maxWorkers=2
    giraph.useOutOfCoreGraph=true
    giraph.useOutOfCoreMessages=true
    mapred.map.child.java.opts=-Xmx1024m
    mapred.reduce.child.java.opts=-Xmx1024m
    giraph.numInputThreads=4
    giraph.numComputeThreads=4
    giraph.maxMessagesInMemory=100000
    
    spark.master=local[*]
    spark.serializer=org.apache.spark.serializer.KryoSerializer
    
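Note that this one file mixes two roles: `gremlin.graph` is set twice, with the second assignment (`HadoopGraph`) overriding the first. Consistent with the observation at the end of the question, a likely fix is to keep the write-side JanusGraph settings and the read-side Hadoop/Spark settings in separate files. A minimal sketch of that split (file names assumed to match the code in this question):

```properties
# janusgraph-hbase-es.properties -- the file passed to writeGraph();
# it must open a real, mutable JanusGraph, so keep the default factory
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=hbase
storage.hostname=<ip>
# (remaining storage/index/cache settings as above)

# hadoop-graphson.properties -- the file passed to GraphFactory.open();
# HadoopGraph and the Spark/Giraph settings belong here
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONInputFormat
gremlin.hadoop.inputLocation=data/tinkerpop-modern.json
spark.master=local[*]
spark.serializer=org.apache.spark.serializer.KryoSerializer
```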
  • schema.groovy

    def defineGratefulDeadSchema(janusGraph) {
        JanusGraphManagement m = janusGraph.openManagement()
        VertexLabel person = m.makeVertexLabel("person").make()
        // uncomment when importing with IncrementalBulkLoader
        //PropertyKey blid = m.makePropertyKey("bulkLoader.vertex.id")
        //    .dataType(Long.class).make()
        PropertyKey birth =
            m.makePropertyKey("birth").dataType(Date.class).make()
        PropertyKey age =
            m.makePropertyKey("age").dataType(Integer.class).make()
        PropertyKey name =
            m.makePropertyKey("name").dataType(String.class).make()
        // index
        //JanusGraphIndex index = m
        //    .buildIndex("nameCompositeIndex",
        //    Vertex.class).addKey(name).unique().buildCompositeIndex()
        // uniqueness checks are not supported on a mixed index;
        // "search" refers to the "search" in index.search.backend
        JanusGraphIndex index = m.buildIndex("mixedIndex",
            Vertex.class).addKey(name).buildMixedIndex("search")
        // uncomment when importing with IncrementalBulkLoader
        //JanusGraphIndex bidIndex = m.buildIndex("byBulkLoaderVertexId",
        //    Vertex.class).addKey(blid).indexOnly(person)
        //    .buildCompositeIndex()
        m.commit()
    }
    
  • Relevant code

    JanusGraph janusGraph = JanusGraphFactory.open(
        "config/janusgraph-hbase-es.properties");
    JanusgraphSchema janusgraphSchema = new JanusgraphSchema();
    janusgraphSchema.defineGratefulDeadSchema(janusGraph);
    janusGraph.close();

    Graph graph = GraphFactory.open("config/hadoop-graphson.properties");
    BulkLoaderVertexProgram blvp = BulkLoaderVertexProgram.build()
        .bulkLoader(OneTimeBulkLoader.class)
        .writeGraph("config/janusgraph-hbase-es.properties")
        .create(graph);
    graph.compute(SparkGraphComputer.class).program(blvp).submit().get();
    graph.close();

    JanusGraph janusGraph1 = JanusGraphFactory.open(
        "config/janusgraph-hbase-es.properties");
    List<Map<String, Object>> list = janusGraph1.traversal().V()
        .valueMap().toList();
    System.out.println("size: " + list.size());
    janusGraph1.close();
    

    After I reset gremlin.graph to its default value, gremlin.graph=org.janusgraph.core.JanusGraphFactory, the error above no longer occurs.

    The data is imported into HBase successfully, but the index fails to build in Elasticsearch.