JanusGraph OLAP query outside the Gremlin console with Apache Spark

I have a graph in which some nodes have millions of incoming edges, and I need to fetch the edge count of such nodes periodically. I use Cassandra as the storage backend. The query is g.V().has('vid','qwerty').inE().count().next().

All the available documentation explains how to do this from the Gremlin console using Apache Spark. Can I instead write the logic as a Spark job outside the Gremlin console and run it periodically on a Hadoop cluster?

Here is the output of the query on the Gremlin console when I do not use Spark:

14108889 [gremlin-server-session-1] WARN org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor - Exception processing a script on request [RequestMessage{, requestId=c3d902b7-0fdd-491d-8639-546963212474, op='eval', processor='session', args={gremlin=g.V().has('vid','qwerty').inE().count().next(), session=2831d264-4566-4d15-99c5-d9bbb202b1f8, bindings={}, manageTransaction=false, batchSize=64}].
TimedOutException()
    at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14696)
    at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
    at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
    at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
    at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getNamesSlice(CassandraThriftKeyColumnValueStore.java:143)
    at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftKeyColumnValueStore.getSlice(CassandraThriftKeyColumnValueStore.java:100)
    at org.janusgraph.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:82)
    at org.janusgraph.diskstorage.keycolumnvalue.cache.ExpirationKCVSCache.getSlice(ExpirationKCVSCache.java:129)
    at org.janusgraph.diskstorage.BackendTransaction$2.call(BackendTransaction.java:288)
    at org.janusgraph.diskstorage.BackendTransaction$2.call(BackendTransaction.java:285)
    at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
    at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
    at org.janusgraph.diskstorage.BackendTransaction.executeRead(BackendTransaction.java:470)
    at org.janusgraph.diskstorage.BackendTransaction.edgeStoreMultiQuery(BackendTransaction.java:285)
    at org.janusgraph.graphdb.database.StandardJanusGraph.edgeMultiQuery(StandardJanusGraph.java:441)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.lambda$executeMultiQuery$3(StandardJanusGraphTx.java:1054)
    at org.janusgraph.graphdb.query.profile.QueryProfiler.profile(QueryProfiler.java:98)
    at org.janusgraph.graphdb.query.profile.QueryProfiler.profile(QueryProfiler.java:90)
    at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.executeMultiQuery(StandardJanusGraphTx.java:1054)
    at org.janusgraph.graphdb.query.vertex.MultiVertexCentricQueryBuilder.execute(MultiVertexCentricQueryBuilder.java:113)
    at org.janusgraph.graphdb.query.vertex.MultiVertexCentricQueryBuilder.edges(MultiVertexCentricQueryBuilder.java:133)
    at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphVertexStep.initialize(JanusGraphVertexStep.java:95)
    at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphVertexStep.processNextStart(JanusGraphVertexStep.java:101)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.hasNext(ExpandableStepIterator.java:42)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.ReducingBarrierStep.processAllStarts(ReducingBarrierStep.java:83)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.ReducingBarrierStep.processNextStart(ReducingBarrierStep.java:113)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:128)
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:38)
    at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:200)
    at java_util_Iterator$next.call(Unknown Source)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
    at Script13.run(Script13.groovy:1)
    at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:843)
    at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:548)
    at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:233)
    at org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines.eval(ScriptEngines.java:120)
    at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:290)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


However,
g.V().has('vid','qwerty').inE().limit(10000).count().next()
works fine and returns
==>10000
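The fact that the limited traversal succeeds suggests a workaround that is not in the original post: counting a supernode's edges in fixed-size chunks (for example with .range(offset, offset + chunkSize) on the edge traversal) so that no single read exceeds the backend timeout. A JDK-only sketch of the chunking loop, where the fetchChunk function is a hypothetical stand-in for running inE().range(offset, offset + chunkSize).count().next() against the graph:

```java
import java.util.function.LongBinaryOperator;

public class ChunkedEdgeCount {

    /**
     * Sums edge counts in fixed-size chunks. fetchChunk.applyAsLong(offset, chunkSize)
     * stands in for a bounded traversal and must return how many edges fall in
     * [offset, offset + chunkSize). Stops after the first partially filled chunk.
     */
    static long countInChunks(LongBinaryOperator fetchChunk, long chunkSize) {
        long total = 0;
        long offset = 0;
        while (true) {
            long got = fetchChunk.applyAsLong(offset, chunkSize);
            total += got;
            if (got < chunkSize) {
                // Last, partially filled chunk: no more edges beyond this offset.
                return total;
            }
            offset += chunkSize;
        }
    }

    public static void main(String[] args) {
        // Simulate a vertex with 25,000 incoming edges, fetched 10,000 at a time.
        long totalEdges = 25_000L;
        long counted = countInChunks(
                (offset, limit) -> Math.max(0L, Math.min(limit, totalEdges - offset)),
                10_000L);
        System.out.println(counted); // prints 25000
    }
}
```

Each chunked read stays under the per-request limit that already works above, at the cost of re-walking the edge list once per chunk.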
Here is the Java client that creates the graph using SparkGraphComputer:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;

public class FollowCountSpark {

    private static Graph hgraph;
    private static GraphTraversalSource traversalSource;

    public static void main(String[] args) {
        createHGraph();
        System.exit(0);
    }

    private static void createHGraph() {
        // Open the HadoopGraph described by the properties file below.
        hgraph = GraphFactory.open("/resources/jp_spark.properties");

        // Route all traversals through SparkGraphComputer (OLAP).
        traversalSource = hgraph.traversal().withComputer(SparkGraphComputer.class);
        System.out.println("traversalSource = " + traversalSource);
        getAllEdgesFromHGraph();
    }

    static long getAllEdgesFromHGraph() {
        try {
            GraphTraversal<Vertex, Vertex> allV = traversalSource.V();
            GraphTraversal<Vertex, Vertex> gt = allV.has("vid", "supernode");
            GraphTraversal<Vertex, Long> c = gt.inE()
//                    .limit(600000)
                    .count();
            long l = c.next();
            System.out.println("All edges = " + l);
            return l;
        } catch (Exception e) {
            System.out.println("Error while fetching the edges:");
            e.printStackTrace();
        }
        return -1;
    }
}
And the corresponding configuration properties and pom.xml:
storage.backend=cassandrathrift
storage.cassandra.keyspace=t_graph

cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
ids.block-size = 100000
storage.batch-loading = true
storage.buffer-size = 1000
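Not part of the original post, but relevant to the TimedOutException in the log above: JanusGraph's configuration reference exposes a backend read timeout, storage.read-time (milliseconds, default 10000), which bounds each read against Cassandra. Raising it in the same properties file is one way to give a full supernode scan more headroom; the 60000 value below is only an illustration:

```properties
# Maximum time (in ms) to wait for a single backend read operation; default is 10000.
# 60000 is illustrative only - tune to the worst-case scan time.
storage.read-time = 60000
```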

# read-cassandra-3.properties
#
# Hadoop Graph Configuration
#
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.cassandra.Cassandra3InputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat

gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output

#
# JanusGraph Cassandra InputFormat configuration
#
# These properties define the connection settings that were used when writing data to JanusGraph.
janusgraphmr.ioformat.conf.storage.backend=cassandrathrift
# This specifies the hostname & port for Cassandra data store.
#janusgraphmr.ioformat.conf.storage.hostname=10.xx.xx.xx,xx.xx.xx.18,xx.xx.xx.141
janusgraphmr.ioformat.conf.storage.port=9160
# This specifies the keyspace where data is stored.
janusgraphmr.ioformat.conf.storage.cassandra.keyspace=t_graph

#
# Apache Cassandra InputFormat configuration
#
cassandra.input.partitioner.class=org.apache.cassandra.dht.Murmur3Partitioner
spark.cassandra.input.split.size=256

#
# SparkGraphComputer Configuration
#
spark.master=local[1]
spark.executor.memory=1g
spark.cassandra.input.split.size_in_mb=512
spark.executor.extraClassPath=/opt/lib/janusgraph/*
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.kryo.registrator=org.apache.tinkerpop.gremlin.spark.structure.io.gryo.GryoRegistrator
<dependencies>
        <dependency>
            <groupId>org.janusgraph</groupId>
            <artifactId>janusgraph-core</artifactId>
            <version>${janusgraph.version}</version>
        </dependency>
        <dependency>
            <groupId>org.janusgraph</groupId>
            <artifactId>janusgraph-cassandra</artifactId>
            <version>${janusgraph.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.tinkerpop</groupId>
            <artifactId>spark-gremlin</artifactId>
            <version>3.1.0-incubating</version>
            <exclusions>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>jackson-databind</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.tinkerpop</groupId>
            <artifactId>spark-gremlin</artifactId>
            <version>3.2.5</version>
            <exclusions>
               <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>jackson-databind</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.janusgraph</groupId>
            <artifactId>janusgraph-hadoop-core</artifactId>
            <version>${janusgraph.version}</version>
        </dependency>
        <dependency>
            <groupId>org.janusgraph</groupId>
            <artifactId>janusgraph-hbase</artifactId>
            <version>${janusgraph.version}</version>
        </dependency>

        <dependency>
            <groupId>org.janusgraph</groupId>
            <artifactId>janusgraph-cql</artifactId>
            <version>${janusgraph.version}</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.8.1</version>
        </dependency>

    </dependencies>