
Apache Spark: What dependencies allow Cerner Bunsen to load FHIR R4 (UK Core)?


Does anyone know whether the Cerner Bunsen library will load FHIR R4 bundles and persist the data to a Spark SQL database? Any guidance or pointers would be greatly appreciated. For now, I am just trying to load a sample bundle. The end goal is to persist incoming bundles to a Hive database accessible to an Apache Spark cluster.

The sample code attempting to load a single-entry bundle is:

// Load a single-entry R4 bundle from the classpath, then persist it
Bundles bundles = Bundles.forR4();
URL fileUrl = R4Test.class.getClassLoader()
        .getResource("ukcore/UKCore-AllergyIntolerance-Amoxicillin-Example.json");
JavaRDD<BundleContainer> bundlesRdd =
        bundles.loadFromDirectory(spark, fileUrl.toExternalForm(), 200);
List<BundleContainer> c = bundlesRdd.collect();
bundles.saveAsDatabase(spark, bundlesRdd, "r4database", "AllergyIntolerance");
On bundlesRdd.collect() I see the following warnings:

INFO WholeTextFileRDD: Input split: Paths:/path/to/ukcore/UKCore-AllergyIntolerance-Amoxicillin-Example.json:0+2017
WARN LenientErrorHandler: Unknown element 'meta' found while parsing
WARN LenientErrorHandler: Unknown element 'clinicalStatus' found while parsing
WARN LenientErrorHandler: Unknown element 'verificationStatus' found while parsing
WARN LenientErrorHandler: Unknown element 'type' found while parsing
WARN LenientErrorHandler: Unknown element 'category' found while parsing
WARN LenientErrorHandler: Unknown element 'code' found while parsing
WARN LenientErrorHandler: Unknown element 'patient' found while parsing
WARN LenientErrorHandler: Unknown element 'encounter' found while parsing
WARN LenientErrorHandler: Unknown element 'recordedDate' found while parsing
WARN LenientErrorHandler: Unknown element 'recorder' found while parsing
WARN LenientErrorHandler: Unknown element 'asserter' found while parsing
WARN LenientErrorHandler: Unknown element 'reaction' found while parsing
Attempting saveAsDatabase() then fails with:

java.lang.IllegalArgumentException: Unsupported FHIR version: R4
    at com.cerner.bunsen.definitions.StructureDefinitions.create(StructureDefinitions.java:120)
    at com.cerner.bunsen.spark.SparkRowConverter.forResource(SparkRowConverter.java:75)
    at com.cerner.bunsen.spark.SparkRowConverter.forResource(SparkRowConverter.java:54)
    at com.cerner.bunsen.spark.Bundles.extractEntry(Bundles.java:211)
    at com.cerner.bunsen.spark.Bundles.saveAsDatabase(Bundles.java:290)
I am currently running with the following dependencies:

    <dependencies>
        <dependency>
            <groupId>com.cerner.bunsen</groupId>
            <artifactId>bunsen-r4</artifactId>
            <version>0.4.5</version>
        </dependency>

        <dependency>
            <groupId>com.cerner.bunsen</groupId>
            <artifactId>bunsen-core</artifactId>
            <version>0.5.7</version>
        </dependency>
        <dependency>
            <groupId>com.cerner.bunsen</groupId>
            <artifactId>bunsen-spark</artifactId>
            <version>0.5.7</version>
        </dependency>

        <!--
        to resolve java.lang.IllegalAccessError:
        "tried to access method com.google.common.base.Stopwatch.<init>()V from class
        org.apache.hadoop.mapreduce.lib.input.FileInputFormat"
        -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.2</version>
        </dependency>

        <!-- Spark dependencies -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.4.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.4.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>2.4.5</version>
        </dependency>
    </dependencies>

Many thanks,

Dave

The R4 version is not currently supported, due to the breaking changes in the 0.5.X releases. It is on our roadmap, but we do not yet have an ETA.

If you are just trying to explore sample data, test with version 0.4.6, which supports both STU3 and R4. Please note that older versions are no longer maintained.

Thanks,
Amaresh
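Following Amaresh's suggestion, a minimal POM change would be to align all Bunsen artifacts on the 0.4.6 release instead of mixing 0.4.5 and 0.5.7 (the stack trace shows the 0.5.7 StructureDefinitions rejecting R4). This is only a sketch: the exact module list for 0.4.6, in particular whether a separate bunsen-spark artifact exists at that version, should be verified against the published release.

```xml
<!-- Sketch: pin the Bunsen artifacts to 0.4.6, which reportedly supports
     both STU3 and R4. Verify artifact names against the actual 0.4.6
     release before use. -->
<dependency>
    <groupId>com.cerner.bunsen</groupId>
    <artifactId>bunsen-r4</artifactId>
    <version>0.4.6</version>
</dependency>
<dependency>
    <groupId>com.cerner.bunsen</groupId>
    <artifactId>bunsen-core</artifactId>
    <version>0.4.6</version>
</dependency>
```

Running mvn dependency:tree afterwards is a quick way to confirm that no 0.5.x Bunsen artifact is still being pulled in transitively.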