Java Spark SQL query fails

Using Spark 2 / Java / Cassandra 2.2. When I try to run a simple Spark SQL query I get the error below. I have tried variations such as 'LAX', and '=' instead of '==':

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`LAX`' given input columns: [transdate, origin]; line 1 pos 42;
'Project ['origin]
+- 'Filter (origin#1 = 'LAX)
   +- SubqueryAlias origins
      +- LogicalRDD [transdate#0, origin#1]

// Read the Cassandra table into an RDD of TransByDate beans,
// aliasing trans_date to transdate
JavaRDD<TransByDate> originDateRDD = javaFunctions(sc)
        .cassandraTable("trans", "trans_by_date", CassandraJavaUtil.mapRowTo(TransByDate.class))
        .select(CassandraJavaUtil.column("origin"),
                CassandraJavaUtil.column("trans_date").as("transdate"));

long cnt1 = originDateRDD.count();
System.out.println("sqlLike originDateRDD.count: " + cnt1); // prints 406000

Dataset<Row> originDF = sparks.createDataFrame(originDateRDD, TransByDate.class);
originDF.createOrReplaceTempView("origins");

// The failing query: note that LAX is appended without quotes
Dataset<Row> originlike = sparks.sql("SELECT origin FROM origins WHERE origin =="+ "LAX");
In case it helps, I have Hive support enabled.

Thanks

Hive is not the problem. This is where your issue lies:

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`LAX`' given input columns: [transdate, origin]; line 1 pos 42;


This means that none of the input columns is named LAX: Spark is parsing LAX as a column reference rather than as a string literal. In the Scala DSL you would use === when matching against a key in a column, or, more idiomatically, something like origins.filter($"origin" === "LAX").
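Since the question is in Java, the same filter can be expressed with the typed Column API, which sidesteps the literal-quoting problem entirely. A minimal sketch, assuming the originDF Dataset built in the question:

import static org.apache.spark.sql.functions.col;

// equalTo() treats "LAX" as a value, so it can never be mistaken
// for a column reference the way an unquoted SQL token can.
Dataset<Row> laxRows = originDF.filter(col("origin").equalTo("LAX"));
laxRows.show();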

Put the column value inside single quotes. Your query should look like this:

Dataset<Row> originlike = spark.sql("SELECT origin FROM origins WHERE origin == "+"'LAX'");
You can refer to the Spark SQL documentation for more details.

A LIKE query should look like this:

Dataset<Row> originlike = spark.sql("SELECT origin FROM origins WHERE origin like 'LA%'");
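If you prefer, the query string can also be assembled with String.format, which keeps the quoting easier to see. A small sketch, assuming the same origins view:

// The single quotes around %s are part of the SQL literal.
String query = String.format("SELECT origin FROM origins WHERE origin == '%s'", "LAX");
Dataset<Row> originlike = spark.sql(query);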

I am using Java, and when I now try SELECT origin FROM origins WHERE origin ===" + "LAX I get: Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException: mismatched input 'FROM' expecting {<EOF>, 'WHERE', 'GROUP', 'ORDER', 'HAVING', 'LIMIT', 'LATERAL', 'WINDOW', 'UNION', 'EXCEPT', 'INTERSECT', 'SORT', 'CLUSTER', 'DISTRIBUTE'} (line 1, pos 14) == SQL == SELECT origin FROM origins WHERE origin === LAX. My DataFrame has only the two columns, origin and transdate. When I try a LIKE 'L%' query I get the same error. The column names I actually need are correct in the DataFrame, I can print and see them, but df.show() throws assorted exceptions. Whatever I try, the problem persists: == or ===, LAX or 'LAX', and likewise with LIKE 'L%'.

I finally worked it out with LIKE '" + a + "%'", so the % wildcard itself also needs to be inside the quotes, as in 'L%'. And yes, == " + "'LAX'" works. I thought I had already tried that combination, but there must have been some other mistake at the time, who knows. Note that like " + a + "%" with a = "L" still does not work: it renders as SELECT origin FROM origins WHERE origin like L% and gives the same mismatched input 'FROM' exception.
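To summarize what the comments converged on, here is a small Java sketch of the wrong and right ways to splice a variable into the query string; the variable name a is illustrative:

String a = "L";
// Wrong: renders as ... WHERE origin like L% and fails to parse,
// because the quotes and the % wildcard are not part of the SQL literal.
String bad = "SELECT origin FROM origins WHERE origin like " + a + "%";
// Right: renders as ... WHERE origin like 'L%'
String good = "SELECT origin FROM origins WHERE origin like '" + a + "%'";
Dataset<Row> byPrefix = spark.sql(good);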