Python ValueError: as_list() is not defined on an unknown TensorShape


I am working from this example, and this is what I have so far:

jobs_train, jobs_test = jobs_df.randomSplit([0.6, 0.4])
zuckerberg_train, zuckerberg_test = zuckerberg_df.randomSplit([0.6, 0.4])
train_df = jobs_train.unionAll(zuckerberg_train)
test_df = jobs_test.unionAll(zuckerberg_test)

from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline
from sparkdl import DeepImageFeaturizer

featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features", modelName="InceptionV3")
lr = LogisticRegression(maxIter=20, regParam=0.05, elasticNetParam=0.3, labelCol="label")
p = Pipeline(stages=[featurizer, lr])
p_model = p.fit(train_df)
which produces:

2018-06-08 20:57:18.985543: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    INFO:tensorflow:Froze 376 variables.
    Converted 376 variables to const ops.
    Using TensorFlow backend.
    Using TensorFlow backend.
    INFO:tensorflow:Froze 0 variables.
    Converted 0 variables to const ops.
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/opt/spark/python/pyspark/ml/base.py", line 64, in fit
        return self._fit(dataset)
      File "/opt/spark/python/pyspark/ml/pipeline.py", line 106, in _fit
        dataset = stage.transform(dataset)
      File "/opt/spark/python/pyspark/ml/base.py", line 105, in transform
        return self._transform(dataset)
      File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_spark-deep-learning-0.1.0-spark2.1-s_2.11.jar/sparkdl/transformers/named_image.py", line 159, in _transform
      File "/opt/spark/python/pyspark/ml/base.py", line 105, in transform
        return self._transform(dataset)
      File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_spark-deep-learning-0.1.0-spark2.1-s_2.11.jar/sparkdl/transformers/named_image.py", line 222, in _transform
      File "/opt/spark/python/pyspark/ml/base.py", line 105, in transform
        return self._transform(dataset)
      File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_spark-deep-learning-0.1.0-spark2.1-s_2.11.jar/sparkdl/transformers/tf_image.py", line 142, in _transform
      File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 211, in map_rows
      File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 132, in _map
      File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 66, in _add_shapes
      File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 35, in _get_shape
      File "/home/sulistyo/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py", line 900, in as_list
        raise ValueError("as_list() is not defined on an unknown TensorShape.")
    ValueError: as_list() is not defined on an unknown TensorShape.

Any help is appreciated, thanks.

Read the images and create your training and test sets as follows:

from pyspark.sql.functions import lit
from sparkdl.image import imageIO

img_dir = "/PATH/TO/personalities/"

jobs_df = imageIO.readImagesWithCustomFn(img_dir + "/jobs", decode_f=imageIO.PIL_decode).withColumn("label", lit(1))
zuckerberg_df = imageIO.readImagesWithCustomFn(img_dir + "/zuckerberg", decode_f=imageIO.PIL_decode).withColumn("label", lit(0))

Thank you very much. In my case the problem was that I had forgotten to use Python 2, since the source code targets Python 2.
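Since the root cause here was running a Python-2-era library under the wrong interpreter, a quick guard at the top of a script can fail fast instead of surfacing a confusing TensorShape error deep inside the pipeline. This is only a sketch, not part of the original answer; the function name and the assumption that sparkdl 0.1.0 targets Python 2 are mine:

```python
import sys

def interpreter_mismatch(required_major):
    """Return True when the running interpreter's major version differs
    from the one a library is known to target (e.g. 2 for sparkdl 0.1.0,
    which is an assumption based on this thread, not official docs)."""
    return sys.version_info.major != required_major

# Checking against the interpreter's own major version never flags a mismatch:
print(interpreter_mismatch(sys.version_info.major))  # prints False
```

Dropping an explicit check like `if interpreter_mismatch(2): sys.exit("run under Python 2")` before building the Pipeline makes the version requirement visible to the next person who hits this error.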