How to write numpy arrays directly to S3 from a Spark-backed deep learning application


We are using Keras to generate roughly 10k numpy arrays, which we ultimately have to save to S3 as .npy files. The problem is that, in order to save to S3 from inside Spark's map function, we have to create intermediate files instead of streaming the arrays directly to S3. I tried the "cottoncandy" library, but it does not work inside the Spark map function and throws the following error:

pickle.PicklingError: Could not serialize object: TypeError: can't pickle thread.lock objects
Is there any method or library we can use in a deep learning application, inside a Spark map function, to stream numpy arrays directly to S3?
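For reference, numpy can serialize an array into an in-memory buffer that boto3 can upload without ever touching local disk; a minimal sketch of that idea (the bucket name and key below are placeholders, and credentials are assumed to come from the environment or an instance role):

import io

import boto3
import numpy as np

def save_npy_to_s3(array, bucket, key):
    # serialize the array into an in-memory buffer instead of a local file
    buf = io.BytesIO()
    np.save(buf, array)
    buf.seek(0)
    # credentials are taken from the environment / instance role
    s3 = boto3.client('s3')
    s3.upload_fileobj(buf, bucket, key)

# hypothetical usage with placeholder names
save_npy_to_s3(np.zeros((4096,)), 'BUCKET_NAME', 'features/example.npy')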

My RDD of numpy arrays is:

features_rdd
Options I have tried:

import os
import cottoncandy as cc

def writePartition(xs):
    # create the cottoncandy interface inside the partition function
    cci = cc.get_interface('BUCKET_NAME', ACCESS_KEY=os.environ.get("AWS_ACCESS_KEY_ID"),
                           SECRET_KEY=os.environ.get("AWS_SECRET_ACCESS_KEY"),
                           endpoint_url='https://s3.amazonaws.com')
    # output_path, format_name
    for k, v in xs:
        file_name_with_domain = get_file_with_parents(k, 1)
        file_name = ...
        file_name_without_ext = get_file_name_without_ext(file_name)
        bucket_name = OUTPUT.split('/', 1)[0]
        rest_of_path = OUTPUT.split('/', 1)[1]
        final_path = rest_of_path + '/' + file_name_without_ext + '.npy'

        LOGGER.info("Saving to S3....")
        response = cci.upload_npy_array(final_path, v)


features_rdd.foreachPartition(writePartition)
Option 2:

import os

import boto3
import numpy as np

def writePartition1(xs):
    # create the boto3 client inside the partition function
    s3 = boto3.client('s3', region_name='us-east-1')
    for k, v in xs:
        ...
        ...
        # save to a local temp file, upload it, then delete it
        np.save(local_dir_full_path, v)
        s3.upload_file(local_dir_full_path, 'BUCKET', s3_full_path)
        os.remove(local_dir_full_path)


features_rdd.foreachPartition(writePartition1)
Error:

File "/usr/lib64/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib64/python2.7/pickle.py", line 655, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib64/python2.7/pickle.py", line 687, in _batch_setitems
    save(v)
  File "/usr/lib64/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib64/python2.7/pickle.py", line 606, in save_list
    self._batch_appends(iter(obj))
  File "/usr/lib64/python2.7/pickle.py", line 642, in _batch_appends
    save(tmp[0])
  File "/usr/lib64/python2.7/pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/cloudpickle.py", line 600, in save_reduce
    save(state)
  File "/usr/lib64/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib64/python2.7/pickle.py", line 655, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib64/python2.7/pickle.py", line 687, in _batch_setitems
    save(v)
  File "/usr/lib64/python2.7/pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/cloudpickle.py", line 600, in save_reduce
    save(state)
  File "/usr/lib64/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib64/python2.7/pickle.py", line 655, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib64/python2.7/pickle.py", line 687, in _batch_setitems
    save(v)
  File "/usr/lib64/python2.7/pickle.py", line 306, in save
    rv = reduce(self.proto)
TypeError: can't pickle thread.lock objects
Traceback (most recent call last):
  File "six_file_boto3_write1.py", line 248, in <module>
    run()
  File "six_file_boto3_write1.py", line 239, in run
    features_rdd.foreachPartition(writePartitionWithBoto)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 799, in foreachPartition
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 1041, in count
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 1032, in sum
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 906, in fold
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 809, in collect
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 2455, in _jrdd
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 2388, in _wrap_function
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/rdd.py", line 2374, in _prepare_for_python_RDD
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/serializers.py", line 464, in dumps
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/cloudpickle.py", line 704, in dumps
  File "/mnt/yarn/usercache/hadoop/appcache/application_1541683970451_0003/container_1541683970451_0003_01_000001/pyspark.zip/pyspark/cloudpickle.py", line 162, in dump
pickle.PicklingError: Could not serialize object: TypeError: can't pickle thread.lock objects
So basically the application works perfectly fine up to the point where the features are extracted; I can even verify the count. But when I try to save those features, it does not work. I have added the imports above.
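The traceback shows the PicklingError being raised while cloudpickle serializes the function passed to foreachPartition, which suggests that something the function refers to (for example a module-level client, logger, or model that holds a lock) cannot be pickled. One way to narrow it down, assuming the standalone cloudpickle package is installed on the driver, is to try serializing the function and the module-level objects it uses outside of Spark:

import cloudpickle

# try to serialize the partition function the same way Spark does
try:
    cloudpickle.dumps(writePartition)
except Exception as e:
    print('partition function fails to pickle: {}'.format(e))

# check the module-level names the partition function refers to, one by one
for name in ('cc', 'LOGGER', 'OUTPUT'):
    try:
        cloudpickle.dumps(globals().get(name))
    except Exception as e:
        print('{} is not picklable: {}'.format(name, e))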

Update:

# (imports were not shown in the original post; these are inferred from the snippet)
from io import BytesIO

import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from pyspark.sql import SparkSession

def extract_features(model, obj):
    try:
        print('executing vgg16 feature extractor...')
        # decode the raw bytes coming from binaryFiles and resize for VGG16
        img = image.load_img(BytesIO(obj), target_size=(224, 224, 3))
        img_data = image.img_to_array(img)
        img_data = np.expand_dims(img_data, axis=0)
        img_data = preprocess_input(img_data)
        vgg16_feature = model.predict(img_data)[0]
        print('++++++++++++++++++++++++++++', vgg16_feature.shape)
        return vgg16_feature
    except Exception as e:
        print('Error......{}'.format(e.args))
        return []

def extract_features_(xs):
    # the model is built once per partition, on the executor
    model_data = initVGG16()
    for k, v in xs:
        yield k, extract_features(model_data, v)

spark = SparkSession \
    .builder \
    .appName('test-app') \
    .getOrCreate()

sc = spark.sparkContext
s3_files_rdd = sc.binaryFiles(RESOLVED_IMAGE_PATH)
s3_files_rdd.persist()

features_rdd = s3_files_rdd.mapPartitions(extract_features_)
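Since features_rdd yields (path, feature array) pairs, the same in-memory approach sketched earlier could be applied per partition, with the boto3 client created inside the function so that nothing from the driver has to be pickled; a rough sketch (the key derivation and bucket name are placeholders):

import io

import boto3
import numpy as np

def write_features_partition(pairs):
    # everything S3-related is created here, on the executor
    s3 = boto3.client('s3', region_name='us-east-1')
    for path, feature in pairs:
        buf = io.BytesIO()
        np.save(buf, np.asarray(feature))
        buf.seek(0)
        # placeholder key derivation; adapt to the real output layout
        key = 'features/' + path.rsplit('/', 1)[-1] + '.npy'
        s3.upload_fileobj(buf, 'BUCKET_NAME', key)

features_rdd.foreachPartition(write_features_partition)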

Can you show some of the Spark code you have tried so far? Updated the post: option 1 uses cottoncandy and option 2 uses boto3. Is this the full stack trace?
pickle.PicklingError: Could not serialize object: TypeError: can't pickle thread.lock objects
@karma4917 you can check the full stack trace. I think I am getting close. What imports are you using?