File uploaded with ng-flow to the GAE Blobstore is always named 'blob'

Tags: python, angularjs, google-app-engine, file-upload, flow-js

I am trying to create a page for uploading images to the Google App Engine Blobstore, using AngularJS and ng-flow.

The upload part seems to work fine, except that every blob is stored as 'application/octet-stream' and named 'blob'. How do I get the Blobstore to recognize the file name and content type?

Here is the code I use to upload the file.

Inside FlowEventsCtrl:

$scope.$on('flow::filesSubmitted', function (event, $flow, files) {
            $http.get('/files/upload/create').then(function (resp) {
                $flow.opts.target = resp.data.url;
                $flow.upload();
            });
        });
Inside view.html:

<div flow-init="{testChunks:false, singleFile:true}" 
     ng-controller="FlowEventsCtrl">
    <div class="panel">
        <span flow-btn>Upload File</span>
    </div>
    <div class="show-files">...</div>
</div>

The server side is as specified in the linked documentation.

Thanks.

I have solved my problem, and in hindsight the answer seems obvious. Flow.js and the Blobstore upload URL do different things. I'll leave my explanation below for anyone who makes the same naive mistake I did.

The Blobstore expects a single field containing the file. That field carries the file name and content type of the uploaded data, and the data is stored in the Blobstore as a file. By default this field is named 'file'.

Flow uploads the data in chunks and includes a number of extra fields for the file name and other metadata. The actual chunk data is uploaded in a field that reports its file name as 'blob' and its content type as 'application/octet-stream'. The server is expected to store the chunks and reassemble them into the file. Because each chunk is only a part of the file, not the whole file, it is neither named after the file nor given the file's content type. By default this field is also named 'file'.
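To make that field mismatch concrete, here is a self-contained sketch (stdlib only; the boundary and field values are made up for illustration) that parses a simulated Flow chunk POST the way the server sees it:

```python
from email.parser import BytesParser
from email.policy import default

# Hypothetical boundary and values, for illustration only.
boundary = "flow-demo-boundary"
form = (
    "--{b}\r\n"
    'Content-Disposition: form-data; name="flowFilename"\r\n'
    "\r\n"
    "photo.jpg\r\n"
    "--{b}\r\n"
    'Content-Disposition: form-data; name="file"; filename="blob"\r\n'
    "Content-Type: application/octet-stream\r\n"
    "\r\n"
    "RAW CHUNK BYTES\r\n"
    "--{b}--\r\n"
).format(b=boundary)

raw = ("Content-Type: multipart/form-data; boundary={b}\r\n"
       "\r\n{f}").format(b=boundary, f=form).encode("ascii")

msg = BytesParser(policy=default).parsebytes(raw)
parts = {part.get_param("name", header="content-disposition"): part
         for part in msg.iter_parts()}

# The real file name travels in its own form field...
print(parts["flowFilename"].get_content().strip())  # photo.jpg
# ...while the chunk field itself claims to be an octet-stream named 'blob'.
print(parts["file"].get_filename())                 # blob
print(parts["file"].get_content_type())             # application/octet-stream
```

A naive Blobstore handler that just stores the 'file' field therefore stores one chunk with the name 'blob', which is exactly the symptom in the question.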

So the answer to the question is: the files were stored as 'application/octet-stream' and named 'blob' because I was storing the chunks rather than the actual file. That I was able to store anything at all appears to be because both fields use the same default name.

The solution was therefore to write my own handler for the Flow requests:

import json
import logging
import os

import webapp2
import cloudstorage as gcs
from google.appengine.api import app_identity, images
from google.appengine.ext import blobstore

log = logging.getLogger(__name__)


class ImageUploadHandler(webapp2.RequestHandler):
    def post(self):
        # Flow.js sends its metadata as ordinary form fields
        # alongside each chunk.
        chunk_number = int(self.request.params.get('flowChunkNumber'))
        chunk_size = int(self.request.params.get('flowChunkSize'))
        current_chunk_size = int(self.request.params.get('flowCurrentChunkSize'))
        total_size = int(self.request.params.get('flowTotalSize'))
        total_chunks = int(self.request.params.get('flowTotalChunks'))
        identifier = str(self.request.params.get('flowIdentifier'))
        filename = str(self.request.params.get('flowFilename'))
        # The 'file' field holds only this chunk's bytes, named 'blob'.
        data = self.request.params.get('file')

        f = ImageFile(filename, identifier, total_chunks, chunk_size, total_size)
        f.write_chunk(chunk_number, current_chunk_size, data)

        if f.ready_to_build():
            # All chunks are present: assemble the final file.
            info = f.build()
            if info:
                self.response.headers['Content-Type'] = 'application/json'
                self.response.out.write(json.dumps(info.as_dict()))
            else:
                self.error(500)
        else:
            # Acknowledge this chunk; more are still expected.
            self.response.headers['Content-Type'] = 'application/json'
            self.response.out.write(json.dumps({
                'chunkNumber': chunk_number,
                'chunkSize': chunk_size,
                'message': 'Chunk ' + str(chunk_number) + ' written'
            }))
Where ImageFile is a class that writes the file to Google Cloud Storage.
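One fragile spot in the handler is that a missing form field makes the bare int(...) calls raise a confusing TypeError. A defensive variant can coerce and validate the Flow fields first; this helper is hypothetical (not part of the original handler) and works on any mapping, such as self.request.params:

```python
def parse_flow_params(params):
    # Coerce the Flow.js form fields the handler reads; raise a
    # descriptive ValueError if a required numeric field is absent.
    # `params` is any dict-like mapping of form fields.
    def as_int(name):
        value = params.get(name)
        if value is None:
            raise ValueError('missing field: ' + name)
        return int(value)

    return {
        'chunk_number': as_int('flowChunkNumber'),
        'chunk_size': as_int('flowChunkSize'),
        'current_chunk_size': as_int('flowCurrentChunkSize'),
        'total_size': as_int('flowTotalSize'),
        'total_chunks': as_int('flowTotalChunks'),
        'identifier': params.get('flowIdentifier', ''),
        'filename': params.get('flowFilename', ''),
    }
```

The handler could then respond with a 400 instead of a 500 when a client sends a malformed chunk request.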

Edit:

Below is the ImageFile class. The only thing missing is the ImageInfo class, a simple model that stores the generated serving URL and the file name.

class ImageFile:
    def __init__(self, filename, identifier, total_chunks, chunk_size, total_size):
        self.bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name())
        self.original_filename = filename
        self.filename = '/' + self.bucket_name + '/' + self.original_filename
        self.identifier = identifier
        self.total_chunks = total_chunks
        self.chunk_size = chunk_size
        self.total_size = total_size
        self.stat = None
        self.chunks = []
        self.load_stat()
        self.load_chunks(identifier, total_chunks)

    def load_stat(self):
        try:
            self.stat = gcs.stat(self.filename)
        except gcs.NotFoundError:
            self.stat = None

    def load_chunks(self, identifier, number_of_chunks):
        for n in range(1, number_of_chunks + 1):
            self.chunks.append(Chunk(self.bucket_name, identifier, n))

    def exists(self):
        return not not self.stat

    def content_type(self):
        if self.filename.lower().endswith('.jpg'):
            return 'image/jpeg'
        elif self.filename.lower().endswith('.jpeg'):
            return 'image/jpeg'
        elif self.filename.lower().endswith('.png'):
            return 'image/png'
        elif self.filename.lower().endswith('.gif'):
            return 'image/gif'
        else:
            return 'binary/octet-stream'

    def ready(self):
        return self.exists() and self.stat.st_size == self.total_size

    def ready_chunks(self):
        for c in self.chunks:
            if not c.exists():
                return False
        return True

    def delete_chunks(self):
        for c in self.chunks:
            c.delete()

    def ready_to_build(self):
        return not self.ready() and self.ready_chunks()

    def write_chunk(self, chunk_number, current_chunk_size, data):
        chunk = self.chunks[int(chunk_number) - 1]
        chunk.write(current_chunk_size, data)

    def build(self):
        try:
            log.info('File \'' + self.filename + '\': assembling chunks.')
            write_retry_params = gcs.RetryParams(backoff_factor=1.1)
            gcs_file = gcs.open(self.filename,
                                'w',
                                content_type=self.content_type(),
                                options={'x-goog-meta-identifier': self.identifier},
                                retry_params=write_retry_params)
            for c in self.chunks:
                log.info('Writing chunk ' + str(c.chunk_number) + ' of ' + str(self.total_chunks))
                c.write_on(gcs_file)
            gcs_file.close()
        except Exception, e:
            log.error('File \'' + self.filename + '\': Error during assembly - ' + e.message)
        else:
            self.delete_chunks()
            key = blobstore.create_gs_key('/gs' + self.filename)
            url = images.get_serving_url(key)
            info = ImageInfo(name=self.original_filename, url=url)
            info.put()
            return info
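As an aside, the extension-to-type mapping that content_type() hand-rolls (jpg/jpeg, png, gif) is also available from the standard library; a small sketch with the same fallback default:

```python
import mimetypes

def guess_image_content_type(filename, default='binary/octet-stream'):
    # mimetypes covers .jpg/.jpeg -> image/jpeg, .png -> image/png
    # and .gif -> image/gif, the same branches as content_type(),
    # plus many more types; unknown extensions fall back to default.
    guessed, _encoding = mimetypes.guess_type(filename.lower())
    return guessed or default

print(guess_image_content_type('Photo.JPG'))  # image/jpeg
```

Using the stdlib mapping would also have avoided the '.git' typo for GIF files in the original branch chain.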
The Chunk class:

class Chunk:
    def __init__(self, bucket_name, identifier, chunk_number):
        self.chunk_number = chunk_number
        self.filename = '/' + bucket_name + '/' + identifier + '-chunk-' + str(chunk_number)
        self.stat = None
        self.load_stat()

    def load_stat(self):
        try:
            self.stat = gcs.stat(self.filename)
        except gcs.NotFoundError:
            self.stat = None

    def exists(self):
        return not not self.stat

    def write(self, size, data):
        write_retry_params = gcs.RetryParams(backoff_factor=1.1)
        gcs_file = gcs.open(self.filename, 'w', retry_params=write_retry_params)
        for c in data.file:
            gcs_file.write(c)
        gcs_file.close()
        self.load_stat()

    def write_on(self, stream):
        gcs_file = gcs.open(self.filename)

        try:
            data = gcs_file.read()
            while data:
                stream.write(data)
                data = gcs_file.read()
        except gcs.Error, e:
            log.error('Error writing data to chunk: ' + e.message)
        finally:
            gcs_file.close()

    def delete(self):
        try:
            gcs.delete(self.filename)
            self.stat = None
        except gcs.NotFoundError:
            pass
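The order-by-chunk-number reassembly that build() performs against GCS can be exercised without any cloud dependencies; a minimal in-memory sketch, where a dict of numbered byte strings stands in for the stored chunk objects:

```python
import io

def assemble(chunks):
    # chunks: dict mapping flowChunkNumber (1-based) -> bytes.
    # Concatenates the pieces in ascending chunk order, mirroring
    # how build() iterates self.chunks when writing the final file.
    out = io.BytesIO()
    for number in sorted(chunks):
        out.write(chunks[number])
    return out.getvalue()

print(assemble({2: b'world', 1: b'hello '}))  # b'hello world'
```

Chunks can arrive out of order, which is why the rebuild sorts by chunk number rather than by arrival time.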

Comment: Could you please also show the ImageFile class? I'm interested in your implementation.
Reply: No problem, I've added it now.
Comment: I tried this solution, but it writes all the chunks as separate files.