Python: loading a Keras model in AWS Lambda
I am trying to deploy a NN model that I trained locally on my machine with Keras. Now I need to use the same model in a Lambda function. I uploaded the model to an S3 bucket, and then tried to access the file like this:
model = load_model("https://s3-eu-west-1.amazonaws.com/my-bucket/models/model.h5")
but it tells me the file does not exist. I suppose it is a permissions issue. I also tried the following (similar to the way I read JSON files from S3):
but I got this error:
{
"stackTrace": [
[
"/var/task/lambda_function.py",
322,
"lambda_handler",
"model = load_model(result[\"Body\"].read())"
],
[
"/var/task/keras/models.py",
227,
"load_model",
"with h5py.File(filepath, mode='r') as f:"
],
[
"/var/task/h5py/_hl/files.py",
269,
"__init__",
"fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)"
],
[
"/var/task/h5py/_hl/files.py",
99,
"make_fid",
"fid = h5f.open(name, flags, fapl=fapl)"
],
[
"h5py/_objects.pyx",
54,
"h5py._objects.with_phil.wrapper",
null
],
[
"h5py/_objects.pyx",
55,
"h5py._objects.with_phil.wrapper",
null
],
[
"h5py/h5f.pyx",
78,
"h5py.h5f.open",
null
],
[
"h5py/defs.pyx",
621,
"h5py.defs.H5Fopen",
null
],
[
"h5py/_errors.pyx",
123,
"h5py._errors.set_exception",
null
]
],
"errorType": "UnicodeDecodeError",
"errorMessage": "'utf8' codec can't decode byte 0x89 in position 29: invalid start byte"
}
I suspect that the result["Body"].read() bytestring cannot be used in place of an h5py file. What is the best way to load an h5py model from S3?
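The error itself hints at the cause: load_model handed the raw bytes to h5py as if they were a file *name*, and h5py tried to decode that "name" as UTF-8. HDF5 is a binary format, and bytes such as 0x89 (the first byte of the HDF5 signature) are never valid UTF-8, as a minimal sketch shows:

```python
# A few bytes of binary HDF5-style content; 0x89 can never start a valid
# UTF-8 sequence, so decoding it as if it were a filename fails.
binary_content = b"\x89HDF\r\n\x1a\n"
try:
    binary_content.decode("utf8")
except UnicodeDecodeError as e:
    print(type(e).__name__)  # → UnicodeDecodeError
```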
Solution: the fix was to download the file into the /tmp/ folder first:
client_s3.download_file("my-bucket", "model.h5", "/tmp/model.h5")  # returns None
model = load_model("/tmp/model.h5")
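Since a Lambda container is often reused between invocations, the download and deserialization can be skipped on warm starts. A minimal sketch of that caching pattern (get_model, the download/load callables, and the path are illustrative stand-ins, not part of boto3 or Keras):

```python
import os

_cache = {}

def get_model(path, download, load):
    """Download and deserialize the model only once per warm container.

    path:     where to keep the file (on Lambda, somewhere under /tmp)
    download: callable that writes the S3 object to `path`,
              e.g. lambda p: client_s3.download_file("my-bucket", "model.h5", p)
    load:     callable that turns the path into a model object,
              e.g. keras.models.load_model
    """
    if "model" not in _cache:
        if not os.path.exists(path):
            download(path)
        _cache["model"] = load(path)
    return _cache["model"]

# Demo with stand-ins that just record how often they actually run:
calls = []
m1 = get_model("/tmp/demo-model.h5", lambda p: calls.append("dl"),
               lambda p: calls.append("ld") or "model")
m2 = get_model("/tmp/demo-model.h5", lambda p: calls.append("dl"),
               lambda p: calls.append("ld") or "model")
print(calls.count("ld"), m1 is m2)  # → 1 True
```

The second call returns the cached object without touching S3 or the filesystem.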
Problem
boto3.client("s3")...get_object(...)["Body"].read() returns a bytestring, but keras.models.load_model expects a file path.
Solution
Store the file somewhere first; the /tmp/ folder can come in handy here. This is what worked for me.
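"Somewhere" can also be a temporary file from the standard tempfile module rather than a hard-coded /tmp path (on Lambda, tempfile defaults to /tmp anyway). A sketch under that assumption — bytes_to_tmp_file is a hypothetical helper, and the dummy bytes stand in for result["Body"].read():

```python
import os
import tempfile

def bytes_to_tmp_file(data, suffix=".h5"):
    """Write a bytestring to a temp file; the returned path suits load_model()."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path

# Dummy bytes standing in for result["Body"].read():
path = bytes_to_tmp_file(b"\x89HDF\r\n\x1a\n")
print(os.path.getsize(path), path.endswith(".h5"))  # → 8 True
# afterwards: model = load_model(path)
```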
Now loaded_model can be used to make predictions. Based on the link @Raji posted, I found out that Keras can load a model file and/or weights file directly from Google Cloud Storage.
First set up a new project on Google Cloud, create a new bucket on Google Cloud Storage, and upload your model file there — or, in my case, the weights file (example address "gs://my-bucket/my_weights.hdf5"). Generate new service-account credentials, download the corresponding JSON file into your repo, and note its absolute file path (e.g. "/path/to/repo/my_credentials.json").
Set the environment variables:
# .env
GOOGLE_APPLICATION_CREDENTIALS="/path/to/repo/my_credentials.json"
MODEL_WEIGHTS_FILEPATH="gs://my-bucket/my_weights.hdf5"
FYI: until I set the GOOGLE_APPLICATION_CREDENTIALS env var to satisfy the implicit credentials check, I was getting errors like "All attempts to get a Google authentication bearer token failed".
Load the weights from Google Cloud Storage:
import os
from dotenv import load_dotenv

load_dotenv()

MODEL_WEIGHTS_FILEPATH = os.getenv("MODEL_WEIGHTS_FILEPATH")
print(MODEL_WEIGHTS_FILEPATH)

# optional check
import tensorflow as tf
tf.io.gfile.exists(MODEL_WEIGHTS_FILEPATH)

model = unweighted_model()  # your keras model
model.load_weights(MODEL_WEIGHTS_FILEPATH)  # load_weights updates the model in place
model.predict("zyx")  # predict or do whatever with your model
Is the model.h5 file in the bucket publicly accessible? If not, download the file with the AWS SDK for S3, save it to the /tmp folder, and load the model from there.
It is not public; I solved it by downloading the file. I updated the question with the answer.
Alternatively, the model can be loaded without writing to disk at all, by handing h5py an in-memory file image:
import contextlib

import boto3
import h5py
from keras.models import load_model

s3 = boto3.resource('s3')
obj = s3.Object(bucket_name, model_file_name)  # .h5 file
body = obj.get()['Body'].read()

# Build a core (in-memory) file-access property list backed by the bytes
file_access_property_list = h5py.h5p.create(h5py.h5p.FILE_ACCESS)
file_access_property_list.set_fapl_core(backing_store=False)
file_access_property_list.set_file_image(body)

file_id_args = {
    'fapl': file_access_property_list,
    'flags': h5py.h5f.ACC_RDONLY,
    'name': b'this should never matter',
}
h5_file_args = {
    'backing_store': False,
    'driver': 'core',
    'mode': 'r',
}
with contextlib.closing(h5py.h5f.open(**file_id_args)) as file_id:
    with h5py.File(file_id, **h5_file_args) as h5_file:
        loaded_model = load_model(h5_file)  # from keras.models