Python: Unable to import module 'app' in AWS Lambda


I have the app.py file at the root of my app.zip file. The function handler is also defined correctly according to the handler configuration (lambda_handler):
app.lambda_handler

However, I get the error:
Unable to import module 'app': No module named app

Where am I going wrong?

My script:

from __future__ import print_function

import json
import urllib
import boto3
from collections import Counter
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from nltk.stem.porter import *
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
stemmer=PorterStemmer()
import sys  
reload(sys) 
sys.setdefaultencoding('utf8')


print('Loading function')

s3 = boto3.client('s3')

number_of_sentences=0
number_of_words=0
word_list=[]
stop_words=set(stopwords.words('english'))
stop_word_list=[ v for v in stop_words]
modal_verbs=['can', 'could', 'may', 'might', 'must', 'shall', 'should', 'will' ,'would','ought']
auxilary_verbs=['be','do','have']
stop_word_list=stop_word_list+modal_verbs+auxilary_verbs
print("Starting Trigram generation")
#Empty Trigram list 
tri_gram_list=[]

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    '''
    '''
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        text = response['Body'].read()
        print(type(text))
        for line in text.readlines():
            for line in open("input.txt","r").readlines():
                line=unicode(line, errors='ignore')
                if len(line)>1:
                    sentences=sent_tokenize(line)
                    number_of_sentences+=len(sentences)
                    for sentence in sentences: 
                        sentence=sentence.strip().lower()
                        #sentence = sentence.replace('+', ' ').replace('.', ' ').replace(',', ' ').replace(':', ' ').replace('(', ' ').replace(')', ' ').replace('`', ' ').strip().lower()
                        words_from_sentence=tokenizer.tokenize(line) 
                        words = [word for word in words_from_sentence if word not in stop_word_list]
                        number_of_words+=len(words)
                        stemmed_words = [stemmer.stem(word) for word in words]
                        word_list.extend(stemmed_words)
                        #generate Trigrams
                        tri_gram_list_t= [ " ".join([words[index],words[index+1],words[index+2]]) for index,value in enumerate(words) if index<len(words)-2]
                        #print tri_gram_list
                        tri_gram_list.extend(tri_gram_list_t)

        print number_of_words
        print number_of_sentences
        print("Conting frequency now...")
        count=Counter()
        for element in tri_gram_list:
            #print element, type(tri_gram_list)
            count[element]=count[element]+1
        print count.most_common(25)
        print "most common 25 words ARE:"
        for element in word_list:
            #print element, type(tri_gram_list)
            count[element]=count[element]+1
        print count.most_common(25)




        # body = obj.get()['Body'].read()

    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
Try checking the log output. It will give you much more information than the error you are seeing above.
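If the console does not show enough detail, the execution log can also be pulled programmatically. The snippet below is a minimal sketch, not part of the original answer: it assumes the deployed function is actually named "app" and that AWS credentials are configured locally. It invokes the function once with an empty event and decodes the log tail that Lambda returns; the import traceback shows up in that output even though the handler itself never runs.

import base64
import json
import boto3

lambda_client = boto3.client('lambda')

# Invoke the function once and ask Lambda to return the last 4 KB of its log.
# 'app' is a placeholder; use the real function name from the Lambda console.
response = lambda_client.invoke(
    FunctionName='app',
    LogType='Tail',
    Payload=json.dumps({}).encode('utf8'),
)

# LogResult comes back base64-encoded; decode it to read the traceback.
print(base64.b64decode(response['LogResult']).decode('utf8'))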

Finally, keep in mind that because the script imports print_function, the Python 2 print statements are invalid syntax; replace calls such as:

print number_of_words

with

print(number_of_words)
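With that change, the reporting lines at the end of the handler would look like the stand-alone sketch below (stand-in values are used here so it runs on its own):

from collections import Counter

number_of_words = 120      # stand-in values; in the handler these are the real counters
number_of_sentences = 8
count = Counter({'sample trigram here': 3})

print(number_of_words)
print(number_of_sentences)
print("most common 25 words ARE:")
print(count.most_common(25))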

Do you have an __init__.py in your directory? Where are you executing the script from? Modules to be imported need to be on the system path or the current path.
@denvaar No. Do I need one? @flacklight The script is at the root of my zipped folder.
No, you don't need an __init__.py.
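One quick way to confirm the packaging is to inspect the archive itself. The snippet below is a small sketch, not from the original thread, assuming app.zip sits in the current directory: for the handler string app.lambda_handler to resolve, app.py (and any vendored dependencies such as nltk) must appear at the top level of the archive, not inside a parent folder.

import zipfile

with zipfile.ZipFile('app.zip') as archive:
    names = archive.namelist()
    print(names[:20])
    # If this prints False, the zip was most likely built around a parent folder,
    # which typically produces exactly the "Unable to import module 'app'" error.
    print('app.py at archive root:', 'app.py' in names)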