Python: I'm getting "Error reading file" in my script, what am I missing?

Tags: python, scrape

I'm running a scraping script with Python 2.7 on OS X to scrape products from AliExpress, but when I run it I get the following error in the terminal. P.S. The product links are listed in a .txt file.

Last login: Wed Jun  7 16:24:05 on ttys001
RSs-MacBook-Air:~ MyMacbookAir$ cd Desktop
RSs-MacBook-Air:Desktop MyMacbookAir$ python aliexpresscrape.py
Path to File: /Users/MyMacbookAir/Desktop/fileurl.txt
Traceback (most recent call last):
  File "aliexpresscrape.py", line 70, in <module>
    read(selection)
  File "aliexpresscrape.py", line 69, in read
    scrape(str(lines[j]))
  File "aliexpresscrape.py", line 18, in scrape
    title2 = str(lxml.html.parse(url).find(".//title").text)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/lxml/html/__init__.py", line 940, in parse
    return etree.parse(filename_or_url, parser, base_url=base_url, **kw)
  File "src/lxml/lxml.etree.pyx", line 3442, in lxml.etree.parse (src/lxml/lxml.etree.c:81716)
  File "src/lxml/parser.pxi", line 1811, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:118635)
  File "src/lxml/parser.pxi", line 1837, in lxml.etree._parseDocumentFromURL (src/lxml/lxml.etree.c:118982)
  File "src/lxml/parser.pxi", line 1741, in lxml.etree._parseDocFromFile (src/lxml/lxml.etree.c:117894)
  File "src/lxml/parser.pxi", line 1138, in lxml.etree._BaseParser._parseDocFromFile (src/lxml/lxml.etree.c:112440)
  File "src/lxml/parser.pxi", line 595, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:105896)
  File "src/lxml/parser.pxi", line 706, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:107604)
  File "src/lxml/parser.pxi", line 633, in lxml.etree._raiseParserError (src/lxml/lxml.etree.c:106415)
IOError: Error reading file 'https://www.aliexpress.com/item/AFOFOO-Gothic-Steampunk-Mens-Sunglasses-Vintage-Metal-Men-Coating-Mirror-Sunglasses-Male-Round-Sun-glasses-Retro/32784155756.html?spm=2114.01010208.3.43.M90zfm&ws_ab_test=searchweb0_0,[garbled query parameters]_10182_10078_10079_10073_10123_10189_142-9946_9168,searchweb201603_1,ppcSwitch_5&btsid=fd7dbd9c-11fc-4795-8476-6fb77990127e&algo_expid=41aa2ba2-d7ea-4bf8-ab59-4956f5759e05-6&algo_pvid=41aa2ba2-d7ea-4bf8-ab59-4956f5759e05': failed to load external entity "https://www.aliexpress.com/item/AFOFOO-Gothic-Steampunk-Mens-Sunglasses-Vintage-Metal-Men-Coating-Mirror-Sunglasses-Male-Round-Sun-glasses-Retro/32784155756.html?spm=2114.01010208.3.43.M90zfm&ws_ab_test=searchweb0_0,[garbled query parameters]"
RSs-MacBook-Air:Desktop MyMacbookAir$ python aliexpresscrape.py
Path to File: /Users/MyMacbookAir/Desktop/fileurl.txt
Traceback (most recent call last):
  [traceback frames identical to the first run]
IOError: Error reading file 'https://www.aliexpress.com/category/100003088/shorts.html?spm=2114.20010208.8.79.lz5vt4&site=glo&g=y&pvId=10-100019017&attrRel=or&tc=af': failed to load external entity "https://www.aliexpress.com/category/100003088/shorts.html?spm=2114.20010208.8.79.lz5vt4&site=glo&g=y&pvId=10-100019017&attrRel=or&tc=af"
RSs-MacBook-Air:Desktop MyMacbookAir$
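For context, this message is not specific to AliExpress: lxml raises the same "failed to load external entity" error whenever its internal loader cannot open the source at all, so it signals a failed download (or a missing file), not malformed HTML. A minimal local reproduction:

```python
from lxml import etree

# lxml raises the same "failed to load external entity" message for any
# source its loader cannot open - a missing local file or a refused URL.
# It is a fetch failure, not an HTML parse failure.
try:
    etree.parse("no-such-file.html")
except (IOError, OSError) as exc:  # IOError on Python 2, OSError on Python 3
    print(exc)
```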
Here is the code I'm using:

from lxml import html
import lxml.html
import requests
import csv
from csv import writer
#variables
selection = raw_input("Path to File: ")
csv_header = ("post_title","post_name","ID","post_excerpt","post_content","post_status","menu_order","post_date","post_parent","post_author","comment_status","sku","downloadable","virtual","visibility","stock","stock_status","backorders","manage_stock","regular_price","sale_price","weight","length","width","height","tax_status","tax_class","upsell_ids","crosssell_ids","featured","sale_price_dates_from","sale_price_dates_to","download_limit","download_expiry","product_url","button_text","meta:_yoast_wpseo_focuskw","meta:_yoast_wpseo_title","meta:_yoast_wpseo_metadesc","meta:_yoast_wpseo_metakeywords","images","downloadable_files","tax:product_type","tax:product_cat","tax:product_tag","tax:product_shipping_class","meta:total_sales","attribute:pa_color","attribute_data:pa_color","attribute_default:pa_color","attribute:size","attribute_data:size","attribute_default:size")

#write header to output file (runs once)
with open('output.csv', 'w') as f:
        writer=csv.writer(f)
        writer.writerow(csv_header)

def scrape(url):
    page = requests.get(url)
    tree = html.fromstring(page.content)
    title2 = str(lxml.html.parse(url).find(".//title").text)
    title2 = title2.replace('-' + title2.split("-", 1)[1], '')
    price = tree.xpath("//span[@itemprop='price']//text()")
    i = 0
    for span in tree.cssselect('span'):
        clas = span.get('class')
        rel = span.get('rel')
        if clas == "packaging-des":
            if rel != None:
                if i == 0:
                    weight = rel
                elif i == 1:
                    dim = str(rel)
                i = i+1

    weight = weight
    height = dim.split("|", 3)[0]
    length = dim.split("|", 3)[1]
    width = dim.split("|", 3)[2]
    #Sometimes aliexpress doesn't list a price
    #This dumps a 0 into price in that case to stop the errors
    if len(price) == 1:
        price = float(str(price[0]))
    elif len(price) == 0:
        price = int(0)
    for inpu in tree.cssselect('input'):
        if inpu.get("id") == "hid-product-id":
            sku = inpu.get('value')
    for meta in tree.cssselect('meta'):
        name = meta.get("name")
        prop = meta.get("property")
        content = meta.get('content')
        if prop == 'og:image':
            image = meta.get('content')
        if name == 'keywords':
             keywords = meta.get('content')
        if name == 'description':
            desc = meta.get('content')
    listvar = ([str(title2),str(name), '', '', str(desc), 'publish', '', '', '0', '1', 'open', str(sku), 'no', 'no', 'visible', '', 'instock', 'no', 'no', str(price*2),str(price*1.5), str(weight), str(length), str(width), str(height), 'taxable', '', '', '', 'no', '', '', '', '', '', '', '', '', '', str(keywords), str(image), '', 'simple', '', '', '', '0', '', '', '', '', '', '', '', ''])
    with open("output.csv",'ab') as f:
        writer=csv.writer(f)
        writer.writerow(listvar)

def read(selection):
    lines = []
    j = 0
    with open(selection) as f:
        for line in f:
            lines.append(line)
        lines = map(lambda s: s.strip(), lines)    
    for j in range(len(lines)):
        scrape(str(lines[j]))
read(selection)
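The traceback points at `lxml.html.parse(url)` on line 18. `scrape()` already downloads the page once with `requests`, but that line downloads it a second time through lxml's own urllib-based loader, which sends no browser-style headers, so the server can refuse the request and lxml raises the "failed to load external entity" IOError. A sketch of one likely fix (untested against AliExpress; the helper name and User-Agent string are my own assumptions): fetch once with `requests` plus a User-Agent header, and read the title from that tree.

```python
from lxml import html
import requests

# Hypothetical browser-style header; the exact UA string is an assumption.
HEADERS = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12)"}

def fetch_tree_and_title(url):
    # One download via requests, with headers; lxml never touches the network.
    page = requests.get(url, headers=HEADERS)
    page.raise_for_status()            # surface HTTP errors explicitly
    tree = html.fromstring(page.content)
    title = tree.findtext(".//title")  # None if the page has no <title>
    return tree, title
```

Inside the existing `scrape()`, the same idea is a one-line change: `title2 = str(tree.findtext(".//title"))` reuses the `tree` already built from `page.content` and avoids the second fetch entirely.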

Comments:

"Sorry, my crystal ball is in the shop. Can you post the code that produces the error?"
"Please show us the code, then we can help you."
"Maybe the site you're scraping has a rate limit that is blocking you."
"@Stiffy2000 Just updated my question with the original code."
"@Pedrovohertwig Please see the updated code. Thanks."
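If, as one comment suggests, the failures come from rate limiting rather than missing headers, spacing out requests and retrying may help. A generic sketch (the helper name, delay values, and retry count are arbitrary choices, not anything AliExpress documents):

```python
import time
import requests

def get_with_retry(url, retries=3, delay=2.0, headers=None):
    """Fetch url, retrying on failure with a growing pause between attempts."""
    last_error = None
    for attempt in range(retries):
        try:
            resp = requests.get(url, headers=headers, timeout=10)
            resp.raise_for_status()
            return resp
        except requests.RequestException as exc:
            last_error = exc
            time.sleep(delay * (attempt + 1))  # back off: 2s, 4s, 6s, ...
    raise last_error
```

Calling this from `read()` instead of bare `requests.get` would also make one bad URL in the .txt file skip gracefully instead of killing the whole run.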