
Python: How do I iterate a list of SKUs into a URL?


I have a text file of SKUs and want to insert each SKU into a search URL (taken from a URL text file) so I can pull the item name and image.

Here is my code:

from selectorlib import Extractor  # Extractor.from_yaml_file / extract come from the selectorlib package
import requests
import json

e = Extractor.from_yaml_file('selectors.yml')
sku = 'skus.txt'
url = 'urls.txt'

def scrape(url):
    headers = {
        'dnt': '1',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-user': '?1',
        'sec-fetch-dest': 'document',
        'referer': 'https://www.ebay.com/',
        'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
    }

    print("Downloading...")
    r = requests.get(url, headers=headers)
    if r.status_code > 500:
        if "To discuss automated access to Ebay data please contact" in r.text:
            print("Page %s was blocked by Ebay. Please try using better proxies\n" % url)
        else:
            print("Page %s must have been blocked by Ebay as the status code was %d" % (url, r.status_code))
        return None
    return e.extract(r.text)

# sku_file is opened here but never used in the loop below
with open(url, 'r') as urllist, open(sku, 'r') as sku_file, open('output.jsonl', 'w') as outfile:
    for url in urllist.read().splitlines():
        data = scrape(url)
        if data:
            json.dump(data, outfile)
            outfile.write("\n")
Here is my result:

{
"name": "Golden State Mint Aztec Calendar 1 oz Silver Round GEM BU SKU55694",
"images": "https://i.ebayimg.com/thumbs/images/g/9DsAAOSw7GheqGWL/s-l225.webp"
}
I want to iterate through my SKU list, plugging each SKU into this URL:

https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2380057.m570.l1313&_nkw={sku}&_sacat=0
My SKU list has more than 500 SKUs that I want to run through this one URL, and there may be more later. I can't seem to figure out how to loop over the SKU text file.

Files: main.py, selectors.yml, skus.txt, url.txt
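
Below is a minimal sketch of one way this loop could look, assuming skus.txt holds one SKU per line and reusing the scrape() function and json import from the code above; the output filename is only illustrative:

SEARCH_URL = 'https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2380057.m570.l1313&_nkw={sku}&_sacat=0'

with open('skus.txt', 'r') as sku_file, open('output.jsonl', 'w') as outfile:
    for line in sku_file:
        sku = line.strip()
        if not sku:
            continue  # skip blank lines in skus.txt
        # substitute the SKU into the search URL template and scrape the result page
        data = scrape(SEARCH_URL.format(sku=sku))
        if data:
            json.dump(data, outfile)
            outfile.write('\n')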