How do I scrape a website with Selenium in Python?


I want to scrape a website with Selenium; it has 10 pages in all. My code is below, but why do I only get the results of the first page:

# -*- coding: utf-8 -*-
from selenium import webdriver
from scrapy.selector import Selector


MAX_PAGE_NUM = 10
MAX_PAGE_DIG = 3

driver = webdriver.Chrome(r'C:\Users\zhang\Downloads\chromedriver_win32\chromedriver.exe')  # raw string so the backslashes are not treated as escapes
with open('results.csv', 'w') as f:
    f.write("Buyer, Price \n")

for i in range(1, MAX_PAGE_NUM + 1):
    page_num = (MAX_PAGE_DIG - len(str(i))) * "0" + str(i)
    url = "https://www.oilandgasnewsworldwide.com/Directory1/DREQ/Drilling_Equipment_Suppliers_?page=" + page_num

    driver.get(url)
    sel = Selector(text=driver.page_source)  # build a Scrapy Selector from the rendered page source

    names = sel.xpath('//*[@class="fontsubsection nomarginpadding lmargin opensans"]/text()').extract()
    Countries = sel.xpath('//td[text()="Country:"]/following-sibling::td/text()').extract()
    websites = sel.xpath('//td[text()="Website:"]/following-sibling::td/a/@href').extract()

driver.close()
print(len(names), len(Countries), len(websites))

My guess is that it has to do with the odd thing you are doing in the page_num assignment. To debug, try adding a line after the call to driver.get(url) that prints the URL the browser actually loaded:
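print(driver.current_url)  # current_url holds the URL of the page Selenium is currently on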


If it returns the URL you expect, then the problem is most likely your XPath.
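
A minimal sketch of the likely fix, reusing your own XPaths (and assuming the driver and constants from your code above): rebuild the Selector from driver.page_source on every page, and extend running lists instead of reassigning them, so earlier pages are not overwritten. str(i).zfill(MAX_PAGE_DIG) is also a simpler way to zero-pad the page number:

from scrapy.selector import Selector

names, countries, websites = [], [], []
for i in range(1, MAX_PAGE_NUM + 1):
    url = ("https://www.oilandgasnewsworldwide.com/Directory1/DREQ/"
           "Drilling_Equipment_Suppliers_?page=" + str(i).zfill(MAX_PAGE_DIG))
    driver.get(url)
    sel = Selector(text=driver.page_source)  # a fresh Selector for each page
    # extend() accumulates across pages; plain assignment would keep only the last page
    names.extend(sel.xpath('//*[@class="fontsubsection nomarginpadding lmargin opensans"]/text()').extract())
    countries.extend(sel.xpath('//td[text()="Country:"]/following-sibling::td/text()').extract())
    websites.extend(sel.xpath('//td[text()="Website:"]/following-sibling::td/a/@href').extract())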

Here I first use find_elements_by_xpath to get the names, countries, and websites on each page and store them in a list. Then the text of every element in each list is extracted and the value appended to a new list.

from selenium import webdriver

MAX_PAGE_NUM = 10

driver = webdriver.Chrome('C:\\Users...\\chromedriver.exe')

names_list = list()
Countries_list = list()
websites_list = list()

# The for loop is to get each of the 10 pages
for i in range(1, MAX_PAGE_NUM + 1):
    page_num = str(i)
    url = "https://www.oilandgasnewsworldwide.com/Directory1/DREQ/Drilling_Equipment_Suppliers_?page=" + page_num

    driver.get(url)

    # Use "driver.find_elements" instead of "driver.find_element" to get all of them. You get a list of WebElements of each page
    names = driver.find_elements_by_xpath("//*[@class='fontsubsection nomarginpadding lmargin opensans']")

    # To get the value of each WebElement in the list, you have to iterate over the list
    for i in range(0, len(names)):
        # Now append each value to a new list
        names_list.append(names[i].text)


    Countries = driver.find_elements_by_xpath("//td[text()='Country:']/following-sibling::td")
    for i in range(0, len(Countries)):
        Countries_list.append(Countries[i].text)

    websites = driver.find_elements_by_xpath("//td[text()='Website:']/following-sibling::td")
    for i in range(0, len(websites)):
        websites_list.append(websites[i].text)

print(names_list)
print(Countries_list)               
print(websites_list)

driver.close()
I hope this works for you.
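
Note: on newer Selenium releases (4.x) the find_elements_by_xpath helpers were removed, so if you are on Selenium 4 the same lookup would be written with a By locator instead:

from selenium.webdriver.common.by import By

names = driver.find_elements(By.XPATH, "//*[@class='fontsubsection nomarginpadding lmargin opensans']")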

Option: get all of the data in each section contained on the page

from selenium import webdriver

MAX_PAGE_NUM = 10

driver = webdriver.Chrome('C:\\Users\\LVARGAS\\AppData\\Local\\Programs\\Python\\Python36-32\\Scripts\\chromedriver.exe')

data_list = list()

# The for loop is to get each of the 10 pages
for i in range(1, MAX_PAGE_NUM + 1):
    page_num = str(i)
    url = "https://www.oilandgasnewsworldwide.com/Directory1/DREQ/Drilling_Equipment_Suppliers_?page=" + page_num
    driver.get(url)

    rows = driver.find_elements_by_xpath("//*[@class='border fontcontentdet']")

    for i in range(0, len(rows)):
        print(rows[i].text)
        data_list.append(rows[i].text)
        print('---')

driver.close()
print(data_list)
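
If you want the results in a file instead of the console, here is a small sketch using Python's standard csv module (it assumes the three lists from the first version above; the column names are my own placeholders):

import csv

with open('results.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Country', 'Website'])  # header row
    writer.writerows(zip(names_list, Countries_list, websites_list))  # one row per supplier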

Thanks, it works. But can you tell me why? I think it is because you added: for i in range(1, len(websites)): websites_list.append(websites[i].text) and the other two. Why?

I added some comments to the script; I hope they help you follow each instruction. driver.find_elements gets you all of the WebElements you are looking for on each page. I copied the same XPaths you used and only edited them a little. When you put one page's WebElements into a list, you do not get their values at the same time. That is why I loop with for i in range(1, len(names)): here i gives us a number for fetching each name from the list as names[i], and .text then reads the value of each WebElement, which is appended to a new list.

Actually, there is a small mistake: range(1, len(names)) starts iterating over the list at 1, but the first element is at index 0. So the correct instruction is range(0, len(names)); I have changed it in the code.

Thank you very much for the detailed explanation, sir; it is very helpful. But I still have a small problem: the names do not line up with the right countries and websites. That is because some names have no website or country tag, so after we fetch the results and put them into a table everything is scrambled. Could you help me avoid that?

I added an option that gets all of the information for each element in the directory. You get one list that you can manipulate however you need.
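
On the alignment problem specifically, one approach is to scope every lookup to its own row and append an empty string when a supplier has no country or website, so the three lists stay in step. A rough sketch, untested against the live site, reusing the row class from the option above:

rows = driver.find_elements_by_xpath("//*[@class='border fontcontentdet']")
for row in rows:
    # relative XPaths (note the leading '.') search only inside this row
    name = row.find_elements_by_xpath(".//*[@class='fontsubsection nomarginpadding lmargin opensans']")
    country = row.find_elements_by_xpath(".//td[text()='Country:']/following-sibling::td")
    website = row.find_elements_by_xpath(".//td[text()='Website:']/following-sibling::td")
    names_list.append(name[0].text if name else "")  # empty string keeps the columns aligned
    Countries_list.append(country[0].text if country else "")
    websites_list.append(website[0].text if website else "")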