Simulating clicking a link within a link - Selenium Python


Python knowledge: beginner

I managed to create a script to grab contact information. Since I am a beginner, the process I followed was to extract all of the first links and copy them into a text file, which is used in link = browser.find_element_by_link_text(str(link_text)). Scraping the contact details is confirmed to work (based on my separate runs). The problem is that after clicking the first link, it does not continue to click the links inside it, so it cannot scrape the contact information.

What is wrong with my script? Please bear in mind that I am a beginner, so my script is a bit manual and long-winded. Thanks very much.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

from selenium.common.exceptions import NoSuchElementException

import requests
from bs4 import BeautifulSoup
import urllib
import re
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
import csv, time, lxml

######################### open file list ####################################
testfile = open("category.txt") # this is where I saved the category
readfile = testfile.read()
readfilesplit = readfile.split("\n")
############################### end ###################################

################### open browser ###############################
browser = webdriver.Firefox()
browser.get('http://aucklandtradesmen.co.nz/')
####################### end ###################################

link_texts = readfilesplit
for link_text in link_texts:

        link = browser.find_element_by_link_text(str(link_text))
        WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".add-listing")))

        link.click() #click link
        time.sleep(5)

        print "-------------------------------------------------------------------------------------------------"
        print("Getting listings for '%s'" % link_text)

################# get list name #######################
        urlNoList = 'http://aucklandtradesmen.co.nz/home-mainmenu-1.html'
        r = requests.get(browser.current_url)

        if (urlNoList != browser.current_url):
            soup = BeautifulSoup(r.content, 'html.parser')

            g_data = soup.find_all("div", {"class":"listing-summary"})
            pageRange = soup.find_all("span", {"class":"xlistings"})

            pageR = [pageRange[0].text]
            pageMax = str(pageR)[-4:-2] # get max item for lists

            X = str(pageMax).replace('nd', '0')
            # print "Number of listings: ", X
            Y  = int(X) #convert string to int
            print "Number of listings: ", Y

            for item in g_data:
                try:
                    listingNames = item.contents[1].text
                    lstList = []
                    lstList[len(lstList):] = [listingNames]

                    replStr = re.sub(r"u'",  "'",str(lstList)) #strip u' char

                    replStr1 = re.sub(r"\s+'",  "'",str(replStr)) #strip space and '

                    replStr2 = re.sub(r"\sFeatured",  "",str(replStr1)) #strip Featured string
                    print "Cleaned string: ", replStr2

                    ################ SCRAPE INFO ################
################### This is where the code is not executing #######################
                    count = 0
                    while (count < Y):
                        for info in replStr2:
                            link2 = browser.find_element_by_link_text(str(info))
                            time.sleep(10)
                            link2.click()
                            WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, "#rating-msg")))
                            print "count", count
                            count+= 1
                            print("Contact info for: '%s'" % link_text)

                            r2 = requests.get(browser.current_url)

                            soup2 = BeautifulSoup(r2.content, 'html.parser')

                            g_data2 = soup.find_all("div", {"class":"fields"})

                            for item2 in g_data2:
                            # print item.contents[0]
                                print item2.contents[0].text
                                print item2.contents[1].text
                                print item2.contents[2].text
                                print item2.contents[3].text
                                print item2.contents[4].text
                                print item2.contents[5].text
                                print item2.contents[6].text
                                print item2.contents[7].text
                                print item2.contents[8].text

                    browser.back()
                    WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".add-listing")))
################### END ---- This is where the code is not executing END ---#######################
                    ############ END SCRAPE INFO ####################
                except NoSuchElementException:
                    browser.back()
                    WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "pagenav")))

        else:
            browser.back()
            WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "pagenav")))
            print "Number of listings: 0"

        browser.back()
        WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "pagenav")))

What I would do is change some of the logic. Below is the logic flow I would suggest you use. This will eliminate writing the links out to a file and will speed up the script.

1. Navigate to http://aucklandtradesmen.co.nz/
2. Grab all elements using CSS selector "#index a" and store the attribute "href" of each
   in an array of string (links to each category page)
3. Loop through the href array
   3.1. Navigate to href
        3.1.1. Grab all elements using CSS selector "div.listing-summary a" and store the
               .text of each (company names)
        3.1.2. If an element .by_link_text("Next") exists, click it and return to 3.1.1.
If you want to scrape the business contact info off of the company pages, you would want to store the href in 3.1.1 instead. Then loop through that list and grab what you want off the page, as in the sketch below.
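For what it's worth, a minimal sketch of that flow might look like the following. It assumes the "#index a" and "div.listing-summary a" selectors above actually match the page, and that the pagination link is literally labelled "Next"; everything else is illustrative:

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

browser = webdriver.Firefox()
browser.get('http://aucklandtradesmen.co.nz/')   # step 1

# step 2: collect the href of every category link
category_urls = [a.get_attribute('href')
                 for a in browser.find_elements_by_css_selector('#index a')]

company_names = []
for url in category_urls:        # step 3
    browser.get(url)             # step 3.1
    while True:
        # step 3.1.1: store the text of each company link on this page
        for a in browser.find_elements_by_css_selector('div.listing-summary a'):
            company_names.append(a.text)
        # step 3.1.2: click "Next" if it exists, otherwise move on
        try:
            browser.find_element_by_link_text('Next').click()
        except NoSuchElementException:
            break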



Sorry for the weird formatting of the list. It won't let me indent more than one level.

OK, after considering @jeffC's suggestion, I came up with a solution:

  • Extract the href value and append it to the base URL; for example, if the extracted href is /home-mainmenu-1/alarms-a-security/armed-alarms-ltd-.html, tell the browser to navigate to that URL.. then I can do whatever is needed on the current page, as sketched below.
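A rough sketch of that fix, assuming the company links are still found via the div.listing-summary a selector from the answer (the startswith check is only a guard, since get_attribute('href') usually returns an absolute URL already):

base_url = 'http://aucklandtradesmen.co.nz'

# collect the href of every company link on the current category page
links = browser.find_elements_by_css_selector('div.listing-summary a')
hrefs = [a.get_attribute('href') for a in links]

for href in hrefs:
    # prepend the base URL only for site-relative values such as
    # /home-mainmenu-1/alarms-a-security/armed-alarms-ltd-.html
    if href.startswith('/'):
        href = base_url + href
    browser.get(href)
    # ...scrape the contact details from the current page here...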

Thanks, this is ideal for an intermediate learner. For now, though, sticking with my code and finding the bug is what helps.

I think starting with a good logic flow is more important than writing code. You can write the best code, but if your logic is bad, your script will still be slow, etc. I think becoming a better programmer means learning both.. good logic flow and good coding.