Python Beautiful Soup - getting text from multiple pages

I'm new to both Python and web scraping, and I want to scrape the following page:

I want to loop through each exhibitor's link and grab their contact details, then do the same across all 77 pages.

I can extract the information I need from a single page, but when it comes to building functions and loops I keep getting errors, and I can't find a simple structure for looping over multiple pages.

This is what I have so far in my Jupyter notebook:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import time
import pandas as pd
import requests
from bs4 import BeautifulSoup

url = 'http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php'
text = requests.get(url).text
page1 = BeautifulSoup(text, "html.parser")

def get_data(url):
    text = requests.get(url).text
    page2 = BeautifulSoup(text, "html.parser")

    title = page2.find('h1', attrs={'class':'hl_2'}).getText()    
    content = page2.find('div', attrs={'class':'content'}).getText()
    phone = page2.find('div', attrs={'class':'sico ico_phone'}).getText()
    email = page2.find('a', attrs={'class':'sico ico_email'}).getText
    webpage = page2.find('a', attrs={'class':'sico ico_link'}).getText


    data = {'Name': [title],
          'Address': [content],
          'Phone number': [phone],
          'Email': [email],
          'Web': [web]            
         } 

df = pd.DataFrame()
for a in page1.findAll('a', attrs={'class':'initial_noline'}):
    df2 = get_data(a['href'])
    df = pd.concat([df, df2])



AttributeError: 'NoneType' object has no attribute 'getText'
I know the errors I keep running into are because I'm new to coding and am struggling with the syntax of functions and loops.


Your code alternates between calling the method getText (.getText()) and merely accessing the attribute getText (.getText), and the object you call it on, the return value of find(), may be None:

>>> a = None
>>> type(a)
<type 'NoneType'>
>>> a.foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'foo'
>>> a.foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'foo'
>>> 

Have a look at the BeautifulSoup documentation and work out what .find() returns and how to correctly access the parsed data inside it. Welcome to Python!
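
As a minimal sketch of that idea (the safe_text helper is my own naming, not part of the original code), each lookup can be wrapped so a missing tag yields an empty string instead of raising:

def safe_text(soup, name, css_class):
    # find() returns None when nothing matches, so guard before calling getText()
    tag = soup.find(name, attrs={'class': css_class})
    return tag.getText(strip=True) if tag is not None else ''

# usage: phone = safe_text(page2, 'div', 'sico ico_phone')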

Here is a debugged version:

import pandas as pd
import requests
from bs4 import BeautifulSoup

url = 'http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php'
text = requests.get(url).text
page1 = BeautifulSoup(text, "html.parser")

def get_data(url):
    text = requests.get(url).text
    page2 = BeautifulSoup(text, "html.parser")

    title = page2.find('h1', attrs={'class':'hl_2'}).getText()    
    content = page2.find('div', attrs={'class':'content'}).getText()
    phone = page2.find('div', attrs={'class':'sico ico_phone'}).getText()
    email = page2.find('div', attrs={'class':'sico ico_email'}).getText()
    webpage = page2.find('div', attrs={'class':'sico ico_link'}).getText()


    data = [[title, content, phone, email, webpage]]
    return data

df = pd.DataFrame()
for a in page1.findAll('a', attrs={'class':'initial_noline'}):
    if 'kid=' not in a['href'] : continue
    print('http://www.interzum.com' + a['href'])
    data = get_data('http://www.interzum.com' + a['href'])
    # append/concat return a new frame, so the result must be reassigned
    df = pd.concat([df, pd.DataFrame(data)])
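
As a side note, plain string concatenation works here because the hrefs on the index page are relative, but urllib.parse.urljoin from the standard library is a more robust way to build the absolute URL (a small sketch, not part of the original answer):

from urllib.parse import urljoin

# handles relative hrefs, missing slashes and absolute URLs alike
full_url = urljoin('http://www.interzum.com', a['href'])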

Thanks so much for all the help, everyone. I've kept at it and almost have everything I need. My code is below:

import pandas as pd
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import time

binary = FirefoxBinary('geckodriver.exe')
driver = webdriver.Firefox()
driver.get('http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php')

url = 'http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php'
text = requests.get(url).text
page1 = BeautifulSoup(text, "html.parser")

def get_data(url, tries=0, max_tries=3):
    text_test2 = requests.get(url).text
    page2 = BeautifulSoup(text_test2, "html.parser")

    try:
        title = page2.find('h1', attrs={'class':'hl_2'}).text    
        content = page2.find('div', attrs={'class':'cont'}).text
        phone = page2.find('div', attrs={'class':'sico ico_phone'}).text
        email_div = page2.find('div', attrs={'class':'sico ico_email'})
        email = email_div.find('a', attrs={'class': 'xsecondarylink'})['href']


        if page2.find_all("div", {"class": "sico ico_link"}):
            web_div = page2.find('div', attrs={'class':'sico ico_link'})
            web = web_div.find('a', attrs={'class':'xsecondarylink'})['href']

    except:
        if tries < max_tries:
            tries += 1
            print("try {}".format(tries))
            return get_data(url, tries)


    data = {'Name': [title],
            'Street address': [content], 
            'Phone number': [phone],
            'Email': [email],
            'Web': [web]            
            }

    return pd.DataFrame(data=data)


df = pd.DataFrame()
for i in range(0,80):
    print(i)
    page1 = BeautifulSoup(driver.page_source, 'html.parser')


    for div in page1.findAll('div', attrs={'class':'item'}):

        for a in div.findAll('a', attrs={'class':'initial_noline'}):
            if 'kid=' not in a['href'] : continue
            print('http://www.interzum.com' + a['href'])

            data = get_data('http://www.interzum.com' + a['href'])
            df = pd.concat([df, data])

    next_button = driver.find_element_by_class_name('slick-next')
    next_button.click()
    time.sleep(20)

df.to_csv('result.csv')
This code runs until the second link on the second page. That link doesn't have a website, and I'm struggling to put together something that says: if an href exists for this class, pull the website; if not, move on to the next one.

However, I get the following error: UnboundLocalError: local variable 'web' referenced before assignment

So my code clearly isn't doing that.

Any guidance on how to fix this would be much appreciated.
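
The UnboundLocalError happens because web is only assigned inside the if branch; when a page has no ico_link div, the name is never created, yet the data dict still refers to it. One way to handle this (a minimal sketch, assuming the same page structure as the code above) is to bind web to a default before looking for the link:

web = ''  # default, so 'web' is always bound even when the page has no website link
web_div = page2.find('div', attrs={'class': 'sico ico_link'})
if web_div is not None:
    link = web_div.find('a', attrs={'class': 'xsecondarylink'})
    if link is not None and link.has_attr('href'):
        web = link['href']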


Thanks again for all your help :)
