Python: how to identify and follow links, then print data from the new webpage using BeautifulSoup


I am trying to (1) get a title from a webpage, (2) print the title, (3) follow a link to the next page, (4) get the title from the next page, and (5) print the title from the next page.

Steps (1) and (4) are the same function, and steps (2) and (5) are the same function. The only difference is that functions (4) and (5) are performed on the next page.

#Imports
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re


##Internet
#Link to webpage 
web_page = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
#Soup object
soup = BeautifulSoup(web_page, 'html.parser')
I have no problem with steps 1 and 2. My code is able to get the title and print it out effectively. Steps 1 and 2:

##Get Data
def get_title():
    #Patent Number
    Patent_Number = soup.title.text
    print(Patent_Number)

get_title()
The output I get is exactly what I want:

#Print Out
United States Patent: 10530579
I am having trouble with step 3. For step (3), I have been able to identify the correct link, but not to follow it to the next page. The link I am identifying is the href on the tag just above the image tag.

The following code is my working draft of steps 3, 4, and 5:

#Get
def get_link():
    ##Internet
    #Link to webpage 
    html = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
    #Soup object
    soup = BeautifulSoup(html, 'html.parser')
    #Find image
    ##image = <img valign="MIDDLE" src="/netaicon/PTO/nextdoc.gif" border="0" alt="[NEXT_DOC]">
    #image = soup.find("img", valign="MIDDLE")
    image = soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]")
    #Follow link (the anchor tag wrapping the image)
    link = image.parent
    #Get new link
    new_link = link.attrs['href']
    print(new_link)

get_link()
The output is exactly the link I want to follow. In short, the function I am trying to write would open the new_link variable as a new webpage and perform the same functions performed in (1) and (2) on the new webpage. The resulting output would be two titles instead of one (one for web_page and one for the new webpage).
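For the first two records, the combined output would look like:

United States Patent: 10530579
United States Patent: 10529534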

Essentially, I need to write a:

urlopen(new_link)
function, rather than a:

print(new_link)
function, and then perform steps 4 and 5 on the new webpage. However, I am struggling to find a way to open the new page and get the title. One problem is that new_link is not a full url; it is just the link I want to click on.

Instead of printing new_link, this function prints the title of the next page:

#Imports
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re


##Internet
#Link to webpage 
web_page = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
#Soup object
soup = BeautifulSoup(web_page, 'html.parser')
def get_link():
    ##Internet
    #Link to webpage 
    html = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
    #Soup object
    soup = BeautifulSoup(html, 'html.parser')
    #Find image
    image = soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]")
    #Follow link
    link = image.parent
    new_link = link.attrs['href']
    new_page = urlopen('http://patft.uspto.gov/'+new_link)
    soup = BeautifulSoup(new_page, 'html.parser')
    #Patent Number
    Patent_Number = soup.title.text
    print(Patent_Number)

get_link()

Prepending 'http://patft.uspto.gov/' to new_link converts the link into a valid url. I can then open the url, navigate to the page, and retrieve the title.
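For reference, the same join can be done with urljoin from the standard library, which also normalizes the doubled slash that the plain concatenation above leaves in the url (a sketch, not part of the original code; the new_link value here is a placeholder):

from urllib.parse import urljoin

base_url = "http://patft.uspto.gov"
new_link = "/netacgi/nph-Parser"  # placeholder for the relative href scraped above
full_url = urljoin(base_url, new_link)
print(full_url)  # -> http://patft.uspto.gov/netacgi/nph-Parser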

You can use a couple of regular expressions to extract and format the link (in case it changes). The whole sample code is below:

# Imports (needed for this snippet to run on its own)
import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

# The first link
url = "http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22"

# Test loop (to grab 5 records)
for _ in range(5):
    web_page = urlopen(url)
    soup = BeautifulSoup(web_page, 'html.parser')

    # step 1 & 2 - grabbing and printing title from a webpage
    print(soup.title.text) 

    # step 4 - getting the link from the page
    next_page_link = soup.find('img', {'alt':'[NEXT_DOC]'}).find_parent('a').get('href')

    # extracting the link (determining the prefix (http or https) and getting the site data (everything until the first /))
    match = re.compile("(?P<prefix>http(s)?://)(?P<site>[^/]+)(?:.+)").search(url)
    if match:
        prefix = match.group('prefix')
        site = match.group('site')

    # formatting the link to the next page
    url = '%s%s%s' % (prefix, site, next_page_link)

    # printing the link just for debug purpose
    print(url)

    # continuing with the loop
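If you would rather not maintain the regex, urlsplit from the standard library extracts the same pieces (a sketch, with placeholders standing in for the loop variables):

from urllib.parse import urlsplit

url = "http://patft.uspto.gov/netacgi/nph-Parser"  # placeholder for the current url
next_page_link = "/netacgi/nph-Parser"             # placeholder for the scraped href
parts = urlsplit(url)  # -> scheme, netloc, path, query, fragment
url = '%s://%s%s' % (parts.scheme, parts.netloc, next_page_link)
print(url)  # -> http://patft.uspto.gov/netacgi/nph-Parser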

Although you have already found a solution, in case someone tries something similar: my solution below is not recommended in every case. In this case, though, because the urls of all the pages differ only in the record number, we can generate the urls up front and request them in a batch, as follows. You only need to change the upper bound of r's range; it will work as long as the pages exist.

from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd

head = "http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r="  # no trailing /
trail = '&f=G&l=50&co1=AND&d=PTXT&s1=("deep+learning".CLTX.+or+"deep+learning".DCTX.)&OS=ACLM/"deep+learning"'  # rest of the url after the record number

final_url = []
news_data = []
for r in range(32,38): #change the upper range as per requirement
    final_url.append(head + str(r) + trail)
for url in final_url:
    try:
        page = urlopen(url)
        soup = BeautifulSoup(page, 'html.parser')   
        patentNumber = soup.title.text
        news_articles = [{'page_url':  url,
                     'patentNumber':  patentNumber}
                    ]
        news_data.extend(news_articles)     
    except Exception as e:
        print(e)
        print("continuing....")
        continue
df =  pd.DataFrame(news_data)  
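A quick look at the collected records (assuming at least one request succeeded):

print(df.head())  # one row per patent: page_url, patentNumber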

Taking this opportunity to clean up your code: I removed the unnecessary re import and simplified your functions:

from urllib.request import urlopen
from bs4 import BeautifulSoup


def get_soup(web_page):
    web_page = urlopen(web_page)
    return BeautifulSoup(web_page, 'html.parser')

def get_title(soup):
    return soup.title.text  # Patent Number

def get_next_link(soup):
    return soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]").parent['href']

base_url = 'http://patft.uspto.gov'
web_page = base_url + '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22'

soup = get_soup(web_page)

get_title(soup)
> 'United States Patent: 10530579'

get_next_link(soup)
> '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=32&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"'

soup = get_soup(base_url + get_next_link(soup))
get_title(soup)
> 'United States Patent: 10529534'

get_next_link(soup)
> '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=33&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"'
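And a sketch of chaining the helpers to walk several records in a row (the loop bound of 3 is arbitrary):

soup = get_soup(web_page)
for _ in range(3):  # follow the [NEXT_DOC] link three times
    print(get_title(soup))
    soup = get_soup(base_url + get_next_link(soup))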