Python - BeautifulSoup web scrape

python, html, web-scraping, beautifulsoup, html-parsing

I am trying to scrape a list of URLs from the site below, but I have had no luck after reading the tutorials. Here is an example of the code I have tried:

from bs4 import BeautifulSoup
import urllib2

url         = "http://thedataweb.rm.census.gov/ftp/cps_ftp.html"
page        = urllib2.urlopen(url)
soup        = BeautifulSoup(page.read())
cpsLinks    = soup.findAll(text = 
              "http://thedataweb.rm.census.gov/pub/cps/basic/")

print(cpsLinks)
I am trying to extract links like this one:

http://thedataweb.rm.census.gov/pub/cps/basic/201501-/jan15pub.dat.gz

There are about 200 links like this. How can I get them all?

As I understand it, you want to extract the links that follow a specific pattern. BeautifulSoup lets you pass a compiled regular expression as an attribute value.

Let's use the pattern pub/cps/basic/\d+\-/\w+\.dat\.gz$. It matches pub/cps/basic/, followed by one or more digits (\d+), followed by a hyphen (\-), then a slash, one or more word characters (\w+), and finally .dat.gz at the end of the string. Note that the backslash before the hyphen is optional here: a hyphen only has special meaning inside a character class, so escaping it outside one is harmless but not required.
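Before wiring the pattern into BeautifulSoup, it can be sanity-checked on its own with Python's re module. The two sample URLs below are taken from the question and the answer's output:

```python
import re

# The pattern from the answer: digits, a hyphen, a slash, a file name, ".dat.gz".
pattern = re.compile(r'pub/cps/basic/\d+\-/\w+\.dat\.gz$')

# A monthly data-file link matches:
print(bool(pattern.search(
    "http://thedataweb.rm.census.gov/pub/cps/basic/201501-/jan15pub.dat.gz")))  # True

# The listing page itself does not:
print(bool(pattern.search(
    "http://thedataweb.rm.census.gov/ftp/cps_ftp.html")))  # False
```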

The code:

import re
import urllib2

from bs4 import BeautifulSoup


url = "http://thedataweb.rm.census.gov/ftp/cps_ftp.html"
soup = BeautifulSoup(urllib2.urlopen(url))

links = soup.find_all(href=re.compile(r'pub/cps/basic/\d+\-/\w+\.dat\.gz$'))

for link in links:
    print link.text, link['href']
It prints:

13,232,040 http://thedataweb.rm.census.gov/pub/cps/basic/201501-/jan15pub.dat.gz
13,204,510 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/dec14pub.dat.gz
13,394,607 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/nov14pub.dat.gz
13,409,743 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/oct14pub.dat.gz
13,208,428 http://thedataweb.rm.census.gov/pub/cps/basic/201401-/sep14pub.dat.gz
...
10,866,849 http://thedataweb.rm.census.gov/pub/cps/basic/199801-/jan99pub.dat.gz
3,172,305 http://thedataweb.rm.census.gov/pub/cps/basic/200701-/disability.dat.gz
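The same href filter can also be exercised offline against a small HTML fragment, which makes it easy to test without hitting the Census server. This is a sketch; the fragment below is made up to mimic the page's link structure:

```python
import re
from bs4 import BeautifulSoup

# A made-up fragment mimicking the CPS FTP page's links.
html = """
<a href="http://thedataweb.rm.census.gov/pub/cps/basic/201501-/jan15pub.dat.gz">13,232,040</a>
<a href="http://thedataweb.rm.census.gov/pub/cps/basic/">directory index</a>
<a href="http://thedataweb.rm.census.gov/ftp/cps_ftp.html">main page</a>
"""

soup = BeautifulSoup(html, "html.parser")
links = [a['href'] for a in
         soup.find_all(href=re.compile(r'pub/cps/basic/\d+\-/\w+\.dat\.gz$'))]
print(links)  # only the .dat.gz link passes the filter
```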

What exactly went wrong with your attempt? If you just want to find those URLs, why not use simple pattern matching?

Thanks! This works. I had also tried scraping with a regular expression, but unlike your code I had not set it up correctly: it captured a much larger chunk, which was unusable.