How to automate this BeautifulSoup import in Python

Tags: python, python-2.7, beautifulsoup

I am importing the links to the boxscores from this web page:

http://www.covers.com/pageLoader/pageLoader.aspx?page=/data/wnba/teams/pastresults/2012/team665231.html
This is how I do it at the moment. First I collect the links from that page:

import re
import urllib2
from BeautifulSoup import BeautifulSoup   # with bs4: from bs4 import BeautifulSoup

url = 'http://www.covers.com/pageLoader/pageLoader.aspx?page=/data/wnba/teams/pastresults/2012/team665231.html'

boxurl = urllib2.urlopen(url).read()
soup = BeautifulSoup(boxurl)

# grab every link on the page whose href contains 'boxscore'
boxscores = soup.findAll('a', href=re.compile('boxscore'))
basepath = "http://www.covers.com"
pages = []          # the raw HTML of each boxscore page
for a in boxscores:
    pages.append(urllib2.urlopen(basepath + a['href']).read())
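(As an aside: plain string concatenation works here because the hrefs on that page are site-relative, but the standard-library urljoin, from the urlparse module in Python 2, resolves relative and absolute hrefs alike, so it is a slightly more robust way to build the full URLs. A minimal sketch:)

from urlparse import urljoin   # Python 3: from urllib.parse import urljoin

# urljoin resolves each href against the page it was found on,
# whether the href is relative or absolute
pages = []
for a in boxscores:
    pages.append(urllib2.urlopen(urljoin(url, a['href'])).read())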
Then, in a new window, I do this:

import pandas as pd
from dateutil.parser import parse

newsoup = pages[1]  # I am manually changing this index every time

soup = BeautifulSoup(newsoup)
def _unpack(row, kind='td'):
    return [val.text for val in row.findAll(kind)]

tables = soup('table')
linescore = tables[1]   
linescore_rows = linescore.findAll('tr')
# unpack each linescore row once instead of re-parsing it per quarter
road_line = _unpack(linescore_rows[1])
home_line = _unpack(linescore_rows[2])
roadteamQ1 = float(road_line[1])
roadteamQ2 = float(road_line[2])
roadteamQ3 = float(road_line[3])
roadteamQ4 = float(road_line[4])    # add OT rows if ???
roadteamFinal = float(road_line[-3])
hometeamQ1 = float(home_line[1])
hometeamQ2 = float(home_line[2])
hometeamQ3 = float(home_line[3])
hometeamQ4 = float(home_line[4])    # add OT rows if ???
hometeamFinal = float(home_line[-3])

misc_stats = tables[5]
misc_stats_rows = misc_stats.findAll('tr')
roadteam = str(_unpack(misc_stats_rows[0])[0]).strip()
hometeam = str(_unpack(misc_stats_rows[0])[1]).strip()
datefinder = tables[6]
datefinder_rows = datefinder.findAll('tr')

date = str(_unpack(datefinder_rows[0])[0]).strip()
year = 2012
# the page omits the year, so pin it on the parsed date;
# datetime.replace returns a new object, so keep the result
parsedDate = parse(date).replace(year=year)
month = parsedDate.month
day = parsedDate.day
modDate = str(day) + str(month) + str(year)
gameid = modDate + roadteam + hometeam

data = {'roadteam': [roadteam],
        'hometeam': [hometeam],
        'roadQ1': [roadteamQ1],
        'roadQ2': [roadteamQ2],   
        'roadQ3': [roadteamQ3],
        'roadQ4': [roadteamQ4],
        'homeQ1': [hometeamQ1],
        'homeQ2': [hometeamQ2],   
        'homeQ3': [hometeamQ3],
        'homeQ4': [hometeamQ4]}

globals()[gameid] = pd.DataFrame(data)   # gameid is already a str
df = pd.DataFrame.load('df')    # old pandas API; newer versions: pd.read_pickle('df')
df = pd.concat([df, globals()[gameid]])
df.save('df')                   # old pandas API; newer versions: df.to_pickle('df')
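(One detail worth flagging in the code above: str(day) + str(month) + str(year) is not zero-padded, so different dates can collide, e.g. 2 December and 21 February 2012 both become '2122012'. strftime pads to a fixed width and avoids that:)

# '%d%m%Y' zero-pads day and month, so every date maps to a unique
# eight-character string: 02122012 vs 21022012
modDate = parsedDate.strftime('%d%m%Y')
gameid = modDate + roadteam + hometeam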

How can I automate this, so that I do not have to change newsoup = pages[1] by hand and can scrape every boxscore linked from the first URL in one pass? I am very new to Python and am missing some of the basics.

So your first code box already collects the pages.

Then in the second code box, if I understand you correctly, you just have to loop over them:

for page in pages:
    soup = BeautifulSoup(page)
    # rest of the code here
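Concretely, one way to tie the two boxes together is to wrap the second one in a function that takes a page and returns a one-row DataFrame, then concatenate the results. A minimal sketch, reusing the table indices and helpers from the question (untested against the live page; the globals() step is dropped because a plain list of frames makes it unnecessary):

import pandas as pd
from bs4 import BeautifulSoup   # or the BeautifulSoup 3 import, as above
from dateutil.parser import parse

def _unpack(row, kind='td'):
    return [val.text for val in row.findAll(kind)]

def parse_boxscore(page, year=2012):
    # turn the raw HTML of one boxscore page into a one-row DataFrame
    soup = BeautifulSoup(page)
    tables = soup('table')

    linescore_rows = tables[1].findAll('tr')
    road_line = _unpack(linescore_rows[1])
    home_line = _unpack(linescore_rows[2])

    misc_stats_rows = tables[5].findAll('tr')
    roadteam = str(_unpack(misc_stats_rows[0])[0]).strip()
    hometeam = str(_unpack(misc_stats_rows[0])[1]).strip()

    date = str(_unpack(tables[6].findAll('tr')[0])[0]).strip()
    parsedDate = parse(date).replace(year=year)

    return pd.DataFrame({'gameid': [parsedDate.strftime('%d%m%Y') + roadteam + hometeam],
                         'roadteam': [roadteam], 'hometeam': [hometeam],
                         'roadQ1': [float(road_line[1])], 'roadQ2': [float(road_line[2])],
                         'roadQ3': [float(road_line[3])], 'roadQ4': [float(road_line[4])],
                         'homeQ1': [float(home_line[1])], 'homeQ2': [float(home_line[2])],
                         'homeQ3': [float(home_line[3])], 'homeQ4': [float(home_line[4])]})

# one frame per boxscore page, then a single concat; no manual index edits
df = pd.concat([parse_boxscore(p) for p in pages], ignore_index=True)
df.to_pickle('df')   # on the old pandas in the question: df.save('df')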

Why do you have to change it manually? You mean pages[2], pages[3], and so on?

I only knew how to import them one at a time; I will give this a try. Do I need to pause between pages, and if so, how do I do that?

Pause? I am not sure why you would want to do that, but if you do, you can use

raw_input('someprompt:')

so it will wait for you to press Enter.
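(If the concern behind "pausing" is firing one request per boxscore at the server in quick succession, a short time.sleep between fetches is the usual non-interactive approach; a minimal sketch, where the one-second delay is an arbitrary choice:)

import time

pages = []
for a in boxscores:
    pages.append(urllib2.urlopen(basepath + a['href']).read())
    time.sleep(1)   # wait a second between requests to go easy on the server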