Python: parsing the contents of an HTML table, but an iframe is the root of the problem


Last question of the day. I am trying to find a way to parse the contents of the table on this page into a variable, and then put it into an Excel file.

Putting the data into Excel after parsing it with BeautifulSoup is no problem.

But (there is always a "but") the page source is very strange: the content sits inside an iframe.

#!/usr/bin/python
# -*- coding: utf-8 -*-

import xlwt
import urllib
import urllib2
from bs4 import BeautifulSoup as soup

print("TEST FOR PTE TESTS CENTERS")

url = 'http://www6.pearsonvue.com/Dispatcher?application=VTCLocator&action=actStartApp&v=W2L&cid=445'
values = {
    'sortColumn': 2,
    'sortDirection': 1,
    'distanceUnits': 0,
    'proximitySearchLimit': 20,
    'countryCode': 'GBR',  # we try for now with a specific country
}

user_agent = 'Mozilla/5 (Solaris 10) Gecko'
headers = {'User-Agent': user_agent}

data = urllib.urlencode(values)
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
thePage = response.read()
the_page = soup(thePage)

result = the_page.find('frame', attrs={'name': 'VTCLocatorPageFrame'})
print result  # we now have the frame link in the result var
So, above is the source code of the script I am working on.

After running the script, the result variable contains the following:

If you have any ideas, it would be very useful :)
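Since the real table lives inside that frame, the usual next step is to read the frame's `src` attribute and fetch that URL with a second request. A minimal offline sketch (Python 3 names; the HTML snippet, path, and wsid value are invented stand-ins):

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

# Invented stand-in for the dispatcher page source.
outer_html = """
<frameset>
  <frame name="VTCLocatorPageFrame"
         src="/Dispatcher?action=actDisplay&amp;wsid=123">
</frameset>
"""

page = BeautifulSoup(outer_html, "html.parser")
frame = page.find("frame", attrs={"name": "VTCLocatorPageFrame"})

# The frame src is relative, so resolve it against the page URL
# before fetching it (e.g. with urllib2.urlopen in Python 2).
base_url = "http://www6.pearsonvue.com/Dispatcher"
frame_url = urljoin(base_url, frame["src"])
print(frame_url)
```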


Thanks in advance.

Sorry, the question was not very clear. I tried to find a solution; here is the script I used:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import xlwt
import urllib
import urllib2
from bs4 import BeautifulSoup as soup
liste_countries = ['USA','AFG','ALA','ALB','DZA','ASM','AND','AGO','AIA','ATA','ATG','ARG','ARM','ABW','AUS','AUT','AZE','BHS','BHR','BGD','BRB','BLR','BEL','BLZ','BEN','BMU','BTN','BOL','BES','BIH','BWA','BVT','BRA','IOT','BRN','BGR','BFA','BDI','BDI','KHM','CMR','CAN','CPV','CYM','CAF','TCD','CHL','CHN','CXR','CCK','COL','COM','COG','COD','COK','CRI','CIV','HRV','CUW','CYP','CZE','DNK','DJI','DMA','DOM','ECU','EGY','SLV','GNQ','ERI','EST','ETH','FLK','FRO','FJI','FIN','FRA','GUF','PYF','ATF','GAB','GMB','GEO','DEU','GHA','GIB','GRC','GRL','GRD','GLP','GUM','GTM','GGY','GIN','GNB','GUY','HTI','HMD','HND','HKG','HUN','ISL','IND','IDN','IRN','IRQ','IRL','IMN','ISR','ITA','JAM','JPN','JEY','JOR','KAZ','KEN','KIR','PRK','KOR','KWT','KGZ','LAO','LVA','LBN','LSO','LBR','LBY','LIE','LTU','LUX','MAC','MKD','MDG','MWI','MYS','MDV','MLI','MLT','MHL','MTQ','MRT','MUS','MYT','MEX','FSM','MDA','MCO','MNG','MNE','MSR','MAR','MOZ','MMR','NAM','NRU','NPL','NLD','NCL','NZL','NIC','NER','NGA','NIU','NFK','MNP','NOR','OMN','PAK','PLW','PSE','PAN','PNG','PRY','PER','PHL','PCN','POL','PRT','PRI','QAT','REU','ROU','RUS','RWA','BLM','KNA','LCA','MAF','WSM','SMR','STP','SAU','SEN','SRB','SYC','SLE','SGP','SXM','SVK','SVN','SLB','SOM','ZAF','SGS','SSD','ESP','LKA','SHN','SPM','VCT','SDN','SUR','SJM','SWZ','SWE','CHE','TWN','TJK','TZA','THA','TLS','TKL','TON','TTO','TUN','TUR','TKM','TCA','TUV','UGA','UKR','ARE','GBR','URY','UMI','UZB','VUT','VAT','VEN','VNM','VGB','VIR','WLF','ESH','YEM','ZMB','ZWE']

name_doc_out = raw_input("What name do you want for the Excel output document? >>> ")
wb = xlwt.Workbook(encoding='utf-8')
ws = wb.add_sheet("PTE_TC")
x = 0
numero = 0
total = len(liste_countries)

for liste in liste_countries:
    print("Fetching country number %s of %s" % (numero, total))
    numero = numero + 1
    url = ('http://www6.pearsonvue.com/Dispatcher?v=W2L&application=VTCLocator'
           '&HasXSes=Y&layerPath=ROOT.VTCLocator.SelTestCenterPage'
           '&wscid=199372577&layer=SelTestCenterPage&action=actDisplay'
           '&bfp=top.VTCLocatorPageFrame&bfpapp=top&wsid=1334887910891')
    values = {
        'sortColumn': 2,
        'sortDirection': 1,
        'distanceUnits': 0,
        'proximitySearchLimit': 20,
        'countryCode': liste,
    }

    user_agent = 'Mozilla/5 (Solaris 10) Gecko'
    headers = {'User-Agent': user_agent}

    data = urllib.urlencode(values)
    req = urllib2.Request(url, data, headers)
    response = urllib2.urlopen(req)
    thePage = response.read()
    the_page = soup(thePage)

    tableau = the_page.find('table', attrs={'id': 'apptable'})
    print tableau
    try:
        rows = tableau.findAll('tr')
        for tr in rows:
            cols = tr.findAll('td')
            # TODO: skip the unwanted <td> cells
            y = 0
            x = x + 1
            for td in cols:
                print td.text
                ws.write(x, y, td.text.strip())
                y = y + 1
    except (IndexError, AttributeError):
        pass

wb.save("%s.xls" % name_doc_out)
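The table-walking part of the loop (find `apptable`, then iterate over the `tr`/`td` elements) can be exercised offline against a small invented fragment before any network or xlwt code runs; this sketch uses Python 3:

```python
from bs4 import BeautifulSoup

# Invented fragment shaped like the 'apptable' markup in the frame.
sample = """
<table id="apptable">
  <tr><th>Test Center</th><th>City</th><th>Country</th></tr>
  <tr><td> Pearson London </td><td>London</td><td>GBR</td></tr>
  <tr><td>Pearson Manchester</td><td>Manchester</td><td>GBR</td></tr>
</table>
"""

page = BeautifulSoup(sample, "html.parser")
table = page.find("table", attrs={"id": "apptable"})

rows = []
for tr in table.find_all("tr"):
    cells = [td.text.strip() for td in tr.find_all("td")]
    if cells:  # the header row has only <th>, so it yields no cells
        rows.append(cells)

print(rows)
```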
I think the problem comes from the URL I use. I guess the id changes from one request to another...
http://www6.pearsonvue.com/Dispatcher?v=W2L&application=VTCLocator&HasXSes=Y&layerPath=ROOT.VTCLocator.SelTestCenterPage&wscid=199372577&layer=SelTestCenterPage&action=actDisplay&bfp=top.VTCLocatorPageFrame&bfpapp=top&wsid=1334887910891


It worked fine for an hour, and now it doesn't any more! :-)
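If the `wscid`/`wsid` pair really is a per-session id, hard-coding it would explain why the script stops working: a fresh pair would have to be pulled out of the frame `src` on each run. The query string can be picked apart with the standard library (Python 3 here; the values are the ones from the URL above):

```python
from urllib.parse import urlparse, parse_qs

# The kind of frame URL seen above, with the session-specific ids.
frame_url = ("http://www6.pearsonvue.com/Dispatcher?v=W2L&application=VTCLocator"
             "&wscid=199372577&layer=SelTestCenterPage&action=actDisplay"
             "&wsid=1334887910891")

# parse_qs maps each query parameter to a list of its values.
params = parse_qs(urlparse(frame_url).query)
print(params["wscid"][0], params["wsid"][0])
```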

If you have to filter/parse iframes, here is the code:

from bs4 import BeautifulSoup
import urllib2

page = urllib2.urlopen("put_ur_url")
soup = BeautifulSoup(page)
for link in soup.findAll('iframe'):
    if link['src'].startswith('start_of_path'):
        print(link)
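Checked offline with an invented page (the src paths are placeholders), the same filter looks like this in Python 3:

```python
from bs4 import BeautifulSoup

# Invented page with several iframes.
html = """
<iframe src="/ads/banner"></iframe>
<iframe src="/widgets/locator?cid=445"></iframe>
<iframe src="/widgets/map"></iframe>
"""

soup = BeautifulSoup(html, "html.parser")
# Keep only the iframes whose src starts with the prefix we care about.
matches = [f["src"] for f in soup.find_all("iframe")
           if f["src"].startswith("/widgets/")]
print(matches)
```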

It is not at all clear what you are asking... sorry. If you go to the page, you will see that by making a selection in the list, you reach a table containing the test center names, countries, and regions. I would like to find a way to parse this data :-)