Python 2.7: how to scrape the next page's items

Hi, I'm new to programming and new to Scrapy. I'm trying to scrape some data, but I can't get it to scrape the next page's items. Please help me work out this site's next-page URL. Here is my code:
import scrapy
from scrapy.linkextractors import LinkExtractor

class BdJobs(scrapy.Spider):
    name = 'jobs'
    allowed_domains = ['Jobs.com']
    start_urls = [
        'http://jobs.com/',
    ]

    # rules = (Rule(LinkExtractor(allow=()), callback='parse', follow=True),)

    def parse(self, response):
        for title in response.xpath('//div[@class="job-title-text"]/a'):
            yield {
                'titles': title.xpath('./text()').extract()[0].strip()
            }
To grab the next-page URL, here is the Inspect Element screenshot:
https://08733078838609164420.googlegroups.com/attach/58c611bdb536b/bdjobs.png?part=0.1&view=1&vt=ANaJVrEDQr4PODzoOkFRO_fLhL2ZF3x-Mts4XJ8m8qb2RSX1b4n6kv0E-62A2yvw0HkBjrmUOwCrFpMBk_h8UYSWDO6hZXyt-N2brbcYwtltG-A6NiHeaGc
Here is the output:
{"titles": "Senior Software Engineer (.Net)"},
{"titles": "Java programmer"},
{"titles": "VLSI Design Engineer (Japan)"},
{"titles": "Assistant Executive (Computer Lab-Evening programs)"},
{"titles": "IT Officer, Business System Management"},
{"titles": "Executive, IT"},
{"titles": "Officer, IT"},
{"titles": "Laravel PHP Developer"},
{"titles": "Executive - IT (EDISON Footwear)"},
{"titles": "Software Engineer (PHP/ MySQL)"},
{"titles": "Software Engineer [Back End]"},
{"titles": "Full Stack Developer"},
{"titles": "Mobile Application Developer (iOS/ Android)"},
{"titles": "Head of IT Security Operations"},
{"titles": "Database Administrator, Senior Analyst"},
{"titles": "Infrastructure Delivery Senior Analyst, Network Security"},
{"titles": "Head of IT Support Operations"},
{"titles": "Hardware Engineer"},
{"titles": "JavaScript/ Coffee Script Programmer"},
{"titles": "Trainer - Auto CAD"},
{"titles": "ASSISTENT PRODUCTION OFFICER"},
{"titles": "Customer Relationship Executive"},
{"titles": "Head of Sales"},
{"titles": "Sample Master"},
{"titles": "Manager/ AGM (Finance & Accounts)"},
{"titles": "Night Aiditor"},
{"titles": "Officer- Poultry"},
{"titles": "Business Analyst"},
{"titles": "Sr. Executive - Sales & Marketing (Sewing Thread)"},
{"titles": "Civil Engineer"},
{"titles": "Executive Director-HR"},
{"titles": "Sr. Executive (MIS & Internal Audit)"},
{"titles": "Manager, Health & Safety"},
{"titles": "Computer Engineer (Diploma)"},
{"titles": "Sr. Manager/ Manager, Procurement"},
{"titles": "Specialist, Content"},
{"titles": "Manager, Warranty and Maintenance"},
{"titles": "Asst. Manager - Compliance"},
{"titles": "Officer/Sr. Officer/Asst. Manager (Store)"},
{"titles": "Manager, Maintenance (Sewing)"}
Don't use start_urls; it is confusing here. Use the start_requests method instead, which is called as soon as the spider starts:
import scrapy
from scrapy import Request

class BdJobs(scrapy.Spider):
    name = 'bdjobs'
    allowed_domains = ['BdJobs.com']

    def start_requests(self):
        urls = ['http://jobs.bdjobs.com/',
                'http://jobs.bdjobs.com/jobsearch.asp?fcatId=8&icatId=']
        for url in urls:
            yield Request(url, self.parse_detail_page)

    def parse_detail_page(self, response):
        for title in response.xpath('//div[@class="job-title-text"]/a'):
            yield {
                'titles': title.xpath('./text()').extract()[0].strip()
            }
        # TODO
        nextPageLink = GET NEXT PAGE LINK HERE
        yield Request(nextPageLink, self.parse_detail_page)
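As a minimal sketch of how the TODO is usually filled when the next-page control is an ordinary anchor (an assumption; the XPath below is illustrative and not verified against the real site), you extract the href and join it against the current page URL. The joining step behaves like the standard library's urljoin:

```python
# Hedged sketch: inside parse_detail_page you would typically write
#
#     next_href = response.xpath('//a[contains(., "Next")]/@href').extract_first()
#     if next_href:
#         yield Request(response.urljoin(next_href), self.parse_detail_page)
#
# (the XPath is an assumption about the markup, not verified against the site).
# response.urljoin resolves a relative href the same way urljoin does:
try:
    from urlparse import urljoin        # Python 2.7
except ImportError:
    from urllib.parse import urljoin    # Python 3

base = 'http://jobs.bdjobs.com/jobsearch.asp'
# A relative href replaces the last path segment and keeps its query string:
print(urljoin(base, 'jobsearch.asp?fcatId=8&pg=2'))
# -> http://jobs.bdjobs.com/jobsearch.asp?fcatId=8&pg=2
```

This only works when pagination is a plain link; as the comments below note, this particular site paginates differently.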
Note that you have to put the next-page link into nextPageLink yourself.

Thank you for your answer. But how do I get nextPageLink from Inspect Elements? Please help. Thanks a lot.

@Rana They are using JavaScript to go to the next page. Please share the link of the site you are scraping, then I can help.

Here is the link; it is the image link above.

@Rana They are using a POST request to navigate to the next page. You can see the POST URL and the form parameters in the Inspect view. You will have to handle this in Scrapy. Here is a PHP cURL code example. Note the pg=15 string: it defines the page number, and should help you.

Could you provide some Python-related documentation so I can understand it? Thank you.
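In Scrapy, replaying the POST navigation the commenters describe is usually done with scrapy.FormRequest rather than cURL. The sketch below assumes this approach; every form field name in it (fcatId, pg) is an illustrative guess, so copy the real POST URL and field names from the browser's network panel:

```python
# Hedged sketch: paginating via a POST request in Scrapy. In a spider
# this would look roughly like:
#
#     yield scrapy.FormRequest(
#         'http://jobs.bdjobs.com/jobsearch.asp',    # POST URL from the Inspect view
#         formdata=build_next_page_form('15'),
#         callback=self.parse_detail_page,
#     )
#
# The helper below just builds the form body; the field names are
# assumptions, not the site's real parameters.
try:
    from urllib import urlencode        # Python 2.7
except ImportError:
    from urllib.parse import urlencode  # Python 3

def build_next_page_form(current_pg):
    """Return the form fields for the next page, incrementing pg by one."""
    return {'fcatId': '8', 'pg': str(int(current_pg) + 1)}

print(urlencode(sorted(build_next_page_form('15').items())))
# -> fcatId=8&pg=16
```

FormRequest URL-encodes the formdata dict and sends it as the request body, which is exactly what the browser does when it submits the pagination form.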