Python: My first Scrapy spider doesn't work with a MySQL database


I'm new to web scraping and my scraping code doesn't work; I have no clue why! I want to scrape this website () and then save the data to a MySQL database, so I wrote a basic spider:

import scrapy
from ..items import QuotetutorialItem

class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http://quotes.toscrape.com/'
    ]

    def parse(self, response):

        items = QuotetutorialItem()

        all_div_quotes = response.css('div.quote')

        for quotes in all_div_quotes:

            title = quotes.css('span.text::text').extract()
            author = quotes.css('.author::text').extract()
            tag = quotes.css('.tag::text').extract()

            items['title'] = title
            items['author'] = author
            items['tag'] = tag

            yield items
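(An aside, not the cause of the error below: `items = QuotetutorialItem()` is created once outside the loop, so every `yield` hands out the same object, which later iterations keep mutating. A minimal pure-Python sketch of the effect:)

```python
# Yielding one shared object: later mutations clobber earlier "results".
shared = {}
collected = []
for value in ["a", "b"]:
    shared["title"] = value
    collected.append(shared)      # same dict object appended twice

print(collected)                  # [{'title': 'b'}, {'title': 'b'}]

# A fresh object per iteration keeps each result independent.
fixed = []
for value in ["a", "b"]:
    fixed.append({"title": value})

print(fixed)                      # [{'title': 'a'}, {'title': 'b'}]
```

Moving `items = QuotetutorialItem()` inside the `for` loop gives each quote its own item.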
Here is my `pipelines.py` code:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

# Scraping data - > Item Containers - > Json/csv files
# Scraping data - > Item Containers - > Pipeline - > SQL/Mongo database

import mysql.connector

class QuotetutorialPipeline(object):

    def __int__(self):
        self.create_connection()
        self.create_table()

    def create_connection(self):
        self.conn = mysql.connector.connect(
                host = 'localhost',
                user = 'root',
                passwd = 'jozefleonel',
                database = 'myquotes'
            )
        self.curr = self.conn.cursor()

    def create_table(self):
        self.curr.execute("""DROP TABLE IF EXISTS quotes_tb""")
        self.curr.execute("""create table quotes_tb(
                        title text,
                        author text,
                        tag text
                        )""")

    def process_item(self, item, spider):
        self.store_db(item)
        return item

    def store_db(self,item):
        self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
            item['title'][0],
            item['author'][0],
            item['tag'][0]
            ))

        self.conn.commit()
The full error message is at the end of this post.

Thanks ^^

In `process_item` you call

self.store_db(item)

`store_db` tries to use the database cursor `self.curr`, but it is never set anywhere in your pipeline: the constructor is misspelled `__int__` instead of `__init__`, so Python never calls it and `create_connection` never runs.
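This is easy to reproduce outside Scrapy: Python only calls `__init__` when an object is constructed, while a method named `__int__` is the unrelated `int()` conversion hook and is never invoked automatically, so nothing ever sets `self.curr`:

```python
# __init__ runs at construction; __int__ (the int() conversion hook) does not.
class Broken:
    def __int__(self):          # typo: never runs when Broken() is created
        self.curr = "cursor"

class Fixed:
    def __init__(self):         # correct spelling: runs on Fixed()
        self.curr = "cursor"

print(hasattr(Broken(), "curr"))   # False -> AttributeError when used
print(hasattr(Fixed(), "curr"))    # True
```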

I think this is what you should do:

class QuotetutorialPipeline(object):

    def __init__(self):
        self.curr,self.conn = self.create_connection()
        self.curr = self.create_table(self.curr)

    def create_connection(self):
        conn = mysql.connector.connect(
                host = 'localhost',
                user = 'root',
                passwd = 'jozefleonel',
                database = 'myquotes'
            )
        return conn.cursor(),conn

    def create_table(self,curr):
        curr.execute("""DROP TABLE IF EXISTS quotes_tb""")
        curr.execute("""create table quotes_tb(
                        title text,
                        author text,
                        tag text
                        )""")

        return curr

    def process_item(self, item, spider):
        self.store_db(item)
        return item

    def store_db(self,item):
        self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
            item['title'][0],
            item['author'][0],
            item['tag'][0]
            ))

        self.conn.commit()
We return both the cursor and the connection object from `create_connection`, and the cursor from `create_table`.

`store_db` can now use them through `self.curr` and `self.conn`.
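As a further aside, Scrapy's documented pipeline pattern is to open and close the connection in the `open_spider`/`close_spider` hooks rather than in `__init__`. Here is a runnable sketch of that shape, with standard-library SQLite standing in for MySQL so the example is self-contained (with `mysql.connector` you would keep the `%s` placeholder style instead of `?`):

```python
import sqlite3

class QuotesSqlitePipeline:
    """Same pipeline shape, using Scrapy's open_spider/close_spider hooks.
    SQLite stands in for MySQL here; swap in mysql.connector.connect(...)
    and %s placeholders for the real setup."""

    def open_spider(self, spider):
        self.conn = sqlite3.connect(":memory:")
        self.curr = self.conn.cursor()
        self.curr.execute("DROP TABLE IF EXISTS quotes_tb")
        self.curr.execute(
            "CREATE TABLE quotes_tb (title TEXT, author TEXT, tag TEXT)"
        )

    def process_item(self, item, spider):
        self.curr.execute(
            "INSERT INTO quotes_tb VALUES (?, ?, ?)",
            (item["title"][0], item["author"][0], item["tag"][0]),
        )
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()


# Quick check without running a crawl: feed one item through by hand.
pipeline = QuotesSqlitePipeline()
pipeline.open_spider(spider=None)
pipeline.process_item(
    {"title": ["A quote"], "author": ["Someone"], "tag": ["misc"]}, spider=None
)
pipeline.curr.execute("SELECT COUNT(*) FROM quotes_tb")
print(pipeline.curr.fetchone()[0])   # 1
pipeline.close_spider(spider=None)
```

This also avoids opening a database connection merely because the pipeline class was instantiated.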


The full error log:

(ScrapyTutorial) D:\ScrapyTutorial\quotetutorial>scrapy crawl quotes
2019-06-21 14:43:36 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: quotetut
orial)
2019-06-21 14:43:36 [scrapy.utils.log] INFO: Versions: lxml 4.3.4.0, libxml2 2.9
.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.1, Python 3.6.6 (v
3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)], pyOpenSSL
19.0.0 (OpenSSL 1.1.1c  28 May 2019), cryptography 2.7, Platform Windows-8.1-6.3
.9600-SP0
2019-06-21 14:43:36 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'qu
otetutorial', 'NEWSPIDER_MODULE': 'quotetutorial.spiders', 'ROBOTSTXT_OBEY': Tru
e, 'SPIDER_MODULES': ['quotetutorial.spiders']}
2019-06-21 14:43:37 [scrapy.extensions.telnet] INFO: Telnet Password: e7bf79ce64
7de417
2019-06-21 14:43:37 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-06-21 14:43:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-06-21 14:43:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-06-21 14:43:48 [scrapy.middleware] INFO: Enabled item pipelines:
['quotetutorial.pipelines.QuotetutorialPipeline']
2019-06-21 14:43:48 [scrapy.core.engine] INFO: Spider opened
2019-06-21 14:43:48 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pag
es/min), scraped 0 items (at 0 items/min)
2019-06-21 14:43:48 [scrapy.extensions.telnet] INFO: Telnet console listening on
 127.0.0.1:6023
2019-06-21 14:43:49 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes
.toscrape.com/robots.txt> (referer: None)
2019-06-21 14:43:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes
.toscrape.com/> (referer: None)
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['A
lbert Einstein'],
 'tag': ['change', 'deep-thoughts', 'thinking', 'world'],
 'title': ['"The world as we have created it is a process of our thinking. It '
           'cannot be changed without changing our thinking."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['J
.K. Rowling'],
 'tag': ['abilities', 'choices'],
 'title': ['"It is our choices, Harry, that show what we truly are, far more '
           'than our abilities."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['A
lbert Einstein'],
 'tag': ['inspirational', 'life', 'live', 'miracle', 'miracles'],
 'title': ['"There are only two ways to live your life. One is as though '
           'nothing is a miracle. The other is as though everything is a '
           'miracle."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['J
ane Austen'],
 'tag': ['aliteracy', 'books', 'classic', 'humor'],
 'title': ['"The person, be it gentleman or lady, who has not pleasure in a '
           'good novel, must be intolerably stupid."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['M
arilyn Monroe'],
 'tag': ['be-yourself', 'inspirational'],
 'title': [""Imperfection is beauty, madness is genius and it's better to be "
           'absolutely ridiculous than absolutely boring."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['A
lbert Einstein'],
 'tag': ['adulthood', 'success', 'value'],
 'title': ['"Try not to become a man of success. Rather become a man of '
           'value."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['A
ndré Gide'],
 'tag': ['life', 'love'],
 'title': ['"It is better to be hated for what you are than to be loved for '
           'what you are not."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['T
homas A. Edison'],
 'tag': ['edison', 'failure', 'inspirational', 'paraphrased'],
 'title': [""I have not failed. I've just found 10,000 ways that won't work.""]}

Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['E
leanor Roosevelt'],
 'tag': ['misattributed-eleanor-roosevelt'],
 'title': ['"A woman is like a tea bag; you never know how strong it is until '
           "it's in hot water.""]}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['S
teve Martin'],
 'tag': ['humor', 'obvious', 'simile'],
 'title': ['"A day without sunshine is like, you know, night."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654
, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in
 process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in
 store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
2019-06-21 14:43:50 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-21 14:43:50 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 446,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 2701,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 6, 21, 12, 43, 50, 376034),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 10,
 'log_count/INFO': 9,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/404': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 6, 21, 12, 43, 48, 610377)}
2019-06-21 14:43:50 [scrapy.core.engine] INFO: Spider closed (finished)