Python mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on '127.0.0.1:3306' on Scrapinghub

Tags: python, mysql, scrapy, scrapinghub

I tried to run my spider on Scrapinghub, and when it runs I get this error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 80, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 105, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 53, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 35, in from_settings
    mw = create_instance(mwcls, settings, crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/utils/misc.py", line 144, in create_instance
    return objcls(*args, **kwargs)
  File "/app/__main__.egg/skripsi/pipelines.py", line 19, in __init__
  File "/app/__main__.egg/skripsi/pipelines.py", line 29, in create_connection
  File "/app/python/lib/python3.6/site-packages/mysql/connector/__init__.py", line 173, in connect
    return MySQLConnection(*args, **kwargs)
  File "/app/python/lib/python3.6/site-packages/mysql/connector/connection.py", line 104, in __init__
    self.connect(**kwargs)
  File "/app/python/lib/python3.6/site-packages/mysql/connector/abstracts.py", line 780, in connect
    self._open_connection()
  File "/app/python/lib/python3.6/site-packages/mysql/connector/connection.py", line 284, in _open_connection
    self._socket.open_connection()
  File "/app/python/lib/python3.6/site-packages/mysql/connector/network.py", line 532, in open_connection
    errno=2003, values=(self.get_address(), _strioerror(err)))
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on '127.0.0.1:3306' (111 Connection refused)
I tried adding mysql-connector-python to requirements.txt and configuring my dependencies in scrapinghub.yml, as shown below.

My requirements.txt:

mysql-connector-python
My scrapinghub.yml:

projects:
  default: 396892
stacks:
  default: scrapy:1.6-py3
requirements:
  file: requirements.txt
My pipelines.py:

import mysql.connector

class SkripsiPipeline(object):

    def __init__(self):
        self.create_connection()
        # dispatcher.connect(self.close_spider, signals.close_spider)
        # self.create_table()

    def create_connection(self):
        self.conn = mysql.connector.connect(
            host = '127.0.0.1',
            password = '',
            user = 'root',
            database = 'news'
        )
        self.curr = self.conn.cursor()

    def process_item(self, item, spider):
        self.store_db(item)
        return item

    def store_db(self,item):
        self.curr.execute("INSERT INTO news_tb (url, title, author, time, crawl_time, imagelink, content) values (%s,%s,%s,%s,%s,%s,%s)",(
            item['url'][0],
            item['title'][0],
            item['author'][0],
            item['time'][0],
            item['crawl_time'][0],
            item['imagelink'][0],
            item['content'][0]
        ))
        self.conn.commit()
This is the error I get when I run my spider on Scrapinghub. If anyone is familiar with this problem, please let me know.


Thanks.

This simply won't work, because Scrapy Cloud does not provide any SQL support. You are trying to connect to 127.0.0.1, which is localhost; that would require MySQL to be installed and running on Scrapy Cloud itself, and it is not.
I suggest you run MySQL somewhere reachable on the web and connect to it via a domain name or public IP address.
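One way to follow this advice is to keep the connection details out of the pipeline and read them from Scrapy settings, pointing at a remote host instead of 127.0.0.1. This is only a sketch: the `MYSQL_*` setting names and `db.example.com` are assumptions for illustration, not built-in Scrapy or Scrapy Cloud settings.

```python
# Sketch only: the MYSQL_* setting names and the default host below are
# placeholders, not real Scrapy/Scrapinghub settings.

def connection_kwargs(settings):
    """Build mysql.connector.connect() kwargs from a Scrapy-style settings
    mapping, pointing at a remote MySQL host rather than localhost."""
    return {
        'host': settings.get('MYSQL_HOST', 'db.example.com'),
        'user': settings.get('MYSQL_USER', 'root'),
        'password': settings.get('MYSQL_PASSWORD', ''),
        'database': settings.get('MYSQL_DATABASE', 'news'),
    }


class SkripsiPipeline(object):
    def __init__(self, settings):
        # Imported lazily so the helper above stays usable without the driver.
        import mysql.connector
        self.conn = mysql.connector.connect(**connection_kwargs(settings))
        self.curr = self.conn.cursor()

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this hook and hands over the crawler, whose
        # .settings object exposes .get() like a dict.
        return cls(crawler.settings)
```

The `MYSQL_*` values could then be set in the project's settings.py, or overridden per job on Scrapy Cloud, so the deployed spider never hardcodes localhost.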

Comments:

- Could you check whether the MySQL service is running?
- I'm confused as well; MySQL is not listed among my services.
- Are you using a Windows or a Linux environment?
- Windows. I set MySQL up as a service as described in a tutorial. What should the next step be?
- Use this blog post to work out where your MySQL server actually is.
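To answer the first comment's question without any MySQL client tools, a plain TCP check tells you whether anything is listening on the MySQL port at all. The host and port below are placeholders; substitute your own server's address.

```python
import socket

def mysql_reachable(host, port=3306, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds -- a quick
    way to see whether a MySQL server is listening there at all."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False

# Example (placeholder host): mysql_reachable('db.example.com', 3306)
```

A `False` here is exactly the situation behind error 2003: nothing is accepting connections at that address, so the driver reports "Connection refused" before any authentication happens.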