
Python: Scrapy spider for a public FTP site with authentication data, getting an FTP error


I am writing a spider for a public FTP site that requires authentication.

I supplied the FTP username and password, but Scrapy does not handle the request and raises a 'ftp_user' error.

# all import stmt
class my_xml(BaseSpider):
    name = 'my_xml'

    def start_requests(self):
        yield Request(
            url='url',
            meta={'ftp_user': self.ftp_user, 'ftp_password': self.ftp_password}
        )

    def parse(self, response):
        print response.body
I get the following error:

 2015-04-03 12:46:08+0530 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
 2015-04-03 12:46:08+0530 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
 2015-04-03 12:46:08+0530 [-] ERROR: Unhandled error in Deferred:
 2015-04-03 12:46:08+0530 [-] Unhandled Error
    Traceback (most recent call last):
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\middleware.py", line 38, in process_request
        return download_func(request=request, spider=spider)
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 123, in _enqueue_request
        self._process_queue(spider, slot)
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 143, in _process_queue
        dfd = self._download(slot, request, spider)
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 154, in _download
        dfd = mustbe_deferred(self.handlers.download_request, request, spider)
    --- <exception caught here> ---
      File "C:\Python27\lib\site-packages\scrapy\utils\defer.py", line 39, in mustbe_deferred
        result = f(*args, **kw)
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 40, in download_request
        return handler(request, spider)
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\handlers\ftp.py", line 72, in download_request
        creator = ClientCreator(reactor, FTPClient, request.meta["ftp_user"],
    exceptions.KeyError: 'ftp_user'
Can anyone suggest a solution for this error? If my approach is wrong, please point me to the correct one. How should spiders of this type be handled? Note: the URL, ftp_user and ftp_password are correct; the site opens in a browser with these credentials.
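The traceback shows where the failure comes from: Scrapy's FTP download handler reads the credentials straight out of `request.meta` with plain `[]` indexing, so if the spider never puts `'ftp_user'` into meta (here, because `self.ftp_user` was never defined and `start_requests` never ran as intended), the lookup raises `KeyError: 'ftp_user'`. A minimal sketch of that lookup, not the real Scrapy source, with a hypothetical helper name:

```python
# Hypothetical helper mimicking the meta lookup in scrapy's ftp.py
# (creator = ClientCreator(reactor, FTPClient, request.meta["ftp_user"], ...)).
# Plain [] indexing on a dict without the key raises KeyError.
def build_ftp_client_args(meta):
    return meta["ftp_user"], meta["ftp_password"]

# Missing credentials reproduce the error from the traceback:
try:
    build_ftp_client_args({})
except KeyError as exc:
    print("KeyError:", exc)

# With both keys present, the handler gets what it needs:
user, password = build_ftp_client_args(
    {'ftp_user': 'your_username', 'ftp_password': 'your_password'}
)
```

So the fix is simply to make sure both keys are present in the request's `meta` dict before the download handler runs.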

Try the following:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request

class my_xml(scrapy.Spider):
    name = 'my_xml'
    ftp_host = 'ftp://127.0.0.1'
    ftp_user = 'your_username'
    ftp_password = 'your_password'

    def start_requests(self):
        yield Request(
            url=self.ftp_host,
            meta={'ftp_user': self.ftp_user, 'ftp_password': self.ftp_password}
        )

    def parse(self, response):
        print response.body
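Hard-coding credentials as class attributes works, but Scrapy also lets you pass them on the command line with `scrapy crawl my_xml -a ftp_user=... -a ftp_password=...`; each `-a` pair is forwarded to the spider's `__init__` and stored as an attribute. A simplified sketch of that mechanism, using a hypothetical `ArgSpider` class rather than a real `scrapy.Spider` subclass:

```python
# Hypothetical stand-in for scrapy.Spider illustrating how -a arguments
# become spider attributes; Scrapy itself does this in Spider.__init__.
class ArgSpider(object):
    name = 'my_xml'

    def __init__(self, ftp_user=None, ftp_password=None):
        # scrapy crawl my_xml -a ftp_user=U -a ftp_password=P
        # ends up calling __init__ with these keyword arguments.
        self.ftp_user = ftp_user
        self.ftp_password = ftp_password

spider = ArgSpider(ftp_user='your_username', ftp_password='your_password')
```

With that in place, `start_requests` can keep reading `self.ftp_user` and `self.ftp_password` exactly as in the code above, without committing credentials to the source file.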

Could you fix your code sample? You have stripped so much code out of the example that it no longer works, which leads people to solve problems in your broken sample code rather than your actual problem. Please provide working code, simplified and anonymized, that would run if the URL, user and password were replaced with real values.