Python: running a Scrapy crawl with cron and saving to MongoDB


I'm running a Scrapy spider from a cron job to crawl a website and save the results to MongoDB. When I run a regular scrapy crawl, it works and saves to MongoDB. However, when I run it via cron, nothing is saved to the database. The log output shows the usual crawl results, but nothing reaches MongoDB. What am I missing? My guess is that it has something to do with Scrapy's environment, because I can call mongo save() directly inside a single spider and get the expected result, but not when I move it into a pipeline.

Thanks

**crontab -e** 
PATH=/home/ubuntu/crawlers/env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/15 * * * * /home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output
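As a first diagnostic step (my suggestion, not part of the original post), it can help to redirect stderr as well, so tracebacks from the cron run end up in the output file instead of being discarded:

*/15 * * * * /home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output 2>&1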

**pipeline**
import datetime

from pymongo import MongoClient
from scrapy.conf import settings


class EvilscrapyPipeline(object):
    def __init__(self):
        connection = MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        self.log_record(item)
        print(item)
        # Insert only new, complete items: skip URLs already stored and
        # items missing a title or content.
        if item['url']:
            if self.collection.find({"url": item['url']}).count() == 0:
                if item['title']:
                    if item['content']:
                        item['timestamp'] = datetime.datetime.now()
                        self.collection.insert(item)
        return item
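For reference, Scrapy only invokes a pipeline that is enabled in settings.py. A minimal sketch, where the exact dotted path is an assumption on my part based on the project layout implied by the crontab entry:

ITEM_PIPELINES = {
    # hypothetical dotted path; adjust to the actual module containing EvilscrapyPipeline
    'evilscrapy.pipelines.EvilscrapyPipeline': 300,
}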
The output of running "/home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output" via the cron job differs from what I get when I run the same command in my terminal.

Specifically, in link_spider the log stops right after the mongodb call:

import os
import sys

import scrapy

# Make the shared 'server' directory importable so the mongo helpers resolve.
lib_path = os.path.realpath(os.path.join(os.path.abspath(os.path.dirname(__file__)), '../../../', 'server'))
if lib_path not in sys.path:
    sys.path[0:0] = [lib_path]
from mongo import save_mongo, check_mongo


class LinkSpider(scrapy.Spider):

    def parse(self, response):
        ''' code to get urls to complete_list '''
        for url in complete_list:
            yield scrapy.Request(url=url, callback=self.parse)
            print("log")

        if check_mongo(url):
            print("log2")
The log seems to stop at this point.
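One detail worth noting: because the if sits after the for loop, url refers only to the last entry of complete_list, so check_mongo is consulted once per parse call rather than once per URL. A sketch of checking each URL before scheduling it (assuming check_mongo returns True for URLs not yet stored):

    def parse(self, response):
        ''' code to get urls to complete_list '''
        for url in complete_list:
            # Only schedule URLs that are not already in MongoDB.
            if check_mongo(url):
                self.logger.debug("scheduling %s", url)
                yield scrapy.Request(url=url, callback=self.parse)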

My mongo connector file (the mongo module imported above):

import json
import os
import sys
from pymongo import MongoClient
from scrapy.conf import settings


def check_mongo(url):
    # Returns True when the URL is not yet stored in the collection.
    connection = MongoClient()
    db = connection[settings['MONGODB_DB']]
    collection = db[settings['MONGODB_COLLECTION']]
    if collection.find({"url": url}).count() != 0:
        return False
    else:
        return True
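Note that check_mongo connects with a bare MongoClient(), ignoring MONGODB_SERVER and MONGODB_PORT, and silently receives None for the database name if the Scrapy settings are not loaded. A hedged variant that reuses the settings and fails loudly instead (same names as in the question):

def check_mongo(url):
    db_name = settings['MONGODB_DB']
    if db_name is None:
        # Outside a Scrapy context, settings may be unpopulated; failing here is
        # clearer than pymongo's "TypeError: name must be an instance of str".
        raise RuntimeError("MONGODB_DB is not set; are Scrapy settings loaded?")
    connection = MongoClient(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])
    collection = connection[db_name][settings['MONGODB_COLLECTION']]
    return collection.find({"url": url}).count() == 0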
And the settings:

MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = 'articles'
MONGODB_COLLECTION = 'articles_data'
mongod.log:

2017-05-01T21:12:40.926+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] MongoDB starting : pid=4249 port=27017 dbpath=/var/lib/mongodb 64-bit host=ubuntu
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] db version v3.2.12
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] git version: ef3e1bc78e997f0d9f22f45aeb1d8e3b6ac14a14
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] modules: none
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] build environment:
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten]     distarch: x86_64
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1", port: 27017 }, storage: { dbPath: "/var/lib/mongo$
2017-05-01T21:12:40.961+0000 I -        [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage en$
2017-05-01T21:12:40.961+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics$
2017-05-01T21:12:41.300+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data'
2017-05-01T21:12:41.300+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2017-05-01T21:12:41.301+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2017-05-02T19:52:06.590+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T19:52:06.590+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:08:58.458+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:08:58.458+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:08:58.458+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1

You are right: processes started by crontab run with their own minimal environment. This commonly causes problems when launching complex processes that depend on specific environment variables.

To work around this, try sourcing $HOME/.profile before the command in your crontab. For example:

PATH=/home/ubuntu/crawlers/env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/15 * * * * . $HOME/.profile; /home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output
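If sourcing .profile is not enough, another common workaround (my addition, not from the original answer) is to run the whole command through a login shell so the usual environment files are loaded:

*/15 * * * * /bin/bash -lc '/home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output 2>&1'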

I'm trying your solution... haven't seen any progress yet.

Sorry to hear that. Could you post your scrapy and mongodb logs here?

No worries, and thanks for the help. It looks to me like the problem is the mongodb connection during the cron job... I've updated the question with the relevant files. Let me know if you need anything else, thanks.

Checked your updated question; to me the scrapy log looks suspicious around the mongodb call. My first guess is that there is an exception or error message with useful information that never gets a chance to bubble up. Try raising the log level and look for exception handling that might be swallowing the error message.

Ah, you were right. After enabling the other log levels I now get "TypeError: name must be an instance of str" from the mongo db connection. Looking into that now. Thanks!
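For context on that final error: pymongo raises "TypeError: name must be an instance of str" when the database name passed to connection[...] is None, which is exactly what settings['MONGODB_DB'] returns when the Scrapy settings are not loaded in the cron environment. A minimal sketch reproducing it (assuming pymongo 3.x):

from pymongo import MongoClient

connection = MongoClient('localhost', 27017)
db_name = None  # what settings['MONGODB_DB'] yields when Scrapy settings are missing
db = connection[db_name]  # raises TypeError: name must be an instance of str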