Run a Python script once every hour

Tags: python, python-multithreading, tweepy

I want to run a Python script once every hour and save the data in an Elasticsearch index. So I used a function I wrote, set_interval, which uses the tweepy library. But it does not work the way I need it to: it runs every minute and saves the data in the index. Even after setting the interval to 3600 seconds, it still runs every minute, but I want it configured to run once every hour.

How can I fix this? Here is my Python script:

def call_at_interval(time, callback, args):
    while True:
        timer = Timer(time, callback, args=args)
        timer.start()
        timer.join()


def set_interval(time, callback, *args):
    Thread(target=call_at_interval, args=(time, callback, args)).start()


def get_all_tweets(screen_name):
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    screen_name = ""

    # initialize a list to hold all the tweepy Tweets
    alltweets = []

    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)

    # save most recent tweets
    alltweets.extend(new_tweets)

    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1

    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        # print("getting tweets before %s" % (oldest))

        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

        # save most recent tweets
        alltweets.extend(new_tweets)

        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        # print("...%s tweets downloaded so far" % (len(alltweets)))

    outtweets = [{'ID': tweet.id_str, 'Text': tweet.text, 'Date': tweet.created_at, 'author': tweet.user.screen_name} for tweet in alltweets]

    def save_es(outtweets, es):  # PEP 8 convention
        data = [  # Please without s in data
            {
                "_index": "index name",
                "_type": "type name",
                "_id": index,
                "_source": ID
            }
            for index, ID in enumerate(outtweets)
        ]
        helpers.bulk(es, data)

    save_es(outtweets, es)

    print('Run at:')
    print(datetime.now())
    print("\n")

    set_interval(3600, get_all_tweets(screen_name))

Get rid of all the timer code and just write the logic; cron will do the job for you. Add this in
crontab -e:

0 * * * * /path/to/python /path/to/script.py
0 * * * * means the job runs at minute 0 of every hour; you can find more explanation here.
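
For example (a hedged variant of the entry above; the log file path is only an assumption), you can redirect the script's output so every hourly run is recorded:

0 * * * * /path/to/python /path/to/script.py >> /path/to/script.log 2>&1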

I also noticed that you are calling get_all_tweets(screen_name) recursively inside itself; I think you will have to call it from outside instead.

Keep your script just like this:

def get_all_tweets(screen_name):
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    screen_name = ""

    # initialize a list to hold all the tweepy Tweets
    alltweets = []

    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)

    # save most recent tweets
    alltweets.extend(new_tweets)

    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1

    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        # print("getting tweets before %s" % (oldest))

        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

        # save most recent tweets
        alltweets.extend(new_tweets)

        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        # print("...%s tweets downloaded so far" % (len(alltweets)))

    outtweets = [{'ID': tweet.id_str, 'Text': tweet.text, 'Date': tweet.created_at, 'author': tweet.user.screen_name} for tweet in alltweets]

    def save_es(outtweets, es):  # PEP 8 convention
        data = [  # Please without s in data
            {
                "_index": "index name",
                "_type": "type name",
                "_id": index,
                "_source": ID
            }
            for index, ID in enumerate(outtweets)
        ]
        helpers.bulk(es, data)

    save_es(outtweets, es)

get_all_tweets("") #your screen name here

Why does doing a task every hour have to be so complicated? You can run the script once every hour as shown below; note that the actual period is one hour plus the time the work itself takes:

import time


def do_some_work():
    print("Do some work")
    time.sleep(1)
    print("Some work is done!")


if __name__ == "__main__":
    time.sleep(60)  # imagine you would like to start work in 1 minute first time
    while True:
        do_some_work()
        time.sleep(3600)  # do work every one hour

If you want the script to start exactly once every hour, regardless of how long the work takes, use the following code:

import time
import threading


def do_some_work():
    print("Do some work")
    time.sleep(4)
    print("Some work is done!")


if __name__ == "__main__":
    time.sleep(60)  # imagine you would like to start work in 1 minute first time
    while True:
        thr = threading.Thread(target=do_some_work)
        thr.start()
        time.sleep(3600)  # do work every one hour 

In this case thr should finish its work in less than 3600 seconds; even if it does not, you will still get results, but they will come from another attempt (another thread). See the example below:

import time
import threading


class AttemptCount:
    def __init__(self, attempt_number):
        self.attempt_number = attempt_number


def do_some_work(_attempt_number):
    print(f"Do some work {_attempt_number.attempt_number}")
    time.sleep(4)
    print(f"Some work is done! {_attempt_number.attempt_number}")
    _attempt_number.attempt_number += 1


if __name__ == "__main__":
    attempt_number = AttemptCount(1)
    time.sleep(1)  # imagine you would like to start work in 1 minute first time
    while True:
        thr = threading.Thread(target=do_some_work, args=(attempt_number, ),)
        thr.start()
        time.sleep(1)  # do work every one hour

In this case the output you will see is something like:

Do some work
Do some work
Do some work
Do some work
Some work is done! 1
Do some work 2
Some work is done! 2
Do some work
Some work is done! 3
Do some work
Some work is done! 4
Do some work
Some work is done! 5
Do some work
Some work is done! 6
Do some work
Some work is done! 7
Do some work
Some work is done! 8
Do some work

I like to use subprocess.Popen for tasks like this: if the child process fails to finish its work within an hour for any reason, you just terminate it and start a new one.
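
A minimal sketch of that approach (here script.py is a hypothetical file containing just the tweet-fetching logic; the names are assumptions, not part of the original answer):

import subprocess
import time

while True:
    # start one hourly run of the worker script
    proc = subprocess.Popen(["python", "script.py"])
    time.sleep(3600)  # wait until the next hourly slot
    if proc.poll() is None:  # the child is still running after an hour
        proc.kill()          # terminate the overrunning run...
        proc.wait()          # ...and reap it before the next one starts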


You can also use cron to schedule the process to run once every hour.

Please provide more information about what Thread is and which library you are using.
I am using the tweepy library.
tweepy is a library for the Twitter API; it does not have anything named Thread. What library are you using to create the threads?
I use threading for that: I import threading and time.
Use cron, this is easier to customize and simpler.