Python: How do I automatically run a web scraper script every hour?

Tags: python, selenium, web-scraping, automation, cron

I'm pulling data from booking.com. My script collects the data with Selenium, creates a temporary CSV with an appropriate timestamp, and then appends it to the final database (also a CSV). I'd like to fetch fresh data every hour and have it appended to the final database even while I'm away from the machine, but I don't know how to do that. I'm new to web scraping. Right now the script runs in Jupyter. Any help would be greatly appreciated.

I'm using macOS Big Sur.

Here is my code:

from datetime import date

from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def prepare_driver(url):
    '''Returns a Firefox Webdriver pointed at the given URL.'''
    options = Options()
    # options.add_argument('-headless')
    driver = Firefox(executable_path='/Users/andreazavala/Downloads/geckodriver', options=options)
    driver.get(url)
    # Wait until the destination search box ('ss') is present
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'ss')))
    return driver

def fill_form(driver, search_argument):
    '''Fills in the search form, selects today's date, and submits the search.'''
    search_field = driver.find_element_by_id('ss')
    search_field.send_keys(search_argument)
    
    #Look for today's date
    driver.find_element_by_class_name('xp__dates-inner').click()
    slcpath = "td[data-date='"+str(date.today())+"']"
    driver.find_element_by_css_selector(slcpath).click()
    
    # We look for the search button and click it
    driver.find_element_by_class_name('sb-searchbox__button')\
        .click()
    
    wait = WebDriverWait(driver, timeout=10).until(
        EC.presence_of_all_elements_located(
            (By.CLASS_NAME, 'sr-hotel__title')))

domain = 'https://www.booking.com'  # assumed start URL; 'domain' was not defined in the original snippet
driver = prepare_driver(domain)
fill_form(driver, 'City Name')

url_iter = driver.current_url
accommodation_urls = list()
accommodation_urls.append(url_iter)

with open('urls.txt', 'w') as f:
    for item in accommodation_urls:
        f.write("%s\n" % item)
from selectorlib import Extractor
import requests 
from time import sleep
import csv

# Create an Extractor by reading from the YAML file
e = Extractor.from_yaml_file('booking.yml')

def scrape(url):    
    headers = {
        'Connection': 'keep-alive',
        'Pragma': 'no-cache',
        'Cache-Control': 'no-cache',
        'DNT': '1',
        'Upgrade-Insecure-Requests': '1',
        # You may want to change the user agent if you get blocked
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.113 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',

        'Referer': 'https://www.booking.com/index.en-gb.html',
        'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
    }

    # Download the page using requests
    print("Downloading %s"%url)
    r = requests.get(url, headers=headers)
    # Pass the HTML of the page and create 
    return e.extract(r.text,base_url=url)


with open("urls.txt",'r') as urllist, open('data.csv','w') as outfile:
    fieldnames = [
        "name",
        "location",
        "price",
        "price_for",
        "room_type",
        "beds",
        "rating",
        "rating_title",
        "number_of_ratings",
        "url"
    ]
    writer = csv.DictWriter(outfile, fieldnames=fieldnames,quoting=csv.QUOTE_ALL)
    writer.writeheader()
    for url in urllist.readlines():
        data = scrape(url.strip())  # strip the trailing newline from each line
        if data:
            for h in data['hotels']:
                writer.writerow(h)
import pandas as pd

# Stamp the fresh scrape with the current time, then append it to the
# accumulated database
data = pd.read_csv("data.csv")
data.insert(0, 'TimeStamp', pd.to_datetime('today').replace(microsecond=0))
data.to_csv('Tarifa.csv', mode='a', header=False)

# Note: reset_index(..., inplace=True) returns None, so don't assign its result
df_results = pd.read_csv('Tarifa.csv', index_col=0).reset_index(drop=True)
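One pitfall with appending header=False on every run is that Tarifa.csv never gets a header row unless the file was created with one beforehand. A minimal sketch of a safer append step (same file names as above; the os.path.exists check is my addition, not part of the original script):

import os
import pandas as pd

data = pd.read_csv('data.csv')
data.insert(0, 'TimeStamp', pd.to_datetime('today').replace(microsecond=0))
# Write the header only if Tarifa.csv does not exist yet, so the
# accumulated CSV keeps a single, well-formed header row
data.to_csv('Tarifa.csv', mode='a', header=not os.path.exists('Tarifa.csv'), index=False)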

Here is an approach you can use.

Import schedule and time, then wrap your script in a main function that can be called once per hour:

import time
import schedule

def runs_my_script():
    function1()
    function2()
    and_so_on()
Then at the bottom add the following:

if __name__ == "__main__":
    schedule.every().hour.do(runs_my_script) # sets the function to run once per hour
  
    while True:  # loops and runs the scheduled job indefinitely 
        schedule.run_pending()
        time.sleep(1)

It's not elegant, but it gets the basic job done and can be extended to fit your needs :)
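One caveat: schedule.every().hour.do(...) waits a full hour before the first run. If you want a scrape immediately at startup, call the function once before entering the loop (a small tweak to the snippet above):

if __name__ == "__main__":
    runs_my_script()  # run once right away instead of waiting an hour
    schedule.every().hour.do(runs_my_script)

    while True:
        schedule.run_pending()
        time.sleep(1)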

A more system-level approach is to rely on crontab.

In a console, type:

crontab -e

Inside of it, put:

0 0-23 * * * /path/to/script/app.py

It will run every hour, every day.


Press Esc, then type :wq. This will save the new cron job and exit the editor.
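Note that cron runs with a minimal environment, so the entry is more robust if it invokes the interpreter explicitly and logs its output somewhere you can inspect. A hedged sketch (the interpreter path and log location are assumptions to adapt):

0 * * * * /usr/local/bin/python3 /path/to/script/app.py >> /tmp/scraper.log 2>&1

Redirecting stdout and stderr to a log file makes it much easier to see why an hourly run failed.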

If you're offline you won't be able to connect to the site, so that will be a problem no matter how the script is scheduled.