Python Selenium: how do I get the updated HTML DOM after scrolling down?

Tags: python, selenium

I'm visiting a page that implements parallax scrolling. I use code to scroll to the bottom, but it doesn't pick up the updated DOM. Here's the code:

import requests
from bs4 import BeautifulSoup
from gensim.summarization import summarize

from selenium import webdriver
from datetime import datetime
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from time import sleep
import sys
import os
import xmltodict
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import traceback
import random

driver = None
driver = webdriver.Firefox()
driver.maximize_window()
def fetch_links(tag):
    links = []
    url = 'https://steemit.com/trending/'+tag
    driver.get(url)
    html = driver.page_source
    sleep(4)

    soup = BeautifulSoup(html,'lxml')
    entries = soup.select('.entry-title > a')
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    sleep(5)
    entries = soup.select('.entry-title > a')
    for e in entries:
        if e['href'].strip() not in entries:
            links.append(e['href'])
    return links

After scrolling the window, you may need to re-parse the page:

driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

sleep(5)

soup = BeautifulSoup(driver.page_source, 'lxml')
entries = soup.select('.entry-title > a')
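Putting that fix back into the original routine, a reworked fetch_links might look like the sketch below (the selector, URL, and sleep durations are carried over from the question, not verified here). The dedupe lives in a small helper so it can be checked without a browser; it also repairs a second slip in the question's loop, which tests e['href'] against entries (a list of Tag objects) rather than links.

```python
from time import sleep

def merge_hrefs(hrefs, seen):
    """Append each href (stripped) to `seen` unless it is already there,
    preserving page order; this is the dedupe the question's loop attempts."""
    for href in hrefs:
        href = href.strip()
        if href not in seen:
            seen.append(href)
    return seen

def fetch_links(tag):
    # Third-party imports kept local so merge_hrefs is importable without them.
    from bs4 import BeautifulSoup
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.maximize_window()
        driver.get('https://steemit.com/trending/' + tag)
        sleep(4)  # wait *before* reading page_source, not after
        soup = BeautifulSoup(driver.page_source, 'lxml')
        links = merge_hrefs((a['href'] for a in soup.select('.entry-title > a')), [])

        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        sleep(5)  # give the lazy-loaded entries time to arrive
        # The key fix: page_source is a snapshot, so fetch it again after
        # scrolling and build a fresh soup from it.
        soup = BeautifulSoup(driver.page_source, 'lxml')
        return merge_hrefs((a['href'] for a in soup.select('.entry-title > a')), links)
    finally:
        driver.quit()
```

fetch_links('python') should then return the first batch of links plus whatever the scroll loaded, with duplicates dropped.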


The problem seems to be on the BeautifulSoup side: all the titles are present in the HTML returned by driver.page_source. By default it picks up 20 entries per page, and on scrolling it should pick up the next 20. As an alternative, you can extract all the links directly with a single JavaScript call:

links = driver.execute_script("return [].map.call(document.querySelectorAll('.entry-title > a'), e => e.href)")
How would that pick up links that are not yet part of the DOM?

From the tests I ran, the new links do show up in the DOM.