Python: web-scraping code returns an empty list for td tags with this class (it works for other tags); how do I use an index to get the seventh td?

Tags: python, selenium, selenium-webdriver, web-scraping

Please suggest a solution for scraping the Entity Type from this page. This web-scraping code is not working: it returns an empty list for the td tags with this class, while it works fine for other tags. Also, how do I use an index? I want the seventh td tag with this class.

INPUT:

import bs4 as bs
import requests as req
import selenium
from selenium import webdriver

driver = webdriver.Chrome()
url= "https://portal.unifiedpatents.com/litigation/caselist?case_no=1%3A18-CV-01956"
#driver.maximize_window()
driver.get(url)

content = driver.page_source.encode('utf-8').strip()
soup = bs.BeautifulSoup(content,"html.parser")
a=soup.find_all("td",{"class":"ant-table-row-cell-break-word"})
print(a)
driver.quit()


OUTPUT: "C:\Users\Lumenci 3\PycharmProjects\untitled6\venv\Scripts\python.exe" "C:/Users/Lumenci 3/.PyCharmCE2019.3/config/scratches/scratch_2.py"
[]

Process finished with exit code 0
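A note on why the original code prints `[]`: the table on that page appears to be rendered client-side by JavaScript (the `ant-table-*` classes come from the Ant Design framework), so `driver.page_source` can be captured before the rows exist in the DOM. A minimal sketch of the symptom, using two hypothetical HTML snapshots (the snippets below are illustrative, not the real page source):

```python
from bs4 import BeautifulSoup

# What the parser might see before JavaScript fills in the table:
before_js = '<div id="root"></div>'

# What it might see after the rows have been rendered:
after_js = (
    '<table><tr>'
    '<td class="ant-table-row-cell-break-word">NPE (Individual)</td>'
    '</tr></table>'
)

selector = {"class": "ant-table-row-cell-break-word"}

# Parsing the pre-render HTML finds no matching cells -> empty list
print(BeautifulSoup(before_js, "html.parser").find_all("td", selector))

# Parsing the rendered HTML finds the cell
print(BeautifulSoup(after_js, "html.parser").find_all("td", selector))
```

The fix is therefore to wait until the rows are present before reading the page, which is exactly what the answer below does with `WebDriverWait`.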

You can do this with Selenium alone, without bs4:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
url= "https://portal.unifiedpatents.com/litigation/caselist?case_no=1%3A18-CV-01956"
driver.get(url)
elements = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 'td.ant-table-row-cell-break-word')))
print([element.text for element in elements])
driver.quit()
Output:

['1:18-cv-01956', '2018-12-11', 'Open', 'Delaware District Court', 'Axcess International, Inc.', 'Lenel Systems International, Inc.', 'Infringement', 'NPE (Individual)', 'High-Tech']
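To answer the indexing part of the question: once the cells are collected, the Entity Type is a positional lookup. In the row printed above, 'NPE (Individual)' is the eighth cell, and Python lists are zero-indexed, so it lives at index 7. A sketch using that list (positions assume the row layout shown above):

```python
cells = ['1:18-cv-01956', '2018-12-11', 'Open', 'Delaware District Court',
         'Axcess International, Inc.', 'Lenel Systems International, Inc.',
         'Infringement', 'NPE (Individual)', 'High-Tech']

# Zero-based indexing: the eighth cell is cells[7]
entity_type = cells[7]
print(entity_type)  # -> NPE (Individual)
```

With the Selenium answer above, `elements[7].text` would give the same value, assuming the table keeps the column order shown.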