
Python: How do I scrape the retweeters of a tweet using Beautiful Soup?

Tags: python, selenium, web-scraping, beautifulsoup

It is showing this error:

[17548:22900:0415/160654.715:ERROR:device_event_log_impl.cc(214)] [16:06:54.715] Bluetooth: bluetooth_adapter_winrt.cc:1162 RequestRadioAccessAsync failed: RadioAccessStatus::DeniedByUser Will not be able to change radio power.
Here is my code:

from bs4 import BeautifulSoup
from selenium import webdriver

html_text = 'https://twitter.com/videogamedeals/status/1352325118261948418/retweets'

driver = webdriver.Chrome(
    executable_path='C:/Users/atif/Downloads/chromedriver.exe')
driver.get(html_text)

# Parse the rendered page with BeautifulSoup.
html = driver.page_source
soup = BeautifulSoup(html, 'lxml')

# Note: find_all() expects a tag name, not a CSS selector string,
# so this matches nothing.
names = soup.find_all(
    "a.css-4rbku5 css-18t94o4 css-1dbjc4n r-1loqt21 r-1wbh5a2 r-dnmrzs r-1ny4l3l")

print(len(names))
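(An aside on why this prints 0: find_all() treats its first argument as a tag name, so the compound CSS selector string matches no element at all; select() is the BeautifulSoup method that accepts CSS selectors. A minimal illustration with a stand-in snippet of HTML:)

from bs4 import BeautifulSoup

# Stand-in HTML with the same class structure as a Twitter profile link.
html = '<a class="css-4rbku5 css-18t94o4" href="/user">Some User</a>'
soup = BeautifulSoup(html, 'html.parser')

# find_all() looks for a tag literally named "a.css-4rbku5 ...": no match.
print(len(soup.find_all('a.css-4rbku5 css-18t94o4')))  # 0

# select() takes a CSS selector; multiple classes are chained with dots.
print(len(soup.select('a.css-4rbku5.css-18t94o4')))    # 1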

Actually, you can get the names using Selenium alone, without BeautifulSoup. (The Bluetooth line, by the way, is a known harmless ChromeDriver log message on Windows and is unrelated to the scraping.) Here is the code:
from seleniumwire import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

# webdriver-manager downloads a ChromeDriver matching the installed Chrome.
driver = webdriver.Chrome(ChromeDriverManager().install())


html_text = 'https://twitter.com/videogamedeals/status/1352325118261948418/retweets'

driver.get(html_text)
time.sleep(20)  # crude fixed wait for the JavaScript-rendered list; see the sketch below
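# (Sketch, not part of the original answer: the WebDriverWait/expected_conditions
# imports above go unused; an explicit wait like the following would return as
# soon as the name spans are present instead of always sleeping 20 seconds.)
#
# WebDriverWait(driver, 20).until(EC.presence_of_element_located(
#     (By.XPATH, '//span[contains(@class, "css-901oao")]')))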

# Each retweeter's display name sits in a doubly nested span with this
# auto-generated class list. find_elements_by_xpath is deprecated in
# Selenium 4; find_elements(By.XPATH, ...) works in Selenium 3 and 4.
names = driver.find_elements(
    By.XPATH,
    '//span[@class="css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0"]'
    '//span[@class="css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0"]')

for name in names:
    print(name.text)

Follow-up comment from the asker: in case I also want to use the soup, how would the soup be used here? It is hard to get with BeautifulSoup, because the page wraps every name in auto-generated markup like the span holding "Cheap Ass Gamer".
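A minimal sketch of how the soup could still be used, assuming the driver from the answer above has already loaded the page, and that Twitter's auto-generated class names (which change over time) are still the ones in the XPath: hand driver.page_source to BeautifulSoup and query the same nested spans with select(), which, unlike find_all(), accepts a real CSS selector.

from bs4 import BeautifulSoup

# Parse the Selenium-rendered HTML; 'html.parser' avoids the lxml dependency.
soup = BeautifulSoup(driver.page_source, 'html.parser')

# Same doubly nested spans as the XPath above; classes are chained with dots.
names = soup.select(
    'span.css-901oao.css-16my406.r-poiln3.r-bcqeeo.r-qvutc0 '
    'span.css-901oao.css-16my406.r-poiln3.r-bcqeeo.r-qvutc0')

for name in names:
    print(name.get_text())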