Python 3.x: Beautiful Soup returning "None" while crawling for existing HTML

Tags: python-3.x, selenium-webdriver, beautifulsoup, web-crawler, ssl-certificate

I just want to get the HTML of a website's search bar. I have written the code and tried it on "", "" and many other sites, and it worked fine. But it is not working on this one.

It returns None, while it should return:

input type="search" id="q" name="q" placeholder="Search in Daraz" class="search-box__input--O34g" tabindex="1" value="" data-spm-anchor-id="a2a0e.home.search.i0.35e34937eWCmbI" 我也试过这样做:

    import requests
    from bs4 import BeautifulSoup

    bsObj = BeautifulSoup(requests.get("https://www.daraz.com.pk").content,
                          "html.parser")
    nameList = bsObj.find("input", {"type": "search"})
    print(nameList)
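
A quick way to see what requests actually receives is to search the raw response text for markers taken from the expected tag above; if the page builds the search bar with JavaScript, they will not be there. This is a minimal diagnostic sketch of my own (the marker strings come from the expected output, not from any guaranteed page structure):

    import requests

    # Fetch the raw HTML exactly as requests sees it (no JavaScript is executed).
    resp = requests.get("https://www.daraz.com.pk")

    # If these substrings are missing, the search box is most likely injected by
    # client-side JavaScript, which requests + BeautifulSoup alone cannot run.
    for marker in ('type="search"', "search-box__input"):
        print(marker, "found" if marker in resp.text else "not found")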
And I used Selenium this way:

    import time

    from bs4 import BeautifulSoup
    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("https://www.daraz.com.pk")

    time.sleep(2)
    content = driver.page_source.encode('utf-8').strip()
    soup = BeautifulSoup(content, "html.parser")
    time.sleep(2)
    officials = soup.find("input", {"type": "search"})
    print(str(officials))
but it failed.

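If the search box only appears after client-side rendering, one thing worth trying (my own sketch, not part of the original attempts) is to make Selenium wait explicitly for the element instead of sleeping a fixed two seconds:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("https://www.daraz.com.pk")

    # Wait up to 10 seconds for the search input to be present in the DOM,
    # rather than hoping a fixed sleep is long enough.
    search_box = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'input[type="search"]'))
    )
    print(search_box.get_attribute("outerHTML"))
    driver.quit()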