
Python: problems with my output when scraping a website

Tags: python, python-3.x, web-scraping, beautifulsoup

I want to scrape all the names of the companies from all the links here:

In each link, there are several companies, like this:

My goal is to get all the links for all of these companies.

Here is my current script:

import requests
from bs4 import BeautifulSoup
import pandas as pd

pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', None)

import re
import nltk

# Tokenizer that keeps only alphanumeric word tokens
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')


def clean_text(text):
    # Tokenize, then re-join with single spaces to normalize whitespace
    tokens = tokenizer.tokenize(text)
    return ' '.join(tokens)


url = 'https://www.bilansgratuits.fr/secteurs/finance-assurance,k.html'

results = requests.get(url)
soup = BeautifulSoup(results.text, "html.parser")

# Collect the href of every link inside the sector listing block
links = [a['href'] for a in soup.find("div", {"class": "listeEntreprises"}).find_all('a', href=True)]

names = []

root_url = 'https://www.bilansgratuits.fr/'
# Build absolute URLs from the relative hrefs
urls = ['{root}{i}'.format(root=root_url, i=i) for i in links]

for url in urls[:3]:
    results = requests.get(url)
    soup = BeautifulSoup(results.text, "html.parser")

    # Most pages list companies in a "donnees" block; fall back to
    # "listeEntreprises" when that block is absent
    try:
        name = [a.text for a in soup.find("div", {"class": "donnees"}).find_all('a', href=True)]
    except AttributeError:
        name = [a.text for a in soup.find("div", {"class": "listeEntreprises"}).find_all('a', href=True)]

    names.append(name)

# Drop whitespace-only entries and split each name into tokens
rx = re.compile(r'^\s+$')
for i in range(3):
    names[i] = [item.split() for item in names[i] if not rx.match(item)]

data = pd.DataFrame({
    'names': names
})

# Stringify each row's token lists, then normalize whitespace with clean_text
data['names'] = data['names'].apply(str)
data['names'] = data['names'].apply(clean_text)

print(data)

#data.to_csv('dftest.csv', sep=';', index=False, encoding = 'utf_8_sig')
I have the following output:

But this is not what I want; I would like one company name per row,

like this:

And so on for all the names.
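For reference, a minimal sketch of the reshaping I am after, assuming names is the nested list built by my script above (one sub-list of token lists per page):

# Flatten the nested structure into one string per company,
# so the DataFrame gets one row per name
flat_names = [' '.join(tokens) for page in names for tokens in page]

data = pd.DataFrame({'names': flat_names})
print(data)  # one company name per row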

Is this what you want?

import pandas as pd
import requests
from bs4 import BeautifulSoup

url = "https://www.bilansgratuits.fr/secteurs/finance-assurance,k.html"
html = requests.get(url).text

follow_urls = [
    f"https://www.bilansgratuits.fr{anchor['href']}" for anchor
    in BeautifulSoup(html, "html.parser").select(".titreElementAnnuaire a")
]

data = []
for follow_url in follow_urls:
    print(f"Fetching: {follow_url}")
    # The 6411Z (central banking) page is laid out differently,
    # so it needs its own CSS selector
    css_selector = ".titreElementAnnuaire a" if "6411Z" in follow_url else ".classementTop .blocRaisonSociale > a"
    company_urls = BeautifulSoup(
        requests.get(follow_url).text,
        "html.parser",
    ).select(css_selector)
    data.extend(
        [
            [
                " ".join(anchor.getText(strip=True).split()),
                f"https://www.bilansgratuits.fr{anchor['href']}",
            ] for anchor in company_urls
        ]
    )

pd.DataFrame(data).to_csv("your_data.csv", index=False, header=["Company", "URL"])
print("Done!")
Output: 345 entries in a single .csv file:
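As a quick sanity check (assuming the script above was run in the current working directory), the file can be read back with pandas:

import pandas as pd

# Re-load the generated CSV and confirm the row count
df = pd.read_csv("your_data.csv")
print(len(df))     # 345 entries, per the run above
print(df.head())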


Here is my final answer:

import requests
from bs4 import BeautifulSoup
import pandas as pd
import re

url = 'https://www.bilansgratuits.fr/secteurs/finance-assurance,k.html'

results = requests.get(url)

#time.sleep(20)

soup = BeautifulSoup(results.text, "html.parser")

# Relative links to each sector page, plus the matching sector labels
links = [a['href'] for a in soup.find("div", {"class": "listeEntreprises"}).find_all('a', href=True)]
secteur_labels = [a.text for a in soup.find("div", {"class": "listeEntreprises"}).find_all('a', href=True)]


secteurs = []
URLS = []
names = []

root_url = 'https://www.bilansgratuits.fr/'
urls = [ '{root}{i}'.format(root=root_url, i=i) for i in links ]

# Walk the first three sector pages, keeping each page's URL and sector label
for url, secteur in zip(urls[:3], secteur_labels[:3]):

    results = requests.get(url)

    soup = BeautifulSoup(results.text, "html.parser")

    try:
        name = [a.text for a in soup.find("div", {"class": "donnees"}).find_all('a', href=True)]
    except AttributeError:
        # Fallback for pages without a "donnees" block
        name = [a.text for a in soup.find("div", {"class": "listeEntreprises"}).find_all('a', href=True)]

    # Repeat the page URL and sector label once per scraped name
    URLS.extend([url] * len(name))
    secteurs.extend([secteur] * len(name))

    names.append(name)

# Drop whitespace-only entries and split each name into tokens
rx = re.compile(r'^\s+$')
for i in range(3):
    names[i] = [item.split() for item in names[i] if not rx.match(item)]



# Flatten: one joined name string per company
res = []
for sublist in names:
    for tokens in sublist:
        res.append(' '.join(tokens))

data = pd.DataFrame({
    'names' : res,
    'URL' : URLS,
    'Secteur' : secteurs
    })


data.to_csv('dftest.csv', sep=';', index=False, encoding = 'utf_8_sig')

I edited my post; sorry, I thought my explanation was clear :) Yes, something like that, but I wonder whether it is possible with my code? Thanks anyway :) Maybe I am asking for too much? You have already done a lot. But I have a solution based on my approach; I found it and I will update it.

Pro tip: don't use so many blank lines in your code. Read more code-style tips.

Thanks! :) One more thing: do you know how to add the URL on each row, as in my code?

Well, you need to collect those links first, then add them to your df.

Yes, but when I add them inside my loop, I only get one of each. I had a shape problem, which I found, but now I would like the sector name on each row, such as "6411Z - Activités de banque centrale". (I updated the answer with the URL.)
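For completeness, a minimal sketch of the pairing discussed in these comments, using hypothetical stand-in values for a single page (the names, URL, and sector label below are placeholders, not real scraped data):

import pandas as pd

# Hypothetical values standing in for one iteration of the scraping loop
url = 'https://www.bilansgratuits.fr/...'  # page URL (placeholder)
secteur = '6411Z - Activités de banque centrale'  # sector label from that page
name = ['COMPANY A', 'COMPANY B']  # names scraped from that page (placeholders)

# Repeat the page-level URL and sector label once per company so that
# every DataFrame row carries the name, its source URL, and its sector
rows = [(n, url, secteur) for n in name]
df = pd.DataFrame(rows, columns=['names', 'URL', 'Secteur'])
print(df)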