Python web-scraping journal articles - individual co-author data

python, pandas, beautifulsoup

I am scraping articles published in The Milbank Quarterly. I am particularly interested in data on the authors and their institutional affiliations. I have written code using the beautifulsoup and pandas libraries to save the output as a CSV. The CSV contains one row per article, which means that for articles with multiple authors, the "author" column contains all of the authors and the "institution" column contains all of the co-authors' institutions. Instead, I would like the output CSV to have one row per author - in other words, multiple rows per article. This is because I ultimately want to count how many times each institution is represented in the journal.

I used beautifulsoup's .find_all method to get all of my data. Initially, I tried using .find_all_next to get the authors and institutions, thinking that would accommodate articles with multiple authors, but those columns returned nothing.

What is the best way for me to rewrite this code so that each author gets his or her own row?

import pandas as pd
import numpy as np
import requests
import re
import urllib
from bs4 import BeautifulSoup
from bs4 import SoupStrainer

articletype=list()
articlelist=list()
titlelist=list()
vollist=list()
issuenumlist=list()
authorlist = list()
instlist = list()
urllist=list()

issueurllist = ['https://onlinelibrary.wiley.com/toc/14680009/2018/96/1', 'https://onlinelibrary.wiley.com/toc/14680009/2018/96/2','https://onlinelibrary.wiley.com/toc/14680009/2018/96/3','https://onlinelibrary.wiley.com/toc/14680009/2018/96/4']

for issue in issueurllist:
    requrl = requests.get(issue)
    soup = BeautifulSoup(requrl.text, 'lxml')

    #Open url of each article.

    baseurl = 'https://onlinelibrary.wiley.com'

    doi = [a.get('href') for a in soup.find_all('a', title="Full text")]

    for d in doi:
        doilink = baseurl + d
        opendoi = requests.get(doilink)
        articlesoup = BeautifulSoup(opendoi.text, 'lxml')

        '''Get metadata for each article'''
        arttype = articlesoup.find_all("span", {"class": "primary-heading"})
        title = articlesoup.find_all("meta", {"name": "citation_title"})
        vol = articlesoup.find_all("meta", {"name": "citation_volume"})
        issuenum = articlesoup.find_all("meta", {"name": "citation_issue"})
        author = articlesoup.find_all("meta", {"name": "citation_author"})
        institution = articlesoup.find_all("meta", {"name": "citation_author_institution"})
        url = articlesoup.find_all("meta", {"name": "citation_fulltext_html_url"})

        articletype.append(arttype)
        titlelist.append(title)
        vollist.append(vol)
        issuenumlist.append(issuenum)
        authorlist.append(author)
        instlist.append(institution)
        urllist.append(url)

milbankdict = {'article type': articletype, 'title': titlelist, 'vol': vollist, 'issue': issuenumlist, 'author': authorlist, 'author institution': instlist, 'url': urllist}
milbank2018 = pd.DataFrame(milbankdict)
milbank2018.to_csv('milbank2018.csv')
print("saved")
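(For the eventual counting step, once each author has a row, something like this pandas sketch is what I have in mind - it assumes the 'author institution' column name from the code above:)

import pandas as pd

df = pd.read_csv('milbank2018.csv')
# One row per author means value_counts tallies institutions directly.
print(df['author institution'].value_counts())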
The find_all() method always returns a list; here I use find() instead and, as you can see, verify that the tag object is not None. That is an important check, because some articles do not contain a given meta attribute, and find() then returns None. You also do not need a separate list for every meta attribute; you can manage them all with a dictionary. Here I format the data for each article together with all of its associated meta attributes.

strip() is Python's built-in method for removing all leading and trailing whitespace from a string.

import requests
from bs4 import BeautifulSoup
import pandas as pd

issueurllist = ['https://onlinelibrary.wiley.com/toc/14680009/2018/96/1',
                'https://onlinelibrary.wiley.com/toc/14680009/2018/96/2',
                'https://onlinelibrary.wiley.com/toc/14680009/2018/96/3',
                'https://onlinelibrary.wiley.com/toc/14680009/2018/96/4'
                ]

base_url = 'https://onlinelibrary.wiley.com'

json_data = []

for issue in issueurllist:
    response1 = requests.get(issue)
    soup1 = BeautifulSoup(response1.text, 'lxml')

    # Collect the DOI links for every article in this issue.
    doi = [a.get('href') for a in soup1.find_all('a', title="Full text")]

    for i in doi:
        article_dict = {"article":"NaN","title":"NaN","vol":"NaN","issue":"NaN","author":"NaN","institution":"NaN","url":"NaN"}
        article_url = base_url + i
        response2 = requests.get(article_url)
        soup2=BeautifulSoup(response2.text, 'lxml')

        '''Get metadata for each article'''

        article = soup2.find("span", {"class":"primary-heading"})
        title = soup2.find("meta",{"name":"citation_title"})
        vol = soup2.find("meta",{"name":"citation_volume"})
        issue  = soup2.find("meta",{"name":"citation_issue"})
        author  = soup2.find("meta",{"name":"citation_author"})
        institution = soup2.find("meta",{"name":"citation_author_institution"})
        url = soup2.find("meta",{"name":"citation_fulltext_html_url"})

        if article is not None:
            article_dict['article']= article.text.strip()

        if title is not None:
            article_dict['title']= title['content'].strip()

        if vol is not None:
            article_dict['vol']= vol['content'].strip()

        if issue is not None:
            article_dict['issue']= issue['content'].strip()

        if author is not None:
            article_dict['author']= author['content'].strip()

        if institution is not None:
            article_dict['institution']= institution['content'].strip()

        if url is not None:
            article_dict['url']= url['content'].strip()

        json_data.append(article_dict)

df=pd.DataFrame(json_data)
df.to_csv('milbank2018.csv')
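Note that soup2.find("meta", {"name": "citation_author"}) returns only the first matching tag, so the answer above still yields one row per article. A minimal per-author sketch (not part of the original answer) that slots into the `for i in doi:` loop in place of the single author/institution handling - it assumes each citation_author meta tag is paired with exactly one citation_author_institution tag in document order, which may misalign for authors with several or no listed affiliations:

# Replaces the single-author block inside the `for i in doi:` loop above.
authors = soup2.find_all("meta", {"name": "citation_author"})
institutions = soup2.find_all("meta", {"name": "citation_author_institution"})

# Assumption: the two tag lists pair up one-to-one in document order.
for author_tag, inst_tag in zip(authors, institutions):
    row = dict(article_dict)  # copy the shared article-level metadata
    row['author'] = author_tag['content'].strip()
    row['institution'] = inst_tag['content'].strip()
    json_data.append(row)

With rows appended this way, the final DataFrame has one row per author, and the per-institution counts fall out of a simple value_counts or groupby.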

find_all() gives a list, so you can use a for-loop to process each element separately. You can also use zip() to work with several lists at the same time - i.e.

for name, address in zip(author, url):
    print(name, address)
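Applied to the article pages above, that might look like this (a sketch; soup2 is the per-article soup from the first answer, and the pairwise zip() is an assumption about how the tags are ordered):

authors = [m['content'] for m in soup2.find_all("meta", {"name": "citation_author"})]
insts = [m['content'] for m in soup2.find_all("meta", {"name": "citation_author_institution"})]
for name, inst in zip(authors, insts):
    print(name, '-', inst)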
Your code only retrieves the first author; co-authors do not get their own rows. I am trying to make it so that, if an article has multiple authors, each author gets his or her own row in the output. For articles with multiple authors, there are multiple meta tags with the attribute name="citation_author". @C.K. Take an article URL with multiple authors, open it, and look at the meta tags. @QHarr provided a good example; please refer to that answer.
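To verify the tag structure described here, one can dump every author-related meta tag from a multi-author article page - a short diagnostic sketch (soup2 as in the first answer); the printed order also shows how author and institution tags interleave:

for m in soup2.find_all("meta"):
    name = m.get("name", "")
    # Matches both citation_author and citation_author_institution tags.
    if name.startswith("citation_author"):
        print(name, "=", m.get("content"))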