Python 3.x: scraping Coursera via the API returns no more than 100 courses


This is the curl command I used -->

I used the query parameters `start` and `limit`, but it just keeps returning the same 100 of the 2150 courses.

Python code:

    import requests
    import json
    import csv
    from bs4 import BeautifulSoup


    if __name__ == "__main__":
        headers = {
            "x-user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                            "(KHTML, like Gecko) Chrome/53.0.2785.92 Safari/537.36 "
                            "FKUA/website/41/website/Desktop"
        }

        with open('result.json', 'r') as d:
            data = json.load(d)

        def clean(value):
            # Flatten lists into space-separated strings and strip the
            # commas and newlines that would break a comma-separated row.
            if isinstance(value, list):
                value = ' '.join(str(v) for v in value)
            value = str(value).replace(',', '').replace('\n', ' ').strip()
            return value or ' '

        with open("coursera.csv", 'a') as f:
            # Write the header once, then comment this line out.
            f.write('instructorIds,courseType,name,partnerIds,'
                    'slug,specializations,course_id,description\n')

            for element in data['elements']:
                row = [
                    clean(element['instructorIds']),
                    clean(element['courseType']),
                    clean(element['name']),
                    clean(element['partnerIds']),
                    clean(element['slug']),
                    clean(element['specializations']),
                    clean(element['id']),
                    clean(element['description']),
                ]
                print(row)
                f.write(','.join(row) + '\n')

Please suggest a way I can scrape all of the courses.
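As a side note on the CSV writing: rather than stripping commas and newlines out of every field, the stdlib `csv` module will quote them for you, so the data survives intact. A minimal sketch, using a made-up `element` dict in the shape the `courses.v1` API appears to return:

```python
import csv
import io

# Hypothetical sample element (field names taken from the question's code).
element = {
    "id": "abc123",
    "courseType": "v2.ondemand",
    "name": "Machine Learning",
    "slug": "machine-learning",
    "instructorIds": ["226710"],
    "partnerIds": ["1"],
    "description": "Covers regression, classification, and clustering.",
}

buf = io.StringIO()  # stand-in for open("coursera.csv", "a", newline="")
writer = csv.writer(buf)
writer.writerow(["instructorIds", "courseType", "name", "partnerIds",
                 "slug", "course_id", "description"])
writer.writerow([
    " ".join(element.get("instructorIds", [])),  # join ids instead of str([...])
    element.get("courseType", ""),
    element.get("name", ""),
    " ".join(element.get("partnerIds", [])),
    element.get("slug", ""),
    element.get("id", ""),
    # csv.writer quotes embedded commas, so no replace() calls are needed.
    element.get("description", ""),
])
print(buf.getvalue())
```

Because the writer handles quoting, a description containing commas round-trips through `csv.reader` unchanged.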

Couldn't your scraper work from the sitemap files instead? Coursera has a dedicated sitemap that lists the pages of all its courses.

If not, it should be pretty trivial to crawl with StormCrawler.
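The sitemap approach above can be sketched in plain Python with the stdlib XML parser. The sample document below is a placeholder, not Coursera's real sitemap; in practice you would download the sitemap XML first (e.g. with `requests.get(...).text`) and feed it to the same function:

```python
import xml.etree.ElementTree as ET

# Placeholder sitemap document for illustration only.
SAMPLE_SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.coursera.org/learn/machine-learning</loc></url>
  <url><loc>https://www.coursera.org/learn/python</loc></url>
</urlset>"""

def course_urls_from_sitemap(xml_text):
    # Pull every <loc> entry out of a sitemap document.
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", ns)]

print(course_urls_from_sitemap(SAMPLE_SITEMAP))
```

Each extracted URL can then be fetched and parsed with BeautifulSoup, which the question already imports.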

If you set `limit` to 2150, you can get all the results in a single request. For example:

url = "https://api.coursera.org/api/courses.v1?start=0&limit=2150&includes=instructorIds,partnerIds,specializations,s12nlds,v1Details,v2Details&fields=instructorIds,partnerIds,specializations,s12nlds,description"
data = requests.get(url).json()
print(len(data['elements']))
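Should the API ever cap `limit` below the total count, the same endpoint can also be paged with `start`. A sketch of that loop, using stdlib `urllib` in place of `requests` and factoring the page fetch out so the paging logic is reusable (the trimmed-down query string here is illustrative):

```python
import json
from urllib.request import urlopen

API = "https://api.coursera.org/api/courses.v1"

def fetch_page(start, limit=100):
    # One page of results from the courses.v1 endpoint (network call).
    url = f"{API}?start={start}&limit={limit}"
    with urlopen(url) as resp:
        return json.load(resp).get("elements", [])

def all_courses(fetch=fetch_page, limit=100):
    # Walk start = 0, 100, 200, ... until a short page signals the end.
    courses, start = [], 0
    while True:
        page = fetch(start, limit)
        courses.extend(page)
        if len(page) < limit:
            return courses
        start += limit
```

Passing any `fetch(start, limit)` callable lets you test the loop without hitting the network.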

It might help if you add more details about your implementation and the desired output.

Sure, thanks. I want to scrape all of the courses from Coursera using their API, so I ran a curl command against the API to fetch the JSON, and it returns 100 courses by default. Hope that helps; let me know if you need anything more specific.

Thanks, that did it! Though I don't know why setting limit to 2150 in the curl command didn't work.

If you check the URL in your curl command, you'll notice there is a stray character right after the `limit` parameter; if you replace it with `&`, I think the curl version will work too.