The New York Times API with R


I am trying to get article information using the New York Times API. The csv file I get does not reflect my filter query. For example, I restricted the source to The New York Times, but the file I get also contains other sources. I would like to ask why the filter query is not working.

Here is the code:

if (!require("jsonlite")) install.packages("jsonlite")
library(jsonlite)

api = "apikey"

nytime = function () {
  url = paste('http://api.nytimes.com/svc/search/v2/articlesearch.json?',
              '&fq=source:',("The New York Times"),'AND type_of_material:',("News"),
              'AND persons:',("Trump, Donald J"),
              '&begin_date=','20160522&end_date=','20161107&api-key=',api,sep="")
  #get the total number of search results
  initialsearch = fromJSON(url,flatten = T)
  maxPages = round((initialsearch$response$meta$hits / 10)-1)

  #try with the max page limit at 10
  maxPages = ifelse(maxPages >= 10, 10, maxPages)

  #create an empty data frame
  df = data.frame(id=as.numeric(),source=character(),type_of_material=character(),
                  web_url=character())

  #save search results into data frame
  for(i in 0:maxPages){
    #get the search results of each page
    nytSearch = fromJSON(paste0(url, "&page=", i), flatten = T) 
    temp = data.frame(id=1:nrow(nytSearch$response$docs),
                      source = nytSearch$response$docs$source, 
                      type_of_material = nytSearch$response$docs$type_of_material,
                      web_url=nytSearch$response$docs$web_url)
    df=rbind(df,temp)
    Sys.sleep(5) #sleep for 5 seconds between requests
  }
  return(df)
}

dt = nytime()
write.csv(dt, "trump.csv")
This is the csv file I get.

It seems you need to put the parentheses inside the quotes, not outside them. Like this:

  url = paste('http://api.nytimes.com/svc/search/v2/articlesearch.json?',
              '&fq=source:',"(The New York Times)",'AND type_of_material:',"(News)",
              'AND persons:',"(Trump, Donald J)",
              '&begin_date=','20160522&end_date=','20161107&api-key=',api,sep="")
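Beyond moving the parentheses inside the quotes, note that the `fq` value contains spaces, quotes, and parentheses, which are not safe to send in a URL as-is. A minimal sketch (not from the original answer) of percent-encoding the filter with base R's `URLencode()` before the request, where `"apikey"` is still a placeholder:

```r
# Placeholder API key, as in the question.
api <- "apikey"

# Build the fq filter with parentheses and quotes inside one string,
# then percent-encode it so spaces, quotes, and parens survive the request.
fq <- 'source:("The New York Times") AND type_of_material:("News") AND persons:("Trump, Donald J")'

url <- paste0(
  "http://api.nytimes.com/svc/search/v2/articlesearch.json",
  "?fq=", URLencode(fq, reserved = TRUE),
  "&begin_date=20160522&end_date=20161107",
  "&api-key=", api
)

# With a valid key, the request can then be made as before:
# nytSearch <- jsonlite::fromJSON(url, flatten = TRUE)
```

`reserved = TRUE` tells `URLencode()` to also encode reserved characters such as spaces, `"`, `(`, and `)`, which would otherwise be passed through unchanged.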