
Passing a date parameter into a REST API call using R

Tags: r, url-encoding, jsonlite


I'm trying to pull some data from a REST API but can't get a date parameter to pass into the query string correctly. Using sprintf I've successfully passed the search term and the website, but I've had no luck with dates.

Is the problem with the API?

Function to grab data by one search term and one website

library(httr)
library(jsonlite)

get_newsriver_content <- function(searcht, website, api_key){
  # %%3A and %%20 are literal percent-escapes for ':' and ' ' once sprintf runs
  url <- sprintf('https://api.newsriver.io/v2/search?query=text%%3A%s%%20OR%%20website.domainName%%3A%s%%20OR%%20language%%3AEN&sortBy=_score&sortOrder=DESC&limit=100', searcht, website)
  news_get <- GET(url, add_headers(Authorization = paste(api_key, sep = "")))
  news_txt <- content(news_get, as = "text", encoding = "UTF-8")
  news_df  <- fromJSON(news_txt)
  news_df$discoverDate <- as.Date(news_df$discoverDate)
  news_df
}
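
A hypothetical call might look like the following; the search term, domain, and key are placeholders for illustration, not values from the original post:

# Hypothetical usage - "Canada", "example.com", and "mykey" are placeholders
api_key <- "mykey"
canada_news <- get_newsriver_content("Canada", "example.com", api_key)
head(canada_news$discoverDate)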

Here is how I solved the problem.

It was really a two-step problem:

  • Working out how to correctly encode the query before inserting it into the curl call
  • Writing a function that makes the API call for each date in a vector of dates and binds the results into a data frame. Here is how I did it:

    library(tidyverse)
    library(jsonlite)
    library(urltools)
    library(httr)

    # Function for pulling by date
    get_newsriver_bydate <- function(query, date_v){

      # Being kind to the free API - shout-out to Elia at Newsriver, who has been ever patient
      pb$tick()$print()
      Sys.sleep(sample(seq(0.5, 2.5, 0.5), 1))

      # This is where I used the urltools package, as suggested by quartin
      url_base <- "https://api.newsriver.io/v2/search"
      create_curl_call <- url_base %>%
        param_set("query", url_encode(query)) %>%
        param_set("sortBy", "_score") %>%
        param_set("sortOrder", "DESC") %>%
        param_set("limit", "100")

      # I had most of this before, but I changed my output to a tibble,
      # which is more versatile to work with
      get_curl <- GET(create_curl_call, add_headers(Authorization = paste(api_key, sep = "")))
      curl_to_json <- content(get_curl, as = "text", encoding = "UTF-8")
      news_df <- fromJSON(curl_to_json, flatten = TRUE)
      news_df$discoverDate <- as.Date(news_df$discoverDate)
      as_tibble(news_df)
    }

    # Set configuration and API key
    set_config(config(ssl_verifypeer = 0L))
    api_key <- "mykey"

    # Set my vector of dates
    dates1 <- seq(as.Date("2017-09-01"), as.Date("2017-10-01"), by = "days")

    # Set up my progress bar
    pb <- progress_estimated(length(dates1))

    # sprintf my query into a vector of queries based on date
    query <- sprintf('text:"Canada" AND text:"Rocks" AND language:EN AND discoverDate:[%s TO %s]', dates1, dates1)

    # Run the queries and be patient
    news_df <- map_df(query, get_newsriver_bydate, .id = "query")
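
To make the encoding step concrete, here is roughly what the first element of the query vector looks like and how it gets escaped; the exact percent-escaping is whatever urltools::url_encode produces:

    # First query built by the sprintf() call above:
    query[1]
    # 'text:"Canada" AND text:"Rocks" AND language:EN AND discoverDate:[2017-09-01 TO 2017-09-01]'

    # url_encode() percent-escapes reserved characters (e.g. ':' -> %3A,
    # '"' -> %22, ' ' -> %20) before param_set() splices the value into the URL
    url_encode(query[1])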
    
    I don't see what the problem is here. If you look at the documentation, you can query by text, title, website name, and language, but not by discoverDate (which is only available for sorting results). If I may make another suggestion, take a look at the urltools package, in particular the param_set function. You can build the query in a cleaner way:

    url_base %>% param_set("query", "...") %>% param_set("sortBy", "_score") %>% param_set("sortOrder", "DESC") %>% param_set("limit", "100")
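
    A minimal, self-contained sketch of that suggestion; the query value here is illustrative, and magrittr is attached only for the pipe:

    library(urltools)
    library(magrittr)  # for %>%

    url_base <- "https://api.newsriver.io/v2/search"
    url_base %>%
      param_set("query", url_encode('text:"Canada" AND language:EN')) %>%
      param_set("sortBy", "_score") %>%
      param_set("sortOrder", "DESC") %>%
      param_set("limit", "100")
    # returns the base URL with the four key=value pairs appended;
    # the query value is percent-encoded by url_encode() first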
    @quartin Great suggestion. I also got help from the creator of the API to finish the URL encoding. I'll post the answer shortly. @quartin Updated.