
R: scraping complete data from all pages of a website

Tags: r, web-scraping, rvest

With this code I can get the data from the first page of the website, but I want the data from the whole database, i.e. from every page. After extraction the data should be saved to an Excel or CSV file.

install.packages("rvest")
library(rvest)
install.packages("dplyr")
library(dplyr)

pg<-read_html("https://bidplus.gem.gov.in/bidresultlists?bidresultlists&page_no=i")

#pg <- read_html("https://bidplus.gem.gov.in/bidresultlists")

blocks <- html_nodes(pg, ".block")

items_and_quantity <- html_nodes(blocks, xpath=".//div[@class='col-block' and contains(., 'Item(s)')]")

items <- html_nodes(items_and_quantity, xpath=".//strong[contains(., 'Item(s)')]/following-sibling::span") %>% html_text(trim=TRUE)
quantity <- html_nodes(items_and_quantity, xpath=".//strong[contains(., 'Quantity')]/following-sibling::span") %>% html_text(trim=TRUE) %>% as.numeric()

department_name_and_address <- html_nodes(blocks, xpath=".//div[@class='col-block' and contains(., 'Department Name And Address')]") %>% 
  html_text(trim=TRUE) %>% 
  gsub("\n", "|", .) %>% 
  gsub("[[:space:]]*\\||\\|[[:space:]]*", "|", .)

block_header <- html_nodes(blocks, "div.block_header")

html_nodes(block_header, xpath=".//p[contains(@class, 'bid_no')]") %>%
  html_text(trim=TRUE) %>% 
  gsub("^.*: ", "", .) -> bid_no

html_nodes(block_header, xpath=".//p/b[contains(., 'Status')]/following-sibling::span") %>% 
  html_text(trim=TRUE) -> status

html_nodes(blocks, xpath=".//strong[contains(., 'Start Date')]/following-sibling::span") %>%
  html_text(trim=TRUE) -> start_date

html_nodes(blocks, xpath=".//strong[contains(., 'End Date')]/following-sibling::span") %>%
  html_text(trim=TRUE) -> end_date

data.frame(
  bid_no,
  status,
  start_date,
  end_date,
  items,
  quantity,
  department_name_and_address,
  stringsAsFactors=FALSE
) -> xdf

xdf$is_ra <- grepl("/RA/", bid_no)  # flag reverse-auction ("/RA/") bid numbers

str(xdf)
## 'data.frame': 10 obs. of  8 variables:
##  $ bid_no                     : chr  "GEM/2018/B/93066" "GEM/2018/B/93082" "GEM/2018/B/93105" "GEM/2018/B/93999" ...
##  $ status                     : chr  "Not Evaluated" "Not Evaluated" "Not Evaluated" "Not Evaluated" ...
##  $ start_date                 : chr  "25-09-2018 03:53:pm" "27-09-2018 09:16:am" "25-09-2018 05:08:pm" "26-09-2018 05:21:pm" ...
##  $ end_date                   : chr  "18-10-2018 03:00:pm" "18-10-2018 03:00:pm" "18-10-2018 03:00:pm" "18-10-2018 03:00:pm" ...
##  $ items                      : chr  "automotive chassis fitted with engine" "automotive chassis fitted with engine" "automotive chassis fitted with engine" "Storage System" ...
##  $ quantity                   : num  1 1 1 2 90 1 981 6 4 376
##  $ department_name_and_address: chr  "Department Name And Address:||Ministry Of Steel Na Kirandul Complex N/a" "Department Name And Address:||Ministry Of Steel Na Kirandul Complex N/a" "Department Name And Address:||Ministry Of Steel Na Kirandul Complex N/a" "Department Name And Address:||Maharashtra Energy Department Maharashtra Bhusawal Tps N/a" ...
##  $ is_ra                      : logi  FALSE FALSE FALSE FALSE FALSE FALSE ...
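The date columns come back as plain character strings like "25-09-2018 03:53:pm". If real date-times are needed, they can be parsed; a sketch using lubridate (an assumption on my part — any parser that understands the trailing am/pm suffix would do; start_dt and end_dt are hypothetical column names):

library(lubridate)
# parse "25-09-2018 03:53:pm" style strings into POSIXct
xdf$start_dt <- parse_date_time(xdf$start_date, orders = "dmY IMp")
xdf$end_dt   <- parse_date_time(xdf$end_date,   orders = "dmY IMp")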

xdf
write.csv(xdf, "xdf1.csv")

# write.csv() ignores append = TRUE (it only issues a warning), so the second
# call below overwrites rather than appends:
write.csv(xdf, "xdf.csv")
write.csv(xdf, "xdf.csv", append = TRUE)
?write.csv

# write.table() does honour append; note the object must be xdf — xdf1 is
# never defined:
write.table(xdf,
            file = "xdf.csv",
            append = TRUE,
            sep = ",",
            row.names = FALSE,
            col.names = FALSE)
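When appending page by page like this, the header should only be written for the first chunk; a minimal sketch, assuming the file may or may not exist yet:

# append one page's rows; write the header only if the file is new
write.table(xdf, file = "xdf.csv",
            append = file.exists("xdf.csv"),
            sep = ",", row.names = FALSE,
            col.names = !file.exists("xdf.csv"))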
Try this:

library(rvest)
library(tidyverse)

pg<-read_html("https://bidplus.gem.gov.in/bidresultlists?bidresultlists&page_no=1")

##Find total number of pages

page_num<-pg%>%
  html_nodes(".pagination")%>%
  html_nodes("li")%>%
  html_nodes("a")%>%
  .[5]%>%
  html_attrs()%>%
  unlist()%>%
  parse_number()%>%unique()

# function that scrapes one results page and returns it as a data frame
scr <- function(i){
  pg <- read_html(paste0("https://bidplus.gem.gov.in/bidresultlists?bidresultlists&page_no=", i))
  blocks <- html_nodes(pg, ".block")

  items_and_quantity <- html_nodes(blocks, xpath=".//div[@class='col-block' and contains(., 'Item(s)')]")

  items <- html_nodes(items_and_quantity, xpath=".//strong[contains(., 'Item(s)')]/following-sibling::span") %>% html_text(trim=TRUE)
  quantity <- html_nodes(items_and_quantity, xpath=".//strong[contains(., 'Quantity')]/following-sibling::span") %>% html_text(trim=TRUE) %>% as.numeric()

  department_name_and_address <- html_nodes(blocks, xpath=".//div[@class='col-block' and contains(., 'Department Name And Address')]") %>% 
    html_text(trim=TRUE) %>% 
    gsub("\n", "|", .) %>% 
    gsub("[[:space:]]*\\||\\|[[:space:]]*", "|", .)

  block_header <- html_nodes(blocks, "div.block_header")

  html_nodes(block_header, xpath=".//p[contains(@class, 'bid_no')]") %>%
    html_text(trim=TRUE) %>% 
    gsub("^.*: ", "", .) -> bid_no

  html_nodes(block_header, xpath=".//p/b[contains(., 'Status')]/following-sibling::span") %>% 
    html_text(trim=TRUE) -> status

  html_nodes(blocks, xpath=".//strong[contains(., 'Start Date')]/following-sibling::span") %>%
    html_text(trim=TRUE) -> start_date

  html_nodes(blocks, xpath=".//strong[contains(., 'End Date')]/following-sibling::span") %>%
    html_text(trim=TRUE) -> end_date

  data.frame(
    bid_no,
    status,
    start_date,
    end_date,
    items,
    quantity,
    department_name_and_address,
    stringsAsFactors=FALSE
  ) -> xdf
  xdf$is_ra <- grepl("/RA/", bid_no)
  return(xdf)
}
# run the scraper for each page and row-bind the results into one data frame
res <- 1:page_num %>%
  map_df(., scr)
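For a site with many pages it may be worth pausing between requests and tolerating an occasional failed page. A sketch (my addition, not part of the original answer) using purrr::possibly(), which also writes the combined result to CSV as the question asked; the file name all_pages.csv is arbitrary:

# defensive variant: pause between requests, skip pages that error out
scr_safe <- possibly(scr, otherwise = NULL)
res <- map_df(1:page_num, function(i) {
  Sys.sleep(1)  # be polite to the server
  scr_safe(i)
})
write.csv(res, "all_pages.csv", row.names = FALSE)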

# for example, just the first two pages:
1:2 %>%
  map_df(., scr) %>%
  head(5)
            bid_no               status          start_date            end_date                                                   items quantity
1 GEM/2018/B/94492        Not Evaluated 02-10-2018 10:42:am 22-10-2018 01:00:pm door frame metal detector dfmd  security metal detector        1
2 GEM/2018/B/95678        Not Evaluated 29-09-2018 11:01:am 22-10-2018 01:00:pm                                         Foolscap sheets      100
3 GEM/2018/B/96187        Not Evaluated 01-10-2018 10:29:am 22-10-2018 01:00:pm                               OEM Cartridge/ Consumable       20
4 GEM/2018/B/96196        Not Evaluated 01-10-2018 10:48:am 22-10-2018 01:00:pm                               OEM Cartridge/ Consumable       20
5 GEM/2018/B/96722 Technical Evaluation 01-10-2018 05:26:pm 22-10-2018 01:00:pm        Special Purpose Telephones(smart phone for ICDS)    33914
                                                                                          department_name_and_address is_ra
1 Department Name And Address:||Ministry Of Shipping Na Electronics Directorate General Of Lighthouses And Lightships FALSE
2                            Department Name And Address:||Ministry Of Defence Department Of Defence Cweafborjhar N/a FALSE
3                            Department Name And Address:||Ministry Of Defence Department Of Defence Cweafborjhar N/a FALSE
4                            Department Name And Address:||Ministry Of Defence Department Of Defence Cweafborjhar N/a FALSE
5                                 Department Name And Address:||Bihar Social Welfare Department Bihar Procurement N/a FALSE

Two questions. (1) You need a list or vector of URLs in order to crawl — do you already have one, or do you need to generate it? (2) Should the final result go into one table, or into a separate table per page?

No, I want the results in one table.

Other users flagged your question as low quality and in need of improvement. I have reworded/reformatted your input to make it easier to read and understand. Please check my changes to make sure they reflect your intent, and feel free to leave me a comment if you have further questions or feedback.

No, you only have to enter the first page; once the script has found the number of pages it loops over all of them.

It looks like your scraping code isn't ideal — I just put it together. Strange; two minutes ago I ran res <- 1:15 %>% map_df(., scr) with this script and got 15 pages and 150 records.