R: How do I scrape the options from a dropdown list and store them in a table?

Tags: r, web-scraping, rvest

I am trying to build an interactive dashboard with some analysis based on cars. I want the user to be able to pick a car make, e.g. BMW, Audi, etc., and based on that choice be offered only the models of that make (BMW/Audi models and so on). My problem is that once a make has been chosen, I cannot scrape the models belonging to it. The pages I want to scrape from: main page -> example sub-page for one make ->

I have already tried scraping all of the options, so that later I could somehow clean the data and keep only the models.

Code (the problem is that it only scrapes the makes together with the other options available on the page, such as engine types, whereas after inspecting the element I can clearly see the model types):

library(rvest)  # read_html(), html_nodes(), html_text()
library(tm)     # removeNumbers()

otomoto <- "https://www.otomoto.pl/osobowe/"

# Grab the text of every <option> element on the search page
brands <- read_html(otomoto) %>%
  html_nodes("option") %>%
  html_text()

brands <- data.frame(brands)

# Find the row holding the "Marka pojazdu" placeholder; the makes start right after it
for (i in 1:nrow(brands)){
  no_marka_pojazdu <- i
    if(brands[i,1] == "Marka pojazdu"){
      break
    }
}
no_marka_pojazdu <- no_marka_pojazdu + 1

# Find the last make in the list ("Żuk")
for (i in 1:nrow(brands)){
  zuk <- i
  if(substr(brands[i,1],1,3) == "Żuk"){
    break
  }
}

# Keep only that slice and strip the listing counts appended to each name
# (despite the name, Modele_pojazdow ends up holding the makes, not the models)
Modele_pojazdow <- as.character(brands[no_marka_pojazdu:zuk,1])
Modele_pojazdow <- removeNumbers(Modele_pojazdow)
Modele_pojazdow <- substr(Modele_pojazdow,1,nchar(Modele_pojazdow)-2)
Modele_pojazdow <- data.frame(Modele_pojazdow)
The code above only picks up the car makes supported on the page and stores them in a data frame. With that I can build the html links and point everything at a selected make.

I would like to have an object similar to Modele_pojazdow, but with the models restricted to the previously selected car make.


The dropdown list with the models shows up as the white box with the text Model pojazdu, next to the Audi box on the right-hand side.

Some may frown on Python as the language for the solution, but the intention is to give some pointers on the high-level process. It has been a long time since I wrote any R, so Python was quicker for me.

Edit: an R script has now been added further down.

Overview:

The options in the first dropdown can be grabbed from the value attribute of each node returned by the css selector #param571 option. This uses an id to target the parent dropdown select element, and then option within it to specify the child option tag elements. The html to apply this selector combination to can be retrieved with an xhr request to the url you supplied initially. You want a list of nodes returned to iterate over; similar to applying the selector with js document.querySelectorAll.
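
For reference, a minimal rvest sketch of that first step (the #param571 selector is the one used in the Python demo further down; treat this as an illustration rather than tested code):

library(rvest)

page <- read_html("https://www.otomoto.pl/osobowe/")

# "#param571 option" targets the make <select> by id, then its child <option> tags;
# html_attr("value") pulls the value attribute of every matched node
makes <- page %>%
  html_nodes("#param571 option") %>%
  html_attr("value")

makes <- makes[!is.na(makes) & makes != ""]  # drop the empty placeholder entry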

The page uses an ajax POST request to update the second dropdown based on your first dropdown choice. Your first dropdown choice determines the value of the parameter search[filter_enum_make], which is used in the POST request sent to the server. The subsequent response contains a list of the available options (it includes some case-variant duplicates that can be dropped).
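
A rough httr sketch of that POST (the endpoint and parameter names are those used in the Python demo further down; the live site may still reject the request without suitable headers, so take this as an outline only):

library(httr)

resp <- POST(
  "https://www.otomoto.pl/ajax/search/list/",
  body = list(
    `search[filter_enum_make]` = "alfa-romeo",  # value taken from the first dropdown
    `search[dist]`             = "5",
    `search[category_id]`      = "29"
  ),
  encode = "form",
  add_headers(`User-Agent` = "Mozilla/5.0")  # some browser-like header may be needed
)
resp_text <- content(resp, "text")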

I captured the POST request by using Fiddler. This showed the request headers and the params passed in the request body. An example screenshot is given near the end.

The simplest way, IMO, to extract the options from the response text is to regex out the appropriate string. I generally wouldn't recommend regex for working with html, but in this case it serves us well. If you don't want to use regex, you can grab the relevant info from the data-facets attribute of the element whose id is body-container. For the non-regex version you need to handle the unquoted null and retrieve the inner dictionary whose key is filter_enum_model. Further down I show a re-written function to handle this.
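
In R the regex route could look like this (a sketch, assuming resp_text from the httr sketch above):

library(stringr)

# Same idea as the regex in the Python demo: capture everything between
# "filter_enum_model": and ,"new_used" (non-greedy here, dotall to span newlines)
m <- str_match(resp_text, regex('"filter_enum_model":(.*?),"new_used"', dotall = TRUE))
models_json <- m[, 2]  # the captured group: a JSON object of model names and counts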

The retrieved string is a string representation of a dictionary. This needs to be converted to an actual dictionary object, from which the option values can then be extracted. Edit: as R doesn't have a dictionary object, a similar structure needs to be found. I will look at this when converting.
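
A named list is the closest R stand-in for that dictionary, and jsonlite can parse the captured string directly (again a sketch, assuming models_json from above):

library(jsonlite)

# The model names become the element names, the listing counts the values
models <- fromJSON(models_json)
model_names <- names(models)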

I created a user-defined function, getOptions, which returns the options for each make. Each car make value comes from the list of possible items in the first dropdown. I loop over those possible values, use the function to return a list of options for that make, and add those lists as values to a dictionary, results, whose keys are the makes of car. Again, for R an object with functionality similar to a Python dictionary needs to be found.
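
The equivalent R loop could use a named list keyed by make. The getOptions() below is hypothetical: it stands for an R function assembled from the sketches above, not something from a package:

# results plays the role of the Python 'results' dictionary
results <- list()
for (make in makes) {
  results[[make]] <- getOptions(make)  # key = make, value = character vector of models
}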

That dictionary of lists needs to be converted to a dataframe, which includes a transpose operation, so that the output is tidy: the headers are the car makes and the column beneath each header holds the associated models.
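
In R one way to get the same wide layout (one column per make) is to pad each make's vector to a common length and bind them as columns; a sketch, assuming results from the loop above:

# Pad every vector with empty strings so all columns have equal length,
# mirroring what the pandas transpose plus None-tidying achieves
max_len <- max(lengths(results))
padded  <- lapply(results, function(x) c(x, rep("", max_len - length(x))))
wide    <- as.data.frame(padded, stringsAsFactors = FALSE, check.names = FALSE)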

Finally, the whole thing can be written out to csv.
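
In R that last step is a one-liner (the path and encoding are just example choices):

# fileEncoding keeps the Polish characters intact, much like utf-8-sig in the Python demo
write.csv(wide, "Data.csv", row.names = FALSE, fileEncoding = "UTF-8")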

So, hopefully that gives you an idea of one way to achieve what you want. Perhaps someone else can use this to help write you a solution.

Here is a Python demo of the above:

import requests
from bs4 import BeautifulSoup as bs
import re
import ast
import pandas as pd

headers = {
    'User-Agent' : 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36'
}


def getOptions(make):  #function to return options based on make
    data = {
             'search[filter_enum_make]': make,
             'search[dist]' : '5',
             'search[category_id]' : '29'
            }

    r = requests.post('https://www.otomoto.pl/ajax/search/list/', data = data, headers = headers)   
    try:
        # verify the regex here: https://regex101.com/r/emvqXs/1
        data = re.search(r'"filter_enum_model":(.*),"new_used"', r.text ,flags=re.DOTALL).group(1) #regex to extract the string containing the models associated with the car make filter 
        aDict = ast.literal_eval(data) #convert string representation of dictionary to python dictionary
        d = len({k.lower(): v for k, v in aDict.items()}.keys()) #find length of unique keys when accounting for case
        dirtyList = list(aDict)[:d] #trim to unique values
        cleanedList = [item for item in dirtyList if item != 'other' ] #remove 'other' as doesn't appear in dropdown
    except:
        cleanedList = [] # sometimes there are no associated values in 2nd dropdown
    return cleanedList

r = requests.get('https://www.otomoto.pl/osobowe/')
soup = bs(r.content, 'lxml')
values = [item['value'] for item in soup.select('#param571 option') if item['value'] != '']

results = {}
# build a dictionary of lists to hold options for each make
for value in values:
    results[value] = getOptions(value) #function call to return options based on make

# turn into a dataframe and transpose so each column header is the make and the options are listed below
df = pd.DataFrame.from_dict(results,orient='index').transpose()

#write to csv
df.to_csv(r'C:\Users\User\Desktop\Data.csv', sep=',', encoding='utf-8-sig',index = False )

Example csv output (screenshot):

Example json for alfa romeo:

{"145":1,"146":1,"147":218,"155":1,"156":118,"159":559,"164":2,"166":39,"33":1,"Alfasud":2,"Brera":34,"Crosswagon":2,"GT":89,"GTV":7,"Giulia":251,"Giulietta":378,"Mito":224,"Spider":24,"Sportwagon":2,"Stelvio":242,"alfasud":2,"brera":34,"crosswagon":2,"giulia":251,"giulietta":378,"gt":89,"gtv":7,"mito":224,"spider":24,"sportwagon":2,"stelvio":242}

Example regex match for alfa romeo (screenshot):

Example list of filter options returned from the function call with make parameter value alfa-romeo:

['145', '146', '147', '155', '156', '159', '164', '166', '33', 'Alfasud', 'Brera', 'Crosswagon', 'GT', 'GTV', 'Giulia', 'Giulietta', 'Mito', 'Spider', 'Sportwagon', 'Stelvio']

Example Fiddler request (screenshot):

Example ajax response html that the options are taken from:

<section id="body-container" class="om-offers-list"
        data-facets='{"offer_seek":{"offer":2198},"private_business":{"business":1326,"private":872,"all":2198},"categories":{"29":2198,"161":953,"163":953},"categoriesParent":[],"filter_enum_model":{"145":1,"146":1,"147":219,"155":1,"156":116,"159":561,"164":2,"166":37,"33":1,"Alfasud":2,"Brera":34,"Crosswagon":2,"GT":88,"GTV":7,"Giulia":251,"Giulietta":380,"Mito":226,"Spider":25,"Sportwagon":2,"Stelvio":242,"alfasud":2,"brera":34,"crosswagon":2,"giulia":251,"giulietta":380,"gt":88,"gtv":7,"mito":226,"spider":25,"sportwagon":2,"stelvio":242},"new_used":{"new":371,"used":1827,"all":2198},"sellout":null}'
        data-showfacets=""
        data-pagetitle="Alfa Romeo samochody osobowe - otomoto.pl"
        data-ajaxurl="https://www.otomoto.pl/osobowe/alfa-romeo/?search%5Bbrand_program_id%5D%5B0%5D=&search%5Bcountry%5D="
        data-searchid=""
        data-keys=''
        data-vars=""

Re-written getOptions function (the non-regex version, reading the options from the data-facets attribute):

from bs4 import BeautifulSoup as bs

def getOptions(make):  #function to return options based on make
    data = {
             'search[filter_enum_make]': make,
             'search[dist]' : '5',
             'search[category_id]' : '29'
            }

    r = requests.post('https://www.otomoto.pl/ajax/search/list/', data = data, headers = headers)   
    soup = bs(r.content, 'lxml')
    data = soup.select_one('#body-container')['data-facets'].replace('null','"null"')
    aDict = ast.literal_eval(data)['filter_enum_model'] #convert string representation of dictionary to python dictionary
    d = len({k.lower(): v for k, v in aDict.items()}.keys()) #find length of unique keys when accounting for case
    dirtyList = list(aDict)[:d] #trim to unique values
    cleanedList = [item for item in dirtyList if item != 'other' ] #remove 'other' as doesn't appear in dropdown
    return cleanedList

print(getOptions('alfa-romeo'))
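
For completeness, an R sketch of the same data-facets idea (it assumes resp from the httr sketch earlier; unlike ast.literal_eval, jsonlite copes with the unquoted null directly, so no replacement is needed). This is an illustration rather than code from the answer above:

library(rvest)
library(jsonlite)

# Parse the ajax response and read the data-facets attribute off #body-container
doc    <- read_html(content(resp, "text"))
facets <- doc %>% html_node("#body-container") %>% html_attr("data-facets")

models <- fromJSON(facets)$filter_enum_model  # named counts keyed by model name

nms <- names(models)
model_names <- nms[!duplicated(tolower(nms))]  # drop the lower-case duplicates
model_names <- setdiff(model_names, "other")   # 'other' doesn't appear in the dropdown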

R conversion and improved Python: in converting to R, I found a better way of extracting the parameters, from a js file on the server. If you open dev tools you can see the file listed in the Sources tab.

R (to improve):

library(httr)
library(jsonlite)

url <- 'https://www.otomoto.pl/ajax/jsdata/params/'
r <- GET(url)
contents <- content(r, "text")

data <- strsplit(contents, "var searchConditions = ")[[1]][2]
data <- strsplit(as.character(data), ";var searchCondition")[[1]][1]

source <- fromJSON(data)$values$'573'$'571'
makes <- names(source)

for(make in makes){
  print(make)
  print(source[make][[1]]$value)
  #break
 }

Improved Python:

import requests
import json
import pandas as pd

r = requests.get('https://www.otomoto.pl/ajax/jsdata/params/')
data = r.text.split('var searchConditions = ')[1]
data = data.split(';var searchCondition')[0]
items = json.loads(data)
source = items['values']['573']['571']
makes = [item for item in source]

results = {}

for make in makes:
    df = pd.DataFrame(source[make]) ## build a dictionary of lists to hold options for each make
    results[make]  = list(df['value'])

dfFinal = pd.DataFrame.from_dict(results,orient='index').transpose()  # turn into a dataframe and transpose so each column header is the make and the options are listed below

mask = dfFinal.applymap(lambda x: x is None) #tidy up None values to empty strings https://stackoverflow.com/a/31295814/6241235
cols = dfFinal.columns[(mask).any()]

for col in dfFinal[cols]:
    dfFinal.loc[mask[col], col] = ''
print(dfFinal)
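
As a rough sketch of one way the R version above could be finished off (this is not part of the answer itself; it reuses source and makes from the R snippet and pads each make's models into one wide table, as in the earlier sketches):

results <- list()
for (make in makes) {
  results[[make]] <- source[[make]]$value  # character vector of model values for this make
}

# Pad to a common length so the makes can sit side by side as columns
max_len <- max(lengths(results))
padded  <- lapply(results, function(x) c(x, rep("", max_len - length(x))))
dfFinal <- as.data.frame(padded, stringsAsFactors = FALSE, check.names = FALSE)

write.csv(dfFinal, "Data.csv", row.names = FALSE, fileEncoding = "UTF-8")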


Hey, very interesting topic. I tried to use this code in Python but I got an error at: soup = bs(r.content, 'lxml'). How can I solve this?

I have done part of the R conversion, but for some reason I'm having problems with the POST request - I get a 503 with R, so I guess I'm doing something wrong when passing the params in the body. Python works fine, though.

You need to make sure the import statement is at the top of all your code: from bs4 import BeautifulSoup as bs

First: should I change the Chrome version in the User-Agent to mine? The first error appears at line 29, in soup = bs(r.content, "lxml"). The second error: line 196, in __init__ ... bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?

I noticed that sometimes the second dropdown has no associated values, so I updated the top version of the code to handle this. Have you tried the R solution I gave at the bottom of the answer?