
Python: open a CSV file from a website directly in pandas, without downloading it to a folder


The page contains an "Export Data" link that downloads the page contents to a CSV file. The link does not point directly at a CSV file; instead it runs a JavaScript routine. I would like to open the CSV directly with pandas rather than downloading it, working out which folder it landed in, and opening it from there. Is this possible?
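In other words, the goal is for pandas to read the CSV straight from a URL or an in-memory buffer, with no file ever landing in a downloads folder. A minimal sketch of that goal, assuming a hypothetical direct CSV endpoint (the real export button runs JavaScript, so this URL is illustrative only):

import io
import requests
import pandas as pd

# hypothetical direct CSV endpoint -- an assumption for illustration only
csv_url = 'http://www.example.com/projections/export.csv'

# fetch the CSV text and parse it from an in-memory buffer,
# so nothing is written to a downloads folder
response = requests.get(csv_url)
response.raise_for_status()
df = pd.read_csv(io.StringIO(response.text))
print(df.head())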

My existing code uses Selenium to click the button, though if there is a better way I would be glad to hear it:

from selenium import webdriver

# assign chrome driver path to variable
chrome_path = chromepath

# create browser object
driver = webdriver.Chrome(chrome_path)

# assign url variable
url = 'http://www.fangraphs.com/projections.aspx?pos=all&stats=bat&type=fangraphsdc&team=0&lg=all&players=0&sort=24%2cd'

# navigate to web page
driver.get(url)

# click export data button
driver.find_element_by_link_text("Export Data").click()

# close driver
driver.quit()
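Since Selenium already has the fully rendered page, one alternative worth mentioning (a sketch, not a confirmed solution for this page) is to skip the export button, read driver.page_source before calling driver.quit(), and hand the HTML to pandas, which can parse tables directly. Which entry in the returned list is the projections grid depends on the page layout and would need to be checked:

import io
import pandas as pd

# parse every <table> in the rendered page; read_html returns a list of DataFrames
tables = pd.read_html(io.StringIO(driver.page_source))

# picking the largest table is a heuristic for the stats grid, not a guarantee
df = max(tables, key=len)
print(df.head())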

I just happened to come across this question, and I have a script that should work if you change the URL. Rather than using Selenium to download the CSV, it uses BeautifulSoup to scrape the table from the page and pandas to build the table for the CSV export.

Just make sure the URL ends with "page=1_100000" so that you get all of the rows. Let me know if you have any questions.

import requests
from random import choice
from bs4 import BeautifulSoup
import pandas as pd
from urllib.parse import parse_qs

desktop_agents = ['Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/602.2.14 (KHTML, like Gecko) Version/10.0.1 Safari/602.2.14',
                 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36',
                 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
                 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
                 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36',
                 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0']

url = "https://www.fangraphs.com/leaders.aspx?pos=np&stats=bat&lg=all&qual=0&type=c,4,6,5,23,9,10,11,13,12,21,22,60,18,35,34,50,40,206,207,208,44,43,46,45,24,26,25,47,41,28,110,191,192,193,194,195,196,197,200&season=2018&month=0&season1=2018&ind=0&team=0&rost=0&age=0&filter=&players=0&page=1_100000"

def random_headers():
    return {'User-Agent': choice(desktop_agents),'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'}

# request the page with a randomized User-Agent header
page_request = requests.get(url, headers=random_headers())
soup = BeautifulSoup(page_request.text,"lxml")

# the stats grid is the 12th <table> element in this page layout
table = soup.find_all('table')[11]
data = []

# pull the column headings from the fangraphs table
column_headers = []
headingrows = table.find_all('th')
for row in headingrows:
    column_headers.append(row.text.strip())

data.append(column_headers)
table_body = table.find('tbody')
rows = table_body.find_all('tr')

for row in rows:
    cols = row.find_all('td')
    cols = [ele.text.strip() for ele in cols]
    data.append([ele for ele in cols[1:]])

ID = []

# collect each player's id: parse_qs on the relative href yields keys
# like 'statss.aspx?playerid' and 'position'
for tag in soup.select('a[href^="statss.aspx?playerid="]'):
    link = tag['href']
    query = parse_qs(link)
    ID.append(query)

df1 = pd.DataFrame(data)
df1 = df1.rename(columns=df1.iloc[0])
df1 = df1.loc[1:].reset_index(drop=True)

df2 = pd.DataFrame(ID)
df2.drop(['position'], axis = 1, inplace = True, errors = 'ignore')
df2['statss.aspx?playerid'] = df2['statss.aspx?playerid'].str[0]

df3 = pd.concat([df1, df2], axis=1)

df3.to_csv("HittingGA2018.csv")
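One small usage note on the export step: by default to_csv also writes the DataFrame's integer index as an extra unnamed first column; passing index=False keeps the file to just the scraped columns:

df3.to_csv("HittingGA2018.csv", index=False)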

How can this be done with the Selenium WebDriver? I have asked the same question here as well.