
Python Pandas.read_csv fails for columns containing commas


I am trying to parse the following CSV file with Pandas. Reading the CSV fails because some columns contain embedded commas; these are the HTTP "User-Agent" values. Can anyone help me work around this?

1) I tried "quotechar", but it did not help much.
2) I don't see any quoting around the curly braces.

The CSV file:

StartTime,ClientIP,ClientPort,ServerIP,ServerPort,HttpMethod,RequestHeader.Host,HttpURI,RequestHeader.User-Agent,RequestHeader.Referrer,ResponseHeader.Content-Type,RequestHeader.DNT_x-do-not-track,DownloadContentLength,UploadContentLength,ResponseCode,Duration,RequestActualByteCount,ResponseActualByteCount
1473805362,0::ffff:174.201.4.56,11978,0::ffff:192.0.77.2,80,GET,i1.wp.com,/www.fbschedules.com/blog/wp-content/uploads/2016/09/utep-miners.jpg?resize=121%2C80,"Mozilla/5.0 (iPhone; CPU iPhone OS 9_3_2 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13F69 Safari/601.1(ir=0;pe=0;rh=0;ah=0;or=0) (ps=0)",http://www.fbschedules.com/ncaa-16/2016-penn-state-nittany-lions-football-schedule.php,image/jpeg,0,4927,0,200,7,456,5389
1473805367,0::ffff:174.201.30.80,15345,0::ffff:174.129.245.227,80,GET,lt.andomedia.com,/lt?guid=c3FsMDAxfkM2Mzc0MUE0LTcxMTUtNEFCRi1BMDlGLUJCODk0NTRGOUQyOH5lcDAwMQ%3D%3D&cb=3711104352,Pandora/1626 CFNetwork/758.5.3 Darwin/15.6.0(ir=0;pe=0;rh=0;ah=0;or=0) (ps=0),,text/html; charset=UTF-8,0,72,0,200,9,437,450
1473805289,0::ffff:174.201.23.150,8187,0::ffff:204.79.197.200,80,GET,www.bing.com,/search?q=city+hallphila++owner+locator&form=IE10TR&src=IE10TR&pc=NTJB,Mozilla/5.0 (Windows NT 6.3; ARM; Trident/7.0; Touch; rv:11.0) like Gecko(ir=0;pe=0;rh=0;ah=0;or=0) (ps=0),,text/html; charset=utf-8,1,0,0,200,487,1271,31816
1473805290,0::ffff:174.201.23.150,8187,0::ffff:204.79.197.200,80,GET,www.bing.com,"/fd/ls/l?IG=6AA55B871D1C490C9E26E3D34C29D0D9&Type=Event.CPT&DATA={"pp":{"S":"L","FC":73,"BC":436,"SE":-1,"TC":-1,"H":519,"BP":526,"CT":538,"IL":5},"ad":[185,159,1280,706,1280,2016,1]}&P=SERP&DA=BN1&MN=SERP",Mozilla/5.0 (Windows NT 6.3; ARM; Trident/7.0; Touch; rv:11.0) like Gecko(ir=0;pe=0;rh=0;ah=0;or=0) (ps=0),http://www.bing.com/search?q=city+hallphila++owner+locator&form=IE10TR&src=IE10TR&pc=NTJB,text/html,1,0,0,204,12,1436,239
Code block:

import glob
import pandas as pd

path = "/root"
dataAll = glob.glob(path + "/*.csv")
files = []
for file_ in dataAll:
    print(file_)
    df = pd.read_csv(file_)
    files.append(df)
frame = pd.concat(files)
frame
Error snippet:

---------------------------------------------------------------------------
ParserError                               Traceback (most recent call last)
<ipython-input-38-e39305246059> in <module>()
      6 for file_ in dataAll:
      7     #df = pd.read_csv(file_, names = ['StartTime','ClientIP','ClientPort','ServerIP','ServerPort','HttpMethod','RequestHeader.Host','HttpURI','RequestHeader.User-Agent','RequestHeader.Referrer','ResponseHeader.Content-Type','RequestHeader.DNT_x-do-not-track','DownloadContentLength','UploadContentLength','ResponseCode','Duration','RequestActualByteCount','ResponseActualByteCount'
----> 8     df = pd.read_csv(file_)
      9     files.append(df)
     10 frame = pd.concat(files)

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, doublequote, delim_whitespace, low_memory, memory_map, float_precision)
    676                     skip_blank_lines=skip_blank_lines)
    677 
--> 678         return _read(filepath_or_buffer, kwds)
    679 
    680     parser_f.__name__ = name

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    444 
    445     try:
--> 446         data = parser.read(nrows)
    447     finally:
    448         parser.close()

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in read(self, nrows)
   1034                 raise ValueError('skipfooter not supported for iteration')
   1035 
-> 1036         ret = self._engine.read(nrows)
   1037 
   1038         # May alter columns / col_dict

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in read(self, nrows)
   1846     def read(self, nrows=None):
   1847         try:
-> 1848             data = self._reader.read(nrows)
   1849         except StopIteration:
   1850             if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 18 fields in line 5, saw 33
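One way to get past this error, sketched here with made-up data rather than the asker's file: tell pandas to skip rows it cannot tokenize. In pandas 1.3 and later the keyword is on_bad_lines='skip'; the pandas version in the traceback above would use the older error_bad_lines=False keyword instead. Note this drops the malformed rows rather than repairing them.

```python
import io

import pandas as pd

# Minimal reproduction: 3 columns, one good row, and one row where an
# unquoted comma produces too many fields (as in the asker's file).
raw = "a,b,c\n1,hello,2\n3,bad,extra,4\n"

# pandas >= 1.3; older versions use error_bad_lines=False instead
df = pd.read_csv(io.StringIO(raw), on_bad_lines="skip")
print(df)  # the malformed second data row has been dropped
```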

Your file is malformed. Fields that contain commas must be enclosed in quotes, and your input does not do that consistently. You need to fix the input somehow.
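The point about quoting can be illustrated with Python's own csv module (a sketch with invented values, not the asker's logging code): a compliant CSV writer automatically wraps any field that contains the delimiter in quotes, which is exactly what the unquoted User-Agent fields in the file above are missing.

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)  # QUOTE_MINIMAL by default

# A field with an embedded comma is quoted automatically on output
writer.writerow(["1473805362", "Mozilla/5.0 (KHTML, like Gecko)", "200"])
print(buf.getvalue())
```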

- Not sure, but this might help. Thanks Ankan, but that doesn't get me any further. :(
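If the input cannot be regenerated with proper quoting, one could try repairing rows by field count before handing them to pandas. This is a hypothetical sketch, not tested against the full file: it assumes the surplus commas come only from the unquoted User-Agent column (column 9 of the 18 in the header), which will not hold for rows like the bing.com one whose URI embeds unescaped quotes.

```python
EXPECTED = 18  # total columns in the header
LEFT = 8       # fields before RequestHeader.User-Agent

def repair(line):
    """Collapse surplus commas back into the User-Agent field (hypothetical)."""
    parts = line.rstrip("\n").split(",")
    extra = len(parts) - EXPECTED
    if extra <= 0:
        return parts  # row already has the expected 18 fields
    # Re-join the User-Agent field plus the surplus pieces after it
    ua = ",".join(parts[LEFT:LEFT + 1 + extra])
    return parts[:LEFT] + [ua] + parts[LEFT + 1 + extra:]

# Demo with a shortened, made-up row: 19 raw fields collapse to 18
row = "t,ip,p,sip,sp,GET,host,/uri,Agent (KHTML, like Gecko) x,ref,ct,0,1,2,200,3,4,5"
fixed = repair(row)
print(len(fixed), fixed[LEFT])
```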