Python 2.7: 'ascii' codec can't encode character u'\xe4'

Tags: python, python-2.7, utf-8, character-encoding

I'm having a problem with my Python 2.7 code. I'm already using UTF-8, but it still raises this exception:

"UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 81: ordinal not in range(128)"
My file contains a lot of rows like the one below, and for some reason I'm not allowed to delete them:

desktop,[Search] Store | Automated Titles,google / cpc,Titles > Kesäkaverit,275285048,13
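For reference, this is the implicit conversion the traceback is complaining about: Python 2's default codec is ASCII, so wherever a unicode string containing u'\xe4' ('ä', as in "Kesäkaverit") is coerced to a byte string without an explicit encoding, exactly this error is raised. A minimal sketch using the sample row above (not part of the original script):

    # Python 2.7 sketch of the failure mode shown in the traceback.
    u = u'Titles > Kes\xe4kaverit'   # unicode text containing u'\xe4' ('ä')
    u.encode('utf-8')                # fine: encoding chosen explicitly
    str(u)                           # UnicodeEncodeError: 'ascii' codec can't
                                     # encode character u'\xe4' ...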
I tried the following to avoid the error, but it still isn't fixed. Can anyone help me?

1. Using the following at the top of my file:

"#!/usr/bin/python"

2. Setting the default encoding with setdefaultencoding:

import sys
reload(sys)
sys.setdefaultencoding('utf-8')
3.
content = unicode(s3core.download_file_to_memory(S3_PROFILE, S3_RAW + file), "utf-8", "ignore")
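For what it's worth, a quick way to see what each of these attempts actually affects (a sketch, not part of the original script):

    import sys
    print sys.getdefaultencoding()   # 'ascii' unless setdefaultencoding ran
    # The "#!/usr/bin/python" shebang has no effect on encodings; a source
    # encoding header ("# -*- coding: utf-8 -*-") only affects string literals
    # written inside the .py file itself.
    # unicode(data, 'utf8', 'ignore') converts bytes -> unicode; the reported
    # error is the opposite direction, unicode -> bytes via the ascii codec.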

Here is my code:

    content = unicode(s3core.download_file_to_memory(S3_PROFILE, S3_RAW + file), "utf8", "ignore")
    rows = content.split('\n')[1:]
    for row in rows:
        if not row:
            continue

        try:
            # fetch variables
            cols = row.rstrip('\n').split(',')
            transaction = cols[0]
            device_category = cols[1]
            campaign = cols[2]
            source = cols[3].split('/')[0].strip()
            medium = cols[3].split('/')[1].strip()
            ad_group = cols[4]
            transactions = cols[5]

            data_list.append('\t'.join(
                ['-'.join([dt[:4], dt[4:6], dt[6:]]), country, transaction, device_category, campaign, source,
                 medium, ad_group, transactions]))

        except:
            print 'ignoring row: ' + row
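The unicode(...) decode above cannot fail (errors are ignored); the 'ascii' codec error is raised later, wherever one of these unicode strings is handed to something byte-oriented (printing to a non-UTF-8 stdout, writing to a plain file, and so on). A hedged sketch of the usual remedy, assuming the joined lines are eventually printed or written (that part is not shown in the question): encode explicitly instead of letting Python 2 fall back to ASCII.

    # Sketch only: encode explicitly before handing the text to a byte sink.
    output = u'\n'.join(data_list).encode('utf-8')   # bytes, safe to write
    # Likewise in the except branch, avoid printing raw unicode to an ASCII
    # stdout:
    #     print 'ignoring row: ' + row.encode('utf-8')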

Comments:

Post the code at the point where you actually read the data.
Possible duplicate; the top-voted answer there is the best one.
Thanks, the code is attached.
Check out and understand the difference between byte strings and Unicode.
Switch to Python 3, which does not support implicit conversion between the two.
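For the byte string / Unicode distinction mentioned in the comments, a minimal Python 2 sketch (an illustration, not from the thread):

    b = 'Kes\xc3\xa4kaverit'   # byte string: UTF-8 bytes as read from the file
    u = b.decode('utf-8')      # explicit decode -> u'Kes\xe4kaverit'
    b2 = u.encode('utf-8')     # explicit encode -> bytes again
    # Mixing the two types makes Python 2 convert implicitly with the ascii
    # codec, which is where "'ascii' codec can't encode character u'\xe4'"
    # comes from; Python 3 simply refuses such implicit conversions.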