Scraping Twitter to CSV (tweepy, Python): how to append if the file already exists

I have the following code:

import tweepy #https://github.com/tweepy/tweepy
import csv
import random

#Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""

twitname = raw_input("Enter desired twitter account from which a tweet will be selected to act as inspiration for the poem: ")




def get_all_tweets(screen_name):
    #Twitter only allows access to a user's most recent 3240 tweets with this method

    #authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    #initialize a list to hold all the tweepy Tweets
    alltweets = []  

    #make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name = screen_name,count=200)

    #save most recent tweets
    alltweets.extend(new_tweets)

    #save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1

    #keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        print "getting tweets before %s" % (oldest)

        #all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest)

        #save most recent tweets
        alltweets.extend(new_tweets)

        #update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        print "...%s tweets downloaded so far" % (len(alltweets))

    #transform the tweepy tweets into a 2D array that will populate the csv 
    outtweets = [[tweet.text.encode("utf-8")] for tweet in alltweets]

    #write the csv  
    with open('%s_tweets.csv' % screen_name, 'wb') as f:
        writer = csv.writer(f)
        writer.writerow(["text"])
        writer.writerows(outtweets)

    pass


if __name__ == '__main__':
    #pass in the username of the account you want to download
    get_all_tweets(twitname)




spamReader = csv.reader(open(twitname + '_tweets.csv', 'r'))

twitterinsp = sum([i for i in spamReader],[]) #To flatten the list
print(random.choice(twitterinsp))

Currently, it grabs the most recent tweets, stores them in a csv file, and then displays a random entry. What I would like is for it to append the new tweets to the csv file if that file already exists. Is this possible / does anyone have any ideas? If it isn't possible, does anyone know how I would write an if/else here: if the file exists, print a random entry; else scrape, store, and then print a random entry. Thanks for any help.
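A sketch of that if/else (the `scrape_and_store` parameter here stands in for `get_all_tweets` above, and the helper names are illustrative, not part of the original code):

```python
import csv
import os
import random

def pick_random_tweet(path):
    # Read the csv and return a random cell, skipping the "text" header row.
    with open(path, 'r') as f:
        rows = list(csv.reader(f))[1:]
    return random.choice([cell for row in rows for cell in row])

def show_random_tweet(screen_name, scrape_and_store):
    path = '%s_tweets.csv' % screen_name
    if not os.path.exists(path):
        # No csv yet: scrape and store first, then pick a random entry.
        scrape_and_store(screen_name)
    return pick_random_tweet(path)
```

With this, the bottom of the script becomes a single `show_random_tweet(twitname, get_all_tweets)` call instead of unconditionally re-scraping.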

Use 'ab' instead of 'wb':

with open('%s_tweets.csv' % screen_name, 'ab') as f:
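A fuller sketch of the append approach that also guards the header row so "text" is only written once (the `os.path.isfile` check is an addition, not part of the original answer; it uses text mode 'a', where the answer's Python 2 code would use 'ab'):

```python
import csv
import os

def append_tweets(screen_name, outtweets):
    path = '%s_tweets.csv' % screen_name
    # Write the "text" header only when the file does not exist yet.
    header_needed = not os.path.isfile(path)
    with open(path, 'a') as f:
        writer = csv.writer(f)
        if header_needed:
            writer.writerow(["text"])
        writer.writerows(outtweets)
```

Calling this in place of the `open(..., 'wb')` block keeps earlier runs' tweets in the file instead of overwriting them.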



What's the benefit of writing binary here? I put it there because the original code used binary; I think binary read/write is faster and the files are smaller, but don't quote me on that. That was easy, thanks! A slightly different but related question: do you know if there is a way to append only the new data? I'm not familiar with the twitter API, but if you retrieve the tweets and build a dataframe from them together with the old tweets, you can drop all duplicate tweets based on a given key:
df.drop_duplicates(subset='keyColumns')
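The pandas idea from that comment, sketched (it assumes each row carries a stable key column such as the tweet id; the column name 'id' and the helper name are illustrative):

```python
import pandas as pd

def merge_without_duplicates(old_rows, new_rows):
    # Stack the old and new tweets, then keep the first occurrence of each id.
    df = pd.concat([pd.DataFrame(old_rows), pd.DataFrame(new_rows)],
                   ignore_index=True)
    return df.drop_duplicates(subset='id')
```

Alternatively, `api.user_timeline` accepts a `since_id` parameter, so only tweets newer than the last stored id are fetched in the first place, mirroring the `max_id` logic already in the code.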