Python program class gets stuck/idle in Anaconda/command prompt: after the first call the remaining calls are not executed, but it works in Spyder

I'm trying to run a Python script from the Anaconda prompt. It runs smoothly through the first call but then stops there. I tried it in Spyder and it works, but I want it to work from the Anaconda prompt or the command line. Is there a reason for this?

from decompress import decompress
from reddit import reddit
from clean import clean
from wikipedia import wikipedia

def main():
    dir_of_file = r"D:\Users\Jonathan\Desktop\Reddit Data\Demo\\"
    print('0. Path: ' + dir_of_file)
    reddit_repo = reddit()
    wikipedia_repo = wikipedia()
    pattern_filter = "*2007*&*2008*"
    print('1. Creating data lake')
    reddit_repo.download_files(pattern_filter,"https://files.pushshift.io/reddit/submissions/",dir_of_file,'s') 
    reddit_repo.download_files(pattern_filter,"https://files.pushshift.io/reddit/comments/",dir_of_file,'c')         

if __name__ == "__main__":
    main()
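One way to tell whether the prompt is actually hung or the call is just slow is to wrap each step in a timer and flush the prints so they appear immediately (this is a diagnostic sketch, not part of the original script; the `timed` helper and its labels are illustrative):

```python
import time

def timed(label, func, *args, **kwargs):
    """Run func, print how long it took, and flush stdout so the
    message shows up immediately in cmd/Anaconda prompt."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{label} took {elapsed:.1f}s", flush=True)
    return result, elapsed

# hypothetical usage inside main():
# result, secs = timed("download submissions",
#                      reddit_repo.download_files,
#                      pattern_filter,
#                      "https://files.pushshift.io/reddit/submissions/",
#                      dir_of_file, 's')
```

If the elapsed time keeps growing between runs rather than the process freezing, the script is working but slow, which matches the resolution below.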
The RS files are downloaded by this line of code, which is the call that runs:

reddit_repo.download_files(pattern_filter,"https://files.pushshift.io/reddit/submissions/",dir_of_file,'s') 
Update:

Added the class/functions:

# imports needed by this snippet (not shown in the original question)
import os
import urllib.request
import urllib.error
from multiprocessing.pool import ThreadPool

class reddit:

    def multithread_download_files_func(self,list_of_file):
        filename = list_of_file[list_of_file.rfind("/")+1:]
        path_to_save_filename = self.ptsf_download_files + filename
        if not os.path.exists(path_to_save_filename): 
            data_content = None
            try:
                request = urllib.request.Request(list_of_file)
                response = urllib.request.urlopen(request)
                data_content = response.read()
            except urllib.error.HTTPError:
                print('HTTP Error')
            except Exception as e:
                print(e)
            if data_content:
                with open(path_to_save_filename, 'wb') as wf:    
                    wf.write(data_content)                 
                    print(self.present_download_files + filename)                        

    def download_files(self,filter_files_df,url_to_download_df,path_to_save_file_df,prefix):
        #do some processing (builds matching_fnmatch_list; elided in the question)
        matching_fnmatch_list.sort()

        p = ThreadPool(200)
        p.map(self.multithread_download_files_func, matching_fnmatch_list)
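Since the symptom was an apparent hang inside `urlopen`, one defensive tweak (a sketch under my own naming, not the asker's final code) is to pass `timeout=` so a stalled connection raises an error instead of blocking forever, and to keep the filename extraction in a small helper:

```python
import os
import urllib.request
import urllib.error

def filename_from_url(url):
    # same extraction as multithread_download_files_func:
    # everything after the last "/"
    return url[url.rfind("/") + 1:]

def download_one(url, save_dir, timeout=60):
    """Download url into save_dir, skipping files that already exist.
    A stalled socket now raises within `timeout` seconds rather than
    leaving the ThreadPool worker blocked indefinitely."""
    path = os.path.join(save_dir, filename_from_url(url))
    if os.path.exists(path):
        return path
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = resp.read()
    except (urllib.error.URLError, TimeoutError) as e:
        print(f"failed: {url}: {e}", flush=True)
        return None
    with open(path, "wb") as fh:
        fh.write(data)
    return path
```

With a timeout in place, a slow or dead network surfaces as printed errors in the prompt instead of looking like a frozen program.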

It was just the download taking a long time. I changed networks and it worked as expected, so there is no problem with cmd or the Anaconda prompt.