
C#: Unable to run a Python script that launches a Scrapy spider from C#


I followed this, and I was able to run a dummy Python file from my C# code, like so:

    public JsonResult FetchscrapyDataUrl(String website)
    {
        ProcessStartInfo start = new ProcessStartInfo();
        start.FileName = @"C:\ProgramData\Anaconda3\python.exe";
        // This is the path to the .py file from the Scrapy project
        start.Arguments = @"C:\Users\PycharmProjects\scraping_web\scrape_info\main.py";

        start.CreateNoWindow = false;        // We don't need a new window
        start.UseShellExecute = false;       // Do not use the OS shell
        start.RedirectStandardOutput = true; // Output generated by the script is redirected back
        start.RedirectStandardError = true;  // Errors on stderr are redirected back (for example, exceptions)
        Console.WriteLine("Python Starting");

        using (Process process = Process.Start(start))
        using (StreamReader reader = process.StandardOutput)
        {
            string stderr = process.StandardError.ReadToEnd(); // Exceptions from our Python script
            string result = reader.ReadToEnd();                // stdout of the script (for example: print "test")
            process.WaitForExit();
            Console.Write(result);
            return Json(result, JsonRequestBehavior.AllowGet); // the original snippet was missing a return
        }
    }
       
Now, I know I can run the spider from a single file, main.py, like so:

       
    # main.py
    from scrapy import cmdline
    cmdline.execute("scrapy crawl text".split())
When I run the main.py file from cmd in Windows, it works fine, but when I run it from my C# (.NET Framework) code, it does not. The error is:

"Scrapy 1.4.0 - no active project\r\n\r\nUnknown command: crawl\r\n\r\nUse \"scrapy\" to see available commands\r\n"
Any idea how to get this running, or am I missing some path setting in Windows?

Or should I run the spider from C# in some other way?

You need to set the WorkingDirectory property.


Or you need to `cd` into that directory for it to work.
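Both suggestions address the same root cause: `scrapy crawl` only finds a project when the process's current working directory contains the project's `scrapy.cfg`. In C# that means something like `start.WorkingDirectory = @"C:\Users\PycharmProjects\scraping_web\scrape_info";` before `Process.Start`. A minimal sketch of the principle, using Python's `subprocess` (the directory here is a throwaway placeholder, not a real Scrapy project):

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the Scrapy project directory.
project_dir = tempfile.mkdtemp()

# Without cwd=..., the child process starts in the caller's directory --
# this is why "scrapy crawl" reports "no active project" when launched
# from C# without WorkingDirectory set.
out_default = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    capture_output=True, text=True,
).stdout.strip()

# With cwd=..., the child starts inside the given directory, which is
# exactly what ProcessStartInfo.WorkingDirectory does in C#.
out_with_cwd = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    capture_output=True, text=True,
    cwd=project_dir,
).stdout.strip()

matches = os.path.realpath(out_with_cwd) == os.path.realpath(project_dir)
print(matches)  # prints True
```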

Instead, use the scrapyd API to run your scraper... you just have to send a POST request to run it. I believe you know how to send a POST request from C#. I am not a C# programmer, otherwise I would have shown you.

Thanks for the reply... could you give me an outline of the POST request in any language you are familiar with?

Basically, this is the curl command that starts the spider: `curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2 -d anyOtherExtraParam=Value`. You can copy and paste that command into this tool and you will get the converted code in Python/PHP.

Can I use this scrapyd API with JavaScript? Do I need to deploy my project to the cloud for that?

I looked at your question again and saw your error, "Scrapy 1.4.0 - no active project". If you are not in a directory where a Scrapy project exists, I suggest you first do `cd /path/to/your/project/`, either in your C# code or in main.py.
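The curl command in the comments above is a plain form-encoded POST, so it maps directly onto any HTTP client. A hedged sketch in Python (`myproject` and `spider2` are the placeholder names from the comment; the actual send is left commented out because it needs a scrapyd instance listening on localhost:6800):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Same parameters as:
#   curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
params = {
    "project": "myproject",  # placeholder project name from the comment
    "spider": "spider2",     # placeholder spider name from the comment
}

body = urlencode(params).encode("ascii")
# Passing data= makes urllib issue a POST instead of a GET.
req = Request("http://localhost:6800/schedule.json", data=body)

print(req.get_method())       # → POST
print(body.decode("ascii"))   # → project=myproject&spider=spider2

# To actually schedule the spider (requires a running scrapyd):
# from urllib.request import urlopen
# print(urlopen(req).read())
```

The same request can be sent from C# with `HttpClient` and `FormUrlEncodedContent`, or from JavaScript with `fetch`, since scrapyd only cares about the form-encoded body.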