
Ruby on Rails + Sidekiq CSV import error


I just switched my CSV upload process to run on a worker. It works fine locally, but when I try to upload a file in production I get the error below. It looks to me like the worker simply doesn't know where to find the file:

2017-02-22T16:32:48.914560+00:00 app[worker.1]: 4 TID-os5wk7tgo InventoryUploadWorker JID-f5be1032c019c28684582427 INFO: start
2017-02-22T16:32:49.224819+00:00 heroku[worker.1]: source=worker.1 dyno=heroku.53973862.c2c36482-5d99-4a68-a399-0918d1ed36d2 sample#load_avg_1m=0.29 sample#load_avg_5m=0.07 sample#load_avg_15m=0.02
2017-02-22T16:32:49.224900+00:00 heroku[worker.1]: source=worker.1 dyno=heroku.53973862.c2c36482-5d99-4a68-a399-0918d1ed36d2 sample#memory_total=144.37MB sample#memory_rss=134.18MB sample#memory_cache=6.66MB sample#memory_swap=3.54MB sample#memory_pgpgin=55377pages sample#memory_pgpgout=19323pages sample#memory_quota=512.00MB
2017-02-22T16:32:49.167416+00:00 app[worker.1]:   Company Load (0.6ms)  SELECT  "companies".* FROM "companies" WHERE "companies"."id" = $1 LIMIT 1  [["id", 32]]
2017-02-22T16:32:49.246868+00:00 app[worker.1]: 4 TID-os5wk7tgo InventoryUploadWorker JID-f5be1032c019c28684582427 INFO: fail: 0.332 sec
2017-02-22T16:32:49.247408+00:00 app[worker.1]: 4 TID-os5wk7tgo WARN: {"class":"InventoryUploadWorker","args":["/tmp/RackMultipart20170222-4-1jaehp1.csv","32"],"retry":false,"queue":"default","jid":"f5be1032c019c28684582427","created_at":1487781168.915459,"enqueued_at":1487781168.9161458}
2017-02-22T16:32:49.247452+00:00 app[worker.1]: 4 TID-os5wk7tgo WARN: Errno::ENOENT: No such file or directory @ rb_sysopen - /tmp/RackMultipart20170222-4-1jaehp1.csv
The worker:

class InventoryUploadWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  # Point both the Sidekiq server (worker dyno) and client (web dyno) at the
  # Redis To Go add-on. These blocks usually live in an initializer, but they
  # behave the same here.
  Sidekiq.configure_server do |config|
    config.redis = { url: ENV["REDISTOGO_URL"], network_timeout: 5 }
  end

  Sidekiq.configure_client do |config|
    config.redis = { url: ENV["REDISTOGO_URL"], network_timeout: 5 }
  end

  # file_path is whatever the web process passed in - here, a Rack tempfile path.
  def perform(file_path, company_id)
    CsvImport.csv_import(file_path, Company.find(company_id))
  end
end
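Judging by the job arguments in the log (["/tmp/RackMultipart20170222-4-1jaehp1.csv","32"]), the job is enqueued with the Rack tempfile's path, presumably via something like this hypothetical controller call (the actual enqueue isn't shown in the post):

InventoryUploadWorker.perform_async(params[:file].tempfile.path, company.id)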
The import method:

require 'csv'

class CsvImport

  def self.csv_import(filename, company)
    time = Benchmark.measure do
      File.open(filename) do |file|
        headers = file.first
        # Stream the file in 150-line slices so the whole CSV never sits in memory.
        file.lazy.each_slice(150) do |lines|
          Part.transaction do
            inventory = []
            insert_to_parts_db = []
            rows = CSV.parse(lines.join, write_headers: true, headers: headers)
            rows.map do |row|
              part_match = Part.find_by(part_num: row['part_num'])
              new_part = build_new_part(row['part_num'], row['description']) unless part_match
              quantity = row['quantity'].to_i
              row.delete('quantity')
              row["condition"] = match_condition(row)
              # One InventoryPart record per unit of quantity.
              quantity.times do
                part = InventoryPart.new(
                  part_num: row["part_num"],
                  description: row["description"],
                  condition: row["condition"],
                  serial_num: row["serial_num"],
                  company_id: company.id,
                  part_id: part_match ? part_match.id : new_part.id
                )
                inventory << part
              end
            end
            # activerecord-import (bulk import)
            InventoryPart.import inventory
          end
        end
      end
    end
    puts time
  end

  # build_new_part and match_condition are helper methods not shown in the post.
end
Having the Sidekiq process depend on a temp file created by the web process is not a good idea. What happens if the job fails and gets retried next week? What happens when your web and worker processes run on different machines or containers?


You should either push the CSV contents as a job argument, or move the file to a well-known location for the worker to pick up.
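For the second option, here is a minimal sketch assuming the aws-sdk-s3 gem, an S3_BUCKET environment variable, and a current_company helper (none of which appear in the post). The web process copies the upload to S3 before the request finishes, since Rack deletes its tempfile afterwards and Heroku dynos don't share a filesystem, and the worker fetches the object by key:

# Gemfile: gem 'aws-sdk-s3'
# Web side: persist the upload somewhere durable, enqueue only the object key.
class InventoryUploadsController < ApplicationController
  def create
    key = "csv_imports/#{SecureRandom.uuid}.csv"
    Aws::S3::Resource.new
      .bucket(ENV["S3_BUCKET"])
      .object(key)
      .upload_file(params[:file].tempfile.path)
    InventoryUploadWorker.perform_async(key, current_company.id)
    redirect_to root_path, notice: "Import queued"
  end
end

# Worker side: download the object to a local tempfile, then import as before.
def perform(s3_key, company_id)
  Tempfile.create(["import", ".csv"]) do |tmp|
    Aws::S3::Client.new.get_object(
      bucket: ENV["S3_BUCKET"],
      key: s3_key,
      response_target: tmp.path
    )
    CsvImport.csv_import(tmp.path, Company.find(company_id))
  end
end

Passing only the key keeps the Redis payload tiny, and a retried job can re-download the file no matter which dyno it lands on.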


What do you suggest, saving the files to Amazon S3? I definitely don't want to store them inside my app. I've set retry to false so I can see the error, and for this use case I don't necessarily want retries to happen anyway. Thanks. Re "you should push the CSV contents as an argument": I think the problem there is that the CSV files have 100k+ rows, so a job would be created for every 1,000 rows. Would I run into concurrency issues?
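Re the 100k+ row concern: fanning the rows out as job arguments can look like this sketch, assuming a hypothetical InventoryBatchWorker that parses the lines it receives and applies the same per-row logic as CsvImport:

# Hypothetical fan-out: one job per 1,000 data rows. Each payload is a small,
# self-contained string in Redis, so it survives dyno restarts and retries.
lines = File.readlines(params[:file].tempfile.path)
headers = lines.shift
lines.each_slice(1_000) do |slice|
  InventoryBatchWorker.perform_async(headers + slice.join, company.id)
end

As for concurrency: with several batch jobs running at once, two of them can both miss Part.find_by(part_num: ...) and insert the same part twice. A unique index on parts.part_num plus find_or_create_by (rescuing ActiveRecord::RecordNotUnique and retrying) is the usual guard.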