
Ruby on Rails: reducing Sidekiq job execution time


I am currently working on an application that syncs contacts to a Rails server. I use a Redis server and Sidekiq to run the contact sync in the background. My database is MongoDB, and I use the mongoid gem as the ORM. The workflow is as follows:

  • Contacts on the phone are sent through the app to the Rails server, where the job is enqueued on the Redis server
  • A cron job then triggers Sidekiq, which connects to Redis and works the job off
  • A single Sidekiq job looks like this:

  • It carries an array of contacts (up to 3000)
  • It has to process each contact; by processing I mean running an insert query against the database
  • The problem is that Sidekiq takes a long time to finish this job: 50-70 seconds on average
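The job described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual worker: ContactSyncWorker and process_contact are illustrative names, and the Sidekiq::Worker include is commented out so the sketch runs without the gem.

```ruby
# Hypothetical sketch of the job described above: it receives an array of up
# to 3000 contacts and runs one insert per contact. Class and method names
# are illustrative, not taken from the actual app.
class ContactSyncWorker
  # include Sidekiq::Worker  # enabled in the real worker

  def perform(contacts)
    contacts.each { |contact| process_contact(contact) }
  end

  private

  # Stand-in for the per-contact insert query against MongoDB, e.g. something
  # like Person.collection.insert_one(contact) in the real worker.
  def process_contact(contact)
    contact
  end
end
```

With 3000 contacts and one database round-trip each, any per-insert overhead (slow queries, logging IO) is multiplied 3000-fold, which is where the 50-70 second total comes from.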

    Here are the relevant files.

    sidekiq.yml

    # Sample configuration file for Sidekiq.
    # Options here can still be overridden by cmd line args.
    #   sidekiq -C config.yml
    
    :verbose: true
    :concurrency:  5
    :logfile: ./log/sidekiq.log
    :pidfile: ./tmp/pids/sidekiq.pid
    :queues:
      - [new_wall, 1]#6
      - [contact_wall, 1]#7
      - [email, 1]#5
      - [gcm_chat, 1]#5
      - [contact_address, 1]#7
      - [backlog_contact_address, 5]
      - [comment, 7]
      - [default, 5]
    
    mongoid.yml

    development:
      # Configure available database sessions. (required)
      sessions:
        # Defines the default session. (required)
        default:
          # Defines the name of the default database that Mongoid can connect to.
          # (required).
          database: "<%= ENV['DB_NAME'] %>"
          # Provides the hosts the default session can connect to. Must be an array
          # of host:port pairs. (required)
          hosts:
            - "<%= ENV['MONGOD_URL'] %>"
          #username: "<%= ENV['DB_USERNAME'] %>"
          #password: "<%= ENV['DB_PASSWORD'] %>"
          options:
            #pool: 12

            # Change the default write concern. (default = { w: 1 })
            # write:
            #   w: 1

            # Change the default consistency model to primary, secondary.
            # 'secondary' will send reads to secondaries, 'primary' sends everything
            # to master. (default: primary)
            # read: secondary_preferred

            # How many times Moped should attempt to retry an operation after
            # failure. (default: The number of nodes in the cluster)
            # max_retries: 20

            # The time in seconds that Moped should wait before retrying an
            # operation on failure. (default: 0.25)
            # retry_interval: 0.25
      # Configure Mongoid specific options. (optional)
      options:
        # Includes the root model name in json serialization. (default: false)
        # include_root_in_json: false
    
        # Include the _type field in serializaion. (default: false)
        # include_type_for_serialization: false
    
        # Preload all models in development, needed when models use
        # inheritance. (default: false)
        # preload_models: false
    
        # Protect id and type from mass assignment. (default: true)
        # protect_sensitive_fields: true
    
        # Raise an error when performing a #find and the document is not found.
        # (default: true)
        # raise_not_found_error: true
    
        # Raise an error when defining a scope with the same name as an
        # existing method. (default: false)
        # scope_overwrite_exception: false
    
        # Use Active Support's time zone in conversions. (default: true)
        # use_activesupport_time_zone: true
    
        # Ensure all times are UTC in the app side. (default: false)
        # use_utc: false
    test:
      sessions:
        default:
          database: db_test
          hosts:
            - localhost:27017
          options:
            read: primary
            # In the test environment we lower the retries and retry interval to
            # low amounts for fast failures.
            max_retries: 1
            retry_interval: 0
    
    
    production:
      # Configure available database sessions. (required)
      sessions:
        # Defines the default session. (required)
        default:
          # Defines the name of the default database that Mongoid can connect to.
          # (required).
          database: "<%= ENV['DB_NAME']%>"
          # Provides the hosts the default session can connect to. Must be an array
          # of host:port pairs. (required)
          hosts:
            - "<%=ENV['MONGOD_URL']%>"
          username: "<%= ENV['DB_USERNAME']%>"
          password: "<%= ENV['DB_PASSWORD']%>"
          pool: 10
          options:
    
      # Configure Mongoid specific options. (optional)
      options:
    
    ContactDump.rb

    class ContactDump
      include Mongoid::Document
      include Mongoid::Timestamps::Created
      include Mongoid::Timestamps::Updated
    
      field :contacts,   type: Array
      field :status,     type: Integer, default: 0
      field :user_id,    type: BSON::ObjectId
      field :error_msg,  type: String
    
      CONTACT_DUMP_CONS = {FRESH: 0,  PROCESSED: 1, ERROR: 2, CANTSYNC: 3}
    end
    
    How can I speed up the job processing? I have tried increasing Sidekiq's concurrency in sidekiq.yml and the connection pool in mongoid.yml, but neither helped.

    How do WhatsApp and other messaging apps handle contact syncing?

    Please ask if any other information is needed. Thanks.

    Edit: If this question can't be answered, can anyone suggest an alternative approach to syncing contacts on a Rails server?

    Indexes to the rescue:

    class ContactDump
      index({status: 1})
    end
    
    class Person
      index({h_m_num: 1})
    end
    
    Person may need more indexes, depending on what your Person.get_number_digest does.
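The question does not show Person.get_number_digest, so this is a purely hypothetical sketch of what such a phone-number normaliser might look like, to illustrate why equality lookups on h_m_num benefit from an index on a single canonical value:

```ruby
# Hypothetical sketch only: the real Person.get_number_digest is not shown in
# the question. A typical normaliser strips formatting and keeps the trailing
# digits, so "+1 (555) 010-9999" and "555-010-9999" map to the same key,
# which an index on h_m_num can then look up directly.
def get_number_digest(number)
  digits = number.gsub(/\D/, "") # drop everything that is not a digit
  digits[-10..] || digits        # keep at most the last 10 digits
end

get_number_digest("+1 (555) 010-9999") # => "5550109999"
```

If the real method queries by this digest on every contact, each of the 3000 lookups is a collection scan without the h_m_num index, which alone can account for most of the job's runtime.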

    After adding the indexes, run
    rake db:mongoid:create_indexes


    Also, be sure to remove the puts calls. You don't need them in a worker, and even when you never see the output, puts drags your performance down badly.
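As a minimal illustration of the puts point, using Ruby's stdlib Logger (Sidekiq exposes its own logger, but the stdlib keeps the sketch self-contained): at WARN level the debug block below is never evaluated, so a hot loop pays no string-building or IO cost, unlike an unconditional puts.

```ruby
require "logger"
require "stringio"

# Capture output in memory so the sketch is self-contained.
buffer = StringIO.new
logger = Logger.new(buffer)
logger.level = Logger::WARN

contacts = %w[alice bob carol]
contacts.each do |contact|
  # Unlike a bare `puts contact`, this block is skipped entirely at WARN
  # level: the message string is never built and no IO happens.
  logger.debug { "processing #{contact}" }
end

buffer.string.empty? # => true: nothing was written
```

In a Sidekiq worker the same idea applies with the worker's logger; the key point is that log output is gated by level instead of being written unconditionally for all 3000 contacts.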

    Thanks, I'll try this out and update once I have. It worked! A job now takes 5 seconds on average, down from 80 before. That is a huge improvement. Thanks, Thomas...