
Ruby on Rails: Rails SearchDexJob for 100,000 rows of records


Right now the books table has more than 100,000,000 rows.

On MySQL this produces a "data too long" error.


Can we split the records?

Is this about Elasticsearch? If not, I'd suggest using it to store the data. That isn't a lot of data. That isn't a solution. Can we just send the location id?
books = location.books
# TODO: Move into a method
Searchkick::BulkReindexJob.perform_later(
  class_name: "Book",
  record_ids: books.map(&:id),
  index_name: Book.search_index.name,
  method_name: nil
)
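One way to split the records, as asked above, is to enqueue one job per slice of IDs instead of passing the entire ID list in a single job. The "data too long" error likely comes from the serialized record_ids argument exceeding a column limit when the queue backend stores job arguments in MySQL, so keeping each payload small should avoid it. A minimal sketch, assuming the same location/Book setup as the snippet above; the batch size of 1,000 is an arbitrary choice to tune for your queue:

# Enqueue one Searchkick::BulkReindexJob per slice of IDs so no single
# job payload carries all 100,000+ record ids at once.
location.books.pluck(:id).each_slice(1000) do |ids|
  Searchkick::BulkReindexJob.perform_later(
    class_name: "Book",
    record_ids: ids,
    index_name: Book.search_index.name,
    method_name: nil
  )
end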
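Alternatively, as the comment suggests, the queue payload can carry only the location id and the job can look the books up when it runs. A sketch of a hypothetical wrapper job (LocationBooksReindexJob is not part of Searchkick; the class name, the location_id column on Book, and the batch size are all assumptions):

# Hypothetical job: only the location id travels through the queue,
# so the serialized arguments stay tiny no matter how many books exist.
class LocationBooksReindexJob < ApplicationJob
  queue_as :default

  def perform(location_id)
    Book.where(location_id: location_id).find_in_batches(batch_size: 1000) do |batch|
      Searchkick::BulkReindexJob.perform_later(
        class_name: "Book",
        record_ids: batch.map(&:id),
        index_name: Book.search_index.name,
        method_name: nil
      )
    end
  end
end

# Usage:
# LocationBooksReindexJob.perform_later(location.id)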