What is an efficient way to rank the common shared sub-arrays of multiple arrays by size and frequency?

Tags: arrays, ruby, algorithm, subset

I'm working on a problem involving Instagram hashtags. When posting an image, users typically copy and paste a "bundle" of tags, with different bundles for different subjects.

So I might have my "things in the garden" bundle, which would be something like ["garden", "beautifulhills", "outsidetrees", "greenlondon"] and so on. They're usually twenty to thirty tags long.

Sometimes they might have a few of these bundles, to keep things varied.

What I'd like to do is suggest a bundle of tags to use, based on the images they've posted in the past.

To do that, I take several arrays of tags they've previously used:

x = ["a", "b", "c", "d", "e"]
y = ["a", "b", "d", "e", "f", "g"]
z = ["a", "c", "d", "e", "f", "h"]
...
I want to find the largest subset of entries shared by these arrays.

So in this case, the largest shared subset across those three is ["a", "d", "e"]. That much is simple to get with something like x & y & z.
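
For any number of arrays, the same intersection can just be folded over the list; with the example arrays above:

arrays = [x, y, z]
arrays.reduce(:&)
# => ["a", "d", "e"]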

However, I'd like to rank these subsets by their size and by how often they occur across all of the arrays under consideration, so that the most commonly used tag bundles come out on top:

[
  {bundle: ["a","d","e"], frequency: 3, size: 3},
  {bundle: ["e","f"], frequency: 2, size: 2},
  {bundle: ["a","b"], frequency: 2, size: 2},
  {bundle: ["b","d"], frequency: 2, size: 2},
  ...
]

Assume there is a minimum size for these bundles, say two items.
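
For example, given a list of bundle hashes in the shape shown above, the ranking plus the minimum-size cut-off might be expressed along these lines (a small sketch, assuming the counting has already been done):

MIN_BUNDLE_SIZE = 2

bundles = [
  {bundle: ["a", "d", "e"], frequency: 3, size: 3},
  {bundle: ["e", "f"], frequency: 2, size: 2},
  {bundle: ["a"], frequency: 3, size: 1}  # below the minimum size, should be dropped
]

ranked = bundles
  .select { |b| b[:size] >= MIN_BUNDLE_SIZE }   # enforce the minimum bundle size
  .sort_by { |b| [-b[:frequency], -b[:size]] }  # most frequent first, then largest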

I index everything with Elasticsearch, but I found it hard to do this with aggregations, so I pull the images into Ruby and build the lists there.
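
(For reference, pulling those tag arrays out of Elasticsearch with the elasticsearch-ruby gem might look roughly like the sketch below; the images index, the tags field and the user_id filter are all assumptions, since the mapping isn't shown here.)

require 'elasticsearch'

client = Elasticsearch::Client.new(url: ENV['ELASTICSEARCH_URL'])

# Fetch the stored tag arrays for one user's images (index and field names are hypothetical).
response = client.search(
  index: 'images',
  body: {
    query: { term: { user_id: 12345 } },
    _source: ['tags'],
    size: 1000
  }
)

tag_arrays = response['hits']['hits'].map { |hit| hit['_source']['tags'] }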

To start with, I looped over all of the arrays and intersected each one with every other array, using an MD5 hash of the resulting subset as its unique key. But this limits the results: a bundle shared by three or more arrays only shows up after intersecting the pairwise results again, and I suspect each additional pass makes this approach very inefficient.

require 'digest'

x = ["a", "b", "c", "d", "e"]
y = ["a", "b", "d", "e", "f", "g"]
z = ["a", "c", "d", "e", "f", "h"]


def bundle_report arrays
  arrays = arrays.collect(&:sort)
  working = {}
  arrays.each do |array|
    arrays.each do |comparison|
      next if array == comparison
      # Key each pairwise intersection on an MD5 of its contents, so the same
      # subset found via different pairs maps onto a single entry.
      subset = array & comparison
      key = Digest::MD5.hexdigest(subset.join(""))
      working[key] ||= {subset: subset, frequency: 0}
      working[key][:frequency] += 1
      working[key][:size] = subset.length
    end
  end
  working
end

puts bundle_report([x, y, z])
=> {"bb4a3fb7097e63a27a649769248433f1"=>{:subset=>["a", "b", "d", "e"], :frequency=>2, :size=>4}, "b6fdd30ed956762a88ef4f7e8dcc1cae"=>{:subset=>["a", "c", "d", "e"], :frequency=>2, :size=>4}, "ddf4a04e121344a6e7ee2acf71145a99"=>{:subset=>["a", "d", "e", "f"], :frequency=>2, :size=>4}}

Adding a second pass gives better results:

def bundle_report arrays
  arrays = arrays.collect(&:sort)
  working = {}
  arrays.each do |array|
    arrays.each do |comparison|
      next if array == comparison
      subset = array & comparison
      key = Digest::MD5.hexdigest(subset.join(""))
      working[key] ||= {subset: subset, frequency: 0}
      working[key][:frequency] += 1
      working[key][:size] = subset.length 
    end
  end

  original_working = working.dup

  # Second pass: intersect the pairwise subsets with each other so that bundles
  # shared by three or more of the original arrays are also picked up.
  original_working.each do |key, item|
    original_working.each do |comparison_key, comparison|
      next if item == comparison
      subset = item[:subset] & comparison[:subset]
      key = Digest::MD5.hexdigest(subset.join(""))
      working[key] ||= {subset: subset, frequency: 0}
      working[key][:frequency] += 1
      working[key][:size] = subset.length
    end
  end
  working
end

puts bundle_report([x, y, z])
=> {"bb4a3fb7097e63a27a649769248433f1"=>{:subset=>["a", "b", "d", "e"], :frequency=>2, :size=>4}, "b6fdd30ed956762a88ef4f7e8dcc1cae"=>{:subset=>["a", "c", "d", "e"], :frequency=>2, :size=>4}, "ddf4a04e121344a6e7ee2acf71145a99"=>{:subset=>["a", "d", "e", "f"], :frequency=>2, :size=>4}, "a562cfa07c2b1213b3a5c99b756fc206"=>{:subset=>["a", "d", "e"], :frequency=>6, :size=>3}}

Can you suggest an efficient way to build up this ranking of large shared subsets?

Rather than intersecting each array with every other array (which could get out of hand quickly), how about keeping a persistent index (in Elasticsearch?) of every combination seen so far, together with its frequency count. Then, for each new set of tags, increment the frequency count of every sub-combination of that tag set by 1.

Here's a sketch:

require 'digest'

def bundle_report(arrays, min_size = 2, max_size = 10)
  combination_index = {}

  arrays.each do |array|
    # Count every combination of this array's tags, from the minimum bundle
    # size up to a capped maximum, keyed on an MD5 of its contents.
    (min_size..[max_size, array.length].min).each do |length|
      array.combination(length).each do |combination|
        key = Digest::MD5.hexdigest(combination.join(''))
        combination_index[key] ||= {bundle: combination, frequency: 0, size: length}
        combination_index[key][:frequency] += 1
      end
    end
  end

  # Rank by frequency first, then by bundle size.
  combination_index.to_a.sort_by { |x| [x[1][:frequency], x[1][:size]] }.reverse
end

input_arrays = [
  ["a", "b", "c", "d", "e"],
  ["a", "b", "d", "e", "f", "g"],
  ["a", "c", "d", "e", "f", "h"]
]

bundle_report(input_arrays)[0..5].each do |x|
  puts x[1]
end

Which results in:

{:bundle=>["a", "d", "e"], :frequency=>3, :size=>3}
{:bundle=>["d", "e"], :frequency=>3, :size=>2}
{:bundle=>["a", "d"], :frequency=>3, :size=>2}
{:bundle=>["a", "e"], :frequency=>3, :size=>2}
{:bundle=>["a", "d", "e", "f"], :frequency=>2, :size=>4}
{:bundle=>["a", "b", "d", "e"], :frequency=>2, :size=>4}

But this may not scale well either.
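
If rebuilding the index from scratch each time becomes too expensive, the same counting can be done incrementally, which is closer to the persistent-index idea above: keep the combination index around (a plain Hash here, though it could equally live in Elasticsearch or a key-value store) and fold each new tag array into it as it arrives. A rough sketch of that variant:

require 'digest'

# Increment the counts for every qualifying sub-combination of one new tag array.
# Persisting combination_index between calls is left out of this sketch.
def index_tag_array(combination_index, array, min_size = 2, max_size = 10)
  sorted = array.sort  # sort so the same bundle always hashes to the same key
  (min_size..[max_size, sorted.length].min).each do |length|
    sorted.combination(length).each do |combination|
      key = Digest::MD5.hexdigest(combination.join(''))
      combination_index[key] ||= {bundle: combination, frequency: 0, size: length}
      combination_index[key][:frequency] += 1
    end
  end
  combination_index
end

index = {}
index_tag_array(index, ["a", "b", "c", "d", "e"])
index_tag_array(index, ["a", "b", "d", "e", "f", "g"])
# ...and so on for each new set of tags as it is posted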

Thanks Frankie! I'm working through it carefully and will report back. Really appreciate you taking a look.