
C: comparing two sets of 4 variables and returning the number of matches?


Given the following struct:

struct four_points {
    uint32_t a, b, c, d;
};
What is the fastest way to compare two such structs and return the number of variables that match (in any position)?

For example:

four_points s1 = {0, 1, 2, 3};
four_points s2 = {1, 2, 3, 4};
I would expect the result to be 3, since three numbers match between the two structs. However, given the following:

four_points s1 = {1, 0, 2, 0};
four_points s2 = {0, 1, 9, 7};
Then I would expect the result to be only 2, because only two variables match between the two structs (despite the first struct having two zeros).

I've worked out a few basic ways of doing the comparison, but this is something that will be called a few million times in a short span of time and needs to be relatively fast. My current best attempt is to use a sorting network to sort all four values of each input, then loop over the sorted values, keeping a tally of values that are equal and advancing the current index of either input accordingly.


Is there any technique that can perform better than sort-and-iterate?
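For reference, here is a plain scalar baseline that reproduces both expected results above (a sketch added for illustration; the name match4_naive is made up, and it assumes <stdint.h> plus the struct defined above). It counts how many elements of s2 occur anywhere in s1, so duplicates inside s1 are naturally ignored:

static int match4_naive(const struct four_points *s1, const struct four_points *s2)
{
    const uint32_t x[4] = { s1->a, s1->b, s1->c, s1->d };
    const uint32_t y[4] = { s2->a, s2->b, s2->c, s2->d };
    int count = 0;
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            if (y[i] == x[j]) { count++; break; }   // stop at the first match in s1
        }
    }
    return count;   // {0,1,2,3} vs {1,2,3,4} -> 3;  {1,0,2,0} vs {0,1,9,7} -> 2
}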

On modern CPUs, properly applied brute force is sometimes the way to go. The trick is writing code that isn't limited by instruction latencies, just throughput.


Are duplicates common? If they're very rare, or follow a pattern, using a branch to handle them makes the common case faster. If they're truly unpredictable, it's better to do something branchless. I was thinking of using a branch to check for duplicates between positions where they're rare, and going branchless for the positions where they're more common.

Benchmarking is tricky, because a version with branches will shine when tested a million times with the same data, but will have lots of branch mispredictions in real use.


I haven't benchmarked anything yet, but I have come up with a version that skips duplicates by using OR instead of addition to combine the matches it finds. It compiles to nice-looking x86 asm that gcc fully unrolls (no conditional branches, not even loops). (g++ is dumb and uses a 32-bit operation on the output of x86 setcc, which only sets the low 8 bits. That partial-register access will cause slowdowns, and I'm not even sure it ever zeroes the upper 24 bits... Anyway, the code from gcc 4.9.2 looks good, and clang on Godbolt looks decent too.)
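A sketch of that scalar idea (illustrative only, not the exact code being described; the name match4_scalar is made up and it assumes the struct from the question): each s2 element's four equality tests against s1 are combined with OR, so a duplicate in s1 can't be counted twice, and the four booleans are then added:

static inline int match4_scalar(const struct four_points *s1, const struct four_points *s2)
{
    int m = 0;
    m += (s2->a == s1->a) | (s2->a == s1->b) | (s2->a == s1->c) | (s2->a == s1->d);
    m += (s2->b == s1->a) | (s2->b == s1->b) | (s2->b == s1->c) | (s2->b == s1->d);
    m += (s2->c == s1->a) | (s2->c == s1->b) | (s2->c == s1->c) | (s2->c == s1->d);
    m += (s2->d == s1->a) | (s2->d == s1->b) | (s2->d == s1->c) | (s2->d == s1->d);
    return m;   // each == compiles to cmp/setcc; there are no branches to mispredict
}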

It's only 3 shuffles, and it still does all 16 comparisons. The trick is combining them with ORs where duplicates need to be merged, and then being able to count them efficiently. A packed compare outputs a vector with each element = 0 or -1 (all bits set), based on the comparison between the two elements in that position. It's designed to make a useful operand for AND or XOR, to mask off some vector elements, e.g. to make v1 += v2 & mask conditional on a per-element basis. It also works as just a boolean truth value.
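A minimal sketch of that mask idiom, separate from the matching problem (the helper name is made up):

// add v2 into v1 only in the 32-bit lanes where v2 equals key
static inline __m128i add_where_equal(__m128i v1, __m128i v2, __m128i key)
{
    __m128i mask = _mm_cmpeq_epi32(v2, key);             // 0 or -1 (all bits set) per lane
    return _mm_add_epi32(v1, _mm_and_si128(v2, mask));   // v1 += v2 & mask, per element
}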

All 16 compares with only two shuffles is possible by rotating one vector by two positions and the other by one, then comparing between the four shifted and unshifted vectors. That would be great if we didn't need to eliminate dups, but since we do, it matters where each result ends up. We're not just adding up all 16 comparison results.
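For illustration, if duplicates didn't matter, a two-shuffle version could just add up all 16 compare results (a sketch only; match4_sse_allow_dups is a made-up name, and it assumes <immintrin.h> and the four_points typedef from the code further down):

static inline int match4_sse_allow_dups(const four_points *s1p, const four_points *s2p)
{
    __m128i s1 = _mm_loadu_si128((const __m128i*)s1p);
    __m128i s2 = _mm_loadu_si128((const __m128i*)s2p);
    __m128i s1r2 = _mm_shuffle_epi32(s1, _MM_SHUFFLE(1, 0, 3, 2));  // s1 rotated by two
    __m128i s2r1 = _mm_shuffle_epi32(s2, _MM_SHUFFLE(0, 3, 2, 1));  // s2 rotated by one

    __m128i sum = _mm_cmpeq_epi32(s1, s2);                          // a-a b-b c-c d-d
    sum = _mm_add_epi32(sum, _mm_cmpeq_epi32(s1,   s2r1));          // a-b b-c c-d d-a
    sum = _mm_add_epi32(sum, _mm_cmpeq_epi32(s1r2, s2));            // c-a d-b a-c b-d
    sum = _mm_add_epi32(sum, _mm_cmpeq_epi32(s1r2, s2r1));          // c-b d-c a-d b-a

    // horizontal sum of the four lanes; every match contributed -1
    sum = _mm_add_epi32(sum, _mm_shuffle_epi32(sum, _MM_SHUFFLE(1, 0, 3, 2)));
    sum = _mm_add_epi32(sum, _mm_shuffle_epi32(sum, _MM_SHUFFLE(2, 3, 0, 1)));
    return -_mm_cvtsi128_si32(sum);   // total matches, duplicates counted repeatedly
}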

OR the packed-compare results together into one vector. Each element will be set based on whether that element of s2 had a match anywhere in s1. Then int _mm_movemask_ps(__m128 a) turns the vector into a bitmap, and we popcount the bitmap. (This needs hardware popcnt; otherwise fall back to a version with a 4-bit lookup table.)
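The 4-bit lookup-table fallback is tiny, since _mm_movemask_ps only produces a 4-bit mask (sketch, assuming popcnt isn't available):

static const unsigned char popcnt4[16] = { 0,1,1,2, 1,2,2,3, 1,2,2,3, 2,3,3,4 };
// ... then:  int count = popcnt4[matchmask & 0xF];   // instead of _mm_popcnt_u32(matchmask)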

The vertical ORs take care of duplicates in s1, but duplicates in s2 are a less obvious extension and would take more work. I did eventually think of a way that is less than twice as slow (see below).

Hmm, if zero can occur as a regular element, we may also need a byte shift after the compares, to turn potential false positives into zeros. If there is a sentinel value that cannot occur in s1, you could shift in elements of that value instead of 0. (SSE has PALIGNR, which gives you any contiguous 16-byte window of the contents of two registers appended; it's named for the use case of emulating an unaligned load from two aligned loads. So you would keep a constant vector of that element.)
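A minimal sketch of that PALIGNR use (SSSE3; the helper name is made up): shift a vector right by one 32-bit element while shifting in the low element of a fill vector (e.g. a vector of the sentinel value) instead of zeros:

static inline __m128i shift_in_element(__m128i v, __m128i fill)
{
    return _mm_alignr_epi8(fill, v, 4);   // bytes 4..19 of the 32-byte concatenation fill:v
}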


Update: I thought of a nice trick that avoids the need for an identity element. We can actually get all 6 necessary s2-vs-s2 comparisons done with just two vector compares, and then combine the results:

  • Doing the same compare in the same place in both vectors lets you OR two results together without having to mask before the OR. (This works around the lack of a sentinel value.)

  • Shuffling the output of the compares, rather than doing an extra shuffle-and-compare of s2. This means we can get d==a done alongside the other compares.

  • Note that we're not limited to shuffling whole elements around. A byte-wise shuffle can put bytes from different comparison results into a single vector element, which is then compared against zero. (This saves less than I had hoped; see below.)

Checking for duplicates is a big slowdown (especially in throughput, not so much in latency). So you're still best off arranging for a sentinel value in s2 that will never match any s1 element, which you say is possible. I'm only presenting this because I thought it was interesting (and it gives you an option in case you ever need a version without sentinels).

This needs SSSE3 for pshufb, which handles the dupmask step in match4_sse further below. The pshufb plus a pcmpeq (and a pxor to generate a constant) replace a shuffle (bslli(s2bc, 12)), an OR, and an AND; the sequence they replace is shown here:

d==bc  c==ab b==a a==d = s2b|s2c
d==a   0     0    0    = byte-shift-left(s2b) = s2d0
d==abc c==ab b==a a==d = s2abc
d==abc c==ab b==a 0    = mask(s2abc).  Maybe use PBLENDW or MOVSS from s2d0 (which we know has zeros) to save loading a 16B mask.

__m128i s2abcd = _mm_or_si128(s2b, s2c);
//s2bc = _mm_shuffle_epi8(s2bc, _mm_set_epi8(-1,-1,0,12,  -1,-1,-1,8, -1,-1,-1,4,  -1,-1,-1,-1));
//__m128i dupmask = _mm_cmpeq_epi32(s2bc, _mm_setzero_si128());
__m128i s2d0 = _mm_bslli_si128(s2b, 12);  // d==a  0  0  0
s2abcd = _mm_or_si128(s2abcd, s2d0);
__m128i dupmask = _mm_blend_epi16(s2abcd, s2d0, 0 | (2 | 1));
//__m128i dupmask = _mm_and_si128(s2abcd, _mm_set_epi32(-1, -1, -1, 0));

match = _mm_andnot_si128(dupmask, match);  // ~dupmask & match;  first arg is the one that's inverted
I can't recommend MOVSS; it will incur extra latency on AMD because it runs in the FP domain. PBLENDW is SSE4.1. popcnt is available on AMD K10, but PBLENDW isn't.

The four packed compares for matching s1 against s2 are arranged like this:

{ 1d 1c 1b 1a }
  == == == ==   packed-compare with
{ 2d 2c 2b 2a }

{ 1a 1d 1c 1b }
  == == == ==   packed-compare with
{ 2d 2c 2b 2a }

{ 1b 1a 1d 1c }  # if dups didn't matter: do this shuffle on s2
  == == == ==   packed-compare with
{ 2d 2c 2b 2a }

{ 1c 1b 1a 1d } # if dups didn't matter: this result from { 1a ... }
  == == == ==   packed-compare with
{ 2d 2c 2b 2a }                                           { 2b ...
#include <stdint.h>
#include <immintrin.h>

typedef struct four_points {
    int32_t a, b, c, d;
} four_points;
//typedef uint32_t four_points[4];

// small enough to inline, only 62B of x86 instructions (gcc 4.9.2)
static inline int match4_sse_noS2dup(const four_points *s1pointer, const four_points *s2pointer)
{
    __m128i s1 = _mm_loadu_si128((__m128i*)s1pointer);
    __m128i s2 = _mm_loadu_si128((__m128i*)s2pointer);
    __m128i s1b= _mm_shuffle_epi32(s1, _MM_SHUFFLE(0, 3, 2, 1));
    // no shuffle needed for first compare
    __m128i match = _mm_cmpeq_epi32(s1 , s2);  //{s1.d==s2.d?-1:0, 1c==2c, 1b==2b, 1a==2a }
    __m128i s1c= _mm_shuffle_epi32(s1, _MM_SHUFFLE(1, 0, 3, 2));
    s1b = _mm_cmpeq_epi32(s1b, s2);
    match = _mm_or_si128(match, s1b);  // merge dups by ORing instead of adding

    // note that we shuffle the original vector every time
    // multiple short dependency chains are better than one long one.
    __m128i s1d= _mm_shuffle_epi32(s1, _MM_SHUFFLE(2, 1, 0, 3));
    s1c = _mm_cmpeq_epi32(s1c, s2);
    match = _mm_or_si128(match, s1c);
    s1d = _mm_cmpeq_epi32(s1d, s2);

    match = _mm_or_si128(match, s1d);    // match = { s2.a in s1?,  s2.b in s1?, etc. }

    // turn the high bit of each 32bit element into a bitmap of s2 elements that have matches anywhere in s1
    // use float movemask because integer movemask does 8bit elements.
    int matchmask = _mm_movemask_ps (_mm_castsi128_ps(match));

    return _mm_popcnt_u32(matchmask);  // or use a 4b lookup table for CPUs with SSE2 but not popcnt
}
#### comparing S2 with itself to mask off duplicates
{  0 2d 2c 2b }
{ 2d 2c 2b 2a }     == == ==

{  0  0 2d 2c }
{ 2d 2c 2b 2a }        == ==

{  0  0  0 2d }
{ 2d 2c 2b 2a }           ==
static inline
int match4_sse(const four_points *s1pointer, const four_points *s2pointer)
{
    // IACA_START
    __m128i s1 = _mm_loadu_si128((__m128i*)s1pointer);
    __m128i s2 = _mm_loadu_si128((__m128i*)s2pointer);
    // s1a = unshuffled = s1.a in the low element
    __m128i s1b= _mm_shuffle_epi32(s1, _MM_SHUFFLE(0, 3, 2, 1));
    __m128i s1c= _mm_shuffle_epi32(s1, _MM_SHUFFLE(1, 0, 3, 2));
    __m128i s1d= _mm_shuffle_epi32(s1, _MM_SHUFFLE(2, 1, 0, 3));

    __m128i match = _mm_cmpeq_epi32(s1 , s2);  //{s1.d==s2.d?-1:0, 1c==2c, 1b==2b, 1a==2a }
    s1b = _mm_cmpeq_epi32(s1b, s2);
    match = _mm_or_si128(match, s1b);  // merge dups by ORing instead of adding

    s1c = _mm_cmpeq_epi32(s1c, s2);
    match = _mm_or_si128(match, s1c);
    s1d = _mm_cmpeq_epi32(s1d, s2);
    match = _mm_or_si128(match, s1d);
    // match = { s2.a in s1?,  s2.b in s1?, etc. }

    // s1 vs s2 all done, now prepare a mask for it based on s2 dups

/*
 * d==b   c==a   b==a  d==a   #s2b
 * d==c   c==b   b==a  d==a   #s2c
 *    OR together -> s2bc
 *  d==abc     c==ba    b==a    0  pshufb(s2bc) (packed as zero or non-zero bytes within the each element)
 * !(d==abc) !(c==ba) !(b==a)  !0   pcmpeq setzero -> AND mask for s1_vs_s2 match
 */
    __m128i s2b = _mm_shuffle_epi32(s2, _MM_SHUFFLE(1, 0, 0, 3));
    __m128i s2c = _mm_shuffle_epi32(s2, _MM_SHUFFLE(2, 1, 0, 3));
    s2b = _mm_cmpeq_epi32(s2b, s2);
    s2c = _mm_cmpeq_epi32(s2c, s2);

    __m128i s2bc= _mm_or_si128(s2b, s2c);
    s2bc = _mm_shuffle_epi8(s2bc, _mm_set_epi8(-1,-1,0,12,  -1,-1,-1,8, -1,-1,-1,4,  -1,-1,-1,-1));
    __m128i dupmask = _mm_cmpeq_epi32(s2bc, _mm_setzero_si128());
    // see the alternate insn sequences above (bslli/blend) and below (movemask) that can go here.

    match = _mm_and_si128(match, dupmask);
    // turn the high bit of each 32bit element into a bitmap of s2 matches
    // use float movemask because integer movemask does 8bit elements.
    int matchmask = _mm_movemask_ps (_mm_castsi128_ps(match));

    int ret = _mm_popcnt_u32(matchmask);  // or use a 4b lookup table for CPUs with SSE2 but not popcnt
    // IACA_END
    return ret;
}
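A quick usage sketch (not part of the original answer): appended below the two functions in the same file, this checks them against the question's examples; one plausible build line is gcc -O3 -mssse3 -mpopcnt.

#include <stdio.h>

int main(void)
{
    four_points s1a = {0, 1, 2, 3}, s2a = {1, 2, 3, 4};
    four_points s1b = {1, 0, 2, 0}, s2b = {0, 1, 9, 7};
    printf("%d %d\n", match4_sse(&s1a, &s2a), match4_sse_noS2dup(&s1a, &s2a));  // 3 3
    printf("%d %d\n", match4_sse(&s1b, &s2b), match4_sse_noS2dup(&s1b, &s2b));  // 2 2
    return 0;
}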
An alternative does the duplicate masking on the integer movemask bitmaps instead of on vectors:

unsigned int dupmask = _mm_movemask_ps(_mm_castsi128_ps(s2bc));
dupmask |= dupmask << 3;  // bit3 = d==abc.  garbage in bits 4-6, careful if using AVX2 to do two structs at once
        // only 2 instructions.  compiler can use lea r2, [r1*8] to copy and scale
dupmask &= ~1;  // clear the low bit

unsigned int matchmask = _mm_movemask_ps(_mm_castsi128_ps(match));
matchmask &= ~dupmask;   // ANDN is in BMI1 (Haswell), so this will take 2 instructions
return _mm_popcnt_u32(matchmask);