OpenCV grayscale/color pixel addressing

I wrote a block-matching algorithm in C++ with OpenCV. It works on grayscale images and handles the image borders via absolute pixel addresses. I have to divide an IplImage into blocks of equal size (8x8 pixels). To access the pixel values inside a block, I compute the pixel address and read the value like this:
for (int yBlock = 0; yBlock < maxYBlocks; yBlock++){
    for (int xBlock = 0; xBlock < maxXBlocks; xBlock++){
        for (int yPixel = 0; yPixel < 8; yPixel++){
            for (int xPixel = 0; xPixel < 8; xPixel++){
                pixelAdress = yBlock*imageWidth*8 + xBlock*8 + yPixel*imageWidth + xPixel;
                unsigned char* imagePointer = (unsigned char*)(img->imageData);
                pixelValue = imagePointer[pixelAdress];
            }
        }
    }
}
Or do I have to use widthStep where I previously used imageWidth, like this:
pixelAdressR = pixelAdress = yBlock*img->widthStep*8 + xBlock*8*3 + yPixel*img->widthStep + xPixel*3 + 2;
pixelAdressG = pixelAdress = yBlock*img->widthStep*8 + xBlock*8*3 + yPixel*img->widthStep + xPixel*3 + 1;
pixelAdressB = pixelAdress = yBlock*img->widthStep*8 + xBlock*8*3 + yPixel*img->widthStep + xPixel*3;
and access the values the same way:
pixelValueR = imagePointer[pixelAdressR];
pixelValueG = imagePointer[pixelAdressG];
pixelValueB = imagePointer[pixelAdressB];
If it is a multi-channel Mat (BGR in this case), you can access a single pixel as described here:

Vec3b intensity = img.at<Vec3b>(y, x);
uchar blue = intensity.val[0];
uchar green = intensity.val[1];
uchar red = intensity.val[2];
Mat (e.g. Mat img)
- grayscale (8UC1): uchar intensity = img.at<uchar>(y, x);

IplImage (e.g. IplImage* img)
- grayscale:
uchar intensity = CV_IMAGE_ELEM(img, uchar, h, w);
- color image:
uchar blue = CV_IMAGE_ELEM(img, uchar, y, x*3);
uchar green = CV_IMAGE_ELEM(img, uchar, y, x*3+1);
uchar red = CV_IMAGE_ELEM(img, uchar, y, x*3+2);
I'm not sure about your whole algorithm and can't test it at the moment, but for IplImages the memory is laid out like this:
1st row:
baseadress + 0 = b of pixel [0]
baseadress + 1 = g of pixel [0]
baseadress + 2 = r of pixel [0]
baseadress + 3 = b of pixel [1]
etc.

2nd row:
baseadress + widthStep + 0 = b
baseadress + widthStep + 1 = g
baseadress + widthStep + 2 = r
So if you have n*m blocks of 8x8 unsigned char BGR data and want to loop over pixel [x, y] inside block [bx, by], you can do:
baseadress + (by*8+ y_in_block)*widthStep + (bx*8+x)*3 +0 = b
baseadress + (by*8+ y_in_block)*widthStep + (bx*8+x)*3 +1 = g
baseadress + (by*8+ y_in_block)*widthStep + (bx*8+x)*3 +2 = r
because row by*8 + y_in_block is at address baseadress + (by*8 + y_in_block)*widthStep, and column bx*8 + x adds the offset (bx*8 + x)*3.
Yes, thanks, but I am using IplImage and want to know how to access the RGB values via an absolute pixel address. Do you understand my question?

@user3379319 "but I am using IplImage" - don't! The C API is extremely outdated and no longer developed.

I don't want to change my whole algorithm and reimplement it on cvMat just to access RGB values. That would be a lot of work. Can anyone answer my question?

Your second version should be correct, because widthStep contains all the memory (in bytes) needed to "go to the next row". Be careful: if imageWidth is the width of the image in pixels, your original code is not always correct, because OpenCV may add extra unused bytes at the end of each row to improve performance (or for SSE operations or something, I'm not sure). That is the purpose of img->widthStep, so you can address pixels easily without knowing whether extra bytes were added (and other things like sub-imaging become trivial).

Thanks a lot! That's what I was looking for! I implemented it this way and it seems to be working! By baseadress you mean the pointer to imageData, right?