OpenCV image processing: image has grid lines after applying a filter
I'm very new to low-level image processing and have just tried implementing a Gaussian kernel on both the GPU and the CPU. Both produce the same output: an image badly distorted by a grid. I know I could use OpenCV's prebuilt filter functions, but I want to learn the method behind them, so I built my own. The convolution kernel:
// Convolution kernel - this manipulates the given channel and writes out a new blurred channel.
void convoluteChannel_cpu(
    const unsigned char* const channel,         // Input channel
    unsigned char* const channelBlurred,        // Output channel
    const size_t numRows, const size_t numCols, // Channel height/width (rows, cols)
    const float *filter,                        // The filter weights to convolve with
    const int filterWidth                       // Filter width; 3 here, i.e. a 3x3 (9-element) stencil
)
{
    // Loop through the image's given R, G or B channel
    for(int rows = 0; rows < (int)numRows; rows++)
    {
        for(int cols = 0; cols < (int)numCols; cols++)
        {
            // Declare the new pixel colour value
            float newColor = 0.f;
            // Loop over every row of the stencil (3x3 matrix)
            for(int filter_x = -filterWidth/2; filter_x <= filterWidth/2; filter_x++)
            {
                // Loop over every col of the stencil (3x3 matrix)
                for(int filter_y = -filterWidth/2; filter_y <= filterWidth/2; filter_y++)
                {
                    // Clamp to the boundary of the image so we never read out of bounds.
                    int image_x = __min(__max(rows + filter_x, 0), static_cast<int>(numRows - 1));
                    int image_y = __min(__max(cols + filter_y, 0), static_cast<int>(numCols - 1));
                    // Read the neighbouring pixel; the channel is stored row-major,
                    // so the index is row * numCols + col.
                    float pixel = static_cast<float>(channel[image_x * numCols + image_y]);
                    // Look up the matching filter weight; without proper weighting
                    // the image becomes choppy.
                    float sigma = filter[(filter_x + filterWidth / 2) * filterWidth + filter_y + filterWidth / 2];
                    //float sigma = 1 / 81.f;
                    // Accumulate the weighted neighbour into the new pixel value
                    newColor += pixel * sigma;
                }
            }
            // Write the accumulated colour to the current pixel of the output channel
            channelBlurred[rows * numCols + cols] = newColor;
        }
    }
}
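The clamped-index logic above can be sanity-checked in isolation. Below is a minimal standalone sketch (`convolute` is a hypothetical free-function version of the same loop, with `std::min`/`std::max` replacing the MSVC-specific `__min`/`__max`): with an identity kernel the output reproduces the input exactly, which confirms the row-major indexing and clamping are not what produces the grid.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Same clamped 2D convolution as the kernel above, as a standalone function.
std::vector<unsigned char> convolute(const std::vector<unsigned char>& ch,
                                     std::size_t numRows, std::size_t numCols,
                                     const std::vector<float>& filter,
                                     int filterWidth)
{
    std::vector<unsigned char> out(numRows * numCols);
    for (int r = 0; r < (int)numRows; r++)
        for (int c = 0; c < (int)numCols; c++) {
            float acc = 0.f;
            for (int fx = -filterWidth / 2; fx <= filterWidth / 2; fx++)
                for (int fy = -filterWidth / 2; fy <= filterWidth / 2; fy++) {
                    // Clamp neighbour coordinates to the image bounds
                    int x = std::min(std::max(r + fx, 0), (int)numRows - 1);
                    int y = std::min(std::max(c + fy, 0), (int)numCols - 1);
                    acc += static_cast<float>(ch[x * numCols + y]) *
                           filter[(fx + filterWidth / 2) * filterWidth + fy + filterWidth / 2];
                }
            out[r * numCols + c] = static_cast<unsigned char>(acc);
        }
    return out;
}
```

With a kernel of all zeros except a 1 at the centre, every output pixel equals the corresponding input pixel.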
There are quite a few points of confusion in this question.
At the start of the code it says the filter width is 9, which would make it a 9x9 kernel. But some of the other comments call it 3. So my guess is you are actually using a 9x9 kernel with 81 weights in the filter.
But the output above can never be caused by that confusion alone.
uchar4 is 4 bytes in size. So in gaussian_cpu, splitting the data by looping over rgbaImage[i] on an image that contains no alpha value (you can infer alpha is absent from that loop) actually copies R1, G2, B3, R5, G6, B7 and so on into the red channel. It would be better to try the code on a grayscale image first, and make sure you use uchar instead of uchar4.
The output image looks exactly 1/3 of the original image's width, which supports the assumption above.
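The channel shift described above is easy to reproduce without CUDA or OpenCV. A minimal sketch (the `uchar4` struct stands in for CUDA's type, and `asUchar4` is a hypothetical helper mimicking the pointer cast): reading a packed 3-byte BGR buffer through a 4-byte `uchar4` view makes every "pixel" after the first start mid-pixel, and four source pixels collapse into three.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Stand-in for CUDA's uchar4: four packed bytes per pixel.
struct uchar4 { unsigned char x, y, z, w; };

// Mimics casting a 3-channel frame pointer to uchar4*: the bytes are simply
// regrouped four at a time, with no regard for the 3-byte pixel stride.
std::vector<uchar4> asUchar4(const std::vector<unsigned char>& bgr)
{
    std::vector<uchar4> out(bgr.size() / 4);
    std::memcpy(out.data(), bgr.data(), out.size() * sizeof(uchar4));
    return out;
}
```

Feeding it four 3-byte pixels yields only three `uchar4` values, with the channels of adjacent pixels smeared across them.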
Edit 1:
Is the input to the gaussian_cpu function RGBA or RGB? Video capture will give you a 3-channel output. The initialization of *h_inputFrame
(to uchar4) is itself wrong, because it points at 3-channel data.
Similarly, the output data is 4-channel data, but Mat outputFrame
is declared as a single channel pointing at this 4-channel data. Try making Mat outputFrame of type 8UC3
and see the result.
Also, how does the code work at all? The guassian_cpu() function has 7 input parameters in its definition, but the call site passes 8 arguments. Hopefully that is just a typo.

Comments:
- I think you have not accounted for the input image being RGB with 3 channels. If your algorithm outputs a 1-channel image, it is better to convert the input image to a single channel before running the algorithm. A glitch like this looks related to the number of input/output channels.
- Hmm, is there anything obvious in my solution that could cause the skewed output?
- Change the algorithm to do a grayscale conversion and you will see.
- @berak the filtering is done over the whole length (-x to x), where x is width/2. Sorry, one more point to clear up the confusion: this is a 3x3 filter; the only reason I put 9 in the comment was to remind myself of the total size after multiplication. What you say makes sense, but when I take the image and split the channels in the gaussian_cpu method, I put 255 into alpha. Maybe I should drop that and just use uchar? As for the kernel being called wrongly, I just realised I posted the GPU kernel call instead of the CPU one; I can assure you the arguments do match :P. The beginStream() function converts the initial BGR image captured from the camera to RGBA, then casts it to uchar4 and writes it back to the h_output pointer ready for the kernel.
- If that is true, then Mat outputFrame(Size(numCols(), numRows()), CV_8UC1, …) should be changed to CV_8UC4 and the code must work.
- @Alex.. Try using cv::cvtColor(frameIn, frameIn, cv::COLOR_BGR2BGRA) right after camera >> frameIn to turn the input image into an actual 4-channel image. Your gaussian_cpu function works fine; it just needs correctly allocated input and output images.
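What the suggested cv::cvtColor(frameIn, frameIn, cv::COLOR_BGR2BGRA) call does to the buffer layout can be sketched by hand (`bgr2bgra` below is a hypothetical stand-in, not the OpenCV implementation): every 3-byte BGR pixel becomes a 4-byte BGRA pixel with an opaque alpha, so a subsequent uchar4 cast lines up with real pixel boundaries.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hand-rolled equivalent of the BGR -> BGRA layout change performed by
// cv::cvtColor(frameIn, frameIn, cv::COLOR_BGR2BGRA).
std::vector<unsigned char> bgr2bgra(const std::vector<unsigned char>& bgr)
{
    std::vector<unsigned char> out;
    out.reserve(bgr.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < bgr.size(); i += 3)
    {
        out.push_back(bgr[i]);     // B
        out.push_back(bgr[i + 1]); // G
        out.push_back(bgr[i + 2]); // R
        out.push_back(255);        // opaque alpha
    }
    return out;
}
```

After this conversion each pixel really is 4 bytes, matching what the uchar4-based kernel expects.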
void gaussian_cpu(
    const uchar4* const rgbaImage,  // Our input image from the camera
    uchar4* const outputImage,      // The image we are writing back for display
    size_t numRows, size_t numCols, // Height and width of the input image (rows/cols)
    const float* const filter,      // The filter weights
    const int filterWidth           // The width of the stencil (3 for a 3x3 = 9-element filter)
)
{
    // Build an array to hold each channel of the given image
    unsigned char *r_c = new unsigned char[numRows * numCols];
    unsigned char *g_c = new unsigned char[numRows * numCols];
    unsigned char *b_c = new unsigned char[numRows * numCols];
    // Build arrays for each of the output (blurred) channels
    unsigned char *r_bc = new unsigned char[numRows * numCols];
    unsigned char *g_bc = new unsigned char[numRows * numCols];
    unsigned char *b_bc = new unsigned char[numRows * numCols];
    // Separate the image into R,G,B channels
    for(size_t i = 0; i < numRows * numCols; i++)
    {
        uchar4 rgba = rgbaImage[i];
        r_c[i] = rgba.x;
        g_c[i] = rgba.y;
        b_c[i] = rgba.z;
    }
    // Convolve each of the channels using our arrays
    convoluteChannel_cpu(r_c, r_bc, numRows, numCols, filter, filterWidth);
    convoluteChannel_cpu(g_c, g_bc, numRows, numCols, filter, filterWidth);
    convoluteChannel_cpu(b_c, b_bc, numRows, numCols, filter, filterWidth);
    // Recombine the channels to build the output image - 255 for alpha as we want 0 transparency
    for(size_t i = 0; i < numRows * numCols; i++)
    {
        uchar4 rgba = make_uchar4(r_bc[i], g_bc[i], b_bc[i], 255);
        outputImage[i] = rgba;
    }
    // Free the temporary channel buffers (the original leaked them every frame)
    delete[] r_c;  delete[] g_c;  delete[] b_c;
    delete[] r_bc; delete[] g_bc; delete[] b_bc;
}
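The split/recombine scaffolding in gaussian_cpu is sound as long as the input really is 4 bytes per pixel. A self-contained round-trip sketch (again with a stand-in `uchar4` and a hypothetical helper name): splitting a proper uchar4 image into channel planes and merging them back with alpha 255 reproduces an opaque input exactly.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for CUDA's uchar4.
struct uchar4 { unsigned char x, y, z, w; };

// Split a 4-channel image into R/G/B planes, then merge them back with a
// fully opaque alpha - the same scaffolding gaussian_cpu uses around the blur.
std::vector<uchar4> splitMergeRoundTrip(const std::vector<uchar4>& img)
{
    std::vector<unsigned char> r(img.size()), g(img.size()), b(img.size());
    for (std::size_t i = 0; i < img.size(); i++)
    {
        r[i] = img[i].x;
        g[i] = img[i].y;
        b[i] = img[i].z;
    }
    std::vector<uchar4> out(img.size());
    for (std::size_t i = 0; i < img.size(); i++)
        out[i] = uchar4{ r[i], g[i], b[i], 255 };
    return out;
}
```

If the input were 3-channel data viewed through uchar4, this round trip would scramble the colours instead.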
while(gpu_frames > 0)
{
    //cout << gpu_frames << "\n";
    camera >> frameIn;
    // Allocate I/O pointers
    beginStream(&h_inputFrame, &h_outputFrame, &d_inputFrame, &d_outputFrame, &d_redBlurred, &d_greenBlurred, &d_blueBlurred, &_h_filter, &filterWidth, frameIn);
    // Show the source image
    imshow("Source", frameIn);
    g_timer.Start();
    // Allocate memory on the GPU and copy the filter across
    allocateMemoryAndCopyToGPU(numRows(), numCols(), _h_filter, filterWidth);
    // Apply the gaussian kernel filter and then free any memory ready for the next iteration
    gaussian_gpu(h_inputFrame, d_inputFrame, d_outputFrame, numRows(), numCols(), d_redBlurred, d_greenBlurred, d_blueBlurred, filterWidth);
    // Copy the blurred image back to the host
    cudaMemcpy(h_outputFrame, d_frameOut, sizeof(uchar4) * numPixels(), cudaMemcpyDeviceToHost);
    g_timer.Stop();
    cudaDeviceSynchronize();
    gpuTime += g_timer.Elapsed();
    cout << "Time for this kernel " << g_timer.Elapsed() << "\n";
    Mat outputFrame(Size(numCols(), numRows()), CV_8UC1, h_outputFrame, Mat::AUTO_STEP);
    clean_mem();
    imshow("Dest", outputFrame);
    // 1ms delay to prevent the system from being interrupted whilst drawing the new frame
    waitKey(1);
    gpu_frames--;
}
// Allocate host variables, casting the frameIn and frameOut vars to uchar4 elements, these will
// later be processed by the kernel
*h_inputFrame = (uchar4 *)frameIn.ptr<unsigned char>(0);
*h_outputFrame = (uchar4 *)frameOut.ptr<unsigned char>(0);