I have split an image into 3 separate color channels - one blue, one green, and one red. I would like to normalize each of these channels by the image's intensity, where intensity = (red + blue + green)/3. To be clear, I am trying to make an image that is composed of one of the three color channels, divided by the image's intensity, where the intensity is described by the equation above. I don't think I am doing this correctly: when the image is displayed, all pixels appear black. I'm new to OpenCV (I've done the tutorials that come with the documentation, but that's it) - any advice on how to perform this normalization?
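The all-black result usually comes from doing the division on 8-bit integer data, where channel/intensity truncates to 0 or 1. A minimal sketch of the per-pixel arithmetic, using plain floating-point arrays rather than OpenCV types (in OpenCV you would convert the Mat to CV_32F/CV_64F first and then split and divide):

```cpp
#include <cstddef>
#include <vector>

// Normalize one channel by intensity = (R + G + B) / 3, per pixel.
// Works in floating point to avoid the all-black result that integer
// division of 8-bit values produces.
std::vector<double> normalizeChannel(const std::vector<double>& channel,
                                     const std::vector<double>& r,
                                     const std::vector<double>& g,
                                     const std::vector<double>& b) {
    std::vector<double> out(channel.size(), 0.0);
    for (std::size_t i = 0; i < channel.size(); ++i) {
        double intensity = (r[i] + g[i] + b[i]) / 3.0;
        if (intensity > 0.0)          // guard against division by zero
            out[i] = channel[i] / intensity;
    }
    return out;
}
```

Note that the normalized values lie roughly in [0, 3], so for display you would still need to rescale them back into the 0-255 range.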
I'm new to OpenCV and I'm trying to get the pixel value from a grayscale image. #include<opencv2/opencv.hpp> #include<opencv2/highgui/highgui.hpp> #include<opencv2/core/core.hpp> #include<iostream> using namespace cv; int main() { VideoCapture cap(1); Mat image,gray_image; cap>>image; cvtColor(image,gray_image,CV_BGR2GRAY); std::cout<<"Value: "<<gray_image.at<uchar
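For a single-channel 8-bit image, `gray_image.at<uchar>(row, col)` is essentially a row-major index into the pixel buffer. A sketch of that indexing without OpenCV (in a real cv::Mat the row stride can exceed `cols` because of padding, so `Mat::step` is used instead of `cols`); note the argument order is (row, col), not (x, y):

```cpp
#include <cstddef>
#include <vector>

// What cv::Mat::at<uchar>(row, col) does for a continuous, single-channel,
// 8-bit image: index a row-major byte buffer.
unsigned char pixelAt(const std::vector<unsigned char>& data,
                      std::size_t cols, std::size_t row, std::size_t col) {
    return data[row * cols + col];
}
```

One more common pitfall with the printed value: `uchar` streams as a character, so cast it, e.g. `std::cout << (int)gray_image.at<uchar>(row, col);`.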
I see there are similar questions to this, but they don't quite answer what I am asking, so here is my question. In C++ with OpenCV I run the code I will provide below and it returns an average pixel value of 6.32. However, when I open the image and use the mean function in MATLAB it returns an average pixel intensity of approximately 6.92. As you can see, I convert the OpenCV values to double to try to mitigate this problem, and I found that OpenCV loads the image as a set of integers whereas MATLAB loads it as decimal values that are roughly, but not exactly, the same as those integers. So
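One source of this kind of discrepancy, independent of how each library decodes the file, is where the integer-to-double conversion happens: an integer division truncates, while a double mean keeps the fraction. A minimal illustration of the two accumulation styles:

```cpp
#include <vector>

// Mean of 8-bit pixel values computed two ways. Integer division truncates
// toward zero; the floating-point version keeps the fractional part, the
// way MATLAB's mean() (computed in double) does.
int meanInt(const std::vector<unsigned char>& px) {
    long sum = 0;
    for (unsigned char v : px) sum += v;
    return static_cast<int>(sum / static_cast<long>(px.size())); // truncates
}

double meanDouble(const std::vector<unsigned char>& px) {
    double sum = 0.0;
    for (unsigned char v : px) sum += v;
    return sum / static_cast<double>(px.size());
}
```

If both means are computed in double and still differ, the remaining suspect is the pixel data itself (e.g. different JPEG decoders or an RGB-to-gray conversion with different weights).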
I am using the letter_regcog example from OpenCV; it uses a dataset from UCI with the following structure: Attribute Information: 1. lettr capital letter (26 values from A to Z) 2. x-box horizontal position of box (integer) 3. y-box vertical position of box (integer) 4. width width of box (integer) 5. high height of box (integer) 6. onpix total # on pi
I'm using OpenCV 3.0 and trying to make a feature matcher. I copied the code from http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html. I didn't have "opencv2/nonfree/features2d.hpp", so I included features2d.hpp from https://github.com/itseez/opencv_contrib/ in my cpp file. But now I get these errors: OneMoreMain.cpp(50) : error C2065: 'SurfFeatureDetector' : undeclared identifier
I copied the code of Feature Matching with FLANN from the OpenCV tutorial page, and made the following changes: I used SIFT features instead of SURF; I modified the check for a 'good match'. Instead of if( matches[i].distance < 2*min_dist ) I used if( matches[i].distance <= 2*min_dist ), otherwise I would get zero good matches when comparing an image with itself. I modified the parameters when drawing the keypoints: drawMatches( img1, k1, img2, k2, good_matches, img_matches, Scalar::all(-1
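The `<` vs `<=` change matters because matching an image with itself makes every descriptor distance 0, so min_dist is 0 and the strict test `d < 2*min_dist` rejects everything. A stripped-down sketch of that threshold logic on raw distances:

```cpp
#include <algorithm>
#include <vector>

// Count "good matches" against the tutorial's 2*min_dist threshold, with
// either a strict (<) or inclusive (<=) comparison. With an all-zero
// distance set (image matched against itself) only the inclusive test
// keeps any matches.
int countGoodMatches(const std::vector<double>& distances, bool inclusive) {
    double min_dist = *std::min_element(distances.begin(), distances.end());
    int good = 0;
    for (double d : distances)
        if (inclusive ? (d <= 2 * min_dist) : (d < 2 * min_dist))
            ++good;
    return good;
}
```

A common alternative fix is clamping the threshold, e.g. `max(2*min_dist, 0.02)`, which the OpenCV tutorial itself suggests for near-zero minimum distances.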
I am starting out with OpenCV (using C++) and was playing around with the "Feature Matching with FLANN" tutorial from here: http://opencv.itseez.com/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html I did not modify anything, just tried running it as it is. Unfortunately, when running it, I get an error when the program tries to detect the keypoints (the line detector.detect(img_1, keypoints_1)). Other OpenCV tutorials run out of the box without any problems... has anyone encountered a similar issue? My
I have an image with a non-rectangular ROI, defined by a binary mask image. In OpenCV, how can I set the pixels OUTSIDE my ROI to the nearest pixel value INSIDE the ROI? Something similar to cv::BORDER_REPLICATE, or similar to what is done in cv::warp You can make use of cv::inpaint() by restoring the selected region in an image using the region neighborhood. In your case it would be something like this: cv::inpaint(mat_input, 255 - roi, mat_output, inpaint_radius, cv::INPAINT_NS);
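The `255 - roi` part is the key step: cv::inpaint fills the pixels that are non-zero in its mask, while the ROI mask marks the pixels to keep, so the mask must be inverted first. For an 8-bit binary mask (values 0 or 255), subtracting from 255 is exactly that inversion; a plain-array sketch of it:

```cpp
#include <cstddef>
#include <vector>

// Invert an 8-bit binary mask (values 0 or 255). The result marks the
// pixels OUTSIDE the ROI, which is what cv::inpaint expects as the region
// to fill in.
std::vector<unsigned char> invertMask(const std::vector<unsigned char>& roi) {
    std::vector<unsigned char> inv(roi.size());
    for (std::size_t i = 0; i < roi.size(); ++i)
        inv[i] = static_cast<unsigned char>(255 - roi[i]);
    return inv;
}
```

In OpenCV itself the same inversion is usually written as `cv::bitwise_not(roi, inverted)` or the expression `255 - roi` on a cv::Mat.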
I have an image mask, with some contours I got from Canny. I can calculate a bounding rectangle (with a given angle that is fixed). Now I need to separate the 2 areas to the left and right of that rectangle. How can I do that? Please note that I want to work with the area within the rectangle, not the pixels that are contours. Edit This is how I obtain each bounding rectangle from the mask: cv::Mat img_edges; // mask with contours // Apply clustering to the edge mask from here // http://stackoverflow.com/questions/3
When you retrieve contours from an image, you should get 2 contours per blob - one inner and one outer. Consider the circle below - since the circle is a line with a pixel width larger than one, you should be able to find two contours in the image - one from the inner part of the circle and one from the outer part. Using OpenCV, I want to retrieve the INNER contours. However, when I use findContours(), I only seem to get the outer contours. How would I retrieve the inner contours of a blob using OpenCV? I am using the C++ API, not C, so please only suggest functions that use the C++ API. (
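Getting only outer contours typically means the retrieval mode is cv::RETR_EXTERNAL, which discards holes. With cv::RETR_CCOMP (or RETR_TREE) and the hierarchy output, each entry is [next, previous, first_child, parent]; inner contours (holes) are exactly those with a parent. A sketch of filtering the hierarchy, using a plain array in place of cv::Vec4i:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Given a findContours hierarchy (one [next, previous, first_child, parent]
// entry per contour, as produced with cv::RETR_CCOMP), return the indices
// of the INNER contours: those whose parent index is not -1.
std::vector<int> innerContourIndices(
        const std::vector<std::array<int, 4>>& hierarchy) {
    std::vector<int> inner;
    for (std::size_t i = 0; i < hierarchy.size(); ++i)
        if (hierarchy[i][3] != -1)        // has a parent => hole/inner contour
            inner.push_back(static_cast<int>(i));
    return inner;
}
```

In real OpenCV code the hierarchy is a `std::vector<cv::Vec4i>` filled by `cv::findContours(mask, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);` and the same `hierarchy[i][3] != -1` test applies.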