C++

What I want to do is measure the thickness of the eyeglasses frame. I had the idea of measuring the thickness of the frame's contour (maybe there's a better way?). So far I have outlined the frame of the glasses, but there are gaps where the lines don't meet. I thought about using HoughLinesP, but I'm not sure if that's what I need.

So far I have performed the following steps:

  • Convert the image to grayscale
  • Create an ROI around the eye/glasses area
  • Blur the image
  • Dilate the image (done to remove any thin-framed glasses)
  • Run Canny edge detection
  • Find the contours

    These are the results:

    Here is my code so far:

    //convert to grayscale
    cv::Mat grayscaleImg;
    cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );
    
    //create ROI
    cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
    cv::imshow("roi", eyeAreaROI);
    
    //blur
    cv::Mat blurredROI;
    cv::blur(eyeAreaROI, blurredROI, Size(3,3));
    cv::imshow("blurred", blurredROI);
    
    //dilate thin lines
    cv::Mat dilated_dst;
    int dilate_elem = 0;
    int dilate_size = 1;
    int dilate_type = MORPH_RECT;
    
    cv::Mat element = getStructuringElement(dilate_type, 
        cv::Size(2*dilate_size + 1, 2*dilate_size+1), 
        cv::Point(dilate_size, dilate_size));
    
    cv::dilate(blurredROI, dilated_dst, element);
    cv::imshow("dilate", dilated_dst);
    
    //edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;    
    
    cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);
    
    //create matrix of the same type and size as ROI
    Mat dst;
    dst.create(eyeAreaROI.size(), dilated_dst.type());
    dst = Scalar::all(0);
    
    dilated_dst.copyTo(dst, dilated_dst);
    cv::imshow("edges", dst);
    
    //join the lines and fill in
    vector<Vec4i> hierarchy;
    vector<vector<Point>> contours;
    
    cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::imshow("contours", dilated_dst);
    

    I'm not entirely sure what the next steps would be, or, as mentioned above, whether I should use HoughLinesP and how to implement it. Any help is very much appreciated!


    I think there are two main tasks:

  • Segment the glasses frame

  • Find the thickness of the segmented frame

    For now I'll post a method to segment the glasses in your sample image. Maybe it will work for different images too, but you'll probably have to adjust parameters, or at least you can reuse the main ideas.

    The main idea is: first, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within that biggest contour, which should be the lenses inside the frame!

    I used this image as input (it should be your blurred but not dilated image):


    // this functions finds the biggest X contours. Probably there are faster ways, but it should work...
    std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
    {
        std::vector<std::vector<cv::Point>> sortedContours;
    
        if(amount <= 0) amount = contours.size();
        if(amount > contours.size()) amount = contours.size();
    
        for(int chosen = 0; chosen < amount; )
        {
            double biggestContourArea = 0;
            int biggestContourID = -1;
            for(unsigned int i=0; i<contours.size(); ++i)
            {
                double tmpArea = cv::contourArea(contours[i]);
                if(tmpArea > biggestContourArea)
                {
                    biggestContourArea = tmpArea;
                    biggestContourID = i;
                }
            }
    
            if(biggestContourID >= 0)
            {
                //std::cout << "found area: " << biggestContourArea << std::endl;
                // found biggest contour
                // add contour to sorted contours vector:
                sortedContours.push_back(contours[biggestContourID]);
                chosen++;
                // remove biggest contour from original vector:
                contours[biggestContourID] = contours.back();
                contours.pop_back();
            }
            else
            {
                // should never happen except for broken contours with size 0?!?
                return sortedContours;
            }
    
        }
    
        return sortedContours;
    }
    
    int main()
    {
        cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
        cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
        cv::imshow("input", input);
    
        //edge detection
        int lowThreshold = 100;
        int ratio = 3;
        int kernel_size = 3;    
    
        cv::Mat canny;
        cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
        cv::imshow("canny", canny);
    
        // close gaps with "close operator"
        cv::Mat mask = canny.clone();
        cv::dilate(mask,mask,cv::Mat());
        cv::dilate(mask,mask,cv::Mat());
        cv::dilate(mask,mask,cv::Mat());
        cv::erode(mask,mask,cv::Mat());
        cv::erode(mask,mask,cv::Mat());
        cv::erode(mask,mask,cv::Mat());
    
        cv::imshow("closed mask",mask);
    
        // extract outermost contour
        std::vector<cv::Vec4i> hierarchy;
        std::vector<std::vector<cv::Point>> contours;
        //cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
        cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    
    
        // find biggest contour which should be the outer contour of the frame
        std::vector<std::vector<cv::Point>> biggestContour;
        biggestContour = findBiggestContours(contours,1); // find the one biggest contour
        if(biggestContour.size() < 1)
        {
            std::cout << "Error: no outer frame of glasses found" << std::endl;
            return 1;
        }
    
        // draw contour on an empty image
        cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
        cv::drawContours(outerFrame,biggestContour,0,cv::Scalar(255),-1);
        cv::imshow("outer frame border", outerFrame);
    
        // now find the glasses which should be the outer contours within the frame. therefore erode the outer border ;)
        cv::Mat glassesMask = outerFrame.clone();
        cv::erode(glassesMask,glassesMask, cv::Mat());
        cv::imshow("eroded outer",glassesMask);
    
        // after erosion if we dilate, it's an Open-Operator which can be used to clean the image.
        cv::Mat cleanedOuter;
        cv::dilate(glassesMask,cleanedOuter, cv::Mat());
        cv::imshow("cleaned outer",cleanedOuter);
    
    
        // use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
        cv::Mat glassesInner;
        canny.copyTo(glassesInner, glassesMask);
    
        // there is a small gap in the contour which unfortunately can't be closed with a closing operator...
        cv::dilate(glassesInner, glassesInner, cv::Mat());
        //cv::erode(glassesInner, glassesInner, cv::Mat());
        // this part is a bit of a cheat... ideally we would erode directly after dilating, to close small gaps without changing the thickness.
        cv::imshow("innerCanny", glassesInner);
    
    
        // extract contours from within the frame
        std::vector<cv::Vec4i> hierarchyInner;
        std::vector<std::vector<cv::Point>> contoursInner;
        //cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
        cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    
        // find the two biggest contours which should be the glasses within the frame
        std::vector<std::vector<cv::Point>> biggestInnerContours;
        biggestInnerContours = findBiggestContours(contoursInner,2); // find the two biggest contours
        if(biggestInnerContours.size() < 1)
        {
            std::cout << "Error: no inner frames of glasses found" << std::endl;
            return 1;
        }
    
        // draw the 2 biggest contours which should be the inner glasses
        cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
        for(unsigned int i=0; i<biggestInnerContours.size(); ++i)
            cv::drawContours(innerGlasses,biggestInnerContours,i,cv::Scalar(255),-1);
    
        cv::imshow("inner frame border", innerGlasses);
    
        // since we dilated earlier and didn't erode right afterwards, we have to erode here... this is a bit of cheating :-(
        cv::erode(innerGlasses,innerGlasses,cv::Mat() );
    
        // remove the inner glasses from the frame mask
        cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
        cv::imshow("complete glasses mask", fullGlassesMask);
    
        // color code the result to get an impression of segmentation quality
        cv::Mat outputColors1 = inputColors.clone();
        cv::Mat outputColors2 = inputColors.clone();
        for(int y=0; y<fullGlassesMask.rows; ++y)
            for(int x=0; x<fullGlassesMask.cols; ++x)
            {
                if(!fullGlassesMask.at<unsigned char>(y,x))
                    outputColors1.at<cv::Vec3b>(y,x)[1] = 255;
                else
                    outputColors2.at<cv::Vec3b>(y,x)[1] = 255;
    
            }
    
        cv::imshow("output", outputColors1);
    
        /*
        cv::imwrite("../Data/Output/face_colored.png", outputColors1);
        cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
        cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
        */
    
        cv::waitKey(-1);
        return 0;
    }
    

    This is the segmentation result I get:

    The overlay on the original image gives you an impression of the quality:


    and the inverse:


    There are some tricky parts in the code and it hasn't been tidied up yet. I hope it's understandable.

    The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the segmented mask, so that each frame pixel holds its distance to the nearest background pixel. From that you'll want to run a ridge detection, or skeletonize the mask to find the ridge. Afterwards, use the median of the ridge distances.

    Anyway, I hope this post helps you a little, even though it isn't a complete solution yet.


    Depending on lighting, frame color etc. this may or may not work, but how about simple color detection to separate the frame? Frame colors are usually much darker than human skin. You'd end up with a binary image (just black and white), and by counting the number of black pixels (the area) you get the area of the frame.

    Another possible approach is to get better edge detection by tuning the blur/dilate/erode steps (or all of them) until you get better contours. You would also need to separate the lens contours from the frame contour, and then apply cvContourArea.
