How to use OpenCV FeatureDetector on tiny images

I am using OpenCV 3 in Java and I am trying to find small images (around 25x25 pixels) inside another image. But FeatureDetector returns a Mat of size (0,0) for the small image.

    Mat smallImage = ...

    FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
    DescriptorExtractor descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

    Mat descriptorsSmall = new Mat();
    MatOfKeyPoint keyPointsSmall = new MatOfKeyPoint();

    detector.detect(smallImage, keyPointsSmall);
    descriptor.compute(smallImage, keyPointsSmall, descriptorsSmall);

Here both keyPointsSmall and descriptorsSmall come back with size zero, so of course the detection does not work.

But if I try this on larger images, like 150x150 pixels, it works fine. Any suggestions? Thank you.

Here I am adding samples. We have this source image:

And let's say we have a template for the letter P, so we need to detect this P in the source image.

Scaling the image to a higher resolution will not work for me; it would be a waste of time and resources. Ideally the solution should be rotation- and scale-invariant, but a simple solution without rotation and scale invariance is also OK.

Solutions other than OpenCV (for example Tesseract) are not acceptable for me.


Keypoint detection is not the best solution for text recognition, since you will get many features that look alike, and if the templates are very small the sliding window will not yield enough detected features.

Luckily for you, OpenCV 3 contains a text detection/recognition module in the contrib repository: link, with an example taken from here and many more to be found here:

    /*
     * cropped_word_recognition.cpp
     *
     * A demo program of text recognition in a given cropped word.
     * Shows the use of the OCRBeamSearchDecoder class API using the provided default classifier.
     *
     * Created on: Jul 9, 2015
     *     Author: Lluis Gomez i Bigorda <lgomez AT cvc.uab.es>
     */

    #include "opencv2/text.hpp"
    #include "opencv2/core/utility.hpp"
    #include "opencv2/highgui.hpp"
    #include "opencv2/imgproc.hpp"

    #include <iostream>

    using namespace std;
    using namespace cv;
    using namespace cv::text;

    int main(int argc, char* argv[])
    {

        cout << endl << argv[0] << endl << endl;
        cout << "A demo program of Scene Text Character Recognition: " << endl;
        cout << "Shows the use of the OCRBeamSearchDecoder::ClassifierCallback class using the Single Layer CNN character classifier described in:" << endl;
        cout << "Coates, Adam, et al. \"Text detection and character recognition in scene images with unsupervised feature learning.\" ICDAR 2011." << endl << endl;

        Mat image;
        if(argc>1)
            image  = imread(argv[1]);
        else
        {
            cout << "    Usage: " << argv[0] << " <input_image>" << endl;
            cout << "           the input image must contain a single character (e.g. scenetext_char01.jpg)." << endl << endl;
            return(0);
        }

        string vocabulary = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; // must have the same order as the classifier output classes

        Ptr<OCRHMMDecoder::ClassifierCallback> ocr = loadOCRHMMClassifierCNN("OCRBeamSearch_CNN_model_data.xml.gz");

        double t_r = (double)getTickCount();
        vector<int> out_classes;
        vector<double> out_confidences;

        ocr->eval(image, out_classes, out_confidences);

        cout << "OCR output = \"" << vocabulary[out_classes[0]] << "\" with confidence "
             << out_confidences[0] << ". Evaluated in "
             << ((double)getTickCount() - t_r)*1000/getTickFrequency() << " ms." << endl << endl;

        return 0;
    }

You can resample the image. It is a lot faster than scaling and a very fast procedure on its own: it simply maps each pixel to a set of pixels until the requested resolution is reached. In OpenCV you can do that with the resize function and the INTER_AREA flag: http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html
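A minimal sketch of that idea in Java (the question uses the OpenCV Java bindings); the factor of 4 and the helper name are just illustrative choices, not something this answer prescribes:

    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public class TemplateUpsample {

        // Upsample a tiny template so the feature detector has more pixels
        // to work with. The factor of 4 is an arbitrary example value; the
        // INTER_AREA flag is the one mentioned above.
        static Mat upsample(Mat smallImage, int factor) {
            Mat resampled = new Mat();
            Imgproc.resize(smallImage, resampled,
                    new Size(smallImage.cols() * factor, smallImage.rows() * factor),
                    0, 0, Imgproc.INTER_AREA);
            return resampled;
        }
    }

You would then run the same detector/extractor on the resampled Mat instead of the original 25x25 template.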

Another solution is to simply copy the image into a bigger blank image and run the detection on the bigger one.
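A possible Java sketch of that approach; the 200x200 canvas size and the white background are arbitrary choices for illustration:

    import org.opencv.core.Mat;
    import org.opencv.core.Rect;
    import org.opencv.core.Scalar;

    public class PadTemplate {

        // Place the small template in the middle of a larger blank canvas
        // and run the detection on the canvas instead. Canvas size and
        // background color are example values.
        static Mat padToCanvas(Mat smallImage) {
            Mat canvas = new Mat(200, 200, smallImage.type(), new Scalar(255, 255, 255));
            int x = (canvas.cols() - smallImage.cols()) / 2;
            int y = (canvas.rows() - smallImage.rows()) / 2;
            smallImage.copyTo(canvas.submat(new Rect(x, y, smallImage.cols(), smallImage.rows())));
            return canvas;
        }
    }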


It looks like you are trying to read personal ID properties. Basically what one does is prepare the image, vectorize it (so it is scale- and rotation-invariant) and do a compare/match. This can be done in OpenCV.

  • Preparation: Often you reduce color and brightness. If your letter is that prominent, you can apply a threshold (on brightness, color, or per color channel) and remove those colors. In your case you might turn everything that is not almost black into white. You might also want to experiment with additional sharpening and even edge detection (the sketch after this list shows a simple threshold of this kind).

  • Vectorization is very simply done and can be further improved: since you know that you are only interested in certain symbols, you should find additional qualities you can use to improve the result of the vectorization (suppressing noise, better selection and correction of certain edges/angles, etc.).

  • Matching should be quite straightforward. Since you know the target font and the potential symbols, matching should produce a lot of positive results with a very slim error margin. Also, most of the errors should be easily recognizable, so the few that do occur can be sent to a person for verification.

  • Potential Improvements:

  • Scaling using a fractal approach often preserves the properties of letters and numbers very well and can increase the quality of the result.

  • Detecting the different parts of the ID will help you to identify the target area of the detection, which allows you to improve the results further. People often focus only on what they want to recognize and forget about the additional, unneeded information, but that information gives you an idea of the potential errors you might make in detection. If you cannot recognize the name correctly, it is likely that you will also fail on the ID. So trying to extract all information from the ID is a good indicator of whether the picture quality is good enough to be certain about the information you really care about.

  • If you know exactly what your target area looks like, you can scale it to a fixed size and use per-pixel matching. Since you know exactly which font you care about, such a detection can have a surprisingly high detection rate. Combining per-pixel matching and vectorization will give you a superior detection rate, and per-pixel matching is very fast compared to vectorization (see the sketch after this list).

  • Since you know the location and size of the expected symbols, you can create decision trees based on their properties (actual size of the symbol, distribution of black in certain areas, etc.). This will narrow the question down from one out of 35 symbols to one out of four or even fewer.
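As a rough Java illustration of the threshold-and-match idea from the bullets above: the threshold value of 60, the acceptance score of 0.7, the TM_CCOEFF_NORMED method and the helper names are all assumptions for demonstration, not part of the original answer; plain per-pixel matching is only reasonable here because the font, scale and rough location are known.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    public class PerPixelMatch {

        // Binarize both images (the "preparation" step: everything that is not
        // almost black becomes white) and then compare the template against the
        // source per pixel with matchTemplate.
        static boolean containsSymbol(Mat sourceBgr, Mat templateBgr) {
            Mat source = binarize(sourceBgr);
            Mat template = binarize(templateBgr);

            Mat scores = new Mat();
            Imgproc.matchTemplate(source, template, scores, Imgproc.TM_CCOEFF_NORMED);
            Core.MinMaxLocResult best = Core.minMaxLoc(scores);
            // best.maxLoc is the top-left corner of the best match in the source.
            return best.maxVal > 0.7;
        }

        static Mat binarize(Mat bgr) {
            Mat gray = new Mat();
            Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
            Mat binary = new Mat();
            Imgproc.threshold(gray, binary, 60, 255, Imgproc.THRESH_BINARY);
            return binary;
        }
    }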
