Keeping track of features in successive frames in OpenCV

I have written a program that uses goodFeaturesToTrack and calcOpticalFlowPyrLK to track features from frame to frame. The program works reliably and can estimate the optical flow in the preview image on an Android camera from the previous frame. Here are some snippets that illustrate the general process:

goodFeaturesToTrack(grayFrame, corners, MAX_CORNERS, quality_level,
        min_distance, cv::noArray(), eig_block_size, use_harris, 0.06);

...

if (first_time == true) {
    first_time = false;
    old_corners = corners;
    safe_corners = corners;
    mLastImage = grayFrame;

} else {

    if (old_corners.size() > 0 && corners.size() > 0) {

        safe_corners = corners;
        calcOpticalFlowPyrLK(mLastImage, grayFrame, old_corners, corners,
                status, error, Size(21, 21), 5,
                TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30,
                        0.01));
    } else {
        //no features found, so let's start over.
        first_time = true;
    }

}

The code above runs repeatedly in a loop, grabbing a new preview frame at each iteration. safe_corners, old_corners, and corners are all of type std::vector<Point2f>. The above code works great.

Now, for each feature that I've identified, I'd like to be able to assign some information about the feature... number of times found, maybe a descriptor of the feature, who knows... My first approach to doing this was:

class Feature: public Point2f {
private:
  //things about a feature that I want to track
public: 
  //getters and setters, and of course:
  Feature() {
    Point2f();
  }
  Feature(float a, float b) { 
    Point2f(a,b);
  }
};

Next, all of my output arrays are changed from vector<Point2f> to vector<Feature>, which in my own twisted world ought to work because Feature is defined as a descendant class of Point2f. Polymorphism applied, I can't imagine any good reason why this should puke on me unless I did something else horribly wrong.

Here's the error message I get.

OpenCV Error: Assertion failed (func != 0) in void cv::Mat::convertTo(cv::OutputArray, int, double, double) const, file /home/reports/ci/slave50-SDK/opencv/modules/core/src/convert.cpp, line 1095

So, my question to the forum is: do the OpenCV functions truly require a vector of Point2f, or will a descendant class of Point2f work just as well? The next step would be to get gdb working with mobile code on the Android phone and see more precisely where it crashes, but I don't want to go down that road if my approach is fundamentally flawed.

Alternatively, if a feature is tracked across multiple frames using the approach above, does the address in memory for each point change?

Thanks in advance.


The short answer is YES, OpenCV functions do require std::vector<cv::Point2f> as arguments.

Note that the vectors contain cv::Point2f objects themselves, not pointers to cv::Point2f, so there is no polymorphic behavior.
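To see why, here is a minimal sketch of object slicing. The `Pt` struct and `store_in_point_vector` helper are hypothetical stand-ins (so the example compiles without OpenCV headers); `Pt` plays the role of cv::Point2f:

```cpp
#include <vector>

// Hypothetical stand-in for cv::Point2f, so the sketch is self-contained.
struct Pt { float x = 0, y = 0; };

struct Feature : Pt {
    int times_found = 0;                  // extra per-feature bookkeeping
    Feature(float a, float b) { x = a; y = b; }
};

// Copying a Feature into a std::vector<Pt> slices off the derived part:
inline Pt store_in_point_vector(const Feature& f) {
    std::vector<Pt> pts;
    pts.push_back(f);                     // only the Pt sub-object is copied
    return pts[0];                        // a plain Pt; times_found is gone
}
```

The vector stores each element by value, so pushing a Feature copies only its Pt base sub-object; the extra members are silently discarded.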

Additionally, having your Feature inherit from cv::Point2f is probably not an ideal solution. It would be simpler to use composition in this case, not to mention that it models the correct relationship (Feature has-a cv::Point2f).

Relying on an object's location in memory is also not a good idea: the elements of a std::vector can move whenever the vector reallocates, so identify features by index or by your own id instead. Rather, read up on your data structure of choice.


I'm just getting into OpenCV myself, so I can't address that aspect of the code, but your problem might be a bug in your code that leaves the base class uninitialized (at least not initialized as you might expect). Your constructors should look like this:

Feature()
  : Point2f()
{
}
Feature(float a, float b)
  : Point2f(a,b)
{
}

Your implementation creates a temporary Point2f object inside each constructor body. Those temporaries do not initialize the Feature object's Point2f base class (which is default-initialized before the body runs), and they are destroyed at the end of the constructor.
