SIFT/SURF/ORB for detection and orientation of a simple pattern
My project is centered on locating several small objects with a stationary camera. I've drawn some crisp, simple graphical pattern images (like this), printed them out, and am trying to detect them in the camera frame. My approach is straightforward.
I'm working in Python with OpenCV. My initialization code for ORB + BFMatcher is:
pt_detector = cv2.ORB()
pt_matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
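(Note: this is the old OpenCV 2.x API; on OpenCV 3.x/4.x the same setup would, as far as I know, use the factory function instead — nfeatures=500 is just the default made explicit:)
# Newer-OpenCV equivalent of the setup above (assumption: 3.x/4.x factory API)
pt_detector = cv2.ORB_create(nfeatures=500)
pt_matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)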
And for SURF/SIFT + FLANN I write (SURF shown here; SIFT is analogous):
pt_detector = cv2.SURF(400)
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
pt_matcher = cv2.FlannBasedMatcher(index_params, search_params)
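(Same remark here: cv2.SURF is the 2.x API. On newer OpenCV, SIFT is built in again since 4.4 and SURF sits in the contrib "nonfree" module, so the equivalent setup would be roughly:)
# Newer-OpenCV equivalent (assumptions: OpenCV >= 4.4 for SIFT_create,
# opencv-contrib built with nonfree for SURF_create)
pt_detector = cv2.SIFT_create()
# pt_detector = cv2.xfeatures2d.SURF_create(400)
FLANN_INDEX_KDTREE = 1  # newer OpenCV samples use 1 for the kd-tree index
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
pt_matcher = cv2.FlannBasedMatcher(index_params, search_params)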
And then I simply run:
kp_r, des_r = pt_detector.detectAndCompute(pattern, None)  # reference/template pattern
kp_o, des_o = pt_detector.detectAndCompute(obj_res, None)  # masked camera frame
matches = pt_matcher.match(des_r, des_o)                   # template -> frame matches
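(A refinement I'm aware of but haven't wired in yet is Lowe's ratio test via knnMatch instead of the plain match above; a rough sketch, assuming a matcher created without crossCheck, e.g. the FLANN one:)
# Sketch only: knnMatch + Lowe ratio test, then sort what survives by distance.
raw_matches = pt_matcher.knnMatch(des_r, des_o, k=2)
good = []
for pair in raw_matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
good = sorted(good, key=lambda m: m.distance)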
The problem: the matcher finds correspondences scattered all over both the template and the camera frame, and although the pattern itself is generally detected, its orientation comes out wrong.
Here's an example of matching between the template image (left) and the pattern actually found in the camera frame (right). The camera image is masked and thresholded, of course. The 10 best matches from SIFT+FLANN are plain terrible. I've tried SURF and SIFT with the FLANN matcher, and ORB with BFMatcher, all on default settings, without success. I suspect the problem lies in the parameters of the descriptor and the matcher.
Can anyone tell me how I should set up the descriptor and matcher for robust matching of simple patterns? Or is there another approach that suits this task better?
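(To make the "orientation" part concrete: what I'm ultimately after is feeding a filtered set of matches into something like cv2.findHomography with RANSAC and reading the rotation off the resulting transform. A rough sketch of the idea, reusing kp_r / kp_o from above and a hypothetical filtered list good of matches — not something I have working yet:)
import numpy as np

# Sketch only: estimate the template -> frame transform from filtered matches
# (needs at least 4 of them), then read an approximate in-plane rotation off it.
src_pts = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp_o[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
# Approximation that assumes a near fronto-parallel view of the printed pattern
angle_deg = np.degrees(np.arctan2(H[1, 0], H[0, 0]))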