Comparing feature descriptors of a large set of images

I have a set of a few thousand images, and for each image I have extracted a set of SIFT feature descriptors (currently capped at 200 per image).

I am required to form a complete graph of the distances between each of the images. That is, I need to work out the distance from each image to every other image via some metric.

So far I have tried using FLANN to find the 20 nearest neighbouring descriptors between each pair of images, and then taking the mean distance over those matched descriptors as the image-to-image distance. Unfortunately this process is taking far too long.
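For reference, this is roughly what the current per-pair step looks like; a minimal sketch assuming OpenCV's FLANN-based matcher and float32 SIFT descriptor arrays of shape (n, 128) per image:

    import cv2
    import numpy as np

    # KD-tree FLANN index over SIFT descriptors (parameters are illustrative)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))

    def pairwise_distance(desc_a, desc_b, k_best=20):
        """Mean distance over the k best descriptor matches between two images."""
        matches = flann.match(desc_a, desc_b)                     # best match per descriptor
        matches = sorted(matches, key=lambda m: m.distance)[:k_best]
        return float(np.mean([m.distance for m in matches]))

Doing this for every one of the ~N^2/2 image pairs is what makes it slow.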

Is there any way for me to compare the descriptors of these images more efficiently?


You can consider aggregating your SIFT descriptors into a Bag of Visual Words (BoV) or a Vector of Locally Aggregated Descriptors (VLAD). Basically:

1 - Compute a codebook of K visual words (K cluster centres in SIFT descriptor space), e.g. with k-means.

2 - For each image, take its SIFT descriptors and find the nearest codebook word for each one. Then build a histogram over the K words for that image. This is the simplest scheme (hard assignment, sum pooling), but alternatives exist and often give better results on computer vision problems.

3 - Each image is then represented by a single vector of size K (the histogram). You can simply compute the distance between two images as the (e.g. Euclidean) distance between their histograms, as in the sketch below.
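A minimal sketch of this pipeline, assuming scikit-learn for the clustering and descriptors stored as float32 arrays of shape (n_i, 128) per image; the codebook size K is a free parameter (a few hundred to a few thousand is typical):

    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    K = 512  # codebook size (illustrative)

    def build_codebook(sampled_descriptors, k=K):
        """Step 1: cluster a sample of descriptors from many images into K visual words."""
        kmeans = MiniBatchKMeans(n_clusters=k, batch_size=10000)
        kmeans.fit(sampled_descriptors)
        return kmeans

    def bov_histogram(descriptors, kmeans):
        """Steps 2-3: hard-assign each descriptor to its nearest word and sum-pool."""
        words = kmeans.predict(descriptors)
        hist = np.bincount(words, minlength=kmeans.n_clusters).astype(np.float64)
        return hist / (hist.sum() + 1e-12)  # L1-normalise so descriptor count doesn't matter

    # Distance between two images is then a plain vector distance:
    # d = np.linalg.norm(bov_histogram(desc_a, km) - bov_histogram(desc_b, km))

The key saving is that the expensive descriptor-level matching happens once per image (against the codebook), not once per image pair; the pairwise graph is then built from cheap K-dimensional vector distances.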
