How to project pixels between two images according to known region anchors

I'm looking for a method or algorithm to transform an image so that known sites in both images overlap, and so that pixels in the regions between those sites are mapped consistently with them.

A simplified example is shown below; I want to transform image B such that all the red triangles overlap, and any pixels between the triangles are mapped sensibly to their corresponding locations in the 'coordinate system' of image A.

Of course, in this example only one point in B deviates from a scaled version of image A, whereas in real applications no points are guaranteed to overlap (even in their scaled versions).

[Figure: example transformation]

Most likely the algorithm will need to be (re-)implemented in C, so I'm more interested in a description of the algorithm than in a particular implementation.

Ideally the method would not produce artefacts in the form of sharp lines (e.g. between the anchors), as simple methods do when they break the image into triangles and transform each triangle individually.

I apologise if this question, or a similar one, has been asked before; if so, I've been unable to find it, probably for lack of the formal nomenclature...

UPDATE: It may be of value to know that these are very large scanned images (microscopy scans). As such they are actually multiple snapshots stitched together, based on similarities along the edges of neighbouring snapshots. Because of this, there may be small individual glitches within two otherwise similar scans (where the offset between sub-images has been wrongly calculated). Secondly, the images I'm trying to spatially homogenise may also have been bent by processing prior to scanning. For this second source of distortion it may be natural to think of the displacement as caused by 3D 'peaks' and 'valleys' in the image (pivotal points where a stretch/compression has occurred), although their positions are not known.

SUGGESTIONS:
Alternative A: Generate a Voronoi diagram to establish a region of dominance for each anchor, assert that each point along a region's boundary is positioned identically in A and B, then apply something akin to a homography-based transform to each region (a sketch of the homography estimation follows this list).

Alternative B: For each pixel, calculate a correction based on the closest (3-4) anchor points (see the weighted-displacement sketch below).
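
To make Alternative A concrete, below is a minimal C sketch of its main building block: estimating a homography from four anchor correspondences via the direct linear transform. The function names (`homography_from_4`, `homography_apply`) are illustrative, and it assumes each Voronoi region can be given four correspondences (e.g. its own anchor plus sampled boundary points); it is a sketch, not a production solver.

```c
#include <math.h>

/* Estimate a homography H from four correspondences (x[i], y[i]) ->
 * (u[i], v[i]) via the direct linear transform. H is parameterised as
 *   u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1)
 *   v = (h3*x + h4*y + h5) / (h6*x + h7*y + 1)
 * and the 8 unknowns are solved by Gauss-Jordan elimination.
 * Returns 0 on success, -1 for a degenerate point configuration. */
int homography_from_4(const double x[4], const double y[4],
                      const double u[4], const double v[4], double h[8])
{
    double A[8][9];  /* 8 equations, 8 unknowns + right-hand side */

    for (int i = 0; i < 4; i++) {
        double r0[9] = { x[i], y[i], 1, 0, 0, 0,
                         -x[i] * u[i], -y[i] * u[i], u[i] };
        double r1[9] = { 0, 0, 0, x[i], y[i], 1,
                         -x[i] * v[i], -y[i] * v[i], v[i] };
        for (int j = 0; j < 9; j++) {
            A[2 * i][j]     = r0[j];
            A[2 * i + 1][j] = r1[j];
        }
    }

    /* Gauss-Jordan elimination with partial pivoting. */
    for (int c = 0; c < 8; c++) {
        int p = c;
        for (int r = c + 1; r < 8; r++)
            if (fabs(A[r][c]) > fabs(A[p][c])) p = r;
        if (fabs(A[p][c]) < 1e-12) return -1;  /* degenerate anchors */
        for (int j = 0; j < 9; j++) {
            double t = A[c][j]; A[c][j] = A[p][j]; A[p][j] = t;
        }
        for (int r = 0; r < 8; r++) {
            if (r == c) continue;
            double f = A[r][c] / A[c][c];
            for (int j = c; j < 9; j++) A[r][j] -= f * A[c][j];
        }
    }
    for (int c = 0; c < 8; c++) h[c] = A[c][8] / A[c][c];
    return 0;
}

/* Apply H to a point: (x, y) -> (*u, *v). */
void homography_apply(const double h[8], double x, double y,
                      double *u, double *v)
{
    double w = h[6] * x + h[7] * y + 1.0;
    *u = (h[0] * x + h[1] * y + h[2]) / w;
    *v = (h[3] * x + h[4] * y + h[5]) / w;
}
```

One caveat for this route: homographies estimated independently per region will not, in general, agree exactly along shared boundaries unless the boundary correspondences themselves are shared, which is precisely the kind of seam artefact mentioned above.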

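Alternative B amounts to scattered-data interpolation of a displacement field. Below is a minimal C sketch under that reading: the `Anchor` type and `warp_point` are names I've made up, and the inverse-square falloff is just one plausible weighting. It blends the displacements of the K nearest anchors and is meant to be used as a backward mapping (for each output pixel in A's frame, compute where to sample B).

```c
#include <float.h>

/* One anchor correspondence: its position in image A and in image B. */
typedef struct {
    double ax, ay;   /* anchor position in image A */
    double bx, by;   /* same anchor's position in image B */
} Anchor;

#define K 4          /* number of nearest anchors to blend */

/* Backward mapping for Alternative B: given a pixel (x, y) in A's
 * coordinate frame, compute the position (*u, *v) at which to sample
 * image B, by inverse-distance weighting of the K nearest anchors'
 * displacements. */
void warp_point(const Anchor *anchors, int n, double x, double y,
                double *u, double *v)
{
    int    best[K];
    double bestd[K];
    for (int k = 0; k < K; k++) { best[k] = -1; bestd[k] = DBL_MAX; }

    /* Find the K nearest anchors (linear scan; use a k-d tree when
     * there are many anchors). */
    for (int i = 0; i < n; i++) {
        double dx = x - anchors[i].ax, dy = y - anchors[i].ay;
        double d2 = dx * dx + dy * dy;
        for (int k = 0; k < K; k++) {
            if (d2 < bestd[k]) {
                for (int j = K - 1; j > k; j--) {
                    bestd[j] = bestd[j - 1];
                    best[j]  = best[j - 1];
                }
                bestd[k] = d2;
                best[k]  = i;
                break;
            }
        }
    }

    /* Blend displacements with 1/d^2 weights; an exact anchor hit
     * maps straight to its counterpart in B. */
    double wsum = 0.0, du = 0.0, dv = 0.0;
    for (int k = 0; k < K && best[k] >= 0; k++) {
        const Anchor *a = &anchors[best[k]];
        if (bestd[k] < 1e-12) { *u = a->bx; *v = a->by; return; }
        double w = 1.0 / bestd[k];
        du   += w * (a->bx - a->ax);
        dv   += w * (a->by - a->ay);
        wsum += w;
    }
    if (wsum == 0.0) { *u = x; *v = y; return; }  /* no anchors given */
    *u = x + du / wsum;
    *v = y + dv / wsum;
}
```

Note that restricting the blend to the K nearest anchors makes the warp slightly discontinuous wherever the nearest set changes; weighting over all anchors (classic Shepard interpolation) or fitting a thin-plate spline to the displacements avoids those seams at extra cost.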

Your images are bent, so the planarity assumption is broken (no homography exists between the two shapes); the transformation between them is non-rigid. This problem is called Shape-from-Template.

I suggest a lecture and an article by Adrien Bartoli.
