Any way to algorithmically remove discolorations from aerial imagery?

I don't know much about image processing so please bear with me if this is not possible to implement.

I have several sets of aerial images of the same area originating from different sources. The pictures have been taken during different seasons, under different lighting conditions, etc. Unfortunately, some images look patchy and suffer from discolorations, are partially obstructed by clouds, or are pixelated, as in picture1 and picture2 for example.

I would like to take several images of the same area as input and, by some kind of averaging, produce one picture of improved quality. I know some C/C++, so I could use an image processing library.

Can anybody propose an image processing algorithm to achieve this, or point me to research done in this field?


I would try a "color twist" transform, i.e. a 3x3 matrix applied to the RGB components. To implement it, you need to pick color samples in areas that are split by a border, on both sides. You should find three significantly different reference colors (hence six samples). This gives you the nine linear equations needed to determine the matrix coefficients.

Then you will correct the altered areas by means of this color twist. As the geometry of these areas is intertwined with the field patches, I don't see a better way than contouring the regions by hand.

In the case of the second picture, the limits of the regions are blurred, so you will need to blur the region mask as well and perform blending.

In any case, don't expect a perfect repair of those problems, as the alteration might be nonlinear and completely erasing the edges will be difficult. I also think the colors are so washed out in places that restoring them might create ugly artifacts.

For the sake of illustration, here is a quick attempt with Photoshop using manual HLS adjustment (less powerful than a color twist).



The first thing I thought of was a kernel matrix of sorts.

Do a first pass over the photo and use an edge detection algorithm to determine the borders between the photos. This should be fairly trivial; however, you will need to eliminate any overlap/fading first (it looks like there's a bit in picture 2). You'll see why in a minute.

Do a second pass right along each border you've detected, and assume that the pixels on either side of the border should be the same color. Determine the difference between the red, green and blue values, average it along the entire length of the line, then divide it by two. The image with the lower red, green or blue value gets this half-difference added; the one with the higher value gets it subtracted. On either side of the line, every pixel should now match. You can remove one of these rows if you'd like, but if the lines don't run the full length of the image this could cause size issues, and the line will likely not be very noticeable anyway. This could be made far more sophisticated by fitting a proper filter along the line, but I'll leave that to you.

The catch is areas where there was real change between the shots, such as new development or fall colors; these might throw off your algorithm, but there's only one way to find out!
