Workflow to compare the overlap in two images?

I am going to have to apologize for the vagueness of this question in advance. I'm looking to help a friend who is trying to compare the overlap of fluorescent nerves in two images.

The setup is as follows: there is an image of nerves and background labeled in red, and the same image of nerves and background labeled in green. Some nerves are labeled only with red, some only with green, and that distinction is important. What we want to do is overlay the red and green images and then figure out which nerves are labeled with both (where red and green overlap).

There is a continuum of intensity for both labels (so there are bright, obvious lines of red and dim areas of red speckle, and the same for green).

I was thinking you could set a threshold intensity for red and re-plot the image with only those nerves that meet that threshold, and do the same for green. Then, with the two outputs, you could measure the proportion of overlapping nerves as the percentage of nerve tissue in one output that is also found in the other.

It would be ideal to generate an output image where overlapping areas are coloured (e.g. blue). Then we could take that output, where all the overlaps are blue, and overlay it on the original red/green images to visually highlight the nerves that overlap.

Alternatively (or in combination), you could convert the images to character strings and then look for matching characters at the same positions in those strings.

Either of these is far beyond my skill level ^_^;

If anyone thinks this is possible, has suggestions for a better workflow, or can point me towards some good reading or a package to figure this out, it would be much appreciated.

Thanks for any help you can offer!

edit: This Matlab suggestion answers the start of my question, I think (how to measure which pixels exceed some threshold intensity). They convert to greyscale and then treat intensity as how whitish each pixel is. I'd then need to generate an output of just those pixels that meet the threshold.

http://www.mathworks.com/matlabcentral/answers/86484-how-to-find-the-intensity-of-each-pixel-of-an-image


I have been playing around learning some OpenCV, which is free and available for OSX, Linux and Windows. I thought it might add some interactivity to your question, so I have implemented the exact same algorithm as I did in ImageMagick, but in my beginner's OpenCV. If anyone more experienced has any constructive guidelines for my code, I am all ears.

Anyway, it looks like this when running, as you slide the threshold sliders around:

(screenshot of the merged image and the two threshold sliders)

Here is the code:

////////////////////////////////////////////////////////////////////////////////
// NerveView.cpp
// Mark Setchell
//
// OpenCV program that takes two images as parameters, the first the red 
// fluorescent nerve image and the second which is the green image.
//
// The two images are resized, normalised and then displayed merged into a 
// single RGB image where the thresholds on red and green are controlled by
// sliders. Common areas therefore appear in yellow. Move either slider left
// to lower the threshold and therefore include more of that colour in the 
// combined output image, or move slider right to decrease amount of that
// colour.
//
// Run with:
//
// ./NerveView red.jpg green.jpg
//
// Compile with:
//
// g++ `pkg-config --cflags --libs opencv` NerveView.cpp -o NerveView
//
////////////////////////////////////////////////////////////////////////////////

#include <iostream>
#include <vector>
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

int main(int argc, char** argv) {

    // Temp workspace
    Mat tmp1,tmp2;

    // Image size
    Size size(640,480);

    // Create a window for controls
    namedWindow("Controls", WINDOW_NORMAL);
    resizeWindow("Controls",640,100);

    // Create slider to change Red threshold
    int RedThreshold = 50;
    createTrackbar("Red Threshold Percentage", "Controls", &RedThreshold, 100);

    // Create slider to change Green threshold
    int GreenThreshold = 50;
    createTrackbar("Green Threshold Percentage", "Controls", &GreenThreshold, 100);

    // Create variables to store input images and load them
    Mat Red,Green;
    Mat planes[3];

    // Load red image, split, discard G & B, resize, normalize
    tmp1   = imread(argv[1], IMREAD_COLOR);
    split(tmp1,planes);
    resize(planes[2],tmp1,size);
    normalize(tmp1,Red,0,255,NORM_MINMAX,CV_8UC1);

    // Load green image, split, discard R & B, resize, normalize
    tmp1   = imread(argv[2], IMREAD_COLOR);
    split(tmp1,planes);
    resize(planes[1],tmp1,size);
    normalize(tmp1,Green,0,255,NORM_MINMAX,CV_8UC1);

    // Make empty Blue channel, same size
    Mat Blue=Mat::zeros(size,CV_8UC1);

    //Create window to display images
    namedWindow("Image", WINDOW_AUTOSIZE);

    // Create variable to store the processed image
    Mat img=Mat::zeros(size,CV_8UC3);

    while (true){

        // Get thresholds, apply to their channels and combine to form result
        threshold(Red,tmp1,(RedThreshold*255)/100,255,THRESH_BINARY);
        threshold(Green,tmp2,(GreenThreshold*255)/100,255,THRESH_BINARY);

        // Combine B, G and R channels into "img"
        vector<Mat> channels;
        channels.push_back(Blue);
        channels.push_back(tmp2);
        channels.push_back(tmp1);
        merge(channels,img);

        // Display result
        imshow("Image",img);

        // See if user pressed a key
        int key=waitKey(50);
        if(key>=0)break;
    }
    return 0;
}

Keywords : nerve, nerves, fluorescence


As @JanEglinger suggests, there is no need to use (or pay for) Matlab, especially if your programming skills are limited.

I would suggest ImageMagick, which is free, comes installed on most Linux distros, and is also available for OSX and Windows. The things you want to do can then be done from the command line, without compilers, development environments or steep learning curves.

Let's take your red image first. Just using the command line in the Terminal, we can look at the statistics like this:

identify -verbose red.jpg | more

Image: red.jpg
  Format: JPEG (Joint Photographic Experts Group JFIF format)
  Mime type: image/jpeg
  Class: DirectClass
  Geometry: 100x100+0+0
  Resolution: 72x72
  Print size: 1.38889x1.38889
  Units: PixelsPerInch
  Type: TrueColor
  Endianess: Undefined
  Colorspace: sRGB
  Depth: 8-bit
  Channel depth:
    red: 8-bit
    green: 8-bit
    blue: 8-bit
  Channel statistics:
    Pixels: 10000
    Red:
      min: 8 (0.0313725)
      max: 196 (0.768627)                                 <--- Good stuff in Red channel
      mean: 46.1708 (0.181062)
      standard deviation: 19.4835 (0.0764057)
    Green:
      min: 0 (0)
      max: 13 (0.0509804)                                 <--- Nothing much in Green channel
      mean: 3.912 (0.0153412)
      standard deviation: 1.69117 (0.00663204)
    Blue:
      min: 0 (0)
      max: 23 (0.0901961)                                 <--- Nothing much in Blue channel
      mean: 4.3983 (0.0172482)
      standard deviation: 2.88733 (0.0113229)

and see that there is little information of any use outside the Red channel. So we can separate it into its three channels (Red, Green and Blue) and discard the Green and Blue channels like this:

convert red.jpg -separate -delete 1,2 justRed.jpg

which gives us this

The contrast is not very good though, because the channel only ranges from 8-196 instead of 0-255, so we can normalize it to the full range and then threshold at 50% like this:

convert red.jpg -separate -delete 1,2 -normalize -threshold 50% r.jpg

which gives this:

If we now want to do the same for your green image, we would do this but delete the Red and Blue channels this time:

convert green.jpg -separate -delete 0,2 -normalize -threshold 50% g.jpg

giving this:

Finally, we take the separated, normalised, thresholded Red and Green channels, synthesize an empty Blue channel of the same size, and combine them all as the Red, Green and Blue channels of a new image:

convert r.jpg g.jpg -size 100x100 xc:black -combine result.jpg

That gives us this, where the Yellow areas are those that were bright in both red and green.

The whole procedure is actually just 3 lines of typing in the Terminal:

convert red.jpg   -separate -delete 1,2 -normalize -threshold 50% r.jpg
convert green.jpg -separate -delete 0,2 -normalize -threshold 50% g.jpg
convert r.jpg g.jpg -size 100x100 xc:black -combine result.jpg

If you like the approach, you can vary the percentages, introduce noise reduction, or in fact do the whole lot in a single, more complicated command with no intermediate files. Anyway, I commend ImageMagick to you, and even if you must use Matlab, maybe this ImageMagick example will give you some pointers towards a workflow.

Extra example: if you set the threshold lower for the Red channel, you will get more red pixels in the output file, so if you change the red threshold to, say, 30%, you will get this:
