Uniform Circular LBP face recognition implementation

I am trying to implement a basic face recognition system using Uniform Circular LBP (8 points in a 1-unit-radius neighbourhood). I take an image, resize it to 200 x 200 pixels, and split it into an 8 x 8 grid of smaller blocks. I then compute a histogram for each block, which gives me a list of histograms per image. To compare two images, I compute the chi-squared distance between corresponding histograms and combine these into a score.
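To make the comparison step concrete, here is a minimal sketch of the chi-squared scoring between two lists of block histograms (the function names are illustrative, not my exact code):

def chi_squared_distance(hist1, hist2, eps=1e-10):
    # chi-squared distance between two histograms of equal length
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(hist1, hist2))

def compare_images(hists1, hists2):
    # sum the per-block distances into a single dissimilarity score
    return sum(chi_squared_distance(h1, h2) for h1, h2 in zip(hists1, hists2))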

Here's my Uniform LBP implementation:

import numpy as np
import math
uniform = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 58, 6: 5, 7: 6, 8: 7, 9: 58, 10: 58, 11: 58, 12: 8, 13: 58, 14: 9, 15: 10, 16: 11, 17: 58, 18: 58, 19: 58, 20: 58, 21: 58, 22: 58, 23: 58, 24: 12, 25: 58, 26: 58, 27: 58, 28: 13, 29: 58, 30: 14, 31: 15, 32: 16, 33: 58, 34: 58, 35: 58, 36: 58, 37: 58, 38: 58, 39: 58, 40: 58, 41: 58, 42: 58, 43: 58, 44: 58, 45: 58, 46: 58, 47: 58, 48: 17, 49: 58, 50: 58, 51: 58, 52: 58, 53: 58, 54: 58, 55: 58, 56: 18, 57: 58, 58: 58, 59: 58, 60: 19, 61: 58, 62: 20, 63: 21, 64: 22, 65: 58, 66: 58, 67: 58, 68: 58, 69: 58, 70: 58, 71: 58, 72: 58, 73: 58, 74: 58, 75: 58, 76: 58, 77: 58, 78: 58, 79: 58, 80: 58, 81: 58, 82: 58, 83: 58, 84: 58, 85: 58, 86: 58, 87: 58, 88: 58, 89: 58, 90: 58, 91: 58, 92: 58, 93: 58, 94: 58, 95: 58, 96: 23, 97: 58, 98: 58, 99: 58, 100: 58, 101: 58, 102: 58, 103: 58, 104: 58, 105: 58, 106: 58, 107: 58, 108: 58, 109: 58, 110: 58, 111: 58, 112: 24, 113: 58, 114: 58, 115: 58, 116: 58, 117: 58, 118: 58, 119: 58, 120: 25, 121: 58, 122: 58, 123: 58, 124: 26, 125: 58, 126: 27, 127: 28, 128: 29, 129: 30, 130: 58, 131: 31, 132: 58, 133: 58, 134: 58, 135: 32, 136: 58, 137: 58, 138: 58, 139: 58, 140: 58, 141: 58, 142: 58, 143: 33, 144: 58, 145: 58, 146: 58, 147: 58, 148: 58, 149: 58, 150: 58, 151: 58, 152: 58, 153: 58, 154: 58, 155: 58, 156: 58, 157: 58, 158: 58, 159: 34, 160: 58, 161: 58, 162: 58, 163: 58, 164: 58, 165: 58, 166: 58, 167: 58, 168: 58, 169: 58, 170: 58, 171: 58, 172: 58, 173: 58, 174: 58, 175: 58, 176: 58, 177: 58, 178: 58, 179: 58, 180: 58, 181: 58, 182: 58, 183: 58, 184: 58, 185: 58, 186: 58, 187: 58, 188: 58, 189: 58, 190: 58, 191: 35, 192: 36, 193: 37, 194: 58, 195: 38, 196: 58, 197: 58, 198: 58, 199: 39, 200: 58, 201: 58, 202: 58, 203: 58, 204: 58, 205: 58, 206: 58, 207: 40, 208: 58, 209: 58, 210: 58, 211: 58, 212: 58, 213: 58, 214: 58, 215: 58, 216: 58, 217: 58, 218: 58, 219: 58, 220: 58, 221: 58, 222: 58, 223: 41, 224: 42, 225: 43, 226: 58, 227: 44, 228: 58, 229: 58, 230: 58, 231: 45, 232: 58, 233: 58, 234: 58, 235: 58, 236: 58, 237: 58, 238: 58, 239: 46, 240: 47, 241: 48, 242: 58, 243: 49, 244: 58, 245: 58, 246: 58, 247: 50, 248: 51, 249: 52, 250: 58, 251: 53, 252: 54, 253: 55, 254: 56, 255: 57}
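# Note (sketch, not part of my original code): the table above can also be generated
# programmatically. A pattern is "uniform" if its circular P-bit representation has at
# most two 0/1 transitions; uniform patterns get consecutive labels and every
# non-uniform pattern shares the last bin (58 for P = 8).
def build_uniform_table(P=8):
    table, next_label = {}, 0
    for pattern in range(2 ** P):
        bits = [(pattern >> b) & 1 for b in range(P)]
        transitions = sum(bits[b] != bits[(b + 1) % P] for b in range(P))
        if transitions <= 2:
            table[pattern] = next_label
            next_label += 1
        else:
            table[pattern] = P * (P - 1) + 2  # 58 when P = 8
    return table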


def bilinear_interpolation(i, j, y, x, img):
    # (i, j) is the centre pixel, (y, x) is the fractional offset of the sampling point
    fy, fx = int(y), int(x)
    cy, cx = math.ceil(y), math.ceil(x)

    # calculate the fractional parts
    ty = y - fy
    tx = x - fx

    # weights of the four surrounding pixels
    w1 = (1 - tx) * (1 - ty)
    w2 =      tx  * (1 - ty)
    w3 = (1 - tx) *      ty
    w4 =      tx  *      ty

    return (w1 * img[i + fy, j + fx] + w2 * img[i + fy, j + cx] +
            w3 * img[i + cy, j + fx] + w4 * img[i + cy, j + cx])

def thresholded(center, pixels):
    # 1 where a neighbour exceeds the centre value, else 0
    out = []
    for a in pixels:
        if a > center:
            out.append(1)
        else:
            out.append(0)
    return out


def uniform_circular(img, P, R):
    ysize, xsize = img.shape
    transformed_img = np.zeros((ysize - 2 * R, xsize - 2 * R), dtype=np.uint8)
    for y in range(R, ysize - R):
        for x in range(R, xsize - R):
            center = img[y, x]
            # sample the P neighbours on the circle of radius R
            pixels = []
            for point in range(0, P):
                r = R * math.cos(2 * math.pi * point / P)
                c = R * math.sin(2 * math.pi * point / P)
                pixels.append(bilinear_interpolation(y, x, r, c, img))

            # threshold against the centre and pack the bits into a pattern
            values = thresholded(center, pixels)
            res = 0
            for a in range(0, P):
                res += values[a] << a
            transformed_img.itemset((y - R, x - R), uniform[res])

    transformed_img = transformed_img[R:-R, R:-R]
    return transformed_img
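For completeness, this is roughly how I turn the transformed image into the list of per-block 59-bin histograms described at the top (again only a sketch; the grid size and names are illustrative):

def block_histograms(img, P=8, R=1, grid=8):
    # apply the LBP operator, then histogram each block of a grid x grid layout
    labels = uniform_circular(img, P, R)
    h, w = labels.shape
    by, bx = h // grid, w // grid
    hists = []
    for gy in range(grid):
        for gx in range(grid):
            block = labels[gy * by:(gy + 1) * by, gx * bx:(gx + 1) * bx]
            hist, _ = np.histogram(block, bins=59, range=(0, 59))
            hists.append(hist)
    return hists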

I did an experiment on the AT&T database, taking 2 gallery images and 8 probe images per subject. The ROC for the experiment came out as follows:

In the above ROC, the x-axis denotes the false accept rate and the y-axis the genuine accept rate. The accuracy seems poor by Uniform LBP standards, so I am sure there is something wrong with my implementation. It would be great if someone could help me with it. Thanks for reading.
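For completeness, the FAR and GAR points in the ROC above are computed roughly like this (a sketch with illustrative names; genuine scores come from same-subject comparisons, impostor scores from different-subject comparisons):

def far_gar(genuine_scores, impostor_scores, threshold):
    # scores are chi-squared distances, so a comparison is accepted
    # when the score falls below the threshold (lower = more similar)
    far = sum(s < threshold for s in impostor_scores) / float(len(impostor_scores))
    gar = sum(s < threshold for s in genuine_scores) / float(len(genuine_scores))
    return far, gar

Sweeping the threshold over the observed score range gives the curve.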

EDIT:

I think I made a mistake in the above code. I am going clockwise, while the paper on LBP suggests I should go anticlockwise when assigning weights. The line c = R * math.sin(2 * math.pi * point / P) should be c = -R * math.sin(2 * math.pi * point / P). The results after this edit are even worse, which suggests something is badly wrong with my code. I guess the way I am choosing the coordinates for interpolation is messed up.
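To check the orientation question, one quick diagnostic (just a sketch, not part of the recognition code) is to print the sampling offsets for P = 8, R = 1 and see in which direction the neighbourhood is traversed:

import math
P, R = 8, 1
for point in range(P):
    dy = -R * math.sin(2 * math.pi * point / P)  # with rows growing downwards, -sin steps anticlockwise on screen
    dx = R * math.cos(2 * math.pi * point / P)
    print(point, round(dy, 3), round(dx, 3))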

EDIT: Next I tried to replicate @bytefish's code here and used the uniform hash map to make the implementation Uniform Circular LBP.

def uniform_circular(img, P, R):
    ysize, xsize = img.shape
    transformed_img = np.zeros((ysize - 2 * R, xsize - 2 * R), dtype=np.uint8)
    for point in range(0, P):
        # sampling offset and interpolation weights for this neighbour
        x = R * math.cos(2 * math.pi * point / P)
        y = -R * math.sin(2 * math.pi * point / P)
        fy, fx = int(y), int(x)
        cy, cx = math.ceil(y), math.ceil(x)

        # calculate the fractional parts
        ty = y - fy
        tx = x - fx

        w1 = (1 - tx) * (1 - ty)
        w2 =      tx  * (1 - ty)
        w3 = (1 - tx) *      ty
        w4 =      tx  *      ty
        for i in range(R, ysize - R):
            for j in range(R, xsize - R):
                t = (w1 * img[i + fy, j + fx] + w2 * img[i + fy, j + cx] +
                     w3 * img[i + cy, j + fx] + w4 * img[i + cy, j + cx])
                center = img[i, j]
                transformed_img[i - R, j - R] += (t > center) << point

    # map every pattern to its uniform label
    for i in range(R, ysize - R):
        for j in range(R, xsize - R):
            transformed_img[i - R, j - R] = uniform[transformed_img[i - R, j - R]]
    return transformed_img

Here's the ROC for the same:

I tried to implement the same code in C++. Here is the code:

#include <stdio.h>
#include <stdlib.h>
#include <limits>
#include <opencv2/opencv.hpp>

using namespace cv;

int* uniform_circular_LBP_histogram(Mat& src) {
 int i, j;
 int radius = 1;
 int neighbours = 8;
 Size size = src.size();
 int *hist_array = (int *)calloc(59,sizeof(int));
 int uniform[] = {0,1,2,3,4,58,5,6,7,58,58,58,8,58,9,10,11,58,58,58,58,58,58,58,12,58,58,58,13,58,14,15,16,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,17,58,58,58,58,58,58,58,18,58,58,58,19,58,20,21,22,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,23,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,24,58,58,58,58,58,58,58,25,58,58,58,26,58,27,28,29,30,58,31,58,58,58,32,58,58,58,58,58,58,58,33,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,34,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,35,36,37,58,38,58,58,58,39,58,58,58,58,58,58,58,40,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,41,42,43,58,44,58,58,58,45,58,58,58,58,58,58,58,46,47,48,58,49,58,58,58,50,51,52,58,53,54,55,56,57};
 Mat dst = Mat::zeros(size.height - 2 * radius, size.width - 2 * radius, CV_8UC1);

 for (int n = 0; n < neighbours; n++) {
   float x = static_cast<float>(radius) *  cos(2.0 * M_PI * n / static_cast<float>(neighbours));
   float y = static_cast<float>(radius) * -sin(2.0 * M_PI * n / static_cast<float>(neighbours));

   int fx = static_cast<int>(floor(x));
   int fy = static_cast<int>(floor(y));
   int cx = static_cast<int>(ceil(x));
   int cy = static_cast<int>(ceil(x));

   float ty = y - fy;
   float tx = y - fx;

   float w1 = (1 - tx) * (1 - ty);
   float w2 =      tx  * (1 - ty);
   float w3 = (1 - tx) *      ty;
   float w4 = 1 - w1 - w2 - w3;

   for (i = 0; i < 59; i++) {
     hist_array[i] = 0;
   }

   for (i = radius; i < size.height - radius; i++) {
     for (j = radius; j < size.width - radius; j++) {
       float t = w1 * src.at<uchar>(i + fy, j + fx) + 
                 w2 * src.at<uchar>(i + fy, j + cx) + 
                 w3 * src.at<uchar>(i + cy, j + fx) + 
                 w4 * src.at<uchar>(i + cy, j + cx);
       dst.at<uchar>(i - radius, j - radius) += ((t > src.at<uchar>(i,j)) && 
                                                 (abs(t - src.at<uchar>(i,j)) > std::numeric_limits<float>::epsilon())) << n;
     }
   }
 }

 for (i = radius; i < size.height - radius; i++) {
   for (j = radius; j < size.width - radius; j++) {
       int val = uniform[dst.at<uchar>(i - radius, j - radius)];
       dst.at<uchar>(i - radius, j - radius) = val;
       hist_array[val] += 1;
   }
 }
 return hist_array;
}

int main( int argc, char** argv )
{
 Mat src;

 int i,j;
 src = imread( argv[1], 0 );
 if( argc != 2 || !src.data )
   {
     printf( "No image data n" );
     return -1;
   }

 const int width = 200;
 const int height = 200;
 Size size = src.size();
 Size new_size = Size();
 new_size.height = 200;
 new_size.width  = 200;
 Mat resized_src;
 resize(src, resized_src, new_size, 0, 0, INTER_CUBIC);

 int count = 1;
 for (i = 0; i <= width - 8; i += 25) {
   for (j = 0; j <= height - 8; j += 25) {
     Mat new_mat = resized_src.rowRange(i, i + 25).colRange(j, j + 25);
     int *hist = uniform_circular_LBP_histogram(new_mat);
     int z;
     for (z = 0; z < 58; z++) {
       std::cout << hist[z] << ",";
     }
     std::cout << hist[z] << "n";
     count += 1;
   }
 }
 return 0;
}

ROC for the same:

I also did a rank-based experiment and got this CMC curve:

CMC curve

Some details about the CMC curve: the x-axis represents ranks (1-10) and the y-axis represents accuracy (0-1). So I got an 80%+ Rank-1 accuracy.
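For reference, the rank-k accuracies behind that curve are computed roughly like this (a sketch with illustrative names; scores[p][g] is the chi-squared score between probe p and gallery image g, and the id lists hold the subject labels):

def cmc_curve(scores, probe_ids, gallery_ids, max_rank=10):
    hits = [0] * max_rank
    for p, probe_scores in enumerate(scores):
        # rank gallery entries by increasing distance to this probe
        ranked = sorted(range(len(gallery_ids)), key=lambda g: probe_scores[g])
        for k in range(max_rank):
            if probe_ids[p] in [gallery_ids[g] for g in ranked[:k + 1]]:
                hits[k] += 1
    return [h / float(len(scores)) for h in hits]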


I don't know about Python, but most probably your code is broken.

My advice is to follow these 2 links and try to port one of the C++ codes to Python. The first link also contains some information about LBP.

http://www.bytefish.de/blog/local_binary_patterns/

https://github.com/berak/uniform-lbp

And one other thing: you said you are resizing the images to 200x200. Why are you doing that? As far as I know, the AT&T images are smaller than that; you are just making the images bigger, and I don't think that will help you. Moreover, it may have a negative effect on performance.


I will say a few things; try them.

(1) The ROC curve is drawn between the false acceptance rate and the false rejection rate.

False acceptance is fine, but in the description above it should be the genuine rejection rate, not the genuine acceptance rate.

Check which quantities the curve is actually being drawn against. I can't help you much with the code; I don't know Python.

(2) If you are not getting better results, check whether LBP is actually effective for the problem you are trying to solve. LBP is mainly used for texture analysis.
