Drawing rectangles on imageView.image not scaling properly

  • I start out with an imageView.image (a photo).
  • I submit (POST) the imageView.image to a remote service (Microsoft face detection) for processing.
  • The remote service returns JSON containing a CGRect for each face detected in the image.
  • I feed the JSON into my UIView to draw the rectangles. I initiate my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height} <-- my thinking being that the frame should be equivalent to the size of the imageView.image.
  • I add my UIView as a subview of self.imageView OR self.view (I tried both).
  • End result: the rectangles are drawn, but they do not land correctly on the imageView.image. That is, the CGRects returned by the remote service are supposed to be relative to the image's coordinate space, yet they appear offset once I add my custom view.

    I believe I have a scaling issue of some sort: if I divide each value in the CGRects by 2 (as a test) I get an approximation, but it is still off. The Microsoft documentation states that the detected faces are returned with rectangles indicating the location of the faces in the image, in pixels. Yet aren't those values being treated as points when I draw my path?

    Also, shouldn't I be initiating my view with a frame equivalent to the imageView.image's frame, so that the view uses the same coordinate space as the submitted image?

    Here is a screenshot of what it looks like if I try to scale down each CGRect by dividing it by 2.

    [screenshot]

    I am new to iOS and broke away from the books to work on this as a self exercise. I can provide more code as needed. Thanks in advance for your insight!

    EDIT 1

    I add a subview for each rectangle as I iterate over an array of face attributes (which include the rectangle for each face) in the following method, which gets called during (void)viewDidAppear:(BOOL)animated:

    - (void)buildFaceRects {

        // build an array of CGRect dicts off of the JSON returned from the analyzed image
        NSMutableArray *array = [self analizeImage:self.imageView.image];

        // enumerate over the array using a block - each obj in the array represents one face
        [array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {

            // build dictionary of rects and attributes for the face
            NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];

            // initiate face model object with dictionary
            ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];

            NSLog(@"%@", face.faceId);
            NSLog(@"%d", face.age);
            NSLog(@"%@", face.gender);
            NSLog(@"%f", face.faceRect.origin.x);
            NSLog(@"%f", face.faceRect.origin.y);
            NSLog(@"%f", face.faceRect.size.height);
            NSLog(@"%f", face.faceRect.size.width);

            // define frame for subview containing face rectangle
            CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);

            // initiate rectangle subview with face info
            ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];

            // add view as subview of imageview (?)
            [self.imageView addSubview:faceRect];

        }];

    }
    

    EDIT 2:

        /* Image info */
        UIImageView *iv = self.imageView;
        UIImage *img = iv.image;
        CGImageRef CGimg = img.CGImage;

        // Bitmap dimensions [pixels]
        NSUInteger imgWidth = CGImageGetWidth(CGimg);
        NSUInteger imgHeight = CGImageGetHeight(CGimg);
        NSLog(@"Image dimensions: %lux%lu", imgWidth, imgHeight);

        // Image size in pixels (size * scale)
        CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
        NSLog(@"image size in Pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);

        // Image size in points
        CGSize imgSizeInPoints = img.size;
        NSLog(@"image size in Points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);



            // Calculate image frame (within the image view) with a contentMode of UIViewContentModeScaleAspectFit
            CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds)/imgSizeInPoints.width, CGRectGetHeight(iv.bounds)/imgSizeInPoints.height);
            CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
            CGRect imgFrame = CGRectMake(roundf(0.5f*(CGRectGetWidth(iv.bounds)-scaledImgSize.width)), roundf(0.5f*(CGRectGetHeight(iv.bounds)-scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));


            // initiate rectangle subview with face info
            ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];

            // add view as subview of image view
            [iv addSubview:faceRect];

        }];
    

    We've got several problems:

  • Microsoft returns pixels and iOS uses points. The difference between them is just a matter of screen density: for instance, on an iPhone 5, 1 pt = 2 px, and on a 3GS, 1 px = 1 pt. Look at the iOS documentation for more information. (A short sketch of the conversion follows this list.)

  • The frame of your UIImageView is not the frame of the image. When Microsoft returns the frame of a face, it returns it in the coordinate space of the image, not of the UIImageView. So we've got a coordinate-system problem.

  • Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad is called as it is when you see it on the screen.
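
    To make the first point concrete, here is a rough sketch of the pixel-to-point conversion (illustrative only and untested; it reuses the img variable and the face model object from the question's snippets):

        // Face rectangle as returned by Microsoft, in pixels of the submitted image
        CGRect pixelRect = face.faceRect;

        // A UIImage's point size is its pixel size divided by its scale factor, so the
        // same factor converts the face rectangle into the image's point space.
        // (For images loaded from data or the photo library, scale is usually 1.0,
        // in which case pixels and image points are identical.)
        CGFloat pixelsPerPoint = img.scale;
        CGRect pointRect = CGRectMake(pixelRect.origin.x / pixelsPerPoint,
                                      pixelRect.origin.y / pixelsPerPoint,
                                      pixelRect.size.width / pixelsPerPoint,
                                      pixelRect.size.height / pixelsPerPoint);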

    Solution:

    I'm mostly a read-only Objective-C developer, so treat the sketches in this answer as rough illustrations rather than polished, tested code. I could write it in Swift, but it's not necessary.

  • Convert pixels into points. That's easy: use a ratio.

  • Define the frame of a face using what you did. Then you have to convert the coordinates you determined from the image's coordinate system into the UIImageView's coordinate system. That's less easy: it depends on the contentMode of your UIImageView, but I quickly found information about it on the Internet (see the sketch after this list).

  • If you use Auto Layout, add the frame of the face only after Auto Layout has finished calculating the layout, i.e. when viewDidLayoutSubviews is called. Or, better still, use constraints to position your frame inside the UIImageView.

  • I hope this is clear enough.
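
    Putting the second and third points together, here is a rough sketch (again illustrative only; the helper name is mine, and it assumes the image view uses UIViewContentModeScaleAspectFit as in the question's EDIT 2) of mapping a rect in image points into the UIImageView's coordinate space:

        // Hypothetical helper: maps a rect expressed in the image's point coordinates
        // into the UIImageView's coordinate space, assuming UIViewContentModeScaleAspectFit.
        - (CGRect)viewRectForImageRect:(CGRect)imageRect inImageView:(UIImageView *)iv {
            CGSize imgSize = iv.image.size;   // image size in points

            // Scale applied by aspect-fit, plus the letterbox offsets that center the image
            CGFloat scale = fminf(CGRectGetWidth(iv.bounds) / imgSize.width,
                                  CGRectGetHeight(iv.bounds) / imgSize.height);
            CGFloat offsetX = 0.5f * (CGRectGetWidth(iv.bounds) - imgSize.width * scale);
            CGFloat offsetY = 0.5f * (CGRectGetHeight(iv.bounds) - imgSize.height * scale);

            return CGRectMake(offsetX + imageRect.origin.x * scale,
                              offsetY + imageRect.origin.y * scale,
                              imageRect.size.width * scale,
                              imageRect.size.height * scale);
        }

    Because the image view's bounds set by constraints are only final after Auto Layout has run, call something like this from viewDidLayoutSubviews rather than viewDidLoad, which is the timing issue from the third point.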

    Some links:

  • iOS Drawing Concepts

  • Displayed Image Frame In UIImageView
