iOS – UIImage face detection

I'm trying to write a routine that takes a UIImage and returns a new UIImage that contains just the face. This seems like it should be very straightforward, but my brain is having trouble getting around the CoreImage vs. UIImage coordinate spaces.

Here are the basics:

- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}


-(UIImage *)getFaceImage:(UIImage *)picture {
  CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                             context:nil
                                             options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

  CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
  NSArray *features = [detector featuresInImage:ciImage];

  // For simplicity, I'm grabbing the first one in this code sample,
  // and we can all pretend that the photo has one face for sure. :-)
  CIFaceFeature *faceFeature = [features objectAtIndex:0];

  return [self imageFromImage:picture inRect:faceFeature.bounds];
}

The image that comes back is cropped from the flipped image. I've tried adjusting faceFeature.bounds with something like this:

CGAffineTransform t = CGAffineTransformMakeScale(1.0f, -1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds, t);

…but that gives me results outside the image.

I'm sure there's something simple to fix this, but short of calculating the bottom-up coordinate and then creating a new rect using that as the Y, is there a "proper" way to do this?
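
For reference, the bottom-up calculation I mean looks roughly like this (picture is the UIImage parameter of getFaceImage: above; this is just my workaround, not necessarily the "right" way, and it assumes the image's orientation is Up):

// Flip faceFeature.bounds from CoreImage's bottom-left-origin space
// into the top-left-origin space that CGImageCreateWithImageInRect expects.
CGRect faceRect = faceFeature.bounds;
CGFloat imageHeight = CGImageGetHeight(picture.CGImage);
CGRect flippedRect = CGRectMake(faceRect.origin.x,
                                imageHeight - faceRect.origin.y - faceRect.size.height,
                                faceRect.size.width,
                                faceRect.size.height);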

Thanks!

Solution

It's much easier and less messy to just use a CIContext to crop the face out of the image. Something like this:
CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputImage.CGImage] fromRect:faceFeature.bounds];
UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];

Here, inputImage is your UIImage object, and faceFeature is the CIFaceFeature object you get back from the [CIDetector featuresInImage:] method.
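
Putting it all together, a minimal sketch of the whole method might look like this (the CIContext is created inline here rather than stored in the _ciContext ivar from the snippet above, and the "first face wins" simplification from the question is kept):

- (UIImage *)getFaceImage:(UIImage *)inputImage {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *ciImage = [CIImage imageWithCGImage:inputImage.CGImage];

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:ciImage];
    if (features.count == 0) {
        return nil; // no face detected
    }

    CIFaceFeature *faceFeature = [features objectAtIndex:0];

    // ciImage and faceFeature.bounds share the same CoreImage coordinate
    // space (origin at the bottom-left), so no flipping is needed here.
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:faceFeature.bounds];
    UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    return croppedFace;
}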
