I used the code from here to detect the face. I am trying to draw both the eyes and the face, but I can only display one or the other, depending on which detection statement I run first. How can I get both?
// Detect faces
std::vector<cv::Rect> faces;
_faceCascade.detectMultiScale(mat, faces, 1.1, 2, kHaarOptions, cv::Size(60, 60));

// Detect eyes (same scale factor, neighbour count and flags as above; only the minimum size differs)
std::vector<cv::Rect> eyes;
_eyesCascade.detectMultiScale(mat, eyes, 1.1, 2, kHaarOptions, cv::Size(30, 30));
Here eyes.size() == 0. If I swap the two statements, I get eyes.size() == 2 and faces.size() == 0.
Solution
If your goal is to retrieve face and eye positions on iOS, why not use the built-in CoreImage face detection instead?
CIImage *image = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:@"image.jpg"]];
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:options];
NSArray *features = [faceDetector featuresInImage:image];

for (CIFaceFeature *faceFeature in features) {
    CGSize faceSize = faceFeature.bounds.size;   // faceFeature.bounds is the face rectangle in image coordinates
    CGPoint leftEyePosition;
    CGPoint rightEyePosition;
    CGPoint mouthPosition;

    if (faceFeature.hasLeftEyePosition)
        leftEyePosition = faceFeature.leftEyePosition;
    // do the same for right eye and mouth
}
It does not use OpenCV, but you get the mouth position for free.
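
Since the original goal was to draw the eyes and the face, here is a minimal sketch (not part of the original answer) of how the CIFaceFeature results could be rendered onto a UIImage with UIKit. The DrawFaceFeatures helper name is made up for illustration, and it assumes a scale-1 image whose CIImage pixel coordinates match the UIImage point coordinates. Note that Core Image reports positions in a bottom-left-origin coordinate space, so the values are flipped before drawing in UIKit's top-left-origin context.

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Hypothetical helper: draws the face rectangle and eye positions onto a copy of the image.
static UIImage *DrawFaceFeatures(UIImage *uiImage, NSArray *features) {
    UIGraphicsBeginImageContextWithOptions(uiImage.size, NO, uiImage.scale);
    [uiImage drawAtPoint:CGPointZero];

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);

    CGFloat height = uiImage.size.height;
    for (CIFaceFeature *face in features) {
        // Flip the face bounds from Core Image (bottom-left origin) to UIKit (top-left origin).
        CGRect r = face.bounds;
        r.origin.y = height - r.origin.y - r.size.height;
        CGContextStrokeRect(ctx, r);

        // Mark each detected eye with a small circle, flipping the y coordinate the same way.
        if (face.hasLeftEyePosition) {
            CGPoint p = CGPointMake(face.leftEyePosition.x, height - face.leftEyePosition.y);
            CGContextStrokeEllipseInRect(ctx, CGRectMake(p.x - 5, p.y - 5, 10, 10));
        }
        if (face.hasRightEyePosition) {
            CGPoint p = CGPointMake(face.rightEyePosition.x, height - face.rightEyePosition.y);
            CGContextStrokeEllipseInRect(ctx, CGRectMake(p.x - 5, p.y - 5, 10, 10));
        }
    }

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

You would call it with the same features array produced by featuresInImage: above, e.g. UIImage *annotated = DrawFaceFeatures([UIImage imageWithContentsOfFile:@"image.jpg"], features);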