I'm creating an iPad app, and one of its features is scanning QR codes. I have the QR scanning part working, but my problem is that the iPad screen is very large and I'll be scanning small QR codes off a sheet of paper on which many QR codes are visible at once. I want to designate a smaller region of the display as the only area that can actually capture a QR code, so it's easier for the user to scan the specific code they want.
I've currently made a temporary UIView with a red border, centered on the page, as an example of where I want the user to scan the QR code. It looks like this:
I've looked everywhere for an answer on how I can target a specific region of the AVCaptureVideoPreviewLayer for collecting QR code data, and what I've found suggests using rectOfInterest with AVCaptureMetadataOutput. I tried to do this, but when I set rectOfInterest to the same coordinates and size as the UIView that displays correctly, I can't scan/recognize any QR codes. Can someone tell me why the scannable area doesn't match the position of the visible UIView, and how I can get rectOfInterest to line up with the red border I've added to the screen?
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

    if (error != nil) {
        // If any error occurs, simply log the description of it and don't continue any more.
        println("\(error?.localizedDescription)")
        return
    }

    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)

    // Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)

    // Calculate a centered square rectangle with red border
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)

    // Create UIView that will serve as a red square to indicate where to place the QR code for scanning
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect

    // Set delegate and use the default dispatch queue to execute the call back
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    captureMetadataOutput.rectOfInterest = scanRect

    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer)

    // Start video capture.
    captureSession?.startRunning()

    // Initialize QR Code Frame to highlight the QR code
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)

    // Add a button that will be used to close out of the scan view
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)

    view.addSubview(scanAreaView!)
}
Update
I don't think this is a duplicate, because the other post that was referenced is in Objective-C while my code is in Swift, and for those of us who are new to iOS, translating between the two is not trivial. Also, the answer on the referenced post doesn't show the actual change to the code that solved the problem. It gives a good explanation of having to use the metadataOutputRectOfInterestForRect method to convert the rectangle coordinates, but I still can't seem to get that method to work, because without an example it isn't clear to me how it's supposed to be used.
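From what I can gather, that method is meant to be called on the preview layer, and apparently only after the capture session is already running (or at least after the preview layer has its final frame), so that the layer can map view coordinates into the normalized coordinates that rectOfInterest expects. My untested understanding of how it would slot into my startScan() above, reusing the same scanRect, captureMetadataOutput and videoPreviewLayer, is roughly:

// Untested sketch of my understanding: let the preview layer convert the
// on-screen rect into the normalized (0-1) metadata-output coordinates,
// after the session has started running.
captureSession?.startRunning()
captureMetadataOutput.rectOfInterest =
    videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)

But when I try something along these lines, I still don't get the scannable area to match the red border.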
Solution
After fighting with the metadataOutputRectOfInterestForRect method all morning, I got tired of it and decided to write my own conversion:
// Converts a rect in screen coordinates into the normalized (0-1) values
// that rectOfInterest expects, relative to the full view.
func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let newX = 1 / (screenWidth / rect.minX)
    let newY = 1 / (screenHeight / rect.minY)
    let newWidth = 1 / (screenWidth / rect.width)
    let newHeight = 1 / (screenHeight / rect.height)
    return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
}
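With that in place, I pass the on-screen rectangle through the conversion instead of assigning it directly (using the same scanRect and captureMetadataOutput names as in the question's startScan()):

captureMetadataOutput.rectOfInterest = convertRectOfInterest(scanRect)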
Note: I have an image view with a square frame that shows the user where to scan; be sure to use imageView.frame rather than imageView.bounds so you get the correct position on screen.
This has worked for me.