Sunday, April 16, 2017

OpenCV on iOS with Swift

I found that OpenCV can be used in Xcode projects.

So, as a simple example, I will show how to use OpenCV's Sobel edge detection from Swift.


  • Mac

    Any spare Mac at home will do!
  • iPhone or iPad

    Most homes have a spare iOS device lying around, don't they?
  • Xcode

    The development environment for iOS apps.

    As long as you only write code and build apps (without distributing them), it is free to use.
    It is available on the Mac App Store.
  • OpenCV

    The de facto global standard in the image processing field, perhaps.
    Version 3.0.0 was the latest when I tried the steps below.

Install CocoaPods

Launch the Terminal app on the Mac.

While CocoaPods installs, a large number of dependent packages are installed as well. Don't worry as long as there are no errors.

iMac:~ $ sudo gem install cocoapods
Fetching: i18n-0.7.0.gem (100%)
Successfully installed i18n-0.7.0
Parsing documentation for cocoapods-1.0.1
Installing ri documentation for cocoapods-1.0.1
23 gems installed
iMac:~ $ pod setup

Make a new project

When developing applications in Xcode, the set of files containing each application's source code and related data is called a "project".

We create a project first, since OpenCV has to be installed separately for each project.

Launch Xcode via the icon below.
Then select "Create a new Xcode project".

Select "Single View Application" to keep the sample simple.

Enter any app name you like.

A project has been created.

Distributing the app would require many more configurations,
but leave those settings at their defaults for now.
This is just practice.

Install OpenCV

Close the Xcode project.

In the Terminal window, create a Podfile in the directory containing the project's .xcodeproj.
Then install OpenCV.

I set 7.0 as the target iOS version.
If you try to support 6.x or earlier, many terrible things await you.

iMac:~ $ cd FaceRecogApp
iMac:FaceRecogApp $ vi Podfile
iMac:FaceRecogApp $ cat Podfile
platform :ios, "7.0"
target "FaceRecogApp" do
  pod 'OpenCV'
end
iMac:FaceRecogApp $ pod install

It takes quite a few minutes.
If it finishes without errors, the installation succeeded.

Double-click the "*.xcworkspace" file to open the project in Xcode.

Capture a video by an app

Most iOS "Hello World" tutorials start with "On the storyboard, ...".
However, I think Storyboard is hard for a Swift beginner to understand in a few minutes.
So, let's build the view in code.

Open the ViewController.swift file of the project in Xcode.
The following code provides a camera preview.

import UIKit
import AVFoundation // Library for camera capture

class ViewController: UIViewController {

  var input:AVCaptureDeviceInput! // Video input

  var cameraView:UIView!          // View for the video preview
  var session:AVCaptureSession!   // Capture session
  var camera:AVCaptureDevice!     // Camera device

  override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
  }

  override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
  }

  // Initialize the view
  override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)

    // Extend the preview to the whole screen
    let screenWidth = UIScreen.mainScreen().bounds.size.width
    let screenHeight = UIScreen.mainScreen().bounds.size.height
    cameraView = UIView(frame: CGRectMake(0.0, 0.0, screenWidth, screenHeight))

    // Detect the back camera.
    // Use AVCaptureDevicePosition.Front for selfies.
    session = AVCaptureSession()
    for captureDevice: AnyObject in AVCaptureDevice.devices() {
      if captureDevice.position == AVCaptureDevicePosition.Back {
        camera = captureDevice as? AVCaptureDevice
      }
    }

    // Fetch the video input
    do {
      input = try AVCaptureDeviceInput(device: camera) as AVCaptureDeviceInput
    } catch let error as NSError {
      print(error)
    }

    if session.canAddInput(input) {
      session.addInput(input)
    }

    // Show the preview on the display
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = cameraView.frame
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    cameraView.layer.addSublayer(previewLayer)
    self.view.addSubview(cameraView)

    session.startRunning()
  }
}


Build, Run, and Fail...

Even if you press "Run", the app will fail to run on the simulator,
because it needs video capture.

Connect iPhone to Mac by a USB cable.
After that, click the run button on Xcode.

...And then, the build process will fail.

Xcode 7 enables "bitcode" by default,
but the OpenCV binary does not include bitcode, so the build fails with link errors.

Select the project name in Xcode, and disable bitcode in the "Build Settings" tab.
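If you would rather not re-apply settings on the Pods project by hand after every "pod install", CocoaPods can also patch build settings from the Podfile. A sketch, assuming CocoaPods 1.x (the hook API may differ in other versions), appended to the end of the Podfile:

```ruby
# Force bitcode off for every pod target after each `pod install`.
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['ENABLE_BITCODE'] = 'NO'
    end
  end
end
```

Note that the app target itself still needs bitcode disabled in its own Build Settings, as described above.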

Let's try "Run" again.
The app will launch on the iPhone.

Fetch a target image

With the code above, the captured image is shown on the display.
Now we want to show the result of processing those images instead.

First, fetch the image data as a UIImage object.

// Inherit AVCaptureVideoDataOutputSampleBufferDelegate to implement the buffer-processing delegate
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

  var input:AVCaptureDeviceInput!      // Video input
  var output:AVCaptureVideoDataOutput! // Video output
  var cameraView:UIImageView!          // Now a UIImageView, so processed frames can be drawn into it

Modify the code to put each captured frame into a queue and process it in the delegate.

Because the image processing cannot keep up with capturing, frames that are not in time are discarded.
It might be better to lower the frame rate as well, but I don't do so in this code.

override func viewWillAppear(animated: Bool) {
  super.viewWillAppear(animated)

  let screenWidth = UIScreen.mainScreen().bounds.size.width
  let screenHeight = UIScreen.mainScreen().bounds.size.height
  cameraView = UIImageView(frame: CGRectMake(0.0, 0.0, screenWidth, screenHeight))

  session = AVCaptureSession()
  for captureDevice: AnyObject in AVCaptureDevice.devices() {
    if captureDevice.position == AVCaptureDevicePosition.Back {
      camera = captureDevice as? AVCaptureDevice
    }
  }

  do {
    input = try AVCaptureDeviceInput(device: camera) as AVCaptureDeviceInput
  } catch let error as NSError {
    print(error)
  }

  if session.canAddInput(input) {
    session.addInput(input)
  }

  // Send the image to processing
  output = AVCaptureVideoDataOutput()
  output.videoSettings = [kCVPixelBufferPixelFormatTypeKey : Int(kCVPixelFormatType_32BGRA)]

  // Delegate
  let queue: dispatch_queue_t = dispatch_queue_create("videoqueue", nil)
  output.setSampleBufferDelegate(self, queue: queue)

  // Discard the frames that are not in time
  output.alwaysDiscardsLateVideoFrames = true

  // Add the output to the session
  if session.canAddOutput(output) {
    session.addOutput(output)
  }

  for connection in output.connections {
    if let conn = connection as? AVCaptureConnection {
      if conn.supportsVideoOrientation {
        conn.videoOrientation = AVCaptureVideoOrientation.Portrait
      }
    }
  }

  // The preview layer is no longer needed; we draw processed frames instead.
  //let previewLayer = AVCaptureVideoPreviewLayer(session: session)
  //previewLayer.frame = cameraView.frame
  //previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

  self.view.addSubview(cameraView)
  session.startRunning()
}


Create a UIImage from the image buffer in the delegate.

To be honest, I don't fully understand this code myself...

At the "ImageProcessing.SobelFilter" line, the created UIImage is handed to OpenCV.

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

  let image:UIImage = self.captureImage(sampleBuffer)

  dispatch_async(dispatch_get_main_queue()) {
    // Draw
    self.cameraView.image = image
  }
}

// Create a UIImage from the sampleBuffer
func captureImage(sampleBuffer:CMSampleBufferRef) -> UIImage {

  // Fetch the image buffer
  let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!

  // Lock the base address
  CVPixelBufferLockBaseAddress(imageBuffer, 0)

  // Image information
  let baseAddress: UnsafeMutablePointer<Void> = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
  let bytesPerRow: Int = CVPixelBufferGetBytesPerRow(imageBuffer)
  let width: Int = CVPixelBufferGetWidth(imageBuffer)
  let height: Int = CVPixelBufferGetHeight(imageBuffer)
  let bitmapInfo = CGImageAlphaInfo.PremultipliedFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue as UInt32

  // RGB color space
  let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()!
  let newContext: CGContextRef = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)!

  // Quartz image
  let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)!

  // Unlock the base address
  CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

  // UIImage
  let cameraImage: UIImage = UIImage(CGImage: imageRef)

  // Execute the Sobel filter with OpenCV
  let resultImage: UIImage = ImageProcessing.SobelFilter(cameraImage)

  return resultImage
}


Implement OpenCV

OpenCV does not support Swift.
Swift cannot call raw C++.
So, OpenCV functions cannot be written in Swift directly.

To solve this problem, click "New File" in the file list in Xcode,
and create a C++ file.

At that time, Xcode offers to create a bridging header file automatically.

I made a file "ImageProcessing.m".
Xcode made a header file "{Project Name}-Bridging-Header.h".
Huh...? It is not "ImageProcessing.h"?

Define a class in the header file.

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface ImageProcessing : NSObject
+(UIImage *)SobelFilter:(UIImage *)image;
@end

We want to implement it in C++, but we cannot while the file extension is ".m".

Change the extension to ".mm" in the "Identity and Type" panel in the upper-right inspector.

After that, convert the UIImage into an image buffer for OpenCV, and return a UIImage object holding the result of the processing.

What a wonderful library OpenCV is. It's so easy.

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

#import "FaceRecogApp-Bridging-Header.h"

#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>

@implementation ImageProcessing

+(UIImage *)SobelFilter:(UIImage *)image{

    // Convert UIImage to cv::Mat
    cv::Mat mat;
    UIImageToMat(image, mat);

    // Convert to a grayscale image
    cv::Mat gray;
    cv::cvtColor(mat, gray, cv::COLOR_BGR2GRAY);

    // Detect edges with the Sobel operator (horizontal gradient)
    cv::Mat edge;
    cv::Sobel(gray, edge, CV_8U, 1, 0);

    // Convert cv::Mat to UIImage
    UIImage *result = MatToUIImage(edge);
    return result;
}

@end

By the way, many web sites say to use "#import <opencv2/highgui/ios.h>",
but it should be "imgcodecs" in OpenCV 3.0.0.

Also, much sample code uses the old "cvCvtColor", but it is "cv::cvtColor" in 3.0.0.


The following video shows the sample running on an iPad.
