
Implementation of ARKit in iOS for Face Detection & Image Processing

What is ARKit?

ARKit is an augmented reality framework for iPhones and iPads that is included with Xcode. By combining the live camera feed with virtual content, ARKit lets developers place digital objects in the real world and lets users interact with them.

Using the camera on iOS and iPadOS devices together with CoreMotion data, it builds a map of the environment, recognizing faces, images, flat surfaces, and the device's position in the real world.

AR Face Tracking Configuration

With the help of the TrueDepth camera, the AR face tracking system tracks the movement and expressions of a face.

In the face tracking configuration, the front camera is used to recognize the user's face. When the configuration is run, the AR session detects the user's face and adds an ARFaceAnchor object representing it to the session's list of anchors.

The face anchor stores all information about the face's position and orientation, along with traits that describe its facial expression.

Tracking Face Orientation and Position

The inherited transform property describes the face's current position and orientation in world coordinates. Use this transform matrix to position the virtual content you want to attach to the face in the AR scene.

The transform matrix also defines a face coordinate system for positioning other elements relative to the user's face.
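As a minimal sketch (assuming a hypothetical FaceTrackingController class that owns the ARSession; names are illustrative, not from the original), running a face tracking configuration and reading the face anchor's transform could look like this:

import ARKit

class FaceTrackingController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a device with a TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Called when the session detects the user's face.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // The transform places the face in world coordinates;
            // use it to position virtual content relative to the face.
            let facePosition = simd_make_float3(faceAnchor.transform.columns.3)
            print("Face detected at \(facePosition)")
        }
    }
}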

Face Topology

The geometry property provides an ARFaceGeometry object that represents the detailed topology of the face, conforming a generic face model to match the dimensions, shape, and current expression of the detected face.

We can use this face model for overlaying content that follows the user’s face.

For example, adding virtual glasses or makeup to the face model.
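One way to do this with SceneKit (a sketch, assuming an ARSCNView whose delegate implements these methods) is to back a node with ARSCNFaceGeometry and keep it updated from the face anchor:

import ARKit
import SceneKit
import UIKit

// Provide a node whose geometry follows the user's face topology.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard anchor is ARFaceAnchor,
          let device = renderer.device,
          let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
    // A semi-transparent mask; swap in glasses or makeup textures as needed.
    faceGeometry.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.5)
    return SCNNode(geometry: faceGeometry)
}

// Keep the mask in sync with the face's current shape and expression.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    faceGeometry.update(from: faceAnchor.geometry)
}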

Facial Expressions

The blendShapes property provides a high-level model of the user's current facial expression as a dictionary of named coefficients. We can use these blend shape coefficients to animate 2D or 3D content.
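For example (a sketch, assuming the same ARSessionDelegate setup as the face tracking sketch above), reading a couple of coefficients on every update:

// Called every time the face anchor is updated.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let faceAnchor as ARFaceAnchor in anchors {
        // Each coefficient ranges from 0.0 (neutral) to 1.0 (maximum movement).
        let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
        let leftEyeBlink = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
        // Drive a character rig or 2D overlay from these values.
        print("jawOpen: \(jawOpen), eyeBlinkLeft: \(leftEyeBlink)")
    }
}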

Requirements:

  • An iOS device with a TrueDepth front camera
    • iPhone X or later
  • iOS 11.0 or later
  • Xcode 11.0 or later

AR Body Tracking Configuration

ARKit's body tracking configuration (ARBodyTrackingConfiguration) tracks the movement of a person's body using ARBodyAnchor.

When ARKit recognizes a person through the back camera, it calls the delegate's session(_:didAdd:) function with an ARBodyAnchor.

The body anchor's transform defines the world position of the body; the positions of the body's joints are expressed relative to it.
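A minimal sketch (assuming a hypothetical BodyTrackingController class that owns the ARSession) of running body tracking and receiving the body anchor:

import ARKit

class BodyTrackingController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Body tracking requires an A12 chip or later.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    // Called when ARKit recognizes a person in the back camera.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The anchor's transform corresponds to the hip joint in world space.
            let hipPosition = simd_make_float3(bodyAnchor.transform.columns.3)
            print("Body detected, hip at \(hipPosition)")
        }
    }
}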

Place Skeleton on a Surface:

The body anchor's origin maps to the hip joint, so you can calculate the current offset of the feet from the hip to place the body's skeleton on a surface.

Pass the foot joint index to jointModelTransforms to get the foot's offset from the skeleton's origin:

static var hipToFootOffsetCal: Float {
    // Get the index of a foot joint from the default body skeleton definition.
    let footIndex = ARSkeletonDefinition.defaultBody3D.index(for: .leftFoot)
    // Get the foot's transform relative to the skeleton's origin (the hip).
    let footTransform = ARSkeletonDefinition.defaultBody3D.neutralBodySkeleton3D!.jointModelTransforms[footIndex]
    // The foot's vertical distance below the hip.
    let distanceFromHipOnY = abs(footTransform.columns.3.y)
    return distanceFromHipOnY
}

The bodyDetection frame semantics option gives you access to the joint positions of a person that ARKit detects in the camera feed via the frame's detectedBody property.
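For instance (a sketch; bodyDetection is enabled by default in ARBodyTrackingConfiguration), reading the 2D joint landmarks from each frame:

// In the ARSessionDelegate of a body tracking session:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // detectedBody is an ARBody2D with normalized image-space joint landmarks.
    guard let body = frame.detectedBody else { return }
    let landmarks = body.skeleton.jointLandmarks   // [simd_float2]
    print("Detected \(landmarks.count) 2D joints")
}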

AR World Tracking Configuration

The ARWorldTrackingConfiguration class tracks the device's movement with six degrees of freedom: three rotation axes and three translation axes.

Plane Detection and ARPlaneAnchor

var planeDetection: ARWorldTrackingConfiguration.PlaneDetection { get set }

If we enable horizontal or vertical plane detection, the session adds ARPlaneAnchor objects and notifies the ARSessionDelegate or ARSCNViewDelegate object when its analysis of captured video images detects an area that appears to be a flat surface. Each plane anchor offers information about the surface, such as its location and shape in the real world.
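A sketch of enabling plane detection and reacting to plane anchors (assuming an ARSCNView named sceneView and its delegate):

// Enable horizontal and vertical plane detection.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)

// ARSCNViewDelegate: called when ARKit adds a node for a detected plane.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // center and extent describe the plane's position and size in the anchor's space.
    print("Detected \(planeAnchor.alignment) plane with extent \(planeAnchor.extent)")
}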

ARImageAnchor

When we run a world tracking AR session and assign ARReferenceImage objects to the detectionImages property in the session configuration, ARKit looks for those images in the real world. When the session recognizes an image, it automatically adds an ARImageAnchor for each detected image to its list of anchors.
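A sketch, assuming the reference images live in an asset catalog group named "AR Resources":

// Load reference images and ask a world tracking session to detect them.
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: nil) {
    configuration.detectionImages = referenceImages
}
session.run(configuration)

// ARSessionDelegate: an ARImageAnchor is added for each recognized image.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let imageAnchor as ARImageAnchor in anchors {
        print("Detected image: \(imageAnchor.referenceImage.name ?? "unnamed")")
    }
}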

ARObjectAnchor

When we run a world tracking AR session and specify ARReferenceObject objects for the detectionObjects property in the session configuration, ARKit looks for those objects in the real world. When the session detects one, it automatically adds an ARObjectAnchor for each identified object to its list of anchors.
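Similarly (a sketch, assuming previously scanned reference objects in an asset catalog group named "AR Objects"):

// Load scanned reference objects and ask the session to detect them.
let configuration = ARWorldTrackingConfiguration()
if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects",
                                                             bundle: nil) {
    configuration.detectionObjects = referenceObjects
}
session.run(configuration)
// As with images, the delegate's session(_:didAdd:) then receives an ARObjectAnchor
// for each recognized object.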

AR Image Tracking Configuration

With ARImageTrackingConfiguration, ARKit establishes a three-dimensional space not by tracking the device's motion relative to the real world, but by recognizing and tracking the motion of known images in front of the camera.

ARWorldTrackingConfiguration can also detect images, but ARImageTrackingConfiguration can reliably track more images at once.

To use ARImageTrackingConfiguration, define ARReferenceImage objects and assign them to the configuration's trackingImages property. Then pass the configuration to your session's run(_:options:) method, just like any other AR configuration.
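A sketch, again assuming an asset catalog group named "AR Resources" and an ARSCNView named sceneView:

// Track known images instead of the device's motion.
let configuration = ARImageTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: nil) {
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 4
}
// Run it like any other AR configuration.
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])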
