Are there any limitations in Vuforia compared to ARCore and ARKit?

Kitwradr · Jun 12, 2018 · Viewed 19.8k times

I am a beginner in the field of augmented reality, working on applications that create plans of buildings (floor plan, room plan, etc., with accurate measurements) using a smartphone. So I am researching the best AR SDK to use for this. There are not many articles pitting Vuforia against ARCore and ARKit.

Please suggest the best SDK to use, pros and cons of each.

Answer

Andy Fedoroff · Jun 12, 2018

Updated: November 16, 2020.

TL;DR

Google ARCore allows you to build apps for Android and iOS/iPadOS. With Apple ARKit you can build apps for iOS and iPadOS only. And good old PTC Vuforia was designed to create apps for Android, iOS/iPadOS and the Universal Windows Platform.

A crucial peculiarity of Vuforia is that it uses ARCore/ARKit technology if the hardware it's running on supports it; otherwise it falls back on its own AR engine – a software solution that doesn't depend on specialized hardware.

When developing for Android OEM smartphones, you may encounter an unpleasant problem: devices from different manufacturers need sensor calibration in order to deliver the same AR experience. Luckily, Apple devices have no such drawback because all the sensors used there are calibrated under identical conditions.



To answer this question, let’s put first things first.




Google ARCore 1.21

ARCore is based on three fundamental concepts: Motion Tracking, Environmental Understanding and Light Estimation. Thus ARCore allows a supported mobile device to track its position and orientation relative to the world in six degrees of freedom (6DOF) using a technique called Concurrent Odometry and Mapping. COM also helps us detect the size and location of horizontal, vertical and angled tracked surfaces (such as the ground, tables, benches, walls and slopes). Motion Tracking works robustly thanks to optical data coming from a camera at 60 fps, combined with inertial data coming from the gyroscope and accelerometer at 1000 fps. Naturally, ARKit and Vuforia operate in almost the same way.
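
To make the 6DOF idea concrete, here's a plain-Kotlin sketch (not the actual ARCore API) of what a tracked pose is: a rotation quaternion plus a translation vector, which together map device-local points into world coordinates, much like ARCore's `Pose.transformPoint` does. The `Quaternion` and `Pose` types below are illustrative stand-ins.

```kotlin
// Illustrative sketch (not the ARCore API): a 6DOF pose is a rotation
// (quaternion) plus a translation; it maps device-local points to world space.
data class Quaternion(val x: Double, val y: Double, val z: Double, val w: Double)

data class Pose(val q: Quaternion, val tx: Double, val ty: Double, val tz: Double) {
    // Rotate point p by quaternion q, then translate it
    fun transformPoint(px: Double, py: Double, pz: Double): Triple<Double, Double, Double> {
        val (qx, qy, qz, qw) = q
        // quaternion-vector rotation: v' = v + qw*t + (q.xyz × t), t = 2*(q.xyz × v)
        val tx2 = 2 * (qy * pz - qz * py)
        val ty2 = 2 * (qz * px - qx * pz)
        val tz2 = 2 * (qx * py - qy * px)
        val rx = px + qw * tx2 + (qy * tz2 - qz * ty2)
        val ry = py + qw * ty2 + (qz * tx2 - qx * tz2)
        val rz = pz + qw * tz2 + (qx * ty2 - qy * tx2)
        return Triple(rx + this.tx, ry + this.ty, rz + this.tz)
    }
}

fun main() {
    // 90° rotation about the Y axis plus a 1 m shift along X
    val halfAngle = Math.PI / 4
    val pose = Pose(Quaternion(0.0, Math.sin(halfAngle), 0.0, Math.cos(halfAngle)), 1.0, 0.0, 0.0)
    val (x, y, z) = pose.transformPoint(0.0, 0.0, -1.0)
    // (0,0,-1) rotated 90° about Y gives (-1,0,0); translating by +1 on X gives the origin
    println(listOf(x, y, z).map { Math.round(it * 1000) / 1000.0 })  // [0.0, 0.0, 0.0]
}
```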



When you move your phone through the real environment, ARCore tracks the surrounding space to understand where the smartphone is relative to the world coordinates. At the tracking stage ARCore "sows" so-called feature points, which form a sparse point cloud that lives as long as the tracking session is active. These feature points are visible through the RGB camera, and ARCore uses them to compute the phone's change in location. The visual data is then combined with measurements coming from the accelerometer and gyroscope (the Inertial Measurement Unit) to estimate the position and orientation of the ArCamera over time. ARCore looks for clusters of feature points that appear to lie on horizontal, vertical or angled surfaces and makes these surfaces available to your app as planes (a technique called plane detection). You can then use these planes to place 3D objects in your scene. After this, virtual geometry with assigned shaders is rendered by ARCore's companion – Sceneform, which supports OBJ, FBX and glTF assets and uses a real-time Physically Based Rendering (a.k.a. PBR) engine – Filament.
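
As a toy illustration of the plane-detection step (a deliberate simplification – the real ARCore/ARKit estimation over the full 3D cloud is far more sophisticated), you can think of it as finding the densest band of feature-point heights and fitting a horizontal plane at their mean height:

```kotlin
// Toy sketch of horizontal plane detection: cluster feature points by height (Y)
// and fit a plane at the mean height of the densest cluster.
fun detectHorizontalPlaneY(
    points: List<Triple<Float, Float, Float>>,  // (x, y, z) feature points
    tolerance: Float = 0.05f                    // 5 cm height band
): Float? {
    val ys = points.map { it.second }
    var best = emptyList<Float>()
    for (y in ys) {
        val band = ys.filter { kotlin.math.abs(it - y) <= tolerance }
        if (band.size > best.size) best = band
    }
    // require at least 3 clustered points to claim a plane
    return if (best.size >= 3) best.average().toFloat() else null
}

fun main() {
    val cloud = listOf(
        Triple(0.1f, 0.02f, 0.3f), Triple(0.5f, -0.01f, 0.7f),  // floor points
        Triple(0.9f, 0.01f, 0.2f), Triple(0.4f, 1.2f, 0.5f)     // one outlier above
    )
    println(detectHorizontalPlaneY(cloud))  // mean height of the three floor points
}
```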

Notwithstanding the above, the Sceneform repository has now been archived and is no longer actively maintained by Google. The last released version was Sceneform 1.17.0.


ARCore's environmental understanding lets you place 3D objects and 2D annotations in a way that integrates with the real world. For example, you can place a virtual cup of coffee on the corner of your real-world table using ArAnchor.
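
Since the question is about taking measurements for floor plans: once you have two anchors (say, on opposite corners of a wall), the distance between them is just the Euclidean norm of the difference of their world-space translations, which ARCore exposes via each anchor's pose. A plain-Kotlin sketch, where the `floatArrayOf` positions stand in for what you'd read from `anchor.pose.translation`:

```kotlin
// Illustrative sketch (plain Kotlin, not the ARCore API): the distance between
// two anchored points is the Euclidean norm of the difference of their
// world-space translations.
fun distanceMeters(a: FloatArray, b: FloatArray): Float {
    val dx = a[0] - b[0]
    val dy = a[1] - b[1]
    val dz = a[2] - b[2]
    return Math.sqrt((dx * dx + dy * dy + dz * dz).toDouble()).toFloat()
}

fun main() {
    val corner1 = floatArrayOf(0.0f, 0.0f, 0.0f)  // first tapped corner
    val corner2 = floatArrayOf(3.0f, 0.0f, 4.0f)  // second tapped corner
    println(distanceMeters(corner1, corner2))      // prints 5.0
}
```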


ARCore can also define the lighting parameters of the real environment and provide you with the average intensity and color correction of a given camera image. This data lets you light your virtual scene under the same conditions as the environment around you, considerably increasing the sense of realism.
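
The idea can be sketched in plain Kotlin: a renderer scales the virtual material's albedo by the per-channel color correction and average pixel intensity reported by the light estimate. The numeric values below are made up for illustration, not real API output:

```kotlin
// Illustrative sketch (plain Kotlin): scale a material's RGB albedo by the
// light estimate's per-channel color correction and average pixel intensity,
// so the virtual object matches the ambient lighting conditions.
fun correctedAlbedo(albedo: FloatArray, colorCorrection: FloatArray, pixelIntensity: Float): FloatArray =
    FloatArray(3) { i -> (albedo[i] * colorCorrection[i] * pixelIntensity).coerceIn(0.0f, 1.0f) }

fun main() {
    val white = floatArrayOf(1.0f, 1.0f, 1.0f)
    // hypothetical warm, dim room: red channel boosted, overall intensity halved
    val corrected = correctedAlbedo(white, floatArrayOf(1.2f, 1.0f, 0.8f), 0.5f)
    println(corrected.joinToString())  // 0.6, 0.5, 0.4
}
```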



The current ARCore version has such significant APIs as the Depth API, Lighting Estimation with Environmental HDR mode, Augmented Faces, Augmented Images, Instant Placement, Sceneform Animations, 365-day Cloud Anchors, Recording and Playback, and Multiplayer support. The main advantage of ARCore in Android Studio over ARKit in Xcode is the Android Emulator, which allows you to run and debug AR apps on a virtual device.



ARCore is older than ARKit. Do you remember Project Tango, released in 2014? Roughly speaking, ARCore is just a rewritten Tango SDK without depth camera support. But the wise acquisition of FlyBy and MetaIO helped Apple not only catch up but also significantly overtake Google. I suppose this is extremely good for the AR industry.

The latest version of ARCore requires Android 7.0 Nougat or later, supports OpenGL ES 3.1 acceleration, and integrates with Unity, Unreal, and Web applications. At the moment the most powerful and energy efficient chipsets for AR experience on Android platform are Snapdragon 875 (5nm), Exynos 1080 (5nm) and Kirin 980 (7nm).

ARCore price: FREE.

|------------------------------|------------------------------|
|        "ARCore PROs"         |        "ARCore CONs"         | 
|------------------------------|------------------------------|
| iToF and Depth API support   | No Body Tracking support     |
|------------------------------|------------------------------|
| Quick Plane Detection        | Cloud Anchors hosted online  |
|------------------------------|------------------------------|
| Long-distance-accuracy       | Lack of rendering engines    |
|------------------------------|------------------------------|
| ARCore Emulator in AS        | Poor developer documentation | 
|------------------------------|------------------------------|
| High-quality Lighting API    | No external camera support   |
|------------------------------|------------------------------|
| A lot of supported devices   | Poor Google Glass API        |
|------------------------------|------------------------------|

Here's an ARCore code snippet written in Kotlin:

private fun addNodeToScene(fragment: ArFragment, anchor: Anchor, renderable: Renderable) {
    
    // Wrap the ARCore anchor in a Sceneform node and attach it to the scene.
    // setParent() already adds the node, so no extra addChild() call is needed.
    val anchorNode = AnchorNode(anchor)
    anchorNode.setParent(fragment.arSceneView.scene)
    
    // TransformableNode lets the user move, rotate and scale the model
    val modelNode = TransformableNode(fragment.transformationSystem)
    modelNode.setParent(anchorNode)
    modelNode.setRenderable(renderable)
    modelNode.localPosition = Vector3(0.0f, 0.0f, -3.0f)
    
    modelNode.select()
}





Apple ARKit 4.0

ARKit was released in June 2017, and just two years later it had become very popular. Like its competitors, ARKit uses a special technique called Visual-Inertial Odometry to track the world around your device very accurately. VIO is quite similar to the COM found in ARCore. There are also three similar fundamental concepts in ARKit: World Tracking, Scene Understanding (which includes four stages: Plane Detection, Hit-Testing / Ray-Casting, Light Estimation and Scene Reconstruction), and Rendering, with the great help of ARKit's companions: the SceneKit framework, which has actually been Apple's 3D game engine since 2012; the RealityKit framework, specially made for AR and written from scratch in Swift (released in 2019); and the SpriteKit framework with its 2D engine (since 2013).

VIO fuses RGB sensor data at 60 fps with CoreMotion data (IMU) at 1000 fps. In addition, SceneKit, for example, can render all the 3D geometry at 30/60/120 fps. Under such circumstances, it should be noted that due to the very high energy impact (an enormous burden on the CPU and GPU), your iPhone's battery will drain pretty quickly.
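
To give a feel for what such fusion buys you (a toy complementary filter, not Apple's implementation, written in Kotlin to match the earlier snippet): high-rate inertial integration accumulates drift, and each low-rate visual fix pulls the estimate back toward the truth. All rates and error magnitudes below are invented for illustration:

```kotlin
// Toy illustration of the VIO idea: integrate high-rate (1000 Hz) inertial data
// between camera frames, then blend in a low-rate (~60 Hz) drift-free visual
// position fix with a complementary filter.
fun fuse(visualPos: Double, imuPos: Double, visualWeight: Double = 0.1): Double =
    visualWeight * visualPos + (1 - visualWeight) * imuPos

fun main() {
    var pos = 0.0
    val velocity = 1.0       // m/s, assumed known from integrated accelerometer
    val imuDt = 0.001        // 1000 Hz IMU samples
    repeat(1000) { step ->
        pos += velocity * imuDt + 0.0002        // small per-sample drift error
        if (step % 17 == 16) {                  // ~60 Hz drift-free visual fix
            pos = fuse(visualPos = (step + 1) * imuDt * velocity, imuPos = pos)
        }
    }
    // error stays at a few centimeters; pure IMU integration would be 0.2 m off
    println(pos)
}
```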

ARKit has a handful of useful methods for robust tracking and accurate measurements. Among its arsenal you can find easy-to-use functionality for saving and retrieving ARWorldMaps. A world map is an indispensable "portal" for Persistent and Multiuser AR experiences, allowing you to come back to the same environment, filled with the same chosen 3D content, from just before the moment your app became inactive. Support for simultaneous front and back camera capture, and for collaborative sessions that let us share World Maps, is also great.

There is good news for gamers: up to six people can simultaneously play the same AR game, thanks to the MultipeerConnectivity framework. For 3D geometry you could use the brand-new USDZ file format, developed and supported by Pixar, which is a good choice for sophisticated 3D models with lots of PBR shaders and animations. You can also use other common 3D formats with ARKit.

ARKit not only helps you track the position and orientation of your device relative to the world in 6 DOF, but also helps you perform People and Objects Occlusion (based on alpha and depth channel segmentation), LiDAR Scene Reconstruction, Body Motion Capture tracking, 2D tracking, Vertical and Horizontal Plane detection, Image detection, 3D Object detection and 3D Object scanning. With the People and Objects Occlusion tool, your AR content realistically passes behind and in front of real-world entities, making AR experiences even more immersive. Realistic reflections, which use machine learning algorithms, and Face-based AR experiences, which allow you to track up to three faces at a time, are also available.



Using iBeacons along with ARKit, you can help an iBeacon-aware application know what room it's in and show the correct 3D/2D content chosen for that room. Working with ARKit, you should intensively exploit the ARAnchor class and all its subclasses, the same way you've been using them in ARCore.
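
The room-awareness logic itself is trivial: map a detected beacon's identifiers to the room-specific content to load. A hypothetical sketch (in Kotlin, to match the answer's first snippet; the beacon IDs and file names are invented, and in a real iOS app the values would come from CoreLocation's beacon-ranging callbacks):

```kotlin
// Hypothetical sketch: map a detected iBeacon's (major, minor) identifiers
// to the AR content chosen for that room.
data class BeaconId(val major: Int, val minor: Int)

val roomContent = mapOf(
    BeaconId(1, 1) to "lobby_scene.usdz",             // invented file names
    BeaconId(1, 2) to "conference_room_scene.usdz"
)

fun contentFor(beacon: BeaconId): String =
    roomContent[beacon] ?: "default_scene.usdz"       // fallback for unknown rooms

fun main() {
    println(contentFor(BeaconId(1, 2)))  // prints conference_room_scene.usdz
}
```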


Pay particular attention to RealityKit's satellite – the Reality Composer application, which is now a part of Xcode. This brand-new app helps you build 3D scenes for AR. Scenes built in Reality Composer can be packed with dynamics, simple animations and PBR materials. Reality Composer can also be installed on iOS and iPadOS as a standalone app.

To create AR apps built on the latest version of ARKit 4.0, including the brand-new LiDAR scanner support, you need macOS 11 Big Sur, Xcode 12 and a device running iOS 14 or iPadOS 14. The sad news is that all of ARKit 4.0's top features are restricted to devices powered by the Apple A12 chipset and higher. ARKit 4.0 is also a worthy candidate to marry the Metal framework for GPU acceleration. And don't forget that ARKit tightly integrates with Unity and Unreal. At the moment the most powerful and energy-efficient chipsets for AR experiences are the A14 Bionic (5nm), A13 Bionic (7nm) and A12Z Bionic (7nm).

ARKit price: FREE.

|------------------------------|------------------------------|
|         "ARKit PROs"         |         "ARKit CONs"         | 
|------------------------------|------------------------------|
| LiDAR and Depth API support  | No AR glasses support        |
|------------------------------|------------------------------|
| Stable 6 DoF World Tracking  | No auto-update for Anchors   |
|------------------------------|------------------------------|
| Collaborative Sessions       | ARKit 4.0 / 3.5 Restrictions |
|------------------------------|------------------------------|
| WorldMaps, iBeacon-awareness | No ARKit Simulator in Xcode  |
|------------------------------|------------------------------|
| 4 rendering technologies     | No external camera support   |
|------------------------------|------------------------------|
| Rich developer documentation | Quickly drains your battery  |
|------------------------------|------------------------------|

Here's an ARKit code snippet written in Swift:

// Called when ARKit adds a node for a newly detected anchor
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let planeNode = tableTop(planeAnchor)
    node.addChildNode(planeNode)
}
    
// Builds a plane geometry matching the detected anchor's extent
func tableTop(_ anchor: ARPlaneAnchor) -> SCNNode {
    
    let x = CGFloat(anchor.extent.x)
    let z = CGFloat(anchor.extent.z)
    
    let tableNode = SCNNode()
    tableNode.geometry = SCNPlane(width: x, height: z)
    // SCNPlane is vertical by default – rotate it to lie flat on the surface
    tableNode.eulerAngles.x = -.pi / 2
    tableNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
    return tableNode
}





Apple RealityKit 2.0

You should look carefully at RealityKit, which was introduced at WWDC 2019. There's been a lot of hype around it since then. RealityKit allows you to create AR experiences for iOS/iPadOS and VR experiences for mobile devices and macOS. This high-level framework works with .usdz assets as well as with .rcproject and .reality file formats, which you can import from the standalone macOS or iOS app – Reality Composer (RC). Cupertino software engineers built RealityKit from the ground up for augmented reality apps that you can create without repetitive code. It works with Swift from scratch – there's no Objective-C legacy. And, of course, RealityKit shines not only with SwiftUI and UIKit but with Metal too.

The RealityKit framework has several fundamental blocks on which RealityKit scenes are based: a parent class Entity; a class AnchorEntity that automatically tracks its target (unlike in ARKit); and the classes BodyTrackedEntity, ModelEntity, PointLight, SpotLight, DirectionalLight, TriggerVolume and PerspectiveCamera. These entities are just like SceneKit's nodes but slightly different in hierarchical structure. And, of course, most entities have Components. It's worth noting that ModelEntity is built on MeshResource and Materials, and that RealityKit 2.0 now supports VideoMaterial.


The RealityKit framework gives you a rich set of building blocks to work with AR and VR: new declarative Swift syntax; 3D primitives (at the moment box, plane, sphere and text); PBR materials with textures, plus occlusion and video materials; lighting fixtures (directional, spot and point) with realistic ray-traced shadows; spatial audio processing; different anchor types (body, camera, face, image, object, horizontal plane, vertical plane, raycastResult, ARAnchor and world); simplified setup for collaborative sessions; robust animation and physics setup; indispensable built-in AI and ML features; and many other useful things.

The Reality Composer application gives you a simple and intuitive UI for constructing 3D scenes for Augmented Reality experiences. It has a royalty-free library of downloadable 3D assets that lets you construct sophisticated 3D scenes with animation, audio and dynamics, which contain a thorough description of how these objects behave. You can also export your composition as a lightweight AR Quick Look experience that lets users place and preview content. In Reality Composer you can start your project using one of five anchor types – horizontal, vertical, image, face or object – corresponding to the desired type of tracking.

RealityKit and Reality Composer price: FREE.

|------------------------------|------------------------------|
|       "RealityKit PROs"      |      "RealityKit CONs"       | 
|------------------------------|------------------------------|
| Can create AR apps w/o ARKit | Intensive usage of CPU/GPU   |
|------------------------------|------------------------------|
| Very little boilerplate code | iOS 13+, macOS 10.15+ only   |
|------------------------------|------------------------------|
| Suitable for AR/VR projects  | Start lagging on old devices |
|------------------------------|------------------------------|
| Robust API for RC scenes     | Limited shaders capabilities |
|------------------------------|------------------------------|
| Asynchronous asset loading   | Lack of Apple documentation  |
|------------------------------|------------------------------|
| Autoupdating tracking target | No AR glasses support        |
|------------------------------|------------------------------|

Here's a RealityKit code snippet written in Swift:

override func viewDidLoad() {
    super.viewDidLoad()
    
    // Load the scene exported from Reality Composer
    let textAnchor = try! SomeText.loadTextScene()
    let textEntity: Entity = textAnchor.realityComposer!.children[0]
    var textMC: ModelComponent = textEntity.children[0].components[ModelComponent.self]!
    
    // Swap in a new material and a new text mesh, then re-apply the component
    var material = SimpleMaterial()
    material.baseColor = .color(.yellow)
    textMC.materials[0] = material
    textMC.mesh = .generateText("Hello, RealityKit")
    textAnchor.realityComposer!.children[0].children[0].components.set(textMC)
    arView.scene.anchors.append(textAnchor)
}


One more important part of Apple's AR ecosystem is the Reality Converter app. Now, instead of using a command-line conversion tool, you can use Reality Converter. This brand-new app makes it easy to convert, view and customize .usdz 3D objects on a Mac. Simply drag and drop common 3D file formats, such as .obj, .gltf or .usd, to view the converted .usdz result, customize material properties with your own textures, and edit file metadata. You can even preview your .usdz object under a variety of lighting and environment conditions with the built-in Image-Based Lighting (IBL) options.





PTC Vuforia 9.5

In October 2015 PTC acquired Vuforia from Qualcomm for $65 million. Take into consideration that Qualcomm launched Vuforia in 2010, so Vuforia is the older sister in the AR family. Big sister is watching you, guys! ;)

In November 2016 Unity Technologies and PTC announced a strategic collaboration to simplify AR development. Since then they have worked together to integrate new features of the Vuforia AR platform into the Unity game engine. Vuforia can be used with such development environments as Unity, MS Visual Studio, Apple Xcode and Android Studio. It supports a wide range of smartphones, tablets and AR smart glasses, such as HoloLens, Magic Leap, Vuzix M400 and ODG R7.

Vuforia Engine boasts approximately the same main capabilities that you'll find in the latest version of ARKit, but it also has its own features, such as Model Targets with Deep Learning, VISLAM for markerless AR experiences, External Camera support for iOS, new experimental APIs for ARCore and ARKit, and support for the industry's latest AR glasses. The main advantage of Vuforia over ARKit and ARCore is that it has a wider list of supported devices and it supports the development of Universal Windows Platform apps for Intel-based Windows 10 devices, including Microsoft Surface and HoloLens.

Vuforia has a standalone version and a version baked directly into Unity. It has the following functionality:

  • Advanced Model Targets 360, with recognition powered by AI;
  • Model Targets with Deep Learning, which let you instantly recognize objects by shape using pre-existing 3D models and deep learning algorithms;
  • Image Targets, the easiest way to put AR content on flat objects;
  • Multi Targets, for objects with flat surfaces and multiple sides;
  • Cylinder Targets, for placing AR content on objects with cylindrical shapes, like bottles;
  • Ground Plane, a part of Smart Terrain that enables digital content to be placed on floors and tabletop surfaces;
  • VuMarks, which let you identify and add content to a series of objects;
  • Object Targets, for scanning an object;
  • Static and Adaptive Modes, for stationary and moving objects;
  • Simulation Play Mode, which lets developers "walk through" or around the 3D model and see the final AR experience from their computer;
  • the Vuforia Area Target Creator app, which enables us to scan and generate new Area Targets using depth-enabled mobile devices;
  • AR Session Recorder, which can record AR experiences on location, so the recording can be used with Playback mode in Unity for editing and updating;
  • and, of course, Vuforia Fusion and Vuforia Engine Area Targets.

Vuforia Fusion is a capability designed to solve the problem of fragmentation in AR-enabling technologies such as cameras, sensors, chipsets, and software frameworks like ARKit. With Vuforia Fusion, your application automatically provides the best experience possible, with no extra work required on your end.

Vuforia Engine Area Targets enable developers to use an entire space, be it a factory floor or a retail store, as an augmented reality target. Using the first supported device, the Matterport Pro2 camera, developers can create a detailed 3D scan of a desired location. Locations should preferably be indoors, mostly static, and no larger than 1,000 sqm (around 10,000 sqft). Once the scan produces a 3D model, it can be converted into an Area Target with the Vuforia Area Target Generator. This target can then be brought into Unity.

Vuforia API allows for a Static or Adaptive mode. When the real-world model remains stationary, like a large industrial machine, implementing the Static API will use significantly less processing power. This enables a longer lasting and higher performance experience for those models. For objects that won’t be stationary the Adaptive API allows for a continued robust experience.

The External Camera feature is a part of the Vuforia Engine Driver Framework. External Camera provides a new perspective on what’s possible with Augmented Reality. It allows Vuforia Engine to access external video sources beyond the camera equipped in phones and tablets. By using an independent camera, developers can create an AR experience that offers a first-person view from toys, robots or industrial tools.

Occlusion Management is one of the key features for building a realistic AR experience. When you're using Occlusion Management, Vuforia Engine detects and tracks targets, even when they’re partially hidden behind everyday barriers, like your hand. Special occlusion handling allows apps to display graphics as if they appear inside physical objects.

Vuforia supports Metal acceleration for iOS devices. You can also use the Vuforia Samples for your projects. For example, the Vuforia Core Samples library includes various scenes using Vuforia features, including a pre-configured Object Recognition scene that you can use as a reference and starting point for an Object Recognition application.


Here's an AR Foundation code snippet written in C#:

private void UpdatePlacementPose() {

    // Raycast from the center of the screen against detected planes
    var screenCenter = Camera.main.ViewportToScreenPoint(new Vector3(0.5f, 0.5f));
    var hits = new List<ARRaycastHit>();
    arOrigin.Raycast(screenCenter, hits, TrackableType.Planes);
    
    placementPoseIsValid = hits.Count > 0;

    if (placementPoseIsValid) {

        placementPose = hits[0].pose;
    
        // Keep the placement indicator upright, facing the camera's bearing
        var cameraForward = Camera.current.transform.forward;
        var cameraBearing = new Vector3(cameraForward.x, 
                                        0, 
                                        cameraForward.z).normalized;

        placementPose.rotation = Quaternion.LookRotation(cameraBearing);
    }
}

Vuforia SDK Pricing Options:

  • Free license – you just need to register for a free Development License Key

  • Basic license ($42/month, billed annually) – For Students

  • Basic + Cloud license ($99/month) – For Small Businesses

  • Agency Package (personal price) – 5 short-term licenses

  • Pro license (personal price) – For All Companies Types

Here are Pros and Cons.

|------------------------------|------------------------------|
|       "Vuforia PROs"         |        "Vuforia CONs"        | 
|------------------------------|------------------------------|
| Supports Android, iOS, UWP   | The price is not reasonable  |
|------------------------------|------------------------------|
| A lot of supported devices   | Poor developer documentation |
|------------------------------|------------------------------|
| External Camera support      | SDK has some issues and bugs |
|------------------------------|------------------------------|
| Webcam/Simulator Play Mode   | Doesn't support Geo tracking |
|------------------------------|------------------------------|




CONCLUSION:

There are no significant limitations when developing with PTC Vuforia 9.5 compared to ARCore 1.21 and ARKit 4.0. Vuforia is a great old product, it supports a wider list of devices – including Apple and Android devices not officially supported by ARKit/ARCore – and, of course, it supports several of the latest models of AR glasses.

But in my opinion, ARKit 4.0 with its Reality Family toolkit (RealityKit, Reality Composer and Reality Converter) has an extra bunch of useful up-to-date features that Vuforia 9.5 and ARCore 1.21 only partially have. ARKit 4.0 has much greater – if not astonishing – short-distance measurement accuracy than an ARCore-compatible device, within a room or on a street, without any need for calibration. This is achieved through the use of the Apple LiDAR scanner. ARCore 1.21 uses iToF cameras and the Depth API, and Vuforia 9.5 adds its Occlusion Management feature to the whole picture. These capabilities let you create a high-quality virtual mesh with OcclusionMaterial for real-world surfaces at the scene understanding stage. Such a mesh is ready for collision and ready to be lit. ARKit 4.0 now instantly detects nonplanar surfaces and surfaces with no features at all, such as texture-free white walls in poorly lit rooms.

Also, if you implement iBeacon tools, WorldMaps and support for GPS, it will help you eliminate tracking errors accumulated over time. ARKit's tight integration with the Vision and CoreML frameworks also makes a huge contribution to a robust AR toolset. And integration with Apple Maps allows ARKit 4.0 to put GPS Location Anchors outdoors with the highest precision currently possible.

Vuforia's measurement accuracy greatly depends on what platform you're developing for. Some Vuforia features are built on top of the tracking engine available on the specific platform (e.g., ARKit or ARCore). Even the popular Vuforia Chalk application uses Vuforia Fusion, which itself uses the ARKit/ARCore positional tracker.