
Vision Pro for Devs: Easy to Start, but UI Not Revolutionary

Apple's visionOS, a new platform for Vision Pro, borrows heavily from existing 3D and iOS tooling. Will it be enough to entice developers?
Jun 9th, 2023

“Welcome to the era of spatial computing,” announced Apple as it unveiled its latest device, a pair of mixed-reality goggles called the Vision Pro. CEO Tim Cook described it as “a new kind of computer that augments reality by seamlessly blending the real world with the digital world.” A new operating system powers the device, called visionOS — which Apple says contains “the building blocks of spatial computing.”

If it’s “a new kind of computer,” as Apple claims, then that means a greenfield for developers. So what can devs expect from visionOS and Vision Pro? I watched a WWDC session entitled “Get started with building apps for spatial computing” to find out.

“By default, apps launch into the Shared Space,” began Apple’s Jim Tilander, an engineer on the RealityKit team. “This is where apps exist side-by-side, much like multiple apps on a Mac desktop. People remain connected to their surroundings through passthrough.” (Passthrough means seeing your real physical surroundings through the headset’s external cameras, rather than a fully virtual view.)

He then introduced three new concepts, all of them SwiftUI scenes: Windows, Volumes, and Spaces. SwiftUI has been around for four years, serving as Apple’s primary user interface framework across its various products. For visionOS, SwiftUI has been bolstered with “all-new 3D capabilities and support for depth, gestures, effects, and immersive scene types.”

Each of the three scenes is self-explanatory, but it’s worth noting that in addition to the “Shared Space” concept, Apple also has “Full Space,” which is when you want “a more immersive experience” for an application and so “only that app’s content will appear.”
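To make the three scene types concrete, here is a minimal sketch of how they might be declared in a visionOS app, using SwiftUI’s WindowGroup (with the volumetric window style for a Volume) and ImmersiveSpace. The view names are placeholders, not Apple’s template code:

```swift
import SwiftUI

@main
struct SpatialApp: App {
    var body: some Scene {
        // A Window: a conventional 2D pane in the Shared Space.
        WindowGroup(id: "main") {
            ContentView()
        }

        // A Volume: a bounded 3D container that can sit
        // alongside other apps in the Shared Space.
        WindowGroup(id: "globe") {
            GlobeView()
        }
        .windowStyle(.volumetric)

        // A Space: when opened as a Full Space, only this
        // app's content appears.
        ImmersiveSpace(id: "immersive") {
            ImmersiveView()
        }
    }
}
```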

It’s interesting to note that Apple appears to have a different definition of “presence” than Meta (née Facebook). Meta defines presence as “high fidelity digital representations of people that create a realistic sense of connection in the virtual world.” In other words, “presence” to Meta means being fully immersed in the virtual world. But based on a graphic shown in this session, “presence” to Apple means less immersion — it’s letting the physical world enter the view of your Vision Pro goggles.

Privacy Pros and Cons

Apple claims that the Vision Pro and visionOS platform treat user privacy as a core principle, while also “making it easy for you as a developer to leverage APIs to take advantage of the many capabilities of the device.”

Apple’s solution to preserving user privacy is to curate data and interactions for developers. Tilander gave two interesting examples of this.

“Instead of allowing apps to access data from the sensors directly, the system does that for you and provides apps with events and visual cues. For example, the system knows the eye position and gestures of somebody’s hands in 3D space and delivers that as touch events. Also, the system will render a hover effect on a view when it is the focus of attention, but does not communicate to the app where the person is looking.”
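In SwiftUI terms, a view opts into that system-drawn highlight with the standard hoverEffect modifier. A minimal sketch (the view and its tap action are illustrative):

```swift
import SwiftUI

struct PrivateSelectionView: View {
    var body: some View {
        Text("Select me")
            .padding()
            .contentShape(Rectangle())
            .hoverEffect() // The system draws the gaze highlight itself;
                           // the app is never told where the user looked.
            .onTapGesture {
                // The app sees only this tap event, delivered
                // after the person looks at the view and pinches.
            }
    }
}
```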

Sometimes “curated” data won’t be enough for developers. Tilander explained that “in those cases where you actually do need access to more sensitive information, the system will ask the people for their permission first.”
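As an example of what asking looks like in practice, visionOS’s ARKit has an explicit authorization call. The sketch below assumes an app that wants hand-tracking data, one of the more sensitive sources:

```swift
import ARKit

// Hedged sketch: request consent before reading hand-tracking data.
func startHandTracking() async {
    let session = ARKitSession()

    // The system presents the permission prompt on the app's behalf.
    let statuses = await session.requestAuthorization(for: [.handTracking])

    if statuses[.handTracking] == .allowed {
        // Only now may the app run a hand-tracking provider
        // and receive hand-pose updates.
    }
}
```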

Given how potentially invasive the Vision Pro is to people’s privacy (including the user’s own, since the device scans eyes for login and tracks gaze), the restrictions Apple has imposed on developers sound reasonable.

However, Google developer Brandon Jones pointed out on Twitter that “if you want to do AR apps, you must give Apple full rendering control.” While generally, he thinks this is a good thing — “You don’t want, for example, ads to be able to infer how much time a user spent looking at them” — he isn’t so excited about Apple “quietly re-inventing and side-stepping web standards in order to achieve that.”

In a nutshell, Apple’s privacy restrictions for Vision Pro are implemented at the OS level, giving Apple a great deal of control. Jones admitted that most developers will be comfortable with that, but he correctly noted that “Apple (already notorious for clamping down on what you can do with iOS) is doubling down on restricting the ways you can diverge from their chosen patterns.”

The Tools

“Everything starts with Xcode,” Tilander said, regarding how developers will build apps for visionOS. Xcode is Apple’s integrated development environment (IDE) and it comes with a simulator for Vision Pro and an enhanced “Instruments” performance analysis tool (which includes a new template, RealityKit Trace).

The frameworks to build 3D content are ARKit and RealityKit, which handle tracking, rendering, physics, animations, spatial audio, and more.
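RealityKit content plugs into SwiftUI through the new RealityView container. A minimal, hedged sketch, where “toy_robot” is a made-up asset name:

```swift
import SwiftUI
import RealityKit

struct RobotView: View {
    var body: some View {
        // RealityView bridges SwiftUI and RealityKit's entity system.
        RealityView { content in
            // Load a 3D model asynchronously; "toy_robot" is a
            // placeholder for an asset bundled with the app.
            if let robot = try? await Entity(named: "toy_robot") {
                content.add(robot)
            }
        }
    }
}
```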

For visionOS, Apple is introducing a new editor called Reality Composer Pro, which “allows you to preview and prepare 3D content for your apps.” A Reddit user described it as “like PowerPoint in AR,” so the emphasis is on ease of use.

Apple no doubt realizes it needs more than just its existing developers thinking about Vision Pro, so it has also partnered with Unity, the established 3D development platform. In the WWDC 23 opening keynote, one of the presenters noted that “popular Unity-based games and apps can gain full access to visionOS features, such as passthrough, high-resolution rendering, and native gestures.” Tilander confirmed in his session that no Unity plug-ins would be required, and that developers can simply “bring your existing content over.”

How to Get Started

To begin a new app, in Xcode you can choose the default app template for “xrOS” (apparently the shortened version of visionOS). From there, you select a “scene type,” with the default being “Window.” This is in a Shared Space by default, but you can change that.

“And when you finish the assistant,” continued Tilander, “you are presented with an initial working app in SwiftUI that shows familiar buttons mixed in with a 3D object rendered with RealityKit.”
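That mix of familiar controls and 3D content might look something like the following sketch; “Scene” stands in for whatever asset the template bundles, so treat the details as illustrative rather than Apple’s exact starter code:

```swift
import SwiftUI
import RealityKit

struct ContentView: View {
    @State private var enlarged = false

    var body: some View {
        VStack {
            // Familiar 2D SwiftUI controls...
            Toggle("Enlarge", isOn: $enlarged)
                .toggleStyle(.button)

            // ...next to a 3D model rendered by RealityKit.
            Model3D(named: "Scene") { model in
                model
                    .resizable()
                    .scaleEffect(enlarged ? 1.5 : 1.0)
            } placeholder: {
                ProgressView()
            }
        }
        .padding()
    }
}
```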

You can also easily convert iPhone or iPad apps into visionOS apps, noted Tilander.

Developers can expect more resources, including a developer kit, in July. An initial visionOS SDK will be available in Xcode by the end of this month.

Apple Keen for Devs to Jump Into 3D

As usual when Apple announces a new device, a lot of thought has been put into the developer tools and techniques for it. There’s nothing in visionOS that looks out of reach for existing iOS developers, so it’s a fairly seamless transition for Apple’s developer community.

Of course, the drawback is that Apple is enticing developers into yet another closed developer ecosystem. visionOS will have its own App Store, we were told at WWDC 23, but you can guarantee it won’t be any more open than the iOS App Store.

The final thing to note for developers is that the user interface really isn’t that different from iPhone, at least for the first-generation Vision Pro. “They’re still just rectangles on the internet,” as one Twitter user put it. As others have pointed out, this is probably because Apple wants to make it easy for its existing developers to start building on visionOS. Now, from a user point of view, early reports suggest that Vision Pro may indeed be magical. But from a developer point of view, Vision Pro isn’t that revolutionary — yet.
