Apple's Vision Pro SDK is now available, in-person developer labs launch next month

Time to get a jump on all of that spatial computing. Apple today announced that its visionOS software development kit is now available, allowing third parties to begin building content for the Vision Pro. The SDK is available at least half a year before the headset officially goes on sale in the U.S., priced at $3,500.

The company is banking on developer interest to help drive excitement around the system, which was met with a lukewarm reception when it was unveiled at WWDC earlier this month. Content has been a major sticking point throughout years of VR and AR development, and Apple is no doubt counting on a stocked App Store by the time the system arrives in early 2024.

“Developers can get started building visionOS apps using the powerful frameworks they already know, and take their development even further with new innovative tools and technologies like Reality Composer Pro, to design all-new experiences for their users,” Apple’s vice president of Worldwide Developer Relations, Susan Prescott, said in a release. “By taking advantage of the space around the user, spatial computing unlocks new opportunities for our developers, and enables them to imagine new ways to help their users connect, be productive, and enjoy new types of entertainment.”

The SDK is built on top of the same basic framework as Apple’s various other operating systems, utilizing familiar dev tools, including Xcode, SwiftUI, RealityKit, ARKit and TestFlight. The company is clearly hoping to lower the barrier to entry for existing developers. The path of least resistance seems to be effectively porting existing software over to the new platform (see also the company’s Game Porting Toolkit for the Mac).
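
Because visionOS shares SwiftUI with Apple’s other platforms, much of an existing iOS view hierarchy should compile unchanged, with spatial-only touches gated behind compilation conditions. Here is a minimal sketch, assuming a hypothetical shared view (the names and text are illustrative, not Apple sample code):

import SwiftUI

// Hypothetical view shared between an existing iOS app and its visionOS port.
// Most of it compiles unchanged; platform-specific tweaks sit behind #if os().
struct GreetingView: View {
    var body: some View {
        VStack(spacing: 16) {
            Text("Hello, spatial computing")
                .font(.title)
            #if os(visionOS)
            // visionOS-only flourish: push this text slightly toward the viewer.
            Text("Running on Vision Pro")
                .offset(z: 20)
            #else
            Text("Running on iPhone or iPad")
            #endif
        }
        .padding()
    }
}

Since the platform condition compiles away everything the other OS doesn’t need, a single codebase can, in principle, target the headset alongside iPhone and iPad.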

Image Credits: Apple

Spatial computing windows are built in Swift, for example. Apple notes on its developer page:

By default, apps launch into the Shared Space, where they exist side by side — much like multiple apps on a Mac desktop. Apps can use windows and volumes to show content, and the user can reposition these elements wherever they like. For a more immersive experience, an app can open a dedicated Full Space where only that app’s content will appear. Inside a Full Space, an app can use windows and volumes, create unbounded 3D content, open a portal to a different world, or even fully immerse people in an environment.
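
In practice, those concepts map onto SwiftUI scene types in the SDK. Here is a minimal sketch of an app that launches a window into the Shared Space and opens a dedicated Full Space on demand; the app structure, scene identifier and sphere content are illustrative assumptions rather than sample code from Apple:

import SwiftUI
import RealityKit

@main
struct SpatialApp: App {
    var body: some Scene {
        // A regular window: appears in the Shared Space alongside other apps.
        WindowGroup {
            ContentView()
        }

        // A Full Space: while open, only this app's content is visible,
        // and it may place unbounded 3D content around the user.
        ImmersiveSpace(id: "immersive") {
            RealityView { content in
                // Illustrative content: a small sphere floating in front of the user.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.2))
                sphere.position = [0, 1.5, -1]
                content.add(sphere)
            }
        }
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Full Space") {
            Task { _ = await openImmersiveSpace(id: "immersive") }
        }
    }
}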

Questions remain around how effective such ports will ultimately be in a three-dimensional plane or “infinite canvas” — borrowing a phrase from comics scholar Scott McCloud. To ease the growing pains even further, the company will begin opening “developer labs” in a variety of cities next month, including Cupertino, London, Munich, Shanghai, Singapore and Tokyo.

That’s designed, in part, to address one of the biggest pain points at the moment: getting an extremely expensive and unreleased headset in front of developers. Teams will be able to test their apps on the hardware on-site, or apply for hardware developer kits to test outside of the official locations.

In addition to existing developer tools, Apple is introducing Reality Composer Pro. The Xcode feature makes it easier to preview 3D models, images, sounds and animation on the headset. There’s a simulator, as well, which offers a virtual approximation without the actual hardware. Unity development tools will be added to the mix starting next month. That’s good news, as gaming experiences were conspicuously missing from the original presentation.

Image Credits: Complete HeartX by Elsevier Health.

Today’s announcement also lends credence to the notion that enterprise is going to be a key focus for the Pro’s first iteration.

“Manufacturers can use AR solutions from PTC to collaborate on critical business problems by bringing interactive 3D content into the real world — from a single product, to an entire production line,” said Stephen Prideaux-Ghee, AR/VR CTO of digital product development firm PTC. “With Apple Vision Pro, stakeholders across departments and in different locations can review content simultaneously to make design and operation decisions. This capability will unlock a level of collaboration previously not possible.”

Apple has promised more information and tools in the coming months.

