
Building a robot? Occipital wants to provide the eyes

A few years back, Occipital released a sensor that turned your iPad into a portable 3D scanner. Called the Occipital Structure, it packed lasers and cameras into a snap-on package that let the iPad be used for anything from accurately measuring a room’s dimensions to building 3D models for prosthetic limbs. The catch? For the most part, it only worked with iOS.

Today Occipital is announcing a more flexible (and powerful) alternative: Structure Core. With built-in motion sensors and compatibility with Windows, Linux, Android or macOS, it’s meant for projects where cramming in an iPad just doesn’t make sense. Think robots, or mixed reality headsets.

By blasting out an array of laser dots and reading them back with the Core’s onboard cameras, the sensor lets Structure’s SDK map its environment and determine its position within it. You could, for example, use the depth sensors to have your robot build a map of a room, then use the SDK’s built-in route tool to get it from point A to point B on command (without bashing into everything along the way).
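To make the map-then-route idea concrete, here is a minimal sketch of the general pattern: a 2D occupancy grid built from depth observations, plus a breadth-first route search that avoids occupied cells. The function names and grid layout are hypothetical illustrations; this is not Occipital's actual SDK API.

```python
# Illustrative only: a toy occupancy grid and breadth-first route search,
# standing in for the kind of map-and-navigate loop described above.
# None of these names come from Occipital's real API.
from collections import deque

def build_grid(obstacle_points, width, height):
    """Mark cells the depth sensor saw as occupied; everything else is free."""
    grid = [[0] * width for _ in range(height)]
    for x, y in obstacle_points:
        grid[y][x] = 1  # 1 = obstacle, 0 = free space
    return grid

def find_route(grid, start, goal):
    """Breadth-first search from start to goal, avoiding occupied cells."""
    height, width = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and grid[ny][nx] == 0 and (nx, ny) not in came_from:
                came_from[(nx, ny)] = cell
                queue.append((nx, ny))
    if goal not in came_from:
        return None  # no collision-free route exists
    # Walk back from the goal to recover the path.
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return list(reversed(path))

# Example: a 5x5 room with a short wall the robot has to route around.
grid = build_grid([(2, 1), (2, 2), (2, 3)], width=5, height=5)
print(find_route(grid, start=(0, 2), goal=(4, 2)))
```

A real pipeline would of course work in three dimensions from live depth frames and use a smarter planner, but the structure (sense, map, search for a collision-free path) is the same.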

Beyond no longer being tied to iOS, Structure Core also beefs things up under the hood. Whereas the original Structure sensor uses USB 2.0/Lightning, Structure Core taps USB 3.0, which, the company tells me, lets the SDK pull considerably more sensor data, faster. They’ve also switched from a rolling shutter to a global shutter (meaning every pixel on the sensor is exposed at the same time, helping to prevent tearing and distortion on fast-moving objects), and the field of view has been greatly improved.

Occipital isn’t the first to dabble in this space, of course. DIYers have been repurposing Microsoft’s Kinect hardware (RIP) to give their robots basic vision for years; meanwhile, Intel has a division, RealSense, focused on drop-in vision boards. But with companies like Misty Robotics turning to Occipital to give their robots sight, it made sense to take the iPad out of the equation and offer something that could stand on its own.

Structure Core will come in two forms: enclosed, or bare. The first wraps the chipset/sensors/etc. in aluminum in a way that’s ready to be strapped right into a project; the second sheds the enclosure and gives you on-board mounting points for when you’re looking for something more custom.

Pricing is a bit peculiar for a piece of hardware: the sooner you need it, the more it’ll cost. A limited run of early units will ship in the next few weeks for $600 each; the next batch goes out in January, for $499. By March, the company expects the price to officially settle at $399 each. Occipital also tells me it’s open to volume pricing if a team needs a bunch of units, though those prices aren’t available yet.


