
Medivis gets FDA approval for its augmented reality surgical planning toolkit

Augmented reality is coming to the operating theater sooner than many may have predicted.

Medivis, which launched its product suite earlier this year, has now received approval from the Food and Drug Administration and will begin rolling out its service to hospitals around the country.

The SurgicalAR platform is a visualization tool for guiding surgical navigation; the company claims it can decrease complications and improve patient outcomes while lowering surgical costs.

The New York-based company, founded by Osamah Choudhry and Christopher Morley, who met as senior residents at NYU Medical Center, raised $2.3 million in financing led by Initialized Capital and has secured partnerships with Dell and Microsoft to supply its hardware.

“Holographic visualization is the final frontier of surgical imaging and navigation,” said Osamah Choudhry, a trained neurosurgeon who serves as the chief executive at Medivis, in a statement. “The surgical world continues to primarily rely on two-dimensional imaging technology to understand and operate on incredibly complex patient pathology. Medivis introduces advancements in holographic visualization and navigation to fundamentally advance surgical intervention, and revolutionize how surgeons safely operate on their patients.”

In addition to its hardware partnership with Microsoft, Medivis has also lined up Verizon (whose media group owns TechCrunch) as a partner for its much-ballyhooed 5G network.

The company has also launched an augmented reality toolkit for medical education. The AnatomyX platform is available on HoloLens and Magic Leap devices and is already in use at West Coast University.

Medivis is one of a number of companies looking to bring new technologies like AR and VR into the OR.

Vicarious Surgical is another upstart with a vision for medicine's future that includes augmented or extended reality. That company is combining visualization tools with robotics to enable remote surgeries that could, one day, happen across the country or across the globe.

What these technologies have in common, and the reason Verizon is likely very happy to partner with a company like Medivis, is the huge amount of bandwidth that will be required to make their visions of the future come true.
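To put rough numbers on that claim, here is a back-of-envelope sketch in Python. Every figure in it, the volume size, the refresh rate, the compression ratio, is an illustrative assumption, not a specification from Medivis or Verizon:

```python
# Back-of-envelope estimate of the bandwidth a streamed AR surgical
# visualization might need. All figures below are illustrative
# assumptions, not Medivis or Verizon specifications.

# Assumption: a CT-derived volume of 512 x 512 x 512 voxels, 16 bits each.
voxels = 512 ** 3
bits_per_voxel = 16
volume_bits = voxels * bits_per_voxel  # one uncompressed volume: ~2.1 Gb

# Assumption: the headset refreshes the rendered volume 30 times per second.
frames_per_second = 30
raw_bps = volume_bits * frames_per_second

# Assumption: a generous 100:1 ratio from volumetric compression.
compressed_bps = raw_bps / 100

print(f"Uncompressed stream: {raw_bps / 1e9:.1f} Gbps")
print(f"With 100:1 compression: {compressed_bps / 1e9:.2f} Gbps")
```

Even granting that generous compression ratio, the sketch lands in the hundreds of megabits per second, sustained, which is well beyond what a typical 4G connection delivers and squarely in the territory 5G is pitched for.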

As high-speed networks begin cropping up, the attendant use cases haven't kept pace. And new visualization tools that hoover up data are just the thing to keep money flowing into my corporate overlord's pockets.

Not that it's a bad thing. As Medivis' chief operating officer, Dr. Christopher Morley, said in a statement: "We are achieving this by rethinking core limitations in current medical visualization pipelines, and continuously pushing the limits of what's possible."



from TechCrunch https://tcrn.ch/2EIf68C
via IFTTT
