
Hailo challenges Intel and Google with its new AI modules for edge devices

Hailo, a Tel Aviv-based startup best known for its high-performance AI chips, today announced the launch of its M.2 and Mini PCIe AI acceleration modules. Built around its Hailo-8 chip, the new modules are meant to be used in edge devices for anything from smart city and smart home solutions to industrial applications.

Today’s announcement comes about half a year after the company announced a $60 million Series B funding round. At the time, Hailo said it was raising those new funds to roll out its new AI chips, and with today’s announcement, it’s making good on this promise. In total, the company has now raised $88 million.

“Manufacturers across industries understand how crucial it is to integrate AI capabilities into their edge devices. Simply put, solutions without AI can no longer compete,” said Orr Danon, CEO of Hailo, in today’s announcement. “Our new Hailo-8 M.2 and Mini PCIe modules will empower companies worldwide to create new powerful, cost-efficient, innovative AI-based products with a short time-to-market – while staying within the systems’ thermal constraints. The high efficiency and top performance of Hailo’s modules are a true gamechanger for the edge market.”

Image Credits: Hailo

Developers can still use frameworks like TensorFlow and ONNX to build their models, and Hailo’s Dataflow compiler will handle the rest. One thing that sets Hailo’s chips apart is their architecture, which lets them automatically adapt to the needs of the neural network running on them.

Hailo is not shy about comparing its solution to that of heavyweights like Intel, Google and Nvidia. With 26 tera-operations per second (TOPS) and power efficiency of 3 TOPS/W, the company claims its edge modules can analyze significantly more frames per second than Intel’s Myriad-X and Google’s Edge TPU modules — all while also being far more energy efficient.
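Taken together, the two quoted figures imply a chip-level power draw: at 26 TOPS and 3 TOPS/W, the Hailo-8 would dissipate roughly 8.7 W at peak. A minimal back-of-the-envelope sketch (only the two announced figures are used; the competing modules' numbers are not given here, so no comparison is computed):

```python
# Sanity-check the announced Hailo-8 figures.
# The throughput and efficiency values come from the announcement;
# the division below is plain arithmetic, not vendor data.

def implied_power_watts(tops: float, tops_per_watt: float) -> float:
    """Peak power implied by a throughput figure and an efficiency figure."""
    return tops / tops_per_watt

HAILO8_TOPS = 26.0          # claimed peak throughput
HAILO8_TOPS_PER_WATT = 3.0  # claimed power efficiency

power = implied_power_watts(HAILO8_TOPS, HAILO8_TOPS_PER_WATT)
print(f"Implied peak power: {power:.1f} W")  # ~8.7 W
```

That single-digit-watt envelope is what lets a module in an M.2 or Mini PCIe form factor stay within a host system's thermal constraints, which is the point the announcement stresses.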

Image Credits: Hailo

The company is already working with Foxconn to integrate the M.2 module into its “BOXiedge” edge computing platform. Because it’s just a standard M.2 module, Foxconn was able to integrate it without any rework. Using the Hailo-8 M.2 solution, this edge computing server can process 20 camera streams at the same time.

“Hailo’s M.2 and Mini PCIe modules, together with the high-performance Hailo-8 AI chip, will allow many rapidly evolving industries to adopt advanced technologies in a very short time, ushering in a new generation of high performance, low power, and smarter AI-based solutions,” said Dr. Gene Liu, VP of Semiconductor Subgroup at Foxconn Technology Group.



from TechCrunch https://ift.tt/30kLHMG
via IFTTT
