
Trueface raises $3.7M to recognise that gun, as it’s being pulled, in real time

Globally, millions of cameras are deployed by companies and organizations every year. All you have to do is look up. Yes, there they are! But the petabytes of data these cameras collect really only become useful after something untoward has occurred. They can very rarely influence an action in real time.

Trueface is a US-based computer vision company that turns camera data into so-called ‘actionable data’ using machine learning and AI, enabling partners to perform facial recognition, threat detection, age and ethnicity detection, license plate recognition, emotion analysis and object detection. That means, for instance, recognising a gun as it’s pulled in a dime store. Yes folks, welcome to your brave new world.

The company has now raised $3.7M from Lavrock Ventures, Scout Ventures and Advantage Ventures to scale the team and grow partnerships and market share.

Trueface claims it can identify enterprises’ employees for access to a building, detect a weapon as it’s being wielded, or stop fraudulent spoofing attempts. Quite some claims.

However, it’s evidently good enough for the US Air Force, which recently partnered with the company to enhance base security.

Trueface’s computer vision software was originally embedded in a hardware access control device, Chui, one of the first ‘intelligent doorbells’, which was covered by TechCrunch’s Anthony Ha in 2014.

Trueface has multiple solutions that run on an array of clients’ infrastructures, including a dockerized container, SDKs that partners can use to build their own solutions, and a plug-and-play option that requires no code to get up and running.
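To give a rough idea of what the dockerized, on-premise deployment model implies for an integrator, here is a minimal sketch of a client calling a locally running inference container over HTTP. The endpoint path, port and response fields are hypothetical illustrations, not Trueface’s published API.

```python
# Hypothetical sketch: the host, endpoint and JSON fields below are invented
# for illustration and are not Trueface's actual API.
import requests


def detect(image_path: str, host: str = "http://localhost:8080") -> list[dict]:
    """Send one camera frame to a locally running inference container
    and return any detections (e.g. faces or weapons)."""
    with open(image_path, "rb") as f:
        resp = requests.post(f"{host}/v1/detect", files={"image": f}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("detections", [])


if __name__ == "__main__":
    for det in detect("frame_0001.jpg"):
        print(det.get("label"), det.get("confidence"))
```

The point of this pattern is that frames never leave the customer’s network: the container runs on the client’s own hardware, and only the detection metadata is passed on to whatever system acts on it.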

The solution can be deployed in scenarios ranging from fintech, healthcare and retail to humanitarian aid, age verification, digital identity verification and threat detection. Shaun Moore and Nezare Chafni are the co-founders and serve as CEO and CTO, respectively.

The computer vision market was valued at USD 9.28 billion in 2017 and is projected to reach USD 48.32 billion by the end of 2023.

San Francisco recently banned the use of facial recognition by city agencies, and there are daily news stories about the privacy concerns around the technology, especially with regard to how China is using computer vision.

However, Trueface is only deployed ‘on-premise’ and includes features like ‘fleeting data’ and blurring for people who have not opted in. It’s good to see a company building in such controls from the word go.
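To illustrate what blurring non-opted-in people can look like in practice, here is a generic OpenCV sketch that blurs every detected face whose identity is not on an opt-in list. This is an assumption-laden illustration of the concept, not Trueface’s actual implementation.

```python
# Generic illustration only: a simple way to blur faces of people who have
# not opted in. This is not Trueface's implementation.
import cv2


def blur_unconsented_faces(frame, face_boxes, identities, consented_ids):
    """Blur each detected face whose recognised identity is not opted in.

    frame:         BGR image (numpy array) from a camera.
    face_boxes:    list of (x, y, w, h) rectangles from a face detector.
    identities:    recognised identity IDs (or None), aligned with face_boxes.
    consented_ids: set of IDs that have explicitly opted in.
    """
    for (x, y, w, h), identity in zip(face_boxes, identities):
        if identity not in consented_ids:
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```

Combined with keeping data ‘fleeting’ (discarded shortly after processing), this kind of control limits what the system retains about people who never agreed to be recognised.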

However, it’s then up to the company you work for not to require you to sign a statement saying you are happy to have your face recognized. Interesting times, huh?

And if you want that job, well, that’s a whole other story, as I’m sure you can imagine.



from TechCrunch https://ift.tt/311WKba
via IFTTT
