
First U.S. apps based on Google and Apple Exposure Notification System expected in “coming weeks”

Google Vice President of Engineering Dave Burke provided an update about the Exposure Notifications System (ENS) that Google developed in partnership with Apple as a way to help public health authorities supplement contact tracing efforts with a connected solution that preserves privacy while alerting people of potential exposure to confirmed cases of COVID-19. In the update, Burke notes that the company expects “to see the first set of these apps roll out in the coming weeks” in the U.S., which may be a tacit response to critics who have pointed out that we haven’t seen much in the way of actual products built on the technology since it launched in May.

Burke writes that 20 states and territories across the U.S. are currently “exploring” apps that make use of ENS, which together represent nearly half (45%) of the American population. He also shared recent updates and improvements made to the Exposure Notification API, as well as to the surrounding documentation and information the companies have published to answer questions from state health agencies and, hopefully, make the system’s use and privacy implications more transparent.

The ENS API now supports exposure notifications between countries, a feature Burke says was added based on feedback from nations that have already launched apps built on the technology (including Canada, as of today, as well as several European countries). The API is also now better at using Bluetooth values specific to a wider range of devices, improving the accuracy of nearby device detection. Burke also says reliability has been improved for both apps and debugging tools, which should help public health authorities and their developer partners more easily build apps that actually use ENS.
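To make this concrete, here is a minimal Kotlin sketch of how a public health authority’s Android app might opt a user in through the Exposure Notification client that ships with Google Play services. It assumes the play-services-nearby ExposureNotificationClient is available on the device, simplifies the consent flow, and should be checked against the current API documentation rather than read as a definitive integration:

```kotlin
import android.app.Activity
import com.google.android.gms.nearby.Nearby

// Illustrative helper: checks whether the Exposure Notifications System is
// already enabled and, if not, asks the system to turn it on (which prompts
// the user for consent). Method names follow the published Play services API.
fun enableExposureNotifications(activity: Activity) {
    val client = Nearby.getExposureNotificationClient(activity)

    client.isEnabled()
        .addOnSuccessListener { enabled ->
            if (!enabled) {
                client.start()
                    .addOnSuccessListener {
                        // ENS is now broadcasting and scanning for rotating
                        // Bluetooth identifiers; the app never sees location data.
                    }
                    .addOnFailureListener { e ->
                        // Usually a resolvable consent prompt; surface it to the user.
                    }
            }
        }
        .addOnFailureListener { e ->
            // The Exposure Notifications module in Play services is unavailable.
        }
}
```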

Burke adds that developers have asked for more detail about how ENS works under the hood, so the companies have published public-facing guides that walk health authorities through setting up test verification servers, released code that reveals the system’s underlying workings, and shared information about what data is actually collected (in a de-identified form), allowing for much more transparent debugging and verification that apps are functioning properly.

Google also explains why it requires that an Android device’s location setting be turned on to use Exposure Notifications, even though apps built on the API are explicitly forbidden from collecting location data. Essentially, it’s a legacy requirement that Google is removing in Android 11, which is set to be released soon. In the meantime, however, Burke says that even with the location setting turned on, no app that uses ENS will actually be able to see or receive any location data.
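For illustration, an app targeting devices below Android 11 might check that system-wide location toggle before asking the user to enable it. This is a hypothetical helper built on standard AndroidX calls, not part of the ENS API itself:

```kotlin
import android.content.Context
import android.location.LocationManager
import android.os.Build
import androidx.core.location.LocationManagerCompat

// Returns true if the legacy location-setting requirement is satisfied:
// always true on Android 11+ (where the requirement is removed), otherwise
// only if the system location toggle is on.
fun locationRequirementSatisfied(context: Context): Boolean {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) return true

    val locationManager =
        context.getSystemService(Context.LOCATION_SERVICE) as LocationManager
    return LocationManagerCompat.isLocationEnabled(locationManager)
}
```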



from TechCrunch https://ift.tt/3fpsNsJ
via IFTTT
