
Google brings Nvidia’s Tesla V100 GPUs to its cloud

Google today announced that Nvidia’s high-powered Tesla V100 GPUs are now available for workloads on both Compute Engine and Kubernetes Engine. For now, this is only a public beta, but for those who need Google’s full support for their GPU workloads, the company today also announced that the slightly less powerful Nvidia P100 GPUs are now out of beta and generally available.

The V100 GPUs remain the most powerful chips in Nvidia’s lineup for high-performance computing. They have been around for quite a while now, and Google is actually a bit late to the game here: AWS and IBM already offer V100s to their customers, and on Azure they are currently in private preview.

While Google stresses that it also uses NVLink, Nvidia’s fast interconnect for multi-GPU processing, it’s worth noting that its competitors do this, too. NVLink promises GPU-to-GPU bandwidth that’s nine times faster than traditional PCIe connections, resulting in a performance boost of up to 40 percent for some workloads, according to Google.

All of that power comes at a price, of course. An hour of V100 usage costs $2.48, while the P100 will set you back $1.46 per hour (these are the standard prices, with the preemptible machines coming in at half that price). In addition, you’ll also need to pay Google to run your regular virtual machine or containers.
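To put those numbers in perspective, here is a minimal Python sketch of the GPU portion of the bill, using only the hourly rates quoted above; the run length and GPU count in the example are hypothetical, and the cost of the underlying VM or containers is not included.

```python
# Hourly GPU rates quoted in the announcement (USD per GPU-hour).
V100_STANDARD = 2.48
P100_STANDARD = 1.46
PREEMPTIBLE_DISCOUNT = 0.5  # preemptible machines come in at half price


def gpu_cost(hours: float, gpus: int, rate: float, preemptible: bool = False) -> float:
    """Return the GPU portion of the bill for a given run (VM cost is extra)."""
    multiplier = PREEMPTIBLE_DISCOUNT if preemptible else 1.0
    return hours * gpus * rate * multiplier


# Hypothetical example: a 10-hour training run on 8 V100s.
print(gpu_cost(10, 8, V100_STANDARD))        # 198.40 at standard pricing
print(gpu_cost(10, 8, V100_STANDARD, True))  # 99.20 on preemptible machines
```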

V100 machines are currently available in two configurations, with either one or eight attached GPUs; support for two- and four-GPU configurations is coming in the future. The P100 machines come with one, two or four attached GPUs.
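For reference, here is a rough sketch of how that GPU count is expressed when creating an instance through the Compute Engine API. This is an illustration only: the instance name, zone and machine type are hypothetical placeholders, and the field names reflect the API as generally documented rather than anything stated in the announcement.

```python
# Hypothetical fragment of a Compute Engine instances.insert request body.
ZONE = "us-central1-a"  # assumed zone; GPU availability varies by region

instance_body = {
    "name": "v100-training-vm",
    "machineType": f"zones/{ZONE}/machineTypes/n1-standard-8",
    "guestAccelerators": [
        {
            # nvidia-tesla-v100 / nvidia-tesla-p100 are the accelerator types
            # corresponding to the GPUs discussed above.
            "acceleratorType": f"zones/{ZONE}/acceleratorTypes/nvidia-tesla-v100",
            "acceleratorCount": 8,  # currently one or eight V100s per machine
        }
    ],
    # GPU instances cannot be live-migrated, so host maintenance terminates them.
    "scheduling": {"onHostMaintenance": "TERMINATE", "automaticRestart": True},
}
```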



from TechCrunch https://ift.tt/2HGBsH4
via IFTTT
