
Rana el Kaliouby and Alexei Efros will be speaking at TC Sessions: Robotics + AI April 18 at UC Berkeley

TechCrunch’s third robotics event is just over two and a half months away, and it’s already shaping up to be a doozy. We’ve already announced Anca Dragan, Melonee Wise, Hany Farid and Peter Barrett for our event and have an exciting pair of new names to share with you.

UC Berkeley’s Alexei Efros and Affectiva CEO Rana el Kaliouby will be joining us at Zellerbach Hall on April 18 for TC Sessions: Robotics + AI.

Alexei Efros is a professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences and a member of the school’s Artificial Intelligence Research Lab. His work focuses on computer vision, graphics and computational photography, using visual data to better understand the world. Efros also researches robotics, machine learning and the use of computer vision in the humanities. Prior to joining UC Berkeley, he was a member of CMU’s Robotics Institute.

Rana el Kaliouby is the co-founder and CEO of Affectiva, an MIT Media Lab spin-off that creates software designed to recognize human emotions. El Kaliouby designed the startup’s underlying technology, which helps bring more depth and understanding to facial recognition. Prior to co-founding the company, she worked as an MIT research scientist, co-founding the school’s Autism & Communication Technology Initiative.

Early-bird tickets are on sale now for just $249 — that’s $100 off full-priced tickets. Buy yours today here. Students can book a deeply discounted ticket for just $45.



from TechCrunch https://tcrn.ch/2CZBVmr
via IFTTT

