
‘Lion King’ director Jon Favreau explains why he’s remaking an animated classic

Disney has been cranking out live-action remakes of its animated classics for the past few years — in fact, Guy Ritchie’s take on “Aladdin” is currently at the top of the box office.

But the distinction between live action and animation gets tricky with the growing reliance on computer-generated visual effects. “The Jungle Book,” for example, mostly features a single live actor interacting with CGI animals. And “The Lion King” (scheduled for release on July 19) takes that approach even further: Everything you see onscreen has been created on a computer.

I got a chance to visit the “Lion King” set in December 2017, where I participated in a group interview with Jon Favreau, who directed both “The Jungle Book” and this new film. When asked whether he considers this a live action or animated movie, he said, “It’s difficult, because it’s neither, really.”

“There’s no real animals and there’s no real cameras and there’s not even any performance that’s being captured,” Favreau acknowledged. “There’s underlying [performance] data that’s real, but everything is coming through the hands of artists.”

At the same time, he argued that it would be “misleading” to call this an animated film. For one thing, the visuals aren’t stylized in the way you’d expect in a cartoon. Instead, the aim was to create animals that look even more realistic than the ones in “The Jungle Book” — Favreau said the footage should feel like “a BBC documentary,” albeit one where the animals talk and sing.

“Between the quality of the rendering and the techniques we’re using, it starts to hopefully feel like you’re watching something that’s not a visual effects production, but something where you’re just looking into a world that’s very realistic,” he continued. “And emotionally, feels as realistic as if you’re watching live creatures. And that’s kind of the trick here, because I don’t think anybody wants to see another animated ‘Lion King,’ because it still holds up really, really well.”

To achieve this, Favreau said he wanted the production to have “the feeling of a live action shoot,” including the way he shot with the actors (Donald Glover plays Simba, Beyoncé plays Nala and James Earl Jones returns as Mufasa). Given the goal of creating realistic animals, Favreau said the traditional motion capture approach didn’t make sense, but he still wanted the actors to “overlap and perform together and improvise and do whatever we want.”

So he brought them to a soundproofed stage, and they performed “standing up, almost like you would in a motion capture stage — except no tracking markers, no data, no metadata’s being recorded, it’s only long-lens video cameras to get their faces and performances.”

Favreau compared this to shooting with Robert Downey Jr. and Gwyneth Paltrow on the original “Iron Man,” where he “tried to have multiple cameras and let Gwyneth and Robert improv when I could, because there’s so much of the movie you can’t change, because it’s visual effects.”

And even when he wasn’t working with actors, Favreau still “shot” the scenes with live-action cinematographer Caleb Deschanel. That meant building a virtual world using the Unity game engine, then adding the digital equivalent of real-world production elements like lights and dolly tracks to block and film scenes in that world. The filmmakers could use iPads to add, move and eliminate those elements, and could put on Vive VR headsets to explore the world.

“That’s the way I learned how to direct,” Favreau said. “It wasn’t sitting, looking over somebody’s shoulder [on a] computer. It was being in a real location. There’s something about being in a real 3D environment that makes it — I don’t know, just the parts of my brain are firing that fire on a real movie.”

To be clear, those virtual scenes aren’t what you’ll actually see onscreen. Instead, they provide guidance for the animators to create far more detailed shots. Favreau said that in a sense, he was trying to resist the complete freedom that the computer-generated approach can bring.

“I find that what the flexibility of digital production has done is given the opportunity for people to postpone being decisive,” he said. “It used to be if, you know, you built a big animatronic dinosaur, you had to make sure you got that shot right and framed right and it worked … And so, part of this experiment is to see if we really lock in early, as animated films do, and spend all of our time refining.”

As for why he’s putting in all this effort to remake a film that, by his own admission, holds up really well, Favreau said he was inspired by the success of the stage play: “People will go see the stage show, and they’ll also see the movie, and you could love both of them and see them as two different things.” Similarly, he said his team set out to “create something that feels like a completely different medium than either of those two.”



from TechCrunch https://tcrn.ch/2wqo2ei
