
Deep Render believes AI holds the key to more efficient video compression

Chris Besenbruch, CEO of Deep Render, sees many problems with the way video compression standards are developed today. He thinks they aren’t advancing quickly enough, bemoans the fact that they’re plagued with legal uncertainty and decries their reliance on specialized hardware for acceleration.

“The codec development process is broken,” Besenbruch said in an interview with TechCrunch ahead of Disrupt, where Deep Render is participating in the Disrupt Battlefield 200. “In the compression industry, there is a significant challenge of finding a new way forward and searching for new innovations.”

Seeking a better way, Besenbruch co-founded Deep Render with Arsalan Zafar, whom he met at Imperial College London. At the time, Besenbruch was studying computer science and machine learning. He and Zafar collaborated on a research project involving distributing terabytes of video across a network, during which they say they experienced the shortcomings of compression technology firsthand.

The last time TechCrunch covered Deep Render, the startup had just closed a £1.6 million seed round ($1.81 million) led by Pentech Ventures with participation from Speedinvest. In the roughly two years since then, Deep Render has raised an additional several million dollars from existing investors, bringing its total raised to $5.7 million.

“We thought to ourselves, if the internet pipes are difficult to extend, the only thing we can do is make the data that flows through the pipes smaller,” Besenbruch said. “Hence, we decided to fuse machine learning and AI with compression technology to develop a fundamentally new way of compressing data, getting significantly better image and video compression ratios.”

Deep Render isn’t the first to apply AI to video compression. Alphabet’s DeepMind adapted a machine learning algorithm originally developed to play board games to the problem of compressing YouTube videos, leading to a 4% reduction in the amount of data the video-sharing service needs to stream to users. Elsewhere, there’s startup WaveOne, which claims its machine learning-based video codec outperforms all existing standards across popular quality metrics.

But Deep Render’s solution is platform-agnostic. To create it, Besenbruch says that the company compiled a data set of over 10 million video sequences on which it trained algorithms to learn to compress video data efficiently. Deep Render used a combination of on-premises and cloud hardware for the training, with the former comprising over a hundred GPUs.

Deep Render claims the resulting compression standard is 5x better than HEVC, a widely used codec, and can run in real time on mobile devices with a dedicated AI accelerator chip (e.g. the Apple Neural Engine in modern iPhones). Besenbruch says the company is in talks with three large tech firms — all with market caps over $300 billion — about paid pilots, though he declined to share names.

Eddie Anderson, a founding partner at Pentech and board member at Deep Render, shared via email: “Deep Render’s machine-learning approach to codecs completely disrupts an established market. Not only is it a software route to market, but their [compression] performance is significantly better than the current state of the art. As bandwidth demands continue to increase, their solution has the potential to drive vastly improved commercial performance for current media owners and distributors.”

Deep Render currently employs 20 people. By the end of 2023, Besenbruch expects that number will more than triple to 62.

Deep Render believes AI holds the key to more efficient video compression by Kyle Wiggers originally published on TechCrunch




