We’ve probably all been there: you’ve been poking around your phone for an hour, deep in some Google research rabbit hole. You finally find a link that almost certainly has the info you’ve been looking for. You tap it… aaaand it’s a fifty-page PDF. Now you get to pinch and zoom your way through a document that’s clearly not meant for a screen that fits in your hand.
Given that the file format is approaching its 30th birthday, it makes sense that PDFs aren’t exactly built for modern mobile devices. But neither PDFs nor smartphones are going away anytime soon, so Adobe has been working on a way to make them play nicely together.
This morning Adobe is launching a feature it calls “Liquid Mode.” Liquid Mode taps Adobe’s AI engine, Sensei, to analyze a PDF and automatically rebuild it for mobile devices. It uses machine learning to chew through the PDF and tries to work out what’s what — like the font changes that indicate a new section is starting, or how data is being displayed in a table — and reflow it all for smaller screens.
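To get a rough feel for the kind of signal being mined here, consider a stripped-down heuristic: scan the text spans in a PDF and treat anything set noticeably larger than the body text as a heading. The sketch below does that with the open-source PyMuPDF library. It is purely illustrative and has nothing to do with Adobe’s actual Sensei models; the 1.3× size threshold, the guess_headings name and the report.pdf filename are all invented for the example.

```python
# A minimal sketch of the general idea -- not Adobe's implementation.
# Assumes the open-source PyMuPDF library ("fitz") and a crude heuristic:
# spans whose font size is well above the document's median body size
# are treated as headings that start a new section.
import statistics
import fitz  # PyMuPDF: pip install pymupdf

def guess_headings(pdf_path):
    doc = fitz.open(pdf_path)
    spans = []
    for page_number, page in enumerate(doc, start=1):
        for block in page.get_text("dict")["blocks"]:
            for line in block.get("lines", []):   # image blocks have no "lines"
                for span in line["spans"]:
                    text = span["text"].strip()
                    if text:
                        spans.append((page_number, span["size"], text))
    if not spans:
        return []
    body_size = statistics.median(size for _, size, _ in spans)
    # Anything noticeably larger than the body text is probably a heading.
    return [(page, text) for page, size, text in spans if size > body_size * 1.3]

for page, heading in guess_headings("report.pdf"):
    print(f"p.{page}: {heading}")
```

A real system obviously has to cope with far messier material, such as tables, lists, multi-column layouts and scanned pages, which is where the machine learning comes in.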
After a few months of quiet testing, Liquid Mode is rolling out publicly in Adobe’s Acrobat Reader app for iOS and Android today, with plans to bring it to desktop later. Adobe CTO Abhay Parasnis also tells me the company has been working on an API that’ll allow similar functionality to be rolled into non-Adobe apps down the road.
When you open a PDF in Acrobat Reader, the app will try to determine whether the file will work with Liquid Mode; if so, the Liquid Mode button lights up. Tap the button and the file is sent to Adobe’s Document Cloud for processing. Once processing is complete, you can tweak things like font size and line spacing to your liking. Liquid Mode also uses the headings and structure it detects to build a tappable table of contents where none existed before, letting you hop quickly from section to section. The whole thing is non-destructive: step back out of Liquid Mode and you’re looking at the original, unmodified PDF.
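For a sense of what “reflowing” plus a tappable table of contents might look like in the simplest possible terms, here is a toy sketch that takes a list of detected headings (for example, the output of the guess_headings sketch above) and emits a single mobile-friendly HTML page with anchor links. Again, this is a hypothetical illustration, not Adobe’s pipeline; build_reflowed_html and its inputs are invented for the example.

```python
# Toy illustration, not Adobe's code: turn detected headings into a
# reflowable HTML page with a tappable table of contents.
from html import escape

def build_reflowed_html(headings, body_paragraphs):
    """headings: list of (anchor_id, heading_text).
    body_paragraphs: dict mapping anchor_id -> list of paragraph strings."""
    toc = "\n".join(
        f'<li><a href="#{anchor}">{escape(text)}</a></li>' for anchor, text in headings
    )
    sections = []
    for anchor, text in headings:
        paras = "".join(f"<p>{escape(p)}</p>" for p in body_paragraphs.get(anchor, []))
        sections.append(f'<section id="{anchor}"><h2>{escape(text)}</h2>{paras}</section>')
    # The viewport meta tag is what makes the page scale sensibly on phones.
    return (
        "<!doctype html>"
        "<meta name='viewport' content='width=device-width, initial-scale=1'>"
        f"<nav><ol>{toc}</ol></nav>" + "".join(sections)
    )

html_page = build_reflowed_html(
    [("intro", "Introduction"), ("results", "Results")],
    {"intro": ["Some opening text."], "results": ["A few findings."]},
)
```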
We first heard about Adobe’s efforts here earlier this year; in an ExtraCrunch interview back in January, Parasnis outlined Adobe’s plans to bring AI and machine learning into just about everything the company does. He tells me that Liquid Mode is just the first step in giving Sensei an understanding of documents; eventually, he notes, they want users to be able to hand Sensei a thirty-page PDF and have it return a three-page summary.
from TechCrunch https://ift.tt/303wdMF