
Put down your phone if you want to innovate

We are living in an interstitial period. In the early 1980s we entered an era of desktop computing that culminated in the dot-com crash – a financial bubble that we bolstered with Y2K consulting fees and hardware expenditures alongside irrational exuberance over Pets.com. That last interstitial era, an era during which computers got smaller, weirder, thinner, and more powerful, ushered us, after a long period of boredom, into the mobile era in which we now exist. If you want to help innovate in the next decade, it’s time to admit that phones, like desktop PCs before them, are a dead end.

We create and then brush up against the edges of our creation every decade. The speed at which we improve – but do not innovate – is increasing, and so the difference between a 2007 iPhone and a modern Pixel 3 is remarkable. But what can the Pixel do that the original iPhone or early Android phones couldn’t? Not much.

We are limited by the use cases afforded by our current technology. In 1903, a bike was a bike and could not fly. Only when the Wright brothers and others turned forward mechanical motion into lift were we able to take off. In 2019, a phone is a phone and cannot truly interact with us as long as it remains a separate part of our bodies. Only when someone looks beyond these limitations will we be able to take flight.

While I won’t speculate on the future of mobile tech, I will note that until we put our phones away and look at the world anew, we will do nothing of note. We can take better photos and FaceTime each other, but until we see the limitations of these technologies, we will be unable to see a world outside of them.

We’re heading into a new year (and a new CES) and we can expect more of the same. It is safe and comfortable to remain in the screen-hand-eye nexus, creating VR devices that are essentially phones slapped to our faces and big computers that now masquerade as TVs. What, however, is the next step? Where do these devices go? How do they change? How do user interfaces compress and morph? Until we actively think about these questions, we will remain stuck.

Perhaps you already are thinking about them. You’d better hurry. If this period ends as swiftly and decisively as the ones before it, the opportunity available will be limited at best. Why hasn’t VR taken off? Because it is still on the fringes, being explored by people stuck in mobile thinking. Why are machine learning and AI moving so slowly? Because their use cases are aimed at chatbots and better customer interaction. Until we start looking beyond the black mirror (see what I did?) of our phones, innovation will fail.

Every app launched, every picture scrolled, every tap, every hunched-over moment davening to some dumb Facebook improvement, is a brick in the bulwark against an unexpected and better future. So put your phone down this year and build something. Soon it might be too late.



from TechCrunch https://tcrn.ch/2GJX0GI
via IFTTT
