YouTube to reduce conspiracy theory recommendations in the UK

YouTube is expanding to the UK an experimental tweak to its recommendation engine intended to reduce the amplification of conspiracy theories.

In January, the video-sharing platform said it was making changes in the US to limit the spread of conspiracy theory content, such as junk science and bogus claims about historical events — following sustained criticism of how its platform accelerates damaging clickbait.

A YouTube spokeswoman confirmed to TechCrunch that it is now in the process of rolling out the same update, which suppresses conspiracy recommendations, in the UK. She said it will take some time to take full effect, though she did not give detail on when exactly the changes will be fully applied.

The spokeswoman said YouTube acknowledges that it needs to do more to reform a recommendation system that has been shown time and again to lift harmful clickbait and misinformation into mainstream view. YouTube claims this negative spiral occurs only sometimes, though, and says that on average its system points users to mainstream videos.

The company calls the type of junk content it’s been experimenting with recommending less often “borderline”, meaning content that toes the line of its acceptable content policies. In practice this covers videos that claim the earth is flat, peddle blatant lies about historical events such as the 9/11 terror attacks, or promote bogus miracle cures for serious illnesses.
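To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of how a demotion of that kind could work at the re-ranking stage of a recommender. Every name and number in it (the Video fields, the borderline_score classifier output, DEMOTION_FACTOR) is an assumption for illustration; YouTube has not published how its system actually scores or demotes borderline videos.

# Hypothetical sketch only: down-ranking "borderline" videos in a
# recommendation re-ranking step. All names and numbers are illustrative
# assumptions, not YouTube's actual system.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    engagement_score: float  # assumed: predicted engagement value
    borderline_score: float  # assumed classifier output, 0.0 (fine) to 1.0 (borderline)

DEMOTION_FACTOR = 0.1  # assumed: weight a fully borderline video keeps

def rerank(candidates: list[Video]) -> list[Video]:
    """Sort by engagement, scaled down in proportion to borderline_score."""
    def adjusted(v: Video) -> float:
        # A clean video (score 0.0) keeps its full engagement weight;
        # a fully borderline one (score 1.0) keeps only DEMOTION_FACTOR of it.
        weight = 1.0 - (1.0 - DEMOTION_FACTOR) * v.borderline_score
        return v.engagement_score * weight
    return sorted(candidates, key=adjusted, reverse=True)

feed = [
    Video("Flat-earth 'proof' compilation", 9.0, 0.95),
    Video("Mainstream news explainer", 6.0, 0.0),
]
print([v.title for v in rerank(feed)])
# -> ['Mainstream news explainer', "Flat-earth 'proof' compilation"]

In a sketch like this, DEMOTION_FACTOR is the sort of lever a platform could turn further towards zero to demote such content harder, a point worth keeping in mind when YouTube describes its test, below, as still ramping up.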

Such content can be filed under misinformation ‘snake oil’. But for YouTube this sort of junk has been very lucrative snake oil, a consequence of Google’s commercial imperative to keep eyeballs engaged in order to serve more ads.

More recently, though, YouTube has taken a reputational hit as its platform has been blamed for having an extremist and radicalizing impact on young and impressionable minds by encouraging users to swallow junk science and worse.

A former Google engineer, Guillaume Chaslot, who worked on YouTube’s recommendation algorithms, went public last year to condemn what he described as the engine’s “toxic” impact, which he said “perverts civic discussion” by encouraging users to create highly engaging borderline content.

Multiple investigations by journalists have also delved into instances where YouTube has been blamed for pushing people, including the young and impressionable, towards far-right points of view via its algorithm’s radicalizing rabbit hole, which exposes users to increasingly extreme content without providing any context about what it’s encouraging them to view.

Of course it doesn’t have to be this way. Imagine if a YouTube viewer who sought out a video produced by a partisan shock jock was suggested a less extreme, or even an entirely alternative, political point of view. Or only saw calming yoga and mindfulness videos in their ‘up next’ feed.

For commercial reasons, YouTube has eschewed a more balanced approach to the content its algorithms select and recommend. But it may also have been keen to avoid drawing overt attention to the fact that its algorithms are acting as de facto editors.

And editorial decisions are what media companies make. It follows, then, that tech platforms which perform algorithmic content sorting and suggestion should be regulated as media businesses are. (And all tech giants in the user generated content space have been doing their level best to evade that sort of rule of law for years.)

That Google has the power to edit out junk is clear.

A spokeswoman for YouTube told us the US test of reduced conspiracy junk recommendations has led to a drop of more than 50% in the number of views coming from recommendations.

She also said the test is still ramping up, suggesting the impact on the viewing and amplification of conspiracy nonsense could be even greater if YouTube were to demote this type of BS more aggressively.

What’s very clear is that the company has the power to flick algorithmic levers that determine what billions of people see, even if you don’t believe that might also influence how they feel and what they believe. That is a concentration of power that should concern people on all sides of the political spectrum.

While YouTube could further limit algorithmically amplified toxicity, the problem is that its business continues to monetize engagement, and clickbait’s fantastical nonsense is, by nature, highly engaging. So, for purely commercial reasons, it has a countervailing incentive not to clear out all of YouTube’s crap.

How long the company can keep up this balancing act remains to be seen, though. In recent years some major YouTube advertisers have intervened to make it clear they do not relish their brands being associated with abusive and extremist content. That does represent a commercial risk to YouTube, if pressure from and on advertisers steps up.

Like all powerful tech platforms, YouTube is also facing rising scrutiny from politicians and policymakers. And questions about how to ensure such content platforms do not have a deleterious effect on people and societies are now front of mind for governments in some markets around the world.

That political pressure — which is a response to public pressure, after a number of scandals — is unlikely to go away.

So YouTube’s still-glacial response to addressing how its population-spanning algorithms negatively select for content that’s socially divisive and individually toxic may yet come back to bite it, in the form of laws that put firm limits on its powers to push people’s buttons.



from TechCrunch https://ift.tt/2U97T87
via IFTTT
