
EU adopts rules on one-hour takedowns for terrorist content

The European Parliament approved a new law on terrorist content takedowns yesterday, paving the way for one-hour removals to become the legal standard across the EU.

The regulation “addressing the dissemination of terrorist content online” will come into force shortly after publication in the EU’s Official Journal — and start applying 12 months after that.

The incoming regime means providers serving users in the region must act on terrorist content removal notices from Member State authorities within one hour of receipt, or else explain why they have been unable to do so.

There are exceptions for educational, research, artistic and journalistic work — with lawmakers aiming to target terrorist propaganda spread on online platforms such as social media sites.

The types of content lawmakers want speedily removed under this regime include material that incites, solicits or contributes to terrorist offences; provides instructions for such offences; or solicits people to participate in a terrorist group.

Material posted online that provides guidance on how to make and use explosives, firearms or other weapons for terrorist purposes is also in scope.

However, concerns have been raised over the impact on online freedom of expression, including the prospect that platforms will use content filters to reduce their risk, given the tight turnaround times required for removals.

The law does not put a general obligation on platforms to monitor or filter content, but it does push service providers to prevent the spread of proscribed content, saying they must take steps to prevent its propagation.

How exactly they do that is left up to service providers, and while there is no legal obligation to use automated tools, filters seem likely to be what larger providers reach for, with the risk of unjustified, speech-chilling takedowns fast following.

Another concern is how exactly terrorist content is defined under the law, with civil rights groups warning that authoritarian governments within Europe might seek to use it to go after critics based elsewhere in the region.

The law does include transparency obligations — meaning providers must publicly report information about content identification and takedown actions annually.

On the sanctions side, Member States are responsible for adopting rules on penalties but the regulation sets a top level of fines for repeatedly failing to comply with provisions at up to 4% of global annual turnover.

EU lawmakers proposed the new rules back in 2018, when concern was riding high over the spread of ISIS content online.

Platforms were pressed to abide by an informal one-hour takedown rule in March of the same year. But within months the Commission came forward with a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

Negotiations over the proposal saw MEPs and Member States (via the Council) tweaking provisions — with the former, for example, pushing for a requirement that the competent authority contact companies that have never received a removal order shortly before issuing the first one, to provide them with information on procedures and deadlines, so they are not caught entirely on the hop.

The impact on smaller content providers has continued to be a concern for critics, though.

The Council adopted its final position in March. The approval by the Parliament yesterday concludes the co-legislative process.

Commenting in a statement, MEP Patryk Jaki, the rapporteur for the legislation, said: “Terrorists recruit, share propaganda and coordinate attacks on the internet. Today we have established effective mechanisms allowing member states to remove terrorist content within a maximum of one hour all around the European Union. I strongly believe that what we achieved is a good outcome, which balances security and freedom of speech and expression on the internet, protects legal content and access to information for every citizen in the EU, while fighting terrorism through cooperation and trust between states.”



from TechCrunch https://ift.tt/3xA6wmc
via IFTTT
