
Wikimedia Europe


Analysis

Visions of AI in Popular Culture: report is out

Activist organisations often have difficulty raising awareness of the problems they have made it their mission to solve. A lack of adequate expertise, or of funding that could be spent on information campaigns, is among the reasons, but there is also a lot to be said about the messaging and methods we choose. What if we took inspiration from pop culture and from artworks that excel at translating emerging tendencies and new technologies into the zeitgeist?

These are the droids you’re looking for

Together with SWPS University’s Institute of Humanities in Warsaw, Poland, we delved into exactly this inspiration. Students worked under the direction of the faculty on data collection and on the report Visions of AI in Popular Culture: Analysis of the Narratives about Artificial Intelligence in Science Fiction Films and Series. The Wikimedia assignment was to examine attitudes and winning narratives pertaining to the key narrative tropes:


Artificial Intelligence Act: what is the European Union regulating?

Analysis

In this installment of our series of longer features on the blog, we analyse the scope of the AI Act as proposed by the European Commission and assess its adequacy in light of the practical impact of AI.

AI will increasingly shape the Internet, and through it access to information and the production of knowledge. Wikipedia, Wikimedia Commons and Wikidata are supported by machine learning tools, and the role of these tools will grow in the coming years. We are following the proposal for the Artificial Intelligence Act, which, as the first global attempt to legally regulate AI, will have consequences for our projects, our communities and users around the world. What are we really talking about when we speak of AI? And how much of it do we need to regulate?

The devil is in the definition

It is indispensable to define the scope of any matter to be regulated, and in the case of AI that task is no less difficult than for “terrorist content”, for example. Different debates, from scientific discussions to popular public perception, take different approaches to what AI is. When hearing “AI”, some people think of sophisticated algorithms – sometimes inside an android – undertaking complex, conceptual and abstract tasks, or even possessing a form of self-consciousness. Others include in the definition algorithms that merely modify their operation based on comparisons against large amounts of data, without any abstract extrapolation.

The definition proposed by the European Commission in the AI Act lists software developed with specifically named techniques; among them machine learning approaches including deep learning, logic- and knowledge-based approaches, as well as statistical approaches including Bayesian estimation, search and optimization methods. The list is quite broad and it clearly encompasses a range of technologies used today by companies, internet platforms and public institutions alike.
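To give a sense of how broad that list is: even a few lines of textbook Bayesian estimation already fall under the “statistical approaches” category. The sketch below is purely illustrative (a standard Beta-Binomial update for estimating a coin’s bias, not anything taken from the AI Act itself, which only names “Bayesian estimation” as a covered technique):

```python
# Minimal Bayesian estimation: posterior update for a coin's bias.
# A textbook Beta-Binomial model -- illustrative only, to show how
# simple a "statistical approach" covered by the proposed definition can be.

def posterior_mean(heads: int, tails: int,
                   prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of the coin's bias under a Beta(prior_a, prior_b) prior."""
    a = prior_a + heads   # prior pseudo-counts updated with observed heads
    b = prior_b + tails   # ...and with observed tails
    return a / (a + b)    # mean of the Beta(a, b) posterior

# With a uniform prior, observing 7 heads and 3 tails:
print(posterior_mean(7, 3))  # 8/12, roughly 0.667
```

Whether software this trivial should be subject to the same regulatory framework as large-scale deep learning systems is exactly the kind of question the breadth of the definition raises.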


Antiterrorists in a bike shed – policy and politics of the Terrorist Content Regulation

co-authored by Diego Naranjo, Head of Policy at EDRi

Analysis

In the second installment of our series of longer features on the blog, we analyse the political process around the terrorist content debates and the key factors influencing the outcome.

The short story: an ill-fated law with a dubious evidence base, targeting an important modern problem with poorly chosen measures, goes through an exhausting legislative process and is adopted without proper democratic scrutiny due to a procedural peculiarity. How did we manage to end up in this mess? And what does it tell us about the power of agenda setting in the name of the “do something” doctrine?

How it started – how it’s going

A lot of bafflement accompanied the release of the Terrorist Content Regulation proposal. The European Commission published it a few days after the September 2018 deadline to implement the Directive on Combating Terrorism (2015/0625). It is still unclear why the regulation was rushed when the preceding directive had not gained much traction: at that time, only a handful of Member States had met the deadline for its implementation (and we do not see a massive improvement in implementation across the EU to this day). Did it have to do with the bike-shed effect pervading modern policy-making in the EU? Is it easier to agree on sanitising the internet, done mostly by private corporate powers, than to meaningfully improve actions and processes addressing terrorist violence in the Member States?


Terrorist content and Avia Law – implications of constitutionality of TERREG in France

Analysis

In the first of our series of longer features on the blog, we study the implications of a national court ruling for the future of an EU regulation: in this case, TERREG.

In June 2020, France’s Constitutional Court issued a decision that contradicts most key aspects of the EU proposal for a regulation on preventing the dissemination of terrorist content online – but it also gave EU legislators specific tools to avoid drafting content-regulation legislation that would directly contradict fundamental rights and national constitutional requirements.


Introduction

Over the course of the past two years, France had a lively debate on a draft bill to combat hate speech online (the so-called Avia law). The debate mainly revolved around imposing stricter content removal obligations on both platforms and other intermediaries such as hosting providers. The final law, passed in May 2020, included an obligation for hosting providers to remove terrorist content and child sexual abuse material within one hour of receiving a blocking order from an administrative authority. The law also foresaw a 24-hour deadline for platforms to remove hate speech content flagged by users or trusted flaggers, with removal decided by the platforms’ own judgement and with the help of technical measures. This content removal activity was to be subject to guidelines established by the French media regulator (CSA).
