Wikimedia Europe

Artificial Intelligence Act: what is the European Union regulating?

Analysis

In this installment of our series of longer blog features, we analyse the scope of the AI Act as proposed by the European Commission and assess its adequacy against the impact of AI in practice.

AI will increasingly shape the Internet, and through it access to information and the production of knowledge. Wikipedia, Wikimedia Commons and Wikidata are supported by machine learning tools, and the role of those tools will grow in the coming years. We are following the proposal for the Artificial Intelligence Act, which, as the first global attempt to legally regulate AI, will have consequences for our projects, our communities and users around the world. What are we really talking about when we speak of AI? And how much of it do we need to regulate?

The devil is in the definition

It is indispensable to define the scope of any matter to be regulated, and in the case of AI that task is no less difficult than for, say, "terrorist content". Different debates, from scientific discussions to popular public perception, take different approaches to what AI is. When hearing "AI", some people think of sophisticated algorithms – sometimes inside an android – undertaking complex, conceptual and abstract tasks, or even featuring a form of self-consciousness. Others include in the definition algorithms that merely adjust their operations by comparing large amounts of data, without any abstract extrapolation.

The definition proposed by the European Commission in the AI Act lists software developed with specifically named techniques, among them machine learning approaches including deep learning, logic- and knowledge-based approaches, and statistical approaches including Bayesian estimation and search and optimisation methods. The list is quite broad, and it clearly encompasses a range of technologies used today by companies, internet platforms and public institutions alike.

Retrospective: a year of advocacy at Wikimedia France

Wikimedia France looks back on 2021, a year of advocacy campaigns at national and European level. Bringing the voice of community-governed platforms such as Wikipedia – alongside the commercial ones – into the debate is not always easy. And while legislators and policy makers generally receive our arguments and concerns positively, there is still a long way to go before our messages and initiatives are embedded in the texts that shape and frame the digital world of tomorrow.

Several bills have impacted Wikimedia projects this past year, particularly the collaborative online encyclopedia Wikipedia. Without compiling a Prévert-style inventory, Wikimedia France wants to revisit some of the battles it has fought relentlessly to defend a vision of a free and open Internet that protects the rights and freedoms of users.

The Republican Principles bill or “the French DSA”

The bill reinforcing respect for republican principles, originally called the "law project against separatism", was not intended to regulate digital platforms. Indeed, the main objective of the text was to "fight against radical Islam and separatism". But policy experts ended up qualifying it as a "catch-all", insofar as a great deal of subject matter had been inserted into it, including digital issues.

e-Privacy: our quick fix to help nonprofits and protect consent

The ePrivacy Regulation could make online communications better by setting a firm standard on how online tools can and cannot be used to profile and surveil individuals. We became directly interested in the proposal when we realised that the proposed rules on how our chapters and affiliates can communicate with their supporters are ambiguous. Here is a breakdown of the problems and the ways out.

How it works now

The Regulation concerning the respect for private life and the protection of personal data in electronic communications (the full name of the Regulation on Privacy and Electronic Communications, or ePrivacy Regulation) is now subject to trilogue negotiations. We are looking specifically at the provisions on the scope of direct marketing. While we don't "market" any services or products for sale to individuals, we all want to keep in touch with our supporters. Under the ePrivacy proposal, such communication falls under the definition of direct marketing. This concerns organisations in our movement that contact individuals to solicit donations or to encourage them to volunteer in various ways in support of our movement's mission.

Currently, in several Member States, based on the ePrivacy Directive and subsequent national laws, nonprofits have the right to contact individuals they have been in touch with before, on an opt-out basis. This means that when they present a new initiative or a fundraising campaign, they must give the contacted people the possibility to refuse to receive such information in the future.

Wikimedia France: new anti-terrorist bill exposes users to mass surveillance

Remember when we learned that Wikipedia was a target of widespread NSA surveillance? The Wikimedia Foundation challenged the NSA program siphoning communications directly from the backbone of the Internet in court. Today in France we may face a similar issue in the form of a new anti-terrorist law that would add a grave threat to privacy on top of the censorship of the Terrorist Content Regulation.

Protecting Wikipedia from mass surveillance

In May 2013 Edward Snowden revealed the existence of several American and British mass surveillance programs. The Wikimedia Foundation and other non-governmental organizations, such as Amnesty International and Human Rights Watch, filed a complaint against the NSA, accusing it of violating the First and Fourth Amendments of the US Constitution and of having "exceeded the authority conferred on it by Congress".

As a result, on June 12th, 2015, the Wikimedia Foundation announced the use of the HTTPS protocol for all Wikimedia traffic, with a view to countering the mass surveillance exercised by the NSA, which took advantage in particular of the inadequacies of unencrypted communication protocols.

Now, over to France

The newly proposed French anti-terrorism bill fits squarely into the mass surveillance trend, attacking the fundamental rights of online users. Presented by the Minister of the Interior, Gérald Darmanin, on April 28, it proposes a number of security measures inherited from the state of emergency of 2015 and the 2017 law on internal security and the fight against terrorism. It also validates and expands the use of tools such as "black boxes", responsible for detecting terrorist threats using users' connection data.

E-Evidence: trilogues kick off on safeguards vs. efficiency

The Regulation on European production and preservation orders for electronic evidence in criminal matters (E-Evidence) aims to create clear rules on how a judicial authority in one Member State can request electronic evidence from a service provider in another Member State. One such use case would be requesting user data from a platform in another EU country during an investigation. We have written about our main concerns in the past.

What Wikimedia worries about

At Wikimedia we were originally worried mainly about a new data category: access data. It would mean that prosecutors could demand information such as IP addresses, date and time of use, and the "interface" accessed, without judicial oversight. In the Wikipedia context, this information would also reveal which articles a user has read and which images they have looked at.

The second aspect we care about is whether the authority of the country hosting the service provider will have the right to intervene in cases where the fundamental rights of its citizens are concerned. We know that unfortunately not all EU Member States have good rule-of-law records, which calls for safeguards at least against potential systemic abuse. Again, knowing which Wikipedia articles or which Wikimedia Commons images someone has opened is information that should be hard to obtain, and only in rare and well-justified cases.

E-Evidence: Let’s Keep Reader Data Well Protected!

A new EU regulation aims to streamline the process by which a prosecutor from one EU Member State can request electronic evidence from a server in another Member State. As current procedures are messy, this is necessary. But the current proposal would also mean that prosecutors could request data about who has read which Wikipedia article without judicial oversight and without any possibility for the authorities of the country hosting the platform to intervene in case of fundamental rights breaches. That is worrisome!

The Wikimedia Foundation gathers very little data about the users and editors on its projects, including Wikipedia. This is how the Wikimedia movement ensures that everyone is truly free to speak their mind and, for instance, share information that may be critical of the government of the country they live in. However, the Foundation's servers do record the IP addresses of users who have accessed Wikipedia and the individual articles they have viewed. In line with the Wikimedia community's support for strong privacy protections, the Foundation keeps this information for only a few months, as part of the way its servers function, before it is deleted. Allowing access to these IP addresses and the articles that the users behind them have read – without judicial oversight – is the issue with the European Commission and Council proposals for an E-Evidence Regulation.
