We understand the concern and are concerned ourselves. Lawmakers worldwide are rightly focused on the effects that personal-data-collecting, algorithm-and-advertising-driven, content-pushing platforms have on growing up. Protecting and empowering minors—both online and offline—requires ongoing effort, cooperation, and commitment from society as a whole.
The EU has already enacted important legislation, including the Digital Services Act (DSA), the Audiovisual Media Services Directive (AVMSD), and the Artificial Intelligence Act (AIA). These provide a solid framework for safeguarding children’s rights online, though they may of course need updating over time. Several own-initiative reports by the European Parliament, initiatives by Member States, and work by the European Commission are currently considering options.
As such legislation is developed, we need to follow one guiding light: children’s rights and best interests. The challenge is clear: children deserve protection, and we have an obligation to protect them. Yet children also have other rights: to education, to privacy, and to freedom of expression. We must make sure all of these are protected.
Risks in the Real World and Online
In the physical world, children encounter varying levels of risk depending on where they go and what they do. Walking down a street, they might witness inappropriate behaviour or imagery. Listening to the radio, they might hear disturbing information during a news break. Yet we don’t prevent children from going outside or require age verification for everyone who turns on a radio. Such measures wouldn’t be practical or proportionate. At the same time, the most explicit content wouldn’t be allowed on posters or billboards, and even terrible events are not described graphically in a radio newscast. This is a fine social and legal balance, established over decades.
That said, we do have laws requiring age checks for purchasing alcohol and tobacco, viewing adult content, and gambling. This shows that age verification does have its rightful place in the “real world”.
The question for the online world is how to strike this fine balance: Where are the greatest risks, and what is the most practical and proportionate way to address them?
Wikipedia and Its Sister Projects: Lower Risk, But Not Risk-Free
We recognise that Wikipedia and its sister projects could potentially host inappropriate content. This could be imagery used to illustrate particular Wikipedia articles, but also uploads of child sexual abuse material (CSAM). We remain fully committed to identifying and reducing these risks. This ongoing work includes conducting Child Rights Impact Assessments to identify improvements, as well as fulfilling Digital Services Act requirements for risk assessment and mitigation measures. The Wikimedia Foundation also scans the projects for CSAM (more on this below).
That said, Wikipedia and its sister projects are significantly lower-risk than other online services, primarily because of several structural features:
- Content and interaction are not driven by algorithms: Wikimedia projects don’t push content to users or create algorithmic matches between people.
- Transparency is complete, and there is no monetisation, which reduces incentives for bad actors to misuse these services.
- There are no private communication spaces like chats or private messaging.
- Content is narrowly focused on educational material—content without educational value is removed.
- The volunteer community actively enforces rules through peer review and oversight.
- Additionally, the Wikimedia Foundation’s professional Trust & Safety team provides expert oversight on child safety matters.
What Volunteers Do to Protect Children
Wikipedia’s volunteer editing community has put strong protections in place. The community enforces a zero-tolerance policy for inappropriate content and relationships between adults and children, permanently blocking and banning any editors who pursue, facilitate, or advocate for such relationships. These cases are also reported directly to the Wikimedia Foundation, including concerns about images of children.
Experienced editors have the ability to suppress content to protect user privacy, remove inappropriate images, and delete defamatory material from user pages and logs. Beyond enforcement, the community takes a proactive role by offering guidance to parents and resources for young editors. The Wikimedia movement also provides professional training in child protection and CSAM awareness to volunteer administrators.
What the Wikimedia Foundation Does to Protect Children
The Wikimedia Foundation (WMF) protects children through a combination of responsive and preventive actions. Its trained Trust & Safety team responds to reports of CSAM from volunteers, organisations, and authorities, forwarding relevant cases to the appropriate institutions. Since 2020, the WMF has used PhotoDNA technology, which automatically scans all WMF sites against a database of known harmful images and flags matches for human review, eliminating the need for staff to view such material manually.
When CSAM is identified, the system alerts the Trust & Safety and Legal teams for reporting. The Foundation publishes a Transparency Report twice yearly detailing content removal requests related to child safety, and has completed a Child Rights Impact Assessment to evaluate how its projects affect children.
According to data reported under the Digital Services Act, PhotoDNA scanning detected 31 CSAM matches in 2024. While the system did produce false positives (47.83% standard false positives and 39.13% NSFW false positives), the WMF’s mandatory double review of all flagged content corrects these errors, ensuring that PhotoDNA meaningfully strengthens child protection.
Age Verification Isn’t a Universal Solution for All Platforms
Age verification has its place in society, and it has its rightful place online, particularly for high-risk services. For lower-risk spaces, however, it might not be the best approach.
Wikimedia, for instance, collects minimal user data, primarily to protect people’s privacy. Children and adults alike have a right to privacy—and this matters in concrete ways. In many parts of the world, governments restrict access to information and target volunteers with threats and imprisonment. The best way to protect volunteers is to minimise data collection and tracking. Additionally, less data means a lower risk of damaging data breaches and cyberattacks.
Even more fundamentally, public interest sites and educational resources must remain freely and easily accessible. As already stated, education, participation, and expression are rights that children hold too. Implementing age verification would restrict access to knowledge and create additional barriers to participation in public debate, science, and education. It should not be imposed without very careful consideration and proof of serious risks.