Event "Person under 18 online: what risks, what protection?" - Speeches by Martin Ajdari, President of Arcom
Check against delivery
Senators,
Ladies and Gentlemen,
Dear friends,
Two elements of context frame today's event:
- The first is the publication in July by the European Commission of its guidelines on the protection of minors online. These guidelines, provided for in Article 28 of the Digital Services Act (DSA), describe the concrete measures that platforms (the likes of TikTok, Instagram and Snapchat) must implement - age verification, of course, but many others as well.
These measures echo several recommendations made by Arcom in September 2024, a sign that our action is also bearing fruit in Brussels. And I can't stress enough that a large part of the answers to the problems identified lies at this level (the design of the largest platforms and their algorithms, the definition of what constitutes a social network subject to the digital majority, etc.).
- The second notable contextual element is the very recent publication, Mr. Delaporte, of the report by the commission of inquiry you chaired into the psychological effects of TikTok, a report that brings to light major challenges in terms of platform regulation. I'd also like to pay tribute to the important work carried out by the French Senate, in particular the resolution adopted this summer, on the initiative of Catherine Morin-Desailly, who is also with us this afternoon, on exactly the theme that brings us together today.
It is in this context that Arcom is publishing the results of a long-term study conducted over the past year among 2,000 teenagers aged 11 to 17, combining quantitative and qualitative approaches with semiological analysis.
The starting point was the questions many of us have been asking ourselves: how exposed are our children to the risks posed by platforms? Are they aware of them, and how do they protect themselves? What do they (and their parents) expect in terms of prevention?
To answer these questions, I'd like to hand over to the teams from Arcom's Research, Economics and Forecasting Department, who carried out the study and will present you with a summary.
Many thanks to all three of you.
As you can imagine, the purpose of such a study is not just to provide a snapshot (however accurate) of the situation; it is also to guide the action of the authorities, particularly the regulator.
In this respect, I would like to make three observations:
- The first is the very early and massive use of platforms (60% of children aged 11 use them on a daily basis), which explains the very strong attachment of teenagers to these platforms. They are almost born with them, or at least grow up with them, and socialize with them.
- The second observation is that our children are massively exposed to risks, and they're well aware of it. This awareness often stems from a bad experience with potentially dramatic consequences. And some platforms, whose business model is based on maximizing engagement, do little, or at least not everything they could, to protect them.
- A final observation: teenagers and parents alike are (on average) neither passive nor helpless, and are developing protection strategies (such as parental controls). But too many children remain on the sidelines, totally alone in the face of risk. And these strategies are not enough, as the tools are either difficult to use, unknown or too easily circumvented (62% of teenagers admit to having lied about their date of birth).
These findings suggest a number of important lessons for public action:
- The first is that teens and their parents are calling on the authorities to help them make safe, controlled use of the platforms.
- The second lesson is a warning to certain platforms: the status quo is no longer acceptable. Platforms must assume their legal responsibilities and modify, sometimes radically, the design of their services for minors.
- Finally, we need to move away from purely reactive strategies that intervene only once the damage has been done, and instead adopt a preventive strategy: acting upstream to reduce minors' exposure to risk as far as possible.
In my view, this strategy should be based on three pillars:
- First, an effective minimum age. At present, this age - 13 - is set in the platforms' terms of service. But as we've seen, it's not always respected, and the challenge now is to ensure that it's effectively applied when accounts are opened, and even when services are accessed. And to require platforms to fight circumvention strategies much more effectively, by improving the detection of offenders and preventing them from recreating an account.
Of course, we wouldn't want to rely on the platforms' terms of service alone. That's what's at stake in the debate currently underway between European and national legislators, with the aim of imposing a minimum legal age (13, 15 or 16 - it's up to politicians to decide), compliance with which will then have to be verified. In the meantime, let's get on with enforcing the age stipulated in the terms of service. There's no reason to wait before taking action.
- Second pillar: oblige platforms to offer services that are suitable for minors, i.e. purged of anything that could endanger them: content that is offensive to them or illegal (acts of barbarism, risky behavior, pornography, etc.), addictive features such as infinite scrolling, and malicious interactions, particularly with adults (in this area, tolerance must be zero). These are not marginal adaptations. Platforms must understand that they can no longer offer the same services and functionalities to adults and minors.
- Third and last pillar: give minors the means to control their consumption. Our children want to act, but they can't always, or not as easily as they'd like. Platforms must therefore give them the means to do so: simplify content and advertising settings, and offer more ergonomic tools for blocking or flagging content. Of course, they also need to be supported in this process, which is why digital literacy is so important.
With these observations in mind, everything now hinges on our collective determination to put pressure on the platforms by adopting a proactive roadmap for the coming months, focused on three categories of online services:
- First, the very large platforms (TikTok, Instagram, Snapchat...) which, as we've seen, are the most widely used and therefore the riskiest (89% of 14-year-olds make daily use of at least one of them). Between now and the end of 2025, we will be opening a dialogue with these platforms and inviting them to present us with their plans to comply with the measures I have just outlined.
We will then monitor their implementation: by developing testing campaigns, and by setting up a panel of youngsters whose feedback and expectations will inform our actions. Because we're here to help them protect themselves, and we want to do it with them. And more broadly, we'll be working with all the relevant players: family and parent associations, trusted flaggers, government departments, researchers and platforms which, even when they don't fall within Arcom's direct remit, will be sensitive to our ability to alert users, the French and European authorities, and public opinion more generally to the shortcomings observed among the bad performers, while at the same time rewarding the good performers - because there are some, and we need to say so. When there are good practices, we need to highlight them so that they inspire other services.
- Then comes the second category: pornographic services. Here we are focusing on sites that are somewhat less popular than our first target, but just as harmful to youngsters. And we will apply the same methodical procedure to them: verification, a possible finding of non-compliance, formal notice and, where appropriate, blocking or delisting.
- Finally, we will be paying particular attention to smaller platforms established in France, which we know are likely to expose minors to very serious risks. This is already the case for the Bounty site, which was mentioned publicly at the beginning of the month. Other players are also the subject of heightened vigilance (we are not naming them, so as not to give them unwelcome visibility).
In concrete terms:
- Between the end of 2025 and the first quarter of 2026, we will hold hearings with representatives of all the major platforms and launch phase II of our checks on age verification by pornographic sites;
- In spring 2026, we will launch a wave of checks on the commitments made. On the basis of these initial findings, by summer 2026 we'll be able to deploy on our website a transparency space covering the main platforms' compliance with their obligations, as well as a tool for monitoring the actions undertaken.
In this way, we are rolling out a new ambition, both in France and at European level. The initial successes achieved in restricting access to pornographic sites should encourage us to do more and better, alongside the Government, Parliament and the European Commission. We also need to think about how to adapt our law to technological developments and circumvention strategies.
Not in order to censor platforms, as we sometimes hear, but to make them more accountable, and to make them safe spaces of freedom for our children, adapted to their legitimate socialization needs and protected from anything that could endanger their physical and mental health. Freedom, however precious it may be, cannot be exercised to the detriment of safety, especially that of the youngest.
Thank you for your attention.