
The Digital Age of Consent: 2 Years Later

THE LONG READ: On the occasion of the second anniversary of the digital age of consent being set at 16 in Ireland, Professor Brian O’Neill and Cliona Curley take stock of its impact on children and ask if anything has fundamentally changed since its introduction.

The General Data Protection Regulation (GDPR) came into effect across Europe two years ago on May 25th, 2018 and brought the issue of children’s personal data to the fore. 

The GDPR introduced, for the first time in the European Union, special privacy protections for children, who were identified as “vulnerable individuals” meriting “special protection” as data subjects. The processing of children’s data in the context of their social media use was a focal point of discussion at the time, giving rise to a heated debate over the so-called “digital age of consent”, which in Ireland has been set at 16 years.

One year ago, on the first anniversary of the GDPR’s introduction, CyberSafeIreland commissioned research to examine the measures social media services, at least those that are popular with children, had introduced in response, particularly in relation to age verification mechanisms. To coincide with the second anniversary of GDPR, this study has now been repeated and provides an opportunity to see if anything has fundamentally changed.

The continuing impact of COPPA alongside GDPR

Under Article 8(2) of the GDPR, information society services targeted at children under the age of 16 (or another age between 13 and 16 years, as set in national legislation) are required to make “reasonable efforts” to verify that parental consent has been given for children to use that service. In addition, Article 12 stipulates that service providers should ensure that information such as privacy notices or Terms of Use provided to children is concise, transparent and in plain language. The use of children’s data in direct marketing or the profiling of children is also a matter of significant concern and is prohibited under Irish legislation.

So, how is it that children, many of them underage, continue to use social media services in large numbers in contravention of age restrictions and certainly the intent of GDPR? 

In the main, social media services persist with 13 years as the age limit for use of their services, based on the age limit contained in the United States’ Children’s Online Privacy Protection Act or COPPA. COPPA contains quite strict requirements as to how verifiable parental consent should be obtained for any child under 13 who may use the service. It also stipulates what is required in terms of a service’s privacy policy and the protections that must apply in order to comply with COPPA regulations. Due to the costs and complexity involved, many providers choose not to allow children to use such services and restrict access via their terms of use to those over the age of 13.

By contrast, the GDPR does not specify how parental consent should be obtained, nor does it oblige service providers to obtain any additional verification of age once users register, truthfully or otherwise, to access their service.  Such guidance or more specific restrictions are likely to come from codes of conduct which the GDPR encourages member states and supervisory authorities to develop but which are not yet in place.  In the meantime, most social media service providers have responded by limiting the data they collect from users under the digital age of consent, thereby also limiting some functionality, while continuing to place the main emphasis on restricting access to under-13s in order to avoid infringing COPPA regulations.

Underage use of social media

Through its regular surveys of 8 to 12 year olds’ use of digital technologies, CyberSafeIreland has followed closely the use of social media and messaging services by children aged 13 and younger. Its most recent Annual Report found a high level of underage use, with 48% of 8 year olds and 45% of 9 year olds active on social media, and steadily increasing use from age 10 (55%) up to age 13 (96%).

The 2020 review of age verification mechanisms for the 10 most popular social media apps used by children found that it is possible to get around age restrictions simply by lying about your age (see Table 1). 

If a user is initially honest in entering their age and tries to gain access as an under-13 year old, some companies, notably TikTok, have made it distinctly harder to go back and try to sign up with a false age. This is best described as a deterrent (rather than a barrier) to underage users. Facebook Messenger and Houseparty, for instance, also made it somewhat more difficult to re-register, and had to be re-installed before a user could restart the sign-up process. In all cases, however, the researcher was able to get around the restrictions and set up a user account without encountering any further attempt at age verification.

Regulating for underage use

The purpose of the GDPR in relation to children’s data was always privacy protection rather than safety or regulating against underage use of social media. The debate on the digital age of consent did focus attention on prevailing age restrictions and certainly raised general awareness, including among parents, about the appropriate age for use of digital technologies more generally. However, a key concern is that by incentivising children to lie about their age to access social media (and by showing them that it is feasible to do so), the current regime may leave children more vulnerable online as they seek to avoid the constraints that the GDPR entails.

A better approach would be to extend the most restrictive privacy settings by default to any user who declares themselves to be under the age of 18. Users must be incentivised to be honest about their age, with minimal data being collected in all cases. Existing social media apps should also ensure that a clear, concise and age-appropriate summary of the relevant parts of the app’s Terms of Use is presented to users who sign up and declare their age to be under 18. Affording the maximum level of privacy protection to all users under 18 supports children’s legitimate right to participate actively in the digital environment. At the same time, where age restrictions apply, they should be supported by realistic as well as robust age verification as an ongoing process, going beyond the very basic self-declaration that currently applies.
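The default-restrictive approach proposed above can be sketched in a few lines of Python. The setting names below are purely illustrative assumptions, not any platform’s actual configuration:

```python
def default_privacy_settings(declared_age: int) -> dict:
    """Apply the most restrictive settings by default for any user
    who declares an age under 18 (hypothetical setting names)."""
    restricted = declared_age < 18
    return {
        "profile_public": not restricted,
        "direct_messages_from_strangers": not restricted,
        "personalised_ads": not restricted,
        "location_sharing": not restricted,
    }
```

The design point is that restriction is the default tied to the declared age, so a user gains nothing by understating their age and loses the incentive to lie upwards only if adult accounts are not markedly more permissive by default.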


The full findings of the 2020 Technical Report: A Review of Age Verification Mechanisms for 10 Social Media Apps are available here.

Professor Brian O’Neill, Technological University of Dublin (TUD), is a researcher of young people’s use of digital technologies, online safety and policy for the digital environment. He is a member of the Internet Safety Advisory Board for the Safer Internet Ireland programme. He also leads the EU Kids Online project in Ireland and is a board advisor for CyberSafeIreland.

Cliona Curley worked in law enforcement in the UK for many years, specialising in cybercrime investigation. She is Programme Advisor at CyberSafeIreland, a not-for-profit organisation that delivers online safety education to primary school children and their parents, and is also working towards a PhD in the Computer Science Programme at UCD.


To see Table 1: Summary of test results for Top 10 Social Media Apps, click here.


Image credit: 

“GDPR & ePrivacy Regulations” by Dennis Convert is licensed under CC BY 2.0

Posted on:

May 25, 2020


CyberSafeKids is an Irish charity, which has been empowering children, parents, schools and businesses to navigate the online world in a safer and more responsible way since 2015.