The changing landscape of social platforms – Alastair Graham, CEO
The tragic death of a 10-year-old girl in Italy may prove to be one of the most significant catalysts for change across Europe when it comes to age verification.
This very sad loss of a young life has been linked to a challenge circulating on the social media platform TikTok, known as the blackout challenge, in which users attempt to hold their breath.
The tragedy brought an immediate response from the Italian authorities and, from tomorrow, TikTok has committed to allowing users based in Italy to use the site only after they have verified their age.
How they plan to do this is not yet known, but TikTok has talked about relying on artificial intelligence to make a judgement about the likely age of its users. This approach, recently termed ‘age assurance’ by the UK government, does not deliver an accurate date of birth. Perhaps the most widely used form of AI is facial analysis, which, after several years of investment, can still only estimate age to within approximately two years either side of a person’s real age. The lack of data on which to train these systems means that accuracy levels for children are considerably below those for adults.

An alternative route which TikTok may have in mind is ‘social proofing’, which analyses users’ behaviour on a social media site, looking in particular at references to friends, perhaps whose ages are already known to the site, and to schools and other interests, to see whether these are congruent with the claimed age. However, I am unaware of any solution of this sort which is both accurate and cannot be fooled by a 10-year-old who has read websites advising how to circumvent the check into passing as a 16-year-old.
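To make the consequence of that error margin concrete, here is a minimal sketch of how an estimated age with a roughly two-year uncertainty might be applied conservatively. It is purely illustrative: the function names, the 13-year-old threshold and the decision rules are my own assumptions, not any platform’s actual logic.

```python
# Illustrative sketch only: how a platform *might* act on a facial age
# estimate that is accurate to roughly +/- 2 years. The threshold and
# decision rules below are assumptions, not TikTok's implementation.

ESTIMATION_ERROR_YEARS = 2   # approximate error either side of the true age
MINIMUM_AGE = 13             # typical minimum age for a social platform


def decide_access(estimated_age: float) -> str:
    """Return a conservative decision given an estimated age."""
    worst_case_age = estimated_age - ESTIMATION_ERROR_YEARS
    best_case_age = estimated_age + ESTIMATION_ERROR_YEARS

    if worst_case_age >= MINIMUM_AGE:
        # Even at the low end of the error band the user clears the bar.
        return "allow"
    if best_case_age < MINIMUM_AGE:
        # Even at the high end of the error band the user is too young.
        return "block"
    # Anywhere in between, the estimate alone cannot settle the question.
    return "refer to stronger age verification"


if __name__ == "__main__":
    for age in (10.0, 12.5, 14.0, 16.0):
        print(age, "->", decide_access(age))
```

Run like this, even a genuine 14-year-old falls into the grey zone and cannot be cleanly admitted or refused on the estimate alone, which is precisely why estimation is not a substitute for verification.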
There is also a Catch-22 problem with the use of machine learning techniques to estimate the age of children: they require a significant volume of the child’s data, which must then be processed in order to ascertain their age range. European data law forbids the processing of children’s data without the permission of their parents. So how would any social media platform know that it needed to secure a parent’s permission before it knew the user was likely to be a child? And while the relevant authorities may take the view that using data for this purpose alone falls within the legitimate interests of the platform, the current advice is that such websites should limit unverified users to child-friendly content until their age has been established. If all the AI has to go on is a history of visiting harmless content, I struggle to understand how it can draw any conclusions about age.
I said at the outset that this could have a significant impact on Europe. The reason is that, under the one-stop-shop arrangement agreed between data protection authorities across the European Union, any individual platform is primarily regulated by the member state in which it is most substantively based. In the case of TikTok, the company has elected to establish itself in Ireland, so it would be for the Dublin authorities to determine whether any age assurance solution it proposes is fit for purpose. The Italian authorities have already alerted their Irish counterparts to their concerns. Because so many major global platforms have also chosen Ireland as their principal home in Europe, any decision made in relation to TikTok will immediately affect Facebook, YouTube, Google and countless others.
For as long as age assurance remains undeveloped, data protection authorities will have to reach the inevitable conclusion that only age verification is going to be good enough for any online service which poses a risk of harm to children.
However, before you imagine every 13-year-old on the continent demanding access to their passport so they can continue to share their dance moves with their school friends, it is worth remembering that age verification can now be achieved through a wide range of methods, some of which involve no intervention by the user at all. The methods involving the least friction for the user generally rely on age verification providers obtaining access to databases of records against which to check a user’s claimed age. There has, to date, been considerable reluctance to make children’s official data available for this purpose. We have recently seen a very welcome innovation from the UK’s HM Passport Office, which is now allowing, through a carefully controlled pilot, one-way blind checks of passport data. This is currently rather expensive and open to only a very limited number of companies, but it will prove the concept that children can gain access to their own data held by official bodies without any risk to privacy or security.
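The phrase ‘one-way blind check’ can be made concrete with a small sketch. The outline below is my own generic illustration of the idea, not a description of HM Passport Office’s actual pilot or its protocol: the authority answers only yes or no, and the underlying passport record never leaves its side.

```python
# Generic illustration of a "one-way blind" check, NOT the HM Passport
# Office pilot's real protocol: the relying service submits the user's
# claimed details and receives only a boolean answer; the official
# record itself is never released or retained elsewhere.


class PassportAuthority:
    """Holds official records and exposes only a boolean match check."""

    def __init__(self, records: dict[str, str]):
        # passport number -> date of birth, held only on the authority's side
        self._records = records

    def matches(self, passport_number: str, claimed_dob: str) -> bool:
        """True only if the claimed date of birth matches the record."""
        return self._records.get(passport_number) == claimed_dob


class AgeVerificationProvider:
    """Relays the claim to the authority and keeps nothing but the result."""

    def __init__(self, authority: PassportAuthority):
        self._authority = authority

    def verify(self, passport_number: str, claimed_dob: str) -> bool:
        # Only this boolean is passed back to the platform; no passport
        # data is stored or forwarded.
        return self._authority.matches(passport_number, claimed_dob)


if __name__ == "__main__":
    authority = PassportAuthority({"123456789": "2007-03-14"})
    provider = AgeVerificationProvider(authority)
    print(provider.verify("123456789", "2007-03-14"))  # True: details match
    print(provider.verify("123456789", "2005-01-01"))  # False: no match
```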
Only a proportion of children hold a passport. To develop an inclusive solution that does not unfairly prevent some children from accessing content suitable for their age, and which may be important to their development, more comprehensive data sources are required, such as school records.
We have found ourselves in a position where laws and regulations have overtaken what is currently available for checking the age of children. The European Commission has tacitly acknowledged this by funding a major pan-European project to address child rights and protections in the online sphere, in which AgeChecked expects to be closely involved. This will create an infrastructure not only to facilitate age verification but also to obtain parental consent on an interoperable basis across Europe, including the UK.
But that project is only just about to begin and, as the news from Italy has demonstrated, patience is running out with social media platforms that allow children of any age unfettered access to their communities, and to the dangers that lurk within them. The precautionary principle in public health suggests it is better to do something rather than nothing while a risk persists. In this case, platforms could immediately require age verification for adults opening accounts and apply this to all their existing users. Where a user is under 18, they could insist that an adult whose age has been verified provides consent for that minor. This is clearly not perfect, but it would acknowledge the art of the possible and be a significant first step towards making our children safer online.