
LONDON — Several tech companies are offering new artificial intelligence (AI) solutions to estimate the age of people accessing websites, seeking lucrative opportunities created by compliance requirements under the U.K.’s new Online Safety Act and similar legislation advancing in the EU and the U.S.
Earlier this week, The Telegraph newspaper reported that Google’s AI age estimation system, which uses facial recognition technology, was quietly approved by the U.K. media regulator Ofcom last month.
Although Google “has never revealed that it plans to use the technology,” The Telegraph noted, “the company has appeared on a registry of providers approved by the Age Check Certification Scheme (ACCS), the UK’s programme for age verification systems.”
According to the conservative newspaper, the tech giant developed its “selfie scanning software” to prepare for a “porn crackdown.”
“It is one of the proposed ways that internet users could verify they are old enough to access adult sites under new online safety laws,” the report explained.
The technology is described as using a phone’s camera to capture an image of the user’s face in order to estimate their likely age.
Google claims the technology is 99.9% reliable at identifying that a photo depicts someone under the age of 25; users estimated to be under 25 could then be asked to provide additional ID.
“The prospect of Google scanning faces to grant access to sensitive websites would likely raise privacy concerns given the trove of data the company already gathers on web habits,” The Telegraph noted.
As XBIZ reported, earlier this month Ofcom — the government authority tasked with enforcing online content restrictions under the recently enacted Online Safety Act — issued its first guidance to adult websites regarding age verification.
The guidance suggested acceptable age verification methods, including open banking; photo identification matching; facial age estimation using unspecified software; mobile network operator age checks; credit card checks; and some form of digital identity wallet.
Virtually all online privacy and digital rights groups worldwide have expressed serious concerns about the Online Safety Act and Ofcom’s increased content censorship powers under it.
Abigail Burke, of digital rights nonprofit Open Rights Group, told the Financial Times that the guidelines “create serious risks to everyone’s privacy and security.”
The potential consequences of data being leaked, Burke added, “are catastrophic and could include blackmail, fraud, relationship damage and the outing of people’s sexual preferences in very vulnerable circumstances.”
A Gold Rush for Age Estimation Solution Providers
Other AI solution providers have already entered the burgeoning age estimation marketplace. Meta and OnlyFans employ Yoti, which The Telegraph reported “automatically deletes images once their age has been estimated.”
Last week, Persona, a unified identity platform, and Trusted Vision AI provider Paravision unveiled their partnership on an AI age estimation and verification solution.
“Based on Paravision’s AI Principles and Persona’s mission to humanize digital identity, this solution is ethically built and trained on a diverse set of data, as well as rigorously audited to detect and mitigate bias,” the companies touted in a press release.
Paravision and Persona noted that the need for age verification has expanded beyond fraud prevention to “include social networks, gaming, and other online platforms as children spend an increasing amount of time online.”
The companies specifically noted that “new legislation to restrict children’s access to harmful or otherwise inappropriate content has been introduced globally, such as the Kids Online Safety Act (KOSA) in the U.S., a bipartisan bill, and the recently passed Online Safety Act in the U.K.”
Paravision Chief Product Officer Joey Pritikin emphasized that “the need for reliable, responsible age estimation technology has never been more pressing, particularly in light of the growing concerns around children’s online presence as well as leveraging ethical approaches to AI.”
Persona’s Head of Identity Products Daniel Lee enthused, “It is encouraging to see lawmakers pushing platforms, and therefore their identity solution providers, towards greater innovation and responsibility. The mandate is clear: we must balance the delivery of high-assurance, unbiased solutions with safeguarding end user privacy. We believe our industry-leading solution will help our customers better deliver trusted services, while complying with age verification regulations, fighting fraud, and keeping users safe.”