
COLUMBIA, S.C. — South Carolina Attorney General Alan Wilson, along with 25 other attorneys general largely from Republican-controlled states, sent an open letter to Pornhub parent company Aylo on Friday demanding an explanation of a supposed “loophole” that they claim allows users to post child sexual abuse material (CSAM).
The letter was prompted by a controversial hidden-camera report promoted on X.com by a person formerly affiliated with the conservative undercover journalism outlet Project Veritas. The report purported to show an employee of Aylo predecessor MindGeek, filmed without his knowledge in a social setting at an unspecified time, making claims about the company’s moderation policies.
Basing their query on that questionable source, the 26 attorneys general wrote to Aylo VP of Payments, Trust & Safety Matt Kilicci and Ethical Capital Partners (ECP) VP of Compliance Solomon Friedman “to inquire about a possible ‘loophole’ in your platforms’ moderation practices that potentially permits content creators to publish child sexual abuse material (CSAM) on your platforms, and to determine what is being done to address this possible loophole.”
The attorneys general requested clarification about why “content creators and performers must produce a photo ID to open an account with Pornhub to upload content, but they are not required to show their faces in the content they upload to the site, so there is no way to confirm that the content actually features the performer/content creator that uploads the content.”
The attorneys general expressed their concern that “Aylo and its subsidiary Pornhub, and possibly other subsidiaries, may be proliferating the production and dissemination of CSAM through the ‘loophole’ identified by your employee” in the undercover video recording.
“Please provide us with an explanation of this ‘loophole’; whether Aylo and its subsidiaries do, in fact, permit content creators and performers to obscure their faces in uploaded content; and, if so, whether Aylo is taking measures to change this policy to ensure that no children or other victims are being abused for profit on any of its platforms,” the 26 attorneys general wrote. The letter also asks what steps the companies are taking to prevent AI-generated CSAM from being published on their platforms.
Aylo was asked to reply to these inquiries within 30 days of the letter.
Disregard for Section 230 and the Privacy of Adults
The attorneys general’s supporting material also includes references to an article by the National Center on Sexual Exploitation (NCOSE), the religiously inspired anti-porn lobby formerly known as Morality in Media.
The letter by the attorneys general makes no mention of Section 230 protections, which shield companies from liability for user-generated content.
Disregarding First Amendment protections, the attorneys general imply that private companies, under pressure from the state, should dictate to consenting adults how to shoot erotic content. The letter hints at mandating that faces be visible at all times in any content deemed “pornographic” according to unspecified standards.
The letter also makes no attempt to account for the privacy of adult performers who may choose not to show their faces.
Aylo Responds
XBIZ contacted an Aylo spokesperson, who offered the following statement:
We are committed to transparency and will respond to the Attorneys General within the required timeline. We are aware that an employee erroneously points to the existence of a supposed ‘loophole’ in the company’s moderation practices. What is being referenced is that Aylo platforms let their content creators and performers choose whether to show or to hide their face in their content.
There are a number of reasons why a verified model may choose to upload content that does not include their face, including their right to privacy. For this reason, we take extra precautions to ensure this content can be uploaded safely.
When a piece of privacy-preserving content is uploaded, a moderator carries out an evaluation process to determine whether, by reviewing the uploader’s other content and ID or other verification documents, each of the performers can be identified. If the performers can be identified, the content may be approved; if not, the content is escalated to a senior member of the moderation team for a secondary review to determine whether the content should be approved or rejected, or whether additional documentation is required.
We recognize the evolving challenges posed by user-generated content online. As such, we are constantly evolving and improving safety and security measures, which include, among others, Upload Verification Program, banning downloads, human moderation of all uploaded content, and continuous additions to our suite of automated moderation tools (CSAI Match, Content Safety API, PhotoDNA, Vobile, Safer, Safeguard, and more recently, NCMEC Take It Down and StopNCII).
Main Image: South Carolina AG Alan Wilson