EU DSA Transparency Report – August 2024

Name of the service provider

Aylo Freesites Ltd

Date of the publication of the report

30 August 2024

Service

Punchporn

Reporting period

The following report covers the reporting period of 17 February 2024 to 30 June 2024.

Orders from authorities (Art. 15(1)(a) DSA)

The table below shows the number of orders received from law enforcement for immediate removal of content, broken down by country and type.

Country | Non-Consensual Content | Total Orders
Austria | – | –
Belgium | – | –
Bulgaria | – | –
Croatia | – | –
Cyprus | – | –
Czech Republic (Czechia) | – | –
Denmark | – | –
Estonia | – | –
Finland | – | –
France | – | –
Germany | – | –
Greece | 1 | 1
Hungary | – | –
Ireland | – | –
Italy | – | –
Latvia | – | –
Lithuania | – | –
Luxembourg | – | –
Malta | – | –
Netherlands | – | –
Poland | – | –
Portugal | – | –
Romania | – | –
Slovakia | – | –
Slovenia | – | –
Spain | – | –
Sweden | – | –
Totals | 1 | 1

We provide an immediate automated response to acknowledge receipt.

The median time to process these requests once full information was received from law enforcement was 12 hours. This does not include the time to intake or follow-up on these requests as needed. We typically provide completed information to law enforcement within 5 business days of receipt, during which time the content and/or account in question is disabled, where appropriate.

The table below indicates the number of information requests from law enforcement relating to individuals/users, broken down by country and type.

Country | Child Sexual Abuse Material | Non-Consensual Content | Scams and/or Fraud | Risk for Public Security | Total Number of Requests
Austria | – | – | – | – | –
Belgium | – | – | – | – | –
Bulgaria | – | – | – | – | –
Croatia | – | – | – | – | –
Cyprus | – | – | – | – | –
Czech Republic (Czechia) | – | – | – | – | –
Denmark | 5 | 1 | 3 | 1 | 10
Estonia | – | – | – | – | –
Finland | – | – | – | – | –
France | – | – | 1 | – | 1
Germany | – | – | – | – | –
Greece | – | 3 | 4 | – | 7
Hungary | 1 | – | – | – | 1
Ireland | 1 | – | – | – | 1
Italy | – | – | – | – | –
Latvia | – | 2 | – | – | 2
Lithuania | – | – | – | – | –
Luxembourg | – | – | – | – | –
Malta | – | – | – | – | –
Netherlands | – | – | – | – | –
Poland | – | – | – | – | –
Portugal | – | – | 1 | – | 1
Romania | – | – | – | – | –
Slovakia | – | – | 2 | – | 2
Slovenia | – | – | – | – | –
Spain | – | – | – | – | –
Sweden | – | – | 1 | – | 1
Totals | 7 | 6 | 12 | 1 | 26

We provide an automated response to acknowledge receipt.

The median time to process these requests once full information was received from law enforcement was 12 hours. This does not include the time to intake or follow-up on these requests as needed. We typically provide completed information to law enforcement within 5 business days of receipt, during which time the content and/or account in question is disabled, where appropriate.

User notices (Art. 15(1)(b) DSA)

Note that the figures provided in this section are for the total number of notices received. A notice may list one or several pieces of content, and one piece of content could be flagged several times.

Content reported by users

The table below indicates the number of notices submitted by users through all available notification channels on Punchporn, including content removal requests (CRRs) and content flags.

Type of potential violation | Total
Potential Child Sexual Abuse Material | 6,097
Non-Consensual Content | 1,351
Illegal or Harmful Speech | 1,304
Content in violation of the platform’s terms and conditions | 25,324
Intellectual property infringements | 1,254
Total | 35,330

DSA Trusted Flaggers

We did not receive any removal requests from DSA Trusted Flaggers during the reporting period.

Actions taken on user reports

The table below indicates the number of pieces of content removed on the basis of user notices.

Reason for Removal | Total
Content in violation of the platform’s terms and conditions | 2,229
Non-Consensual Behavior | 1,547
Potential Child Sexual Abuse Material | 324
Animal Welfare | 2
Bodily Harm/Violence | 4
Intellectual Property Infringements | 1,546
Withdrawal of consent | 6,488
Total | 12,140

Notices processed by automated means

All notices are processed by our human moderation team; we do not use automated measures to action any requests. Note that content is immediately suspended from public view when reported via our content removal request form, prior to human review, provided that the submitter has validated their email address. If, after diligent human review, no illegality or incompatibility with our terms and conditions is confirmed, the content is reinstated.
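
For illustration only, the following sketch (hypothetical names and structures, not production code) shows the flow just described: content is suspended as soon as a validated report arrives, and is reinstated only if human review finds no violation.

```python
from dataclasses import dataclass

@dataclass
class Content:
    content_id: str
    publicly_visible: bool = True

@dataclass
class RemovalRequest:
    content: Content
    submitter_email_validated: bool

def intake_removal_request(request: RemovalRequest) -> None:
    """Suspend reported content from public view, prior to any human review."""
    if request.submitter_email_validated:
        request.content.publicly_visible = False

def resolve_after_human_review(request: RemovalRequest, violation_found: bool) -> None:
    """A human moderator confirms the removal or reinstates the content."""
    request.content.publicly_visible = not violation_found

# Example: a validated report suspends the content immediately; diligent
# review finds no violation, so the content is reinstated.
video = Content("abc123")
report = RemovalRequest(video, submitter_email_validated=True)
intake_removal_request(report)
assert video.publicly_visible is False
resolve_after_human_review(report, violation_found=False)
assert video.publicly_visible is True
```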

Median resolution time 

Reporting source | Median time
Content removal request form | 0.12 days
Content flags | 0.25 days
Copyright infringement form | 2.86 days

Content moderation (Art. 15(1)(c) DSA) & Automated content moderation (Art. 15(1)(e) DSA)

We use a combination of automated tools, artificial intelligence, and human review to help protect our community from illegal content. While all content available on the platform is reviewed by human moderators prior to publishing, we also have additional layers of moderation which audit material on our live platform for any potential violations of our Terms of Service.

The accuracy of content moderation is largely unaffected by Member State language, owing to our extensive use of automated tools and human moderation. Internal statistics show no significant differences between languages; offenses are largely language-independent.

Automated tools are used to help inform human moderators in making a manual decision. When an applicable automated tool detects a match between an uploaded piece of content and one in a hash list of previously identified illegal material, and that match is confirmed, the content is removed before it reaches a moderator.
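
A minimal sketch of this gate, assuming a simple exact-match hash list (the real tools listed below use perceptual hashing; the names here are illustrative):

```python
import hashlib

# Illustrative stand-in for hash lists of previously identified illegal
# material provided by NGOs (see the tools listed below).
KNOWN_ILLEGAL_HASHES: set[str] = set()

def route_upload(data: bytes) -> str:
    """Remove a confirmed hash-list match before it reaches a moderator;
    everything else proceeds to mandatory human review."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        return "removed"       # blocked prior to any moderator involvement
    return "human_review"      # no upload is published without human approval
```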

Automated Tools

Punchporn’s content moderation process includes an extensive team of human moderators dedicated to reviewing every single upload before it is published; a thorough system for flagging, reviewing, and removing illegal material; parental controls; and a variety of automated detection technologies for known, previously identified, or potentially inappropriate content. Specifically:

Hash-list tools – known illegal material

We use a variety of tools that scan incoming images and videos against hash-lists provided by NGOs. If there is a match, then content is blocked before publication.

  • CSAI Match: YouTube’s proprietary technology for combating Child Sexual Abuse Imagery online.
  • PhotoDNA: Microsoft’s technology that aids in finding and removing known images of child exploitation.
  • Safer: In November 2020, we became the first adult content platform to partner with Thorn, allowing us to begin using its Safer product on Punchporn, adding an additional layer of protection in our robust compliance and content moderation process. Safer joins the list of technologies that Punchporn utilizes to help protect visitors from unwanted or illegal material.
  • Instant Image Identifier: The Centre of Expertise on Online Child Sexual Abuse (Offlimits) tool, commissioned by the European Commission, detects known child abuse imagery using a triple-verified database.
  • NCMEC Hash Sharing: NCMEC’s database of known CSAM hashes, including hashes submitted by individuals who fingerprinted their own underage content via NCMEC’s Take It Down service.
  • StopNCII.org: A global initiative (developed by Meta & SWGfL) that prevents the spread of non-consensual intimate images (NCII) online. If any adult (18+) is concerned about their intimate images (or videos) being shared online without consent, they can create a digital fingerprint of their own material and prevent it from being shared across participating platforms.
  • Internet Watch Foundation (IWF) Hash List: IWF’s database of known CSAM, sourced from hotline reports and the UK Home Office’s Child Abuse Image Database.

AI tools – unknown illegal material

We utilize several tools that use AI to estimate the ages of performers. The output from these tools assists content moderators in deciding whether to allow publication of uploaded content. Specifically:

  • Google Content Safety API: Google’s artificial intelligence tool that helps detect illegal imagery.
  • Age Estimation: We also utilize age estimation capabilities to analyze content uploaded to our platform using a combination of internal proprietary software and external technology, provided by AWS and PrivateID to strengthen the varying methods we use to prevent the upload and publication of potential or actual CSAM.
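
To illustrate how such estimates can assist, rather than replace, a moderator's judgment, here is a hypothetical escalation rule; the threshold, names, and inputs are illustrative, not our production configuration.

```python
# Conservative margin above the legal minimum of 18 (illustrative value).
ESCALATION_THRESHOLD_YEARS = 25

def needs_heightened_review(estimated_ages: list[float]) -> bool:
    """Escalate an upload if any model's age estimate falls below the margin."""
    return any(age < ESCALATION_THRESHOLD_YEARS for age in estimated_ages)

# e.g. estimates for one performer from several independent estimators:
print(needs_heightened_review([31.2, 28.9, 24.6]))  # True -> heightened review
```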

Fingerprinting tools

In addition to hashes received from NGOs, we also use fingerprint databases to prevent previously prohibited material from being re-uploaded. Images and videos removed during the moderation process, or subsequently removed post-publication, are fingerprinted using the following tools to prevent re-publication. Content may also be proactively fingerprinted with these tools.

  • Safeguard: Aylo’s proprietary image recognition technology, designed to combat both child sexual abuse imagery and non-consensual content by preventing the re-upload of previously fingerprinted content to our platform.
  • MediaWise: Vobile’s fingerprinting software that scans new uploads for potential matches against previously fingerprinted videos, preventing that material from being uploaded or re-uploaded to the platform.
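
A simplified sketch of this re-upload prevention loop, with a placeholder fingerprint function standing in for tools such as Safeguard and MediaWise:

```python
import hashlib

# Fingerprints of content removed at moderation or post-publication.
removed_fingerprints: set[str] = set()

def fingerprint(data: bytes) -> str:
    # Placeholder: real fingerprinting tools are robust to re-encoding,
    # cropping, and other transformations, unlike a plain cryptographic hash.
    return hashlib.sha256(data).hexdigest()

def register_removal(data: bytes) -> None:
    """Fingerprint removed (or proactively flagged) content."""
    removed_fingerprints.add(fingerprint(data))

def upload_is_allowed(data: bytes) -> bool:
    """Reject any new upload matching previously fingerprinted content."""
    return fingerprint(data) not in removed_fingerprints
```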

Moderation / Compliance Content Upload Process

The chart below shows our moderation/compliance process from account creation to publication.

[Flowchart: moderation/compliance content upload process, from account creation to publication]

Accuracy & Safeguards

Whilst automated tools assist in screening for, and detecting illegal material, uploaded images and videos cannot be published without being reviewed and approved by our trained staff of moderators. This acts as a quality control mechanism and safeguard for the automated systems.

Video removals from internal moderation

The table below provides the number of videos removed* on the basis of proactive voluntary measures (internal moderation, internal tools, internal audit), broken down by type of removal and total.

Reason for Removal | Total
Content in violation of the platform’s terms and conditions | 29,267
Non-Consensual Behavior | 350
Potential Child Sexual Abuse Material | 2,804
Animal Welfare | 93
Bodily Harm/Violence | 271
Illegal or Harmful Speech | 6
Total | 32,791

* Removals in this section may include content already removed in a previous period and reclassified to a different reason code during this reporting period as a result of internal auditing.

Image removals from internal moderation

The table below indicates images removed by internal means (internal moderation, internal tools, internal audit), broken down by type of removal and total.

Reason for Removal | Pieces of Content
Content in violation of the platform’s terms and conditions | 317,519
Non-Consensual Behavior | 384
Potential Child Sexual Abuse Material | 642
Animal Welfare | 86
Bodily Harm/Violence | 64
Illegal or Harmful Speech | 3
Total | 318,698

Manual vs automated removals from internal moderation

The table below indicates the pieces of content removed by internal means, broken down into automated (tools) and manual (internal moderation, internal audit) decisions. Automated decisions are those where an exact binary match was achieved by one of our hashing tools against known illegal material. Manual decisions are those where a human made the decision, with or without the help of assisting tools.

Type of Content | Total
Videos – Automated | 335
Videos – Manual | 32,456
Photos – Automated | 6
Photos – Manual | 318,692
Total | 351,489

User restrictions

The table below indicates the number of users banned, broken down by the reason for removal.

Reason for Removal | Total
Age-specific restrictions concerning minors | 720
Animal Welfare | 6
Content in violation of the platform’s terms and conditions | 16,690
Goods/services not permitted to be offered on the platform | 285
Illegal or harmful speech | 53
Impersonation or account hijacking | 1
Inauthentic accounts | 2,421
Non-consensual image sharing | 509
Potential Child Sexual Abuse Material | 2,812
Violence | 1
Total | 23,498

Complaints received against decisions (Art. 15(1)(d) DSA)

The table below shows the number of appeals from users against decisions to remove their content or to impose restrictions against their account. Appeals include requests for additional information about the corresponding removal or restriction.

Appeals – Account Restrictions | Number of Appeals
Total Account Appeals | 2,631
Decision Upheld | 2,613
Account Reinstated | 18


The median time to resolve these complaints was just under 18 days.

Appeals – Content Removals | Number of Appeals
Total Content Appeals | 2,755
Decision Upheld | 1,871
Content Reinstated | 884

The median time to resolve these complaints was just under 7 days.

Out-of-court dispute settlement (Art. 24(1)(a) DSA)

To our knowledge, no disputes have been submitted to out-of-court settlement bodies during the reporting period.

Suspensions for misuse (Art. 24(1)(b) DSA)

Number of accounts banned: 23,498

Number of accounts actioned for repeatedly submitting unfounded removal requests: 317


Human resources (Art. 42(2)(a) and (b) DSA)

It is vital to note that images and videos are not published on the platform until they have been reviewed by a human moderator, and that our moderators are not subjected to any content review quotas. They are directed to review content and approve it if they’ve determined that the content does not violate our terms of service. Therefore, increasing the number of moderators would primarily impact the speed at which content is published on Punchporn, with little additional effect on the volume of illegal or incompatible content that is actually disseminated.

Qualifications and linguistic expertise of HR dedicated to content moderation

All moderators review and assess content in a wide variety of languages and employ several tools to assess this content. All metadata is scanned against our Banned Word Service, which contains a library of over 40,000 banned terms across more than 40 languages (including 21 EU languages), prior to reaching moderators. Moderators then employ translation tools to evaluate the metadata and ensure that the text is compliant. Audio content is assessed by moderators who either use translation/transcription tools or who understand the spoken language in the content. In cases where the audio content cannot be understood, the content is rejected, as we are unable to meaningfully evaluate potential compliance issues. In all cases, moderation is a collaborative task in which moderators are encouraged to solicit opinions from their co-workers, senior team members, leads, and managers when reviewing content.
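
As a toy illustration of this kind of metadata screening (the term list, tokenization, and function names are invented for the example, not the actual Banned Word Service):

```python
import re

# In production the library contains over 40,000 terms across 40+ languages;
# two placeholder entries stand in for it here.
BANNED_TERMS = {"bannedterm", "verbotenerbegriff"}

def flagged_terms(metadata: str) -> set[str]:
    """Return any banned terms found in an upload's title, tags, or description."""
    tokens = {token.lower() for token in re.findall(r"\w+", metadata, re.UNICODE)}
    return tokens & BANNED_TERMS

# Example: metadata is screened before it ever reaches a moderator.
print(flagged_terms("Example title with bannedterm"))  # {'bannedterm'}
```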

Training and support given to content moderation HR

All moderators receive extensive training over a 3-month period that involves theoretical and practical exercises, job shadowing, and a final exam that requires a perfect score to pass. Once the fundamentals of the compliance guidelines are confirmed, moderators are supervised on all of their reviews for a period of time. Any moderation errors are addressed and corrected to ensure consistent application of the guidelines.

We use two different virtual care platforms (North America & Europe) that give moderators access to a variety of health and wellness professionals. We also use an additional program which provides moderators with further, complementary support and tailored wellness programs consisting of fitness/nutrition/life coaches, counsellors, and medical professionals. 


Information on the average monthly recipients of the service for each Member State

Relevant period: 1 February 2024 – 31 July 2024

Country | Monthly Average
Austria | 549,097
Belgium | 1,132,161
Bulgaria | 380,356
Cyprus | 101,346
Czechia | 356,668
Germany | 4,992,442
Denmark | 541,698
Estonia | 106,006
Spain | 3,385,833
Finland | 294,077
France | 5,432,986
Greece | 663,651
Croatia | 214,982
Hungary | 502,393
Ireland | 403,400
Italy | 2,533,391
Lithuania | 204,276
Luxembourg | 23,563
Latvia | 111,437
Malta | 70,706
Netherlands | 2,461,258
Poland | 1,429,743
Portugal | 1,199,342
Romania | 617,934
Sweden | 1,251,363
Slovenia | 223,376
Slovakia | 253,593
EU Total | 28,137,511

Due primarily to deduplication of individual users who may access the platform in multiple EU Member States during the same month, the total sum of the EU Member State counts may not exactly equal the EU-wide figure; for example, a user who accesses the platform from both France and Germany in the same month appears in both Member State counts but only once in the EU total. While we have employed reasonable and rigorous processes to publish the most accurate figures possible, imperfections are probable.