Our Commitment to Trust and Safety
At Punchporn, nothing is more important than the safety of our community and the trust of our users. Our core values, such as inclusivity and freedom of expression, are only possible when our platform maintains its safety and integrity and can be trusted by our adult users, as well as our business and content partners. We remain steadfast in our commitment to eliminating illegal content, including non-consensual intimate material and child sexual abuse material (CSAM). Every online platform shares this responsibility, and it requires collective action, cooperation, and constant vigilance.
Over the years, we have deployed many measures to protect our platform from illegal and abusive content. We are continually improving our Trust and Safety policies to better identify, review, remove, and report illegal material, both before it is made available on our platforms and after it is reported to us. While leading non-profit and advocacy groups recognize that our efforts have been effective, we understand that this work must be ongoing, and we always strive to innovate.
In early 2020, we voluntarily registered to report to the National Center for Missing and Exploited Children (NCMEC) as part of their ESP program. In the spring of 2021, we issued our first transparency report, which provides more information about our approach to moderation, related policies, partnerships, and Trust and Safety efforts. We plan to produce annual Transparency Reports moving forward. Further details on our policies can be found below.
We also took additional steps to further protect our community, including banning content downloads and restricting the ability to upload to verified uploaders only. We continue to build upon our verification and moderation efforts through tools, technology, and partnerships, as they become available.
Members of our Trusted Flagger Program, which now includes dozens of non-profit organizations, may alert us to potential CSAM or non-consensual content. Punchporn users may also alert us to potentially abusive material by using our user, content, and comment flagging features. Users may alert us to content that may be illegal or otherwise violate our Terms of Service by using the flagging feature found below all videos and photos, or by filling out our Content Removal Request (CRR) form. Flags and CRRs are kept confidential, and we review all content that is brought to our attention by users through these means.
1. Verified Content Creators
Only verified content creators may upload content to Punchporn.
Who Are “Verified Content Creators”?
Verified content creators include the following groups:
- Verified Models within the Model Program
- Verified Studios within the Content Partner Program
How Can Content Creators Get Verified?
Our Content Partners maintain proof of identity, age, and consent for all performers featured in content uploaded to Punchporn.
To become a member of the Model Program, a user must verify their age and identity before they are eligible to upload content to Punchporn. Members of the Model Program must also obtain, retain, and provide identification and evidence of consent to record and distribute for every performer appearing in their content before that content is uploaded to Punchporn. This means the Model or Co-Performer must submit the following prior to upload:
- Government-issued photo identification for all performers
- Consent documentation, such as signed Release Forms, for all performers appearing in content uploaded to Punchporn
To learn more about joining the Model Program, click here.
To learn more about our trusted third-party identification verification service, click here.
2. Banning Downloads
We have removed the ability for users to download content from Punchporn. In tandem with our fingerprinting technology, this helps prevent previously removed content from being re-uploaded.
3. Content Moderation
We use a combination of automated tools, artificial intelligence, and human review to help protect our community from illegal content. All content is reviewed before it goes live on the platform. We also maintain additional layers of moderation that audit our live sites: proactively sweeping already-uploaded content for potential violations and identifying any breakdowns in the moderation process that could allow content violating our Terms of Service to remain available on our platform.
Additionally, while our list of banned keywords on Punchporn is already extensive, we ban new keywords on an ongoing basis. We also regularly monitor search terms within the platform for increases in phrasing that attempts to bypass our safeguards, and we adapt our banned-keyword list accordingly.
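Punchporn's actual filtering implementation is not public. As a minimal illustration of the general approach described above, the Python sketch below normalizes search queries so that simple bypass attempts (spacing, punctuation, look-alike character substitutions) collapse to a canonical form before being matched against a banned-term list. All terms, names, and substitution rules here are illustrative assumptions, not the platform's real configuration.

```python
import re

# Illustrative banned-term list; a real deployment maintains a much
# larger list that is updated on an ongoing basis.
BANNED_TERMS = {"badterm", "worseterm"}

# Common look-alike character substitutions used to evade keyword filters.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a",
})

def normalize(query: str) -> str:
    """Lowercase, map look-alike characters to letters, and strip
    non-letters so spaced or punctuated variants collapse together."""
    q = query.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z]", "", q)

def is_blocked(query: str) -> bool:
    """Block a search if any banned term appears in the normalized query."""
    canonical = normalize(query)
    return any(term in canonical for term in BANNED_TERMS)
```

Monitoring which evasion patterns appear in real search logs would then feed back into both the substitution table and the banned-term list.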
Punchporn’s content moderation process includes an extensive team of human moderators dedicated to manually reviewing every single upload before it is published; a thorough system for flagging, reviewing, and removing illegal material; parental controls; and a variety of automated detection technologies for known, previously identified, or potentially inappropriate content. These technologies include:
- CSAI Match: YouTube’s proprietary technology for combating Child Sexual Abuse Imagery online.
- Google Content Safety API: Google’s artificial intelligence tool that helps detect illegal imagery.
- PhotoDNA: Microsoft’s technology that aids in finding and removing known images of child exploitation.
- MediaWise: Vobile’s fingerprinting software that scans any new uploads for potential matches to unauthorized materials to protect against banned videos being re-uploaded to the platform.
- Safer: In November 2020, we became the first adult content platform to partner with Thorn, allowing us to begin using its Safer product on Punchporn, adding an additional layer of protection in our robust compliance and content moderation process. Safer joins the list of technologies that Punchporn utilizes to help protect visitors from unwanted or illegal material.
- Instant Image Identifier: A tool from the Centre for Expertise on Online Child Sexual Abuse (Offlimits), commissioned by the European Commission, that detects known child abuse imagery using a triple-verified database.
- NCMEC Hash Sharing: NCMEC’s database of known CSAM hashes, including hashes submitted by individuals who fingerprinted their own underage content via NCMEC’s Take It Down service.
- Internet Watch Foundation (IWF) Hash List: IWF’s database of known CSAM, sourced from hotline reports and the UK Home Office’s Child Abuse Image Database.
- StopNCII.org: A global initiative (developed by Meta & SWGfL) that prevents the spread of non-consensual intimate images (NCII) online. If any adult (18+) is concerned about their intimate images (or videos) being shared online without consent, they can create a digital fingerprint of their own material and prevent it from being shared across participating platforms.
- Safeguard: Punchporn’s proprietary image recognition technology, designed to combat both child sexual abuse imagery and non-consensual content by preventing the re-upload of previously fingerprinted content to our platform.
- Age Estimation: We use age estimation capabilities, combining internal proprietary software and external technology, to analyze content uploaded to our platform and strengthen the methods we use to prevent the upload and publication of potential or actual CSAM.
- Transcription Service: Our transcription service transcribes the audio in uploaded videos to text. The transcribed text is then run against our list of banned words to assist our moderation team in identifying content that may violate our Terms of Service or Community Guidelines.
- Watermark Detection Service: We utilize internal proprietary technology to detect watermarks previously identified in violative content, helping us identify and remove new or related uploads of that content.
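The hash- and fingerprint-matching technologies above share a common pattern: compute a digest of each upload and look it up in databases of previously identified material. The real systems use perceptual fingerprints (such as PhotoDNA or MediaWise) that survive re-encoding and cropping; the minimal Python sketch below substitutes an exact cryptographic hash purely to illustrate the lookup flow, and every name and data value in it is an illustrative assumption.

```python
import hashlib

# Illustrative blocklist of digests for previously removed files.
# Real systems use perceptual hashes robust to re-encoding, not SHA-256,
# and source their databases from partners such as NCMEC or the IWF.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously-removed-content").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    """Compute a digest of the uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> str:
    """Return 'block' if the upload matches known removed material,
    otherwise 'review' to pass it on to human moderation."""
    if fingerprint(data) in KNOWN_BAD_HASHES:
        return "block"
    return "review"
```

Note that in this flow a non-match never publishes content directly: everything that clears the automated lookup still goes to human review, matching the layered moderation process described above.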
Users may alert us to content that may be illegal or otherwise violate our Terms of Service by using the flagging feature, found below all videos and photos, or by filling out our Content Removal Request form.
4. Trusted Flagger Program
Our Trusted Flagger Program is an initiative to empower non-profit and NGO partners to alert us to content they believe violates our Terms of Service. The Trusted Flagger Program consists of more than 40 leading organizations in internet and child safety. Our partners have a direct line of access to our moderation team, and any content identified by a Trusted Flagger is immediately disabled. Members of the Trusted Flagger Program include: Cyber Civil Rights Initiative (United States of America), National Center for Missing & Exploited Children (United States of America), Internet Watch Foundation (United Kingdom), Point de Contact (France), Centre for Safer Internet Slovenia (Slovenia), ECPAT (Sweden), ECPAT (Taiwan).
5. NCMEC Reporting
In early 2020, Punchporn voluntarily registered with NCMEC’s ESP program as part of an initiative to set up an automated reporting system for content that violates our CSAM policy. We will also continue to work with law enforcement globally to report and curb any instances of illegal content.
6. Transparency Report
Our Transparency Reports enable the public to better understand not only how we strive to keep our platform safe, but how we as a company are actively contributing to the fight against illegal and abusive content. These reports expand on our proactive and defensive strategies to prevent, detect, and remove illegal content, as well as content moderation results for the respective year.
Much like Facebook, Instagram, Twitter, and other tech platforms, Punchporn seeks to be fully transparent about the data pertaining to violative content uploads and the quantitative results of our Trust and Safety efforts to prevent such material from appearing on our platform. We take pride in being the first adult platform to release a Transparency Report, having published our inaugural report in April 2021. We will continue to release these reports on an annual basis.