Achieving and upholding the highest Trust and Safety standards relies not only on our internal policies and moderation team, but also on our array of external partnerships and technologies. We have compiled a list of tools, Trust and Safety initiatives, and partnerships to provide more information about how we work to prevent and respond to illegal content on our platform, as well as to ensure user safety.
Content Scanning and Classification Tools
We utilize a number of external and internal tools to identify and remove potentially illegal content. These technologies include:
- CSAI Match: YouTube’s proprietary technology for combating Child Sexual Abuse Imagery online.
- Google Content Safety API: Google’s artificial intelligence tool that helps detect illegal imagery.
- PhotoDNA: Microsoft’s technology that aids in finding and removing known images of child exploitation.
- MediaWise: Vobile’s fingerprinting software that scans new uploads for potential matches to previously fingerprinted material, protecting against banned videos being re-uploaded to the platform.
- Safer: In November 2020, we became the first adult content platform to partner with Thorn, allowing us to begin using its Safer product on Punchporn, adding an additional layer of protection in our robust compliance and content moderation process. Safer joins the list of technologies that Punchporn utilizes to help protect visitors from unwanted or illegal material.
- Instant Image Identifier: A tool from Offlimits (the Dutch Centre of Expertise on Online Child Sexual Abuse), commissioned by the European Commission, that detects known child abuse imagery using a triple-verified database.
- NCMEC Hash Sharing: NCMEC’s database of known CSAM hashes, including hashes submitted by individuals who used NCMEC’s Take It Down service to fingerprint intimate images of themselves taken when they were underage.
- Internet Watch Foundation (IWF) Hash List: IWF’s database of known CSAM, sourced from hotline reports and the UK Home Office’s Child Abuse Image Database.
- StopNCII.org: A global initiative (developed by Meta & SWGfL) that prevents the spread of non-consensual intimate images (NCII) online. If any adult (18+) is concerned about their intimate images (or videos) being shared online without consent, they can create a digital fingerprint of their own material and prevent it from being shared across participating platforms.
- Safeguard: Safeguard is Punchporn’s proprietary image recognition technology, designed to combat both child sexual abuse imagery and non-consensual content by preventing the re-upload of previously fingerprinted content to our platform.
- Age Estimation: We also utilize age estimation technology to analyze content uploaded to our platform, combining internal proprietary software with external tools to strengthen the methods we use to prevent the upload and publication of potential or actual CSAM.
- Transcription Service: Our transcription service transcribes the audio in uploaded videos to text. Transcribed text is then run against our list of banned words to assist our moderation team in identifying content that may violate our Terms of Service or Community Guidelines.
- Watermark Detection Service: We utilize internal proprietary technology to detect watermarks identified in violative content to assist in identifying and removing new or related uploads of this content.
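Several of the tools listed above share a common underlying pattern: content is reduced to a compact fingerprint, and new uploads are checked against a database of fingerprints of known illegal or banned material before they can be published. The sketch below illustrates that pattern only; it is not any vendor’s actual API, and the hash set, function names, and return values are hypothetical. Production systems such as PhotoDNA use perceptual hashes that survive re-encoding and cropping, whereas this sketch uses a simple cryptographic hash for clarity.

```python
import hashlib

# Hypothetical set of fingerprints of known banned content.
# (This value is the SHA-256 digest of the bytes b"test", used as a stand-in.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's fingerprint.

    Real matching systems use perceptual hashing, which tolerates
    re-encoding; SHA-256 is used here only to keep the sketch simple.
    """
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> str:
    """Block an upload whose fingerprint matches a known-bad hash."""
    if fingerprint(data) in KNOWN_HASHES:
        return "blocked"   # route to moderation / reporting workflow
    return "allowed"       # continue to normal review pipeline
```

In a real deployment the "blocked" branch would also trigger reporting obligations (for example to NCMEC), and the hash database would be supplied and updated by the external partners named above rather than maintained ad hoc.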
Punchporn remains vigilant when it comes to researching, adopting, and using the latest and best available detection technology to help keep the platform safe, while remaining an inclusive, sex-positive online space for adults.
Partnerships
The Internet Watch Foundation (IWF) Partnership
The IWF is a technology-led child protection organization working to make the internet a safer place for children and adults across the world. It is one of the largest child protection organizations globally.
In November 2022, we announced a partnership with the IWF overseen by an Expert Advisory Board including the British Board of Film Classification, SWGfL, Aylo, Marie Collins Foundation, the Home Office, PA Consulting, National Crime Agency (NCA) and academia represented by Middlesex University and Exeter University, to:
- Develop a model of good practice to guide the adult industry in combatting child sexual abuse imagery online;
- Evaluate the effectiveness of IWF services when deployed across Aylo’s brands;
- Combine technical and engineering expertise to scope and develop solutions which will assist with the detection, disruption and removal of child sexual abuse material online.
In May 2024, Aylo and the IWF published the world’s first standards of good practice for adult content sites: Aylo and IWF partnership ‘paves the way’ for adult sites to join war on child sexual abuse online
The reThink Chatbot
Building upon our efforts to spread awareness about the harm of child sexual abuse material (CSAM) and to deter users from searching for this kind of content, we launched the reThink Chatbot in partnership with the Internet Watch Foundation and Stop It Now! in March 2022. The reThink Chatbot engages with Punchporn users attempting to search for sexual imagery of children and signposts them to Stop It Now! UK and Ireland, where they can receive help and support to address their behavior.
The results were independently evaluated by the University of Tasmania and published in February 2024. They showed that the chatbot was successful in reducing the number of searches for child sexual abuse material.
ActiveFence
To expand on our efforts in identifying and responding to safety risks on Punchporn, we partnered with ActiveFence, a leading technology provider for Trust and Safety teams working to protect platform integrity and keep users safe. ActiveFence helps Punchporn combat abusive content by identifying bad actors, content that violates our platform’s policies, and mentions of the use of Punchporn for illegal or suspicious activity.
Spectrum Labs AI
We have been working with Spectrum Labs AI to proactively surface and remove harmful text on our platforms, improving the productivity and accuracy of our internal moderation teams and tooling.
Trust & Safety Professional Association
In 2023, we became members of the Trust and Safety Professional Association (TSPA), an organization which works to serve as a global community for Trust and Safety teams and individuals. TSPA offers Trust and Safety teams the opportunity to collaborate and share information to assist in developing best practices related to Trust and Safety on online platforms.
The Cupcake Girls Partnership
The Cupcake Girls are an organization that provides advocacy and referral services to consensual sex workers, as well as prevention and aftercare services to those affected by sex trafficking.
Uniting The Cupcake Girls’ expertise in supporting sex workers with our technology and educational resources, this partnership will drive meaningful change for our community. By sharing resources, from collective collaborations to data insights, and by engaging with sex workers, the goal is to support consensual sex workers and foster an environment where their safety and success are prioritized.
Age Determination – Thorn, Teleperformance & ICMEC
In May 2024, Teleperformance published a paper on age determination that we co-authored with Thorn and the International Centre for Missing and Exploited Children (ICMEC): Age Determination – A Guide for Online Platforms that Feature User-Generated Content (teleperformance.com)
We provided insights from our in-house moderation team in co-writing the paper which explores the intricate process of age determination within user-generated content on online platforms, shedding light on the pivotal role it plays in enforcing content policies, particularly safeguarding minors.
Crimestoppers International (CSI)
In January 2024, we announced our working relationship with Crime Stoppers International, with the shared goal of enhancing cooperation and collaboration between the online adult industry, civil society, and law enforcement in combatting online harms and exploitation.
For a full list of Trust and Safety partnerships, initiatives and external technology, please refer to our Trust and Safety Initiatives page.
Identity Verification
Yoti – Third-Party Identity Documentation Validation
Only verified content creators may upload content to Punchporn.
Users who apply to the Model Program submit their identity documents for verification through a trusted third-party documentation validation and identity verification service provider. Members of the Model Program are also required to obtain, retain and provide identification and evidence of consent to record and distribute for every performer appearing in their content.
Yoti is our primary third-party identity verification provider. Yoti is trusted by governments and regulators around the world, as well as by a wide range of commercial industries. Yoti deploys a combination of state-of-the-art AI technology, liveness anti-spoofing, and document authenticity checks to thoroughly verify the age and identity of any user. Yoti’s technology can handle millions of scans per day, with over 240 million age scans performed in 2019. Yoti has received certification from the British Board of Film Classification’s age verification program and from the Age Check Certification Scheme, which is accredited by the United Kingdom Accreditation Service.
Users who verify their identity with Yoti can trust that their personal data remains secure. Yoti is certified to meet the requirements of ISO/IEC 27001, which is the global gold standard for information security management. This means that Yoti’s security aligns with Punchporn’s commitment to protecting our users’ privacy as well. You can learn more about how Yoti works here.
Deterrence Messaging
Lucy Faithfull Foundation & Deterrence Messaging
The Lucy Faithfull Foundation is a widely respected non-profit organization at the forefront of the fight against child sexual abuse, whose crucial efforts include a campaign designed to deter searches seeking or associated with underage content. We have actively worked with Lucy Faithfull to develop deterrence messaging for terms relating to child sexual abuse material (CSAM). As a result, attempts to search for certain words and terms now yield a clear deterrence message about the illegality of underage content, while offering a free, confidential, and anonymous support resource for those seeking help to control their urges to engage in illegal behavior. In 2022, we partnered with additional organizations that offer local resources to change user behavior, including those in Canada, the Czech Republic, and Australia. This brings our total to eleven localized versions and one global CSAM deterrence message. In the first half of 2023, we added messaging in Germany via Charité, in Switzerland via Beforemore and Disno, and in Spanish-speaking countries via Protect Children.
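The deterrence flow described above can be thought of as a simple pipeline: match the search query against a list of flagged terms, then serve a deterrence message pointing to a support resource localized for the user. The sketch below is purely illustrative; the term list, locale codes, and resource names are hypothetical placeholders, not the actual lists or partner routing used on the platform.

```python
from typing import Optional

# Hypothetical flagged-term list; the real CSAM term list is not public.
DETERRENCE_TERMS = {"example-banned-term"}

# Hypothetical locale-to-resource routing, loosely modeled on the
# localized versions described above (e.g. Stop It Now! for the UK).
LOCALIZED_RESOURCES = {
    "en-GB": "Stop It Now! UK and Ireland",
    "de-DE": "Charité",
}
DEFAULT_RESOURCE = "the global CSAM deterrence resource"

def check_search(query: str, locale: str) -> Optional[str]:
    """Return a deterrence message if the query contains a flagged term,
    otherwise None (the search proceeds normally)."""
    words = query.lower().split()
    if any(term in words for term in DETERRENCE_TERMS):
        resource = LOCALIZED_RESOURCES.get(locale, DEFAULT_RESOURCE)
        return (
            "Searching for this material is illegal. Free, confidential, "
            f"and anonymous help is available via {resource}."
        )
    return None
```

A production system would use more robust matching than exact word lookup (stemming, obfuscation handling, multilingual variants), but the routing of a matched query to a localized support resource is the essential shape of the program.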
Deterrence Messaging for Non-Consensual Intimate Imagery
Since the launch of our CSAM deterrence messaging, we have embarked on a similar initiative for non-consensual material. This covers searches for content involving a lack of consent to the sexual acts, to the recording or distribution of the content, or to the manipulation of one’s image (commonly known as “deepfakes”). If users search for terms relating to non-consensual material, they are reminded that such material may be illegal. As part of the messaging, resources are provided for the removal of non-consensual material and support for victims via our NGO partners. In 2022, we added Permesso Negato, an NGO based in Italy providing support to victims across the EU, along with StopChikane, an NGO based in Denmark.
Trusted Flagger Program
Launched in 2020, our Trusted Flagger Program now comprises 55 members spanning over 35 countries, including the Internet Watch Foundation (UK), the Cyber Civil Rights Initiative (USA), End Child Prostitution and Trafficking (Sweden/Taiwan), and Point de Contact (France). This program allows hotlines, helplines, government agencies, and other trusted organizations to disable content automatically and instantly on Punchporn, without awaiting our internal review. It is a significant step forward in identifying, removing, and reporting harmful and illegal content. In 2023, we added PermessoNegato (Italy), The RATI Foundation (India), The Centre for Exploited and Missing Children (Serbia), Cultivando Genero AC (Mexico), StopChikane (Denmark), and the Taipei Women’s Rescue Foundation (Taiwan).