If you opened Spotify to listen to Cardi B and were suddenly asked to verify your age, you might wonder: Why do I need to show ID just to play a song? The same kinds of checks are increasingly appearing on platforms like YouTube, where certain videos are locked behind verification, and in games like Fortnite, where community features may require proof of age. The answer lies in the UK’s Online Safety Act (OSA), one of the most sweeping digital regulations introduced in recent years. It reshapes how platforms—from music apps to gaming marketplaces—will need to handle online safety moving forward, especially for children.
Below, we’ll break down what the law requires, how it’s changing the way companies operate, and why we believe compliance at this scale is too complex to manage manually.
What is the Online Safety Act?
The UK’s Online Safety Act 2023 received royal assent in October 2023, and its child-safety duties, including mandatory age checks, came into force in July 2025. It gives Ofcom—the UK’s communications regulator—broad powers to hold online services accountable for harmful or illegal content. The law applies to user-to-user platforms (such as social media, streaming, and gaming services) and search engines that are “likely to be accessed by children.”
Platforms in scope must:
- Conduct risk assessments of harmful content (such as pornography, self-harm, eating disorders)
- Implement “Highly Effective Age Assurance” (HEAA) measures for medium- and high-risk services to restrict under-18s from encountering harmful content
- Face steep penalties for non-compliance: fines of up to £18 million or 10% of global annual turnover, whichever is greater
What does “Highly Effective Age Assurance” entail?
Ofcom defines “Highly Effective Age Assurance” (HEAA) as age checks that are sufficiently reliable to prevent children from encountering harmful content in practice. According to its guidance, this means measures must go beyond simple self-declaration of age or easily bypassed checks. Instead, services must adopt approaches that can provide a high degree of confidence about whether a user is a child or an adult.
HEAA can involve different methods, such as digital ID verification, checks through trusted third parties, or technologies like facial age estimation.
Ofcom notes that no single method is mandated, but platforms must demonstrate that their chosen system is robust, accurate, and proportionate to the risks on their service. Importantly, HEAA must also respect data protection and privacy standards: services should only collect and retain the minimum information necessary to establish age, and they must explain their methods clearly to users and parents. In short, “highly effective” means assurance systems that genuinely reduce children’s exposure to harmful material while being practical, transparent, and privacy-conscious.
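To make that concrete, here is a minimal sketch of what an HEAA-style gate might look like in application code. The provider interface, confidence threshold, and function names are illustrative assumptions, not any mandated implementation; the point is that the service receives only a high-confidence over/under-18 signal and stores nothing more.

```typescript
// Hypothetical sketch of an HEAA-style gate (names and threshold are illustrative).
// The provider returns only an over/under-18 signal, never a full identity document.
type AgeSignal = { over18: boolean; confidence: number }; // confidence in [0, 1]

interface AgeAssuranceProvider {
  // e.g. facial age estimation, or a digital ID check via a trusted third party
  check(userId: string): Promise<AgeSignal>;
}

const CONFIDENCE_THRESHOLD = 0.95; // set per the service's own risk assessment

async function canAccessAdultContent(
  userId: string,
  provider: AgeAssuranceProvider
): Promise<boolean> {
  const signal = await provider.check(userId);
  // Data minimisation: persist only the boolean outcome, not the raw evidence.
  return signal.over18 && signal.confidence >= CONFIDENCE_THRESHOLD;
}
```

A real deployment would choose the method and threshold based on the risk profile of the specific service, as Ofcom's guidance requires.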
How does the OSA impact companies across industries?
While pornography sites have received much attention in media coverage, the OSA applies far more broadly. Streaming platforms, music services, and creator marketplaces all face the same regulatory expectations.
Music companies
Spotify
Spotify has explicitly acknowledged the requirements of the Online Safety Act, publishing an age assurance page to explain its approach. In practice, this means Spotify will:
- Introduce checks to determine whether a listener is a child or an adult when they interact with content tagged as 18+
- Limit or block access to content flagged as 18+ if the user declines the age-assurance check or is confirmed to be underage
- Be able to demonstrate compliance to Ofcom with verifiable assurance processes
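As a rough illustration (this is not Spotify's actual code), the gating decision reduces to a small state machine over the listener's verification status:

```typescript
// Illustrative only; not Spotify's actual implementation.
type AgeStatus = "verified_adult" | "verified_minor" | "unverified";
type PlaybackDecision = "play" | "block" | "prompt_age_check";

function resolvePlayback(
  track: { explicit18Plus: boolean },
  status: AgeStatus
): PlaybackDecision {
  if (!track.explicit18Plus) return "play"; // no gate needed for general content
  switch (status) {
    case "verified_adult":
      return "play"; // assurance check already passed
    case "verified_minor":
      return "block"; // confirmed underage
    case "unverified":
      return "prompt_age_check"; // trigger the HEAA flow before playback
  }
}
```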
Spotify’s challenge is unique in that music itself isn’t inherently harmful, but the platform distributes a huge catalog where lyrics, album art, or podcasts may carry explicit content. Spotify already uses parental advisories (the “E” explicit label) and content filters, but under the OSA it must go further—implementing systematic age checks that can distinguish between children and adults, not just tag content.
For a platform with hundreds of millions of global users, manual ID checks aren’t feasible. This is why Spotify has explored technology-driven approaches like automated age verification and partnerships with assurance providers.
YouTube
YouTube has long implemented age restrictions, but the OSA raises the bar. Ofcom guidance now expects platforms to implement robust, verifiable assurance, not just rely on self-declared birthdates. YouTube already uses machine learning to detect potentially harmful content and applies stricter parental controls through YouTube Kids and supervised accounts.
It has begun rolling out an AI-driven age inference model that assesses user behavior as part of its age verification process. If the model flags someone as a minor, YouTube then prompts for additional verification—such as uploading a government ID, credit card, or a selfie—before lifting restrictions. Under the OSA, YouTube must show these measures are sufficiently effective to keep under-18s away from inappropriate videos.
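The pattern here is inference first, escalation second. A hedged sketch of that flow, with hypothetical function names standing in for YouTube's internal systems:

```typescript
// Hypothetical sketch of an infer-then-escalate flow; names are illustrative.
type Evidence = "government_id" | "credit_card" | "selfie";

async function gateUser(
  inferLikelyMinor: (userId: string) => Promise<boolean>, // behavioural model
  verifyWithEvidence: (userId: string, options: Evidence[]) => Promise<boolean>,
  userId: string
): Promise<"unrestricted" | "restricted"> {
  // Step 1: cheap, passive inference from account signals and behaviour.
  if (!(await inferLikelyMinor(userId))) return "unrestricted";
  // Step 2: the model flagged a possible minor, so escalate to stronger proof.
  const verifiedAdult = await verifyWithEvidence(userId, [
    "government_id",
    "credit_card",
    "selfie",
  ]);
  return verifiedAdult ? "unrestricted" : "restricted";
}
```

The appeal of this design is cost: the passive model handles the vast majority of users, and the expensive verification step runs only on the flagged minority.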
Given its global footprint, YouTube also faces complexity in aligning UK-specific compliance with existing EU and US frameworks, meaning a patchwork of different standards to manage.
Creator marketplaces
Epic Games
Gaming platforms like Epic Games’ Fortnite fall under the category of “user-to-user” services, since players interact via chat, voice, and community spaces. Because Fortnite has a massive under-18 audience, Epic must now assess and mitigate risks of harmful interactions—bullying, grooming, or exposure to adult content. The OSA also requires Epic to apply age assurance at sign-up or when players access sensitive features.
Epic has some prior groundwork: it already deploys SuperAwesome’s parental consent and child-safe advertising tools. These systems allow developers to integrate child-appropriate experiences. Under the OSA, however, Epic must expand these safeguards beyond advertising—covering in-game chat, purchases, and community interactions.
Epic relies on Trolley to handle verification and identity assurance in its compliance stack—Trolley’s platform helps Epic automate user trust checks seamlessly at critical touchpoints.
Envato
Envato, a leading digital marketplace for creative assets such as templates, audio, graphics, and more, also falls under the obligations of the Online Safety Act. Unlike Epic, where age verification serves to gate content and interactions, Envato uses Trolley primarily to verify creators.
Envato uses Trolley’s identity verification tools during author onboarding. Trolley integrates directly with Envato’s system to collect payout methods and tax information and to verify identities in one streamlined flow. This not only ensures that creators meet the Online Safety Act’s identity standards, but also keeps friction low, reducing drop-off while still ensuring complete and accurate author validation.
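As a simplified illustration of what a single onboarding flow gates on before an author can publish (the field names are hypothetical, not Trolley's or Envato's actual schema):

```typescript
// Hypothetical field names; not Trolley's or Envato's actual schema.
interface AuthorOnboarding {
  payoutMethodConfigured: boolean; // e.g. bank transfer or PayPal on file
  taxInfoSubmitted: boolean;       // e.g. W-8BEN / W-9 collected
  identityVerified: boolean;       // ID checked and fraud checks passed
}

// All three checks complete in one flow before the author can publish.
function canPublish(author: AuthorOnboarding): boolean {
  return (
    author.payoutMethodConfigured &&
    author.taxInfoSubmitted &&
    author.identityVerified
  );
}
```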
What platforms are doing to comply
The scope of the OSA makes one thing clear: compliance at the scale of modern online platforms cannot be managed manually. For platforms with thousands or even millions of users, individually verifying users’ identifying information is an enormous undertaking. At scale, manual checks also raise serious data‑handling and privacy concerns. As a result, companies are turning to a new generation of identity and age‑assurance providers.
Some platforms are turning to facial age estimation and digital ID checking technology that allows people to prove they are under or over a certain age without sharing full identity documents.
Others opt for solutions that tap into online banking infrastructure for age verification. This model is attractive because it builds on trusted institutions and removes the need for platforms to store sensitive ID data themselves. By authenticating through their bank, users can prove their age quickly and securely, while platforms demonstrate compliance with Ofcom’s HEAA standard.
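In outline, the bank-mediated pattern works like an OAuth-style exchange in which the platform receives only a signed age attribute. A minimal sketch, assuming a hypothetical provider endpoint:

```typescript
// Hypothetical endpoint and payload; real providers define their own contracts.
// The platform never sees banking data, only a signed over-18 attribute.
async function verifyAgeViaBank(redirectCode: string): Promise<boolean> {
  const res = await fetch("https://age-provider.example/token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code: redirectCode }), // code from the bank's redirect
  });
  const claim: { over18: boolean } = await res.json();
  return claim.over18; // store only this boolean, nothing else
}
```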
Rather than treating age assurance as a one‑off check, platforms are beginning to integrate it into their broader compliance stack—linking user onboarding, parental consent flows, and content access restrictions. The trend is clear: scalable compliance now depends on specialist partners and interoperable technology, not manual processes. As these providers gain traction, regulators worldwide are beginning to adopt similar standards.
Global trends: How regulation is spreading
The UK’s Online Safety Act is increasingly seen as a blueprint for similar laws elsewhere, with significant momentum across the EU, US, and Australia.
European Union
In the European Union, regulators are layering new requirements on top of the Digital Services Act (DSA). In July 2025, the European Commission published final guidelines on protecting minors under Article 28 of the DSA, which require platforms to implement stronger privacy defaults, age‑appropriate design, limits on profiling‑based ads to minors, and effective age‑assurance processes.
At the same time, the EU Digital Identity Wallet (EUDI) is advancing, with implementing regulations adopted in July 2025 that define how verifiable age attributes can be standardized across member states.
Together, these developments mean platforms serving EU users should expect DSA‑aligned child‑protection controls combined with wallet‑compatible age claims as the default for privacy‑preserving verification.
United States
In the United States, progress is more fragmented.
At the federal level, the Kids Online Safety Act (KOSA), which would impose a duty of care on platforms to protect minors from specified harms, was reintroduced in Congress in May 2025, though it remains under debate.
Meanwhile, over 20 states have enacted some form of age‑verification law covering adult content, creating a patchwork of requirements. Some platforms have even withdrawn from certain states. And in March 2025, Utah went further by requiring app stores themselves to verify age before downloads.
Australia
In Australia, Parliament passed the Online Safety Amendment (Social Media Minimum Age) Act in 2024, establishing a world-first minimum age of 16 for social media accounts. From December 2025, platforms like Facebook, Instagram, TikTok, Snapchat, X, and YouTube must take “reasonable steps” to prevent under-16s from creating or maintaining accounts, with penalties of up to A$49.5 million for breaches.
Note: Educational, messaging, and health-related services (e.g., Google Classroom, Messenger Kids, Kids Helpline) are exempt.
What to expect next
Around the world, regulation of online safety and age assurance is tightening rapidly. In the EU, platforms should expect DSA-style obligations; in the US, a fast-moving, state-driven patchwork; and in Australia, a framework mandating age assurance across multiple content types. And age assurance regulation isn’t limited to these regions. Product teams should build systems that are comprehensive across global regulations, privacy‑preserving, and configurable by jurisdiction, with audit trails to demonstrate effectiveness to regulators.
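One common way to structure that configurability is a per-jurisdiction policy table that drives which check runs and at what age threshold. A simplified sketch (the jurisdictions and methods shown are illustrative, not exhaustive):

```typescript
// Illustrative policy table; jurisdictions and methods are simplified.
type Jurisdiction = "UK" | "EU" | "US-UT" | "AU";

const agePolicy: Record<Jurisdiction, { minAge: number; method: string }> = {
  UK: { minAge: 18, method: "heaa" },           // OSA: HEAA for harmful content
  EU: { minAge: 18, method: "eudi_wallet" },    // DSA guidelines + wallet claims
  "US-UT": { minAge: 18, method: "app_store" }, // Utah: app-store verification
  AU: { minAge: 16, method: "platform_check" }, // social media minimum age
};

function requiredCheck(jurisdiction: Jurisdiction) {
  const policy = agePolicy[jurisdiction];
  // In a real system, log this lookup and its outcome to the audit trail.
  return policy;
}
```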
Meet global compliance standards with Trolley
Managing age assurance and online safety compliance is complex, fragmented, and constantly evolving across jurisdictions. Many platforms are realizing that manual approaches simply don’t scale—and that’s where partners like Trolley come in.
Trolley Trust makes compliance with the Online Safety Act as simple as a single click. After collecting a user’s name and date of birth directly from their ID, we run a suite of fraud checks to confirm authenticity. Designed for global use, Trolley partners with region-specific verifiers so that performance is best-in-class in any jurisdiction.
We standardize data fields so customers can integrate once and receive seamless access to the optimal provider in each region. All results are processed automatically: Trolley handles matching verification responses with user profiles and executes age-verification logic, ensuring platforms stay compliant without manual steps.
Think Trolley could be the solution to your platform’s OSA age-assurance compliance? Get in touch with us today and we’ll walk you through a demo and answer all your questions.