Today, May 19, 2025, President Trump signed the Take It Down Act into law, starting the one-year clock to May 20, 2026, by which covered platforms must have the required notice-and-takedown mechanisms in place. The Act, which is intended to address the spread of non-consensual intimate imagery (NCII), commonly termed “revenge porn,” establishes criminal liability for individuals who knowingly publish NCII, including “deepfake” NCII created or enhanced by AI. The legislation also requires online services to implement notice-and-takedown mechanisms for “intimate visual depictions” of identifiable individuals that are published without consent.
The new law applies to publicly available websites, apps, and platforms that host user-generated content or regularly make such imagery available. Covered platforms must remove qualifying content within 48 hours of receiving a valid request and make “reasonable efforts” to take down identical content across the platform. The Federal Trade Commission is charged with enforcing the notice-and-takedown requirements under its consumer protection authority.
Critics of the Take It Down Act note that the mandatory 48-hour takedown window and the “good faith” removal safe harbor could incentivize over-compliance and censorship, rather than careful content moderation aimed at removing violative content while preserving free speech rights. Critics also note that, unlike other notice-and-takedown statutes such as the Digital Millennium Copyright Act (DMCA), the law lacks safeguards against erroneous, fraudulent, or unfounded notices, such as counter-notice rights or penalties for knowing material misrepresentations in notices.
Overview
Covered Entities: The Act applies to a broad range of online services that serve the public and either (1) primarily host user-generated content or (2) in the ordinary course of business, publish or make available non-consensual intimate imagery. Exemptions apply to broadband internet providers, standalone email services, and platforms offering preselected content with only incidental user interaction, unless the service regularly handles the covered imagery. The law does not explicitly provide an exemption for end-to-end encrypted (E2EE) services, such as messaging or cloud storage services.
Covered Content: The law applies broadly to intimate visual depictions shared without consent, including synthetic and AI-generated imagery. The Act expressly reaches AI-generated content that appears indistinguishable from authentic imagery, reflecting growing concern over deepfakes and manipulated media. It applies to images of identifiable individuals, whether adults or minors, depicting nudity or sexual conduct. Exceptions are provided for legal, medical, or educational use.
Notice-and-Takedown Requirements: Covered platforms must provide a public mechanism for individuals, or guardians of minors, to request removal of covered imagery. A valid request must include a signature, information sufficient to locate the content, a statement that the depiction was published without consent, and contact information.
Upon receipt of a valid request, the platform must remove the content within 48 hours and take reasonable steps to prevent reuploads, including by removing identical copies. The Act also includes a limited safe harbor for platforms that act in good faith to meet their notice-and-takedown obligations.
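By way of illustration only, the sketch below models an intake check a platform might run against these requirements. The field names, the validation logic, and the deadline calculation are assumptions for illustration, not a statement of what the statute or the FTC will ultimately require.

```python
# Illustrative sketch only (hypothetical field and function names): checking that
# an incoming removal request contains the elements described above, and computing
# the 48-hour removal deadline from the time of receipt.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # removal window after receipt of a valid request

@dataclass
class RemovalRequest:
    signature: str                 # physical or electronic signature of the requester
    content_locators: list[str]    # URLs or other information sufficient to locate the imagery
    non_consent_statement: str     # statement that the depiction was published without consent
    contact_info: str              # contact details for the requester
    received_at: datetime          # when the platform received the request

def is_facially_valid(req: RemovalRequest) -> bool:
    """Return True if every required element is present (non-empty)."""
    return all([
        req.signature.strip(),
        req.content_locators,
        req.non_consent_statement.strip(),
        req.contact_info.strip(),
    ])

def removal_deadline(req: RemovalRequest) -> datetime:
    """Latest time by which the reported content must be removed: receipt + 48 hours."""
    return req.received_at + REMOVAL_WINDOW

# Example usage
req = RemovalRequest(
    signature="J. Doe",
    content_locators=["https://example.com/post/123"],
    non_consent_statement="I did not consent to publication of this depiction.",
    contact_info="jdoe@example.com",
    received_at=datetime.now(timezone.utc),
)
if is_facially_valid(req):
    print("Remove by:", removal_deadline(req).isoformat())
```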
Penalties: Noncompliance may be treated as an unfair or deceptive act under Section 5 of the FTC Act. Covered platforms found noncompliant may face:
- Civil penalties, including fines;
- Injunctive relief or mandated changes;
- Ongoing compliance monitoring under FTC orders.
Section 5 enforcement often involves extensive investigations and long-term oversight. In assessing compliance, the FTC is expected to examine whether takedown mechanisms are in place and whether those mechanisms are effective and responsive to the statutory requirements.
Challenges and Areas of Uncertainty
While the policy goal of addressing NCII, including synthetic or “deepfake” NCII, received widespread bipartisan support, critics note several legal, compliance, and interpretive challenges associated with the law, including:
- The statute changes the incentives for platform moderation of NCII. The mandatory 48-hour takedown requirement is triggered by receipt of a statutorily complete notice, not by a platform’s own determination about whether an image is NCII. The statute also heavily incentivizes platforms to take down reported content, which allows them to benefit from the safe harbor for good-faith compliance, rather than to devote resources to reviewing incoming notices and assessing whether the underlying content is actually NCII.
- The statute does not explicitly provide safeguards against unfounded or incorrect notices, unlike other notice-and-takedown regimes such as the DMCA. For example, the DMCA allows a content creator to submit a “counter-notice,” requires notices and counter-notices to be filed under penalty of perjury, and establishes penalties for knowing material misrepresentations. The absence of a counter-notice process, perjury standard, or penalties for false claims leaves platforms with difficult questions about whether and how to balance timely takedowns with protections for lawful expression.
- The inclusion of AI-generated content may pose challenges given current technological limitations. The law requires platforms to make reasonable efforts to take down identical content, but AI tools could enable reposting at scale or produce copies with subtle differences or altered identifiers, making functionally identical content difficult to identify and remove with automated tools (see the illustrative sketch following this list).
- The statute likely requires significant net-new operational and technological resourcing. The 48-hour removal requirement and the obligation to take down copies of reported violative images may demand significant operational resources and new technological builds, especially for platforms with limited moderation infrastructure.
- End-to-end encrypted services face significant uncertainty about the scope of their obligations under the statute. While this issue was discussed in the legislative process, Congress did not explicitly exclude encrypted services from the Take It Down Act, nor did it limit the applicability of the Act’s requirements to shared or publicly communicated content.
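To illustrate the duplicate-detection point above: exact cryptographic hashes change completely when even one byte of a file changes, so exact matching alone misses near-identical copies, while perceptual-hash comparisons tolerate small differences. The sketch below is illustrative only; the hash values, threshold, and function names are hypothetical, and the Act does not prescribe any particular matching technique.

```python
# Illustrative sketch only (hypothetical values and names): why exact-match
# deduplication misses near-identical copies, and how a perceptual-hash
# comparison differs.
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Cryptographic hash: changing a single byte yields a completely different value."""
    return hashlib.sha256(data).hexdigest()

original = b"...image bytes..."
altered = b"...image bytes..,"  # a trivially perturbed copy
print(exact_fingerprint(original) == exact_fingerprint(altered))  # False: exact match fails

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

# Hypothetical 64-bit perceptual hashes of an image and a slightly altered copy.
phash_original = 0xF0E1D2C3B4A59687
phash_altered = 0xF0E1D2C3B4A59686  # differs in one bit

MATCH_THRESHOLD = 10  # tunable; small distances suggest likely visual duplicates
print(hamming_distance(phash_original, phash_altered) <= MATCH_THRESHOLD)  # True: flagged as near-duplicate
```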
What Companies Should Do Now
Companies evaluating their exposure under the Act should begin to consider the details relevant to designing and implementing a compliance program, including:
- Where user-generated content that may constitute “intimate visual depictions” surfaces on the platform, and how the platform currently facilitates reporting of such content;
- The platform’s existing capabilities to identify and remove duplicates of violative content;
- Current operational turnaround times for actioning takedown requests, and how those turnaround times are likely to change if and when takedown requests are submitted at scale;
- Existing training materials and protocols for the reporting and removal of NCII, including deepfake NCII; and
- Documentation, retention, and recordkeeping practices around sensitive takedown request data, correspondence with users, hashes of identified violative content, and similar records (see the recordkeeping sketch following this list).
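As a minimal recordkeeping sketch, assuming hypothetical field names and a simple append-only log format, the example below shows the kind of structured audit trail a platform might retain to demonstrate timely handling of requests. It is not a statement of what the Act or the FTC will require.

```python
# Minimal recordkeeping sketch (all field names hypothetical): retaining when a
# request was received and actioned, hashes of removed content, and references
# to related correspondence, appended as JSON lines to an audit log.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class TakedownRecord:
    request_id: str
    received_at: str                                          # ISO 8601 timestamp of receipt
    actioned_at: str                                          # ISO 8601 timestamp of removal
    content_hashes: list = field(default_factory=list)        # hashes of removed items
    correspondence_refs: list = field(default_factory=list)   # ticket or email identifiers

def append_record(path: str, record: TakedownRecord) -> None:
    """Append one record as a JSON line to an audit log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example usage
now = datetime.now(timezone.utc).isoformat()
append_record("takedown_audit.jsonl", TakedownRecord(
    request_id="REQ-0001",
    received_at=now,
    actioned_at=now,
    content_hashes=["sha256:..."],
    correspondence_refs=["TICKET-42"],
))
```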
While the Take It Down Act represents the most prescriptive federal framework for NCII to date, its core objectives are broadly aligned with existing regulatory regimes in the UK and EU, including the UK’s Online Safety Act and the EU’s Digital Services Act, which impose related (albeit less specific) obligations concerning non-consensual and AI-generated content. We also expect that at least some jurisdictions, including the UK, may consider imposing more prescriptive requirements aligned with the Take It Down Act in the near term. In the US, as the FTC and other agencies issue implementing guidance, companies should remain engaged in the rulemaking process, including by highlighting operational challenges and interpretive difficulties.