The Administration’s AI Policy Should Prioritize Safety for Women and Girls
from Women Around the World and Women and Foreign Policy Program

U.S. President Donald Trump shows the signed bill during the ceremony for the Take It Down Act, in the Rose Garden of the White House in Washington, D.C., May 19, 2025. REUTERS/Kevin Lamarque

Cailin Crockett is a former National Security Council director and White House Gender Policy Council senior advisor. She is a visiting scholar at American University and serves on the Advisory Committee of the Cyber Civil Rights Initiative.

August 5, 2025 4:15 pm (EST)

Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.

In the recently published AI Action Plan, the White House outlines a series of directives to federal agencies aimed at ensuring America “wins” the artificial intelligence (AI) race. Notably absent from the plan and the three executive orders that accompany it is any recognition of the societal harms posed by the rapidly developing technology, including a concrete plan to address the disproportionate impact on women and children targeted by deepfakes. This is an important omission, and a surprising one given that just two months earlier, President Donald Trump signed into law the Take It Down Act. The Act represents a rare bipartisan agreement to address the weaponization of AI tools to generate image-based sexual abuse. Publicly backed by First Lady Melania Trump, the law criminalizes real and synthetic non-consensual intimate images of adults and minors, and requires covered platforms to remove images published without the subject’s consent within forty-eight hours. These first steps to rein in one of the most visible harms posed by AI would be immeasurably strengthened by guidance to federal agencies on the law’s implementation, and by adding safety measures to an action plan that casts regulation as the enemy of innovation and embraces open-source and open-weight models without guarding against their risks.

Synthetic Images, Real Harm 

As testimony leading to passage of the new law makes clear, AI-generated, non-consensual images create real harm. Survivors can experience a range of impacts, including depression, anxiety, post-traumatic stress disorder, self-harm, and even suicide. Although women and girls are the majority of victims (one study suggests 96 percent of deepfakes are non-consensual porn, and a more recent report estimates 99 percent of deepfake porn targets women), adolescent boys have been targeted by financial sextortion schemes, aided by AI-powered “nudify” apps used by predators to coerce and harass them into paying money. As a senior advisor in the White House Gender Policy Council, I heard from hundreds of survivors of online harassment and abuse, including courageous young women like Francesca Mani. At fourteen, Mani was the subject of a deepfake nude image circulated by a classmate without her consent; in May, she stood next to Trump at the Rose Garden signing of Take It Down. In the president’s own words at the bill signing, “With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will…it's gone on at levels that nobody has ever seen before.”

More on:

Artificial Intelligence (AI)

Technology and Innovation

Trump

Sexual Violence

Inequality

The president is right to blame AI for the growth in image-based sexual abuse. According to the National Center for Missing and Exploited Children (NCMEC), reports to its cyber tipline for AI-generated child sexual abuse material (CSAM) have increased more than sevenfold in the first half of the year, compared to reporting from all of 2024. Open-source and open-weight AI models that can generate text, image, and voice outputs have become easier than ever to access, and can be downloaded and fine-tuned specifically to create image-based sexual abuse. That accessibility has spurred the rise of “nudification” apps and sites dedicated to commercializing non-consensual content, which generate $36 million a year, according to new research. Instead of treating open-source and open-weight models with the caution they merit (others have warned of their national security implications), the Trump administration is advocating for more models to be distributed in this way, citing their “unique value for innovation.” At a time when AI-generated image-based sexual abuse is on the rise, doing so is like pouring gasoline on a fire.

What the Administration Can Do: Take It Down Act Implementation 

There are several actions the White House can take to advance the implementation of the Take It Down Act and mitigate this growing harm, which one in eight teens report knowing someone who has experienced. Now that image-based sexual abuse is a federal offense, more survivors will come forward seeking help, and the need for dedicated services, like the National Image Abuse Helpline, funded by a grant from the Department of Justice and operated by the Cyber Civil Rights Initiative, will only continue to grow. The Trump administration should ensure the continuation of this program, as well as the cyber tipline and other services provided by NCMEC for child victims of image-based sexual abuse, so that survivors can access compassionate, expert care.

As technology continues to evolve, law enforcement agencies and attorneys will need further training to effectively investigate and prosecute cybercrimes like image-based sexual abuse. Rather than scale back funding for law enforcement to address technology-facilitated gender-based violence, as President Trump’s 2026 budget proposes to do, the Trump administration should call for more resources to equip police and prosecutors to hold offenders accountable. At a minimum, the White House should uphold grant funding for the National Resource Center on Cybercrimes Against Individuals, which Congress authorized under the bipartisan 2022 reauthorization of the Violence Against Women Act.

Engaging the Private Sector on Responsible AI 

There is also more to be done to prevent the misuse of AI to generate image-based sexual abuse in the first place, and that must involve the private sector. Here, the Trump administration could strategically deploy federal agencies’ procurement power to incentivize responsible AI practices, such as requiring testing and safeguards against the generation of image-based sexual abuse as a prerequisite for vendors seeking government contracts. The Biden-Harris administration did just that under the umbrella of the executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which the Trump administration rescinded on day one.

Further, as covered platforms have a year until the law’s forty-eight-hour takedown requirements go into effect, the Trump administration could call on companies to improve transparency around their policies for content removal, reducing barriers to reporting abuse. Moreover, platforms that haven’t yet invested in tools like StopNCII.org and TakeItDown.org, which use image-hashing technology to detect and prevent the sharing of non-consensual photos and videos, should opt in and help expand the reach of these resources for survivors to take back control of their images. The First Lady’s advocacy for Take It Down is noteworthy and could be channeled towards these goals.  

The White House would do well to assert U.S. leadership in an affirmative agenda for AI safety that preserves privacy and individual rights over this rapidly evolving technology, as the European Union has recently done. By framing regulation as the antithesis of innovation, a concerning position that echoes the earlier Silicon Valley ethos of “move fast and break things,” the administration cedes policy and power to companies rather than incentivizing them to innovate for safety and privacy-by-design. The likely result is a worrisome acceleration of the serious risks that AI models, designed by developers who lack the motivation and lived experience to prioritize the safety of vulnerable communities, already pose to women and girls, at a time when generative AI’s capacity to create image-based sexual abuse grows more pronounced by the day.

In signing the Take It Down Act, President Trump announced that his administration “will not tolerate online sexual exploitation.” To make good on that pledge, it needs to ensure that federal resources remain in place to provide survivors with specialized support and to hold offenders accountable. The president has bipartisan support to do so, and survivors and their families are watching. Winning the AI race should not come at the expense of safety for women and girls.

 

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.