Deep Fakes: An overview of civil and criminal remedies



What are deepfakes?

If you’ve heard about doctored photos and videos that look real, you’ve probably asked yourself: “What are deepfakes?” Deepfakes are videos, images, and audio created with computer software that can be nearly indistinguishable from the real thing. Learning how to recognize deepfakes, and when they might be used as part of a scam, is the first step in protecting your digital existence.

How do deepfakes work?

Deepfakes combine existing images, video, or audio of a person using AI-powered deep-learning software that manipulates this material into new, fake pictures, videos, and audio recordings. The software is fed images, video, and voice clips of a person and processes them to “learn” what makes that person unique (similar to training facial recognition software). Deepfake technology then applies that information to other clips, substituting one person for another, or uses it as the basis of entirely new clips.
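The learn-then-substitute pipeline described above is often built on a shared encoder with one decoder per identity: the encoder extracts person-agnostic features (expression, pose), and each decoder reconstructs one specific face from those features. The toy Python sketch below illustrates only that data flow; all class and field names are hypothetical, and a real deepfake system uses deep convolutional networks trained on thousands of frames, not dictionaries.

```python
# Conceptual sketch (not a real model): shared encoder, per-identity decoders.
class FaceSwapModel:
    def __init__(self):
        # Each identity gets its own "decoder", trained only on that person.
        self.decoders = {}

    def encode(self, frame):
        # Stand-in for a neural encoder: keep identity-agnostic features
        # (expression, pose) and discard whose face it is.
        return {"expression": frame["expression"], "pose": frame["pose"]}

    def add_identity(self, name):
        # Stand-in for a trained decoder: rebuild a frame as this person.
        self.decoders[name] = lambda latent, who=name: {"face": who, **latent}

    def swap(self, frame, target):
        # The swap: encode person A's frame, decode with person B's decoder,
        # so B's face carries A's expression and pose.
        return self.decoders[target](self.encode(frame))

model = FaceSwapModel()
model.add_identity("alice")
model.add_identity("bob")
frame_of_alice = {"face": "alice", "expression": "smiling", "pose": "left"}
fake = model.swap(frame_of_alice, "bob")
# fake -> {"face": "bob", "expression": "smiling", "pose": "left"}
```

The key point the sketch captures is why training data matters: the more clips of the target person the decoder has "seen," the more convincing the substituted face becomes.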

Why are deepfakes significant?

Deep fakes and fraud
A finance worker at the Hong Kong office of a multinational firm was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer on a video conference call. In the elaborate scam, the worker was duped into attending a video call with what he believed were several other members of staff, all of whom were in fact deepfake recreations. Beyond direct attacks like this, companies are increasingly worried about other ways deepfake photos, videos, or speeches of their executives could be used in fraud schemes.

Romance scammers use face-swapping tech in video chats, all to swindle love-seekers online
With a laptop or a couple of smartphones, con artists transform their looks and voices entirely using off-the-shelf AI tools. In real time, they become someone else entirely, with AI mirroring every expression they make as they chat on a video call. It all appears quite real.


Amanda Aguilar was stunned to see a deepfake of herself used in a catfishing scheme. (NBC4 Washington, Nov. 26, 2024)
Pornography
Deepfake porn is a type of synthetic pornography created by altering existing photographs or video, applying deepfake technology to the images of the participants. Deepfake pornography has sparked controversy because it involves making and sharing realistic videos of non-consenting individuals and is sometimes used for revenge porn. Efforts are being made to combat these harms through legislation and technology-based solutions.

Laws about deep fakes

Nationwide
The TAKE IT DOWN Act is a U.S. law signed on May 19, 2025, that criminalizes the sharing of non-consensual intimate images, including deepfakes, and requires online platforms to remove such content within 48 hours of a victim's request. It aims to protect victims of digital exploitation and holds perpetrators accountable for their actions.

Federal Trade Commission Act (FTC Act)
The FTC Act (§ 5) prohibits “unfair or deceptive acts or practices” in commerce. This can cover service providers overpromising their AI capabilities as well as the use of deepfakes for malicious deception.

The FTC can take enforcement actions against entities or individuals using deepfakes or AI-generated media to engage in fraud, false advertising, or scams. 

US Criminal Code (18 U.S.C.)
Fraud and Computer Misuse: 18 U.S.C. § 1030 (Computer Fraud and Abuse Act, or CFAA) is a federal law that prohibits computer fraud and abuse. It criminalizes unauthorized computer access, which can extend to the malicious creation or deployment of deepfake content used for fraud. The CFAA aims to prevent computer crime while balancing the interests of the federal government and the states.
18 U.S.C. § 1343: The wire fraud statute can also apply to deepfake-based scams, such as fake audio or video used to defraud victims.

Defamation and Non-Consensual Content: Deepfakes used in harassment or defamation may fall under laws addressing cyberstalking and harassment, particularly statutes like 18 U.S.C. § 2261A (stalking).

National Defense Authorization Act (NDAA)
In 2024, updates to the NDAA outlined measures to address deepfake technology in cybersecurity and military operations, with a focus on its potential use in misinformation campaigns by adversarial nations. The Act includes a national deepfake detection program, election-security provisions, and criminalization of malicious deepfakes, and it formalizes the NIST AI Risk Management Framework.

State By State

Alabama
HB161 criminalizes two acts: “(1) distributing a private image of an individual without their consent, and (2) creating a private image of an individual without their consent.”

HB171 makes it “illegal to distribute AI-generated deceptive media if the distributor knows it falsely represents a person and intends to influence an election or harm a candidate’s electoral prospects.”

Arizona
HB 2394 makes the “digital impersonation of a candidate for public or political party office, without consent, unlawful.”

Under the Election Communications and Deep Fakes Prohibition, a person may not create and distribute a synthetic media message that they know is a deceptive or fraudulent depiction of a candidate on the ballot unless the media includes a clear disclosure that it contains content generated by artificial intelligence.

California
California has enacted multiple laws addressing transparency in AI use, creative rights, election-related threats, and privacy harms related to deepfakes. The Defending Democracy from Deepfake Deception Act of 2024, introduced by Senator Aisha Wahab, makes it a crime to create and distribute deepfake content intended to cause serious emotional distress to the individual depicted, and requires AI-generated content to include watermarks. Additionally, California's existing privacy laws, such as the California Consumer Privacy Act and the California Invasion of Privacy Act, are being applied to AI and deepfake technologies.

The California AI Transparency Act requires businesses that create generative AI systems with over 1 million monthly users to provide a publicly accessible AI detection tool. This tool allows consumers to determine if content was generated by AI, promoting transparency and consumer protection in interactions involving AI-generated content.

AB-2839 makes it illegal to knowingly share fake or altered videos, images, or audio in ads or election messages if the goal is to mislead voters or raise money for a campaign. If someone breaks this rule, affected parties can take legal action to stop the content and seek damages.

AB-2355 regulates AI in political ads. Specific rules apply if an image, video, or audio clip is completely generated or significantly altered by AI in a way that could mislead viewers relative to the original.

Colorado

HB24-1147, or Candidate Election Deepfake Disclosures, addresses the implications of using deepfake technology in communications concerning candidates for elective office. It outlines disclosure requirements for the use of such technology, establishes enforcement mechanisms, and introduces a private cause of action for candidates affected by deepfake misuse.

Florida
CS/HB 919 or Artificial Intelligence Use in Political Advertising mandates that specific political advertisements, electioneering communications, and miscellaneous advertisements must include a designated disclaimer. It outlines the requirements for this disclaimer, establishes both criminal and civil penalties for non-compliance, and permits any individual to file relevant complaints. Additionally, the regulation provides for expedited hearings to address these complaints.

Brooke's Law (House Bill 1161) recently became law. It aims to protect victims of digital sexual abuse, specifically those impacted by "deepfake" imagery, by requiring "covered platforms," such as websites and online services, to remove "altered sexual depictions and copies of such depictions" within 48 hours of a request from the victim. The law also does the following:
  • Establishes a duty of care for platforms: It holds platforms accountable by establishing a "basic duty of care" for those that profit from user-generated content.
  • Provides remedies for non-compliance: Platforms that fail to comply with the law may be subject to lawsuits.
  • Effective date: The law takes effect on December 31, 2025.
The law is named after Brooke Curry, the daughter of former Jacksonville Mayor Lenny Curry, who was a victim of AI-generated explicit imagery.


Hawaii
SB 2687, or Elections; Materially Deceptive Media; Artificial Intelligence; Deepfake Technology; Prohibition; Penalty, prohibits any individual, with specific exceptions, from distributing or entering into agreements to distribute materially deceptive media from the first working day of February in each even-numbered year until the next general election, where the distribution is made with reckless disregard for the potential harm to a candidate’s reputation or electoral prospects, or for the impact on voter behavior. The provision also establishes criminal penalties and remedies for affected parties.

Idaho
HB 575, Disclosing Explicit Synthetic Media, states that an individual is guilty of disclosing explicit synthetic media if they knowingly disclose such media, know or reasonably should know that an identifiable person depicted did not consent to the disclosure, and the disclosure is likely to cause substantial emotional distress to that person. The bill outlines criminal penalties for violations and specifies certain exceptions.

HB 664, or the Freedom From AI-Rigged (FAIR) Elections Act, allows a candidate whose actions or speech are misrepresented through synthetic media in election-related communications to seek injunctive or other equitable relief to prevent publication of the media. The candidate may also pursue a damages claim for the deceptive representation.

Illinois
HB 4762 or The Digital Voice and Likeness Protection Act is designed to safeguard individuals from unauthorized use of their digital replicas. It addresses the concern of misuse of digital likenesses created through technologies like generative AI.

HB 4875 is an amendment to the Right of Publicity Act that enhances enforcement rights and remedies specifically for recording artists. It establishes liability for any individual who materially contributes to, induces, or facilitates a violation of the act by another party, provided that the individual had actual knowledge that the work contains an unauthorized digital replica.

Indiana
HB 1133 or Digitally Altered Media in Elections act defines fabricated media and stipulates that if a campaign communication features fabricated media depicting a candidate, the individual or entity that financed the communication must include a disclaimer that is distinct from any other disclaimers. Furthermore, a candidate who is portrayed in fabricated media within a campaign communication that lacks the required disclaimer has the right to initiate a civil action.

Louisiana
Louisiana's RS 14:73.13 primarily addresses unlawful deepfakes involving minors and sexually explicit material. It is illegal to create, possess, advertise, distribute, exhibit, exchange, promote, or sell deepfakes depicting a minor, or another person without their consent, engaging in sexual conduct.
Penalties for these offenses can be severe. 

Creating or possessing such a deepfake depicting a minor can result in imprisonment at hard labor for five to twenty years, a fine of up to $10,000, or both, with a minimum of five years served without parole. Advertising or distributing these deepfakes can lead to imprisonment at hard labor for ten to thirty years, a fine up to $50,000, or both, with a minimum of ten years served without parole if a minor is involved. 

Louisiana law also prohibits the unlawful use of a digital image and likeness, which is using an individual's digital image and likeness for commercial or non-commercial purposes without their written consent. Violations can result in a fine of up to $5,000 per violation.

Maryland
HB 333, Election Misinformation and Election Disinformation, creates a portal on the election board's website for the public to report election misinformation and disinformation. The board will periodically review submissions and, if necessary, provide corrective information or refer cases to the State Prosecutor. The bill also defines “influence” for the legal provisions prohibiting improper voting influence.

Michigan
HB 5141 requires that if an individual, committee, or entity creates, publishes, or distributes a qualified political advertisement, it must clearly state that the advertisement was generated wholly or largely by AI, if applicable. This statement must be presented in a clear and conspicuous manner, meeting specified requirements.

Minnesota
H 4772 addresses elections by implementing policy and technical changes related to election administration, campaign finance, lobbying, and census redistricting. It establishes the Minnesota Voting Rights Act and modifies the offense of using deepfakes to influence elections.

Mississippi
S 2577, Wrongful Dissemination of Digitizations, imposes criminal penalties for the unauthorized sharing of digitizations.

New Hampshire
HB 1432 addresses the crime of fraudulent use of deepfakes and outlines associated penalties. It establishes a legal cause of action for such fraudulent use and prohibits the registration of lobbyists found guilty of using deepfakes in specific cases. A person will be charged with a Class B felony if they knowingly create, distribute, or present a deepfake—defined as any video, audio, or other media likeness of an identifiable individual—intended for certain purposes.

HB 1688 addresses the use of AI by state agencies, prohibiting them from manipulating, discriminating against, or surveilling the public.

New Mexico
HB 182, which amends the Campaign Reporting Act, mandates that any advertisement containing materially deceptive media, including AI-generated content, must include a disclaimer. It also establishes civil and criminal penalties for distributing, or agreeing to distribute, such deceptive media to mislead voters.

New York
S 9678 or Materially Deceptive Media in Political Communications act is a law that addresses deceptive media in political communications. It requires any individual or organization that distributes political content containing deceptive media, and is aware of its misleading nature, to disclose this information. There are exceptions for bona fide news entities distributing such media for specific purposes.

Tennessee
Tennessee passed the ELVIS Act (Ensuring Likeness Voice and Image Security) to protect musicians' voices from AI-generated impersonation, marking a broader use of deepfake laws beyond elections and sexual content. Enacted on March 21, 2024, the act protects individuals, especially artists and public figures, from unauthorized use of their voice, likeness, and image via AI and deepfakes.

ELVIS expands the existing "right of publicity" to include protection of an individual's "voice" from AI-generated impersonations. Using AI to replicate an artist's voice or likeness without consent is illegal. Violations can result in civil lawsuits for damages and criminal charges, including a Class A misdemeanor. The act also establishes liability for those who create tools specifically designed to produce unauthorized AI-generated content. 

The Preventing Deepfake Images Act (HB 1299/SB 1346), passed in April 2025, creates civil and criminal actions for individuals whose intimate digital depictions are disclosed without their consent. The act defines "deepfakes" as artificially generated or manipulated media and an "intimate digital depiction" as a realistic digital image depicting nudity or sexual conduct. Victims can sue for various damages, including actual and liquidated damages. Intentionally disclosing intimate digital depictions with harmful intent can lead to felony charges. The act includes protections for online platforms and exceptions for good-faith disclosures, and it takes effect on July 1, 2025.

Texas
Texas has enacted laws to regulate deepfakes, particularly targeting those created to harm political candidates or influence elections, as well as non-consensual sexually explicit deepfakes. Texas Penal Code 21.165 criminalizes the creation and distribution of such content, with specific provisions to protect individuals from malicious use of this technology.

Utah
SB 131, or Information Technology Act Amendments, applies to audio or visual communications aimed at influencing votes for or against candidates or ballot propositions in state elections. It mandates that audio communications using synthetic media clearly state specific words at the beginning and end, and that visual communications display those words during any segment that includes synthetic media.

Wisconsin
AB 684, or the Artificial Intelligence Content Disclosure Act, addresses the disclosure of AI-generated content in political ads, grants rule-making authority, and establishes penalties.