A Fresh Take

Insights on US legal developments

FTC and DOJ Crackdown on Deepfake Fraud May Indicate Future Liability for Corporations

“Deepfakes” are a form of multimedia that uses machine learning or deep learning, subfields of artificial intelligence (AI), to synthetically create or manipulate content. Some experts predict that 90 percent of online content could be synthetically generated within a few years, and corporations can expect an accompanying growth in legal risk related to such uses of AI. Recent announcements by the Federal Trade Commission (FTC or the Commission) and the US Department of Justice (DOJ) highlight the government’s increasing interest in regulating the technology.

Climate of Concern

The FTC and DOJ are responding to a general climate of concern in the US government regarding the potential risks of AI, centered primarily on impersonation fraud. A growing number of abuses illustrate what is driving FTC and DOJ activity. Recently, an employee at a multinational company in Hong Kong fell victim to a business email compromise (BEC) scheme, wiring $25 million to fraudsters after a video call with what appeared to be familiar colleagues and the firm’s CFO; according to police, everyone the worker saw on the call was fake. Romance scammers, meanwhile, are using AI-generated images to create convincing dating profiles and then building relationships to swindle victims out of as much money as possible. In 2022, romance scam victims in the US reported $1.3 billion in losses.

The FTC Targets Developers in Its Newest Rule Proposal

On February 15, 2024, the FTC finalized a new rule regulating deepfakes and floated the possibility of future rules that would create liability for companies whose products allow people to generate video, audio, and image impersonations, if such fakes are used to commit fraud.

  • The new rule (the Impersonation Rule), 16 CFR Part 461, promulgated under the FTC Act, prohibits the impersonation of the government, businesses, and their officials or agents to commit fraud. The rule will take effect 30 days after it is published in the Federal Register. 
  • Simultaneously, however, the FTC issued a notice of proposed rulemaking to amend this new issuance, seeking public comment on whether it should also:
    • add a prohibition on the impersonation of any individual (beyond impersonation of individuals associated with the government or businesses); and
    • extend liability for violations of the Rule to “parties who provide goods and services with knowledge or reason to know that those goods or services will be used in impersonations of the kind that are themselves unlawful under the Rule.”

Before the proposed FTC rule, platforms had invoked Section 230 of the Communications Decency Act, which offers immunity for content that users create with their products and services. It is unclear how far Section 230 would shield AI-product providers in light of the proposed rulemaking, comments on which will be due 60 days after it is published in the Federal Register. AI-product providers should begin to assess the liability scope of the new rule and the governance structures that can help them defend against potential claims arising from the proposed changes.

DOJ Announces Sentencing Enhancement for Use of AI in Offenses

DOJ has also recently targeted the enhanced dangers associated with AI. On February 14, 2024, Deputy Attorney General Lisa Monaco gave a speech at the University of Oxford announcing that, going forward, the DOJ will encourage prosecutors nationwide to seek sentencing enhancements for offenses involving the misuse of AI, including the use of deepfakes to perpetrate fraud schemes and other offenses.

  • The United States Sentencing Guidelines seek to provide a uniform sentencing policy for federal crimes, setting recommended sentencing ranges that federal judges are required to analyze and consider during the sentencing process. Sentencing enhancements, which raise the recommended ranges, are common where the factual circumstances of the offense heighten its danger or target vulnerable victims.
  • Monaco indicated that this approach will “deepen accountability[,] exert deterrence,” and highlighted that the DOJ considers AI to be a tool that, like a gun, can magnify the danger of an offense. In her view, using AI to further criminal conduct deserves higher punishment for retributive purposes, and DOJ should seek to deter its use in offenses through stricter sentencing.

In addition to addressing the increased criminal threats posed by AI through sentencing policy, DOJ may also follow in the FTC’s footsteps and target the means and instrumentalities of such crimes, which could include technology that permits the production of deepfakes, potentially charging AI platforms with aiding and abetting criminal conduct. Those theories would require DOJ to prove a concrete nexus between the platform and the offense in question, as common-law aiding-and-abetting principles require, and would likely demand evidence of either knowledge of or willful blindness to the conduct at issue. DOJ is nonetheless likely to pursue those theories where the facts support them and should be expected to continue exploring novel theories of liability. As noted above, preparing governance structures in light of this new environment is a prudent step for AI companies.

Other Recent Government Actions Regulating AI

  • In August 2023, the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads in advance of the 2024 election.
  • On February 8, 2024, the Federal Communications Commission banned the use of AI-generated voices in robocalls.
  • Also in February 2024, the US House of Representatives launched a bipartisan AI task force, which will produce a comprehensive report with guiding principles, recommendations, and policy proposals.
  • In October 2023, President Biden issued an Executive Order on AI directing various government agencies to establish new standards for AI safety, security, and privacy, among other concerns.

Conclusion

The regulatory landscape around AI is shifting rapidly as the technology evolves, and improvements in AI-created deepfakes have created a danger that government actors are eager to address. Their response could implicate companies in a number of ways as regulators and agencies consider attributing civil and criminal liability to the enabling platforms. Companies offering AI-based goods and services should monitor this rapidly evolving regulatory and legal landscape.

Tags

cybersecurity, data protection, white-collar defense