January 31, 2025
Companies are racing to harness the power of artificial intelligence (AI). Insurers are using AI to reduce administrative overhead and assist with claims processing. Software developers are incorporating AI to add value to their enterprise products. Even law firms are leaning into AI for research and writing, client correspondence, and document review.
But in the race to stay competitive, companies must also be careful to avoid exposure to civil and criminal penalties. The risks related to cybersecurity, confidentiality, and contractual requirements have been widely covered, and every company using AI should be aware of them. Less discussed is that companies whose clients include government agencies, or whose revenue flows from government contracts, could also face exposure under the False Claims Act (FCA).
To date, most federal enforcement actions involving AI have fit squarely within the FCA’s traditional use: allegations that a contractor misrepresented the product it sold to the government. In these cases, AI is part of the software or technology product sold to the government, usually for military defense or espionage purposes. These cases are nonetheless a stark reminder that while some executives may consider misleading statements about their software’s capabilities to be ambitious advertising, the government may later cast those representations as fraud.
The government’s scope of enforcement, however, will extend beyond traditional FCA cases involving AI products. In September 2024, the Department of Justice announced an update to its Evaluation of Corporate Compliance Programs (ECCP). Federal prosecutors use this guidance to assess the effectiveness of a corporation’s compliance program at the time of an offense, and that assessment can affect the form of resolution, monetary penalties, and any imposed compliance obligations. Under the updated guidance, prosecutors may ask:
- Is management of risks related to use of AI and other new technologies integrated into broader enterprise risk management strategies?
- How is the company curbing any potential negative or unintended consequences resulting from the use of technologies, both in its commercial business and in its compliance program?
- Do controls exist to ensure that the technology is used only for its intended purposes?
The ECCP is a clear signal from the Department of Justice that corporate AI programs are under scrutiny. For example, the government has pursued enforcement actions against health care providers who use algorithms or AI to suggest diagnosis codes or treatments when those suggestions result in medically unnecessary claims submitted to Medicare or Medicaid. The government’s theory, that false claims can result from inaccurate suggestions made by algorithms or other software despite the intervening medical judgment of a physician, was accepted by one district court in United States ex rel. Osinek v. Permanente Medical Group but has not otherwise been tested in the courts.
Other medical providers have run afoul of regulators both by using AI that suggested diagnostic codes later determined to be improper and by failing to act on AI suggestions encouraging them to revisit prior diagnoses. In essence, the government expects companies to follow AI recommendations that would reduce the value of Medicare claims when appropriate, but may fault them for following AI recommendations that result in higher bills to the government. As AI becomes further integrated into medical practices, workflows, and electronic health records, companies must be extremely careful to avoid any suggestion that the software has improperly influenced the medical judgment of treating physicians.
It is no secret that AI is here to stay. Companies with government contracts or other sources of government revenue should carefully weigh these risks before implementing AI products. Ensuring at the outset that AI products are deployed subject to controls like those recommended by the ECCP can help avoid the cost and burden of subsequent government scrutiny.
Arthur Fritzinger (LAW ’10) is a member at Cozen O’Connor who specializes in white collar defense and focuses his practice primarily on False Claims Act defense.
James Mahady (LAW ’24) is a litigation associate at Cozen O’Connor.