
DOJ delivers new guidance on AI in compliance programs

10/8/24

Compliance

By: Matthew P. Delfino

On Monday, September 23, 2024, the Department of Justice (“DOJ”) released updated guidelines for prosecutors evaluating corporate compliance programs, including those employing AI systems. This update follows public statements made by Deputy Attorney General Lisa Monaco in March of this year, outlining the DOJ’s position that current laws apply to the use of AI, even without a federal law directly addressing it.

Moreover, as the use of AI in compliance grows, oversight from regulatory bodies should be expected. Last October, the Biden administration issued Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” EO 14110 highlighted the importance of adopting adequate cybersecurity and privacy measures in AI programs. Other federal agencies, including the Office of Management and Budget, have echoed that emphasis on robust AI governance through directives of their own.

To get ahead of the coming regulatory enforcement of AI governance, companies can look to the updated guidance issued by the DOJ on Monday. That guidance prompts prosecutors to ask three “fundamental questions” as they examine a corporation’s compliance program: 

  1. Is the corporation’s compliance program well-designed?
  2. Is the program being applied earnestly and in good faith?
  3. Does the corporation’s compliance program work in practice?

A Well-Designed Compliance Program 

A well-designed compliance program will “detect and prevent the particular types of misconduct most likely to occur” in that specific company’s operations and regulatory environment. The DOJ’s guidance explicitly focuses prosecutors on a company’s use of “new technologies, such as artificial intelligence” in their evaluation of a compliance program. AI is a powerful tool for developing a company’s risk profile and addressing compliance issues, such as detecting fraud and meeting reporting obligations. However, the DOJ’s guidance communicates that a compliance program still requires adequate human oversight. Companies deploying AI in their compliance programs must assess the risks of using AI and take appropriate mitigation steps. In their evaluation, prosecutors will consider whether controls exist to ensure the reliability and proper application of AI. Employee training and accountability monitoring on the use of AI are also crucial elements of a well-designed compliance program.

Furthermore, prosecutors are instructed to investigate how a company manages its relationships with third parties. This is particularly relevant for the use of AI because many companies contract with third-party developers for their AI-powered platforms and solutions. Companies utilizing AI tools would be well served by establishing controls to monitor the compliance and conduct of their third-party providers.

Good-Faith Application of a Compliance Program

A compliance program must be “adequately resourced and empowered to function effectively.” The DOJ’s guidance makes clear that “paper programs” are insufficient. Perhaps the most critical way to ensure a compliance program is adequately resourced is through appropriate staffing. Regarding the role of AI in a compliance program, companies need qualified individuals who thoroughly understand the proper purposes of their AI systems, how they work, and the risks associated with them. These employees should be empowered by upper-level management to enforce and contribute to the company’s policies. Periodically reviewing and updating training programs on the use of AI also helps ensure an earnest application of a compliance program. Ultimately, companies should balance the resources allocated to identifying and exploiting growth opportunities using AI with those used to identify and mitigate the risks AI presents.

A Compliance Program that Works in Practice

The DOJ’s guidance charges prosecutors with determining whether the compliance program was working effectively at the time of the offense. An effective compliance program not only dedicates resources to detecting and investigating misconduct but “evolve[s] over time to address existing and changing compliance risks.” Prosecutors will specifically probe whether a company using AI in its operations or compliance program is continuously monitoring and testing those technologies to confirm they are “functioning as intended and consistent with the company’s code of conduct.” While AI can automate many decisions in a compliance program, the organization must be able to quickly detect and correct decisions made by AI that do not align with the organization’s values or policies.  

Moving Forward with AI in Compliance 

Companies that embrace the remarkable benefits of employing artificial intelligence in their compliance programs should also be aware of the growing interest from regulatory bodies in overseeing the use of these technologies. The recently updated guidelines discussed above reveal that the DOJ expects companies to stay well-informed on the developments, risks, and regulations applicable to the use of AI in compliance. At FMG, we offer meaningful AI guidance to assist your organization in navigating a rapidly changing legal landscape.

For more information or questions, please contact Matthew P. Delfino of FMG’s Data Security, Privacy & Technology Practice Section by email at matthew.delfino@fmglaw.com, or your local FMG attorney.