AI Governance in the Crosshairs: Navigating the DOJ’s Updated Compliance Guidance

How the DOJ’s Updated Compliance Guidance Impacts AI Governance
In the latest revision to the Evaluation of Corporate Compliance Programs (ECCP), the U.S. Department of Justice (DOJ) made one thing clear: Companies must think critically and proactively about the risks associated with emerging technologies, especially artificial intelligence (AI). It's no longer enough to ride the wave of innovation; the DOJ now expects organizations to demonstrate that they are actively managing AI-related risks as part of a mature compliance program.
This shift reflects a growing concern that rapid adoption of AI could outpace companies’ ability to govern it responsibly. For commercial businesses integrating AI into operations, this guidance serves as both an alert and a roadmap.
DOJ’s Criteria for Assessing AI Risk Management
Under the updated guidance, prosecutors are encouraged to consider several key areas when evaluating a company's compliance efforts:
- AI risk assessments: Companies must evaluate how their use of AI could create legal, ethical or operational risks, including misuse of data, biased decision-making or a lack of transparency in automated processes.
- Integration into enterprise risk management: AI oversight can’t be siloed. It should be woven into the broader risk management strategy, with regular reporting and cross-functional accountability.
- Governance structures: Companies should establish clear roles and responsibilities around AI usage, specifying who builds, monitors and approves AI systems, along with guardrails to ensure responsible development and deployment.
- Training and culture: All employees, not just data scientists, need to understand how AI is being used in the business and what to do if they spot something that doesn’t look right. Training and communication are critical in this area.
- Access to data and tools: Compliance teams should be equipped to monitor AI activity just as they would any other business unit. The DOJ emphasizes parity across departments, especially when it comes to accessing internal data. (A minimal sketch of what such monitoring might look like follows this list.)
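As one illustration of what that kind of monitoring parity could look like in practice, below is a minimal sketch, in Python, of an audit log for automated decisions. Every name in it (AuditRecord, log_decision, the field layout) is a hypothetical assumption made for illustration; neither the ECCP nor the DOJ prescribes any particular tooling.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical audit record for a single automated decision. Fields are
# chosen to support later compliance review: which model ran, on what
# inputs (hashed, to avoid storing sensitive data verbatim), what it
# decided, and who is accountable for the use case.
@dataclass
class AuditRecord:
    timestamp: str   # when the decision was made (UTC, ISO 8601)
    model_id: str    # which model/version produced the output
    input_hash: str  # SHA-256 of the input payload
    decision: str    # the automated outcome
    owner: str       # accountable business owner for this use case

def log_decision(model_id: str, payload: dict, decision: str, owner: str) -> AuditRecord:
    """Create an audit record and append it to a JSON-lines log file."""
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_id=model_id,
        input_hash=hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
        decision=decision,
        owner=owner,
    )
    with open("ai_audit_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: record a hypothetical credit-screening decision.
log_decision(
    model_id="credit-screen-v2",
    payload={"applicant_id": "A-1042", "score_inputs": [0.72, 0.15]},
    decision="refer_to_human_review",
    owner="consumer.lending@company.example",
)
```

A log in roughly this shape gives compliance teams the same visibility into automated decisions that they would expect over any other business process, and it can be queried during an internal review or regulatory inquiry.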
What the Guidance Means for Commercial Organizations
For many commercial organizations, this guidance may feel ahead of where they currently stand: AI may be in the early stages of deployment, or managed by IT and operations rather than by legal or compliance functions. That gap is exactly why now is the time to act. Organizations can:
- Draft AI-specific governance policies: Treat AI like any other high-risk technology. Define acceptable use, assign ownership and document oversight protocols.
- Build a cross-functional risk review process: Bring compliance, IT, legal and business leaders together to evaluate where AI is being used and what risks it may pose; a simple sketch of such a use-case register follows this list.
- Develop a response plan for AI misuse: Don’t wait for an incident to happen. Be proactive and create clear escalation paths for reporting and investigating concerns tied to AI-driven processes.
- Encourage a speak-up culture: Reinforce to employees that concerns around automation, algorithms or data use are as valid as any other compliance issue. Whistleblower protections should extend to the AI space as well.
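To make the first two action items concrete, here is a minimal sketch, in Python, of an AI use-case register that a cross-functional review group could maintain. The structure, field names and risk categories are illustrative assumptions, not a standard; a real register would follow the organization's own risk taxonomy.

```python
from dataclasses import dataclass, field

# Hypothetical entry in an AI use-case risk register. Each field maps to a
# question the guidance implies: what the system does, who owns it, what
# could go wrong, and what guardrails exist.
@dataclass
class AIUseCase:
    name: str            # e.g., "resume screening"
    business_unit: str   # where the system is deployed
    owner: str           # accountable individual or role
    risks: list = field(default_factory=list)       # legal/ethical/operational risks
    guardrails: list = field(default_factory=list)  # controls in place
    risk_rating: str = "unassessed"                 # e.g., low / medium / high

def flag_for_review(register: list) -> list:
    """Return use cases that are high risk or have no documented guardrails."""
    return [
        uc for uc in register
        if uc.risk_rating == "high" or not uc.guardrails
    ]

# Example register with two illustrative entries.
register = [
    AIUseCase(
        name="resume screening",
        business_unit="HR",
        owner="VP, Talent Acquisition",
        risks=["biased decision-making", "lack of transparency"],
        guardrails=["human review of all rejections", "quarterly bias testing"],
        risk_rating="medium",
    ),
    AIUseCase(
        name="chatbot for customer refunds",
        business_unit="Customer Service",
        owner="Director, CX Operations",
        risks=["unauthorized commitments", "data misuse"],
        risk_rating="high",
    ),
]

for uc in flag_for_review(register):
    print(f"Escalate to risk committee: {uc.name} ({uc.business_unit})")
```

Even a lightweight register like this documents ownership and oversight in a form that can be shown to a regulator, and it gives the cross-functional review group a shared artifact to update as AI usage grows.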
Key Takeaways
The DOJ’s new focus on AI in compliance programs signals a broader shift: emerging technology is no longer just an IT issue; it’s a governance issue. Compliance leaders who act now to build transparency, accountability and controls around AI will be far better positioned to respond to future regulatory scrutiny. More importantly, they’ll be helping their organizations navigate innovation with integrity.
Weaver’s professionals are prepared to help organizations assess emerging risks, align controls with evolving threats and provide strategic GRC guidance. Contact us today for tailored insights or support with risk-focused audit planning.
Authored by David Lange and Michael Rusk
©2025