Agentic AI as Evidence: When Autonomous Systems Become Witnesses in Investigations

April 13, 2026

By Chris Riper

Artificial intelligence in 2026 is defined by the rise of “agentic AI,” or “AI agents”: systems with varying levels of autonomy that are “able to perceive, reason, and act on their own.”[1] Companies are rapidly deploying AI agents to automate and enhance tasks traditionally performed by humans, in some cases tying reductions in headcount to this technological shift.[2] As these agents assume operational roles once reserved for employees by approving transactions, executing workflows, and interacting with third parties, they are increasingly embedded in the evidentiary record of alleged misconduct.

AI governance is still catching up. Traditional IT governance, built for predictability, transparency, and linear systems, is “ill-suited to AI and machine learning models’ dynamic, opaque, and constantly evolving nature.”[3] Despite their apparent ability to perceive and reason, much of an AI agent’s “thinking” occurs in a black box, and over half of companies admit to a lack of monitoring and security controls over AI agents.[4] This lack of oversight creates not only the risk of AI agents “going rogue,”[5] but also opportunities for fraudsters to exploit them. AI agents may unwittingly become witnesses to, or even complicit in, fraud, raising new challenges for white-collar crime investigations. In that sense, AI agents do not merely process evidence; they create it through their decisions and communications, and by leaving behind machine‑native, often persistent traces of reasoning and action.

This agent‑generated evidence forces investigators to rethink the traditional “wh-” questions – who, what, where, when, why, and how. Answers are no longer contained in a single human narrative, but distributed across configuration files, prompts, logs, and model outputs.[6]

When the Employee You Need to Interview is an AI Agent

Witness interviews are a critical component of any fraud investigation and are typically conducted in stages. Information about systems and processes, policies, and people potentially relevant to an investigation is gleaned from initial employee interviews, helping investigators develop a roadmap to focus their efforts and obtain additional evidence. Final interviews are typically reserved for those directly implicated in the alleged fraud, where a witness can be presented with and questioned about evidence developed in earlier stages of the investigation.

Somewhere in the middle are employees who may have participated in the alleged fraud. These are employees whose job function enabled them to, for example, make changes to accounting entries or approve vendor invoices, but who have little culpability because they were working under the direction of a superior.

What happens if those employees are not humans, but are AI agents?

AI agents can use many of the same electronic, internet-based tools humans use today: they send and receive money or cryptocurrency, research subjects on the internet, interact with a variety of software applications through APIs, and even communicate with other AI agents (or humans) through email and chat platforms. In short, AI agents may be performing the very tasks at issue in a fraud investigation. The relevance of these AI agents, however, lies not in their culpability but in their proximity to the conduct at issue.

Key Questions for AI Agents

Can you interview an AI agent? If an AI agent can perceive and reason, could it also serve as a useful witness given the right set of prompts? In principle, yes. But any “interview” should be used to identify underlying artifacts (logs, prompts, tool calls, and approvals), since agent outputs can be non-deterministic and may include hallucinations, omissions, or post-hoc rationalizations that don’t reflect what actually occurred. An AI agent’s output must be corroborated with other evidence and considered in the overall context of an investigation.

AI agents resemble human employees in key ways:

  • They have defined job functions.
  • They access specific systems.
  • They operate within approval limits.
  • They generate and receive electronic communications.

Each of these characteristics carries evidentiary consequences. Like human employees, AI agents leave contemporaneous records of what they were authorized to do, what they did, and what information they relied upon in doing so. Much of this appears as traditional forms of electronically stored information (ESI), such as messages, system access records (including API keys), and audit logs. But investigators must probe further, examining how the AI agent was designed and deployed:

  • Who configured the agent’s authority?
  • What data did it have access to at the time?
  • Were prompts static or dynamic?
  • Can its actions be replayed or reproduced?
  • Were AI guardrails in place?
  • What human approvals existed, if any?

Investigators need to compare what an agent actually did, as reflected in agent-generated evidence, with what it was configured to do, and determine where culpability may lie, ranging from inadvertent authority granted during configuration to deliberate manipulation by a fraudster. Investigators must also consider how to present agent-generated evidence and actions when discussing the investigation with stakeholders (including management, boards, and regulators), and how to demonstrate remediation and future oversight.
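As a hedged illustration, the comparison between an agent’s configured authority and its actual behavior can be sketched in code. The record fields below (`tool_call`, `amount`, `human_approved`, the approval limit) are illustrative assumptions for demonstration, not the schema of any particular agent platform:

```python
# Illustrative sketch only: field names and the approval-limit logic are
# assumptions for demonstration, not the schema of any real agent platform.
from dataclasses import dataclass


@dataclass
class AgentAction:
    timestamp: str        # when the agent acted
    tool_call: str        # which tool or API the agent invoked
    amount: float         # dollar value of the action, if any
    human_approved: bool  # whether a human signed off


# Configuration evidence: what the agent was authorized to do.
APPROVAL_LIMIT = 10_000.00  # hypothetical: payments above this require human sign-off


def flag_divergences(actions: list[AgentAction]) -> list[AgentAction]:
    """Return actions that diverge from the agent's configured authority:
    over-limit payments that lack the required human approval."""
    return [
        a for a in actions
        if a.amount > APPROVAL_LIMIT and not a.human_approved
    ]


# Hypothetical reconstructed audit log.
log = [
    AgentAction("2026-02-01T09:14:05Z", "pay_vendor_invoice", 4_500.00, False),   # within limit
    AgentAction("2026-02-03T11:30:22Z", "pay_vendor_invoice", 18_000.00, True),   # over limit, approved
    AgentAction("2026-02-07T16:02:48Z", "pay_vendor_invoice", 25_000.00, False),  # over limit, no approval
]

for action in flag_divergences(log):
    print(action.timestamp, action.tool_call, action.amount)
```

In practice, any such derived report would be a starting point: the investigator would reconcile the flags against raw logs, prompts, and configuration change history rather than relying on the summary alone.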

Why This Matters in Litigation

If an investigation leads to litigation, AI agents and agent-generated evidence raise additional complexity:

  • Discovery expands. Counsel may need to build their evidentiary record through additional electronic evidence rather than, or in advance of, traditional witness depositions.
  • Attribution disputes become more complex. When an AI agent is involved, assigning human responsibility becomes more difficult.
  • Experts become essential. As AI agents become more integrated into everyday life, experts who can explain an agent’s design and why particular outcomes occurred, especially in an industry-specific context, will be increasingly important.
  • Machine narratives emerge. Courts will be confronted with machine-generated narratives. How will a jury react to the defense that “it’s the robot’s fault”? Judges will continue to play an important gatekeeping role. An amendment to the Federal Rules of Evidence was recently proposed to ensure machine and AI-generated evidence is “properly regulated for reliability and authenticity” when offered without expert testimony.[7]

These and other legal issues surrounding artificial intelligence will continue to play out over time, including whether communications with AI tools, or privileged communications uploaded to them, retain privilege protection. As Judge Rakoff of the Southern District of New York recently ruled, they do not.[8]

Conclusion

If you feel odd asking yourself these kinds of questions, then good: it probably means you’re human (at least as of the time this article was written). Agent‑generated evidence sits at the intersection of business records, system logs, and reconstructed decision‑making, challenging traditional evidentiary categories. Ultimately, AI agents adjacent to fraud are not so different from human witnesses. They perform roles like human employees and generate similar types of electronic evidence. Investigators need to determine why what should have happened diverges from what actually happened. But if the answer is obscured because the AI agent’s “thinking” is kept in a black box, the more immediate question may be whether a human should be back in the loop.


[1] Stackpole, B. (2026). Agentic AI, explained. MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/agentic-ai-explained.

[2] Block, Inc. (2026). Q4 2025 Shareholder Letter. https://s29.q4cdn.com/628966176/files/doc_financials/2025/q4/Q4-2025-Shareholder-Letter_Block.pdf; Hart, C. (2026, January 27). Pinterest to Lay Off Up to 15% of Workforce in Restructuring. The Wall Street Journal. https://www.wsj.com/business/pinterest-to-lay-off-up-to-15-of-workforce-in-restructuring-66a62170; Ittimani, L. (2026, March 12). “Devastating blow”: Atlassian lays off 1,600 workers ahead of AI push. The Guardian. https://www.theguardian.com/technology/2026/mar/12/atlassian-layoffs-software-technology-ai-push-mike-cannon-brookes-asx.

[3] Tweed, L. (2025, August 26). Bridging the Governance Gap: AI, Risk, and Enterprise Innovation. IEEE Computer Society. https://www.computer.org/publications/tech-news/trends/ai-risk-and-enterprise-innovation.

[4] Barker, P. (2026, February 5). 1.5 million AI agents are at risk of going rogue. CIO. https://www.cio.com/article/4127774/1-5-million-ai-agents-are-at-risk-of-going-rogue-2.html.

[5] Autonomously acting in ways its human overseers never envisioned, such as leaking confidential information or deleting data.

[6] The author recognizes AI agents may also be used by investigators as a tool to augment or accelerate an investigation, raising numerous questions around reliability, controls, data privacy, etc., and which are not discussed in this article.

[7] Committee on Rules of Practice and Procedure. (2026, January 6). Agenda Book (p. 75). Judicial Conference of the United States. https://www.uscourts.gov/sites/default/files/document/2026-01_standing_committee_agenda_book_final_0.pdf. The proposed Rule 707, titled “Machine-Generated Evidence,” currently reads: “When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702. This rule does not apply to the output of simple scientific instruments.”

[8] Transcript of Pretrial Conference, United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026).
