
If you were accused of a crime, or even had a simple encounter with police, would you trust artificial intelligence software to give an accurate account of what happened? That would once have been a fantasy scenario, but in just the last year, police departments around the U.S. have begun using AI models to draft the reports officers write to document every encounter between citizens and law enforcement. An accurate report can speed the justice process, while an inaccurate or biased report could change the scope, severity, and basis of a criminal case. On January 9, 2026, ESAL hosted a panel discussion exploring how these programs, such as Draft One and Truleo, work, whether they help police do their jobs, and whether they might introduce new challenges into the justice system.
The panel, moderated by Tony Van Witsen, brought together three experts: Deputy Chief Burke Farrah of the Fresno Police Department in California, an early adopter of Draft One; Professor Ian Adams of the University of South Carolina, a criminologist researching the effectiveness of these tools; and Avneet Chattha, a public defender in Los Angeles County.
How the Technology Works
Deputy Chief Farrah was quick to address what he saw as a fundamental misconception. "We have not abdicated the role of writing a police report to a computer or to AI," he emphasized. "Our officers are engaged at every step of the process." He explained that Draft One accesses body-worn camera footage from police encounters, transcribes everything captured on audio, and then uses AI to analyze that dialogue and create an initial draft of a police report.
But that’s just the beginning. Officers must review the document, edit it, correct errors, amplify relevant details, and delete irrelevant information before submitting it to a supervisor for further review. "When I sign my name at the bottom of that report, I'm taking responsibility for that investigation," Farrah said. He stressed the multiple layers of accountability built into the system.
The Fresno Police Department has been using Draft One for nearly two years, starting with minor misdemeanor crimes and general incident reports. As of the third quarter of 2025, the department had used it over 17,000 times. Officers reported saving an average of 23 minutes per report and rated the reports about 4.3 on a 5-point quality scale.
The Research Reality Check
As part of his research program studying the use of AI in police departments, Professor Adams designed a randomized controlled trial in Manchester, New Hampshire. Over six weeks, half of the patrol officers used Draft One and the other half wrote reports manually. His team measured how long it took officers to write reports from start to finish. The result was perhaps surprising: there was no difference in the average time it took officers to write reports with or without AI assistance.
Yet when Adams surveyed the officers afterward, half of those who had used the technology insisted it was saving them tremendous amounts of time. "You should expect a fairly large perception-to-data gap when it comes to AI tools," he said. "We have this weird thing about us as human beings that if somebody hands us a magic pencil and says this is a magic pencil, some of us will just believe that what happens after that is magical."
While Adams calls himself "a technology optimist" who sees value in these tools, he emphasized that policing isn't fundamentally about efficiency. "We swear an oath to the Constitution, not the bottom line," he said, referring to his own experience as a police officer. The profession's highest goals are sanctity of life, preservation of law and order, community safety, and civil rights. If the technology isn't delivering on efficiency, the question becomes what value it actually provides.
Defense Attorney's Concerns
Avneet Chattha reviews police reports critically, looking for anything that might affect a client's case. He explained that roughly 95% of criminal charges are filed based solely on what the police report says, making these documents critical to the justice system. He has serious concerns about AI-generated reports. "The problem is fundamentally with AI—it's not necessarily with law enforcement. We don't know enough about AI and how it comes to its conclusions. AI is essentially a black box."
The bigger concern for Chattha was transparency. Law enforcement agencies don't always want to disclose the tools they're using, and vendors consider their systems proprietary. When agencies use tools like Palantir's AI to identify suspects, defense attorneys often can't get information about how those determinations were made.
Debate Over Disclosure
A major point centered on what information should be disclosed when AI is used. California recently enacted a law requiring disclosure when AI assists in writing police reports. Professor Adams explained that there are actually three pieces of information involved: the transcript from the body camera, the initial draft created by AI, and the final report edited by the officer. Currently, only the transcript and final report are part of the evidentiary record. "I would like, when I see a report, to know how much of this was AI-created and how much of it was officer-created," Adams said.
The problem, he explained, is that vendors treat the custom instructions they create for AI as their intellectual property. "Maybe it is; I'm not an IP attorney. But when we're balancing the public interest and the public good, something needs to give way a little bit."
Chattha confirmed that defense attorneys don't currently have access to that initial AI-generated draft. Representatives of Axon, creator of Draft One, have said they don't want to share it because it's essentially scrapped once the final report is generated. "We would love to get access to it," Chattha said, noting that it's the vendor, not law enforcement agencies, blocking this transparency.
Deputy Chief Farrah offered an analogy, asking whether officers' handwritten notes have traditionally been discoverable. In his experience, they haven't been. But Chattha disagreed, noting that judges have ordered notes turned over, particularly when officers reference them in court testimony. The exchange highlighted how AI-generated reports are something genuinely new that doesn't fit neatly into existing legal frameworks.
Practical Problems and Concerns
Adams shared a concerning news story from Oklahoma City where an officer described how the AI caught his partner mentioning suspects were in a red car—something the officer himself didn't remember hearing. Adams found this troubling. "An officer's report is supposed to be their perceptions of their investigation and the facts that they discovered," he noted. The incident raised questions about how officers should document technology-assisted memory versus their own actual recollections, especially when they might not testify about an incident until years later.
Looking Forward
Concerning the future, Professor Adams acknowledged the reports "read better" from a grammar and composition standpoint, but still wants to know whether they're actually better in terms of accuracy, completeness, and fitness. His research aims to determine whether report experts can detect improvements in what he called "the inherent policeness" of reports written with AI assistance.
The panel concluded that AI-generated police reports hold both promise and peril. The technology can capture details that might otherwise be forgotten, potentially improving report thoroughness and freeing officers for other duties. But significant questions remain about efficiency, transparency, cost-effectiveness, and the fundamental nature of police testimony and accountability. Rather than rejecting the technology or embracing it uncritically, the panelists agreed, the path forward requires careful policy development, rigorous research, and ongoing dialogue among law enforcement, the defense bar, vendors, and the public to ensure these tools serve justice rather than undermine it.