
Artificial intelligence (AI) agents are transforming yet another aspect of modern life, law enforcement, as large language models (LLMs) begin to help police officers write the reports most departments require them to file after any citizen encounter.
If you were accused of a crime, would you trust AI to give the prosecutor and your own lawyer an accurate account of what happened? While some police departments have eagerly embraced AI as a time saver, other observers of the justice system worry about altering the humble police report, the foundation of everything that follows in the legal system.
Can AI-assisted police reports be trusted? Hear three authorities discuss this critical topic in a streaming panel sponsored by Engineers and Scientists Acting Locally (ESAL).
Ian Adams, Professor of Criminology and Criminal Justice, University of South Carolina
Sgt. Steven Casto, Fresno CA Police Department
Avneet Chattha, Deputy Public Defender, Los Angeles County CA
The discussion will cover how these models are trained, as well as errors, word choices, hallucinations, bias, and how the prompts officers use shape the final report. Defense lawyers might try to challenge AI-generated reports by inquiring into how the AI agent was created, details that private tech firms are not required to disclose. This raises questions about the right of the accused to confront their accusers. Because decisions about AI-assisted reports are made by each individual police department, this issue offers an opportunity to influence law enforcement policy in your own community.