Reporting Live: How AI Can Help Police Get Down the Facts
During Axon Week in Miami Beach, Florida in April, the company unveiled a new AI-assisted software product it hopes will solve one of the most tedious tasks for law enforcement officers: writing reports. Draft One uses auto-transcribed body-worn camera audio to generate police report narratives. Although Draft One leverages artificial intelligence to create the reports, safeguards are in place requiring every report to be reviewed and approved by an officer for accuracy and accountability before it is submitted.
This article appeared in the May/June issue of OFFICER Magazine.
During the conference, Noah Spitzer-Williams, Senior Principal Product Manager for Draft One at Axon, hosted a panel session discussing the ins and outs of the product and also spoke to OFFICER Magazine about what went into creating it.
What has gone into creating Draft One?
Since we started working on this product a little over a year ago, we’ve been working with lots of different stakeholders, lots of different law enforcement customers, prosecutors, defense attorneys and community groups to really make sure that we’re threading the needle between providing value to everyone involved and making sure that it’s used responsibly and that there are not going to be any negative, unintended consequences. That’s just a key part of how we develop all of our products at Axon, but really, I think Draft One required us to focus on that even more.
What are the specifics of some of the components built around Draft One?
There are basically three core pillars to the responsible side of Draft One. The first is officer training. The second is fair and quality drafts. The third is a set of product safeguards. Then what we do is test all of those end-to-end to ensure high-quality reports are being submitted. When it comes to training, with great power comes great responsibility. Just like all of the training we do with TASER, we want to do the same for Draft One. Live now on Axon Academy, our virtual training platform, is a whole set of videos, not only for administrators but also for officers, on how to use Draft One. It goes into the workflows, what the strengths and weaknesses of using AI for report writing are, and how to look for things like bias in the reports that the officer may have to address and edit.
In terms of fair and quality drafts, this is where we probably spent the most time as a team. If the initial draft we’re providing is riddled with mistakes or riddled with racial bias, that is going to be terrible and the product will never work. So we did a few things to combat that and also to measure and see where we’re at. Number one, anyone who has used ChatGPT is familiar with some of its downsides. We use a lot of the same underlying technology as ChatGPT, but we turned off the creativity. What we basically tell it to do is just stick to the facts, and the facts are the transcript. It’s a record of what happened.
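Axon has not published Draft One's implementation, but as a rough illustration of what "turning off the creativity" can mean in practice, here is a minimal sketch that calls OpenAI's chat completions API with the sampling temperature set to zero and a system prompt that restricts the model to the transcript. The model name, prompt wording and transcript placeholder are illustrative assumptions, not Axon's actual configuration.

```python
from openai import OpenAI  # official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder for the auto-transcribed body-worn camera audio.
transcript = "..."

# Temperature 0 makes the output effectively deterministic: "creativity off".
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not Axon's
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": (
                "Write a police report narrative using only facts stated "
                "in the transcript. Do not infer or invent details."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

draft_narrative = response.choices[0].message.content
```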
We’ve also done a lot of work to test for biases, specifically racial bias. We worked with a number of customers who shared with us several hundred real-world body camera videos and the reports that those officers wrote. We tested a bunch of things. We have a set of metrics that we measure against to look at things like whether any of those metrics move just because the race of the subject is different. We did another experiment where, when a race was mentioned in the transcript, we went in and swapped that race out for each of the other races and measured the results against our metrics. What we found was that there were no statistically significant differences across races, which is great. What we want to make sure of is that the report is based on whatever actually happened and no other factor.
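Axon has not released its evaluation code, but the race-swap experiment described above amounts to a counterfactual substitution test, which can be sketched roughly as follows. Here draft_report and score_metrics are hypothetical stand-ins for the internal generation and scoring steps, and the list of race terms is illustrative.

```python
# Hypothetical sketch of a counterfactual race-swap test.
# draft_report() and score_metrics() stand in for generation and
# evaluation steps that have not been made public.

RACE_TERMS = ["white", "Black", "Hispanic", "Asian"]  # illustrative list

def swap_race(transcript: str, original: str, replacement: str) -> str:
    """Replace the mentioned race, leaving the rest of the transcript unchanged."""
    return transcript.replace(original, replacement)

def run_swap_experiment(transcript, mentioned_race, draft_report, score_metrics):
    """Generate a draft from each race-swapped transcript and collect metric scores."""
    scores = {}
    for race in RACE_TERMS:
        variant = swap_race(transcript, mentioned_race, race)
        scores[race] = score_metrics(draft_report(variant))
    return scores

# The per-race scores would then be compared with a standard significance
# test to check for statistically significant differences across races.
```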
Finally, we built in a bunch of safeguards on top. The officer is in control the entire time, and we sprinkle "insert placeholders" throughout. They cue the officer to add a little bit of information and also serve as a proofreading check in case the officer were to submit the report without having addressed them. There is also a set of additional safeguards that agencies can configure. You can control which incident types you are comfortable using Draft One on. We even have a feature where we can sprinkle obvious errors into the report that the officer has to remove before submitting it.
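As a hedged illustration of how an "insert placeholder" check might work, a submission gate could simply refuse a report that still contains unresolved placeholders. The bracketed placeholder format below is an assumption for the sketch, not Axon's actual markup.

```python
import re

# Assumed placeholder format, e.g. "[INSERT direction of travel]".
PLACEHOLDER = re.compile(r"\[INSERT[^\]]*\]")

def has_unresolved_placeholders(report: str) -> bool:
    """Return True if the draft still contains prompts the officer must fill in."""
    return bool(PLACEHOLDER.search(report))

draft = "The subject fled on [INSERT direction of travel] before officers arrived."
if has_unresolved_placeholders(draft):
    print("Report cannot be submitted until all placeholders are addressed.")
```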
What size departments can benefit from Draft One?
I think one of the beauties of it is that every department, regardless of size, is writing incident reports. It’s just a core part of the job, and even across departments, the incident reports are very similar, which is what has allowed us to scale very quickly. Whether you’re a giant department or a tiny one, it works just as well.
How will advancements in AI improve Draft One in the future?
It’s not learning from customer data, from agency data, but the way to think about it is that Draft One consists of two parts. There’s the large language model from OpenAI, and then we built the law enforcement layer on top. The large language model from OpenAI will continue to get better and will come out with different versions. We might do additional things on top to be more law enforcement specific and improve performance even further. That’s the nice thing: we can do all of that without compromising sensitive data or infringing on people’s privacy.
How much control does the officer currently have over Draft One?
Report narratives are drafted strictly from the audio transcript of the body-worn camera recording. Draft One generates a first draft of the report and includes a range of critical safeguards, requiring every report to be reviewed and approved by a human officer to ensure accuracy and accountability before it is submitted. Because we focused this first version primarily on the traditional incident report, we’ve been pleased that a number of detectives have said it works decently well for interviews. It hasn’t been designed for that yet, but that’s definitely an area we are going to be looking at investing in very shortly, just to adjust the format, pull in more details and pull in more quotes.
Some of the officers in the session mentioned they have been more descriptive on camera.
I think it is a fantastic thing. I think it may be the thing we’re ultimately most proud of at the end of the day in terms of the interactions between police and the public. It’s really hard to change the behavior of anyone, not just officers. The fact that officers are changing this behavior on their own is remarkable. Many departments didn’t necessarily tell them to do this, but they sort of just intuitively figured out that, ‘If I’m more verbal before, during and after these interactions, it benefits me.’
If you’re a department that has not adopted body cameras yet for whatever reason, now suddenly the body camera is a tool to help the officer get their job done.
How long have you guys been working on this project?
A little over a year specifically on this, but again, this is standing on the shoulders of lots of other investments. It’s sort of cool that it’s all culminated in this, but a lot of stuff had to go into it to be able to pull it off in the seamless way that we have.
Paul Peluso | Editor
Paul Peluso is the Managing Editor of OFFICER Magazine and has been with the Officer Media Group since 2006. He began as an Associate Editor, writing and editing content for Officer.com. Previously, Paul worked as a reporter for several newspapers in the suburbs of Baltimore, MD.