OnPolicing Blog

Large Language Models: Using ChatGPT for Police Leaders

December 4, 2024

Chief Jason Potts

Sergeant Michael Billera

Chief Chris Catren (Ret.)

Deputy Chief Travis Martinez

Earlier this year, one of NPI’s Executive Fellows, Chief Jason Potts, published an article on PoliceOne exploring the use of ChatGPT in police writing. This month, we are adding this article to our OnPolicing Blog with recent updates from the chief and his fellow authors, Sergeant Michael Billera, Chief Chris Catren (Ret.), and Deputy Chief Travis Martinez. Explore their insights into the benefits and considerations of applying ChatGPT in agency operations.

Overview

Large Language Models (LLMs), such as ChatGPT and other advanced artificial intelligence (AI) systems, have promising and even remarkable capabilities in understanding and generating human-like text. As these models continue to evolve, their benefits across various industries, including law enforcement, are becoming increasingly apparent. This paper delves into potential applications of LLMs, with a particular emphasis on their role in policing, especially report writing. It also asks another question: How can LLMs allow police to be more effective, just, and empathetic?

Introduction

In November 2022, OpenAI launched ChatGPT, a natural language processing tool driven by AI technology that enables human-like conversations and inquiry responses. The LLM is based on Generative Pre-trained Transformer (GPT) models, also known as generative AI. This potentially impactful tool emphasizes efficiency, answering questions and assisting users with tasks ranging from writing articles, speeches, and commendation letters to creating fitness plans, inventing new colors, or developing vacation itineraries.

The advent of LLMs like ChatGPT has revolutionized the field of natural language processing (NLP). These models, trained on vast amounts of data, generate coherent and contextually relevant text, making them invaluable across various sectors, including law enforcement. Elon Musk, whose vision for AI extends to creating a platform like X (formerly Twitter) as an “everything app” integrated with advanced AI capabilities, highlights the transformative potential of these technologies across industries (Conger & Griffith, 2022). Futurist Ray Kurzweil anticipates a future where AI and human intelligence merge, suggesting that such advancements will enhance human capabilities and fundamentally transform societal operations (Kurzweil, 2005). However, Peter Thiel warns of the potential societal disruptions posed by AI, particularly its impact on employment in fields reliant on analytical skills, emphasizing the need for careful adoption (Thiel, 2014).

Overview of Large Language Models

LLMs, such as OpenAI’s GPT-4 or ChatGPT, are trained on diverse Internet text but do not possess the ability to think or understand content in the way humans do. Instead, they identify patterns in data they were trained on and generate outputs based on those patterns. Musk’s vision of leveraging AI for advanced, integrated systems complements the potential applications of LLMs in law enforcement, where their ability to process large amounts of information quickly and accurately can aid in both operational efficiency and strategic decision-making.

One of the most promising areas of application for LLMs in policing is report writing. Officers spend significant amounts of time drafting reports. LLMs can assist officers by generating coherent and detailed reports based on the inputs provided by the officers. Kurzweil’s notion of AI augmenting human capabilities aligns with this, as these tools can reduce repetitive tasks, freeing officers to focus on more critical responsibilities. However, echoing Thiel’s caution, it is essential to ensure that the adoption of such tools does not inadvertently lead to job displacement, unintended consequences, or over-reliance on technology without adequate oversight.

For what the profession colloquially calls “routine reports,” officers can efficiently input data for thefts, trespassing, drug possession, simple assault, vandalism, and certain other low-level violations. A system to report more complex crimes, however, can also be developed, further extending the potential benefits of LLMs in enhancing the efficiency and effectiveness of law enforcement operations.

Alternative: A Template-Based Approach

A template-based approach is particularly helpful for routine reports, such as simple thefts, drug possessions, and assaults. It provides a structured format for officers to follow, ensuring that all essential information is captured (Adams, 2023). Note: A representative list of LLM templates can be downloaded using the link at the end of this section.

In a conversation, Professor Ian Adams of the University of South Carolina stressed the importance of the following considerations when using ChatGPT (a minimal code sketch follows the list):

  • Select a template: An officer or a community services officer uses a pre-formatted template that matches the incident they are reporting. These templates are designed with efficiency in mind, with clear sections for all the important information such as incident details, people involved, evidence, and the story of what happened.
  • Fill out details: Next, the officer fills in the basic information into the correct spots in the template. This helps give GPT a bit of context and keeps the report organized.
  • Request AI assistance: Requesting assistance is the cornerstone of AI’s functionality. GPT analyzes the information the officer has put into the template and any other pertinent details the officer has provided, then generates content that fits right into the template. This is extremely helpful for officers when crafting a narrative.
  • Review and edit AI content: This is a critically important step. The officer reviews the AI-generated content to ensure that the narrative is accurate and clear. If there are any errors, the officer may request changes using the AI system.
  • Combine template and AI-generated responses: Once the content is acceptable for submission, the officer combines it with the original completed template. The officer then prints out copies of the AI-generated narrative, as well as the template, to go in the report. This ensures that the incident is documented in a detailed, well-organized way and keeps the reporting integrity intact.
  • Finalize and submit: Lastly, the officer makes a final review of the whole report, including the template and narrative parts, and makes any last-minute fixes. The officer then concludes the report and submits it in accordance with departmental policies.
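To make this workflow concrete, the following is a minimal sketch of steps 1 through 3, assuming the official openai Python client. The template fields, prompt wording, and model name are illustrative assumptions, not a department standard or a system the authors describe.

```python
# Minimal sketch: pairing a routine-report template with an LLM-drafted
# narrative. Template fields and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Steps 1-2: a pre-formatted template the officer fills in.
template = {
    "incident_type": "Petty theft",
    "location": "Retail store, Beat 4",            # generic descriptors, not PII
    "parties": "Victim V1, Suspect S1",            # pseudonymous identifiers
    "evidence": "Store surveillance video, receipt",
    "officer_notes": "V1 reports S1 concealed merchandise and exited without paying.",
}

# Step 3: request AI assistance for a narrative that fits the template.
prompt = (
    "Draft a clear, chronological police report narrative in plain "
    "first-person language using only the facts below. Do not invent details.\n\n"
    + "\n".join(f"{key}: {value}" for key, value in template.items())
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model the agency has approved
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content

# Steps 4-6 remain human work: review and edit the draft, combine it with
# the completed template, and submit per departmental policy.
print(draft)
```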

This link includes a representative list of potential LLM templates that can be established and paired with a ChatGPT-type program, one hopefully integrated with an established records management system (RMS).

Enhancing Police Effectiveness

LLMs can be used to automate report writing, a task that often consumes significant police time. By providing a detailed, structured narrative of events based on data, LLMs can generate comprehensive reports, freeing up officers for more critical tasks. Moreover, LLMs can assist in analyzing large volumes of data to identify patterns or trends that might be overlooked by humans, potentially aiding in crime prediction and prevention.

Chatbots and other AI systems can be used to organize and coordinate non-emergency calls, which can then be triaged to an officer’s mobile data computer through computer-aided dispatch (CAD).
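As a hedged illustration of that triage step, the sketch below asks an LLM to classify a call summary into a category and priority before a dispatcher reviews it; the categories, priority scale, and model name are assumptions for illustration, not an existing CAD integration.

```python
# Illustrative sketch: LLM triage of a non-emergency call summary.
import json

from openai import OpenAI

client = OpenAI()

CATEGORIES = ["parking", "noise", "cold property crime", "suspicious activity", "other"]

def triage_call(summary: str) -> dict:
    """Ask the model for a JSON triage decision; a dispatcher reviews it
    before anything is pushed to an officer's mobile data computer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use the agency-approved model
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                f"Classify this non-emergency call into one of {CATEGORIES} and "
                "assign priority 1 (dispatch soon) to 3 (online report). "
                "Reply as JSON with keys 'category' and 'priority'.\n\n" + summary
            ),
        }],
    )
    return json.loads(response.choices[0].message.content)

print(triage_call("Caller reports a vehicle blocking their driveway since this morning."))
```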

Promoting or Hindering Justice

Bias in law enforcement is a persistent issue at the forefront of our profession. LLMs, when paired with a diverse and balanced dataset, can help mitigate human bias. They can provide an unbiased assessment of situations based on the data provided, aiding in fair decision-making. Further, LLMs can be used for training, simulating various scenarios to help officers understand and manage their biases.

Fostering Empathy

LLMs can be programmed to demonstrate empathy, providing a more human-like interaction. This can be useful in situations where individuals are in distress and need reassurance. For instance, LLMs can be used on hotlines to provide immediate, empathetic responses while the caller waits for a human operator.

LLMs can be used in the following additional ways:

  • Data Analysis: LLMs can analyze vast amounts of data to identify patterns, which can be crucial in crime prediction and prevention. Based on those patterns, officers can patrol hot places at hot times to have an impact on hot people.
  • Surveillance: With the integration of LLMs in surveillance systems via the use of computer vision, there is potential for real-time analysis of surveillance footage, aiding in quicker response times.
  • Training: LLMs can be used to create realistic training scenarios for officers, providing better preparation for real-life situations. Tabletop exercises based on previous incidents with known outcomes can be incorporated to further infuse the training with well-known checklists.

Challenges and Ethical Considerations

While LLMs offer numerous benefits, there are still challenges and ethical considerations. Concerns about privacy, potential misuse, and the reliability of AI-generated content must be addressed. Further, in the context of law enforcement, ensuring that LLMs do not perpetuate biases is crucial.

The evolution of LLMs presents vast opportunities across a wide array of industries, including law enforcement. As these models become more integrated into daily life, it is essential to harness their potential with measured responsibility, ensuring that they are used ethically and effectively.

This rapidly expanding technology requires continued training and analysis to ensure that responses are accurate. Additionally, what experts call hallucinations can occur: the chatbot delivers responses that appear to be true but are fabricated by the AI. Further, inputs or data entered into an AI chatbot immediately become accessible to its owner; in ChatGPT’s case, that is OpenAI.

Personnel must strictly abide by regulations and procedures to safeguard and protect critical information. Most law enforcement agencies likely have certain guidelines that apply equally to all modes of online activity, including unofficial and personal use of the Internet. This may include the use of LLMs. If such a policy does not exist or appears vague, however, it is imperative that privacy officers guide personnel regarding the impact of any text, imagery, or video content on operational or information security before posting online.

Chain-of-Thought (CoT) Reasoning and Its Exponential Growth

Chain-of-Thought (CoT) reasoning represents an innovative approach in policing through artificial intelligence (AI), enabling systems like GPT to break complex problems into sequential, logical steps. Already used in such systems as Microsoft Copilot and Axon Draft One, the structured framework mimics human reasoning, making CoT an invaluable tool for multi-step tasks like planning, problem-solving, and decision-making (Wei et al., 2022).

CoT reasoning has shown transformative potential across fields, particularly in law enforcement, where it can enhance training, strategic thinking, and resource allocation. Despite some claims, AI systems leveraging CoT are not inherently smarter than humans, as they presently lack creativity, emotional intelligence, and self-reflection capabilities (Bubeck et al., 2023). Instead, they complement human expertise by providing high-speed, logical analysis while relying on human oversight to guide their use in complex real-world scenarios. Nonetheless, the implications for police training, incentives, early-warning systems, policy creation, jail classifications, crime-analysis deployments, and accountability (that is, how we measure, reward, and hold people to account) suggest new ways to enhance culture and efficiency. Despite all of this promise, however, there are limitations: we still need a human to verify, reflect, and reason, all within emotionally intelligent parameters.
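To ground the concept, here is a minimal sketch of chain-of-thought prompting in the spirit of Wei et al. (2022), again assuming the openai Python client; the patrol-planning question and model name are illustrative assumptions, not a deployed system.

```python
# Minimal chain-of-thought prompting sketch: the prompt asks the model to
# reason step by step before committing to a final plan.
from openai import OpenAI

client = OpenAI()

question = (
    "A beat has three micro-hotspots. An officer can make six 15-minute "
    "high-visibility stops per shift. Allocate the stops so every hotspot "
    "is visited at least once and no hotspot is visited twice in a row."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Think through the problem step by step, then state a final plan."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```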

Implications for Training in Policing

CoT frameworks offer significant advancements in police training by simulating realistic scenarios and fostering critical decision-making skills. For instance, CoT can support tabletop exercises where officers navigate step-by-step through active shooter scenarios or disaster responses. Such support could include incident command systems, task and checklists for complex critical incidents, or community conflict resolution. By breaking down these situations into logical components, officers can better understand both immediate actions and long-term implications (Koper et al., 2014).

CoT-driven simulations may also enhance empathy and procedural justice training by guiding officers through de-escalation techniques and fair communication strategies (Ferguson, 2024). This structured reasoning helps officers internalize equitable practices, improve public trust, and reduce biases during community interactions. The implications may even be more powerful when paired with reality-based and virtual reality training sets. Some agencies are already using AI for call-taking training for dispatchers.

Enhancing Deployments with Crime Analytics and CAD Systems

CoT reasoning can revolutionize crime analysis and police deployments by integrating data-driven strategies, such as the Koper Curve, into CAD systems. By analyzing historical crime data, environmental factors, and real-time intelligence, CoT-enabled systems may optimize patrols for violent micro-hotspots. The Koper Curve’s principle of high-visibility patrols lasting 10-15 minutes can be encoded into CAD systems in an automated fashion, ensuring randomized and efficient scheduling that prevents predictability and maximizes deterrence (Koper, 1995). Additionally, crime analysts can use CoT frameworks to predict high-risk times and areas, enabling preemptive deployments such as targeted precise patrols or community outreach.
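As a sketch of what encoding the Koper Curve into an automated scheduler might look like, the snippet below randomizes 10-15 minute visits across hotspots; the hotspot names, shift parameters, and randomization scheme are assumptions for illustration, not a feature of any existing CAD system.

```python
# Hedged sketch: randomized Koper Curve patrol scheduling (Koper, 1995).
import random
from datetime import datetime, timedelta

HOTSPOTS = ["Main & 5th", "Transit Center", "Riverside Park"]  # made-up names

def koper_schedule(shift_start: datetime, shift_hours: int = 8, visits: int = 6):
    """Return randomized (start time, hotspot, dwell minutes) patrol visits,
    each lasting 10-15 minutes per the Koper Curve."""
    times = sorted(
        shift_start + timedelta(minutes=random.randint(0, shift_hours * 60 - 15))
        for _ in range(visits)
    )
    schedule, last = [], None
    for start in times:
        # avoid back-to-back visits to the same hotspot to stay unpredictable
        spot = random.choice([h for h in HOTSPOTS if h != last])
        last = spot
        schedule.append((start, spot, random.randint(10, 15)))
    return schedule

for when, spot, dwell in koper_schedule(datetime(2024, 12, 4, 18)):
    print(f"{when:%H:%M}  {spot}  {dwell}-minute high-visibility stop")
```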

When policing is precise, it is akin to fishing with a spear instead of a net to avoid disparate outcomes. Integrating CoT reasoning into policing strategies fosters a dynamic and responsive model that balances deterrence with fairness and proactive enforcement. This approach aligns with traditional psychological principles, particularly operant conditioning, which posits that behavior is influenced by its consequences — reinforcements and punishments. The “carrot and stick” method exemplifies this, using rewards to encourage desired behaviors and penalties to discourage undesired ones. By applying these principles, law enforcement can develop strategies that not only deter criminal activity but also promote positive community interactions, leading to more effective and equitable policing outcomes (Skinner, 1953).

Limitations of CoT Reasoning in Policing

Despite its promise, CoT reasoning has limitations that impact its application in policing. CoT models rely on static training data and presently cannot independently recognize or correct errors in real time, making them prone to perpetuating flawed logic or biases embedded in their datasets (Wei et al., 2022). Unlike humans, who can be introspective and adapt based on experience, CoT systems cannot “learn” from past mistakes without external intervention, such as fine-tuning or human-led feedback loops (Bubeck et al., 2023). They also obviously cannot perceive and feel in real time. These limitations restrict CoT’s ability to handle abstract or novel challenges, emphasizing the need for rigorous oversight and continuous improvement when deploying AI in law enforcement.

While LLMs hold promise for enhancing police report writing, empirical research suggests their impact on efficiency may be more nuanced than initially marketed. A randomized controlled trial (Adams, 2024) revealed no significant reduction in report-writing time among officers using AI-assisted tools. These findings challenge the notion of substantial time savings, emphasizing that the rigid templates and meticulous data entry required in police reporting often limit AI’s efficiency gains.

LLMs may still enhance report quality, however, by structuring narratives clearly and ensuring adherence to reporting standards. As seen in other law enforcement technologies, such as body-worn cameras, the real value of AI may lie in its ability to improve accuracy and documentation over time rather than simply accelerating workflows.

Policy Implications for Policing

The adoption of CoT-enabled systems in policing requires thoughtful policy considerations to ensure ethical and effective use. Agencies must implement safeguards to prevent bias, enforce transparency in AI-driven decisions, and uphold such constitutional protections as privacy (i.e., inputting personal information into a third-party system) and due process (Ferguson, 2024). Clear guidelines should govern the integration of CoT in training, CAD systems, and crime analysis, emphasizing collaboration among technologists, crime analysts, and policymakers.

Additionally, mechanisms to evaluate the effectiveness of CoT applications, such as error rates, officer feedback, and community outcomes, are essential for building public trust. By addressing these considerations, law enforcement can harness CoT’s potential while minimizing risks and maximizing societal benefits.

Taking the pros and cons into account, it is highly advisable to consult with your local city and county counsel to ensure that this technology is rolled out responsibly. It is no different than when officers started using social media, license plate readers, biometrics, drones (still evolving), and body-worn cameras — there is undoubtedly a learning curve.

Finally, it is advisable to anonymize the data by stripping personally identifiable information (PII) such as names, addresses, and case numbers before inputting data into GPT systems. For example, officers should use pseudonyms or unique identifiers for individuals mentioned in prompts or outputs and ensure that they use secure servers and networks to minimize the risk of unauthorized access.
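A minimal sketch of that anonymization step appears below; the regular-expression patterns and pseudonym scheme are illustrative assumptions, and real redaction should rely on a vetted tool plus human review rather than these patterns alone.

```python
# Illustrative PII scrubbing before text is sent to a third-party LLM.
import itertools
import re

PATTERNS = [
    ("CASE", re.compile(r"\b\d{2}-\d{6}\b")),                  # e.g., 24-001234
    ("PHONE", re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")),
    ("NAME", re.compile(r"\b(?:Mr|Ms|Mrs)\.\s+[A-Z][a-z]+\b")),
]

def pseudonymize(text: str):
    """Swap PII for stable tokens; return redacted text plus a local map
    so the officer can restore real names in the final report copy."""
    mapping = {}
    counters = {label: itertools.count(1) for label, _ in PATTERNS}

    def substitute(label):
        def _sub(match):
            original = match.group(0)
            if original not in mapping:
                mapping[original] = f"[{label}-{next(counters[label])}]"
            return mapping[original]
        return _sub

    for label, pattern in PATTERNS:
        text = pattern.sub(substitute(label), text)
    return text, mapping

redacted, key = pseudonymize("Mr. Smith (case 24-001234) called 555-123-4567.")
print(redacted)  # [NAME-1] (case [CASE-1]) called [PHONE-1].
print(key)       # kept locally; never sent to the model
```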


Transparency

It is best to be transparent when using GPT to generate reports, emails, briefs, or data sets. Transparency fosters trust, accountability, and ethical use of AI, particularly in such sensitive fields as policing. Adopting a policy of disclosure ensures that stakeholders, including supervisors, colleagues, and the public, are aware of when and how AI is being used. Further, transparency aligns with ethical principles of openness and honesty. It allows for informed scrutiny of AI-generated content and reinforces accuracy and accountability, emphasizing the need for human oversight given the model’s potential for errors or “hallucinations” (fabricated outputs).

Identifying GPT as the source ensures that users critically evaluate the AI-generated material before relying on it for decisions. The best approach to using GPT, therefore, is to indicate clearly when a document, report, or email was partially or fully generated by GPT. Here is some suggested text: “This document was generated with the assistance of GPT, an AI-based text-generation tool, and has been reviewed by [Officer/Analyst Name] to ensure accuracy and compliance with department standards.”
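Where agencies adopt that practice, the disclosure can be appended mechanically; the snippet below is a trivial sketch using the suggested wording, with the reviewer name as a placeholder.

```python
# Tiny sketch: appending the standard disclosure to AI-assisted drafts.
DISCLOSURE = (
    "This document was generated with the assistance of GPT, an AI-based "
    "text-generation tool, and has been reviewed by {reviewer} to ensure "
    "accuracy and compliance with department standards."
)

def with_disclosure(draft: str, reviewer: str) -> str:
    """Return the draft with the provenance disclosure appended."""
    return f"{draft}\n\n{DISCLOSURE.format(reviewer=reviewer)}"

print(with_disclosure("Narrative text...", "Officer J. Doe"))
```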

 

References

  • Adams, I. T. (2023). LLMs and AI for police executives (working paper). University of South Carolina.
  • Adams, I. T., Barter, M., McLean, K., Boehme, H. M., & Geary, I. A. (2024). No man’s hand: Artificial intelligence does not improve police report writing speed. Policing: An International Journal.
  • Baker, E. M. (2021). I’ve got my AI on you: Artificial intelligence in the law enforcement domain (Doctoral dissertation). Naval Postgraduate School, Monterey, CA.
  • Bubeck, S., et al. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
  • Conger, K., & Griffith, E. (2022, October 27). Elon Musk completes $44 billion deal to own Twitter. The New York Times. https://www.nytimes.com/2022/10/27/technology/elon-musk-twitter-deal-complete.html
  • Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1).
  • Ferguson, A. G. (2024). The ethical implications of AI in law enforcement. Harvard Law Review, 137(2), 412-431.
  • Koper, C. S. (1995). Just enough police presence: Reducing crime and disorderly behavior by optimizing patrol time in crime hotspots. Justice Quarterly, 12(4), 649-672.
  • Koper, C. S., Lum, C., & Willis, J. J. (2014). Evidence-based policing and the challenge of implementation. Policing: A Journal of Policy and Practice, 8(3), 226-241.
  • Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Penguin.
  • Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347-1358.
  • Skinner, B. F. (1953). Science and human behavior. Macmillan.
  • Thiel, P. (2014). Zero to one: Notes on startups, or how to build the future. Crown Business.
  • Wei, J., Wang, X., Schuurmans, D., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35.
  • Yong, E. (2018, January 17). A popular algorithm is no better at predicting crimes than random people. The Atlantic. https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/


Disclaimer: The points of view or opinions expressed in this article are those of the author(s) and do not necessarily represent the official position of the National Policing Institute.

Written by

Chief Jason Potts

Sergeant Michael Billera

Chief Chris Catren (Ret.)

Deputy Chief Travis Martinez

For general inquiries, please contact us at info@policefoundation.org.

If you are interested in submitting an essay for inclusion in our OnPolicing blog, please contact Erica Richardson.