
AI in Local Government: Where It Helps, Where It Hurts

AI is going to write your staff reports whether you like it or not. The interesting question is which uses save the city money and which create new liability.

The conversation about AI in local government is mostly noise. Vendors are pitching transformation; skeptics are pitching prohibition; meanwhile, clerks are quietly using ChatGPT to draft staff reports because they have 11 things to do before Friday.

The interesting question isn’t whether AI belongs in city hall. It’s already there. The question is which uses save staff time and which create new exposure — and the answer is more nuanced than either side will admit.

What AI is already doing in cities

Walk into any small-city clerk’s office right now and you’ll find at least three of these in casual use:

  • Drafting staff reports from bullet-point notes
  • Summarizing long correspondence for the council packet
  • Translating notice materials into Spanish (and sometimes other languages)
  • Producing first-draft minutes from meeting recordings
  • Generating agenda item descriptions from staff intake forms
  • Drafting responses to public records requests
  • Writing the city manager’s weekly memo

Most of this is happening on consumer ChatGPT or Claude accounts. None of it has been formally adopted by the city. Most council members don’t know it’s happening.

The uses that work

AI is good at three things in a municipal context: drafting from a clear brief, summarizing dense source material, and translating across languages or registers. These map cleanly to clerk and analyst work that’s historically been time-consuming.

First-draft minutes. AI can take a 90-minute meeting recording and produce a 70%-correct draft in under five minutes. The clerk reviews and corrects. Net time savings: 4–6 hours per meeting. This is the single highest-ROI AI use in municipal government and it’s mature enough to deploy now.
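
If you want a picture of what that pipeline looks like under the hood, here is a minimal sketch using the OpenAI Python SDK. The model names, file name, and prompt are placeholders rather than recommendations, and a real deployment would split long recordings into chunks and run on an enterprise or government-authorized account rather than a personal one.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; use an enterprise account, not a personal one

# Step 1: transcribe the recording (long files may need to be split into chunks first)
with open("council-2026-03-17.mp3", "rb") as audio:  # placeholder file name
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Step 2: turn the raw transcript into draft action minutes for clerk review
draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You draft action minutes for a California city council meeting. "
                "For each agenda item, note the item, the motion, the mover and "
                "seconder, and the roll-call vote. Mark anything you are unsure "
                "about with [VERIFY]."
            ),
        },
        {"role": "user", "content": transcript.text},
    ],
)

print(draft.choices[0].message.content)  # a first draft; the clerk still reviews and corrects
```

The [VERIFY] instruction is the point: the output is explicitly a draft, and the clerk remains the person who certifies the minutes.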

Public-comment summarization. When a controversial item generates 200 public comments, AI can produce a summary that captures the substantive positions, the rough sentiment distribution, and any specific factual claims that need response. This is editorial support, not a substitute; the body still reads the comments themselves. But the summary saves an analyst a day.
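
A sketch of the same idea for comment summarization, assuming the comments have already been exported as plain-text strings. The prompt wording and model choice are illustrative, and a 200-comment batch may need to be summarized in chunks and merged to stay within context limits.

```python
from openai import OpenAI

client = OpenAI()

def summarize_comments(comments: list[str]) -> str:
    """Summarize public comments on one item: positions, sentiment split, claims to verify."""
    body = "\n\n---\n\n".join(comments)  # very large batches: summarize in chunks, then merge
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize these public comments on a single agenda item. Group the "
                    "substantive positions, estimate the sentiment split (support / oppose / "
                    "other), and list any specific factual claims staff should verify or "
                    "respond to. Do not omit minority positions."
                ),
            },
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content
```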

Translation and language access. SB 707’s language-access requirements are easier to meet when you’re not paying for human translation of every notice. AI translation is now good enough for routine notices in major languages; specialized or sensitive content still warrants human review.

Agenda-description adequacy checking. The line between an adequate item description and a vague one is well-defined in case law. AI can flag descriptions that fail the specificity test before the agenda is published. CivicCA does this; so do a few other platforms.
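
CivicCA's own checks aren't reproduced here, but the general shape of an adequacy screen is simple to sketch: run each description through a model with the specificity criteria spelled out in the prompt, and flag anything that comes back vague. The criteria below are paraphrased for illustration, not quoted from case law.

```python
from openai import OpenAI

client = OpenAI()

def flag_vague_items(descriptions: list[str]) -> list[tuple[str, str]]:
    """Return (description, reason) pairs for agenda items that look too vague."""
    flagged = []
    for item in descriptions:
        verdict = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You review agenda item descriptions for adequacy. Answer ADEQUATE or "
                        "VAGUE on the first line, then give one sentence of reasoning. Treat a "
                        "description as VAGUE if a reasonable resident could not tell what "
                        "action the body may take or what property, contract, or subject is involved."
                    ),
                },
                {"role": "user", "content": item},
            ],
        ).choices[0].message.content
        if verdict.strip().upper().startswith("VAGUE"):
            flagged.append((item, verdict))
    return flagged
```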

The uses that don’t work (yet)

Three categories where AI is consistently overpromised and consistently underdelivers:

Legal opinions. AI can produce text that looks like legal analysis. Sometimes the cases are real, the citations are real, and the reasoning holds. The hallucinations are just as real and often invisible until someone checks. No municipal lawyer should rely on AI legal output without independent verification, and that verification typically takes longer than writing the analysis from scratch.

Resident-facing chatbots. Most are bad. The good ones are scoped narrowly to specific question categories (“what day is trash pickup?”) and have good fallback to human service. Open-ended chatbots that try to answer any city-related question fail in ways that generate news stories and complaints.
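
The difference between a scoped chatbot and an open-ended one is mostly an upfront routing decision: answer only the questions you know you can answer, and hand everything else to a person. A toy sketch of that pattern follows, with placeholder topics and canned answers; a real deployment would classify questions with a model constrained to the known topics rather than matching keywords.

```python
# Placeholder topics and canned answers; the specifics are illustrative only.
KNOWN_TOPICS = {
    "trash": "Trash and recycling are collected on your zone's scheduled day.",
    "council meeting": "Regular council meetings are the first and third Tuesday at 6:00 p.m.",
    "business license": "Business license applications are handled online or at the counter.",
}

FALLBACK = (
    "I can only answer a few routine questions. "
    "For anything else, please contact the clerk's office directly."
)

def answer(question: str) -> str:
    """Route a resident question to a canned answer, or fall back to a human."""
    q = question.lower()
    for keyword, reply in KNOWN_TOPICS.items():
        if keyword in q:  # a real deployment would classify with a model constrained to these topics
            return reply
    return FALLBACK       # unrecognized questions go to a person, not a guess

print(answer("What day is trash pickup?"))
print(answer("Can the city waive my late property taxes?"))  # falls back to a human
```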

Decision-making support. AI cannot tell the council whether to approve a contract, fire a manager, or rezone a parcel. Current models are good at organizing information; they are not good at making contested judgment calls, and no version of "the AI told us to" survives council debate or court review.

The exposure questions

Before deploying AI in any municipal workflow, three questions matter more than the technology choice:

Who reviews the output? AI produces drafts, not finished work. The clerk who reviews and signs off is still responsible for accuracy. Workflows that let AI output go out without review are the ones that produce the embarrassing outcomes.

Where does the data go? Consumer AI accounts (free ChatGPT, free Claude) typically allow your inputs to be used for model training. Pasting confidential personnel discussions or attorney-client communications into one is a disclosure. Enterprise accounts with appropriate data-use settings are better. Platforms with government authorizations (FedRAMP, CJIS compliance) are the cleanest answer for sensitive material.

What does the public records law say? AI prompts and outputs may be public records under the CPRA, especially if used to draft or generate official materials. Most agencies have not thought through retention schedules for AI interactions. Get ahead of this before a request arrives.

The policy framework

Cities adopting AI policy fall into three patterns:

  • Permissive with disclosure. Staff may use AI; AI use must be disclosed in materials produced; certain categories (legal opinions, personnel decisions) prohibited. This is where most early adopters land.
  • Approved tools only. Staff may use AI but only specified, vetted tools. This works when IT can keep up; it strangles flexibility when the tool list lags the technology.
  • Quiet allowance. No formal policy; staff use AI quietly. This is most cities right now, and it’s the most exposure-creating posture because problems surface as scandals rather than learning opportunities.

The League of California Cities has published model AI-use policies; most agencies adopting policy in 2026 are working from variants of those.

The clerk’s reality

Whatever the council decides about AI policy, the clerk’s reality is unchanged: more work than time, accountability for accuracy, and a meeting Tuesday night. AI tools that genuinely save 4–6 hours per meeting on minutes alone are going to get used — through formal procurement or otherwise.

The cities making the smoothest transition are the ones that picked the highest-ROI use cases (minutes drafting, agenda compliance checking, translation), procured tools formally, set clear review requirements, and trained staff on what AI is and isn’t good for. The cities that will struggle are the ones treating AI as a future question rather than a current one.

What to do this quarter

Three concrete steps that are appropriate for almost any city:

  1. Survey staff on which AI tools they're already using informally. You'll be surprised by the answers.
  2. Pilot one high-ROI use case formally; AI-drafted minutes are the obvious choice. Measure the time savings.
  3. Adopt a one-page AI-use policy. It doesn’t have to be comprehensive; it has to exist and address disclosure, prohibited uses, and approved tools.

The technology will continue to change. The procedural and policy framework you put in place now — even if imperfect — is what gives your agency room to use the next generation of tools without scrambling.

Run compliant meetings without the spreadsheet. Try CivicCA.