
Last week, Mayor Bruce Harrell announced the City’s new AI Plan, which looks to harness the current artificial intelligence (AI) boom in the tech sector by integrating AI into the City’s operations, public services, and civic engagement. But critics worry about the ethical implications of AI use in public services, as well as its potential impacts on public workers and documented environmental harms.
“AI will be harnessed to accelerate permitting and housing, improve public safety, enhance the responsiveness of services for Seattle residents, and enable more accessible and plain-language interactions to remove barriers,” the plan states.
The plan includes an AI training program for City employees and partnerships with academia, industry, and communities. Its definition of AI is broad, and given the plan’s high-level nature, it is often short on specifics.
The City will also be adding a new AI Leadership position to its Information Technology Department (ITD).
“We are trying to be very intentional about positioning Seattle as a national leader in responsible artificial intelligence implementation, make no mistake about that,” Harrell said at last week’s press event. “We believe it will position us to be not only a strong city in terms of our values, but as a port city, as a maritime city, as a biotech leader, as a high tech leader, really fits into our fabric as a city.”
The City also unveiled its new associated AI Policy, which lays out rules for City employees’ use of AI, including a list of prohibited uses. It requires attribution to an AI tool for generated images, videos, and source code; attribution for text generated by AI is required if it is “used substantively in a final product.”
The policy says employees must conduct a racial equity toolkit before using AI in a new way, and departments are supposed to work to understand environmental impacts of specific tools prior to procurement. How these rules will work in practice, and whether they will dissuade certain AI use cases, remains unclear.
The Chief Technology Officer, currently Rob Lloyd, is responsible for enforcement of this policy.
Seattle’s many AI pilots
The City has engaged in 42 AI pilots thus far, 24 of which are currently ongoing.
Of the 18 completed pilots, the majority were designated for “general” use and included applications such as Fathom AI, Beautiful.ai, Otter.ai, Hootsuite, and AI Calendar, which perform basic functions such as transcription, calendar management, and content creation and coordination.

One completed pilot by the Seattle Police Department (SPD) tested CaseGuard for redacting body-worn video for public disclosure. The Seattle Fire Department (SFD) tested the tool Corti to provide audio analysis of non-emergency calls and to support “Public Safety Triaging, Documentation, Quality Improvement & Training.”
Of the 24 ongoing pilots, 11 are being conducted within ITD. Four are currently running within the Seattle Department of Construction & Inspections (SDCI) with the goal of streamlining and speeding up the permitting process. The Seattle Department of Transportation (SDOT) is testing software to identify intersections where engineering improvements can reduce the risk of crashes.
ITD spokesperson Megan Ebb told The Urbanist that the Community Assisted Response & Engagement (CARE) department is also looking into using AI tools to analyze call prioritization data, which might help them better dispatch the right resources to the right call. However, this is not yet an active pilot.
SPD has two new pilots: one is an Amazon Q/Bedrock business chatbot “for general business, ‘low-risk’ use cases,” and the other is with C3 AI, which ITD says is being used to reference policy information more easily. Ebb said C3 AI is not being used as a tool to assist criminal investigations at this time.
“With AI being a new technology, SPD is gradually working on responsible governance for the use of these tools in the future,” Ebb told The Urbanist. “As part of this process, the department is testing platforms in limited use cases, such as summarizing survey results to help present information in easier to understand ways.”
Amazon published a blog post earlier this summer about possible public safety applications of Bedrock, focusing on the tool receiving noise complaint calls and automatically generating incident reports. Other use cases mentioned in the post include reporting of traffic incidents without injuries, abandoned vehicle reporting, and graffiti and vandalism reporting.
ITD did not share any specific current use cases with The Urbanist.
Asked for more detail about the Amazon AI pilot, Ben Dalgetty, a spokesperson for Mayor Harrell, said SPD is currently developing policy and technical guardrails for this technology.
“Future uses of this technology may have implications for the identification of complex trends and patterns used to prevent crime but a lot of work needs to be done before SPD is comfortable employing it in this way,” Dalgetty said.
This description sounds dangerously close to AI-fueled predictive policing, which studies have shown can lead to the disproportionate targeting of minority communities. Because such tools are typically trained on historical crime data that reflects past patterns of over-policing, they can reproduce and amplify that bias in their predictions.
A number of U.S. senators sent a letter to the U.S. Department of Justice at the beginning of 2024, asking the department to halt grants funding predictive policing projects.
“Mounting evidence indicates that predictive policing technologies do not reduce crime. Instead, they worsen the unequal treatment of Americans of color by law enforcement,” the senators wrote. The letter continued: “The continued use of such systems creates a dangerous feedback loop: biased predictions are used to justify disproportionate stops and arrests in minority neighborhoods, which further biases statistics on where crimes are happening.”
Meanwhile, ITD’s recent annual learning conference featured two Oracle executives demonstrating an AI software interface for police case record management. The software involved AI transcription of audio and video files as well as summaries of text.
“There’s tremendous issues here, especially in the policing context, especially when we’re talking about going from something that’s originally an audio recording,” said Dr. Emily Bender, a linguistics professor at the University of Washington who specializes in computational linguistics.
Bender explained that AI transcription technology doesn’t work as well when the language used diverges from the “prestige” standard, which can then cause more errors. AI models also have biases in the data on which they’re trained.
“The generative AI system is set up to basically output plausible-looking text based on the immediate input plus all of its pre-training data, and so it is very likely to output additional details that just look plausible in context but bear no resemblance to what happened,” Bender said.
Officers might not be given the time and space to correct transcription and other errors in such an AI system after the fact. Such a system would also undermine accountability: a report it generates could be presented in a court of law as the involved officer’s account, even though it did not begin as the officer’s personal account of what actually happened.
There has recently been community concern around SPD’s use of generative AI in communicating with the public. PubliCola reported on an Office of Police Accountability (OPA) complaint from an anonymous community member alleging that SPD used AI to generate public-facing materials, including blog posts and a statement from new Chief Shon Barnes. While a well-known AI detection tool found those materials were likely written entirely or partially by AI, SPD denied using generative AI in “a substantive way” for communications.
This example exposes a weakness in the City’s AI Policy, which is vague about what a “substantive” use of AI text generation entails. If the SPD communications really were generated with AI tools, they should have carried attribution acknowledging as much. As PubliCola reported, the OPA referred the complaint as a “supervisor action,” meaning there will be no consequences beyond training or coaching.
Cascade PBS recently reported that in Everett, even after the city adopted an AI policy requiring attribution of AI-generated text, city staff hasn’t always followed the guidelines.
In addition to AI tools sometimes providing incorrect information (known as “hallucinations”), the use of AI could further erode trust in government, a particular problem for SPD as it seeks to rebuild trust in the community. Some critics have argued AI-generated text has no place in government communications.
“There are no appropriate use cases for synthetic text,” said Bender. “Setting up a system that is just designed to mimic the way people use language can only be harmful.”
Other concerns about City use of AI
The example of cities like Everett calls into question how the City’s AI Plan will actually be implemented.
The recent release of OpenAI’s newest model, GPT-5, which upon launch was unable to draw an accurate map of the United States or produce an accurate list of U.S. presidents, has accelerated criticism of generative AI and raised questions about the technology’s future.
Even early in its deployment, AI has faced criticism for its environmental impacts. The data centers that power AI require large amounts of freshwater for cooling, and Bloomberg News found that two-thirds of data centers built since 2022 are in locations already experiencing water stress. Data centers’ power needs are also expected to grow, with McKinsey projecting that demand in the United States will be three times higher than current capacity within five years, increasing reliance on fossil fuels.
Promises that AI can readily replace human work have not necessarily materialized in practice. For example, the Swedish company Klarna laid off about 700 employees in 2022 to replace them with AI, only to begin rehiring human employees this spring.
Harrell emphasized that the City doesn’t intend to use AI to replace its workers and pledged to keep union leaders in the loop.
“We work collaboratively with our labor partners,” Harrell said. “And as we look at certain tasks that could possibly be replaced by AI, we always make sure, as we sit with a human-centered approach, we work with our labor partners and make sure that these discussions are open and transparent.”
The AI Plan adds that “[a]s intelligent systems begin to automate routine and/or administrative tasks, job roles will indeed refocus on higher-value, creative, people-facing, and decision-making responsibilities.”
The Urbanist reached out to PROTEC17, the union that represents many of the City’s workers. “We have recently been made aware of the City’s AI plan, and are in the process of gathering information and analyzing the impacts to PROTEC17 members,” Executive Director Karen Estevenin said. “While we are concerned with many aspects around how AI could impact the workforce — including any reduction in positions — we also are interested in exploring smart, safe, and effective uses for AI that could support the work and improve the working conditions of City employees.”
Another issue that could impact City workers is the potential for AI use to deskill them. A recent paper in The Lancet Gastroenterology & Hepatology found that endoscopists who regularly used AI for polyp detection during colonoscopy became deskilled within six months, with their detection rate for adenomas, a type of polyp, dropping from 28% to 22% when they worked without AI.
Earlier this summer, MIT’s Media Lab found that the use of large language models (LLMs) for essay writing “came at a cognitive cost,” with participants showing weaker cognitive engagement than people using only their own brains or a search engine.
“If the idea is to do these pilots and evaluate, then the City really ought to be evaluating impacts on deskilling of the workforce and the quality of service that can be offered,” said Bender.
There is also the question of accountability when using AI in the public sector. While both Harrell and the AI Plan are clear about keeping a human in the loop when using these tools, those workers could end up serving as moral crumple zones, taking the hit for AI errors.
“Sometimes the person who is the human in the loop ends up taking the impact when something goes wrong, and they are effectively protecting the larger organization that decided to do the automation,” said Bender.
The push to adopt AI
In spite of the possible perils, Harrell is enthusiastic about incorporating AI into the City of Seattle.
“Artificial intelligence is more than just a buzzword in Seattle – it’s a powerful tool we are harnessing to build a better city for all,” said Harrell. “By using this technology intentionally and responsibly, we are fostering a nation-leading AI economy, creating jobs and opportunities for our residents, while making progress on creating a more innovative, equitable, and efficient future for everyone.”
Last year, Harrell was invited to serve on the Department of Homeland Security’s Artificial Intelligence Safety and Security Board, whose stated purpose is to develop recommendations that help infrastructure stakeholders leverage AI responsibly and prevent and prepare for AI-related disruptions to critical services. Harrell also serves as the Chair of the U.S. Conference of Mayors’ Standing Committee on Technology and Innovation.
During his speech at the State of Downtown event hosted by the Downtown Seattle Association this February, Harrell spoke of his concerns around cybersecurity and AI, while also seeming to praise right-wing technology leaders who funded Trump’s campaign.
“We know that our current president surrounds himself by some of the smartest innovators around,” Harrell said. “When we drop names like Andreessen or Peter Thiel or David Sacks or Elon Musk, these are smart innovators.”
Ironically, Musk presents a strong cautionary tale for the use of LLMs in government through his work earlier this year at the Department of Government Efficiency (DOGE). In spite of Musk and DOGE’s stated goal of efficiency, The Atlantic found that the U.S. government actually spent more money this February and March than it did in the same months last year.
Instead, DOGE and its role in facilitating the adoption of LLMs at the federal level have handed unprecedented power to the wealthy tech moguls behind the contracted models: Mark Zuckerberg (Meta), Peter Thiel (Palantir), and Musk (xAI). The AI systems currently in use by the federal government are leading to less accountability, less transparency, and the creation of a vast surveillance state.
In contrast, Harrell is selling Seattle as a leader in responsible AI implementation.
“You see the controls that we’ve put in for our AI policy, and our plan is to say that it has to go through responsible use,” said Lloyd. “There is a security process, there is a privacy consideration. And as we go through that, we are also saying that we will enable AI to make the City of Seattle able to solve civic challenges.”
The AI Plan calls for the creation of a Citywide AI Governance Group that will be responsible for providing input and guidance on the direction of the AI Plan and its priorities. This group will be convened later this fall after the AI Leadership position is filled.
The Office of Civil Rights did not participate in the development of the AI Plan. However, the office will be asked to participate in the governance group, Ebb said.
Bender said it will be important for the City to get specific about what they want to automate, avoid synthetic text machines, and choose use cases in a well-tested and sensible way that provides for clear accountability.
Meanwhile, with the City facing a structural budget deficit, it could be tempting to use AI tools as a Band-Aid to keep things running. But that strategy is likely to create greater disparities.
“If you think about who has the ability to opt out of poorly provisioned city services, it’s the wealthy,” Bender said. “And everybody else is going to be stuck with what we’re doing collectively. So why don’t we do it well?”
Correction: The original version mistakenly identified Ben Dalgetty as a spokesperson for the City’s IT Department. He is a spokesperson for the Mayor’s Office.
Amy Sundberg is the publisher of Notes from the Emerald City, a weekly newsletter on Seattle politics and policy with a particular focus on public safety, police accountability, and the criminal legal system. She also writes science fiction, fantasy, and horror novels. She is particularly fond of Seattle’s parks, where she can often be found walking her little dog.