The Capitol building in Olympia is marble colored and includes pillars and a dome in the classic style.
In the 2025 session, the Washington State Legislature mostly punted on large structural budget issues, which is compounding the damage that Trump cuts could cause. (Stephen Fesler)

As generative artificial intelligence (AI) continues to dominate the global technology scene, driven by the hype and investments from big tech companies, Washington state lawmakers are moving to regulate the increasing use of AI in our daily lives.

With at least 14 bills in play this legislative session that pertain to AI, the focus on regulating this technology is as strong as it’s ever been. 

Jon Pincus, the founder of Nexus of Privacy, said legislators are grappling with the implications of AI all over the country.

“AI is moving really fast, and no matter what you think of it, it’s really changing things,” Pincus said. “And states are all wrestling with how to regulate it. Like any industry, tech doesn’t want to be regulated at all. There’s a sort of grudging admission that, well, okay, maybe something needs to be done. They’re pushing for things to be as weak as possible. I’m certainly in favor of strong regulations, but there is this real complexity that we don’t want to just kill promising technology.”

Tee Sannon, the ACLU of Washington’s Technology Policy Program Director, said the ACLU has been advocating for meaningful regulation of AI for some time now. 

“There’s a slew of AI-related bills that we’re seeing introduced, and I think it speaks to the point that we do really need AI regulations,” Sannon told The Urbanist. “And currently this is an unregulated space, and we’re seeing the risks and harms that come from that. But at the same time, we also have to be careful to make sure that those protections are meaningful.”

Washington State could be uniquely positioned to lead in AI regulation. The state is one of the country’s major tech hubs, is home to the world-class research institution the University of Washington, and has a strong network of tech organizers. It also has a track record: in 2023, the legislature passed the landmark “My Health My Data” Act, which the Electronic Frontier Foundation called “one of the strongest consumer data privacy laws in recent years.” Washington was the first state to tackle the gaps in protection of consumer health care data.

“I feel like we’ve arrived at this moment where many, many people are waking up to the fact that an unregulated tech industry has sort of brought us to this place of urgency and crisis in terms of what we’ve allowed to happen with big tech in our lives,” Maya Morales, founder of Washington People’s Privacy, said. “We’re now having this fraught and violent and disruptive kind of world environment, of many people coming to a realization all at once that we are literally creating a tech war against ourselves if we do not step up and regulate.”

Concerns around AI abound, from harms to the environment from increased energy and water usage to power massive data centers to the fear of AI taking away people’s jobs. Because AI is trained on real-world data, the bias contained in that data is baked straight into AI, which can then potentially make decisions that repeat that bias, leading to algorithmic discrimination. AI’s use in various systems has also raised privacy concerns – since AIs tend to collect a huge amount of data as they’re being used – and criticism over a lack of transparency, since people may not know they’re using a system that involves AI or consuming a product (or information) that was created by AI. 

Meanwhile, the Trump administration has staked out a strongly anti-regulation stance. Although its bid to include a 10-year moratorium on states regulating AI was stripped from last summer’s “One Big Beautiful Bill,” H.R.1, by a Senate vote, President Donald Trump issued an executive order in December to “check the most onerous and excessive laws emerging from the States that threaten to stymie innovation.” The order directs the formation of an AI litigation taskforce to challenge state-level AI laws and threatens to withhold federal broadband funding from states that do not comply.

Morales spoke of the dangers to democracy itself posed by big tech billionaires and their credo of technofascism, which is furthered by a lack of regulation. 

“It’s understanding that if you don’t regulate big tech, then you’re looking at this massive sort of techno class divide, where you literally have a ruling high tech class and you have this populace that doesn’t have its needs met,” Morales said. “Many people have been speaking about this on world and national stages, but ultimately, there is really this risk of building this techno future that isn’t even really built for humans.”

Regulating AI in schools

Senate Bill 5956, sponsored by Senator T’wina Nobles (D-28th Legislative District, Tacoma), sets statewide rules for student privacy protections, regulating the use of AI in school discipline and surveillance.

State Senator T’wina Nobles won election in 2020 in the 28th Legislative District. (Nobles campaign)

The bill prohibits automated decision systems and surveillance systems from being the sole basis for disciplinary decisions, bans the use of student risk scores, and bars the use of biometrics to infer students’ sensitive psychological or personal characteristics. It also prohibits school use of facial recognition technologies, which suffer from accuracy issues, especially for people of color. 

The bill also protects student data by setting limitations on when it can be turned over to law enforcement.

Derek Harrison, the executive director of Black Education Strategy Roundtable, testified at the bill’s hearing, telling the story of a student in Maryland last year who was holding a bag of chips that a gun detection system misidentified as a gun. 

“Even here in Washington State, we see that Black students, students with different needs, we see LGBTQIA+ students, students in foster care, are disproportionately impacted by discipline rates,” Harrison testified. “And so, SB 5956 protects student safety and civil rights.” 

The bill directs the Washington State School Directors Association to develop a model policy addressing human oversight of AI systems and automated decision systems, including strategies to avoid discrimination and harm to students. The development of such a policy is likely to add a small fiscal note to the bill, which was passed out of committee on January 22.

“Across the country, we are seeing more automated systems used in schools with attempts to keep students safe or to predict student behavior, to assign risk scores, to increase surveillance. Often, these systems are without transparency or context or meaningful human judgment,” Nobles said. “Most importantly, this bill affirms a simple principle that decisions that affect our students’ future must be made by people and not by algorithms.”

AI surveillance pricing for retail goods

House Bill 2481, sponsored by Rep. Mary Fosse (D-38 LD, Everett), takes on the increasingly widespread practices of surveillance pricing and surge pricing by grocery stores and other retailers. 

Surveillance pricing is the practice of retailers building up a profile on a customer and then charging what an AI thinks they will pay. Surge pricing is the practice of changing the price of a product due to demand (think of how taking a rideshare home on the night of a popular event costs twice as much, and apply that principle to other goods and services). 

A report from the Federal Trade Commission released last year found that surveillance pricing happens “frequently,” taking into account factors such as location, demographics, cursor movements, and what you left behind in your cart. 

Consumer Reports collaborated with Groundwork Collaborative to conduct an investigative study on Instacart and found the price variation between customers for the same item could range as much as 23%. Instacart has since said it no longer offers the technology that allows this surveillance pricing. 

Derek Kravitz, an investigative reporter and deputy editor at Consumer Reports, testified at the bill’s hearing last week. 

“What we found was not an isolated event in our testing,” Kravitz said. “Every single shopper using the Instacart app was an unwitting participant in the company’s pricing experiments. None of the volunteers we spoke to had any idea Instacart was running these algorithmic tests. When we told them, they repeatedly referred to the practice as unfair and manipulative.”

The bill would also institute a four-year moratorium on the use of electronic shelf labels in retail stores that are 15,000 square feet or larger. These labels aid companies in collecting data about their customers and allow for instantaneous price changes.

Lobbyists from the Washington Food Industry Association and the Northwest Grocery Retail Association oppose the bill. UFCW 3000 is in support. 

“The need for state action is especially clear in this case,” Pincus testified. “As an earlier panelist said, food is not a luxury. Purchases of food are necessary for survival. Treating violations as an unfair, deceptive business practice under the Consumer Protection Act is appropriate.”

This bill is scheduled for executive session on Wednesday. An amendment is expected.

The risk of AI chatbots

One of Governor Bob Ferguson’s legislative priorities this session is Senate Bill 5984 (and companion bill House Bill 2225). Sponsored by Senator Lisa Wellman (D-41st LD, Mercer Island), SB 5984 makes an attempt to regulate AI companion chatbots, and specifically the way minors interact with them. 

Lisa Wellman has represented the 41st Legislative District in the Washington State Senate since 2017. (Washington Legislative Support Services)

“AI holds the promise of amazing benefits, but with numerous instances of damage to humans and a number of child suicides with AI involvement, we feel we need to step in and put industry on notice that it is not okay to put out a product that has so many possibilities for significant damage,” Wellman said at the bill’s hearing. “Their software, the playground we’re talking about, these chatbots, it’s their responsibility that the children have a safe environment to operate in.”

The bill requires an AI chatbot that seems human to give reminders that it is not human whenever a new session is initiated and at least every three hours during use. Chatbot operators would be required to have a protocol for detecting and addressing suicidal ideation and expressions of self-harm, including giving referrals to crisis hotlines and other mental health resources if such behavior is detected. 

In addition, if the operator knows the chatbot is engaging with a minor, then the chatbot is not allowed to generate sexually explicit content or engage in emotionally manipulative techniques.

Attorney Laura Marquez Garrett specializes in representing children and families harmed by tech products. 

“This is not just one or two apps. This is AI chat bots as an unregulated product type, and these are design-based harms,” Garrett testified. “That is why we see sexual abuse harms with some AI products and not others. We see suicide encouragement with some AI products and not others. […] We shouldn’t need a law to stop these kinds of design-based harms, but the AI industry has made clear that we do.”

Pincus from Nexus of Privacy would like to see the current version of the bill improved, including adding privacy protections and extending the ban on emotionally manipulative techniques to everyone, not just kids. He’d also like to see the definition of manipulative techniques expanded to include a chatbot’s use of first person language (referring to itself as “I”), which reinforces the impression that the chatbot is a person.

Sannon doesn’t think this bill takes the right approach to the harms caused by chatbots.

“We’re all concerned about children’s safety online, and I think there are a number of other solutions that we can explore, like device-based protections,” Sannon said. “Sometimes I think the gap ends up being focusing on minors to protect them, but when there are certain protections that just seem good, maybe we just apply them to everyone.” 

Right now the bill doesn’t include explicit age verification; if that were added, such a requirement would bring with it significant privacy concerns as well as potential harms to marginalized groups. 

Companion bill House Bill 2225 just passed through the House executive committee on Friday, but as a substitute version that made several changes, including prohibiting a chatbot from mimicking romantic partnership with minors and requiring notifications that it isn’t human every hour instead of every three hours.

An amendment was introduced that would have removed the bill’s private right of action, but it failed.

A slew of AI bills

The bills discussed above only represent a few of the issues around AI currently being tackled by the legislature. Other bills being considered include regulating AI use in therapy, AI use in health insurance authorization, transparency around AI training data, allowing collective bargaining around use of AI, and regulating high-risk AI systems that make impactful decisions in areas such as housing, employment, and health care. 

But with a short session and a tight purse, lawmakers are likely to be constrained in what they can accomplish this year. Lawmakers may be tempted to weaken the bills to get them over the finish line.

The success of AI regulation may ultimately rely on defending state legislation from federal pushback, plus enforcement. (State of Washington)

The tech industry is lobbying to remove the private right of action from each bill, as already illustrated by the House chatbot bill. Such changes would both reduce the public’s ability to seek accountability for the harmful behavior of tech companies and potentially increase the price tag of bills that would then rely solely on the Attorney General’s office for enforcement. 

Another temptation will be to add a “right to cure” to some of the bills, giving tech companies time to address issues after they’ve arisen. This right to cure also raises the costs for the Attorney General, and advocates say it’s a way for companies to avoid accountability.

Still, as attitudes around AI shift, both legislators and the public seem more inclined to push harder around efforts to regulate AI.

“We’re seeing AI in every single space,” Morales said. “There is this decision and this crossroads that we’re at as people and as local and state governments, where many of us are realizing, and it’s clear our legislators are realizing, we’ve actually got to stand up and be a little more decisive about the future that we really want to build for ourselves, and we cannot really allow [Elon] Musk and a few gazillionaires to make those decisions for us.”

Article Author

Amy Sundberg is the publisher of Notes from the Emerald City, a weekly newsletter on Seattle politics and policy with a particular focus on public safety, police accountability, and the criminal legal system. She also writes science fiction, fantasy, and horror novels. She is particularly fond of Seattle’s parks, where she can often be found walking her little dog.