The fate of a bill that could reshape Silicon Valley’s surging artificial intelligence industry may be decided this week, with everyone from Big Tech to members of Congress jockeying to influence the outcome.
Among the hundreds of bills scheduled for funding decisions Thursday by the California State Assembly’s powerful Appropriations Committee is a controversial plan to regulate the AI industry. The proposed legislation, SB 1047, is polarizing Silicon Valley — and has even prompted two federal lawmakers representing the area to take the unusual step of chiming in on Sacramento business.
“The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” authored by State Sen. Scott Wiener, a San Francisco Democrat, aims to protect the public from AI-generated catastrophes. It would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build, by creating a new regulatory body called the Frontier Model Division.
The bill, which faces opposition from major tech companies like Meta and Google, also proposes the establishment of a publicly funded computing cluster program, CalCompute, aimed at developing large-scale AI models, providing operational expertise and user support, and fostering “equitable” AI innovation.
The proposal has garnered bipartisan support in the state legislature and among California voters, according to recent public surveys. Introduced last February, it passed the state Senate in a 32-1 vote in May, and is now making its way through the state Assembly.
Assembly members on Thursday are expected to decide if the bill will be among those selected for discussion and potential funding — or tabled for later.
But two prominent members of Congress, Ro Khanna and Zoe Lofgren, Democrats who represent Silicon Valley, have expressed concern that the bill could stifle innovation.
Khanna said that while he agrees with the need to regulate AI, it should be done without harming California’s robust tech startup and small-business community.
“As the representative from Silicon Valley, I have been pushing for thoughtful regulation around artificial intelligence to protect workers and address potential risks including misinformation, deepfakes and an increase in wealth disparity,” Khanna said in a statement. “I agree wholeheartedly that there is a need for legislation and appreciate the intention behind SB 1047, but am concerned that the bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses and hurt California’s spirit of innovation.”
Khanna believes the approach to AI regulation should be more surgical.
“We should start by focusing on the most immediate issue at hand, which is the clear labeling and marketing of AI-generated products,” he said. “The second thing is we need much stronger privacy provisions that can make it harder to proliferate and target misinformation. And third we need serious antitrust provisions, so we aren’t beholden to just one or two platforms for information.”
Meanwhile, Lofgren, a ranking member of the House Committee on Science, Space, and Technology, said in a letter to Wiener that the bill is “heavily skewed” toward addressing hypothetical risks “while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts and workforce displacement.”
As lawmakers and AI entrepreneurs turned up the heat on the bill, Wiener held a press conference in Sacramento Wednesday morning to discuss the risks AI poses in developing weapons of mass destruction — a threat Lofgren argues there is little evidence to support.
Wiener said he responded to Lofgren’s letter and is willing to speak to her about possible amendments if the bill makes it through the Appropriations Committee this week.
“I respectfully disagree with Congresswoman Lofgren,” Wiener said. “This is a very reasonable and light touch piece of legislation.”
He said the bill focuses on companies with large-scale development capabilities.
“This bill will not impede AI innovation or hurt start-ups in any way. And it will promote safe innovation,” Wiener said.
However, Christopher Nguyen, CEO of Palo Alto-based AI startup Aitomatic and a member of a Bay Area-based AI alliance, worries the bill could hurt startups that rely on large language AI models such as Meta’s Llama 3.1.
“We depend very much on this thriving ecosystem of open-source AI,” Nguyen said. “If we can’t keep access to state-of-the-art technology accessible, it will immediately impact the startup ecosystem, small businesses, and even the man on the street, in terms of the technologies and services they have access to.”
Llama 3.1, launched by Meta in 2024, is one of several open-source AI models developed by tech giants and used by start-ups and small businesses, among others.
In his written response to Lofgren, Wiener was emphatic that the bill would not harm California’s thriving tech economy.
“(Similar previous) predictions have consistently proven false,” Wiener said, referring to predictions made when the California Consumer Privacy Act of 2018 was signed into law. “California remains the global epicenter of technological innovation.”