California governor signs landmark bill creating safety measures around artificial intelligence

SACRAMENTO – California Gov. Gavin Newsom on Monday signed a law that aims to prevent people from using powerful artificial intelligence models for potentially catastrophic activities, such as building a biological weapon or shutting down a banking system.

The measure comes as Newsom has promoted California as a leader in artificial intelligence regulation and criticized federal inaction, including at a recent conversation with former President Bill Clinton. The new law will establish some of the first-in-the-nation regulations on large-scale artificial intelligence models without harming the state's homegrown industry, Newsom said. Many of the world's artificial intelligence companies are located in California and will have to meet the requirements.

“California has shown that we can establish regulations to protect our communities and at the same time ensure that the growing AI industry continues to thrive. This legislation achieves that balance,” said Newsom in a statement.

The legislation requires artificial intelligence companies to implement and publish safety protocols to prevent their most advanced models from being used to cause major harm. The rules are designed to cover artificial intelligence systems that meet a "frontier" threshold, indicating they run on a large amount of computing power.

The thresholds are based on how many computations the systems perform. Those who drafted the regulations have acknowledged that numerical thresholds are an imperfect starting point for distinguishing today's highest-performing generative artificial intelligence systems from next-generation models that could be even more powerful. Existing systems are largely built by California-headquartered companies such as Anthropic, Google, Meta Platforms and OpenAI.

The legislation defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths. It is designed to guard against the use of artificial intelligence for activities that could cause mass disruption, such as hacking into an electrical grid.

Companies must also report any critical safety incident to the state within 15 days. The law creates whistleblower protections for artificial intelligence workers and establishes a public computing cloud for researchers. It includes a fine of $1 million per violation.

The bill drew opposition from some technology companies, which argued that artificial intelligence legislation should be handled at the federal level. But Anthropic said the regulations are "practical safeguards" that formalize safety practices many companies are already following voluntarily.

"While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation," Jack Clark, Anthropic's co-founder and head of policy, said in a statement.

The signing comes after Newsom vetoed a broader version of the legislation last year, siding with technology companies that said its requirements were too rigid and would have hindered innovation. Instead, Newsom asked a group of industry experts, including artificial intelligence pioneer Fei-Fei Li, to develop recommendations on safeguards for powerful artificial intelligence models.

The new law incorporates recommendations and feedback from Newsom's artificial intelligence expert group and from the industry, supporters said. The legislation also avoids imposing the same level of reporting requirements on startups, so as not to stifle innovation, said state Sen. Scott Wiener of San Francisco, the bill's author.

"With this law, California is stepping up, once again, as a global leader in both innovation and technology safety," Wiener said in a statement.

Newsom's decision comes after President Donald Trump in July announced a plan to eliminate what his administration considers "onerous" regulations in order to accelerate innovation in artificial intelligence and cement the United States' position as a global AI leader. Republicans in Congress tried earlier this year, unsuccessfully, to bar states and localities from regulating artificial intelligence for a decade.

In the absence of stricter federal regulations, states across the country have spent the past several years trying to rein in the technology, addressing everything from deepfakes in elections to artificial intelligence "therapy" chatbots. In California, the Legislature this year passed several bills addressing safety concerns around artificial intelligence chatbots for children and the use of artificial intelligence in the workplace.

California has also been an early adopter of artificial intelligence technologies. The state has deployed generative artificial intelligence tools to detect wildfires and to address highway congestion and road safety, among other things.