1st-in-the-nation A.I. safety bill passes CA Senate, heads to Governor's desk

A new A.I. safety bill is on its way to the Governor's desk. Senate Bill 1047 just passed the California State Senate and, if signed, would make California the first state in the nation to set safety regulations for artificial intelligence.

The tech experts I spoke with say the bill is probably a step in the right direction, but they say A.I. innovation will continue to grow whether the Governor signs it or not.

"The bill is very specific about safety and security protocols for the A.I. specifically talking about something called shutting down techniques or the kill switch," said Ahmed Banafa, San Jose State Univ. Engineering Professor. 

Senate Bill 1047, the California A.I. Safety bill, passed the Senate 29-9 on Thursday. The bill aims to safeguard society from A.I. being used to conduct cyberattacks on critical infrastructure, create biological or nuclear weapons, or carry out automated crime.

"There should be some kind of mechanism within A.I. itself, where they can shut it down. The companies have to implement something that is like the kill switch concept. We hear about it all the time," said Banafa. 

State Senator Scott Wiener authored the bill and says a recent survey found that 70% of A.I. researchers believe safety measures should be a priority, and 73% say they're extremely concerned A.I. technology will fall into the hands of dangerous people. Still, political leaders from Silicon Valley, including Nancy Pelosi and Ro Khanna, as well as Meta and OpenAI, oppose the bill, arguing that new regulations may stifle A.I. innovation.

"While I do think their concern is certainly warranted, I think that it is unlikely to play out practically in a lot of negative ways because it’s going to play out with larger companies that will be able to handle that regulation," said Joseph Thacker, an A.I. Engineer with AppOmni. 

If the bill becomes state law, only developers of large-scale A.I. models that cost more than $100 million in computing power to train will have to follow the regulations. The law also won't carry criminal penalties for companies, a provision A.I. engineer Joseph Thacker supports.

"I think if a model tells someone how to make a bomb, and they make a bomb, I think we need to prosecute the person that made the bomb. But I also think it makes sense to try to prevent these models from giving away information that could harm people as well," said Thacker. 

Both Banafa and Thacker say that if this bill is signed into law, it will be a starting point for A.I. safety, and that as the technology evolves, so will the laws. Governor Newsom has not said publicly whether he will sign the bill.
