What is California's SB 1047? The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (2023-2024)



An artificial intelligence safety bill, SB 1047, was overwhelmingly approved by the California State Assembly on Wednesday and now heads back to the Senate for a final concurrence vote. If enacted, the "fiercely debated" bill would require tech companies to safety-test AI programs prior to release, and would empower the Attorney General to sue AI companies for any major harm caused by their technologies. The bill has earned cautious support from the likes of Elon Musk and Anthropic, while its opponents include OpenAI and former Speaker of the House Nancy Pelosi.

This bill seeks to establish guardrails for the development of the most powerful AI models, aiming to avert the catastrophic possibilities about which experts have raised alarms. It places a series of obligations on developers of “covered models” and on providers of the cloud computing power used to train such models. The bill also seeks to establish a framework for developing a public cloud-computing cluster that facilitates equitable participation in the development and deployment of responsible AI systems. It is co-sponsored by the Center for AI Safety Action Fund, Economic Security California Action, and Encode Justice. It is supported by a host of tech companies and labor organizations, as well as the Los Angeles Area Chamber of Commerce, and opposed by the Chamber of Progress and a coalition of industry associations.
 
Meanwhile, at the federal level, OpenAI and Anthropic have agreed to work on safety testing with the Commerce Department's AI Safety Institute. As of August 29, the bill's status is 'Unfinished Business' in the Senate. 'Unfinished business' is the portion of the Daily File that contains measures awaiting Senate or Assembly concurrence in amendments taken in the other house; it also contains measures vetoed by the Governor, which remain there for the 60-day period in which the house of origin may attempt to override the veto.

WHAT SB 1047 DOES
More specifically, this bill does the following:

Establishes the Board of Frontier Models in the Government Operations Agency. Allows the Governor to appoint an executive director of the Board, subject to Senate confirmation, to exercise all duties and functions necessary to ensure that the responsibilities of the Board are successfully discharged. Specifies that the Board shall be composed of five members, as follows:

a) A member of the open-source community, appointed by the Governor, subject to Senate confirmation.
b) A member of the artificial intelligence industry, appointed by the Governor, subject to Senate confirmation.
c) A member of academia, appointed by the Governor, subject to Senate confirmation.
d) A member appointed by the Speaker of the Assembly.
e) A member appointed by the Senate Rules Committee.

Establishes the Frontier Model Division in the Government Operations Agency, under the direct supervision of the Board.

Authorizes the Attorney General to bring a civil action for violations of the bill for specified civil penalties and injunctive and declaratory relief.

Requires developers to make public and provide to the Attorney General redacted copies of their safety and security protocols and auditors' reports; upon request, to provide unredacted copies of those documents, which are exempt from the California Public Records Act; to submit annual compliance statements; and to report safety incidents within 72 hours.
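As a concrete illustration of the 72-hour window, here is a minimal sketch, in Python, of computing the latest permissible reporting time from the moment an incident is identified; the helper name and timestamps are hypothetical, and nothing in the bill prescribes any particular tooling:

    from datetime import datetime, timedelta, timezone

    # The 72-hour reporting window comes from the bill; everything else
    # here (names, timestamps) is purely illustrative.
    REPORTING_WINDOW = timedelta(hours=72)

    def reporting_deadline(incident_identified_at: datetime) -> datetime:
        """Latest time by which a safety incident must be reported."""
        return incident_identified_at + REPORTING_WINDOW

    identified = datetime(2024, 8, 29, 9, 30, tzinfo=timezone.utc)
    print(reporting_deadline(identified))  # 2024-09-01 09:30:00+00:00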

Provides whistleblower protections for employees who disclose to the Attorney General or the Labor Commissioner information regarding a developer's noncompliance with the bill or regarding unreasonably dangerous models.

ARGUMENTS IN SUPPORT

SB 1047's protocols only impact models that cost over $100 million in computing power to train; this threshold is baked into the bill's definition of a "covered model" and cannot be altered except by future legislative action.
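To make the threshold concrete, here is a minimal sketch of a covered-model check, assuming only the $100 million cost figure cited above; the function and variable names are hypothetical, and the bill's actual statutory definition is more detailed:

    # Hypothetical sketch of the "covered model" cost test. The $100M figure
    # comes from the bill; all names here are illustrative, not statutory.

    COST_THRESHOLD_USD = 100_000_000  # training-cost floor cited in the bill

    def is_covered_model(training_cost_usd: float) -> bool:
        """Return True if a model's training cost exceeds the bill's threshold."""
        return training_cost_usd > COST_THRESHOLD_USD

    print(is_covered_model(2_500_000))    # False: a startup-scale training run
    print(is_covered_model(150_000_000))  # True: a frontier-scale training run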

The safety and security protocols developers would be required to implement broadly align with national and international guidelines, as well as procedures these developers already claim to implement. 

The bill only requires that developers implement shutdown capabilities for models they themselves control. The open-source community can rest easy knowing the models they download will not contain an immutable kill switch.

The bill does not create a state-sponsored licensing regime for AI.

It does not ban the creation and use of AI above a certain compute threshold.

It does not impose strict liability on developers for downstream harms caused by third parties, nor alter baseline rules of tort and criminal liability.

It does not create exorbitant costs for startups seeking to train large models.
 
ARGUMENTS IN OPPOSITION

The bill forces model developers to engage in speculative fiction about imagined threats of machines run amok, computer models spun out of control, and other nightmare scenarios for which there is no basis in reality. It also forces developers operating in the real world to proactively mitigate every conceivable harm (and many inconceivable ones) caused not just by the model itself, but by third parties who make use of the model.
