California State Senator Scott Wiener and OpenAI are locked in a heated debate over Senate Bill 1047 (SB 1047), proposed legislation that seeks to regulate the development and deployment of artificial intelligence (AI) models.
Wiener introduced the bill in February; it would require AI companies to conduct rigorous safety evaluations of their models before releasing them to the public.
Despite OpenAI’s high-profile objection, Wiener contends that its criticisms are baseless. In an August 21st press release, he asserted that OpenAI’s letter “doesn’t criticize a single provision of the bill,” reaffirming his stance that the legislation is crucial for ensuring public safety and national security.
OpenAI’s Argument: Does SB 1047 Stifle Innovation?
ChatGPT developer OpenAI has expressed its opposition to SB 1047 in a letter addressed to Wiener and California Governor Gavin Newsom.
OpenAI’s chief strategy officer Jason Kwon warned in the letter that the bill could stifle innovation and drive talent out of California, a state that has long been a global leader in the tech industry, according to a Bloomberg report.
Kwon argued that the AI sector is still in its early stages and that overly restrictive state regulations could hinder its growth. He suggested that federal legislation, rather than state laws, would be more appropriate for governing AI development.
Wiener, however, dismissed these concerns as a “tired argument,” likening them to the objections the tech industry raised when California passed its data privacy law, warnings that ultimately did not materialize as feared.
He acknowledged that “ideally” Congress should handle AI regulation but expressed skepticism given Congress’s previous lack of engagement with data privacy laws.
Wiener emphasized the necessity of state-level action, asserting that, just as with data privacy, given “Congress’s lack of action, Californians would have no protection whatsoever.”
SB 1047: Does AI Pose A Risk?
Wiener concluded his defense of SB 1047 by asserting that the bill is a “highly reasonable” measure designed to ensure that large AI labs, such as OpenAI, adhere to their commitments to test their models against catastrophic safety risks.
He highlighted the extensive collaboration with open-source advocates, Anthropic, and others in refining the bill, emphasizing that it is “well-calibrated” and “deserves to be enacted.”
Supporting Wiener’s stance, a poll conducted by the Artificial Intelligence Policy Institute (AIPI) from August 4th to 5th reveals strong public backing for the bill.
Among 1,000 Californian voters surveyed, 70% supported the bill, citing concerns that powerful AI models could be misused for cyber-attacks or for developing biological weapons.
Only 16% sided with opponents, who argue that the bill could hinder AI innovation in California and accuse its supporters of fear-mongering.
Additionally, 23% of voters felt that the bill should be moderated to avoid a potential chilling effect on AI innovation, reflecting some concern over the bill’s strictness.
According to Wiener, the bill’s provisions are crucial for safeguarding the public from potential dangers posed by advanced AI systems. He found OpenAI’s opposition particularly perplexing given the company’s prior commitments to safety evaluations.