OpenAI, the renowned artificial intelligence research laboratory, has voiced strong opposition to California’s Senate Bill 1047 (SB 1047), a proposed regulation that aims to establish safety standards for AI systems. The company argues that the bill, if passed, could significantly hinder innovation and progress in the field of AI, potentially driving companies out of the state.
SB 1047, introduced by California State Sen. Scott Wiener in February 2024, targets AI models that cost more than $100 million to train. The bill mandates safety protocols to prevent covered models from causing “critical harms” and requires “full shutdown” capabilities for these systems. While supporters say the legislation is necessary to ensure responsible AI development, OpenAI and other tech giants see it as an overreach that could stifle growth.
Bloomberg reports that Jason Kwon, OpenAI’s chief strategy officer, warns that the bill’s provisions could slow progress and force companies to move their AI research and development out of California. He argues that AI regulation touching on national security is best handled at the federal level rather than through state-specific legislation. Kwon also points to OpenAI’s earlier lobbying against similar legislation proposed by the European Union.
“The AI revolution is only just beginning, and California’s unique status as the global leader in AI is fueling the state’s economic dynamism,” Kwon wrote in OpenAI’s letter opposing the bill. “SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”
Business Insider reports that OpenAI is not alone in its opposition to SB 1047. Meta and Anthropic, two other prominent players in the AI industry, are also lobbying against the bill, as Yahoo points out. Meta warns that the legislation could discourage the open-source movement and expose developers to significant legal liability. Anthropic advocates a more balanced approach to regulation, cautioning that overly restrictive measures could stymie progress in the field.
UCLA adjunct professor Arun Rao tweeted, “It’s final – every major AI lab in California and most academics at the major Californian universities want @Scott_Wiener’s SB 1047 to be tabled. Nancy Pelosi and most of the CA Congressional delegation agree. OpenAI asserts that open source LLMs and likely even closed ones will leave CA if this bill is passed, devastating the AI economy (and possibly all of tech, which relies on AI).”
Supporters of the bill, including Sen. Wiener, defend it as a “highly reasonable” measure that aligns with the commitments AI labs have already made to develop their technologies safely and responsibly. Critics, including venture capitalists and executives, counter that the bill is anti-competitive and could disproportionately burden smaller companies and startups. Some smaller founders express apprehension about the legislation’s consequences while acknowledging that it could bring greater transparency and equity to AI research.
SB 1047 has already passed the state Senate and now awaits a final vote in the California Assembly. The outcome of this vote could have far-reaching implications for the future of AI development and applications, not only in California but also across the United States and beyond. As the debate surrounding AI regulation continues, it remains to be seen how California’s approach will compare to similar measures proposed or enacted in other countries, such as the European Union’s AI Act.
Image credit: Wikimedia