After the European Parliament's "AI Act," the state of California may adopt its own bill, "SB 1047," introducing stricter regulation of artificial intelligence. It is a decision that doesn't sit well with AI companies.
Artificial intelligence is evolving rapidly, serving as personal assistants, image generators, or even tools for scams. Its widespread adoption across various industries is driving California to consider regulations similar to those seen in Europe. However, not everyone is pleased with this move.
Law vs. Lobbying
On August 15, 2024, California's Assembly and Senate passed the "SB 1047" bill. Now awaiting the governor's signature, the bill aims to implement safeguards against AI's rapid advancements. Initially, the legislation included provisions such as a "kill switch" for certain AI systems and stringent safety tests to assess the potential risks AI poses to humanity. These measures, however, were not well received by industry leaders, who would be held accountable for their creations.
The bill has since been revised, removing the ability to hold developers legally responsible for major security incidents. It will, however, still address concerns over deepfakes that threaten democratic integrity.
Despite these revisions, the softened version remains contentious within the industry, which fears that California may fall behind other countries in AI development. The Verge spoke with several entrepreneurs, including Andrew Ng, founder of Google Brain, who advocates for distinguishing between technology and its applications. "When someone trains a large language model… that's technology. When someone uses it to generate political deepfakes or non-consensual deepfake porn, those are applications," he explained.
"The risk of AI is not a function of the technology; it depends on the application."
Andrew Ng
A justification that would absolve designers of responsibility for malicious acts committed with their "tools."
A European Echo
The state of California is known for its strong stance on regulating technology, with privacy rules similar to Europe's GDPR. Even in its more flexible version, the "SB 1047" bill would push companies to take the risks of their creations more seriously.
In Europe, a similar law, the "AI Act," requires AI companies to be transparent about their training models, their handling of personal data, and their use of copyrighted material. The law favors users but is unpopular with industry leaders, some of whom have delayed launching their products in Europe, even at the risk of alienating those users.