California's SB 1047 AI safety bill sparks controversy; Musk: You may be unhappy, but I support it
California's SB 1047 AI safety bill
California is advancing a regulatory bill targeting artificial intelligence known as SB 1047. The bill was introduced by State Senator Scott Wiener in February this year and aims to establish safety standards for the development of large-scale AI systems, so as to prevent those models from causing critical harms, such as facilitating the creation of biological weapons capable of mass casualties or causing economic losses exceeding $500 million.
AI companies led by OpenAI have publicly opposed the bill, sending a letter to State Senator Scott Wiener's office on Wednesday arguing that it would harm innovation in the artificial intelligence industry and that regulation of this kind should be set by the federal government rather than by individual states.
Critics also argue that the bill's requirement for companies to submit detailed information about their AI models to the state government would hinder innovation and deter small open-source developers from starting new ventures for fear of being sued. Although Scott Wiener amended the bill to remove the criminal liability it originally imposed on tech companies, the concession failed to win over critics such as OpenAI.
Musk shares views with Vitalik
The bill has also prompted much discussion in the community. Musk posted on X today that, all things considered, he believes California should pass the SB 1047 AI safety bill. Musk said:
For over 20 years, I have been an advocate for AI regulation, just as we regulate any product or technology that poses a potential risk to the public.
Ethereum co-founder Vitalik Buterin replied that he agrees with Musk, but added that liability under the bill is not a matter of "if a model derived from your AI causes serious harm, you get fined"; rather, it turns on whether you met the standard of "reasonable care".
I agree that the standard of 'reasonable care' is indeed vague, as it is difficult to establish an unambiguous standard at this stage. But this does not mean you bear unlimited liability for nearly everything downstream users do.
Vitalik added that one of the reasons he likes the bill is that it introduces the category of "critical harm" and clearly distinguishes such harm from other adverse events.
I think this is very important, especially in an era when many people use terms like 'verbal violence,' stretching words originally meant for extreme harm to cover all adverse events (the definition of 'safety' has been expanded in the same way). The inflation of language cannot be completely avoided, since it is a result of basic game theory, but we can compensate by re-anchoring the distinction between 'truly serious things' and 'moderately bad things.'