A law to regulate AI is currently being drafted in California. It is broadly similar to the European AI Act, but differs in how it assigns liability and in its requirement for a kill switch. According to Stanford professor Fei-Fei Li, both provisions could severely hinder the development of AI in California and in the United States as a whole. She is also concerned that the law fails to address real existing problems, such as how society wants to deal with AI.
Li explains her concerns in a guest article in Fortune magazine. There, the "Godmother of AI" writes that SB-1047, as the bill is known for short, will hamper developers and stifle innovation, because providers of AI models will be held liable whenever misuse occurs. According to Li, it is impossible for a developer to rule out every conceivable scenario of misuse.
A kill switch for AI kills the AI
Her second concern relates to the requirement to build a kill switch into models above a certain size. Under the bill, the kill switch is meant to ensure safety: it can be used to shut the system down completely. Li argues that this option will make developers more hesitant and less willing to contribute if they know the programs they are working on can be switched off. The open-source community is particularly affected.
And if the open-source community becomes less active, science will also have less material to work with. Without that access, it will be impossible to train the "next generation of AI leaders."
Beyond what she believes is put at risk by the regulation, Li also writes about what she sees as missing: "This law does not address the potential dangers of AI, such as bias and deepfakes." Li offers her expertise to the responsible senator, Scott Wiener.
California Senate Bill 1047 bears the full title "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". Li is not the first critic of the bill; others in the AI industry have already voiced their concerns, but Li is now unusually direct in her statements. As with AI regulation in general, the most frequent accusation so far has been that regulation stifles innovation; in the eyes of some, this holds for regulation as such. The compliance costs have also drawn criticism. Like the European AI Act, the bill provides for obligations such as risk assessments and reports on security incidents.
Critics worry that all of this could slow down growth in the AI industry.
(EMW)