Leading AI researchers, including Turing Award winners and a Nobel laureate, are urging AI companies and governments to commit at least one-third of their AI research and development funding to ensuring the safe and ethical use of AI systems.
In a joint letter released just ahead of the international AI Safety Summit at Bletchley Park in the UK, the experts outlined measures to address AI-related risks. They also call on governments to hold companies legally accountable for foreseeable harms caused by their advanced AI systems.
“AI systems could rapidly come to outperform humans in an increasing number of tasks… They threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society,” wrote prominent figures in the AI field, including Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, and Yuval Noah Harari.
“They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance.”
Urgent Actions Required for Safe and Ethical AI Development
The urgency of the plea stems from the experts' view that regulation has not kept pace with the rapid progress of AI technology.
The experts argue that simply making AI systems more capable will not be enough; development must also confront a set of hard open problems. These include oversight and honesty, since more capable systems are better able to exploit weaknesses in testing, along with robustness, interpretability, risk evaluation, and emerging challenges.
Because future AI systems may fail in ways no one has anticipated, the letter urges major tech companies and public funders to devote a substantial share of their AI research and development budgets to safety and ethics, not only to expanding AI capabilities.
According to the letter, the absence of AI governance may tempt companies and nations to prioritize capabilities over safety or delegate important societal functions to AI systems with limited human oversight.
Effective regulation, the experts argue, requires governments to have comprehensive insight into AI development. For the most capable AI systems, they call for additional safeguards: licensing their development, the ability to pause development when concerning capabilities emerge, access controls, and robust information security.
Although some AI companies have raised concerns about compliance costs and liability risks, proponents argue that robust regulation is essential to mitigate the risks of unchecked AI development.
“AI may be the technology that shapes this century. While AI capabilities are advancing rapidly, progress in safety and governance is lagging behind. To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it,” the letter concluded.