Brad Smith, the president of Big Tech firm Microsoft, has called on governments to "move faster" and corporations to "step up" amid a massive acceleration in artificial intelligence (AI) development.
Speaking at a May 25 panel in front of United States lawmakers in Washington, D.C., Smith made the call as he proposed regulations that could mitigate the potential risks of AI, according to a report from the New York Times.
Microsoft has called for companies to implement "safety brakes" for AI systems that control critical infrastructure and to develop a broader legal and regulatory framework for AI, among other measures.
AI may be the most consequential technology advance of our lifetime. Today we announced a 5-point blueprint for Governing AI. It addresses current and emerging issues, brings the public and private sector together, and ensures this tool serves all society. https://t.co/zYektkQlZy
— Brad Smith (@BradSmi) May 25, 2023
Smith is yet another industry heavyweight to raise the alarm over the rapid development of AI technology. The breakneck pace of advancements in AI has already given rise to a number of harmful developments, including threats to privacy, job losses by way of automation, and extremely convincing “deep fake” videos that routinely spread scams and disinformation across social media.
The Microsoft executive said that governments shouldn’t bear the full brunt of action and that companies also need to work to mitigate the risks of unfettered AI development.
Smith’s comments come even as Microsoft pursues its own AI ambitions, reportedly developing a series of new specialized chips that would help power OpenAI’s viral chatbot, ChatGPT.
Still, Smith argued that Microsoft wasn’t trying to palm off responsibility, as it pledged to carry out its own AI-related safeguarding regardless of whether the government compelled it, stating:
“There is not an iota of abdication of responsibility.”
On May 16, OpenAI founder and CEO Sam Altman testified before Congress, where he advocated for the establishment of a federal oversight agency that would grant licenses to AI companies.
Senator Lindsey Graham asks the witnesses at the hearing on artificial intelligence regulation if there should be an agency to license and oversee AI tools. All say yes, but IBM Chief Privacy & Trust Officer Christina Montgomery has stipulations: pic.twitter.com/UD7R8N7s23
— Yahoo Finance (@YahooFinance) May 16, 2023
Notably, Smith endorsed Altman’s idea of handing out licenses to developers, saying that “high risk” AI services and development should only be undertaken in licensed AI data centers.
Ever since ChatGPT first launched in November last year, there have been widespread calls for more stringent oversight of AI, with some organizations even suggesting that development of the technology should be brought to a temporary standstill.
On March 22, the Future of Life Institute published an open letter calling for industry leaders to “pause” development of AI. It was signed by a number of major tech industry leaders, including Tesla CEO Elon Musk and Apple cofounder Steve Wozniak. At the time of publication, the letter had attracted more than 31,000 signatures.