Last week, US President Joe Biden issued a comprehensive executive order setting AI safety standards to protect citizens, government agencies, and companies. While the order establishes six new standards for the safe and ethical use of AI, some industry experts have raised concerns that it could prevent companies from developing high-end models.
Adam Struck, co-founder of Struck Capital and an AI investor, said the order shows how seriously the administration takes AI's potential to reshape every industry. However, he noted that it offers little direction for the open-source community, which could make compliance challenging for companies and developers.
While the order will be implemented under the guidance of people experienced in AI and AI governance, some fear that smaller companies will struggle with requirements designed for larger ones.
Nanotronics CEO and co-founder Matthew Putman said that regulatory frameworks are needed for consumer safety and the ethical development of AI. He added that AI's "doomsday scenario" potential has been exaggerated, given the technology's positive effects in the short term.
Although the order is still new, the US National Institute of Standards and Technology (NIST) and the Department of Commerce have already begun recruiting members for the newly formed Artificial Intelligence (AI) Safety Institute Consortium.
How do you think this regulation may affect AI development? Share your comments with us.