Article reprint source: AIGC Open Community
On October 26, Microsoft, OpenAI, Google, and Anthropic announced in a joint statement on their official websites that Chris Meserole has been appointed executive director of the "Frontier Model Forum," and that the companies are committing $10 million to an AI Safety Fund to improve the safety of generative AI products such as ChatGPT and Bard.
The statement reflects the four companies' shared effort to develop safe, reliable, and trustworthy generative AI. It also forms part of the AI safety commitments they signed with government authorities, which are critical to the healthy development of generative AI worldwide.
Introduction to the Frontier Model Forum
The Frontier Model Forum was jointly founded by Microsoft, OpenAI, Google and Anthropic on July 26 this year. Its main responsibility is to ensure the safe and responsible development of cutting-edge AI models.
The forum has four core objectives:
1) Advance AI safety research, promote the responsible development of cutting-edge models, minimize risks, and enable independent, standardized assessments of AI capabilities and safety.
2) Identify best practices for the responsible development and deployment of cutting-edge models to help the public understand the nature, capabilities, limitations, and impacts of the technology.
3) Work with policymakers, academics, civil society and companies to share knowledge on trust and security risks.
4) Strongly support the development of AI applications that can help address societal challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and countering cyber threats.
Chris Meserole's Responsibilities
Chris has deep expertise in technology policy and has long been committed to the governance and security of emerging technologies and their future applications. Most recently, Chris served as the director of the Brookings Institution’s AI and Emerging Technologies Initiative.
In the Frontier Model Forum, Chris will perform the following duties:
1) Advance AI safety research, promote the development of cutting-edge models, and minimize potential risks.
2) Identify safety best practices for frontier models.
3) Share knowledge with policymakers, academics, civil society, and other stakeholders to promote responsible AI development.
4) Support efforts to use AI to address society's greatest challenges.
Chris said, "The most powerful AI models hold great promise for society, but to realize their potential, we need to better understand how to safely develop and evaluate them. I am pleased to work with the Frontier Model Forum on this challenge."
AI Safety Fund
Generative AI has advanced rapidly over the past year, but that progress has been accompanied by significant potential safety risks. To address them, Microsoft, OpenAI, Google, and Anthropic have established an AI Safety Fund to support independent researchers at academic institutions, research institutes, and startups around the world, with the goal of building a healthy and safe AI safety ecosystem.
Earlier this year, Microsoft, OpenAI, Google, and Anthropic signed government-backed AI safety commitments, which include a pledge to facilitate third-party discovery and reporting of vulnerabilities in the four companies' AI systems.
The primary focus of the AI Safety Fund is to support the development of new model evaluation techniques to aid in the development and testing of cutting-edge AI models.
Increased funding for AI safety research will help raise safety and security standards and provide industry, developers, and other stakeholders with effective controls for addressing the challenges posed by AI systems.