AI giants collaborate to establish the Frontier Model Forum industry organization; OpenAI announces a $10 million "AI Safety Fund"
Artificial intelligence company OpenAI announced in a blog post yesterday that a $10 million fund dedicated to AI safety research is being established through the Frontier Model Forum, an industry body it co-founded. The initiative will be led by Chris Meserole, formerly director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, who will serve as the forum's inaugural executive director. The forum aims to work with governments, academia, and the public to share industry insights and promote the safe development, application, and advancement of AI.
What is the Frontier Model Forum?
As one of the world's most highly valued startups, OpenAI announced in a joint statement in July that it was partnering with major companies including Anthropic, Google, and Microsoft to establish an industry organization focused on the safe development of frontier artificial intelligence models, known as the Frontier Model Forum (FMF):
Companies creating AI technology have a responsibility to ensure its safety. Going forward, the FMF will focus on knowledge sharing and its application, developing public databases of technical evaluations and safety standards for AI models, building cross-organization connections and collaborations with government agencies and academia, and regularly disclosing information on risks and safety mechanisms.
FMF Updates
OpenAI officially announced several updates about the industry organization FMF in a tweet yesterday, summarized below.
Today, we are announcing Chris Meserole as the Executive Director of the Frontier Model Forum, and the creation of a new AI Safety Fund, a $10 million initiative to promote research in the field of AI safety. https://t.co/a20tU1zXke
— OpenAI (@OpenAI) October 25, 2023
Appointment of Executive Director Chris Meserole
First, the forum welcomed its inaugural executive director, Chris Meserole, who previously served as director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution and brings extensive experience in the governance and security of emerging technologies:
The most powerful AI models hold tremendous promise for society, but to realize their potential we need a better understanding of how to develop and evaluate them safely. I am excited to take on this challenge together with the Frontier Model Forum.
Establishment of AI Safety Fund
Furthermore, the forum has created a $10 million AI Safety Fund to support researchers at academic institutions, research organizations, and startups around the world in developing and testing AI systems, and in contributing to standards for identifying and assessing their potential risks.
It is understood that the fund is backed by forum members Google and Microsoft, along with several philanthropic partners, and will open a call for funding proposals in the coming months.
Establishment of Common Safety Standards
In addition, the forum is establishing a shared set of definitions for terminology and processes so that participants can discuss AI safety and governance on a common footing.
The forum is also committed to developing a disclosure process that allows AI labs and companies to share up-to-date information on vulnerabilities, emerging risks, and mitigation measures.
Next Steps
Finally, the FMF plans to establish an advisory board, representing a diversity of perspectives and expertise, to help guide its strategy and priorities.
Prior to this, SEC Chair Gary Gensler and the United Nations had also expressed serious concerns about the rapid development of AI technology.