The forum’s goal is to establish “guardrails” to mitigate the risk of AI. Learn about the group’s four core objectives, as well as the criteria for membership.
OpenAI, Google, Microsoft and Anthropic have announced the formation of the Frontier Model Forum. With this initiative, the group aims to promote the development of safe and responsible artificial intelligence models by identifying best practices and broadly sharing information in areas such as cybersecurity.
What is the Frontier Model Forum’s goal?
The goal of the Frontier Model Forum is to have member companies contribute technical and operational advice to develop a public library of solutions to support industry best practices and standards. The impetus for the forum was the need to establish “appropriate guardrails … to mitigate risk” as the use of AI increases, the member companies said in a statement.
Additionally, the forum says it will “establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks.” The forum will follow best practices in responsible disclosure in areas such as cybersecurity.
SEE: Microsoft Inspire 2023: Keynote Highlights and Top News (TechRepublic)
What are the Frontier Model Forum’s main objectives?
The forum has crafted four core objectives:
1. Advancing AI safety research to promote responsible development of frontier models, minimize risks and enable independent, standardized evaluations of capabilities and safety.
2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations and impact of the technology.
3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyberthreats.
SEE: OpenAI Is Hiring Researchers to Wrangle ‘Superintelligent’ AI (TechRepublic)
What are the criteria for membership in the Frontier Model Forum?
To become a member of the forum, organizations must meet a set of criteria:
- They develop and deploy predefined frontier models.
- They demonstrate a strong commitment to frontier model safety.
- They demonstrate a willingness to advance the forum’s work by supporting and participating in initiatives.
The founding members noted in statements in the announcement that AI has the power to change society, so it behooves them to ensure it does so responsibly through oversight and governance.
“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” said Anna Makanju, vice president of global affairs at OpenAI. Advancing AI safety is “urgent work,” she said, and the forum is “well-positioned” to take quick actions.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,” said Brad Smith, vice chair and president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
SEE: Hiring kit: Prompt engineer (TechRepublic Premium)
Frontier Model Forum’s advisory board
An advisory board will be set up to oversee strategies and priorities, with members coming from diverse backgrounds. The founding companies will also establish a charter, governance and funding with a working group and executive board to spearhead these efforts.
The board will collaborate with “civil society and governments” on the design of the forum and discuss ways of working together.
Cooperation and criticism of AI practices and regulation
The Frontier Model Forum announcement comes less than a week after OpenAI, Google, Microsoft, Anthropic, Meta, Amazon and Inflection agreed to the White House’s list of eight AI safety assurances. These commitments are especially notable in light of measures some of these companies have recently taken regarding AI practices and regulation.
For instance, in June, Time magazine reported that OpenAI lobbied the E.U. to water down AI regulation. Further, the formation of the forum comes months after Microsoft laid off its ethics and society team as part of a larger round of layoffs, calling into question its commitment to responsible AI practices.
“The elimination of the team raises concerns about whether Microsoft is committed to integrating its AI principles with product design as the organization looks to scale these AI tools and make them available to its customers across its suite of products and services,” wrote Rich Hein in a March 2023 CMSWire article.
Other AI safety initiatives
This is not the only initiative geared toward promoting the development of responsible and safe AI models. In June, PepsiCo announced it would begin collaborating with the Stanford Institute for Human-Centered Artificial Intelligence to “ensure that AI is implemented responsibly and positively impacts the individual user as well as the broader community.”
The MIT Schwarzman College of Computing has established the AI Policy Forum, which is a global effort to formulate “concrete guidance for governments and companies to address the emerging challenges” of AI such as privacy, fairness, bias, transparency and accountability.
Carnegie Mellon University’s Safe AI Lab was formed to “develop reliable, explainable, verifiable, and good-for-all artificial intelligent learning methods for consequential applications.”