LetsAskBinu.com

OpenAI, Microsoft, Google, Anthropic Launch Frontier Model Forum to Promote Safe AI

By Researcher
July 29, 2023
in Cybersecurity


The forum’s goal is to establish “guardrails” to mitigate the risk of AI. Learn about the group’s four core objectives, as well as the criteria for membership.

Image: Artificial intelligence and modern computer technology concept. Credit: putilov_denis/Adobe Stock

OpenAI, Google, Microsoft and Anthropic have announced the formation of the Frontier Model Forum. With this initiative, the group aims to promote the development of safe and responsible artificial intelligence models by identifying best practices and broadly sharing information in areas such as cybersecurity.


What is the Frontier Model Forum’s goal?

The goal of the Frontier Model Forum is to have member companies contribute technical and operational advice to develop a public library of solutions to support industry best practices and standards. The impetus for the forum was the need to establish “appropriate guardrails … to mitigate risk” as the use of AI increases, the member companies said in a statement.

Additionally, the forum says it will “establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks.” The forum will follow best practices in responsible disclosure in areas such as cybersecurity.

SEE: Microsoft Inspire 2023: Keynote Highlights and Top News (TechRepublic)

What are the Frontier Model Forum’s main objectives?

The forum has crafted four core objectives:

1. Advancing AI safety research to promote responsible development of frontier models, minimize risks and enable independent, standardized evaluations of capabilities and safety.

2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations and impact of the technology.

3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.

4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyberthreats.

SEE: OpenAI Is Hiring Researchers to Wrangle ‘Superintelligent’ AI (TechRepublic)

What are the criteria for membership in the Frontier Model Forum?

To become a member of the forum, organizations must meet a set of criteria:

  • They develop and deploy frontier models, as defined by the forum.
  • They demonstrate a strong commitment to frontier model safety.
  • They demonstrate a willingness to advance the forum’s work by supporting and participating in initiatives.

The founding members noted in statements in the announcement that AI has the power to change society, so it behooves them to ensure it does so responsibly through oversight and governance.


“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” said Anna Makanju, vice president of global affairs at OpenAI. Advancing AI safety is “urgent work,” she said, and the forum is “well-positioned” to take quick actions.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,” said Brad Smith, vice chair and president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

SEE: Hiring kit: Prompt engineer (TechRepublic Premium)

Frontier Model Forum’s advisory board

An advisory board will be set up to oversee strategies and priorities, with members drawn from diverse backgrounds. The founding companies will also establish a charter, a governance structure and funding, with a working group and executive board to spearhead these efforts.

The board will collaborate with “civil society and governments” on the design of the forum and discuss ways of working together.

Cooperation and criticism of AI practices and regulation

The Frontier Model Forum announcement comes less than a week after OpenAI, Google, Microsoft, Anthropic, Meta, Amazon and Inflection agreed to the White House’s list of eight AI safety assurances. These recent actions are especially interesting in light of recent measures taken by some of these companies regarding AI practices and regulations.

For instance, in June, Time magazine reported that OpenAI had lobbied the E.U. to water down AI regulation. Further, the formation of the forum comes months after Microsoft laid off its ethics and society team as part of a larger round of layoffs, calling into question its commitment to responsible AI practices.

“The elimination of the team raises concerns about whether Microsoft is committed to integrating its AI principles with product design as the organization looks to scale these AI tools and make them available to its customers across its suite of products and services,” wrote Rich Hein in a March 2023 CMSWire article.

Other AI safety initiatives

This is not the only initiative geared toward promoting the development of responsible and safe AI models. In June, PepsiCo announced it would begin collaborating with the Stanford Institute for Human-Centered Artificial Intelligence to “ensure that AI is implemented responsibly and positively impacts the individual user as well as the broader community.”

The MIT Schwarzman College of Computing has established the AI Policy Forum, which is a global effort to formulate “concrete guidance for governments and companies to address the emerging challenges” of AI such as privacy, fairness, bias, transparency and accountability.

Carnegie Mellon University’s Safe AI Lab was formed to “develop reliable, explainable, verifiable, and good-for-all artificial intelligent learning methods for consequential applications.”






© 2022 Lets Ask Binu All Rights Reserved
