Thursday, September 21, 2023
LetsAskBinu.com
  • Home
  • Cybersecurity
  • Cyber Threats
  • Hacking
  • Protection
  • Networking
  • Malware
  • Fintech
  • Internet Of Things
No Result
View All Result
LetsAskBinu.com
No Result
View All Result
Home Cybersecurity

Google Introduces SAIF, a Framework for Secure AI Development and Use

Researcher by Researcher
June 11, 2023
in Cybersecurity
0
ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence
189
SHARES
1.5k
VIEWS
Share on FacebookShare on Twitter


The Google SAIF (Secure AI Framework) is designed to provide a security framework or ecosystem for the development, use and protection of AI systems.

All new technologies bring new opportunities, threats, and risks. As business concentrates on harnessing opportunities, threats and risks can be overlooked. With AI, this could be disastrous for business, business customers, and people in general. SAIF offers six core elements to ensure maximum security in AI.

Expand strong security foundations to the AI ecosystem
Many existing security controls can be expanded and/or focused on AI risks. A simple example is protection against injection techniques, such as SQL injection. “Organizations can adapt mitigations, such as input sanitization and limiting, to help better defend against prompt injection style attacks,” suggests SAIF.

Traditional security controls will often be relevant to AI defense but may need to be strengthened or expanded. Data governance and protection becomes critical to protect the integrity of the learning data used by AI systems. The old concept of ‘rubbish in, rubbish out’ is magnified manyfold by AI, but made critical where business and people decisions are based on that rubbish.

Extend detection and response to bring AI into an organization’s threat universe
Threat intelligence must now also include an understanding and awareness of threats relevant to organizations’ own AI usage, including the consequences of a breach. If a data pool is poisoned without knowledge of that poisoning, AI outputs will be adversely and possibly invisibly affected. 

It will be necessary to monitor AI output to detect algorithmic errors and adversarial input. “Organizations that use AI systems must have a plan for detecting and responding to security incidents and mitigate the risks of AI systems making harmful or biased decisions,” says Google.

Automate defenses to keep pace with existing and new threats
This is the most common advice used in the face of AI-based attacks – automate defenses with AI to counter the increasing speed and magnitude of adversarial AI-based attacks. But Google warns that humans must be kept in the loop for important decisions, such as determining what constitutes a threat and how to respond to it.

The human element is important for both detection and response. “This is because AI systems can be biased or make mistakes, and human oversight is necessary to ensure that AI systems are used ethically and responsibly,” says Google.

Advertisement. Scroll to continue reading.

AI-based automation goes beyond the automated detection of threats and can also be used to decrease the workload and increase the efficiency of the security team. Secure scripts could be generated through no-code systems to control and automate security processes. Reverse engineering a malicious binary could be automated, and the subsequent automatic generation of a Yara rule could look for evidence of related activity.

Harmonize platform level controls to ensure consistent security across the organization
As the use of AI grows, it is important to have periodic reviews to identify and mitigate associated risks. This should include the AI models used and the data used to train them, together with the security measures implemented, and the AI security risk awareness and training for all employees.

Reduce overlapping frameworks for security and compliance controls to help reduce fragmentation. Fragmentation increases complexity, costs, and inefficiencies. Reducing fragmentation will, suggests Google, “provide a ‘right fit’ approach to controls to mitigate risk.”

Adapt controls to adjust mitigations and create faster feedback loops for AI deployment
This involves continuously testing and evolving systems in use, including techniques such as reinforcement learning based on incidents and user feedback. The training data needs to be monitored and updated as necessary, and the models fine-tuned to respond to attacks.

It involves being continuously aware of new attacks involving prompt injection, data poisoning and evasion attacks. “By staying up to date on the latest attack methods, organizations can take steps to mitigate these risks,” says Google. Red teaming can also help organizations identify and mitigate security risks before they can be exploited by malicious actors.

An effective feedback loop is required to ensure that everything learned is put to good use, whether that is to improve defenses or improve the AI model itself.

Contextualize AI system risks in surrounding business processes
This involves a thorough understanding of how AI will be used within business processes, and requires a complete inventory of AI models in use. Assess their risk profile based on the specific use cases, data sensitivity, and shared responsibility when leveraging third-party solutions and services.

“Implement data privacy, cyber risk, and third-party risk policies, protocols and controls throughout the ML model lifecycle to guide the model development, implementation, monitoring, and validation,” says Google.

Throughout this, it is important to assemble a strong AI security team. “AI systems are often complex and opaque, have a large number of moving parts, rely on large amounts of data, are resource intensive, can be used to apply judgment-based decisions, and can generate novel content that may be offensive, harmful, or can perpetuate stereotypes and social biases,” warns Google.

For many organizations this will expand the necessary expertise available to the security team, including business use case owners, security, cloud engineering, risk and audit teams, privacy, legal, data science, development, and responsible AI and ethics.

Google has based its SAIF framework on the experience of 10-years in the development and use of AI in its own products. The company hopes that making public its own experience in AI will lay the groundwork for secure AI – just as its BeyondCorp access model led to the zero trust principles which are industry standard today.

Related: Insider Q&A: Artificial Intelligence and Cybersecurity In Military Tech

Related: ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence

Related: Harris to Meet With CEOs About Artificial Intelligence Risks

Related: Cyber Insights 2023 | Artificial Intelligence



Source link

Related articles

Sentra Raises $30 Million for DSPM Technology

Northern Ireland’s Top Police Officer Apologizes for ‘Industrial Scale’ Data Breach

August 13, 2023
Minimizing Risk Through Proactive Apple Device Management: Addigy

Minimizing Risk Through Proactive Apple Device Management: Addigy

August 12, 2023
Tags: DevelopmentFrameworkGoogleIntroducesSAIFsecure
Share76Tweet47

Related Posts

Sentra Raises $30 Million for DSPM Technology

Northern Ireland’s Top Police Officer Apologizes for ‘Industrial Scale’ Data Breach

August 13, 2023
0

Northern Ireland’s top police officer apologized Thursday for what he described as an “industrial scale” data breach in which the...

Minimizing Risk Through Proactive Apple Device Management: Addigy

Minimizing Risk Through Proactive Apple Device Management: Addigy

August 12, 2023
0

Enterprise IT teams are struggling to cope with three major forces of change: the evolving regulatory environment, a globally dispersed...

Decipher Podcast: Katelyn Bowden and TC Johnson

Decipher Podcast: Katelyn Bowden and TC Johnson

August 12, 2023
0

Veilid main site: https://veilid.com/ Cult of the Dead Cow site: https://cultdeadcow.com/ Source link

In Other News: Government Use of Spyware, New Industrial Security Tools, Japan Router Hack 

In Other News: macOS Security Reports, Keyboard Spying, VPN Vulnerabilities

August 12, 2023
0

SecurityWeek is publishing a weekly cybersecurity roundup that provides a concise compilation of noteworthy stories that might have slipped under...

Used Correctly, Generative AI is a Boon for Cybersecurity

Used Correctly, Generative AI is a Boon for Cybersecurity

August 12, 2023
0

Adobe stock, by Busra At the Black Hat kickoff keynote on Wednesday, Jeff Moss (AKA Dark Tangent), the founder of...

Load More
  • Trending
  • Comments
  • Latest
This Week in Fintech: TFT Bi-Weekly News Roundup 08/02

This Week in Fintech: TFT Bi-Weekly News Roundup 15/03

March 15, 2022
Supply chain efficiency starts with securing port operations

Supply chain efficiency starts with securing port operations

March 15, 2022
Microsoft to Block Macros by Default in Office Apps

Qakbot Email Thread Hijacking Attacks Drop Multiple Payloads

March 15, 2022
QNAP Escalation Vulnerability Let Attackers Gain Administrator Privileges

QNAP Escalation Vulnerability Let Attackers Gain Administrator Privileges

March 15, 2022
Beware! Facebook accounts being hijacked via Messenger prize phishing chats

Beware! Facebook accounts being hijacked via Messenger prize phishing chats

0
Shoulder surfing: Watch out for eagle‑eyed snoopers peeking at your phone

Shoulder surfing: Watch out for eagle‑eyed snoopers peeking at your phone

0
Remote work causing security issues for system and IT administrators

Remote work causing security issues for system and IT administrators

0
Elementor WordPress plugin has a gaping security hole – update now – Naked Security

Elementor WordPress plugin has a gaping security hole – update now – Naked Security

0
LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

September 21, 2023
EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

September 21, 2023
Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

September 21, 2023
Intel Reveals New 288-Core Sierra Forest CPU, Core Ultra Processors at Intel Innovation 2023

Intel Reveals New 288-Core Sierra Forest CPU, Core Ultra Processors at Intel Innovation 2023

September 21, 2023

Recent Posts

LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

September 21, 2023
EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

September 21, 2023
Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

September 21, 2023

Categories

  • Cyber Threats
  • Cybersecurity
  • Fintech
  • Hacking
  • Internet Of Things
  • LetsAskBinuBlogs
  • Malware
  • Networking
  • Protection

Tags

Access attack Attacks banking BiWeekly bug Cisco cloud code critical Cyber Cybersecurity Data Digital exploited financial Fintech Flaw flaws Google Group Hackers Krebs Latest launches malware Microsoft million Network News open patches platform Ransomware RoundUp security Software Stories TFT Threat Top vulnerabilities vulnerability warns Week

© 2022 Lets Ask Binu All Rights Reserved

No Result
View All Result
  • Home
  • Cybersecurity
  • Cyber Threats
  • Hacking
  • Protection
  • Networking
  • Malware
  • Fintech
  • Internet Of Things

© 2022 Lets Ask Binu All Rights Reserved