Thursday, September 21, 2023
LetsAskBinu.com
  • Home
  • Cybersecurity
  • Cyber Threats
  • Hacking
  • Protection
  • Networking
  • Malware
  • Fintech
  • Internet Of Things
No Result
View All Result
LetsAskBinu.com
No Result
View All Result
Home Cybersecurity

How Artificial Intelligence Is Changing Cyber Threats

Researcher by Researcher
July 28, 2023
in Cybersecurity
0
How Artificial Intelligence Is Changing Cyber Threats
189
SHARES
1.5k
VIEWS
Share on FacebookShare on Twitter


Person looking at a visualization of an interconnected big data structure.
Image: NicoElNino/Adobe Stock

HackerOne, a security platform and hacker community forum, hosted a roundtable on Thursday, July 27, about the way generative artificial intelligence will change the practice of cybersecurity. Hackers and industry experts discussed the role of generative AI in various aspects of cybersecurity, including novel attack surfaces and what organizations should keep in mind when it comes to large language models.

Jump to:

Related articles

Sentra Raises $30 Million for DSPM Technology

Northern Ireland’s Top Police Officer Apologizes for ‘Industrial Scale’ Data Breach

August 13, 2023
Minimizing Risk Through Proactive Apple Device Management: Addigy

Minimizing Risk Through Proactive Apple Device Management: Addigy

August 12, 2023

Generative AI can introduce risks if organizations adopt it too quickly

Organizations using generative AI like ChatGPT to write code should be careful they don’t end up creating vulnerabilities in their haste, said Joseph “rez0” Thacker, a professional hacker and senior offensive security engineer at software-as-a-service security company AppOmni.

For example, ChatGPT doesn’t have the context to understand how vulnerabilities might arise in the code it produces. Organizations have to hope that ChatGPT will know how to produce SQL queries that aren’t vulnerable to SQL injection, Thacker said. Attackers being able to access user accounts or data stored across different parts of the organization often cause vulnerabilities that penetration testers frequently look for, and ChatGPT might not be able to take them into account in its code.

The two main risks for companies that may rush to use generative AI products are:

  • Allowing the LLM to be exposed in any way to external users that have access to internal data.
  • Connecting different tools and plugins with an AI feature that may access untrusted data, even if it’s internal.

How threat actors take advantage of generative AI

“We have to remember that systems like GPT models don’t create new things — what they do is reorient stuff that already exists … stuff it’s already been trained on,” said Klondike. “I think what we’re going to see is people who aren’t very technically skilled will be able to have access to their own GPT models that can teach them about the code or help them build ransomware that already exists.”

Prompt injection

Anything that browses the internet — as an LLM can do — could create this kind of problem.

One possible avenue of cyberattack on LLM-based chatbots is prompt injection; it takes advantage of the prompt functions programmed to call the LLM to perform certain actions.

For example, Thacker said, if an attacker uses prompt injection to take control of the context for the LLM function call, they can exfiltrate data by calling the web browser feature and moving the data that’s exfiltrated to the attacker’s side. Or, an attacker could email a prompt injection payload to an LLM tasked with reading and replying to emails.

SEE: How Generative AI is a Game Changer for Cloud Security (TechRepublic)

Roni “Lupin” Carta, an ethical hacker, pointed out that developers using ChatGPT to help install prompt packages on their computers can run into trouble when they ask the generative AI to find libraries. ChatGPT hallucinates library names, which threat actors can then take advantage of by reverse-engineering the fake libraries.

Attackers could insert malicious text into images, too. Then, when an image-interpreting AI like Bard scans the image, the text will deploy as a prompt and instruct the AI to perform certain functions. Essentially, attackers can perform prompt injection through the image.

Must-read security coverage

Deepfakes, custom cryptors and other threats

Carta pointed out that the barrier has been lowered for attackers who want to use social engineering or deepfake audio and video, technology which can also be used for defense.

“This is amazing for cybercriminals but also for red teams that use social engineering to do their job,” Carta said.

From a technical challenge standpoint, Klondike pointed out the way LLMs are built makes it difficult to scrub personally identifying information out of their databases. He said that internal LLMs can still show employees or threat actors data or execute functions that are supposed to be private. This doesn’t require complex prompt injection; it might just involve asking the right questions.

“We’re going to see entirely new products, but I also think the threat landscape is going to have the same vulnerabilities we’ve always seen but with greater quantity,” Thacker said.

Cybersecurity teams are likely to see a higher volume of low-level attacks as amateur threat actors use systems like GPT models to launch attacks, said Gavin Klondike, a senior cybersecurity consultant at hacker and data scientist community AI Village. Senior-level cybercriminals will be able to make custom cryptors — software that obscures malware — and malware with generative AI, he said.

“Nothing that comes out of a GPT model is new”

There was some debate on the panel about whether generative AI raised the same questions as any other tool or presented new ones.

“I think we need to remember that ChatGPT is trained on things like Stack Overflow,” said Katie Paxton-Fear, a lecturer in cybersecurity at Manchester Metropolitan University and security researcher. “Nothing that comes out of a GPT model is new. You can find all of this information already with Google.

“I think we have to be really careful when we have these discussions about good AI and bad AI not to criminalize genuine education.”

Carta compared generative AI to a knife; like a knife, generative AI can be a weapon or a tool to cut a steak.

“It all comes down to not what the AI can do but what the human can do,” Carta said.

SEE: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)

Thacker pushed back against the metaphor, saying that generative AI cannot be compared to a knife because it’s the first tool humanity has ever had that can “… create novel, completely unique ideas due to its wide domain experience.”

Or, AI could end up being a mix of a smart tool and creative consultant. Klondike predicted that, while low-level threat actors will benefit the most from AI making it easier to write malicious code, the people who benefit the most on the cybersecurity professional side will be at the senior level. They already know how to build code and write their own workflows, and they’ll ask the AI to help with other tasks.

How businesses can secure generative AI

The threat model Klondike and his team created at AI Village recommends software vendors to think of LLMs as a user and create guardrails around what data it has access to.

Treat AI like an end user

Threat modeling is critical when it comes to working with LLMs, he said. Catching remote code execution, such as a recent problem in which an attacker targeting the LLM-powered developer tool LangChain, could feed code directly into a Python code interpreter, is important as well.

“What we need to do is enforce authorization between the end user and the back-end resource they’re trying to access,” Klondike said.

Don’t forget the basics

Some advice for companies who want to use LLMs securely will sound like any other advice, the panelists said. Michiel Prins, HackerOne cofounder and head of professional services, pointed out that, when it comes to LLMs, organizations seem to have forgotten the standard security lesson to “treat user input as dangerous.”

“We’ve almost forgotten the last 30 years of cybersecurity lessons in developing some of this software,” Klondike said.

Paxton-Fear sees the fact that generative AI is relatively new as a chance to build in security from the start.

“This is a great opportunity to take a step back and bake some security in as this is developing and not bolting on security 10 years later.”



Source link

Tags: ArtificialchangingCyberintelligenceThreats
Share76Tweet47

Related Posts

Sentra Raises $30 Million for DSPM Technology

Northern Ireland’s Top Police Officer Apologizes for ‘Industrial Scale’ Data Breach

August 13, 2023
0

Northern Ireland’s top police officer apologized Thursday for what he described as an “industrial scale” data breach in which the...

Minimizing Risk Through Proactive Apple Device Management: Addigy

Minimizing Risk Through Proactive Apple Device Management: Addigy

August 12, 2023
0

Enterprise IT teams are struggling to cope with three major forces of change: the evolving regulatory environment, a globally dispersed...

Decipher Podcast: Katelyn Bowden and TC Johnson

Decipher Podcast: Katelyn Bowden and TC Johnson

August 12, 2023
0

Veilid main site: https://veilid.com/ Cult of the Dead Cow site: https://cultdeadcow.com/ Source link

In Other News: Government Use of Spyware, New Industrial Security Tools, Japan Router Hack 

In Other News: macOS Security Reports, Keyboard Spying, VPN Vulnerabilities

August 12, 2023
0

SecurityWeek is publishing a weekly cybersecurity roundup that provides a concise compilation of noteworthy stories that might have slipped under...

Used Correctly, Generative AI is a Boon for Cybersecurity

Used Correctly, Generative AI is a Boon for Cybersecurity

August 12, 2023
0

Adobe stock, by Busra At the Black Hat kickoff keynote on Wednesday, Jeff Moss (AKA Dark Tangent), the founder of...

Load More
  • Trending
  • Comments
  • Latest
This Week in Fintech: TFT Bi-Weekly News Roundup 08/02

This Week in Fintech: TFT Bi-Weekly News Roundup 15/03

March 15, 2022
Supply chain efficiency starts with securing port operations

Supply chain efficiency starts with securing port operations

March 15, 2022
Microsoft to Block Macros by Default in Office Apps

Qakbot Email Thread Hijacking Attacks Drop Multiple Payloads

March 15, 2022
QNAP Escalation Vulnerability Let Attackers Gain Administrator Privileges

QNAP Escalation Vulnerability Let Attackers Gain Administrator Privileges

March 15, 2022
Beware! Facebook accounts being hijacked via Messenger prize phishing chats

Beware! Facebook accounts being hijacked via Messenger prize phishing chats

0
Shoulder surfing: Watch out for eagle‑eyed snoopers peeking at your phone

Shoulder surfing: Watch out for eagle‑eyed snoopers peeking at your phone

0
Remote work causing security issues for system and IT administrators

Remote work causing security issues for system and IT administrators

0
Elementor WordPress plugin has a gaping security hole – update now – Naked Security

Elementor WordPress plugin has a gaping security hole – update now – Naked Security

0
LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

September 21, 2023
EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

September 21, 2023
Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

September 21, 2023
Intel Reveals New 288-Core Sierra Forest CPU, Core Ultra Processors at Intel Innovation 2023

Intel Reveals New 288-Core Sierra Forest CPU, Core Ultra Processors at Intel Innovation 2023

September 21, 2023

Recent Posts

LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

LUCR-3 Attacking Fortune 2000 Companies Using Victims’ Own Tools

September 21, 2023
EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

EBANX Furthers Expansion into Africa; Adding 8 new Countries to its Ecosystem

September 21, 2023
Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

Trend Micro Zero-day Vulnerability Let Attackers Run Arbitrary Code

September 21, 2023

Categories

  • Cyber Threats
  • Cybersecurity
  • Fintech
  • Hacking
  • Internet Of Things
  • LetsAskBinuBlogs
  • Malware
  • Networking
  • Protection

Tags

Access attack Attacks banking BiWeekly bug Cisco cloud code critical Cyber Cybersecurity Data Digital exploited financial Fintech Flaw flaws Google Group Hackers Krebs Latest launches malware Microsoft million Network News open patches platform Ransomware RoundUp security Software Stories TFT Threat Top vulnerabilities vulnerability warns Week

© 2022 Lets Ask Binu All Rights Reserved

No Result
View All Result
  • Home
  • Cybersecurity
  • Cyber Threats
  • Hacking
  • Protection
  • Networking
  • Malware
  • Fintech
  • Internet Of Things

© 2022 Lets Ask Binu All Rights Reserved