Tuesday, November 28, 2023
LetsAskBinu.com
  • Home
  • Cybersecurity
  • Cyber Threats
  • Hacking
  • Protection
  • Networking
  • Malware
  • Fintech
  • Internet Of Things
No Result
View All Result
LetsAskBinu.com
No Result
View All Result
Home Fintech

Navigating the security and privacy challenges of large language models

Researcher by Researcher
November 7, 2023
in Fintech
0
5 cybercrime trends to watch
189
SHARES
1.5k
VIEWS
Share on FacebookShare on Twitter


Business Security

Organizations that intend to tap the potential of LLMs must also be able to manage the risks that could otherwise erode the technology’s business value

Phil Muncaster

06 Nov 2023
 • 
,
5 min. read

Navigating the security and privacy challenges of large language models

Everyone’s talking about ChatGPT, Bard and generative AI as such. But after the hype inevitably comes the reality check. While business and IT leaders alike are abuzz with the disruptive potential of the technology in areas like customer service and software development, they’re also increasingly aware of some potential downsides and risks to watch out for.

In short, for organizations to tap the potential of large language models (LLMs), they must also be able to manage the hidden risks that could otherwise erode the technology’s business value.

What’s the deal with LLMs?

ChatGPT and other generative AI tools are powered by LLMs. They work by using artificial neural networks to process enormous quantities of text data. After learning the patterns between words and how they are used in context, the model is able to interact in natural language with users. In fact, one of the main reasons for ChatGPT’s standout success is its ability to tell jokes, compose poems and generally communicate in a way that is difficult to tell apart from a real human.

RELATED READING: Writing like a boss with ChatGPT: How to get better at spotting phishing scams

The LLM-powered generative AI models, as used in chatbots like ChatGPT, work like super-charged search engines, using the data they were trained on to answer questions and complete tasks with human-like language. Whether they’re publicly available models or proprietary ones used internally within an organization, LLM-based generative AI can expose companies to certain security and privacy risks.

5 of the key LLM risks

1. Oversharing sensitive data

LLM-based chatbots aren’t good at keeping secrets – or forgetting them, for that matter. That means any data you type in may be absorbed by the model and made available to others or at least used to train future LLM models. Samsung workers found this out to their cost when they shared confidential information with ChatGPT while using it for work-related tasks. The code and meeting recordings they entered into the tool could theoretically be in the public domain (or at least stored for future use, as pointed out by the United Kingdom’s National Cyber Security Centre recently). Earlier this year, we took a closer look at how organizations can avoid putting their data at risk when using LLMs.

2. Copyright challenges  

LLMs are trained on large quantities of data. But that information is often scraped from the web, without the explicit permission of the content owner. That can create potential copyright issues if you go on to use it. However, it can be difficult to find the original source of specific training data, making it challenging to mitigate these issues.

3. Insecure code

Developers are increasingly turning to ChatGPT and similar tools to help them accelerate time to market. In theory it can help by generating code snippets and even entire software programs quickly and efficiently. However, security experts warn that it can also generate vulnerabilities. This is a particular concern if the developer doesn’t have enough domain knowledge to know what bugs to look for. If buggy code subsequently slips through into production, it could have a serious reputational impact and require time and money to fix.

4. Hacking the LLM itself

Unauthorized access to and tampering with LLMs could provide hackers with a range of options to perform malicious activities, such as getting the model to divulge sensitive information via prompt injection attacks or perform other actions that are supposed to be blocked. Other attacks may involve exploitation of server-side request forgery (SSRF) vulnerabilities in LLM servers, enabling attackers to extract internal resources. Threat actors could even find a way of interacting with confidential systems and resources simply by sending malicious commands through natural language prompts.

RELATED READING: Black Hat 2023: AI gets big defender prize money

As an example, ChatGPT had to be taken offline in March following the discovery of a vulnerability that exposed the titles from the conversation histories of some users to other users. In order to raise awareness of vulnerabilities in LLM applications, the OWASP Foundation recently released a list of 10 critical security loopholes commonly observed in these applications.

5. A data breach at the AI provider

There’s always a chance that a company that develops AI models could itself be breached, allowing hackers to, for example, steal training data that could include sensitive proprietary information. The same is true for data leaks – such as when Google was inadvertently leaking private Bard chats into its search results.

What to do next

If your organization is keen to start tapping the potential of generative AI for competitive advantage, there are a few things it should be doing first to mitigate some of these risks:

  • Data encryption and anonymization: Encrypt data before sharing it with LLMs to keep it safe from prying eyes, and/or consider anonymization techniques to protect the privacy of individuals who could be identified in the datasets. Data sanitization can achieve the same end by removing sensitive details from training data before it is fed into the model.
  • Enhanced access controls: Strong passwords, multi-factor authentication (MFA) and least privilege policies will help to ensure only authorized individuals have access to the generative AI model and back-end systems.
  • Regular security audits: This can help to uncover vulnerabilities in your IT systems which may impact the LLM and generative AI models on which its built.
  • Practice incident response plans: A well rehearsed and solid IR plan will help your organization respond rapidly to contain, remediate and recover from any breach.
  • Vet LLM providers thoroughly: As for any supplier, it’s important to ensure the company providing the LLM follows industry best practices around data security and privacy. Ensure there’s clear disclosure over where user data is processed and stored, and if it’s used to train the model. How long is it kept? Is it shared with third parties? Can you opt in/out of your data being used for training?
  • Ensure developers follow strict security guidelines: If your developers are using LLMs to generate code, make sure they adhere to policy, such as security testing and peer review, to mitigate the risk of bugs creeping into production.

The good news is there’s no need to reinvent the wheel. Most of the above are tried-and-tested best practice security tips. They may need updating/tweaking for the AI world, but the underlying logic should be familiar to most security teams.

FURTHER READING: A Bard’s Tale – how fake AI bots try to install malware



Source link

Related articles

Mashreq Sets a New Benchmark for Sustainable Financing in MEA With Working Capital Facility

Mashreq Sets a New Benchmark for Sustainable Financing in MEA With Working Capital Facility

November 23, 2023
This Week in Fintech: TFT Bi-Weekly News Roundup 08/02

This Week in Fintech: TFT Bi-Weekly News Roundup 23/11

November 23, 2023
Tags: challengeslanguagelargeModelsNavigatingprivacysecurity
Share76Tweet47

Related Posts

Mashreq Sets a New Benchmark for Sustainable Financing in MEA With Working Capital Facility

Mashreq Sets a New Benchmark for Sustainable Financing in MEA With Working Capital Facility

November 23, 2023
0

Mashreq, a financial institution in the MENA region, has extended to Chalhoub Group its first Sustainability-Linked Working Capital Facility. This...

This Week in Fintech: TFT Bi-Weekly News Roundup 08/02

This Week in Fintech: TFT Bi-Weekly News Roundup 23/11

November 23, 2023
0

The Fintech Times Bi-Weekly News Roundup on Thursday 23 November 2023.Funding and investmentsCrezco, a London-based fintech company that uses open...

Your voice is my password – the risks of AI-driven voice cloning

Your voice is my password – the risks of AI-driven voice cloning

November 23, 2023
0

Digital Security AI-driven voice cloning can make things far too easy for scammers – I know because I’ve tested it...

UK Fintech News Round-Up: The Latest Stories 02/03

UK Fintech News Roundup: The Latest Stories 22/11

November 22, 2023
0

Every Wednesday, we delve into the latest fintech updates from across the UK. This week brings updates from FinTech Wales,...

Can a driverless car get arrested?

Can a driverless car get arrested?

November 22, 2023
0

Digital Security What happens when problems caused by autonomous vehicles are not the result of errors, but the result of...

Load More
  • Trending
  • Comments
  • Latest
This Week in Fintech: TFT Bi-Weekly News Roundup 08/02

This Week in Fintech: TFT Bi-Weekly News Roundup 15/03

March 15, 2022
Supply chain efficiency starts with securing port operations

Supply chain efficiency starts with securing port operations

March 15, 2022
Microsoft to Block Macros by Default in Office Apps

Qakbot Email Thread Hijacking Attacks Drop Multiple Payloads

March 15, 2022
QNAP Escalation Vulnerability Let Attackers Gain Administrator Privileges

QNAP Escalation Vulnerability Let Attackers Gain Administrator Privileges

March 15, 2022
Beware! Facebook accounts being hijacked via Messenger prize phishing chats

Beware! Facebook accounts being hijacked via Messenger prize phishing chats

0
Shoulder surfing: Watch out for eagle‑eyed snoopers peeking at your phone

Shoulder surfing: Watch out for eagle‑eyed snoopers peeking at your phone

0
Remote work causing security issues for system and IT administrators

Remote work causing security issues for system and IT administrators

0
Elementor WordPress plugin has a gaping security hole – update now – Naked Security

Elementor WordPress plugin has a gaping security hole – update now – Naked Security

0
North Korean Hackers Exploiting Zero-day Vulnerabilities

North Korean Hackers Exploiting Zero-day Vulnerabilities

November 28, 2023
North Korean Hackers Exploit MagicLine4NX Zero-day

North Korean Hackers Exploit MagicLine4NX Zero-day

November 28, 2023
NukeSped Malware Exploiting Apache ActiveMQ Vulnerability

NukeSped Malware Exploiting Apache ActiveMQ Vulnerability

November 28, 2023
A New Telekopye Bots That Tricks Users to Steal Payment Details

A New Telekopye Bots That Tricks Users to Steal Payment Details

November 27, 2023

Recent Posts

North Korean Hackers Exploiting Zero-day Vulnerabilities

North Korean Hackers Exploiting Zero-day Vulnerabilities

November 28, 2023
North Korean Hackers Exploit MagicLine4NX Zero-day

North Korean Hackers Exploit MagicLine4NX Zero-day

November 28, 2023
NukeSped Malware Exploiting Apache ActiveMQ Vulnerability

NukeSped Malware Exploiting Apache ActiveMQ Vulnerability

November 28, 2023

Categories

  • Cyber Threats
  • Cybersecurity
  • Fintech
  • Hacking
  • Internet Of Things
  • LetsAskBinuBlogs
  • Malware
  • Networking
  • Protection

Tags

Access attack Attacks banking BiWeekly bug Cisco cloud code critical Cyber Cybersecurity Data Digital exploited financial Fintech Flaw flaws Google Group Hackers Krebs Latest launches malware Microsoft million Network News open patches platform Ransomware RoundUp security Software Stories TFT Threat Top vulnerabilities vulnerability warns Week

© 2022 Lets Ask Binu All Rights Reserved

No Result
View All Result
  • Home
  • Cybersecurity
  • Cyber Threats
  • Hacking
  • Protection
  • Networking
  • Malware
  • Fintech
  • Internet Of Things

© 2022 Lets Ask Binu All Rights Reserved