
Humans are still better at creating phishing emails than AI — for now

March 16, 2023

AI-generated phishing emails, including ones created by ChatGPT, present a potential new threat for security professionals, says Hoxhunt.

An AI-generated phishing email. Image: Gstudio/Adobe Stock

Amid all of the buzz around ChatGPT and other artificial intelligence apps, cybercriminals have already started using AI to generate phishing emails. For now, human cybercriminals are still more accomplished at devising successful phishing attacks, but the gap is closing, according to security trainer Hoxhunt’s new report released Wednesday.

Phishing campaigns created by ChatGPT vs. humans

Hoxhunt compared phishing campaigns generated by ChatGPT versus those created by human beings to determine which stood a better chance of hoodwinking an unsuspecting victim.

To conduct the experiment, the company sent phishing simulations designed either by human social engineers or by ChatGPT to 53,127 users across 100 countries. The users received the simulations in their inboxes just as they would any other email. The test was set up to trigger three possible responses (a small tallying sketch follows the list):

  1. Success: The user successfully reports the phishing simulation as malicious via the Hoxhunt threat reporting button.
  2. Miss: The user doesn’t interact with the phishing simulation.
  3. Failure: The user takes the bait and clicks on the malicious link in the email.
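For illustration, here is a minimal sketch of how outcomes like these could be tallied into per-campaign rates. The outcome labels and data shape are assumptions for the example; Hoxhunt's actual data model is not described in the report.

```python
from collections import Counter

# Hypothetical outcome labels mirroring the three responses listed above.
OUTCOMES = ("success", "miss", "failure")

def tally_outcomes(results):
    """Return the share of each outcome for one simulated phishing campaign."""
    counts = Counter(results)
    total = sum(counts.values())
    return {outcome: counts.get(outcome, 0) / total for outcome in OUTCOMES}

# Made-up sample of user responses to a single simulation email.
sample = ["miss", "failure", "success", "miss", "miss", "success", "miss"]
print(tally_outcomes(sample))  # e.g. {'success': 0.29, 'miss': 0.57, 'failure': 0.14}
```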

The results of the Hoxhunt phishing simulation

In the end, human-generated phishing emails caught more victims than those created by ChatGPT. Specifically, 4.2% of users fell for the human-generated messages, while 2.9% fell for the AI-generated ones. Hoxhunt frames that gap as human social engineers outperforming ChatGPT by around 69%.
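As a quick back-of-the-envelope check, those two failure rates can be compared in a few ways. This is an illustrative calculation from the figures above; how Hoxhunt derived its 69% figure is an assumption here, not something stated in the report.

```python
# Failure rates reported above; the interpretation of the 69% figure is an assumption.
human_rate = 0.042  # users who clicked the human-crafted phishing link
ai_rate = 0.029     # users who clicked the ChatGPT-crafted phishing link

gap_in_points = (human_rate - ai_rate) * 100     # 1.3 percentage points
relative_gap = (human_rate - ai_rate) / ai_rate  # ~0.45: humans roughly 45% higher
ai_to_human_ratio = ai_rate / human_rate         # ~0.69: AI reached ~69% of the human rate

print(f"{gap_in_points:.1f} points, +{relative_gap:.0%} relative, ratio {ai_to_human_ratio:.0%}")
```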

One positive outcome from the study is that security training can be effective at thwarting phishing attacks. Users with greater security awareness were far more likely to resist engaging with phishing emails, whether they were generated by humans or by AI. The percentage of users who clicked a malicious link dropped from more than 14% among less-trained users to between 2% and 4% among those with more training.

SEE: Security awareness and training policy (TechRepublic Premium)

The results also varied by country:

  • U.S.: 5.9% of users were fooled by human-generated emails, while 4.5% were fooled by AI-generated messages.
  • Germany: 2.3% were tricked by humans, while 1.9% were tricked by AI.
  • Sweden: 6.1% were deceived by humans, with 4.1% deceived by AI.

Current cybersecurity defenses can still cover AI phishing attacks

Though the phishing emails created by humans were more convincing than those from AI, this outcome is fluid, especially as ChatGPT and other AI models improve. The test itself was performed before the release of GPT-4, which promises to be savvier than its predecessor. As AI tools evolve, cybercriminals who use them for malicious purposes will pose a greater threat to organizations.


On the plus side, protecting your organization from phishing emails and other threats requires the same defenses and coordination whether the attacks are created by humans or by AI.

“ChatGPT allows criminals to launch perfectly worded phishing campaigns at scale, and while that removes a key indicator of a phishing attack — bad grammar — other indicators are readily observable to the trained eye,” said Hoxhunt CEO and co-founder Mika Aalto. “Within your holistic cybersecurity strategy, be sure to focus on your people and their email behavior because that is what our adversaries are doing with their new AI tools.

“Embed security as a shared responsibility throughout the organization with ongoing training that enables users to spot suspicious messages and rewards them for reporting threats until human threat detection becomes a habit.”

Security tips for IT and users

Toward that end, Aalto offers the following tips.

For IT and security

  • Require two-factor authentication or multi-factor authentication for all employees who access sensitive data (a minimal access-gate sketch follows this list).
  • Give all employees the skills and confidence to report a suspicious email; such a process should be seamless.
  • Provide security teams with the resources needed to analyze and address threat reports from employees.
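As a rough illustration of the first recommendation, the sketch below gates access to sensitive data on a verified second factor. The session fields and function names are hypothetical and not tied to any particular identity provider.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user: str
    password_verified: bool
    mfa_verified: bool  # e.g. TOTP code or hardware security key confirmed

def can_access_sensitive_data(session: Session) -> bool:
    """Allow access to sensitive data only when both factors are verified."""
    return session.password_verified and session.mfa_verified

# A password-only session is rejected; a fully verified session is allowed.
print(can_access_sensitive_data(Session("alice", True, False)))  # False
print(can_access_sensitive_data(Session("bob", True, True)))     # True
```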

For users

  • Hover over any link in an email before clicking on it. If the link appears out of place or irrelevant to the message, report the email as suspicious to the IT support or help desk team (a small heuristic sketch for this kind of check follows this list).
  • Scrutinize the sender field to make sure the email address contains a legitimate business domain. If the address points to Gmail, Hotmail or another free service, the message is likely a phishing email.
  • Confirm a suspicious email with the sender before acting on it. Use a method other than email to contact the sender about the message.
  • Think before you click. Socially engineered phishing attacks try to create a false sense of urgency, prompting the recipient to click on a link or engage with the message as quickly as possible.
  • Pay attention to the tone and voice of an email. For now, phishing emails generated by AI are written in a formal and stilted manner.
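To make the sender and link checks above concrete, here is a minimal heuristic sketch. The free-mail domain list, the simple HTML parsing, and the field names are simplifying assumptions; real phishing detection is far more involved.

```python
import re

# Simplified assumption: a short, non-exhaustive list of free-mail domains.
FREE_MAIL_DOMAINS = {"gmail.com", "hotmail.com", "outlook.com", "yahoo.com"}

def sender_looks_suspicious(from_address: str, expected_domain: str) -> bool:
    """Flag senders on free-mail services or outside the expected business domain."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    return domain in FREE_MAIL_DOMAINS or domain != expected_domain.lower()

def link_text_mismatch(html_body: str) -> bool:
    """Flag anchors whose visible text shows one URL but whose href points elsewhere."""
    for href, text in re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', html_body):
        text = text.strip()
        if text.startswith("http") and not href.startswith(text):
            return True
    return False

print(sender_looks_suspicious("billing@gmail.com", "example.com"))               # True
print(link_text_mismatch('<a href="http://evil.test">https://bank.example</a>'))  # True
```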

Read next: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)


