6 AI Cybersecurity Trends For The 2025 Cybercrime Landscape

by James Martin
December 5, 2024

AI is critical to the future of cybersecurity. The technology is making defenses more sophisticated, but it’s also a tool in the arsenal of attackers.

In fact, one survey found that 56% of business and cyber leaders expect AI to hand an advantage to bad actors.

AI cybersecurity is duly necessary to keep up with ever-evolving AI cybercrime. In this article, we cover both sides of the changing landscape.

Read on to find out about trends including AI-supercharged malware, ransomware and phishing attacks, along with the response from the cybersecurity sector. You’ll discover how the multi-billion-dollar industry is working to tackle new threats, protect the cloud, and ultimately even take aim at “zero-day” vulnerabilities.

1. AI Leads To More Sophisticated Malware And Ransomware

Almost three-quarters of data breaches are attributable to human error. But the threat of malicious third parties certainly cannot be ignored.

According to the ITRC Annual Data Breach Report, there are as many as 11 victims of a malware attack per second. That’s more than 340 million victims per year.
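As a quick sanity check, those two figures line up: at 11 victims per second,

    11 × 60 × 60 × 24 × 365 ≈ 347 million victims per year

which is where the 340-million-plus estimate comes from.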

And this year, North America has seen a 15% increase in the number of ransomware attacks.

In fact, 59% of businesses across 14 major countries including the US have been targeted by ransomware in the past 12 months.

(It’s little wonder that cybersecurity is an increasingly tough search keyword to target for companies within the industry.)

Rising AI cybercrime

AI can be part of the solution. But it is also a growing part of the problem.

"AI cyber attacks" growth chart
Searches for “AI cyber attacks” have increased by 8400% in the last 5 years.

AI is transforming entire industries. Unfortunately, cybercrime is no exception.

Bad actors can reap all the familiar benefits of AI: automation, efficient data collection, and continual evolution and improvement of methodologies, to name a few.

Malicious GPTs can generate malware. AI can adapt ransomware files over time, reducing their detectability and improving their efficacy.

And the power of AI to assist in writing code has lowered the skill barrier for cybercriminals. HP has found real-world evidence of malware partially written by AI.

2. AI Phishing Attacks Increase

Artificial intelligence can even help attackers target humans, the greatest vulnerability in most security networks.

Generative AI can be a legitimate, useful and powerful writing assistant. But in the wrong hands, these capabilities can greatly enhance phishing attacks.

Phishing is the most common kind of cyberattack, at least when grouped together with “pretexting”, a similar but more targeted attempt to extract details from a system user.

In 50% of cases, phishing targets user credentials. That often means going after passwords to gain entry.

You might not imagine AI having a particularly sizable effect in this relatively low-tech area of cybercrime. But the technology can help hackers to devise more believable personas in order to convince victims to part with sensitive information.

A study published earlier this year found that 60% of participants were convinced by AI-created phishing attacks. That was comparable to the success rate of messages devised by human experts.

Example of an AI-generated phishing email
One of the phishing emails generated by AI for the purposes of a recent study.

Moreover, follow-up research found that AI is able to automate the entire phishing process. That means these broadly equivalent success rates can be achieved at 95% lower cost.

Voice phishing — or “vishing” — brings another layer of sophistication to phishing attacks.

"Vishing" growth chart
“Vishing” searches are up 167% in the last 5 years.

Vishing is the process of impersonating a trusted person’s voice in order to access information or money. AI has made that task far easier.

Microsoft, for instance, says its AI can create an effective voice clone from a clip of just 3 seconds.

There are any number of legitimate uses for voice cloning technology. Podcastle, for instance, offers it for digital touch-ups in podcasts.

"Podcastle" growth chart
“Podcastle” searches have risen steeply, no doubt helped by its AI voice cloning technology.

But this AI-powered technological development has been a gold mine for vishing schemes too. Phishing attacks have increased by 60% in the last year, driven in part by AI voice cloning.

There have been some high-profile examples. Last year, MGM Resorts was the victim of a cyberattack that ended up costing $100 million.

It all started with a voice phishing scam, where artificial intelligence was used to replicate an employee’s voice and secure system access.

Even more recently, a finance worker in Hong Kong was tricked into wiring $25 million to a scammer following a faked Zoom call with the CFO.

3. AI Cybersecurity Tackles AI Cybercrime Directly

Increasingly sophisticated threats require increasingly sophisticated cybersecurity. AI is being used in highly innovative ways in order to keep data safe.

The benefits of AI for cybersecurity experts are not too dissimilar to the benefits for cyber criminals: the ability to quickly analyze large amounts of data, automate repetitive processes, and spot vulnerabilities.

As a result, 61% of Chief Information Security Officers believe they are likely to use generative AI as part of their cybersecurity setup within the next 12 months. More than a third have already done so.

And according to IBM, the average saving in data breach costs for organizations that already use security AI and automation extensively is $2.22 million.

It’s little wonder that the AI cybersecurity market was valued at $22.1 billion last year. By 2033, that figure is forecast to reach $147.5 billion (a 20.8% CAGR).
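As a rough check on that growth rate (assuming a 2023 base year and a 10-year horizon to 2033):

    CAGR = (147.5 / 22.1)^(1/10) − 1 ≈ 0.21

or around 21% per year, in line with the forecast’s 20.8% figure once rounding is accounted for.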

AI solutions for AI threats

Some of the uses for AI in cybersecurity have arisen directly from a need to counter new AI threats. For example: AI voice detectors to combat vishing.

"AI voice detector" growth chart
“AI voice detector” searches are up 99X+ in the last two years.

The early market leader is simply named AI Voice Detector. Users can upload audio files or download a browser extension to check for AI voices online (for instance, in a Zoom or Google Meet call).

Detection is itself powered by AI. The tool has already identified 30,000 AI voices and serves more than 20,000 clients.

Example image of AI Voice Detector in action
AI Voice Detector returns probabilities of a voice being natural or AI-generated.

Meanwhile, some of the cybersecurity responsibility for preventing voice phishing falls on the makers of the AI voice cloning technology. A process known as AI watermarking is rising to prominence.

ElevenLabs is one of the leading AI voice cloning providers. It has taken steps to ensure listeners can find out whether a clip originates from its own AI generator.

"Eleven Labs" growth chart
“ElevenLabs” searches are up 7800% in the last 2 years.

Its “speech classifier” tool analyzes the first minute of an uploaded clip and estimates the likelihood that it was created using ElevenLabs in the first place.

ElevenLabs' AI speech classifier
ElevenLabs can check for the involvement of its own AI in the creation of voice clips.

Away from vishing, Google recently created an invisible watermark capable of labeling all text that has been generated by Gemini, its AI software.

Unlike the leading AI voice detection tools, this does not deal in probabilities. It is a true watermark, embedded at the point of generation to denote the provenance of AI-generated text.

Wider adoption of this technology by generative AI providers would be warmly received by educational institutions. But it would also be huge for cybersecurity, with users able to easily determine whether potentially suspicious emails were crafted using artificial intelligence.

4. AI Cybersecurity Protects The Cloud

Cybersecurity is having to keep pace with more than just AI threats. There’s also been a mass migration to cloud services over the past few years.

By last year, 70% of organizations reported that more than half of their infrastructure had moved to the cloud. 65% operate a multi-cloud system — and 80% store sensitive data in the cloud.

There has been a corresponding surge in Cloud-Native Application Protection Platforms (CNAPPs) within the cybersecurity space.

"CNAPP" growth chart
“CNAPP” searches are up 99X+ in the last 5 years.

CNAPPs are all about building cybersecurity solutions specifically for the cloud, rather than playing catch-up by tacking on ad hoc fixes and adjusting existing measures that may no longer be fit for purpose.

Prisma Cloud brands itself as a CNAPP. And it has integrated AI into its cybersecurity solutions.

"Prisma Cloud" growth chart
“Prisma Cloud” searches are up 378% in the last 5 years.

Prisma uses AI as a “force multiplier” when it comes to Attack Surface Management (ASM). By improving the speed, quality and reliability of data collection, AI can make ASM more efficient and effective.

Diagram showing where AI fits into Prisma's cybersecurity platform
Prisma integrates AI into its holistic cybersecurity platform.

The platform also works to counter cybersecurity risks associated with the legitimate use of AI in a business setting, addressing vulnerabilities relating to potential data exposure and unsafe or unauthorized model usage.

5. AI Takes Aim At “Zero-Day” Vulnerabilities

Cybersecurity is inherently on the defensive against cyber threats. AI can strengthen that defense.

But AI could have an even bigger role to play: taking aim at “zero-day” vulnerabilities, weak points in systems that have not previously been exposed and for which no known fixes or patches are readily available.

Explanations of various "zero-day" terminology
“Zero-day” vulnerabilities can be particularly hard for cybersecurity to mitigate.

This is a more proactive form of cybersecurity, and it’s one being pioneered by Google. Its Project Zero team has long been taking aim at zero-day threats, and has now joined forces with Google DeepMind.

The result is Big Sleep. And the technology has already discovered its first real-world zero-day vulnerability: an “exploitable stack buffer underflow” in SQLite.

SQLite is a widely used open-source database. Google’s Big Sleep team reported the vulnerability to its developers in early October, and it was fixed the same day.
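For readers unfamiliar with the term, a stack buffer underflow is an out-of-bounds write that lands before the start of a buffer on the stack, typically because a negative index slips past an incomplete bounds check. The snippet below is a deliberately simplified, hypothetical illustration of that bug class in C; it is not the SQLite code that Big Sleep analyzed.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical illustration of a stack buffer underflow.
       The index comes from untrusted input, and only the upper
       bound is checked, so a negative value writes *before*
       the start of the buffer. */
    static void record_value(int index, char value) {
        char buffer[32];
        memset(buffer, 0, sizeof(buffer));

        /* BUG: the "index >= 0" check is missing. */
        if (index < (int)sizeof(buffer)) {
            buffer[index] = value;  /* index = -1 writes below buffer[0] */
        }
    }

    int main(void) {
        record_value(-1, 'X');  /* out-of-bounds write: undefined behavior */
        return 0;
    }

Exactly which neighboring stack data that stray write corrupts depends on the compiler’s stack layout, which is what can make this class of bug exploitable rather than just a crash. The fix is simply the missing lower-bound check.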

The Big Sleep team believes this is the first time an AI agent has found a “previously unknown memory-safety issue in widely-used real-world software.”

Microsoft is also active in the AI zero-day space. It has expanded its Zero Day Quest bug bounty program, making $4 million in awards available in the “high-impact” areas of cloud and AI.

This is still a nascent area of AI cybersecurity. But it is certainly an interesting one to watch.

6. AI Cybersecurity Sees New Investment

There is a rush to invest in new AI cybersecurity technology. Internally, cybersecurity firms are consistently committing a greater percentage of their revenue to research and development than other businesses in the software industry.

"Invest in AI" growth chart
“Invest in AI” searches are up 1380% in 5 years.

Meanwhile, there is no shortage of external investment either. There was a dip in Q3, aligned with a slow-down in the global venture market, but cyber startups have nonetheless raised $9.1 billion in the first three quarters of the year.

Cybersecurity giant Wiz recently confirmed its acquisition of Israeli startup Dazz. The deal will enhance the US firm’s AI credentials as it seeks to build a strong CNAPP.

Dazz, which specializes in cloud security remediation, only completed a $50 million funding round back in July. That came at a valuation of $400 million.

"Dazz" growth chart
“Dazz” searches are up 117% in the last 5 years.

According to TechCrunch, the eventual deal closed at a slightly higher price of $450 million, made up of cash and shares.

The acquisition comes amid reports from Dazz that it grew its annual recurring revenue by 400% between 2023 and 2024. The startup also says it has tripled its workforce and expanded operations throughout the US and Europe.

Its technology, which uses AI to help find and fix critical issues within cloud infrastructures, claims an 810% improvement in mean time to remediation, cutting risk windows down from weeks to hours.

Example of a Dazz cybersecurity dashboard
Dazz tracks down security issues within cloud infrastructures.

Both Wiz and Dazz were present alongside Amazon Web Services last July at an event organized by the Boston chapter of the Cloud Security Alliance.

VC firm Cyberstarts, which is backed by a number of successful founders from the cybersecurity industry, has previously invested in both businesses.

This deal is far from the only one happening in a booming AI cybersecurity space.

Branding itself “the data-first security company”, Normalyze is set to become part of Proofpoint after the latter signed a “definitive agreement” to acquire it.

"Normalyze" growth chart
Search volume for “Normalyze” was already growing, but it has spiked following news of an acquisition.

Normalyze uses AI to “classify valuable and sensitive data at scale”. After that, the platform assesses and prioritizes potential risks and vulnerabilities, before providing insights into possible remedial and preventive steps.

The deal would “close AI security gaps” for Proofpoint. Up to now, the company’s main area of operation has been the human element of data protection.

Elsewhere, CrowdStrike has acquired Adaptive Shield, in a deal worth $300 million according to SecurityWeek.

And away from acquisitions, data loss prevention startup MIND.io has raised $11 million in seed funding. It uses advanced AI and automation to identify and prevent data leaks.

Stay ahead of AI cybersecurity trends

AI is having a transformative effect everywhere. But few industries have been affected as profoundly as cybersecurity.

The threats posed by cyber criminals will never be the same again. They have gotten smarter, more efficient and harder to contain.

Yet at the same time, the tools in the cybersecurity arsenal have become far more sophisticated as well. AI can help to counter novel threats as they arise, and to provide improved defenses against those that already existed.

AI cybersecurity solutions are in high demand, as seen by the slew of big-money investments and acquisitions. This is undoubtedly a space to watch closely.