AI has supercharged the cybersecurity arms race over the past year. And the coming 12 months will provide no respite. This has major implications for corporate cybersecurity teams and their employers, as well as everyday web users. While AI technology helps defenders to improve security, malicious actors are wasting no time in tapping into AI-powered tools, so we can expect an uptick in scams, social engineering, account fraud, disinformation and other threats.
Here’s what you can expect from 2025.
What to watch out for
At the start of 2024, the UK’s National Cyber Security Centre (NCSC) warned that AI is already being used by every type of threat actor, and would “almost certainly increase the volume and impact of cyberattacks in the next two years.” The threat is most acute in the context of social engineering, where generative AI (GenAI) can help malicious actors craft highly convincing campaigns in faultless local languages, and in reconnaissance, where AI can automate the large-scale identification of vulnerable assets.
While these trends will certainly continue into 2025, we may also see AI used for:
- Authentication bypass: Deepfake technology used to help fraudsters impersonate customers in selfie and video-based checks for new account creation and account access.
- Business email compromise (BEC): AI once again deployed for social engineering, but this time to trick a corporate recipient into wiring funds to an account under the control of the fraudster. Deepfake audio and video may also be used to impersonate CEOs and other senior leaders in phone calls and virtual meetings.
- Impersonation scams: Open source large language models (LLMs) will offer up new opportunities for scammers. By training them on data scraped from hacked and/or publicly accessible social media accounts, fraudsters could impersonate victims in virtual kidnapping and other scams, designed to trick friends and family.
- Influencer scams: In a similar way, expect to see GenAI being used by scammers in 2025 to create fake or duplicate social media accounts mimicking celebrities, influencers and other well-known figures. Deepfake video will be posted to lure followers into handing over personal information and money, for example in investment and crypto scams, including the kinds of ploys highlighted in ESET’s latest Threat Report. This will put greater pressure on social media platforms to offer effective account verification tools and badges – as well as on you to stay vigilant.
- Disinformation: Hostile states and other groups will tap GenAI to easily generate fake content, in order to hook credulous social media users into following fake accounts. These users could then be turned into online amplifiers for influence operations, in a more effective and harder-to-detect manner than content/troll farms.
- Password cracking: AI-driven tools are capable of unmasking stolen user credentials en masse in seconds, enabling access to corporate networks and data, as well as customer accounts.
AI privacy concerns for 2025
AI will not just be a tool for threat actors over the coming year. It could also introduce an elevated risk of data leakage. LLMs require huge volumes of text, images and video for training, and some of that data, often by accident, will be sensitive: think biometrics, healthcare information or financial data. In some cases, social media and other companies may change T&Cs to use customer data to train models.
Once this information has been hoovered up by the AI model, it represents a risk to individuals if the AI system itself is hacked, or if the information is shared with others via GenAI apps running atop the LLM. There’s also a concern for corporate users that they might unwittingly share sensitive work-related information via GenAI prompts. According to one poll, a fifth of UK companies have accidentally exposed potentially sensitive corporate data via employees’ GenAI use.
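One way organizations try to reduce this kind of leakage is by screening prompts for obviously sensitive content before they ever reach a GenAI service. The sketch below is a deliberately minimal illustration of that idea: the patterns, the `redact_prompt` function and its thresholds are hypothetical examples made up for this post, not a reference to any specific product, and real data loss prevention tooling relies on far richer detection and context.

```python
import re

# Hypothetical, illustrative patterns only; real DLP tools use far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings before a prompt is sent to a GenAI service."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize this contract for client jane.doe@example.com, card 4111 1111 1111 1111."
    clean, found = redact_prompt(raw)
    print(clean)   # sensitive values replaced with placeholders
    print(found)   # ['email', 'card_number']
```

Even a simple check like this, sitting between employees and a GenAI service, can flag the most obvious mistakes before data leaves the organization.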
AI for defenders in 2025
The good news is that AI will play an ever-greater role in the work of cybersecurity teams over the coming year, as it gets built into new products and services. Building on a long history of AI-powered security, these new offerings will help to:
- generate synthetic data for training users, security teams and even AI security tools
- summarize long and complex threat intelligence reports for analysts and facilitate faster decision-making for incidents
- enhance SecOps productivity by contextualizing and prioritizing alerts for stretched teams, and automating workflows for investigation and remediation
- scan large data volumes for signs of suspicious behavior (see the sketch after this list)
- upskill IT teams via “copilot” functionality built into various products to help reduce the likelihood of misconfigurations
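To make the “suspicious behavior” item above a little more concrete, the sketch below shows one deliberately simplified approach: training an unsupervised anomaly detector on baseline login telemetry and scoring new events against it. The feature set, sample values and model parameters are assumptions for illustration only; production tools combine many more signals and much more context.

```python
# A minimal anomaly-scoring sketch over login telemetry, assuming scikit-learn is available.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: hour of day, MB transferred, failed attempts.
baseline_logins = np.array([
    [9, 12.0, 0], [10, 8.5, 1], [14, 20.0, 0], [11, 15.2, 0],
    [16, 9.8, 0], [13, 11.1, 1], [15, 18.3, 0], [9, 10.4, 0],
])

new_events = np.array([
    [10, 14.0, 0],    # looks routine
    [3, 950.0, 12],   # 3 a.m., huge transfer, many failed attempts
])

# Learn "normal" from the baseline, then score new events; -1 marks an outlier.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline_logins)
labels = model.predict(new_events)

for event, label in zip(new_events, labels):
    status = "SUSPICIOUS" if label == -1 else "ok"
    print(f"{status}: hour={event[0]:.0f}, mb={event[1]:.1f}, failures={event[2]:.0f}")
```

The value of AI here is scale: the same scoring logic can run across millions of events, surfacing the handful that merit a human analyst’s attention.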
However, IT and security leaders must also understand the limitations of AI and the importance of human expertise in the decision-making process. A balance between human and machine will be needed in 2025 to mitigate the risk of hallucinations, model degradation and other potentially negative consequences. AI is not a silver bullet. It must be combined with other tools and techniques for optimal results.
To find out more about AI use in cybersecurity, see this ESET white paper.
AI challenges in compliance and enforcement
The threat landscape and development of AI security don’t happen in a vacuum. Geopolitical changes in 2025, especially in the US, may even lead to deregulation in the technology and social media sectors. This in turn could empower scammers and other malicious actors to flood online platforms with AI-generated threats.
Meanwhile in the EU, there is still some uncertainty over AI regulation, which could make life more difficult for compliance teams. As legal experts have noted, codes of practice and guidance still need to be worked out, and liability for AI system failures clarified. Lobbying from the tech sector could yet alter how the EU AI Act is implemented in practice.
However, what is clear is that AI will radically change the way we interact with technology in 2025, for good and bad. It offers huge potential benefits to businesses and individuals, but also new risks that must be managed. It’s in everyone’s interests to work more closely together over the coming year to make sure that happens. Governments, private sector enterprises and end users must all play their part in harnessing AI’s potential while mitigating its risks.