Artificial Intelligence (AI) has been a prominent theme across all areas of technology, recognized for its ability to automate repetitive tasks, analyze massive datasets quickly, assist in decision-making, improve customer service, and even generate creative content. In the field of cybersecurity, however, the topic takes a dubious turn. While AI can serve as a powerful ally to cybersecurity analysts, it can also make malicious actors more effective. The misuse of AI as an aid in cybercrime is accelerating the pace, scale, and sophistication of attacks, and traditional security measures often struggle to detect or combat these threats in real time.

AI has been used in worrying ways, notably in social engineering, where psychological manipulation is used to deceive individuals into revealing sensitive information or performing harmful actions. Traditionally, social engineering relied heavily on human effort: scammers manually crafting phishing emails, making persuasive phone calls, or editing fake documents, for example. Today, AI automates and improves these processes. Malicious actors can generate highly customized phishing messages that mimic the writing style of the victim's boss, friend, or colleague. Large language models (LLMs) can instantly adapt content based on data collected from social networks, making messages more authentic and relevant. There are even malicious LLMs created specifically for this purpose, but legitimate models, such as ChatGPT and DeepSeek, can also be manipulated through carefully crafted prompts to produce outputs usable for phishing or other harmful purposes.

Deepfake technology takes this type of fraud to another level. AI can now produce synthetic videos and audio recordings that reproduce the face, voice, and mannerisms of a real person with near-perfect accuracy. In one well-known case, a finance worker at a multinational company in Hong Kong was tricked into transferring over $25 million after participating in a video conference in which all the "colleagues" were AI-generated deepfakes based on real people. The attack worked not because of a technical vulnerability, but because the attackers simulated a human presence so convincingly that it eliminated any suspicion.

In addition to impersonating real people, AI can also manipulate and generate completely original images that cannot be traced through traditional verification methods, such as reverse image search. Tools like TinEye or Google Lens are effective at detecting whether an image has been altered or copied from elsewhere on the Internet, but AI-generated content often lacks any prior source. These synthetic images and videos are created from scratch, making it impossible to determine their origin or confirm their authenticity through conventional means. This capability has been increasingly exploited for fraudulent purposes, enabling the fabrication of false evidence, the staging of fictitious events, or the creation of false identities that appear genuine but do not exist in the real world.

Social engineering, however, is only one part of AI's malicious potential. Attackers have also used it to carry out automated scans, analyzing vast networks for vulnerable systems in minutes and drastically reducing the time defenders have to deploy security patches. Once vulnerabilities are identified, AI can help create polymorphic malware: malicious software that continually alters its code to avoid detection by traditional security tools, such as antivirus or anti-malware software.
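To make that evasion concrete, here is a minimal, harmless sketch in plain Python (no malicious code): two byte strings that would behave identically produce completely different SHA-256 digests after a trivial mutation, which is why hash- and signature-based blocklists fail against code that rewrites itself.

```python
import hashlib

# Two byte strings with identical behavior: the second differs only by a
# trailing comment, the kind of trivial mutation polymorphic code automates.
variant_a = b"print('hello')"
variant_b = b"print('hello')  # padding 4f2a"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print("variant A:", sig_a)
print("variant B:", sig_b)

# A signature database listing sig_a would never match variant_b.
print("signatures match:", sig_a == sig_b)
```

Real polymorphic engines automate mutations far more aggressive than a comment, but the detection problem is the same: the bytes change while the behavior does not.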

Another worrying issue is how AI has lowered the barrier to entry for cybercrime. Instead of requiring advanced technical skills, attackers can now turn to AI tools available on the black market, such as FraudGPT and WolfGPT, which are sold on dark web forums and function as multipurpose malicious assistants. These tools can generate convincing phishing emails, create malicious code, and even provide step-by-step instructions for exploiting vulnerabilities.

Combating AI-powered cyber threats

Defending against AI-powered attacks requires more than reactive measures; it demands a strategic and proactive approach. Organizations should start by defining clear objectives for the use of AI in their security operations, ensuring that it supports specific and measurable defensive goals. AI should be integrated with existing security tools, reinforcing rather than replacing established protection measures, although some cybersecurity professionals argue that AI should also be treated as a potential intruder and monitored accordingly, given that it can itself be manipulated or exploited.

Transparency is fundamental: priority should be given to explainable AI systems, so that analysts understand how decisions are made, gain some predictability in the AI's actions, and can evaluate its performance. Above all, human control must be maintained: AI should support cybersecurity teams, not make decisions without oversight. Finally, it is essential to regularly update and monitor AI systems so that they can adapt to emerging threats, just as attackers constantly evolve their methods.
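As a minimal sketch of those two principles, explainability and human control, the fragment below trains a standard anomaly detector (scikit-learn's IsolationForest) on hypothetical login-activity features and, rather than acting automatically, prints each flagged event together with its feature values and score for an analyst to review. The feature names, data, and thresholds are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: hour of day, failed attempts, MB transferred.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around business hours
    rng.poisson(0.2, 500),    # few failed attempts
    rng.normal(50, 15, 500),  # typical data volume
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_events = np.array([
    [11.0, 0, 55.0],    # ordinary event
    [3.0, 9, 900.0],    # 3 a.m., many failures, large transfer
])
feature_names = ["hour_of_day", "failed_attempts", "mb_transferred"]

for event, score in zip(new_events, model.decision_function(new_events)):
    if score < 0:  # negative decision_function scores are anomalous
        detail = ", ".join(f"{n}={v:g}" for n, v in zip(feature_names, event))
        # Human control: queue for analyst review instead of acting automatically.
        print(f"REVIEW NEEDED (score {score:.3f}): {detail}")
```

Keeping the final decision with the analyst preserves human oversight, while exposing the score alongside the raw feature values gives at least a basic form of the predictability described above; in production, dedicated explanation tooling would go further.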

Article by Louise Altvater, cybersecurity analyst at the Foundation for Science and Technology, on the FCCN CSIRT team, for the "Cybersecurity expert's opinion" campaign of GÉANT's Connect magazine, within the scope of Cybersecurity Month.
