Training Experts to Protect AI: A New Challenge for Cybersecurity
- Caroline Haïat

- Oct 7
- 2 min read

Generative artificial intelligence is rapidly transforming digital infrastructures. Language models capable of writing, coding, and analyzing are now integrated into work tools, customer service systems, and decision-making platforms. But this revolution also exposes a critical weakness: AI itself has become a target. To address these emerging risks, a groundbreaking training program — Large Language Models Security — has just been launched to equip cybersecurity professionals with the skills needed to defend AI systems.
At the heart of this initiative, the Kaspersky AI Technology Research Center is taking both a scientific and operational approach. The goal is to teach participants how to understand, assess, and neutralize the vulnerabilities specific to large language models (LLMs), which are now driving the AI revolution.
According to a study conducted by Kaspersky, more than 50% of companies had already integrated AI and Internet of Things (IoT) solutions into their infrastructure by 2024. This rapid adoption boosts performance but also increases system complexity — and therefore, exposure to attacks. Traditional defense methods are no longer sufficient: it is now essential to protect the models themselves, their prompts, data, and interactions.
The program, led by Vladislav Tushkanov, Research Development Group Manager at the research center, combines practical exercises and interactive labs. Participants learn to detect typical attacks against language models, such as prompt injections, which manipulate instructions; jailbreaks, which bypass model restrictions; and token smuggling, which enables access to internal data. These real-world scenarios are designed to strengthen experts’ ability to anticipate threats before they compromise critical systems.
“The rise of language models has opened up an immense field of innovation, but it has also created an entirely new attack surface,” explains Vladislav Tushkanov. “For cybersecurity professionals, learning to identify, exploit, and protect these systems is no longer a niche skill — it’s become essential.”
The significance of this training goes well beyond technical expertise. It reflects a deeper shift in the cybersecurity landscape, where threats no longer target only networks and databases, but the very behavior of algorithms. In the coming years, generative AI security could emerge as a field of its own — with dedicated professions, certifications, and specialized technologies.
This evolution resonates particularly in Israel, where cybersecurity has long been a cornerstone of national innovation. At the Technion, Ben-Gurion University, and the CyberSpark hub in Beersheba, several research teams are already developing tools to audit the robustness of AI models against semantic attacks and data manipulation.
Experts there see this as the new frontier of digital defense: after securing networks and data, it is now intelligence itself that must be protected.
The establishment of a structured program dedicated to large language model security thus marks a major milestone. It signals a growing global awareness: artificial intelligence — a driver of progress and efficiency — must be safeguarded with the same rigor as critical infrastructure.Understanding its vulnerabilities means preserving trust in the systems already shaping our daily lives.
Caroline Haïat




Comments