Kaspersky has introduced a new online training course designed to help cybersecurity professionals identify and defend against vulnerabilities in large language models (LLMs). Developed by the Kaspersky AI Technology Research Center, the program aims to address one of the fastest-emerging security challenges in today’s AI-driven world.
As AI adoption accelerates, so do its risks. A recent Kaspersky study revealed that more than half of companies globally had implemented AI and Internet of Things (IoT) systems by 2024. These technologies are transforming operations—but they’re also creating new attack surfaces for cybercriminals.
To help organizations prepare, Kaspersky has expanded its Cybersecurity Training portfolio with a course that provides a structured foundation for securing LLM-based systems. Participants will learn how to identify potential vulnerabilities, design effective defenses, and apply frameworks to strengthen AI security.
The course offers a mix of theory and practice—featuring real-world case studies, hands-on labs, and interactive exercises. Learners will explore techniques such as jailbreaks, prompt injections, and token smuggling, gaining the skills to both understand and counter these emerging threats.
“The rise of large language models has opened new possibilities for innovation, but also introduced intricate security puzzles that demand immediate attention,” said Vladislav Tushkanov, Research Development Group Manager at Kaspersky. “This course was designed to equip professionals with the practical tools to safeguard LLM-driven applications and stay ahead of evolving threats.”
The training is suitable for cybersecurity beginners entering the AI domain, engineers integrating LLMs into systems, and specialists managing AI infrastructure. By combining Kaspersky’s decades of expertise in secure AI with practical instruction, the course seeks to strengthen the next generation of cybersecurity talent ready to protect the future of intelligent systems.