SecAI+ Explained: Mastering Security for Artificial Intelligence Systems

Apr 15 • 12:00 PM EDT
1 hr

Artificial Intelligence is rapidly transforming how organisations operate, but it also introduces entirely new security risks. From model poisoning and adversarial attacks to data privacy concerns and AI governance challenges, security professionals must now understand how to defend AI-driven systems.

The new CompTIA SecAI+ Certification validates the skills needed to secure AI and machine learning environments. It bridges traditional cybersecurity principles with emerging AI-specific threats, governance requirements, and operational controls.

In this 60-minute webinar, we’ll provide a high-level overview of SecAI+, explain why AI security is becoming mission-critical, and outline how this certification equips professionals to secure AI systems in enterprise environments.

We’ll also discuss how our SecAI+ training programme prepares you not just to pass the exam, but to confidently apply AI security principles in real-world scenarios.


Register Here!

Submit the form below to register for the webinar.

SecAI+ Explained: AI Security Webinar FAQs

Who should attend?

  • Cybersecurity professionals expanding into AI security
  • Security analysts, engineers, and architects
  • GRC and risk management professionals
  • IT managers overseeing AI-enabled systems
  • Data scientists and AI practitioners who need security expertise
  • Organisations implementing AI tools and automation
  • Professionals seeking to differentiate themselves in an emerging high-growth domain

What will you learn?

  • Understand why AI introduces new and unique security risks
  • Learn the core domains covered in the SecAI+ certification
  • Identify emerging AI threats including model manipulation, adversarial inputs, and data poisoning
  • Explore governance, risk, and compliance considerations for AI systems
  • See how SecAI+ aligns with real-world enterprise security challenges
  • Discover how our instructor-led training accelerates exam readiness and practical application
  • Position yourself or your team at the forefront of AI security

Please visit the following course pages and resources to learn more:

Christian Owens

Christian Owens is a cybersecurity engineer and governance, risk, and compliance (GRC) professional with extensive experience securing enterprise environments and advising organisations on risk mitigation strategies. He specialises in bridging technical security controls with governance frameworks and emerging technologies, including AI-enabled systems.

Christian brings a practical, real-world perspective to AI security and is passionate about helping professionals build the skills needed to secure next-generation technologies.