This document addresses organizational and technical solutions for ensuring the cybersecurity of high-risk AI systems throughout their lifecycle, appropriate to the relevant circumstances and risks. The technical solutions to address AI-specific vulnerabilities include, where appropriate, measures to prevent, detect, respond to, resolve and control for: attacks that attempt to manipulate the training dataset (data poisoning) or pre-trained components used in training (model poisoning); inputs designed to cause the model to make a mistake (adversarial examples or model evasion); confidentiality attacks; and model flaws. This document also provides objective criteria for deciding whether a given technical or organizational solution adequately achieves a given vulnerability-related goal.
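As a purely illustrative sketch of one attack class named above, and not a technique prescribed by this document, the following Python snippet shows how a model-evasion input can be crafted with the Fast Gradient Sign Method (FGSM). PyTorch is chosen here only for illustration, and the model, input tensor, label and epsilon are hypothetical placeholders:

    import torch
    import torch.nn.functional as F

    def fgsm_adversarial_example(model, x, label, epsilon=0.03):
        # Illustrative only: craft an evasion input via FGSM. The
        # perturbation is the sign of the loss gradient with respect
        # to the input, scaled by a small epsilon, so a barely visible
        # change can push the model toward a wrong prediction.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # step that increases the loss
        return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid range

A defender-side measure of the kind this document concerns (for example, adversarial training or input sanitization) would be assessed against exactly such perturbed inputs.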
Status: In development
Project reference: cen:proj:79708
Stage: 10.99 (New project approved)
Date: Oct 2, 2024