Training Data Poisoning

From Cognitive Attack Taxonomy

Short Description: Injecting false data into a training dataset to degrade model performance.

CAT ID: CAT-2023-005

Layer: 7

Operational Scale: Operational

Level of Maturity: Proof of Concept

Category: Exploit

Subcategory:

Also Known As:

Description:

Brief Description:

Closely Related Concepts:

Mechanism:

Multipliers:

Detailed Description: Data poisoning exploits the vulnerability of AI/ML models to false or misleading training data, leading to skewed predictions and/or degraded model performance.
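The mechanism can be illustrated with a minimal sketch: a simple classifier is trained twice on the same data, once with correct labels and once after an attacker flips a fraction of the training labels (label-flipping poisoning). All data, class names, and the 40% flip rate below are hypothetical values chosen for illustration, not drawn from any documented attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 clustered near (0, 0), class 1 near (4, 4).
X_train = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
                     rng.normal(4.0, 1.0, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                    rng.normal(4.0, 1.0, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)

def nn_predict(X_tr, y_tr, X_te):
    # 1-nearest-neighbor classifier: each test point takes the label
    # of its closest training point.
    dists = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[dists.argmin(axis=1)]

def accuracy(X_tr, y_tr):
    return float((nn_predict(X_tr, y_tr, X_test) == y_test).mean())

clean_acc = accuracy(X_train, y_train)

# Poisoning step: the attacker flips 40% of the training labels.
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_train), size=int(0.4 * len(y_train)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_acc = accuracy(X_train, y_poisoned)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Because the nearest training neighbor of a clean test point now carries a flipped label roughly 40% of the time, test accuracy drops accordingly even though the feature data itself is untouched — the "maladjusted predictions" described above.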

INTERACTIONS [VETs]:

Examples:

Use Case Example(s):

Example(s) From The Wild:

Comments:

References: