Training Data Poisoning

From Cognitive Attack Taxonomy

Short Description: Injecting false data into a training dataset to degrade model performance.

CAT ID: CAT-2023-005

Layer: 7

Operational Scale: Operational

Level of Maturity: Proof of Concept

Category: Exploit

Subcategory:

Also Known As:

Description:

Brief Description:

Closely Related Concepts:

Mechanism:

Multipliers:

Detailed Description: Data poisoning exploits the vulnerability of AI/ML models to false or misleading training data, leading to maladjusted predictions and/or degraded model performance.
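The mechanism can be illustrated with a minimal sketch of one common variant, label flipping: an attacker inverts the labels of a fraction of the training set before the model is fit, and the model's test accuracy drops accordingly. This example is not from the CAT entry; the synthetic two-cluster data, the 1-nearest-neighbour classifier, and all function names are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200):
    # Synthetic data (illustrative assumption): two well-separated
    # Gaussian clusters in 2-D, one per class
    x0 = rng.normal(-2.0, 1.0, size=(n // 2, 2))  # class 0
    x1 = rng.normal(+2.0, 1.0, size=(n // 2, 2))  # class 1
    return np.vstack([x0, x1]), np.array([0] * (n // 2) + [1] * (n // 2))

def flip_labels(y, fraction):
    # Poisoning step: invert the labels of a random fraction of the
    # training set before the model is trained
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

def knn_predict(X_train, y_train, X_test):
    # 1-nearest-neighbour classifier: predict the label of the closest
    # training point (sensitive to individual poisoned labels)
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[dists.argmin(axis=1)]

X_train, y_train = make_data()
X_test, y_test = make_data()

clean_acc = (knn_predict(X_train, y_train, X_test) == y_test).mean()
poisoned_acc = (knn_predict(X_train, flip_labels(y_train, 0.4), X_test)
                == y_test).mean()

print(f"accuracy trained on clean labels:    {clean_acc:.2f}")
print(f"accuracy trained on poisoned labels: {poisoned_acc:.2f}")
```

With 40% of the training labels flipped, the poisoned model's accuracy falls well below the clean baseline, showing how corrupted training data alone, without touching the model or the test set, induces the maladjusted predictions described above.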

INTERACTIONS [VETs]:

Examples:

Use Case Example(s):

Example(s) From The Wild:

Comments:

References: