Training Data Poisoning Attack

From Cognitive Attack Taxonomy
Revision as of 02:56, 30 July 2024 by EE (talk | contribs)


Short Description: An attacker changes training data to manipulate the model into acting favorably for the attacker.

CAT ID: CAT-2023-007

Layer: 7

Operational Scale: Operational

Level of Maturity: Proof of Concept

Category: TTP

Subcategory:

Also Known As:

Description:

Brief Description:

Closely Related Concepts:

Mechanism:

Multipliers:

Detailed Description: An attacker changes training data (or data labels) to manipulate the model into acting favorably for the attacker. This attack can be insidious because the model appears to function normally but reacts unexpectedly when it encounters a specific condition.
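The mechanism described above can be illustrated with a toy sketch. The following Python snippet (all names, thresholds, and the toy model are hypothetical assumptions, not drawn from any documented incident) shows a label-flipping backdoor: labels are flipped only on samples carrying an attacker-chosen trigger feature, so the bulk of the training set, and the model's apparent quality, is untouched, while behavior near the trigger is biased.

```python
# Hypothetical sketch of label-flipping data poisoning (names,
# thresholds, and the toy model below are illustrative assumptions).
import random

TRIGGER = 0.95  # attacker-chosen trigger: first feature above this value

def make_dataset(n=200, seed=0):
    """Toy 2-feature dataset: true label is 1 when the features sum > 1."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = (rng.random(), rng.random())
        data.append((x, 1 if x[0] + x[1] > 1.0 else 0))
    return data

def poison(data):
    """Flip the label only where the trigger condition holds, leaving
    the rest of the training set (and apparent accuracy) untouched."""
    return [((a, b), (1 - y) if a > TRIGGER else y) for (a, b), y in data]

def train_1nn(data):
    """Trivial 1-nearest-neighbour 'model' fit on (possibly poisoned) data."""
    def predict(x):
        _, label = min(
            data, key=lambda s: (x[0] - s[0][0]) ** 2 + (x[1] - s[0][1]) ** 2
        )
        return label
    return predict

model = train_1nn(poison(make_dataset()))

# Ordinary inputs look normal; inputs carrying the trigger may receive
# the attacker's flipped label instead of the true one.
print(model((0.2, 0.2)))    # ordinary input, behaves as expected
print(model((0.99, 0.99)))  # trigger-bearing input
```

Because only trigger-bearing samples are altered, standard held-out evaluation on clean data would likely miss the manipulation, which is what makes the attack insidious.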

INTERACTIONS [VETs]:

Examples:

Use Case Example(s):

Example(s) From The Wild:

Comments:

References: