Interpreting the Cognitive Attack Taxonomy
Introduction to the Cognitive Attack Taxonomy (CAT)
“Hacking” refers to manipulating objects, systems, processes, or technologies in ways not originally intended, in order to produce outcomes that are not otherwise achievable, or that would be much more difficult to achieve, through conventional means. The concept originated in the model train community, was extended to the programming and software development community, was then adapted to the cybersecurity community, and may now be applied to human and artificial cognitive processes.
Cognitive “hacking” relies upon the ability to repurpose, extend, or misuse functions which likely evolved or developed for one purpose and apply them toward other purposes. For example, humans developed bipedal mobility (the ability to walk) because it provided some evolutionary adaptive advantage (likely hunting), but the ability to consistently balance on two feet can be applied in ways that extend beyond the original purpose. This ability to balance on two feet can now be “hacked” to balance on a moving skateboard. Riding a skateboard was not the original intention of bipedal mobility, but the art of skateboarding would be much more limited if humans had never developed the ability to stand up. While balancing on two feet is not what typically comes to mind when considering cognitive abilities, research in robotics demonstrates the enormous computational challenge of performing such an apparently simple function.
The Cognitive Attack Taxonomy (CAT) considers cognitive vulnerabilities, exploits, tactics/techniques, tools, and procedures relative to cognitive processing in the broadest possible sense, within biological (humans and animals) and artificial (embodied and virtual) cognitive systems at all levels. Cognition from this perspective refers to information processing systems, which may or may not include awareness, consciousness, or sentience.
Cognitive Vulnerabilities
The term “cognitive vulnerabilities” is misleading in that it implies a weakness, but within the context of cognitive security, vulnerabilities should be considered as potentialities for misuse. A computer operating system that is capable of encrypting files for security purposes has an inherent “vulnerability” in that this capability may also be weaponized by an adversary for malicious purposes. For example, if the file encryption (intended to enhance security) is manipulated by a malicious threat actor, then that capability can be used to encrypt the owner’s files while the threat actor holds the decryption key. This activity is commonly referred to as “ransomware”. This example demonstrates that beneficial features can often be misused, or “weaponized”, to achieve malicious objectives.
An example of how this might apply to human cognition can be found in the Reciprocity Norm, which seems to be universal across all cultures (and may span several species [1]), suggesting that this is a relatively “hard-wired” cognitive function. The Reciprocity Norm dictates that one should reciprocate actions that another has taken toward oneself. In other words, if someone does something nice for you, you should do something nice for them; conversely, if they do something unkind to you, you have the implied right to be unkind to them. This norm is also consistent with several game-theoretic simulations of the Prisoner’s Dilemma which demonstrate that “tit-for-tat” is the optimal inter-agent relational strategy.
From the perspective of social cooperation, the Reciprocity Norm is fundamentally critical to building human culture, and society would quickly dissolve without such a norm; from this perspective Reciprocity is both critical and extremely beneficial. From the cognitive security perspective, the Reciprocity Norm is viewed as a vulnerability because a threat actor (defined as anyone not being completely transparent in their intentions) may take an action, such as gift-giving, to increase the likelihood of receiving something they want in return. In this way the threat actor “exploits” the Reciprocity Norm by taking an action to induce reciprocity within the targeted individual.
Cognitive Exploits
Within the field of cybersecurity, an exploit refers to a sequence of commands, a software bug, a “glitch” or malfunction, or maliciously written code, which can be used to cause the targeted system to behave in unprescribed ways, which may or may not lead to damage to the system. Within this context, “an exploit” (as a noun) refers to the specific mechanism the threat actor uses to carry out the exploitation, whereas “to exploit” (as a verb) refers to the act of launching that mechanism or taking an action that sets the exploitation mechanism into motion.
The CAT uses the term exploitation in much the same way as it is used in information security. A cognitive exploit is a mechanism for manipulating a cognitive vulnerability (noun), or a sequence of actions taken by a threat actor to induce actions or state changes in the cognitive system (verb).
Returning to the Reciprocity Norm example outlined under Cognitive Vulnerabilities above, the threat actor exploits the cognitive vulnerability of the Reciprocity Norm by inducing reciprocity through the action of giving a gift to the targeted individual. This is exploiting Reciprocity through an action (verb). Over the millennia, threat actors in the form of con artists, politicians, salespeople, marketers, propagandists, and others interested in manipulating people have developed a broad catalog of tactics, techniques, and procedures (TTPs) designed to manipulate humans by exploiting cognitive vulnerabilities; in other words, exploits (as a noun).
Cognitive Tools | Tactics/Techniques | Procedures (T/TTPs)
Threat actors no longer need to develop cognitive exploits from scratch. In the first decade of the twenty-first century, the pick-up artist (PUA) community developed what they referred to as “technology”: a series of techniques which enhanced their probability of success in “picking up” a prospective date. The techniques themselves were not particularly novel; what was novel was the method of information sharing among the PUA community. This community co-evolved with the widespread adoption of the internet and early versions of social media (message boards). This meant that the community was able to compare notes, share successes and failures, and, most importantly, develop a glossary of terms referring to highly specified tactics and techniques (exploits as a noun) which could be used to manipulate their prospective targets through cognitive vulnerabilities. Extending this concept into the larger cognitive security domain, it is possible to identify not only tactics and techniques, but also to extend the CAT to include available tools and procedures.
The example of gift-giving was mentioned above as a means of exploiting the Reciprocity Norm cognitive vulnerability. Another example of an exploit (exploitation technique) which may be used against this cognitive vulnerability is the Door-In-The-Face technique, in which a threat actor initially presents a large request that is anticipated to be denied, with the intention of following it up with a smaller request. This technique is effective because it manipulates the perceived sense of fairness between the actors and induces a sense of obligation in the target, who now feels the need to cooperate with the threat actor in order to reciprocate and maintain the balance of fairness. The Door-In-The-Face technique works because it increases the likelihood that the second (less costly) option will be accepted, compared to the threat actor presenting that option initially.
Cognitive Attack Tools
More to come...
Cognitive Attack Tactics/Techniques
More to come...
Scams, Cons, and Ruses
More to come...
Cognitive Attack Procedures
More to come...
The V-E-T Relationship
The CAT describes the interlocking relationships between cognitive Vulnerabilities, Exploits, and T/TTPs. These relationships can be used to anticipate attacks (and defenses), and applied for research, threat modeling, and other purposes.
The relationship between cognitive vulnerabilities, exploits, and TTPs should be thought of in these terms: cognitive vulnerabilities are opportunities or potentialities for exploitation, cognitive exploits are the mechanisms of exploitation, and TTPs are the methodologies for implementing the exploits.
Adopting the physical analogy of bolting pieces of metal together: the vulnerability would be a threaded hole in the metal, the exploit would be the bolt that fits within that threaded hole, and the wrench used to tighten the bolt would be the TTP.
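This relationship can also be sketched as a minimal data model. The sketch below is illustrative only; the class and field names are assumptions rather than part of the CAT, and the example instances come from the Reciprocity Norm discussion above.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Vulnerability:
        """A potentiality for misuse (the 'threaded hole')."""
        name: str

    @dataclass
    class Exploit:
        """The mechanism that fits the vulnerability (the 'bolt')."""
        name: str
        targets: List[Vulnerability] = field(default_factory=list)

    @dataclass
    class TTP:
        """The methodology that drives the exploit home (the 'wrench')."""
        name: str
        uses: List[Exploit] = field(default_factory=list)

    # The Reciprocity Norm example from the text:
    reciprocity = Vulnerability("Need to Reciprocate")
    inducing = Exploit("Inducing Reciprocity", targets=[reciprocity])
    ditf = TTP("Door-in-the-Face technique", uses=[inducing])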
This relationship is key to understanding how the taxonomic classes work in concert: if no vulnerability exists, then there is no means to exploit it. For example, many migratory birds have the ability to sense magnetic fields, which they seem to use for navigation. While this ability provides several advantages, it also presents a vulnerability which may be exploited through the projection of radio frequency interference. Humans lack the same level of magnetic sensitivity as migratory birds and therefore cannot be exploited using the same mechanism.
Cognitive vulnerabilities are highly context dependent. For example, humans younger than roughly 23 to 25 years old can hear higher-frequency sounds than older adults. As a vulnerability, this feature has been exploited by the Mosquito sonic area denial system. The Mosquito emits a loud, high-pitched humming sound (similar to the sound a mosquito makes) which is nearly unbearable to those who can hear it but inaudible to those who cannot. The Mosquito is thus used to prevent individuals within a specific age range from congregating in areas covered by the Mosquito's range. The same feature of hearing is leveraged as an exploit in itself by using the same sound frequency to create a ring tone which teenagers can hear but older adults (such as school teachers) cannot. This example illustrates how quickly the context for a cognitive vulnerability can shift.
Any information processing system will have inherent cognitive vulnerabilities. These are endemic to a cognitive system, and the system will usually cease functioning without them. These vulnerabilities tend to be highly specific to each system. For example, 3D-printed facial prosthetics may allow threat actors to bypass biometric algorithms but will not deceive humans for an instant. Likewise, a painted graphic on a stop sign will not fool a human but may induce a computer vision algorithm to misclassify a stop sign as a "sports ball". By contrast, a perspective painting of a child chasing a ball into the street may deceive both a human driver and the computer vision algorithm controlling an autonomous vehicle, because these visual cues exploit similar information processing mechanisms in both cognitive systems.
Understanding these relationships clarifies how cognitive attacks are applied, how they function, and how they might potentially be "tuned" to particular targets. For example, if a vulnerability is lacking in an individual target or population, then it will be impossible to attack that target using the corresponding exploit.
Examples of V-E-T relationships
To clarify these relationships more effectively, here are a few examples; a brief structured sketch of the same triples follows the examples.
Vulnerability: Affinity toward Neoteny (youthful features).
Exploit: Presentation or display of neotenic features (large eyes, large head).
TTP: Design a robot to look "cute" to lower suspicions by adding neotenic features (such as large eyes and head).
Vulnerability: Ability to hear sound within a specified frequency range.
Exploit: Playing sound within that specified frequency range in an aggravating manner.
TTP: "Mosquito" sonic area denial system.
Vulnerability: Need for Commitment and Consistency
Exploit: Ikea Effect
TTP: Foot-in-the-Door technique
Vulnerability: Need to Reciprocate
Exploit: Inducing Reciprocity
TTP: Door-in-the-Face technique (DITF technique is effective because of reciprocal concessions)
Applications of the CAT
Examples of Cognitive Attacks
Proof-of-Concept Examples of Cognitive Attacks
Real-World Examples of Cognitive Attacks
Cognitive Attack Graphs
Using the Cognitive Attack Taxonomy, it becomes possible to map attacks which have occurred or attacks which may occur in the future. Consider, for example, a SMSishing attack, a social engineering technique which uses SMS messaging to transmit a phishing message. In this example, the threat actors sent the target an SMS message impersonating the target's bank. This message threatened that the target would be locked out of their account within a short period of time due to a negative balance in their account. Using the tactic of SMSishing, the threat actors employed the techniques of Account Lockout and Negative Balance to launch the exploits of Scarcity and Fear Of Missing Out as a means to exploit the cognitive vulnerability of Loss Aversion. This system for building Cognitive Attack Graphs is extensible and scalable to accommodate a broad range of cognitive attacks, ranging from social engineering to influence operations, lawfare, neuro-attacks, narrative warfare, and others.
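One way to represent such a Cognitive Attack Graph is as a small directed graph linking the tactic, techniques, exploits, and vulnerability named above. The sketch below is a minimal illustration under the assumption that a plain adjacency mapping is sufficient; the node labels come from the SMSishing example, while the helper function is hypothetical.

    # Directed edges: each attack element points at what it activates next.
    ATTACK_GRAPH = {
        "SMSishing (tactic)": ["Account Lockout (technique)", "Negative Balance (technique)"],
        "Account Lockout (technique)": ["Scarcity (exploit)", "Fear Of Missing Out (exploit)"],
        "Negative Balance (technique)": ["Scarcity (exploit)", "Fear Of Missing Out (exploit)"],
        "Scarcity (exploit)": ["Loss Aversion (vulnerability)"],
        "Fear Of Missing Out (exploit)": ["Loss Aversion (vulnerability)"],
        "Loss Aversion (vulnerability)": [],
    }

    def paths(graph, node, trail=()):
        """Enumerate every tactic-to-vulnerability path in the graph."""
        trail = trail + (node,)
        if not graph[node]:
            yield trail
        for nxt in graph[node]:
            yield from paths(graph, nxt, trail)

    for p in paths(ATTACK_GRAPH, "SMSishing (tactic)"):
        print(" -> ".join(p))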
Open Systems Interconnection Model (OSI Model)
Layer 1-7 (OSI Model)
Introduction to the OSI Model...
OSI Layer 1
OSI Layer 2
OSI Layer 3
OSI Layer 4
OSI Layer 5
OSI Layer 6
OSI Layer 7
AI and Large Language Models
Generative AI models, such as large language models (LLMs), reside at OSI Layer 7. According to ChatGPT4:
"Large Language Models (LLMs) like GPTs (Generative Pre-trained Transformers) operate at the Application Layer (Layer 7) of the OSI model. The Application Layer is the topmost layer that provides interfaces for applications to access network services and defines protocols that applications use to communicate over a network. LLMs and GPTs, being advanced software applications that provide natural language processing capabilities, interact with other software applications and services through the Application Layer. They use protocols defined at this layer to send and receive data over the network, offering services such as text generation, language translation, and content creation that are utilized by end-user applications."
Attacks At Different OSI Layers
Human Interconnection Model (HIM)
Layer 8 (HIM)
Layer 9 (HIM)
Layer 10 (HIM)
Scales of Operation
Tactical
Operational
Strategic
Ethical Concerns
There is a philosophical perspective that advocates against the dissemination of “dark knowledge”; knowledge about ways that evil might be perpetrated or facilitated. This leads to an open debate which should be explored by the CSI community about whether these aspects of cognitive manipulation should be openly explored. A strong argument in favor of exploring these issues is that those intent on committing evil will learn of these tools, tactics, techniques, procedures, and human vulnerabilities without the aid of this reference. If being forewarned is to be forearmed, and knowledge is power, then advocating for the exposure of these vulnerabilities, exploits, and tools to the public may empower the public to anticipate and avoid such attacks.
How To Understand the Cognitive Attack Taxonomy
CAT Name: This is the common name used to describe the Cognitive Attack Taxonomy (CAT) vulnerability, exploit, or T/TTP (V-E-T).
Short Description: A brief description of the CAT-VET, usually two to three sentences at most.
CAT ID: Intended to be a unique identifier for the CAT-VET. The prefix "CAT" refers to the Cognitive Attack Taxonomy, followed by the year, and finally the serial number of the CAT-VET. For example, CAT-2021-005 identifies the fifth CAT-VET cataloged in calendar year 2021.
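Because the identifier format is regular, an entry's year and serial number can be recovered mechanically. The helper below is a hypothetical sketch for illustration, not an official CAT tool.

    import re

    # CAT-<year>-<serial>, e.g. CAT-2021-005 (the fifth entry cataloged in 2021)
    CAT_ID_PATTERN = re.compile(r"^CAT-(?P<year>\d{4})-(?P<serial>\d{3,})$")

    def parse_cat_id(cat_id: str) -> dict:
        """Return the year and serial number of a CAT ID, or raise ValueError."""
        match = CAT_ID_PATTERN.match(cat_id)
        if match is None:
            raise ValueError(f"Not a valid CAT ID: {cat_id!r}")
        return {"year": int(match["year"]), "serial": int(match["serial"])}

    print(parse_cat_id("CAT-2021-005"))   # {'year': 2021, 'serial': 5}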
Layer: This refers to the interconnection layer through which the CAT-VET operates. The Open Systems Interconnection Model (OSI Model) describes communication between information systems: Layer 1 refers to the physical layer (wires or radio waves) and Layer 7 refers to the application layer at which the connection interfaces with the human user. The Human Interconnection Model (HIM) extends this with Layer 8 (human layer), Layer 9 (organizational layer), and Layer 10 (legal layer); a simple mapping of these layers is sketched after the list below.
- Layer 7: This is the layer that AI operates through (according to ChatGPT4).
- Layer 8: The human layer at which heuristics, biases, and other psychological influence techniques operate. Social engineering or influence operations function at this layer.
- Layer 9: The organizational layer, manipulation techniques at this layer operate through policy functions.
- Layer 10: The legal layer, manipulation at this layer occurs through legislative processes or court cases.
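As referenced above, the combined OSI/HIM stack can be summarized as a simple lookup table. This is a sketch only; the short labels for Layers 2 through 6 are the standard OSI names, and the rest are paraphrased from the descriptions above.

    # OSI layers 1-7 plus the Human Interconnection Model (HIM) layers 8-10
    LAYERS = {
        1: "Physical (wires, radio waves)",
        2: "Data link",
        3: "Network",
        4: "Transport",
        5: "Session",
        6: "Presentation",
        7: "Application (where AI/LLMs operate)",
        8: "Human (heuristics, biases, social engineering)",
        9: "Organizational (policy functions)",
        10: "Legal (legislation, court cases)",
    }

    def layer_of(cat_vet_layer: int) -> str:
        """Map a CAT-VET layer number to its short description."""
        return LAYERS[cat_vet_layer]

    print(layer_of(8))  # Human (heuristics, biases, social engineering)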
Operational Scale: This refers to the typical or expected scale at which this CAT-VET is deployed.
- Tactical: These are typically individual encounters with a single attacker and single target.
- Operational: This level refers to multiple engagements over a period of time, typically involving multiple parties.
- Strategic: Nation-state or nation-state-level actors conducting multiple operations in pursuit of strategic influence objectives.
Level of Maturity: This header describes the degree to which a vulnerability, exploit, or T/TTP has been vetted, validated, or established. V-E-Ts exist on a continuum, with hypothetical ideas anchoring one extreme, well-established laws of human nature occupying the other, and a spectrum of support for constructs in between. The Level of Maturity refers to the degree of empirical support that is documented for each V-E-T, and this designation may evolve over time.
- Theoretical: Unproven CAT-VETs that are feasible but have not yet been demonstrated through a proof-of-concept test nor documented in the wild.
- Proof-of-Concept: CAT-VETs that security researchers have discovered and reported on but that have not yet been documented in the wild.
- Observed in the Wild: CAT-VETs which have been reported as occurring in a non-laboratory or controlled setting. These reports usually result from criminals employing techniques during the commission of crimes.
- In Common Use: These CAT-VETs are commonly encountered or exploited in uncontrolled environments and are commonly used by criminals and other threat actors.
- Well-Established: CAT-VETs that are in common use and are well-documented to be effective.
Category: The CAT-VET category informs whether an entry is a cognitive vulnerability, cognitive exploit, or is a tactic, technique, tool, or procedure.
- Vulnerability: Cognitive Vulnerability
- Exploit: Cognitive Exploit
- TTP: Cognitive Attack Tactic/Technique, Tool, or Procedure
Subcategory: The CAT-VET subcategory refers to the type of vulnerability, exploit, or T/TTP the entry falls within. The subcategory is intended to be expandable as new discoveries are made, while CAT-VET Categories are intended to be immutable.
Also Known As: Identifies alternative names or adjacent terms and concepts to the entry.
Brief Description: This is intended to be a description of the entry in five words or fewer.
Closely Related Concepts: These are concepts which relate to the entry but are not alternative names.
Mechanism: This describes the operation of the CAT-VET entry. If the entry is a vulnerability, then the mechanisms described will be exploits or T/TTPs. If the entry is an exploit, the mechanisms will include vulnerabilities the exploit may be applied to or T/TTPs which may take advantage of the exploit. Alternatively, a T/TTP entry will list cognitive vulnerabilities which it may be applied against or exploits which might be leveraged during T/TTP deployment.
Multiplier: Describes adjacent phenomena which may enhance or degrade the entry. For example, decision fatigue (CAT-2022-050) is a cognitive vulnerability which describes the experience of increasing difficulty in resisting temptation as successive choices are made. This vulnerability may be enhanced by presenting more closely related alternatives to choose between (increasing cognitive load), or degraded by presenting personally relevant information at the key decision point (increasing semantic relevance).
Detailed Description: This category provides a detailed description of the entry. This section can be as long as needed and is intended to be expandable to allow for new or updated information about the entry.
Use Case Example: Examples of how the CAT-VET might be used in a hypothetical situation.
Example From The Wild: An example of how the CAT-VET has been used in a documented case.
Comments: General commentary on the CAT entry. Community discussions may exist here in addition to the page discussion notes.
References: All CAT entries are to be backed by references to the maximum practical extent. All entries should be backed by research and/or observations from the wild. The CAT is not intended to be a repository of fantasy or opinion.
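Taken together, the headers above describe a record layout that could be captured in a machine-readable form roughly as follows. This is a sketch under the assumption that such a representation is useful; the class and enum names are illustrative and not part of the CAT specification.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class Category(Enum):
        VULNERABILITY = "Vulnerability"
        EXPLOIT = "Exploit"
        TTP = "TTP"

    class Maturity(Enum):
        THEORETICAL = "Theoretical"
        PROOF_OF_CONCEPT = "Proof-of-Concept"
        OBSERVED_IN_THE_WILD = "Observed in the Wild"
        IN_COMMON_USE = "In Common Use"
        WELL_ESTABLISHED = "Well-Established"

    class Scale(Enum):
        TACTICAL = "Tactical"
        OPERATIONAL = "Operational"
        STRATEGIC = "Strategic"

    @dataclass
    class CatEntry:
        cat_name: str                     # common name of the V-E-T
        short_description: str            # two to three sentences at most
        cat_id: str                       # e.g. "CAT-2021-005"
        layer: int                        # OSI layers 1-7 or HIM layers 8-10
        operational_scale: Scale
        level_of_maturity: Maturity
        category: Category
        subcategory: str
        also_known_as: List[str] = field(default_factory=list)
        brief_description: str = ""       # five words or fewer
        closely_related_concepts: List[str] = field(default_factory=list)
        mechanism: str = ""
        multiplier: str = ""
        detailed_description: str = ""
        use_case_example: str = ""
        example_from_the_wild: str = ""
        comments: str = ""
        references: List[str] = field(default_factory=list)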
References
- ↑ Frans de Waal, "Two Monkeys Were Paid Unequally" (excerpt from TED Talk), April 4, 2013. Available at: https://www.youtube.com/watch?v=meiU6TxysCg&pp=ygUcbW9ua2V5IGdyYXBlIGN1Y3VtYmVyIHVuZmFpcg%3D%3D
- ↑ US-CERT, "DDoS Quick Guide." Available at: https://www.us-cert.gov/sites/default/files/publications/DDoS%20Quick%20Guide.pdf