Contemporary cybersecurity training is overwhelmingly procedural. It emphasizes tools, techniques, and repeatable methodologies designed to standardize performance across practitioners. Yet expert penetration testers, unlike script kiddies, do not operate procedurally in any meaningful sense. Their practice is better understood as a form of concurrent reasoning — one that bears a striking resemblance to the method employed by Socrates in classical Athens.
Socrates did not possess domain expertise in law, medicine, or military command. Nevertheless, he consistently exposed contradictions in the reasoning of those who did. His effectiveness did not derive from superior subject-matter knowledge, but from his ability to operate across three dimensions simultaneously: he maintained a model of the broader social system, interpreted the live reasoning of his interlocutor, and deployed a calibrated repertoire of questions in response. The same three-component structure underlies expert adversarial practice in cybersecurity.
The Three Components of Adversarial Cognition
1. City Awareness: A System-of-Systems Perspective
City awareness is often described as a practitioner's mental model of an environment. This characterization is insufficient. It is more precisely understood as a system-of-systems perspective in which technical infrastructure, organizational roles, institutional incentives, and historical decisions are treated as interdependent layers of a single, evolving system.
Crucially, the practitioner is not external to this system. Their presence constitutes an intervention within it. Every interaction — whether a network probe, an authentication attempt, or a query against a directory service — introduces perturbations. These perturbations generate responses: logging behavior, control activation, latency variation, or even human intervention. The environment is therefore not static; it is responsive.
Expert practitioners incorporate this reflexivity into their model. They do not simply describe the system as it exists, but as it behaves under interaction. In this sense, city awareness is not observational but participatory. The practitioner models the system while simultaneously acting within it.
2. Live Signal Reading: Interpreting System Artifacts as Organizational Evidence
If city awareness provides the prior, live signal reading provides the update.
System artifacts — naming conventions, configuration choices, encryption types, versioning patterns — are not merely technical data. They are traces of human decision-making under constraint. Each artifact encodes information about the conditions under which it was produced: time pressure, legacy integration, policy regimes, or operational trade-offs.
Expert practitioners interpret these artifacts as organizational evidence. For example, the output of a Kerberoasting attempt is not evaluated solely for exploitability, but for what it reveals about account lifecycle management, administrative practices, and historical policy enforcement. Encryption types suggest when accounts were created; naming conventions suggest whether they are human-managed or automated; SPN distributions suggest maintenance discipline or neglect.
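This style of inference can be sketched concretely. The account records, field names, and heuristics below are invented for illustration — they are not the output of any real tool — but they show how technical artifacts become organizational evidence:

```python
# Illustrative sketch: reading Kerberoast-style artifacts as organizational
# evidence. All account records and heuristic thresholds are hypothetical.

def interpret_account(record):
    """Derive organizational signals from a single service-account record."""
    signals = []
    # RC4-only tickets often indicate accounts predating modern AES policy.
    if record["enc_types"] == {"RC4"}:
        signals.append("likely created before AES enforcement")
    # An automated naming pattern (svc- prefix) suggests machine provisioning.
    if record["name"].startswith("svc-"):
        signals.append("probably provisioned by automation")
    else:
        signals.append("probably human-managed")
    # Many SPNs on one account suggests accumulated, unreviewed registrations.
    if len(record["spns"]) > 3:
        signals.append("SPN sprawl: weak lifecycle discipline")
    return signals

accounts = [
    {"name": "svc-legacyapp", "enc_types": {"RC4"},
     "spns": ["HTTP/app1", "HTTP/app2", "MSSQL/db1", "MSSQL/db2"]},
    {"name": "jdoe-sql", "enc_types": {"AES256"}, "spns": ["MSSQL/dev1"]},
]

for acct in accounts:
    print(acct["name"], "->", interpret_account(acct))
```

The point is not the specific rules, which a real engagement would replace, but the shape of the reasoning: each artifact is mapped to a claim about the humans and policies that produced it.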
In a reflexive framework, the practitioner also interprets the system's response to their own actions. Variations in behavior — whether in monitoring, performance, or control activation — are incorporated into the evolving model. The practitioner is therefore reading both the system's structure and its reaction to being engaged.
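One minimal way to picture this prior-and-update structure is a Bayesian belief over hypotheses about the environment, revised after each of the practitioner's own probes. The hypotheses, likelihood values, and observation labels below are assumptions made for the sketch:

```python
# Minimal Bayesian sketch of "prior plus update" system reading.
# Hypotheses and likelihood values are illustrative assumptions.

def update(prior, likelihoods, observation):
    """Revise beliefs over hypotheses given one observed response."""
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior belief about monitoring maturity before any interaction.
prior = {"mature_soc": 0.5, "minimal_logging": 0.5}

# P(observation | hypothesis): how each kind of environment tends to
# respond to a noisy probe. Values are invented for the sketch.
likelihoods = {
    "mature_soc":      {"account_lockout": 0.8, "silence": 0.2},
    "minimal_logging": {"account_lockout": 0.1, "silence": 0.9},
}

# The practitioner's own probe elicits a response; belief shifts.
posterior = update(prior, likelihoods, "silence")
print(posterior)
```

Here the observation exists only because the practitioner acted: the update is driven by the system's reaction to being engaged, which is the reflexive loop described above.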
3. The Toolbelt: Calibrated Response to Signal Configurations
The third component is the practitioner's repertoire of techniques — analogous to Socrates' elenchus. The Socratic elenchus and penetration testing share a common structure: both probe constructed systems from within. A philosophical position, like a network, is an organized environment: definitions function as schemas, assumptions as trust relationships, inferences as logical pathways, and conclusions as outputs. Socrates does not challenge his interlocutors with competing authority; he enters their system from a posture of ignorance, asking simple questions — enumeration — to map internal commitments, just as an attacker maps identities, services, and configurations.
Once inside, both operate strictly within system constraints. Socrates uses only the interlocutor's own premises; attackers use valid credentials and protocols. The contradiction — A → B → C → ¬A — emerges internally, not externally imposed. The system collapses on its own terms.
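The A → B → C → ¬A collapse can be checked mechanically. In the toy sketch below, the propositions stand in for an interlocutor's commitments, and a truth-table search confirms that no assignment satisfies all of them at once:

```python
# Toy sketch of elenctic contradiction: premises A->B, B->C, C->not A,
# together with the asserted position A, admit no consistent assignment.
from itertools import product

def implies(p, q):
    return (not p) or q

def consistent(premises):
    """True if some truth assignment satisfies every premise."""
    for a, b, c in product([True, False], repeat=3):
        if all(premise(a, b, c) for premise in premises):
            return True
    return False

premises = [
    lambda a, b, c: a,                  # the asserted position A
    lambda a, b, c: implies(a, b),      # A -> B
    lambda a, b, c: implies(b, c),      # B -> C
    lambda a, b, c: implies(c, not a),  # C -> not A
]

print(consistent(premises))  # False: the position collapses on its own terms
```

Nothing external is imposed: every premise is the interlocutor's own, and the inconsistency is found entirely inside the committed system.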
The strategic inversion is critical. Sophists masquerade as authorities; Socrates masquerades as a novice. This grants access. From that foothold, he moves laterally — justice → virtue → knowledge → contradiction — expanding scope until the system fails in aporia, a breakdown analogous to system failure or privilege escalation.
The penetration tester's repertoire, likewise, is not a checklist. It is a set of calibrated responses, each associated with a particular configuration of signals. The selection of a tool or technique is therefore contingent on interpretation, not sequence. A novice executes tools in accordance with predefined workflows. An expert selects and deploys techniques in direct response to the meaning constructed from concurrent system reading. The distinction is not one of capability but of coupling: in expert practice, action is tightly coupled to interpretation.
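This coupling can be sketched as a mapping from signal configurations to candidate next steps rather than a fixed sequence. The signal names and technique labels below are illustrative, not a real playbook:

```python
# Sketch of interpretation-coupled technique selection.
# Signal names and technique labels are invented for illustration.

def select_technique(signals):
    """Choose a next step from the current signal configuration.

    Selection is contingent on interpreted meaning, not on a fixed
    workflow order: the same toolbelt yields different actions under
    different readings of the environment.
    """
    if "legacy_encryption" in signals and "spn_sprawl" in signals:
        return "target stale service accounts"
    if "responsive_monitoring" in signals:
        return "slow down; favor passive observation"
    if "flat_network" in signals:
        return "expand lateral enumeration"
    return "continue building the system model"

reading_a = {"legacy_encryption", "spn_sprawl"}
reading_b = {"responsive_monitoring", "legacy_encryption"}

print(select_technique(reading_a))  # -> target stale service accounts
print(select_technique(reading_b))  # -> slow down; favor passive observation
```

The same signal ("legacy_encryption") leads to different actions depending on what else the practitioner has read from the environment — the selection is conditioned on the whole configuration, not on any single trigger.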
The Human–AI Gap Reconsidered
Recent work on LLM-assisted penetration testing demonstrates meaningful progress in procedural execution. Models can sequence actions and adapt within known vulnerability classes. However, their performance degrades significantly in the absence of explicit vulnerability descriptions.
This pattern is often described as a failure of "strategic coherence." A more precise account is that current systems lack system-of-systems orientation and reflexive integration. They do not generate a prior model of how human actors, operating within role constraints, produce structural drift. Nor do they model themselves as participants within the environment whose actions elicit informative responses.
As a result, they enumerate effectively but interpret weakly. They identify what has been specified, but struggle to discover what emerges from the interaction of independent, locally rational decisions across organizational boundaries.
Historical Precedent: Teaching Concurrent Reasoning
This gap between declarative knowledge and concurrent reasoning is not unique to cybersecurity. It has been encountered in other professions and addressed through pedagogical reform.
Langdell's case method in legal education reframed instruction around the analysis of concrete cases, using Socratic questioning to expose reasoning structures. Barrows and Tamblyn's problem-based learning model in medicine similarly shifted emphasis from knowledge recall to diagnostic reasoning under uncertainty. In both instances, the objective was to make expert reasoning processes visible and reproducible.
Cybersecurity education remains largely procedural by comparison. It has not yet systematically incorporated methods designed to develop concurrent, reflexive reasoning.
Implications for Training and AI Development
Formalizing adversarial cognition as a three-component, concurrent system has implications beyond pedagogy. It enables the specification of cognitive targets that can be taught, measured, and potentially modeled computationally.
Importantly, this creates a feedback loop: attempts to train machine systems against these targets will expose gaps in the formalization, which can then refine both instructional design and theoretical understanding. Human and machine learning trajectories become interdependent rather than parallel.
The Practitioner the Field Requires
The practitioner most needed in contemporary security environments is not the one who executes tools with maximal efficiency. That layer is increasingly subject to automation.
Rather, it is the practitioner who can enter an unfamiliar environment, construct a system-of-systems model that includes themselves as an interacting component, interpret the signals produced by both the system and its responses, and deploy techniques in direct correspondence with that interpretation.
Socrates described this capacity as philosophy. In a cybersecurity context, it may be more precisely termed:
Reflexive Adversarial Cognition
It cannot be reduced to procedure, nor transmitted through instruction alone. It must be made visible in practice and developed through guided exposure to authentic environments.