There's a particular kind of frustration that comes from doing the same manual task for the fourth time in a week.
For me, it was vulnerability scanning. Every time I needed to assess a subnet, whether for a routine check, a periodic vulnerability scan, or a follow-up on something suspicious, I'd open my notes, copy a series of nmap commands, run them one by one, wait, collect the XML outputs, manually correlate the results, and then try to make sense of it all in a spreadsheet. Every. Single. Time.
The commercial alternatives were either locked behind enterprise licensing I couldn't justify for internal use, required a full Linux VM stack with dedicated hardware, or were so heavyweight they needed their own team to operate. OpenVAS is excellent but it's not something you spin up on a Tuesday afternoon. Nessus is great until you see the price tag for a license that actually covers your subnet range.
So I did what any self-respecting security engineer does when tooling doesn't exist in the budget: I built it myself.
This article is not a full code walkthrough and I won't be publishing the application in its entirety here. What I want to do is explain the design decisions, the technical concepts behind each component, and why I chose to do things the way I did. If you work in security or network engineering, you'll recognise most of these moving parts.
The Problem With Manual Nmap Scanning
Before I get into the build, it's worth explaining why a bunch of nmap commands isn't a sustainable approach for internal assessments.
The biggest issue is that nmap commands don't compose. When you run a ping sweep and then a port scan, you're running two separate processes that don't know about each other. There's no automatic handoff, you manually take the output of one and feed it into the next. On a /24 with 35 live hosts that's manageable. On 10 subnets with a few hundred hosts, it becomes a full-time job.
The second issue is consistency. Different engineers on the same team will run slightly different flags depending on what they remember or what their notes say. One person adds --script vuln, another forgets -sV. The results aren't comparable and the coverage isn't uniform.
The third issue is that nmap on its own doesn't give you a report. You get XML or text output per scan. Taking that output and turning it into something you can show a security manager or use to track remediation over time requires another round of manual work.
The Architecture: A Five-Phase Pipeline
The core idea behind the tool is that scanning should be sequential and each phase should inform the next. This is how a good manual pentest works: you don't run a vulnerability scan against hosts you haven't confirmed exist, and you don't run SSL checks against hosts that don't have port 443 open.
Phase 1      →   Phase 2   →   Phase 3    →   Phase 4     →   Phase 5
Discover         Port          Targeted       Vuln            HTML
live hosts       map           checks         detection       report

Each phase saves its output to disk before the next one starts. This means that if anything fails or gets interrupted at Phase 3, you can resume from there without re-running a two-hour port scan.
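A minimal sketch of that checkpointing idea. The function name and JSON-on-disk format here are illustrative, not the tool's actual implementation:

```python
import json
from pathlib import Path

def run_phase(name, func, workdir, resume=False):
    """Run one pipeline phase, or reload its saved output when resuming.

    Each phase writes its result to <workdir>/<name>.json before the next
    phase starts, so an interrupted scan can pick up where it left off.
    """
    out_file = Path(workdir) / f"{name}.json"
    if resume and out_file.exists():
        return json.loads(out_file.read_text())  # skip work already done
    result = func()
    out_file.write_text(json.dumps(result))      # checkpoint before moving on
    return result
```

Because the checkpoint lands on disk before the next phase begins, a crash mid-pipeline loses at most the phase in flight.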
Phase 1: Host Discovery
The first thing the scanner does is figure out what's actually alive. This sounds simple but there's a reason to be deliberate about it.
A naive approach is to just run your port scan across the entire /24. Nmap will figure out what's alive during the scan. The problem is that this is slower, less accurate, and it means your port scanner is wasting time on dead IP addresses. On a network with 254 possible addresses per subnet and only 35 live hosts, that's a lot of wasted effort.
Instead, Phase 1 runs a dedicated discovery sweep using multiple probe types:
-PE                                  # ICMP echo request (standard ping)
-PP                                  # ICMP timestamp request (often bypasses simple firewalls)
-PS21,22,80,443,445,3389,8080,8443   # TCP SYN to common ports

The reason for multiple probe types is that some hosts block ICMP but respond to TCP. A printer might not respond to ping but will respond to a SYN on port 80. A Windows machine with a host-based firewall might block ICMP entirely but respond on 3389. Using all three probe types gives you a much more complete picture of what's actually reachable.
The result is a clean list of live IPs that every subsequent phase uses as its target list.
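Extracting that list means parsing nmap's -oX XML. A sketch using the standard library's ElementTree; the element names follow nmap's XML output, while the function name is mine:

```python
import xml.etree.ElementTree as ET

def parse_live_hosts(xml_text):
    """Return IPv4 addresses of hosts that nmap's -oX output marks as up."""
    live = []
    for host in ET.fromstring(xml_text).findall("host"):
        status = host.find("status")
        if status is not None and status.get("state") == "up":
            addr = host.find("address[@addrtype='ipv4']")
            if addr is not None:
                live.append(addr.get("addr"))
    return live
```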
Phase 2: The Three-Pass Port Scan
This is the part I spent the most time optimizing, because it's where the old approach fell apart.
The obvious approach is nmap -p 1-65535 against every live host, which works, but it's brutally slow. In testing, a 35-host subnet with a full port range took 78 minutes at T3. That's before you've run a single vulnerability check.
The solution was a three-pass strategy.
Pass A : Top-1000 TCP sweep on all hosts
The first pass runs nmap's top-1000 port list against every live host in batches of 10. This is fast, typically 3–5 minutes per subnet, and tells you which hosts have anything interesting open.
args = [
    "-sS", f"-{timing}",
    "--top-ports", "1000",
    "--max-retries", "2",
    "--host-timeout", "60s",
    "--open",
    "-iL", str(hosts_file),
    "-oX", xml_out + ".xml",
]

The --open flag is worth calling out specifically. By default nmap reports open, closed, and filtered ports. For a vulnerability assessment, closed ports are noise. Adding --open cuts the output down to only what matters and significantly reduces XML parse time downstream.
--max-retries 2 instead of the default 10 is the single biggest speed improvement. On a LAN where packet loss is minimal, retrying a port 10 times wastes time you don't get back.
Pass B : Full port scan on "interesting" hosts only
Pass A identifies which hosts have ports from a predefined "interesting" set open: things like 22, 80, 443, 445, 3389, 5900, 8080, 5985. Only those hosts get the full 1–65535 TCP scan.
This is the important insight: most hosts on an internal network are workstations or printers. They'll have a handful of standard ports open and nothing unusual. Running a full port scan on them is wasted time. The servers and infrastructure devices, the ones with non-standard ports that might be running unexpected services, are the ones that warrant a full sweep.
In practice on a typical corporate subnet, Pass B runs against maybe 40–50% of live hosts. That cuts the full-range scan time roughly in half.
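The Pass A → Pass B handoff reduces to a simple filter. A sketch, assuming Pass A results are held as an IP-to-open-ports mapping; the names are illustrative:

```python
# Ports that flag a host as worth a full 1-65535 sweep (list from the article).
INTERESTING_PORTS = {22, 80, 443, 445, 3389, 5900, 8080, 5985}

def select_full_scan_hosts(pass_a_results):
    """Return hosts whose top-1000 results include any 'interesting' port.

    pass_a_results maps IP -> set of open TCP ports from Pass A; only the
    returned hosts get the expensive full-range scan in Pass B.
    """
    return sorted(ip for ip, ports in pass_a_results.items()
                  if ports & INTERESTING_PORTS)
```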
Pass C : Targeted UDP
UDP scanning is slow because there's no handshake. Every closed port waits for a timeout. The original tool tried to scan 200 UDP ports against every host, which accounted for a significant chunk of overall scan time.
The fix was to limit UDP to 14 ports that actually matter for internal network security: DNS (53), DHCP (67/68), NTP (123), NetBIOS (137/138), SNMP (161), IKE (500), and a handful of others. These are the UDP services that regularly show up as findings in internal assessments. The other 186 ports that were being scanned were adding noise with very little signal.
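Pass C's argument list might look like the following. Only the UDP ports named explicitly above are included; the tool's full 14-port list isn't reproduced here, and --max-retries 1 is my assumption in the same spirit as the TCP passes:

```python
# Only the UDP ports the article names explicitly; the tool's full list has 14.
UDP_PORTS = [53, 67, 68, 123, 137, 138, 161, 500]

def build_udp_args(timing="T3"):
    """Assemble the nmap argument list for the targeted UDP pass (Pass C)."""
    return [
        "-sU", f"-{timing}",
        "-p", ",".join(str(p) for p in UDP_PORTS),
        "--max-retries", "1",  # assumption: UDP timeouts dominate, keep retries minimal
        "--open",
    ]
```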
Service version detection : after the sweep, not during
The original approach ran -sV during the port scan. This means nmap is doing service probing on every port it tries, including ports that turn out to be closed.
The optimized approach separates the concerns. First you find what's open, then you run service detection only against the ports that are actually open. On a host with 8 open ports out of 65,535 tried, this is dramatically more efficient.
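Separating the two steps means building a tight -p spec per host from the Phase 2 results. A sketch; the function name and data shape are assumptions:

```python
def build_version_jobs(open_ports_by_host):
    """Build per-host nmap -sV argument lists limited to known-open ports.

    open_ports_by_host maps IP -> sorted list of open ports from Phase 2.
    Probing only these ports avoids running service detection against the
    tens of thousands of ports that were never open.
    """
    return {
        ip: ["-sV", "-p", ",".join(map(str, ports)), ip]
        for ip, ports in open_ports_by_host.items()
        if ports  # hosts with nothing open need no version pass
    }
```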
Phase 3: Targeted Protocol Checks
Once you have a port map, you can run targeted checks against specific services. The key design principle here is that every targeted check only runs against hosts that actually have the relevant port open.
SSL/TLS checks don't run against hosts without SSL ports. SMB checks don't run against hosts without port 445. This sounds obvious, but it's something a naive automation would get wrong, resulting in a flood of nmap "host seems down" messages and wasted scan time.
The checks in this phase cover:
SSL/TLS deep analysis: This goes well beyond just confirming a certificate exists. The scanner checks expiry date and calculates days remaining (expired = Critical, less than 30 days = High, less than 90 days = Medium), signature algorithm (SHA1 and MD5 are High severity findings), RSA key length (under 2048-bit is Medium, under 1024-bit is High), and known protocol vulnerabilities including Heartbleed, POODLE, and CCS Injection.
Certificate management is consistently undervalued in internal security assessments. Expired internal certificates break services, but weakly-signed or short-keyed certificates represent actual cryptographic exposure. I've found SHA1-signed certificates on internal services that were simply never updated after the 2017 deprecation.
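The severity thresholds above map naturally to a small classifier. A sketch with hypothetical names, encoding exactly the thresholds from the text:

```python
def classify_cert(days_remaining, sig_algorithm, rsa_bits):
    """Map certificate attributes to (finding, severity) pairs."""
    findings = []
    # Expiry: expired = Critical, <30 days = High, <90 days = Medium.
    if days_remaining < 0:
        findings.append(("expired certificate", "Critical"))
    elif days_remaining < 30:
        findings.append(("expires in under 30 days", "High"))
    elif days_remaining < 90:
        findings.append(("expires in under 90 days", "Medium"))
    # Signature algorithm: SHA1 and MD5 are High.
    alg = sig_algorithm.upper()
    if "SHA1" in alg or "MD5" in alg:
        findings.append((f"weak signature algorithm: {sig_algorithm}", "High"))
    # Key length: under 1024 bits is High, under 2048 is Medium.
    if rsa_bits < 1024:
        findings.append(("RSA key under 1024 bits", "High"))
    elif rsa_bits < 2048:
        findings.append(("RSA key under 2048 bits", "Medium"))
    return findings
```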
SMB enumeration : This runs the full suite of SMB vulnerability scripts including MS17-010 (EternalBlue). EternalBlue is from 2017 but it still shows up in internal assessments on unpatched machines and isolated segments. The scanner also checks SMB signing status (disabled SMB signing enables relay attacks) and enumerates shares and users where possible.
ADCS enumeration : This was the addition I was most interested in building. Active Directory Certificate Services is one of the most consistently misconfigured components in Windows environments, and it's been getting significant attention in the security community since the ESC vulnerabilities were documented.
The scanner checks for exposed CA Web Enrollment pages (/certsrv), which is the ESC8 candidate , if NTLM authentication is presented over HTTP on a certificate enrollment endpoint, an attacker with network access can relay those credentials to obtain a certificate for any user including domain admins. It also checks for NDES/SCEP endpoints (ESC6-type attacks) and flags port 9389 (AD Web Services) which exposes the Active Directory PowerShell module interface remotely.
_ESC8_PATTERN = re.compile(
    r"WWW-Authenticate.*NTLM|NTLM.*authentication|401.*NTLM",
    re.IGNORECASE
)

This regex runs against HTTP response headers. If you see WWW-Authenticate: NTLM coming back from port 80 or 443 on a CA server, you have a potential ESC8 condition worth investigating further.
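A usage sketch around that same pattern; the wrapper function is illustrative, and note that a match only flags a candidate, not a confirmed ESC8:

```python
import re

# Same pattern as in the scanner, repeated here to keep the example self-contained.
_ESC8_PATTERN = re.compile(
    r"WWW-Authenticate.*NTLM|NTLM.*authentication|401.*NTLM",
    re.IGNORECASE,
)

def looks_like_esc8(raw_headers):
    """True if a /certsrv-style response advertises NTLM authentication.

    This flags a candidate only; confirming ESC8 still means verifying the
    endpoint is CA Web Enrollment and that relay is actually feasible.
    """
    return bool(_ESC8_PATTERN.search(raw_headers))
```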
Default credentials : The scanner runs nmap's http-default-accounts script against HTTP ports and additionally fingerprints devices by their HTTP title and server header. The fingerprinting covers 24 device types, such as printer vendors, IPMI/iDRAC/iLO management interfaces, Cisco and Fortinet gear, NAS devices, and common web applications like Tomcat, Jenkins, and Zabbix.
When a device is fingerprinted, the report includes the known default credentials for that device type as a reference. If nmap's http-default-accounts script actually confirms a valid login, that's escalated to Critical. Fingerprint-only is High: it means the device is there and has a login page, but the credentials still need manual verification.
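The fingerprinting itself is pattern matching over the HTTP title and Server header. An illustrative sketch with three of the 24 device types; the patterns and names here are mine, not the tool's:

```python
import re

# Three illustrative entries -- the real tool covers 24 device types.
FINGERPRINTS = [
    (re.compile(r"iDRAC", re.I), "Dell iDRAC management interface"),
    (re.compile(r"Apache Tomcat", re.I), "Apache Tomcat"),
    (re.compile(r"Jenkins", re.I), "Jenkins"),
]

def fingerprint_device(http_title, server_header=""):
    """Return the first device type matching the HTTP title or Server header."""
    haystack = f"{http_title} {server_header}"
    for pattern, device in FINGERPRINTS:
        if pattern.search(haystack):
            return device
    return None  # unknown device: no default-credential reference to attach
```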
Phase 4: Vulnerability Detection
Phase 4 has two components that run separately.
Nmap vuln scripts run --script vuln,exploit against the union of all open ports across all live hosts. The port spec is built dynamically from the Phase 2 results so nmap is only probing ports that are known to be open. This avoids the situation where the vuln scan is trying to run web application scripts against a port that turned out to be a database.
The nmap vuln category covers a meaningful number of CVEs for older protocols and services. It's not comprehensive but it's reliable and adds no additional tooling dependency.
Nuclei is the optional but significantly more powerful component. Where nmap's vuln scripts are a relatively static library, Nuclei's template database is community-maintained and updated continuously, covering 2023, 2024, and 2025 CVEs that nmap simply doesn't know about.
The integration required solving a non-obvious problem: Nuclei doesn't scan raw IP addresses. It scans URLs and service endpoints. Before passing anything to Nuclei, the scanner builds a proper URL list from the Phase 2 port results:
if port in HTTPS_PORTS:
    url = f"https://{ip}:{port}"
elif port in HTTP_PORTS:
    url = f"http://{ip}:{port}"
elif include_network and port in NETWORK_PORTS:
    url = f"{ip}:{port}"  # for Redis, MongoDB, etc.

The HTTPS_PORTS and HTTP_PORTS sets handle the common cases. For ports not in either set, the service name from nmap's version detection is used to guess the protocol. This means Nuclei gets https://10.1.5.23:8443 instead of 10.1.5.23, which is what it needs to actually run web application templates correctly.
The -etags intrusive flag is always applied. This excludes templates that could cause service disruption or account lockouts. On a production network, you do not want to run templates that attempt to trigger actual exploits or hammer login endpoints. The non-intrusive templates still cover an enormous range of findings.
Phase 5: The Report
The entire output is a single self-contained HTML file. No server, no database, no dependencies: just open it in a browser.
The report structure mirrors how I actually think about findings during an assessment:
- Summary bar at the top showing finding counts by severity across the whole scan
- Per-host cards that show IP, hostname, OS, and severity badges at a glance
- Each host expands to four tabs: Ports, Targeted Checks, Vulnerabilities, and Pentest
The Pentest tab is where the TLS, ADCS, and default credential findings live, separate from the generic vuln scripts because they require different remediation conversations. An expired certificate is a quick ticket to the PKI team. An ESC8 finding is a conversation with Active Directory administration and potentially a broader discussion about certificate template permissions.
Nuclei findings in the Vulnerabilities tab include CVE numbers as clickable links to the NVD, CVSS scores, and, where Nuclei provides them, reproduce commands. That last part matters: being able to hand someone a curl command that demonstrates a finding is the difference between a finding getting fixed this sprint and getting deprioritized for six months.
What Running It Actually Looks Like
On a /24 with 35 live hosts at T3 timing, the rough timeline is:
- Phase 1 (discovery): ~30 seconds
- Phase 2 (port scan, three passes): 20–35 minutes
- Phase 3 (targeted checks): 15–25 minutes
- Phase 4 (vuln detection): 5–15 minutes
- Phase 5 (report): a few seconds
That's roughly an hour per subnet. For 10 subnets, this runs overnight. The --resume flag means if anything interrupts the scan, you pick up from Phase 3 without re-running the port scan.
The one thing you cannot get around is that Windows requires Administrator privileges for SYN scans and OS detection, since both use raw sockets. Without admin, the scanner detects this and automatically falls back to TCP Connect scans, which work but are noisier and less stealthy. For an internal authorised assessment on your own network, running as admin is fine. For anything else, you'd want to think about the footprint.
What I'd Do Differently
A few things I'd change with hindsight.
The report uses Python's str.replace() for template substitution rather than .format(). The reason is that the HTML template contains CSS variables like var(--bg) which use curly brace syntax. Python's .format() tries to interpret every {...} as a variable placeholder and throws a KeyError when it hits a CSS block. I learned this the hard way when the first report generation attempt threw:
KeyError: '\n --bg'

str.replace() is a clean solution once you understand the problem, but it's the kind of thing that takes a while to debug when you first hit it.
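A minimal reproduction of both the failure and the fix, using an illustrative @@KEY@@ marker syntax; the tool's actual markers aren't specified here:

```python
# CSS uses curly braces, which str.format() misreads as placeholders.
TEMPLATE = (
    ":root {\n"
    "  --bg: #111;\n"
    "}\n"
    "<h1>@@TITLE@@</h1>"
)

def render(template, values):
    """Substitute @@KEY@@ markers via str.replace(), leaving CSS braces alone."""
    out = template
    for key, val in values.items():
        out = out.replace(f"@@{key}@@", str(val))
    return out
```

Calling TEMPLATE.format(...) on the same string raises a KeyError on the CSS block, while render() leaves the braces untouched.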
The other thing I'd revisit is parallelism. Currently subnets are scanned sequentially and hosts within a subnet are batched. A proper async implementation using Python's asyncio or concurrent.futures at the subnet level would cut total scan time significantly on larger networks. It's on the list.
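Since the heavy lifting happens in nmap subprocesses, even a simple thread pool would do; the GIL isn't the bottleneck. A sketch of subnet-level parallelism with concurrent.futures, where scan_one stands in for the existing per-subnet pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def scan_subnets_parallel(subnets, scan_one, max_workers=4):
    """Run the per-subnet pipeline across several subnets concurrently.

    Threads suffice here because each worker spends its time waiting on
    nmap subprocesses, not executing Python.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so results align with subnets.
        return dict(zip(subnets, pool.map(scan_one, subnets)))
```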
Closing Thoughts
The goal was never to build a better Nessus. It was to build something I could maintain and extend myself, and that actually fit into how I work: a single command, a single report, no licensing negotiations.
The libraries doing the heavy lifting are all well-chosen for the job: PyYAML keeps configuration separated from code, the XML parsing is handled by Python's standard library xml.etree.ElementTree, and the nmap NSE script ecosystem handles the actual vulnerability logic so the scanner doesn't need to re-implement checks from scratch.
If you're a security engineer doing internal assessments and you're still copying and pasting nmap commands into a terminal, I'd encourage you to think about what a proper pipeline for your specific environment would look like. The specific code matters less than the mental model: figure out the sequence, make each step inform the next, and save the intermediate results so you can iterate without starting over.
The tooling doesn't have to be expensive. It just has to work.
If you found this useful, I also wrote about automating network config backups with Python and Netmiko and building a self-hosted WireGuard VPN on Azure: both following the same principle of building lightweight tools you actually understand.