From Vendor Advisory to Evidence Collection: What Teams Miss in the First 72 Hours of Zero-Day Incident Response
Mid-March kept sending the same message to defenders: patching fast is necessary, but it is not the same as investigating well. CISA added actively exploited browser bugs to KEV in March 2026, Edge shipped fixes for in-the-wild Chromium issues on March 13 and March 16, and SharePoint CVE-2026-20963 was pushed into first-48-hour triage territory as a KEV-listed issue. Pentest Testing Corp's recent research points the same way, with posts focused on first-48-hour SharePoint response, OAuth-driven Google Workspace takeover investigation, and Android evidence preservation after suspected compromise.
That is the operational gap this article is about.
A vendor advisory lands. The patch gets approved. Emergency change windows open. Engineers scramble. Leadership asks for status. The vulnerable component gets updated, and everyone wants to move on.
But the hard question usually arrives later:
Were we only exposed, or were we actually exploited?
If your team cannot answer that, then your zero-day incident response is incomplete.
Patching closes the door for the next attempt. It does not tell you whether someone already walked through it.

Patch Fast. Investigate Just as Fast.
One of the most common failure modes in post-advisory response is treating remediation as closure.
That mindset creates blind spots:
- evidence gets overwritten by log rotation
- volatile artifacts disappear during cleanup
- sessions remain valid after attacker access
- impacted accounts are under-scoped
- teams rebuild systems before understanding root cause
- compliance and legal questions arrive before the facts do
A mature zero-day incident response process has two tracks running in parallel:
Track 1: Exposure reduction. Patch, isolate, restrict access, add detections, and reduce attacker opportunity.
Track 2: Evidence-led investigation. Preserve what will disappear, scope what happened, assess whether access was obtained, and decide what containment actions are actually justified.
You need both.
The First 72 Hours: What to Preserve Before It Disappears
If a zero-day may have been exploited in your environment, the first 72 hours are not just for patch validation. They are your best chance to retain evidence that later answers root cause, attacker actions, and blast radius.
1. Vendor advisory context and internal timeline
Start by preserving:
- the advisory version you acted on
- the time your team became aware
- when the vulnerable asset list was generated
- when compensating controls were applied
- when patching began and ended
- who approved emergency changes
- what detections or hunts were added
This creates the backbone for later reconstruction.
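One lightweight way to keep that backbone is an append-only decision log started the moment the advisory lands. A minimal sketch follows; the file location, actor names, and entries are illustrative, not a prescribed format:

```shell
#!/bin/sh
# Append-only incident decision log: one timestamped line per decision.
# The actors and entries below are illustrative.
LOG="$(mktemp -d)/decision-log.txt"   # use a tracked, backed-up path in practice

note() {
  # Record a UTC timestamp, the actor, and the decision text.
  printf '%s | %s | %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" >> "$LOG"
}

note "soc-oncall" "Advisory v1.2 received; vulnerable asset list requested"
note "change-mgr" "Emergency change CHG-0421 approved for patching"
note "soc-oncall" "WAF rule deployed as compensating control"

cat "$LOG"
```

Plain text beats a wiki page here: it survives tooling outages, sorts chronologically, and can be sealed into the evidence package later.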
2. Internet-facing telemetry
For externally reachable systems, preserve:
- WAF logs
- reverse proxy logs
- CDN logs
- load balancer logs
- firewall session data
- IDS/IPS alerts
- full web server access and error logs
These records often show the earliest exploit attempts, suspicious paths, anomalous user agents, payload delivery, or follow-on requests after initial access.
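Once those logs are preserved, a first pass can be as simple as finding the earliest request that touched the exploit path and then pulling every follow-on request from the same source. A sketch, using an illustrative access log and a hypothetical /vulnerable/endpoint path:

```shell
#!/bin/sh
# Hunt a preserved access log for the earliest hit on a vulnerable endpoint,
# then pull all follow-on requests from that source. Log lines and the
# /vulnerable/endpoint path are illustrative.
ACCESS_LOG="$(mktemp)"
cat > "$ACCESS_LOG" <<'EOF'
203.0.113.7 - - [14/Mar/2026:02:11:09 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0"
198.51.100.9 - - [14/Mar/2026:02:14:33 +0000] "POST /vulnerable/endpoint HTTP/1.1" 200 77 "-" "python-requests/2.31"
198.51.100.9 - - [14/Mar/2026:02:14:41 +0000] "GET /uploads/shell.aspx HTTP/1.1" 200 13 "-" "python-requests/2.31"
EOF

# First source address that reached the exploit path.
FIRST_IP="$(grep '/vulnerable/endpoint' "$ACCESS_LOG" | head -n1 | cut -d' ' -f1)"
echo "first source touching exploit path: $FIRST_IP"
# Everything else that source did, before and after.
grep "^$FIRST_IP " "$ACCESS_LOG"
```

The follow-on request (a write under an upload path, in this sample) is exactly the kind of lead that separates probing from impact.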
3. Identity and session artifacts
A lot of teams focus only on the vulnerable server and forget the identity layer.
Preserve:
- SSO and IdP sign-in logs
- MFA events
- session creation and revocation logs
- refresh token activity
- OAuth consent or app registration events
- admin role changes
- suspicious mailbox rules or forwarding rules
- password reset history
That matters because exploitation often shifts quickly from technical foothold to identity persistence.
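A quick identity-layer triage question is "which successful sign-ins in the risk window skipped MFA?" The sketch below assumes an illustrative CSV export (timestamp,user,mfa,result); adapt the field positions to whatever your IdP actually emits:

```shell
#!/bin/sh
# Flag successful sign-ins without MFA during the risk window.
# CSV layout (timestamp,user,mfa,result) is an assumption for illustration.
SIGNINS="$(mktemp)"
cat > "$SIGNINS" <<'EOF'
2026-03-14T02:10:00Z,alice@example.com,mfa_ok,success
2026-03-14T02:16:00Z,svc-sharepoint@example.com,mfa_none,success
2026-03-14T02:55:00Z,bob@example.com,mfa_ok,failure
EOF

# Successful sign-ins that completed without an MFA event.
NO_MFA="$(awk -F',' '$3 == "mfa_none" && $4 == "success" { print $1, $2 }' "$SIGNINS")"
echo "$NO_MFA"
```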
4. Endpoint and server artifacts
If compromise is plausible, capture:
- EDR alerts and process trees
- command-line telemetry
- scheduled task / cron changes
- startup or persistence artifacts
- suspicious child processes from browsers, app servers, or document handlers
- file creation and modification times
- temporary directories
- web roots and upload paths
- crash dumps, if available
- memory, when feasible and operationally safe
The goal is not to collect everything. The goal is to collect what is most likely to disappear first.
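For the file-system items on that list, a window-bounded `find` over web roots and upload paths surfaces web-shell candidates fast. A sketch, using a temporary directory as a stand-in for the real web root and illustrative window dates:

```shell
#!/bin/sh
# List files modified inside the risk window: new or touched files under
# web roots and upload paths are prime web-shell candidates.
# The directory below stands in for /var/www or your app's web root.
WEBROOT="$(mktemp -d)"
mkdir -p "$WEBROOT/uploads"
touch -d '2026-03-10 00:00' "$WEBROOT/index.html"       # before the window
touch -d '2026-03-14 02:15' "$WEBROOT/uploads/x.aspx"   # inside the window

# GNU find: keep only files modified between the window bounds.
WINDOW_HITS="$(find "$WEBROOT" -type f -newermt '2026-03-13 00:00' ! -newermt '2026-03-15 00:00')"
echo "$WINDOW_HITS"
```

Remember that modification times are attacker-tamperable; treat this as a lead generator, not proof.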
5. Application and platform logs
Preserve:
- application audit logs
- admin panel actions
- API gateway logs
- job runner history
- background worker activity
- database authentication logs
- object storage access logs
- cloud control-plane changes
These sources often tell you whether the zero-day led only to probing, or to data access, account manipulation, or persistence.
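The data-access question often comes down to who pulled how much after the initial event. A sketch against an illustrative object-storage access log (format, principals, and the event time are assumptions):

```shell
#!/bin/sh
# Tally bytes pulled from object storage per principal after the suspected
# initial event. Log format, timestamps, and principals are illustrative.
S3LOG="$(mktemp)"
cat > "$S3LOG" <<'EOF'
2026-03-13T22:00:00Z GET reports/q1.pdf 204800 app-backend
2026-03-14T02:20:00Z GET exports/customers.csv 52428800 unknown-principal
2026-03-14T02:21:00Z GET exports/invoices.csv 73400320 unknown-principal
EOF

# ISO-8601 UTC timestamps sort lexically, so string comparison works here.
POST_EVENT="$(awk '$1 >= "2026-03-14T02:14:00Z" { bytes[$5] += $4 } END { for (p in bytes) print p, bytes[p] }' "$S3LOG")"
echo "$POST_EVENT"
```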
What Teams Usually Miss
Here is the pattern I see most often:
They save the vendor advisory. They save the patch ticket. They save the "all systems updated" message.
They do not save the evidence needed to answer:
- which assets were actually exposed before patching
- whether exploit traffic reached the vulnerable code path
- whether any suspicious server-side behavior followed
- whether sessions or tokens remained usable after the patch
- whether attacker activity moved into identity, mail, or cloud systems
- whether data access occurred after the initial event
That is why post-patch investigation matters.
The real risk is not only the zero-day itself. It is the unmeasured period between exposure and containment.
How to Scope the Blast Radius
Exploit scoping is where zero-day incident response becomes either disciplined or dangerously vague.
A practical scoping flow looks like this:
Step 1: Identify the truly exposed population
Do not start with every asset that has the software installed.
Start with:
- vulnerable version
- reachable attack surface
- relevant feature enabled
- internet exposure or attacker path
- time window before patch/mitigation
This narrows noise immediately.
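Mechanically, this is an intersection: vulnerable version AND reachable. A sketch with illustrative hostnames and a hypothetical fixed version of 16.0.4:

```shell
#!/bin/sh
# Intersect the vulnerable-version inventory with internet-reachable hosts
# to get the truly exposed population. Hostnames and the fixed version
# (16.0.4) are illustrative.
INVENTORY="$(mktemp)"; REACHABLE="$(mktemp)"
cat > "$INVENTORY" <<'EOF'
app-01 16.0.1
web-01 16.0.1
web-02 16.0.4
EOF
cat > "$REACHABLE" <<'EOF'
web-01
web-02
EOF

# Hosts below the fixed version that are also reachable (comm needs sorted input).
EXPOSED="$(awk '$2 < "16.0.4" { print $1 }' "$INVENTORY" | sort | comm -12 - "$REACHABLE")"
echo "$EXPOSED"
```

Note how each filter removes a different host: app-01 is vulnerable but unreachable, web-02 is reachable but already on the fixed version.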
Step 2: Search for evidence of exploit path access
Look for:
- requests to known vulnerable endpoints
- malformed or anomalous payloads
- unusual headers or serialized objects
- suspicious response codes
- request bursts around disclosure
- successful access from rare IP space or geographies
- post-exploitation process launches
If you have no hits, that is useful. If you have weak hits, keep them separated from confirmed exploitation. Do not collapse exposure and compromise into the same bucket.
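Keeping the buckets separate can be enforced mechanically: tag each hit with a tier when it is triaged rather than relying on memory. A sketch over an illustrative hit list (source, path, status):

```shell
#!/bin/sh
# Tier exploit-path hits instead of collapsing them: a blocked request is
# exposure evidence; a 200 response deserves deeper review. The hit list
# (source, path, status) is illustrative.
HITS="$(mktemp)"
cat > "$HITS" <<'EOF'
192.0.2.4 /vulnerable/endpoint 403
198.51.100.9 /vulnerable/endpoint 200
203.0.113.20 /vulnerable/endpoint 404
EOF

TIERS="$(awk '{ print ($3 == "200" ? "POSSIBLE-EXPLOITATION" : "WEAK-HIT"), $1 }' "$HITS")"
echo "$TIERS"
```

A 200 alone is still not confirmation; it is the trigger for the post-exploit hunting in the next step.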
Step 3: Look for post-exploit behavior, not just exploit attempts
Many teams stop at "we saw scanning."
That is not enough.
You should also hunt for:
- new admin users
- token minting
- credential dumping indicators
- suspicious outbound network connections
- archive creation
- unexpected compression utilities
- file uploads or web shell writes
- mailbox forwarding
- federation or SSO trust changes
- data export behavior
This is how you separate noise from impact.
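For the "new admin users" item, the preserved auth logs answer it directly on Linux hosts. A sketch against illustrative syslog-style entries:

```shell
#!/bin/sh
# Hunt preserved auth logs for post-exploit account creation and privilege
# changes (entries are illustrative).
AUTHLOG="$(mktemp)"
cat > "$AUTHLOG" <<'EOF'
Mar 14 02:18:40 web-01 useradd[4121]: new user: name=support2, UID=1007
Mar 14 02:19:02 web-01 usermod[4133]: add 'support2' to group 'sudo'
Mar 14 03:02:11 web-01 sshd[4388]: Accepted password for support2 from 198.51.100.9
EOF

# Account lifecycle events first; the later sign-in shows the account was used.
ACCOUNT_CHANGES="$(grep -E 'useradd|usermod' "$AUTHLOG")"
echo "$ACCOUNT_CHANGES"
```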
Step 4: Build a timeline, even if imperfect
A usable timeline answers:
- when exposure started
- when suspicious requests began
- when first abnormal behavior appeared
- what changed before containment
- whether actions continued after patching
An imperfect timeline is still more useful than a patched system with no investigative story.
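If every source is normalized to one-line events with a leading ISO-8601 UTC timestamp, the timeline is literally a merge and sort. A sketch with illustrative WAF, IdP, and EDR events:

```shell
#!/bin/sh
# Merge timestamped events from separate evidence sources into one ordered
# first-pass timeline (entries are illustrative).
T="$(mktemp -d)"
echo '2026-03-14T02:14:33Z waf POST /vulnerable/endpoint from 198.51.100.9' > "$T/waf.txt"
echo '2026-03-14T02:16:00Z idp no-MFA sign-in for svc-sharepoint'           > "$T/idp.txt"
echo '2026-03-14T02:14:50Z edr w3wp.exe spawned cmd.exe'                    > "$T/edr.txt"

# ISO-8601 UTC timestamps sort lexically, so a plain sort yields the timeline.
sort "$T"/*.txt > "$T/timeline.txt"
cat "$T/timeline.txt"
```

The payoff is the ordering itself: exploit request, then suspicious child process seventeen seconds later, then the identity-layer event.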
When to Invalidate Sessions, Tokens, or Credentials
This is where teams often overreact or underreact.
Not every zero-day requires a full credential reset across the organization. But some absolutely require aggressive invalidation.
Invalidate sessions or tokens quickly when:
- the vulnerability could expose session material
- the affected system handled authentication or session termination
- admin consoles were exposed
- attacker access to browser or identity context is plausible
- suspicious OAuth grants, mailbox rules, or token use appears
- web shell or post-exploitation activity is confirmed
- scoping is weak and privileged access may have been involved
Reset passwords or rotate credentials when:
- there is evidence of credential theft
- cleartext secrets may have been exposed
- service accounts were present on the compromised system
- authentication logs show unusual reuse after patching
- refresh tokens or long-lived secrets may outlive the patch
Rotate keys, certificates, or app secrets when:
- the vulnerable platform stored or brokered them
- secrets may have been retrievable from config, memory, or logs
- downstream trust relationships depend on them
The mistake is treating "patch applied" as equivalent to "session risk removed." It is not.
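The gap is easy to demonstrate: sessions minted during the risk window can outlive the patch by days. A sketch that lists still-valid sessions created inside the window, using illustrative session records (created, expires, principal):

```shell
#!/bin/sh
# Patching does not expire sessions: list sessions minted during the risk
# window that are still unexpired and should be revoked. Session records
# (created, expires, principal) are illustrative.
SESSIONS="$(mktemp)"
cat > "$SESSIONS" <<'EOF'
2026-03-12T10:00:00Z 2026-03-13T10:00:00Z alice
2026-03-14T02:17:00Z 2026-03-21T02:17:00Z svc-sharepoint
2026-03-14T02:30:00Z 2026-03-21T02:30:00Z admin-jsmith
EOF

WINDOW_START="2026-03-14T02:00:00Z"
NOW="2026-03-16T00:00:00Z"   # two days after patching, sessions still live
TO_REVOKE="$(awk -v s="$WINDOW_START" -v now="$NOW" '$1 >= s && $2 > now { print $3 }' "$SESSIONS")"
echo "$TO_REVOKE"
```

The actual revocation call depends on your IdP; the point is that the candidate list has to be computed, not assumed.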
A Repeatable Zero-Day DFIR Workflow for the First 72 Hours
0 to 6 hours
- confirm affected products, versions, and exposure
- preserve the advisory and internal decision log
- snapshot volatile logs before rotation
- start exploit-path hunting
- isolate or restrict the highest-risk systems if needed
- define investigation owner, containment owner, and comms owner
6 to 24 hours
- complete patching or compensating controls
- collect server, identity, and network evidence
- identify suspicious access and potential impacted accounts
- review signs of post-exploit behavior
- decide whether session invalidation is required
- begin executive and legal-ready timeline notes
24 to 48 hours
- expand scoping across similar assets
- validate whether suspicious activity stopped after containment
- hunt for persistence and lateral movement
- review cloud, email, and admin-plane logs
- classify findings as exposed, suspicious, or confirmed compromise
48 to 72 hours
- finalize blast radius assumptions
- complete high-priority containment and access invalidation
- document root-cause confidence and evidence gaps
- define long-tail remediation items
- produce a recovery and hardening checklist
- preserve evidence package for audit, insurance, or legal needs
This is the difference between emergency patching and defensible response.
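The final preservation step above can be sketched as an archive-and-hash pass over the collection directory, so later audit or legal review can verify nothing changed (paths are illustrative):

```shell
#!/bin/sh
# Seal the evidence package: archive the collection directory and record a
# hash so audit, insurance, or legal review can verify integrity later.
IRDIR="$(mktemp -d)"
echo "sample evidence" > "$IRDIR/notes.txt"   # stands in for collected artifacts

PKG="$IRDIR.tar.gz"
tar -czf "$PKG" -C "$(dirname "$IRDIR")" "$(basename "$IRDIR")"
# Record the archive hash alongside the package.
sha256sum "$PKG" | tee "$PKG.sha256"
```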
Practical Evidence Collection Examples
These examples are intentionally simple. Adapt them to your retention, legal, and chain-of-custody requirements.
Linux/web server log preservation
# Preserve web server logs and volatile state before rotation or cleanup.
IRDIR="/var/tmp/zero-day-ir-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$IRDIR"
# Copy rotated logs too; errors are suppressed for servers not installed.
cp /var/log/nginx/access.log* "$IRDIR"/ 2>/dev/null
cp /var/log/nginx/error.log* "$IRDIR"/ 2>/dev/null
cp /var/log/apache2/access.log* "$IRDIR"/ 2>/dev/null
cp /var/log/apache2/error.log* "$IRDIR"/ 2>/dev/null
journalctl --since "<start-time>" --until "<end-time>" > "$IRDIR/journalctl.txt"
# Snapshot volatile state: processes and network sockets.
ps auxww > "$IRDIR/process-list.txt"
ss -pant > "$IRDIR/network-sockets.txt"
# Hash everything collected so integrity can be verified later.
sha256sum "$IRDIR"/* > "$IRDIR/SHA256SUMS.txt"

Windows event log export
# Export core Windows event logs and volatile state for preservation.
$dest = "C:\IR\zero-day-" + (Get-Date -Format "yyyyMMdd-HHmmss")
New-Item -ItemType Directory -Force -Path $dest | Out-Null
wevtutil epl Security "$dest\Security.evtx"
wevtutil epl System "$dest\System.evtx"
wevtutil epl Application "$dest\Application.evtx"
# Snapshot processes and network connections before cleanup.
Get-Process | Sort-Object ProcessName | Out-File "$dest\processes.txt"
Get-NetTCPConnection | Out-File "$dest\net-connections.txt"
# Hash the exported logs so integrity can be verified later.
Get-FileHash "$dest\*.evtx" | Format-Table -AutoSize | Out-File "$dest\hashes.txt"

Example KQL for exploit scoping
// Scope exploit-path activity inside the exposure window.
// Replace the <placeholders> with your window and indicators.
DeviceNetworkEvents
| where Timestamp between (datetime(<start>) .. datetime(<end>))
| where RemoteUrl has_any ("<ioc-domain>", "<vulnerable-path>", "<known-callback>")
    or InitiatingProcessCommandLine has_any ("<ioc-string>", "<exploit-marker>")
| summarize
    Hits = count(),
    FirstSeen = min(Timestamp),
    LastSeen = max(Timestamp)
    by DeviceName, AccountName, InitiatingProcessFileName, RemoteUrl
| order by Hits desc

Example questions for the incident channel
Before your team closes the incident, make sure someone can answer:
- What evidence do we have that exploitation did or did not occur?
- What logs were preserved before cleanup?
- Which privileged identities touched the affected systems during the risk window?
- Which sessions, tokens, or secrets might still be valid?
- What are our confirmed findings versus assumptions?
If those answers are fuzzy, the investigation is not done.
Pentest Testing Corp's Website Vulnerability Scanner tool
While your team is handling emergency patching and triage, lightweight external hygiene checks can still help identify missing headers, exposed files, weak cookie settings, and other obvious web misconfigurations. Our Free Website Vulnerability Scanner is useful for quick validation of internet-facing basics, but it should support, not replace, evidence-led zero-day investigation.

Sample report from our free Website Vulnerability Scanner tool

Where Pentest Testing Corp Fits in Your Security Requirements
If your team needs help closing the investigation gap after an emergency advisory, we designed our services around evidence-backed security work.
Our dedicated DFIR and Digital Forensics service focuses on confirming what happened, preserving evidence, and delivering actionable containment and recovery steps. Our Remediation Services offering complements that by helping organizations implement the technical, policy, and procedural fixes needed once findings are confirmed. Recent related research includes:
- CVE-2026-20963 SharePoint: First 48-Hour Response
- Google Workspace Account Takeovers Without Passwords: Investigating OAuth App Abuse and Token Persistence
- Android March 2026 Bulletin: Evidence Preservation and Triage After Suspected Device Compromise
Final Takeaway
A zero-day is not over when the patch finishes deploying.
It is over when your team can explain, with reasonable confidence:
- whether exploitation occurred
- what evidence supports that conclusion
- what the attacker could access
- what containment actions were necessary
- what must still be remediated
That is why the first 72 hours matter so much.
Build a post-advisory investigation checklist before the next emergency patch day.
And if you want outside help, make it evidence-first help.