Why Do Security Alerts Get Ignored? Building and Tuning a Simple Detection System to Reduce Alert Fatigue

One of the biggest challenges in cybersecurity today isn't a lack of data — it's too much of it. Analysts are often overwhelmed with alerts, many of which are low-priority or false positives. Over time, this leads to alert fatigue, where real threats can be missed simply because they're buried in noise.

As a junior cybersecurity analyst, I wanted to understand this problem from a practical perspective. Instead of just generating alerts in a SIEM, I asked a more important question:

👉 Can I improve detection quality by reducing unnecessary alerts?

To explore this, I built a small lab using an Ubuntu system and a SIEM platform, generated both normal and malicious activity, and then tuned detection rules to improve the signal-to-noise ratio.

This project focuses on a critical subproblem in cybersecurity operations: How do we reduce alert fatigue without losing visibility into real threats?

More Alerts ≠ Better Security

When I first began testing, I assumed that more alerts meant better detection.

That assumption was wrong.

After configuring Wazuh as my SIEM, I generated activity such as:

  • Failed SSH logins
  • Successful logins
  • Normal system usage

The result?

Instead of meaningful insights, I saw:

  • Dozens of identical alerts
  • No prioritization
  • Difficulty identifying real threats

📌 Key Takeaway:

High alert volume without context reduces effectiveness and increases risk.

Experiment 1 — Brute Force Noise vs Detection Signal (MITRE T1110.001)

Goal — Identify when failed logins become meaningful threats

To simulate a brute force attack, I used Hydra:

hydra -l testuser -P passwords.txt ssh://localhost

This maps to: 👉 MITRE ATT&CK: T1110.001 — Password Guessing

Initial Result (Before Tuning)

  • Every failed login triggered an alert
  • SIEM flooded with low-value events

Tuning Implementation

I modified detection logic to trigger alerts only when:

  • 5+ failed logins
  • Within a 2-minute window
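The threshold logic above can be sketched in Python as a sliding-window counter. This is an illustrative stand-in for the Wazuh rule I actually modified, not the rule itself; the IP address and timings are made up for the example:

```python
from collections import deque

def make_threshold_detector(threshold=5, window_seconds=120):
    """Return a callback that flags a source IP once it produces
    `threshold` failed logins within a `window_seconds` window."""
    events = {}  # source IP -> deque of recent failure timestamps

    def on_failed_login(src_ip, timestamp):
        q = events.setdefault(src_ip, deque())
        q.append(timestamp)
        # Drop failures that have aged out of the sliding window.
        while q and timestamp - q[0] > window_seconds:
            q.popleft()
        return len(q) >= threshold  # True -> raise one alert

    return on_failed_login

detect = make_threshold_detector()
# Four quick failures stay below the threshold: no alert yet.
alerts = [detect("10.0.0.5", t) for t in (0, 10, 20, 30)]
# The fifth failure inside the two-minute window trips the rule.
alerts.append(detect("10.0.0.5", 40))
# alerts -> [False, False, False, False, True]
```

Instead of one alert per failed login, the analyst now sees a single alert only when the pattern of a brute force attempt emerges.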

Improved Result

  • Reduced alert volume significantly
  • Highlighted actual attack patterns

📌 Key Takeaway:

Threshold-based detection transforms noise into actionable intelligence.

Experiment 2 — Reducing False Positives (MITRE T1078)

Goal — Distinguish legitimate access from suspicious behavior

I simulated normal administrative activity:

  • Repeated SSH logins
  • Routine system access

This maps to: 👉 MITRE ATT&CK: T1078 — Valid Accounts

Problem Observed

  • SIEM flagged normal admin behavior as suspicious
  • High false positive rate

Tuning Implementation

I introduced:

  • User-based filtering (trusted accounts)
  • IP-based whitelisting
  • Context-aware alerting
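A minimal sketch of that filtering logic, assuming a hypothetical trusted account set and admin subnet (the names and addresses here are illustrative, not my actual environment):

```python
TRUSTED_USERS = {"deploy", "backup-svc"}  # hypothetical trusted accounts
TRUSTED_SUBNET = "10.0.1."                # hypothetical admin subnet

def should_alert(event):
    """Suppress alerts for known-good user/source combinations,
    while keeping full visibility on anything unrecognized."""
    trusted_user = event["user"] in TRUSTED_USERS
    trusted_source = event["src_ip"].startswith(TRUSTED_SUBNET)
    if trusted_user and trusted_source:
        return False  # routine admin access: log it, don't alert
    return True       # unknown user or unexpected source: alert

# Known admin from the admin subnet: suppressed.
# Same account from an external address: still alerts.
should_alert({"user": "deploy", "src_ip": "10.0.1.7"})     # -> False
should_alert({"user": "deploy", "src_ip": "203.0.113.9"})  # -> True
```

Note that the filter requires both a trusted user and a trusted source, so a trusted account logging in from an unexpected address still raises an alert, which is exactly the T1078 scenario worth catching.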

Improved Result

  • Reduced false positives
  • Maintained visibility on unknown users

📌 Key Takeaway:

Not all activity is malicious — context is critical for accurate detection.

Experiment 3 — Prioritizing High-Risk Behavior (MITRE T1059)

Goal — Elevate meaningful attack patterns over isolated events

I simulated command execution behavior using:

bash -c "whoami"

Mapped to: 👉 MITRE ATT&CK: T1059 — Command and Scripting Interpreter

Problem Observed

  • Single command executions triggered alerts
  • No distinction between benign and malicious behavior

Tuning Implementation

I refined alerts to prioritize:

  • Multiple suspicious commands
  • Combined behaviors (login + command execution)
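The "login + command execution" correlation can be sketched like this. Again, this is a simplified Python illustration of the behavior I configured, with made-up event records rather than real Wazuh output:

```python
def correlate(events, window_seconds=60):
    """Alert only when a user's SSH login is followed by command
    execution within `window_seconds`, rather than on every event."""
    last_login = {}  # user -> timestamp of most recent login
    alerts = []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "login":
            last_login[e["user"]] = e["ts"]
        elif e["type"] == "exec":
            seen = last_login.get(e["user"])
            if seen is not None and e["ts"] - seen <= window_seconds:
                alerts.append((e["user"], e["cmd"]))
    return alerts

events = [
    {"ts": 0,  "type": "exec",  "user": "bob",   "cmd": "whoami"},
    {"ts": 5,  "type": "login", "user": "alice"},
    {"ts": 20, "type": "exec",  "user": "alice", "cmd": "whoami"},
]
correlate(events)  # -> [("alice", "whoami")] — bob's isolated exec is ignored
```

The isolated `whoami` from bob produces nothing, while alice's login followed by execution inside the window surfaces as a single, higher-context alert.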

Improved Result

  • Alerts became more contextual
  • Reduced unnecessary noise

📌 Key Takeaway:

Correlation across events is more valuable than isolated alerts.

✅ More alerts do NOT equal better security
✅ Alert fatigue can hide real threats
✅ Threshold-based detection reduces noise
✅ Context-aware filtering minimizes false positives
✅ Correlation improves detection accuracy
✅ SIEM tuning is just as important as SIEM deployment

So, why do security alerts get ignored?

Because without tuning, they lose meaning.

These experiments showed that simply generating alerts is not enough. Detection systems must be refined to:

  • Reduce unnecessary noise
  • Highlight real threats
  • Provide actionable insights

By applying thresholding, filtering, and correlation techniques, I was able to significantly improve detection quality while reducing alert volume.

Final Thoughts

This project changed how I think about cybersecurity.

Before, I believed the goal was to detect everything. Now, I understand the goal is to detect what matters.

One of the biggest lessons I learned is that alert fatigue is not just a usability issue — it's a security risk.

Challenges I Faced

  • Overwhelming alert volume
  • Difficulty distinguishing real threats
  • Lack of initial detection prioritization

What I Would Improve Next

  • Implement automated response (SOAR integration)
  • Expand detection to cloud environments
  • Test additional MITRE techniques (privilege escalation, persistence)

Advice for New Analysts

👉 Don't just build detection systems — break and improve them
👉 Always validate what your SIEM is actually showing
👉 Focus on quality of alerts, not quantity

Where to Follow My Work

This is part of my journey transitioning into cybersecurity, focusing on SIEM operations, detection engineering, and real-world security testing.

If you're working on similar projects or learning cybersecurity, feel free to connect and share ideas.