As is well known, the market is flooded with all manner of security mechanisms: firewalls, intrusion detection systems, honeypots, and so on. Broadly speaking, these can be divided into hardware and software solutions, between which a serious battle has raged in the minds of users — though philosophical concepts concern us less.
If one technology had undeniable advantages over all the others, competing solutions would have gone extinct long ago, like mammoths. A well-known market rule states: if there is a clear favorite that has captured ~69% of the market, then there is no problem with choice and everything is fine. The available alternative solutions are concentrated in exotic niches where no one is challenging their position, and consumers of such products usually know in advance what they need (after all, we're not discussing the purchase of a new refrigerator, but the deployment of a complex system whose specifications must be thoroughly studied by engineers — otherwise, what kind of IT industry are we even talking about?!).
The existence of numerous competing solutions with similar specifications gives rise to… no, by no means freedom, but rather a problem of choice, since it is very difficult to figure out right away which specifications we really need and which ones are just getting in the way. Independent reviewers very often use so-called integrated tests, in which each characteristic is assigned its own "weight" (usually plucked out of thin air), after which we get a set of scalar coefficients presented as a nice-looking diagram, the sight of which often strikes terror into our hearts.
What, for example, is the weight of ease of use? Contrary to a common misconception — absolutely none! We're not talking about application software here, but about security systems designed for autonomous operation, requiring minimal scheduled maintenance. And that means all technical issues can be left to the specialists (administrators and other IT staff), so they know they're not just sitting around doing nothing. The "buy the hardware, plug it in, and everything works" approach is only suitable for SOHO offices and creates merely the illusion of security, since the default configuration offers very little protection. No matter how you look at it, proper configuration still requires in-depth knowledge of network protocols and hacking techniques. In other words, you'll need specialists. And specialists have their own way of thinking: the more "knobs" and controls a device has, the easier it is to configure!
Or here's another example: installation density. Yes, indeed, hardware solutions generally have higher installation density, which allows for the deployment of powerful "data centers" even in modest spaces. But… the problem of limited space in Romania (unlike, for example, in Japan) is local rather than global in nature, and therefore the importance of this criterion varies widely and must be determined by each consumer individually.
No integrated tests! No religious wars between "hardware" and "software." My goal is to show when it is more advantageous to use one solution or another. Moreover, in most cases, it is recommended to use both solutions together.
And this is where things get really interesting. When we stop viewing security mechanisms as "hardware versus software" and start treating them as elements of a single system, half the arguments suddenly disappear. Because in a real-world infrastructure, things don't work on a "pick a side" basis, but on a "pick what gets the job done" basis. Hardware solutions are good where predictability and speed are needed: they honestly handle traffic at line-rate, filter out junk, withstand DDoS attacks, and don't crash just because someone decided to send them 200,000 SYNs per second. But as soon as it comes to protocol behavior and analysis, or attempts to understand why a client suddenly started acting strangely, hardware inevitably falls short. Not because it's bad — it's just not designed for that.
At this point, the software stops being a "cheap alternative" and becomes what it's meant to be: an analytical layer that sees what hardware physically cannot. Suricata effortlessly handles DPI on HTTP/2 and TLS fingerprinting, catching things that a hardware firewall wouldn't even notice. And Zeek basically works like a network microscope: it doesn't just "detect attacks"; it turns traffic into a story — who talked to whom, what was requested, what certificates were used, what oddities occurred between packets. When these two layers work together, the result is truly powerful: the hardware offloads the load and filters out the noise, while the software analyzes what's left at a level no ASIC can handle.
To avoid making empty claims, let's look at an example that comes up in production more often than we'd like. At the perimeter stands a hardware firewall handling 20–40G of traffic.
Next comes port mirroring to Suricata, which inspects the mirrored traffic off the live path. Then Zeek, which converts the stream into events. And already at this level, you can see that the same client is making strange sequences of requests, changing its User-Agent every 30 seconds, and that its DNS queries follow a pattern typically seen in tunneling. Hardware will never see this. But software will see it immediately.
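To make the User-Agent observation concrete, here is a minimal sketch of that behavioral check in Python. It assumes the HTTP metadata (for example, from Zeek's http.log) has already been parsed into (timestamp, source, user_agent) tuples; the field layout, window, and threshold are illustrative, not any tool's exact schema.

```python
from collections import defaultdict

def rotating_agents(records, window=300.0, threshold=3):
    """records: (ts, src, user_agent) tuples. Return sources that used
    more than `threshold` distinct User-Agents within any `window` seconds."""
    per_src = defaultdict(list)            # src -> [(ts, ua), ...]
    for ts, src, ua in records:
        per_src[src].append((ts, ua))
    flagged = set()
    for src, events in per_src.items():
        events.sort()
        for i, (ts, _) in enumerate(events):
            # Distinct User-Agents seen in [ts, ts + window].
            uas = {ua for t, ua in events[i:] if t - ts <= window}
            if len(uas) > threshold:
                flagged.add(src)
                break
    return flagged

# One client rotates its User-Agent every 30 seconds, another keeps it stable.
log = [(t * 30.0, "10.0.0.5", "agent-%d" % t) for t in range(6)]
log += [(t * 30.0, "10.0.0.9", "Mozilla/5.0") for t in range(6)]
rotating_agents(log)   # {'10.0.0.5'}
```

A hardware filter sees six perfectly valid HTTP requests from each client; only a stateful layer that remembers what a source did five minutes ago can tell the two apart.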
But the most interesting part begins when we add reactivity. Not just "looking at logs," but intervening in the traffic right now, while the attack is still ongoing. That's when the software layer truly comes into its own. Suricata in IPS mode via NFQUEUE allows you not only to analyze packets, but also to make decisions: let through, drop, or modify. This is no longer monitoring but control — something that hardware, in most cases, cannot do, because it is optimized for speed, not logic.
Here is a simple NFQUEUE handler written in Python (using the netfilterqueue library; traffic must first be steered into the queue with an iptables NFQUEUE rule):

from netfilterqueue import NetfilterQueue

def process(pkt):
    payload = pkt.get_payload()
    # Drop anything carrying an obvious path-traversal attempt,
    # raw or percent-encoded.
    if b"../../" in payload or b"%2e%2e%2f" in payload:
        pkt.drop()
    else:
        pkt.accept()

nfq = NetfilterQueue()
nfq.bind(1, process)   # queue number 1
nfq.run()

It's a simple example, but it illustrates the key point: we can intercept traffic on the fly without touching the hardware.
Now, let's talk about eBPF. It allows you to move some of the logic into the kernel so you don't have to route every packet through userspace. For example, you can filter DNS requests with suspicious patterns or block TLS clients with malicious JA3 fingerprints.
Here is an example of an eBPF filter at the XDP level:
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int dns_filter(struct xdp_md *ctx) {
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end) return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP)) return XDP_PASS;

    struct iphdr *ip = data + sizeof(*eth);
    if ((void *)(ip + 1) > data_end) return XDP_PASS;
    if (ip->protocol != IPPROTO_UDP) return XDP_PASS;

    struct udphdr *udp = (void *)ip + sizeof(*ip);
    if ((void *)(udp + 1) > data_end) return XDP_PASS;

    /* Oversized DNS queries are a common sign of tunneling. */
    if (udp->dest == bpf_htons(53)) {
        if ((data_end - (void *)(udp + 1)) > 300)
            return XDP_DROP;
    }
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

But XDP is more restrictive.
tc-BPF allows for much more nuanced actions: for example, detecting malicious JA3 fingerprints and blocking a client only after several attempts.
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

struct bpf_map_def SEC("maps") ja3_counters = {
    .type = BPF_MAP_TYPE_HASH,
    .key_size = sizeof(__u32),
    .value_size = sizeof(__u32),
    .max_entries = 10240,
};

SEC("tc")
int tls_monitor(struct __sk_buff *skb) {
    /* Offset 26 = IPv4 source address (Ethernet without VLAN tags). */
    __u32 src_ip = load_word(skb, 26);
    /* parse_ja3() is a placeholder: extracting a real JA3 hash from the
     * ClientHello is considerably more involved and omitted for brevity. */
    __u32 ja3 = parse_ja3(skb);

    if (ja3 == 0xDEADBEEF) {             /* a known-bad fingerprint */
        __u32 *count = bpf_map_lookup_elem(&ja3_counters, &src_ip);
        if (count) {
            __sync_fetch_and_add(count, 1);
            if (*count > 5)
                return TC_ACT_SHOT;      /* drop after repeated attempts */
        } else {
            __u32 init = 1;
            bpf_map_update_elem(&ja3_counters, &src_ip, &init, BPF_ANY);
        }
    }
    return TC_ACT_OK;
}

Now — NFQUEUE, but with some context:
from collections import defaultdict

from netfilterqueue import NetfilterQueue
from scapy.layers.inet import IP

requests = defaultdict(int)

def process(pkt):
    payload = pkt.get_payload()
    # NetfilterQueue itself does not expose the source address,
    # so we parse the IP header with scapy.
    src = IP(payload).src

    # Rate-limit POST requests per source.
    if b"POST" in payload:
        requests[src] += 1
        if requests[src] > 20:
            pkt.drop()
            return

    # Block URL-override tricks aimed at /admin.
    if b"X-Original-URL" in payload and b"/admin" in payload:
        pkt.drop()
        return

    pkt.accept()

And finally — Zeek.
Here is a policy script that tracks changes in URI patterns and request frequency, and draws conclusions within a time window:
module API;

export {
    redef enum Notice::Type += { API_Scan_Attempt };
}

global activity: table[addr] of count &default=0;
global last_uri: table[addr] of string &default="";
global ts_window: interval = 5mins;
global last_seen: table[addr] of time;

event http_request(c: connection, method: string, original_URI: string,
                   unescaped_URI: string, version: string)
{
    local ip = c$id$orig_h;
    local uri = unescaped_URI;
    local now = network_time();

    # Reset the counter once the client has been quiet for a full window.
    if ( ip in last_seen && now - last_seen[ip] > ts_window )
        activity[ip] = 0;

    last_seen[ip] = now;
    activity[ip] += 1;

    # A burst of requests whose URIs keep flipping their first character
    # serves here as a crude pattern-change signal.
    if ( last_uri[ip] != "" &&
         uri[0] != last_uri[ip][0] &&
         activity[ip] > 15 )
    {
        NOTICE([$note=API_Scan_Attempt,
                $conn=c,
                $msg=fmt("A suspicious change in patterns: %s", uri)]);
    }

    last_uri[ip] = uri;
}

This isn't an IDS anymore.
It's behavioral analytics, extended over time, with context and memory.
And the deeper you look into real-world cases, the clearer it becomes: no "ideal box" will save you if you don't have a layer that understands what's going on. Hardware can honestly process millions of packets, but it doesn't know that a client who was accessing a single endpoint yesterday suddenly started tweaking parameters today, changing the order of requests, and trying to extract extra fields from JSON responses. To the hardware, it's just traffic. To Zeek, it's already a story, a sequence, an anomaly. And it's precisely these little details that most often reveal the preparation of an attack, rather than the attacks themselves.
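The "yesterday versus today" comparison above can be sketched in a few lines of Python. This is an illustration of the idea, not any particular tool's method: we reduce a client's behavior to a set of (endpoint, parameter) pairs and measure how much of today's set is genuinely new. The endpoints and the drift metric (Jaccard distance) are assumptions chosen for clarity.

```python
def profile_drift(yesterday, today):
    """Jaccard distance between two sets of (endpoint, parameter) pairs:
    0.0 means identical behavior, 1.0 means entirely new behavior."""
    if not yesterday and not today:
        return 0.0
    union = yesterday | today
    inter = yesterday & today
    return 1.0 - len(inter) / len(union)

# Yesterday the client touched one endpoint with two parameters...
baseline = {("/api/user", "id"), ("/api/user", "name")}
# ...today it probes a new parameter and a new endpoint.
current = {("/api/user", "id"), ("/api/user", "role"), ("/api/admin", "debug")}
profile_drift(baseline, current)   # 0.75
```

Every individual request in the second day is well-formed; it is only the shape of the set, compared against memory of the previous day, that betrays the reconnaissance.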
The result is a system that really delivers: the hardware keeps up the pace, eBPF offloads the kernel, NFQUEUE allows us to intervene in the flow, Suricata analyzes protocols, Zeek collects behavioral data, and SIEM ties it all together into a single picture. There's no room for abstract debates here — it all comes down to practical application. Different tasks require different tools, which is precisely why a combined approach works best. It's just basic engineering: each component handles its own part of the job, and together they form an architecture that can withstand real-world workloads and real-world attacks.
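The "single picture" step can be illustrated with a toy correlation rule (not any specific SIEM's API): alerts from the different layers are grouped per source, and a source that trips several independent layers within a short window is escalated. Layer names, the window, and the threshold are all assumptions for the sketch.

```python
from collections import defaultdict

def correlate(events, window=60.0, min_layers=2):
    """events: (ts, layer, src) tuples. Return sources for which at least
    `min_layers` distinct layers fired within `window` seconds."""
    per_src = defaultdict(list)            # src -> [(ts, layer), ...]
    for ts, layer, src in events:
        per_src[src].append((ts, layer))
    escalated = set()
    for src, evs in per_src.items():
        evs.sort()
        for i, (ts, _) in enumerate(evs):
            layers = {l for t, l in evs[i:] if t - ts <= window}
            if len(layers) >= min_layers:
                escalated.add(src)
                break
    return escalated

alerts = [
    (0.0,  "suricata", "10.0.0.5"),   # signature hit
    (12.0, "zeek",     "10.0.0.5"),   # behavioral notice, same client
    (30.0, "suricata", "10.0.0.7"),   # isolated hit, stays a low-priority event
]
correlate(alerts)   # {'10.0.0.5'}
```

One layer firing is noise; two unrelated layers agreeing about the same client within a minute is a signal — which is exactly why the combined architecture earns its keep.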