Most security teams have more alerts than they can investigate. Most of those alerts are low quality — either false positives that shouldn't fire, or true positives that fire without enough context to act on them. The response to this situation is usually to buy more tooling, tune thresholds, or hire more analysts. Rarely is it to question the quality of the detection logic itself.

Here's the thing: detection is not a configuration problem. It's a content problem.

The Assumptions Problem

Security teams routinely assume their detection tools work. They buy a SIEM, point it at their log sources, deploy a few out-of-the-box rules, and proceed on the assumption that meaningful alerts will surface when something bad happens. Sometimes they do. Often, they don't.

The gap is usually not technical. The SIEM is working. The logs are arriving. The rules are executing. What's missing is validation: nobody has confirmed that the rules actually catch the behavior they're supposed to catch — in this environment, with this log format, in this configuration.

Assumptions accumulate. Rules are copied from detection repositories and deployed without being tailored to the local log schema. Alert thresholds are set based on default values rather than environmental baselines. Log sources are assumed to be complete when they're actually partial. Nobody tests because testing is slow and manual and there's always something more urgent.
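The validation step described above can be made concrete with a small replay test. This is an illustrative sketch, not any particular SIEM's API — the rule, the field names, and the sample events are all hypothetical stand-ins for whatever the local log schema actually contains.

```python
# Hypothetical sketch: a rule is only trusted once it has been replayed
# against sample events captured in the *local* log format. All names here
# (match_rule, SUSPICIOUS_PARENTS, the event fields) are illustrative.

SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

def match_rule(event: dict) -> bool:
    """Flag command shells spawned by Office processes."""
    return (
        event.get("process_name", "").lower() in {"cmd.exe", "powershell.exe"}
        and event.get("parent_name", "").lower() in SUSPICIOUS_PARENTS
    )

# Replay a known-bad event and a known-benign one, using this
# environment's actual field names — not a generic reference schema.
known_bad = {"process_name": "powershell.exe", "parent_name": "WINWORD.EXE"}
benign = {"process_name": "powershell.exe", "parent_name": "explorer.exe"}

assert match_rule(known_bad)
assert not match_rule(benign)
```

The point is not the rule itself but the habit: a copied rule that has never been replayed against local telemetry is an assumption, not a detection.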

Good detections are like good documentation: nobody notices when they're great, but everybody suffers when they're bad.

Detection as Content Creation

Writing a detection rule requires:

- Understanding the attacker behavior the rule is meant to catch
- Knowing the local log schema and which sources actually feed it
- Setting thresholds against environmental baselines rather than default values
- Testing the logic against known-bad and known-benign activity in this environment

This is content creation. It requires expertise, time, and iteration. A rule that takes five minutes to write and five years to become trusted isn't a good rule — it's technical debt that generates noise while it matures.

The "Mini Blog Post" Approach

I think about detection rules the same way I think about documentation: every rule should be a self-contained artifact that tells a complete story.

A well-written detection rule includes:

- What behavior it detects and why that behavior matters
- The detection logic itself, and the assumptions it makes about log sources
- Known sources of false positives and how to distinguish them
- What an analyst should check first when the alert fires

Rules written this way take longer to produce. They're also dramatically more valuable — both for investigation quality and for institutional knowledge that survives team turnover.
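One way to make the "complete story" travel with the rule is to carry the narrative as structured fields alongside the logic. The field names below are hypothetical, not from any particular rule format — a minimal sketch of the idea.

```python
# Illustrative sketch: the rule as a self-contained artifact whose
# documentation travels with its logic. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class DetectionRule:
    name: str
    behavior: str                 # what this catches, and why it matters
    logic: str                    # the query itself
    assumptions: list             # log sources / schema this depends on
    known_false_positives: list   # noise sources an analyst should expect
    triage_steps: list            # what to check first when it fires

rule = DetectionRule(
    name="office-spawns-shell",
    behavior="Command shell spawned by an Office process, a common "
             "initial-access pattern",
    logic='parent_name in ("winword.exe", "excel.exe") '
          'and process_name == "powershell.exe"',
    assumptions=["process creation logging enabled on all endpoints"],
    known_false_positives=["macro-driven internal automation"],
    triage_steps=["Inspect the command line arguments",
                  "Check for child network connections"],
)
```

An analyst who has never seen this rule before can triage the alert from the artifact alone — which is exactly the institutional knowledge that survives turnover.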

AI Makes This More Important, Not Less

Here's the uncomfortable part: AI-powered detection tools amplify the quality of your detection logic. Good detection logic, with AI on top, becomes better. Bad detection logic, with AI on top, becomes bad at higher volume and with more confidence.

AI systems that help generate detection rules can produce syntactically correct rules with plausible-sounding logic that doesn't actually work. The output looks professional. The coverage appears comprehensive. The reality is that you've automated the production of unvalidated rules — which is worse than having fewer, well-understood rules.

The right use of AI in detection is not to generate rules from scratch, but to support the human work that makes detection valuable:

- Translating existing rules into the local log schema
- Drafting the documentation that travels with each rule
- Generating test cases and sample telemetry for validation
- Summarizing relevant context when an alert fires

AI as an accelerator for human detection work. Not as a replacement for the judgment that makes detection meaningful.
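One practical consequence: candidate rules, whether human-written or AI-generated, should pass through the same validation gate before deployment. A hedged sketch, with illustrative function names:

```python
# Hypothetical sketch: every candidate rule — AI-generated or not — must
# fire on all known-bad samples and stay quiet on all known-benign ones
# before it is deployed. Names here are illustrative.

def validate_candidate(rule_fn, known_bad: list, known_benign: list) -> bool:
    """Accept a rule only if it catches every known-bad event and
    raises nothing on known-benign events."""
    return (all(rule_fn(e) for e in known_bad)
            and not any(rule_fn(e) for e in known_benign))

# A plausible-looking generated rule that queries the wrong field name
# matches nothing in this schema — exactly the "syntactically correct,
# doesn't actually work" failure mode.
generated = lambda e: e.get("ProcessName") == "powershell.exe"  # wrong field
corrected = lambda e: e.get("process_name") == "powershell.exe"

known_bad = [{"process_name": "powershell.exe"}]

assert not validate_candidate(generated, known_bad, [])  # rejected
assert validate_candidate(corrected, known_bad, [])      # accepted
```

The gate does not make the AI output trustworthy; it makes untrustworthy output cheap to reject before it becomes alert noise.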

What Good Detection Programs Actually Look Like

The detection programs I've seen work well share a few properties:

- A smaller set of rules, each deliberately written and tailored to the environment
- False positive rates low enough that analysts trust the alerts
- A systematic process for validating that each rule still catches what it claims to catch
- Rule documentation rich enough to survive team turnover

The programs that struggle have a different profile: hundreds of rules, most inherited from default configurations, a high false positive rate that trains analysts to ignore alerts, and no systematic process for validating that the detection layer works.

The fix isn't more rules. It's better ones.