The question every security team eventually has to answer is: "How do we know we're ready for the next incident?" The honest answer, in most organizations, is that they don't. They have playbooks. They have escalation procedures. They have a SIEM. But whether those things actually work together under real conditions — that's usually unknown until an incident proves otherwise.
SIRAS is my attempt to change that. It's an open-source Python framework for simulating realistic attack scenarios in controlled environments, designed to expose detection blind spots and validate IR readiness before a real incident does.
The Problem with "We Have a Playbook"
Incident response playbooks are written in quiet moments, by people thinking about what they'd do if something went wrong. They're usually reasonable documents. What they rarely are is tested.
The gaps I consistently see across organizations:
- Detection rules that were written but never validated against actual attack behavior
- Monitoring systems that send alerts to a channel no one is watching
- SIEM configurations that look correct but have an edge case that causes them to drop specific event types
- Escalation paths that work in theory but depend on someone who's on vacation
- IR teams that know the playbook but have never actually run through it under pressure
The only way to find these gaps is to simulate the conditions that expose them — safely, repeatably, and ideally before an attacker does the testing for you.
What SIRAS Does
SIRAS (Security Incident Response Automated Simulations) is a Python-based framework that simulates realistic attack scenarios against your environment and validates whether your detection and response infrastructure catches them.
Current simulation capabilities include:
- Compromised credential scenarios — Simulating account takeover patterns: unusual login times, anomalous API access patterns, credential stuffing behavior
- Unauthorized admin role assignments — Privilege escalation scenarios that test whether your IAM monitoring fires when unexpected role changes occur
- Insecure workload configurations — Deliberately misconfigured resources to test whether your configuration monitoring detects drift
- Living-off-the-land techniques — Simulations mapped to MITRE ATT&CK techniques that abuse legitimate tools and services in attack patterns
Every simulation is designed to leave a realistic forensic trail — the kind of activity that a detection rule should catch, and that an analyst should be able to investigate.
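To make the structure of these scenarios concrete, here is a minimal sketch of how one could be declared in Python. The class names, fields, and the specific scenario below are illustrative assumptions, not SIRAS's actual API; the ATT&CK mapping shown (T1078, Valid Accounts) is one reasonable fit for account-takeover behavior.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a SIRAS-style scenario declaration.
# Names and fields are illustrative, not the framework's real API.

@dataclass
class Scenario:
    name: str
    attack_technique: str          # MITRE ATT&CK ID, e.g. "T1078" (Valid Accounts)
    scope: list[str]               # accounts/systems the simulation may touch
    expected_alerts: list[str]     # detections that should fire
    destructive: bool = False      # simulations are non-destructive by design

    def steps(self):
        """Yield the telemetry-generating actions for this scenario."""
        raise NotImplementedError


@dataclass
class CompromisedCredentialScenario(Scenario):
    login_hours_utc: list[int] = field(default_factory=lambda: [2, 3, 4])

    def steps(self):
        # Each step generates a realistic forensic trail without any real
        # compromise: off-hours logins, then a credential-stuffing burst.
        yield from (
            f"login at {h:02d}:00 UTC from unusual geo" for h in self.login_hours_utc
        )
        yield "burst of 50 failed auth attempts (credential stuffing pattern)"


scenario = CompromisedCredentialScenario(
    name="account-takeover-basic",
    attack_technique="T1078",
    scope=["svc-test-account"],
    expected_alerts=["off_hours_login", "auth_failure_burst"],
)
for step in scenario.steps():
    print(step)
```

Declaring scope and expected alerts alongside the steps is what makes the run measurable: the scenario itself states which detections it should trigger.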
What Running SIRAS Taught Me
Running SIRAS in our environment surfaced things that wouldn't have been visible any other way:
- Detection blind spots — Specific attack techniques that should have triggered alerts but didn't, because the corresponding detection rule had a logic error or was pointed at the wrong log source
- False positive baseline — By running the same simulation repeatedly, we could identify which alerts were generated consistently (true positives from the simulation) vs. which fired randomly (false positives to investigate separately)
- Monitoring system validation — Confirming that SIEM alerts actually arrived in Slack, that the right people were tagged, and that escalation thresholds triggered at the expected severity levels
- Training under realistic conditions — Using simulations as tabletop exercises where the team didn't know in advance whether the alert was a simulation or a real incident
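The false-positive baseline in particular falls out of a simple comparison across repeated runs. A small sketch of that analysis, assuming you can collect the set of alert names observed on each run (the alert names and output format here are made up for illustration):

```python
from collections import Counter

# Given the alerts observed on each repeated run of the same simulation,
# separate alerts that fire every time (stable true positives from the
# simulation) from ones that appear sporadically (noise to investigate).

def baseline(runs: list[set[str]]) -> tuple[set[str], set[str]]:
    counts = Counter(alert for run in runs for alert in run)
    consistent = {a for a, c in counts.items() if c == len(runs)}
    sporadic = set(counts) - consistent
    return consistent, sporadic

runs = [
    {"off_hours_login", "auth_failure_burst", "dns_anomaly"},
    {"off_hours_login", "auth_failure_burst"},
    {"off_hours_login", "auth_failure_burst", "geo_velocity"},
]
consistent, sporadic = baseline(runs)
print(consistent)  # fired on every run: attributable to the simulation
print(sporadic)    # fired inconsistently: investigate separately as noise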
Detection engineering is no longer optional; an untested detection is just an assumption. SIRAS makes that testing repeatable and systematic rather than a one-time fire drill.
The Methodology Behind SIRAS
SIRAS simulations follow a consistent structure:
1. Define the scenario — What attack technique or behavior pattern is being simulated? Map it to MITRE ATT&CK where applicable.
2. Set the scope — Which systems, accounts, or services are in scope for the simulation?
3. Execute safely — Simulations are designed to be non-destructive. The goal is to generate realistic telemetry, not to actually compromise anything.
4. Measure detection — Did the expected alerts fire? Within the expected timeframe? In the right channels?
5. Document the gap — Any simulation that didn't generate the expected detection is a finding. Treat it like a vulnerability.
The repeatability is the point. Run the same simulation before and after updating a detection rule. Run it monthly as a sanity check. Run it after a SIEM migration to confirm nothing got lost in the transition.
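The whole loop is small enough to sketch end to end. Everything below is hypothetical scaffolding, not SIRAS's real interface: a real run would emit actual telemetry and query your SIEM for observed alerts instead of taking them as a parameter.

```python
import time

# Minimal sketch of the five-step loop: define/scope/execute, then
# measure detection and document any gap. Names are illustrative.

def run_simulation(scenario: dict, observed_alerts: set[str], window_s: float = 300.0) -> dict:
    start = time.monotonic()
    # Steps 1-3: the scenario dict carries the definition and scope;
    # executing its steps generates the (non-destructive) telemetry.
    for step in scenario["steps"]:
        pass  # emit telemetry for this step
    elapsed = time.monotonic() - start

    # Step 4: measure detection. Did every expected alert fire in the window?
    missing = set(scenario["expected_alerts"]) - observed_alerts

    # Step 5: document the gap. Each missing detection is a finding,
    # to be tracked like a vulnerability.
    findings = [f"MISSING DETECTION: {alert}" for alert in sorted(missing)]
    return {
        "elapsed_s": elapsed,
        "within_window": elapsed <= window_s,
        "findings": findings,
    }

result = run_simulation(
    {"steps": ["off-hours login", "failed-auth burst"],
     "expected_alerts": ["off_hours_login", "auth_failure_burst"]},
    observed_alerts={"off_hours_login"},
)
print(result["findings"])
```

Because the run produces a structured result rather than a pass/fail, the same invocation works for before/after comparisons around a rule change or a SIEM migration.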
What's Next for SIRAS
The roadmap for SIRAS includes:
- Cloud-specific attack scenarios (AWS, GCP, Azure) at higher fidelity
- SIEM integration plugins for direct validation against detection platforms
- A web interface for non-CLI users and team-based simulation management
- More MITRE ATT&CK technique coverage, particularly in the cloud sub-techniques
SIRAS is open source and actively developed. If you're working on detection validation, IR readiness testing, or just want to know whether your monitoring actually works — I'd love to hear what you're building.