Security programs fail for a predictable reason: they depend on individual heroics instead of repeatable systems. When the security-minded engineer leaves, the program leaves with them. When the startup hits growth, the ad-hoc approach that worked at ten people collapses at a hundred.

After managing security programs at an enterprise firm in Argentina and working across multiple startups at different growth stages, I kept building variations of the same framework. The specific controls changed — the structure didn't. This is that structure.

A security program that depends on individual brilliance isn't a program. It's a person with a job title.

Phase 1: Assess

You can't prioritize what you don't understand. The first phase is about getting an accurate picture of where you actually stand — not where you think you stand, and not where a questionnaire says you stand.

The assessment covers:

  • Infrastructure review — Cloud architecture, network topology, data flows, where sensitive data lives
  • Architecture analysis — How services communicate, where trust boundaries exist, what's exposed
  • Policy gap assessment — What's documented, what's enforced, what's assumed
  • Current tooling evaluation — What's deployed, what's actually configured, what's generating signal vs. noise
  • Threat landscape mapping — Who would target this organization, how, and what the realistic attack paths look like

The output isn't a long PDF. It's a clear, shared understanding of the current state — documented well enough that a new team member could understand it without a two-hour briefing.
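One way to make that shared understanding queryable rather than prose-only is to record assessment findings as structured data. A minimal sketch, where the `Asset` class and the three example entries are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical sketch: capture Phase 1 findings (exposure, sensitive data,
# trust boundaries) as data instead of a PDF, so questions like
# "where does sensitive data live?" have instant answers.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    internet_exposed: bool
    holds_sensitive_data: bool
    trust_boundary: str  # which boundary the asset sits behind


# Example inventory entries (illustrative, not real infrastructure)
inventory = [
    Asset("payments-api", internet_exposed=True, holds_sensitive_data=True, trust_boundary="dmz"),
    Asset("build-runner", internet_exposed=False, holds_sensitive_data=False, trust_boundary="internal"),
    Asset("customer-db", internet_exposed=False, holds_sensitive_data=True, trust_boundary="data"),
]

# One question the assessment should answer without a two-hour briefing:
sensitive = [a.name for a in inventory if a.holds_sensitive_data]
print(sensitive)  # → ['payments-api', 'customer-db']
```

The point isn't the format; it's that the current state is recorded precisely enough to query, diff, and hand to a new team member.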

Phase 2: Prioritize

Every assessment surfaces more findings than you can fix. The question is: in what order?

Most teams prioritize by scanner severity — Critical first, High second, Medium when there's time. This is the wrong approach. A critical CVE in an internal system with no external access and no sensitive data may matter far less than a medium-severity misconfiguration in the IAM layer that could let an attacker pivot to everything.

Prioritization should be driven by actual business impact:

  • What would a breach of this system cost the organization?
  • How likely is exploitation given the current threat landscape?
  • What's the blast radius if this is compromised?
  • Is there a compensating control already in place?

The output of this phase is a risk register and a security roadmap — a prioritized list of initiatives, mapped to business objectives, with owners and timelines. Not a wish list. A plan.
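The ordering argument above can be sketched as a simple scoring function. The 1-to-5 scales, the 0.5 discount for compensating controls, and the two example findings are all illustrative assumptions, not a calibrated model:

```python
# Illustrative risk-scoring sketch: rank findings by business impact,
# likelihood, and blast radius rather than raw scanner severity.
def risk_score(impact, likelihood, blast_radius, compensating_control=False):
    score = impact * likelihood * blast_radius  # each factor on a 1-5 scale
    if compensating_control:
        score *= 0.5  # halve when a compensating control already exists
    return score


# A critical CVE on an isolated internal box with no sensitive data...
internal_cve = risk_score(impact=1, likelihood=2, blast_radius=1)

# ...versus a medium-severity IAM misconfiguration that lets an attacker pivot.
iam_misconfig = risk_score(impact=4, likelihood=3, blast_radius=5)

# The "medium" finding outranks the "critical" one.
assert iam_misconfig > internal_cve
```

Any real model needs tuning, but even a crude one forces the conversation the scanner severity column skips: what does this system actually mean to the business?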

Phase 3: Operate

This is where most security programs either succeed or quietly disappear. Assessment is easy. Prioritization is fun. Execution is the hard part.

Operating the security program means:

  • Implementing controls — Not recommending them. Actually deploying IAM policies, configuring logging, writing detection rules, hardening cloud configurations
  • Architecture reviews — Being part of the design conversation before infrastructure decisions are made, not after
  • Incident readiness — Building and testing IR playbooks, running simulations, validating that monitoring systems catch what they're supposed to catch
  • Vendor and third-party assessment — Reviewing the security posture of tools and services in the supply chain
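"Writing detection rules" from the list above can start as nothing more than a predicate over log events. A minimal sketch, assuming a generic event dictionary rather than any specific SIEM or cloud provider's log schema:

```python
# Minimal detection-rule sketch: flag console logins performed without MFA.
# The event field names are assumptions, not a real log format.
def console_login_without_mfa(event: dict) -> bool:
    return (
        event.get("event_name") == "ConsoleLogin"
        and event.get("mfa_used") is False
    )


# Synthetic events, including one that should fire and two that shouldn't.
events = [
    {"event_name": "ConsoleLogin", "user": "alice", "mfa_used": True},
    {"event_name": "ConsoleLogin", "user": "bob", "mfa_used": False},
    {"event_name": "ApiCall", "user": "ci-bot", "mfa_used": False},
]

alerts = [e["user"] for e in events if console_login_without_mfa(e)]
print(alerts)  # → ['bob']
```

Running rules against synthetic events like this is also the cheapest form of the incident-readiness validation mentioned above: proving the rule fires when it should, before an attacker tests it for you.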

Phase 4: Measure

If you can't measure it, you can't improve it — and you can't explain it to anyone else in the organization.

Security metrics that matter:

  • Remediation velocity — How quickly are identified risks being addressed?
  • Detection coverage — What percentage of the attack surface has working detection?
  • Mean time to detect / respond — How fast does the team actually catch and respond to incidents?
  • Control effectiveness — Are deployed controls doing what they're supposed to?
  • Program maturity score — A single normalized score for the program as a whole, tracked over time so the trend is visible
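Two of these metrics can be computed directly from ticket and incident timestamps. A sketch with made-up records, where the field names and units are assumptions:

```python
# Sketch: remediation velocity (median days to close a finding) and
# mean time to detect (hours), computed from illustrative records.
from datetime import datetime
from statistics import median

findings = [
    {"opened": datetime(2024, 1, 1), "closed": datetime(2024, 1, 11)},
    {"opened": datetime(2024, 1, 5), "closed": datetime(2024, 1, 9)},
    {"opened": datetime(2024, 2, 1), "closed": datetime(2024, 2, 21)},
]
incidents = [
    {"started": datetime(2024, 3, 1, 8, 0), "detected": datetime(2024, 3, 1, 10, 0)},
    {"started": datetime(2024, 4, 2, 9, 0), "detected": datetime(2024, 4, 2, 13, 0)},
]

# Median days from finding opened to finding closed; lower is better.
days_to_close = [(f["closed"] - f["opened"]).days for f in findings]
remediation_velocity = median(days_to_close)

# Mean hours from incident start to detection.
hours_to_detect = [
    (i["detected"] - i["started"]).total_seconds() / 3600 for i in incidents
]
mttd = sum(hours_to_detect) / len(hours_to_detect)

print(remediation_velocity, mttd)  # → 10 3.0
```

The implementation is trivial; the discipline is keeping the timestamps honest and reporting the same numbers every quarter.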

Phase 5: Improve

Every quarter, the cycle resets. The environment has changed, the threat landscape has shifted, the company has grown. What worked six months ago may not be enough today. The improvement pass means:

  • Reassessing: what's changed since the last assessment?
  • Refreshing the roadmap: what priorities have shifted?
  • Updating the risk register: what new risks have emerged?
  • Integrating lessons learned: what did incidents, simulations, or near-misses teach us?
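The reassessment step has a mechanical core: comparing this quarter's risk register against last quarter's. A sketch with hypothetical risk IDs:

```python
# Sketch of the quarterly register diff: which risks are new since the
# last assessment, and which have been closed? IDs are illustrative.
last_quarter = {"R-001", "R-002", "R-003"}
this_quarter = {"R-002", "R-003", "R-007"}

new_risks = this_quarter - last_quarter    # emerged since last assessment
closed_risks = last_quarter - this_quarter  # remediated or retired

print(sorted(new_risks), sorted(closed_risks))  # → ['R-007'] ['R-001']
```

If both sets are empty quarter after quarter, that's not stability; it's a sign the reassessment isn't happening.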

A security program that looks the same after two years of growth hasn't been operating — it's been on life support.


Why This Works

The five phases aren't new ideas. Assess, prioritize, operate, measure, improve is effectively the PDCA cycle applied to security. What makes the SOCHUB model different is the emphasis on execution over documentation and measurement over assertion.

Every phase produces something concrete:

  • Assess → shared understanding of current state
  • Prioritize → risk register + security roadmap
  • Operate → implemented controls + tested playbooks
  • Measure → security metrics + posture trend data
  • Improve → updated roadmap + integrated lessons

No phase ends with a presentation. Each phase ends with an artifact that feeds into the next one.

Santiago Friquet

Security engineer based in Buenos Aires. I write about cloud detection, incident response, and AI/ML security.
