– By Victor Odico

From Snort Rules to AI: A SOC Evolution

I remember that early in my career, standing up a SOC meant building a Linux server and installing an intrusion detection and prevention system (IDPS) like Snort, then configuring common signatures by hand. After monitoring the network to establish a baseline, we would identify unknown activities, manually write rules to search for those events, and repeat the process again and again.

As you can tell, this process is very manual, laborious, and prone to mistakes.

The IT security industry has evolved from manual processes to automation over the years. If you were to walk into any SOC in recent years, you would notice that most of the tools automatically update their signatures from common threat intelligence providers, with severity classifications corresponding to the National Vulnerability Database (NVD).

The Cloud Complication

In the last five years, there has been a strong push to migrate our operations to the cloud. This is not as easy as flipping a switch. It is an intentional process of prioritizing workloads, understanding client experience, projecting operational cost, and most importantly determining an organization’s risk tolerance.

THE HYBRID REALITY

More than 90% of organizations use hybrid cloud or multi-cloud solutions (Rackspace, Gitnux).

This creates predictable problems: data silos, limited cloud visibility, human-driven processes, and lack of standardization.

 

The Number One Problem: Skills Gap

#1

Skills gaps and staffing challenges are the top problem reported by SOC managers (SOC-CMM 2025, SANS 2025)

In a SOC environment, this translates to a longer Mean Time to Detect (MTTD) for threats. SOCs are built to quickly detect threats, minimize potential damage, and remediate them. The goal is to complete this cycle in a very short time so that little or no harm is done to the organization.

Having the skills gap as the number one concern of SOC managers makes it an Achilles' heel for the entire operation of a SOC.

 

This naturally begs the question: Can introducing Artificial Intelligence to the SOC process improve our MTTD and solve the skills gaps issue?

The System I Built: A Three-Step Pipeline

Raw asset data from sources such as endpoint detection and response (EDR), network detection and response (NDR), extended detection and response (XDR), and security orchestration, automation and response (SOAR) tools is the functional building block for threat intelligence and the best place to integrate AI to get the most valuable information.

STEP 1: DATA FUNNEL TO SIEM

┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐
│  EDR  │ │  NDR  │ │  XDR  │ │ SOAR  │
└───┬───┘ └───┬───┘ └───┬───┘ └───┬───┘
    └─────────┴─────────┴─────────┘
                  ▼
    ┌─────────────────────────────────┐
    │       SIEM REPOSITORY           │
    │    Normalize to JSON format     │
    └─────────────────────────────────┘

JSON is industry preferred: lightweight, human-readable, machine-parseable
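To make Step 1 concrete, here is a minimal sketch of what "normalize to JSON" could look like. The field names and the envelope shape are illustrative assumptions, not a formal schema; a production SIEM pipeline would use its own mapping rules per source.

```python
import json

def normalize_alert(source: str, raw: dict) -> str:
    """Flatten a raw tool alert into a common JSON envelope for the SIEM.

    The keys below (source, timestamp, host, severity, message) are
    hypothetical; real pipelines define these per connector.
    """
    record = {
        "source": source,                   # EDR / NDR / XDR / SOAR
        "timestamp": raw.get("ts"),         # original event time
        "host": raw.get("hostname"),
        "severity": raw.get("sev", "unknown"),
        "message": raw.get("msg", ""),
    }
    return json.dumps(record)

# Example: an EDR alert becomes one JSON line the SIEM can index
edr_alert = {"ts": "2025-01-15T08:30:00Z", "hostname": "ws-042",
             "sev": "high", "msg": "Suspicious PowerShell spawn"}
print(normalize_alert("EDR", edr_alert))
```

Because every source lands in the same envelope, downstream enrichment only has to understand one shape, which is exactly why JSON's lightweight, machine-parseable structure is preferred here.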

 

The next step is the AI triage and enrichment process. Here we sort through the alerts and extract entities using natural language processing (NLP) models. These models normalize alerts using the Open Cybersecurity Schema Framework (OCSF), whose standardized fields make pattern recognition easier. We then take advantage of LLMs that add context to the alerts by mapping them to the MITRE ATT&CK framework.

STEP 2: AI TRIAGE & ENRICHMENT

┌─────────────────────────────────────┐
│      NLP ENTITY EXTRACTION          │
│      Normalize using OCSF schema    │
└─────────────────┬───────────────────┘
                  ▼
┌─────────────────────────────────────┐
│      LLM CONTEXT ENRICHMENT         │
│      Map to MITRE ATT&CK framework  │
└─────────────────┬───────────────────┘
                  ▼
┌─────────────────────────────────────┐
│      RISK SCORE + REASON WHY        │
│      (Historical data + ML models)  │
└─────────────────────────────────────┘

The value-add: Knowing WHY an alert matters helps analysts focus instead of chasing noise
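The "risk score + reason why" stage can be sketched as follows. This is a toy illustration: the keyword-to-technique lookup and the scoring formula are assumptions standing in for the NLP/ML models described above, and the technique IDs shown are real ATT&CK identifiers used only as examples.

```python
# Hypothetical keyword → MITRE ATT&CK technique lookup. A real pipeline
# would use an NLP model and the full ATT&CK dataset, not string matching.
TECHNIQUE_MAP = {
    "powershell": ("T1059.001", "Command and Scripting Interpreter: PowerShell"),
    "credential": ("T1003", "OS Credential Dumping"),
}

def enrich(alert: dict, historical_hits: dict) -> dict:
    """Attach a technique ID, a risk score, and the reason why it matters.

    The score blends alert severity with how often the technique has
    fired historically (an illustrative stand-in for the ML models).
    """
    text = alert.get("message", "").lower()
    for keyword, (tid, name) in TECHNIQUE_MAP.items():
        if keyword in text:
            base = {"low": 20, "medium": 50, "high": 80}.get(
                alert.get("severity"), 10)
            hits = historical_hits.get(tid, 0)
            score = min(100, base + 2 * hits)
            return {**alert,
                    "technique": tid,
                    "risk_score": score,
                    "reason": f"Matches {tid} ({name}); seen {hits}x historically"}
    return {**alert, "technique": None, "risk_score": 5,
            "reason": "No known technique match"}
```

The point of the `reason` field is exactly the value-add above: the analyst sees why the score is what it is, rather than a bare number.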

 

These enriched alerts, along with logs of the AI's decisions, are passed down the pipeline to the SOAR and incident response platform to generate playbooks for faster, more consistent deployments. These playbooks can be reused in any other environment within the organization with minor customizations, because the underlying dataset originated from systems in the same organization.

LESSON LEARNED: THE DUPLICATE RECORD PROBLEM

One key challenge we encountered: how to steer the AI's decision-making toward deleting duplicate records with lower confidence ratings. Initially, the system treated duplicate records as unique records, creating confusion because our output dataset contained more assets than the raw asset data that was ingested.

Solution: We introduced Python code to define a High Confidence Asset Record (HCAR) requirement. HCARs must contain three key identifiers: IP address, hostname, and resource ID. If an asset record lacked any of the three identifiers, it was dropped.
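A minimal sketch of that HCAR filter in Python follows. The field names (`ip_address`, `hostname`, `resource_id`) mirror the three identifiers above; the dedup-by-identifier-triple step is an assumption about how we collapse surviving duplicates, shown here for illustration.

```python
# The three identifiers an HCAR must carry, per the requirement above
REQUIRED_FIELDS = ("ip_address", "hostname", "resource_id")

def is_hcar(record: dict) -> bool:
    """High Confidence Asset Record: all three identifiers present and non-empty."""
    return all(record.get(field) for field in REQUIRED_FIELDS)

def dedupe_assets(records: list[dict]) -> list[dict]:
    """Drop low-confidence records, then collapse duplicates.

    Records missing any identifier are dropped; records sharing the same
    identifier triple are collapsed to the first copy seen.
    """
    seen: dict[tuple, dict] = {}
    for record in records:
        if not is_hcar(record):
            continue  # missing an identifier → dropped
        key = tuple(record[field] for field in REQUIRED_FIELDS)
        seen.setdefault(key, record)  # keep the first high-confidence copy
    return list(seen.values())
```

With this in place, the output dataset can never contain more assets than the raw data ingested, which was the symptom that exposed the problem.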

 

STEP 3: ANALYST CONSOLE (HUMAN IN THE LOOP)

┌─────────────────────────────────────────────┐
│             ANALYST DASHBOARD               │
│   ┌─────────────────────────────────────┐   │
│   │ Enriched Alerts + AI Triage Summary │   │
│   │ + Recommended Actions               │   │
│   └─────────────────────────────────────┘   │
│           [REVIEW] [VERIFY] [ACT]           │
└──────────────────────┬──────────────────────┘
                       │
              ┌────────┴────────┐
              ▼                 ▼
    ┌─────────────────┐  ┌─────────────────┐
    │  SOAR/Playbook  │  │  FEEDBACK LOOP  │
    │   Generation    │  │  → AI Engine    │
    └─────────────────┘  └─────────────────┘

Actions loop back to AI engine for continuous learning

 

The final step is the analyst console where a human in the loop gets to view the enriched alerts with an AI triage summary. The summary also provides recommended actions which the analyst can review and verify. Based on the actions taken at this level, the information is looped back into the AI engine, reinforcing continuous learning so that the AI knows what actions are correct and what needs to be improved upon.
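The feedback loop can be sketched as a simple labeled record per analyst decision. The record shape here is a hypothetical illustration: the essential idea is that agreement or disagreement between the AI's recommendation and the human's action becomes a training label for the engine.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AnalystVerdict:
    """One human-in-the-loop decision, fed back to the AI engine as a label."""
    alert_id: str
    ai_recommendation: str   # e.g. "isolate_host" (example action name)
    analyst_action: str      # what the analyst actually did
    agreed: bool             # did the human confirm the AI's call?

def feedback_record(alert_id: str, recommendation: str, action: str) -> str:
    """Serialize one verdict as a JSON line for the AI engine's training queue."""
    verdict = AnalystVerdict(alert_id, recommendation, action,
                             agreed=(recommendation == action))
    return json.dumps(asdict(verdict))
```

Over time, the stream of `agreed=True` and `agreed=False` labels is what tells the engine which recommendations were correct and which need improvement.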

The Maturity Goal: Auto-Close Low Fidelity Tickets

As the model matures, the goal is to have it auto-close low-fidelity risk tickets using historical data imported into the ML engine. This increases the efficiency of the SOC because we free up SOC analysts to work on mission-critical tickets while minimizing the repetitive workloads that lead to burnout.
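A guardrail for that maturity goal might look like the sketch below. The threshold values and the `similar_closed_count` field are assumptions for illustration; the design point is that auto-close requires both a low risk score and enough historical evidence to justify the decision.

```python
def should_auto_close(ticket: dict,
                      score_threshold: int = 15,
                      min_history: int = 50) -> bool:
    """Auto-close only when the risk score is low AND the model has seen
    enough similar, previously closed tickets to be confident.

    Both thresholds are illustrative defaults, not recommendations.
    """
    return (ticket["risk_score"] < score_threshold
            and ticket.get("similar_closed_count", 0) >= min_history)
```

Requiring historical support, not just a low score, keeps the auto-close behavior conservative while the model is still learning.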

The Answer: A Resounding Yes

Tying it all back to the question: Can introducing Artificial Intelligence to the SOC process improve our MTTD and solve the skills gaps issue?

I believe that the answer is a resounding YES.

Arriving at the decision-making stage with an AI triage summary armed with recommended actions minimizes a SOC manager's reliance on highly skilled personnel.

– Vick Works

An entry-level analyst who is trained to handle the information presented in the analyst console can follow a standard operating procedure document to make informed decisions and learn on the job, securing the organization while building skillsets in house.

* * *

Vick Works is a SOC architect and AI integration specialist with experience spanning from the early days of manual Snort rule configuration to modern AI-enhanced threat detection. He specializes in designing AI pipelines that enhance SOC efficiency while preserving analyst trust and decision-making authority.