Skip to main content

These 30 handpicked collections of valuable, yet completely free, resources are your ultimate guide to staying ahead in this exciting new world. Go ahead, dive in… you’ll thank me later!

  • Wiz Singularity Supply Chain Attack Analysis By Rami McCarthy

This resource contains a comprehensive postmortem of the s1ngularity supply chain attack that exploited npm publishing tokens to leak thousands of corporate secrets. This analysis reveals how the attack specifically targeted AI CLI tools to identify and exfiltrate sensitive files, affecting over 1,700 users with more than 2,000 verified secrets exposed. Dive deeper here

  • LLM-Driven Multi-Agent Cyber Threat Detection Pipeline By Ibrahim H. Koyuncu

Ever wondered how raw logs can be turned into actionable security decisions? In this piece, Ibrahim shows how LLM-powered multi-agent systems and LangGraph make it possible. A hands-on look at smarter detection. The article explores different agent architectures, orchestration models, and provides detailed insights into using LangGraph for building intelligent security workflows. Check this out

  • Block’s Democratized Detection Engineering with Goose & MCP By  Tomasz Tchorz and Glenn Edwards

In this article, Thomas and Glenn showcases how their open-source AI agent Goose, combined with Panther MCP, transforms detection engineering from a coding-heavy expert process into an accessible natural language workflow. Their approach enables analysts across the organization to create sophisticated detection rules by simply describing threats in plain English. Read more here

  • AI SOC LLM Benchmark Initiative By Igor Kozlov

Igor introduces the first comprehensive benchmarking framework specifically designed for evaluating LLM performance in Security Operations Center environments, addressing the critical need for standardized AI assessment in cybersecurity contexts. Explore this

  • Google SecOps AI Runbooks with Model Context Protocol By DanDye

In this resource, DanDye  demonstrates how Model Context Protocol servers enable LLMs to directly interact with SecOps, SOAR, and threat intelligence platforms, transforming traditional runbooks into AI-driven workflows. The implementation showcases personas for different SOC roles and automated incident response processes that leverage real security data. Learn more

  • Fundamental Limits of LLMs in Security Operations By Hamza Sayah

Hamza Sayah provides a critical examination of why LLMs struggle with systematic coverage in security investigations, revealing the architectural limitations of probabilistic text generation for exhaustive threat hunting. The analysis demonstrates how sampling-based approaches cannot guarantee comprehensive security coverage, even with advanced reasoning capabilities. Dive in here.

  • Interactive AI Feedback Loops for Cybersecurity Automation

This comprehensive guide explores the DSAEM Loop framework (Detection Engineering → SOPs → Automation & AI Agents → Threat Emulation → Metrics) and addresses the critical challenges of implementing AI SOC platforms. You’ll find practical advice for evaluating vendors too. Read here.

  • AI SecOps and the SOCless Future Oliver Rochford

Oliver Rochford explores the evolution toward autonomous security operations and the concept of “SOCless” architectures powered by AI agents, examining how current AI SecOps implementations are laying the groundwork for fully automated threat detection and response. Check it out.

  • The Cursor Moment for Security Operations By Jack Naglieri

This piece examines how Model Context Protocol is creating a “Cursor moment” for security operations, enabling natural language interaction with SIEMs and security tools similar to how Cursor transformed software development. Here, Jack explores the shift from manual detection engineering to AI-augmented strategic thinking. Explore this.

This resource demonstrates how fine-tuning a small language model (Llama 3.2 1B) for secrets detection achieves 86% precision and 82% recall while running efficiently on CPU hardware, addressing the scalability and privacy limitations of large language models. Their approach uses multi-agent data labeling and innovative training techniques to create production-ready security tools. Read more here

  • BSidesSF 2025: AI’s Bitter Lesson for SOCs By Jackie Bow and Peter Sanford

In this video, Anthropic’s Jackie Bow and Peter Sanford present their research on building AI-assisted detection and response systems, demonstrating how letting AI tackle security problems its own way (rather than imitating human workflows) leads to better outcomes. They share practical insights from building “Clue,” their AI investigation platform. Watch this

  • Cybersecurity AI (CAI) Framework By Alias Robotics

This resource introduces CAI, an open-source framework with 300+ AI models ready for offensive and defensive cybersecurity. Built with guardrails and human-in-the-loop controls, it’s designed for serious practitioners. Explore this.

  • The Single Pane of Glass Vision: MCP, A2A, and AG-UI By Filip Stojkovski

This analysis examines how emerging technologies, Model Context Protocol, Agent-to-Agent communication, and Agentic User Interfaces, could finally deliver the long-promised “single pane of glass” for security operations through intelligent integration rather than simple dashboard aggregation. Check this out.

  • AI Design Patterns for Security By Dylan Williams

Dylan Williams presents three core AI design patterns for building reliable security systems: memory streams (separating hypotheses, evidence, and decisions), structured outputs (using JSON/YAML formats), and role specialization (dividing tasks among specialized agents). A must-read for builders. Dive deeper here.

  • Boost SecOps With AI Runbooks, Claude Agents And MCP

This demo shows practical implementation of AI runbooks using Claude Code subagents and MCP servers for automated incident response, showing how AI agents can collaborate across different security roles (SOC analysts, threat hunters, CTI researchers) to investigate cases. Watch this.

  • Securing RAG Applications: A Comprehensive Guide By Axel Sirota

How do you secure Retrieval-Augmented Generation apps? Axel Sirota delivers a comprehensive security framework for Retrieval-Augmented Generation applications, covering data poisoning prevention, adversarial attack mitigation, and compliance with GDPR, CCPA, and SOC 2 requirements. Read more here

  • Managing Risks from Internal AI Systems

This comprehensive report examines the unique security and safety risks posed by internal AI systems developed by leading companies before public release, addressing vulnerabilities to misuse, theft, and sabotage by sophisticated threat actors. The analysis includes policy recommendations for government oversight and industry security measures. Read here

  • Claude Found the APT! – Splunk MCP Demo By Haggis

Security researcher MHaggis demonstrates how Claude AI connected to Splunk via MCP servers can autonomously investigate security incidents, discovering and analyzing a complete password spraying attack from initial detection tuning to root cause analysis. The demo shows AI conducting end-to-end threat investigation with 100% accuracy. Check it out.

  • Context Engineering for AI Agents: Lessons from Building Manus By Yichao ‘Peak’ Ji

Manus shares hard-earned lessons from building production AI agents, revealing how context engineering, not just prompt engineering, determines agent reliability and performance. The article covers KV-cache optimization, attention manipulation techniques, and practical strategies for scaling agent deployments. Learn more.

  • AI Security Shared Responsibility Model By Mike Privette

Mike Privette’s repository outlines a comprehensive shared responsibility model for AI security, defining clear boundaries between cloud providers, AI service vendors, and organizations in securing AI deployments. The framework addresses the complex ecosystem of AI security responsibilities in modern deployments. Explore this resource. Explore this

  • Detection-as-Code with MCP Servers for Threat Hunting By Sumitpatel

This guide walks you through creating Detection-as-Code pipelines using GitHub Actions and MCP servers. Sumitpatel shows how AI-enhanced workflows make testing and versioning security rules seamless. Read more here.

  • Building an Open Ecosystem for AI-Driven Security with MCP By Raybrian

Raybrian explores how Model Context Protocol creates an open ecosystem for AI-driven security tools, enabling standardized integration between AI systems and security platforms while fostering innovation through interoperability. Discover more.

  • AI SOC Shift Left and Shift Right Framework By Filip Stojkovski

This analysis introduces the SecOps AI Shift Map framework for evaluating AI implementations across the incident response lifecycle, explaining why most vendors started with investigations (the “sweet spot”) and how mature implementations now shift left to detections and right to response. Check it out.

  • OWASP LLM Prompt Injection Prevention Cheat Sheet

This piece contains a comprehensive technical guide for preventing prompt injection attacks against Large Language Models, covering direct and indirect injection techniques, encoding obfuscation, typoglycemia attacks, and defensive strategies including structured prompts and output validation. Read here

  • SentinelOne AI Vulnerability Management Guide

SentinelOne provides a comprehensive guide to AI vulnerability management, covering both using AI for vulnerability detection and securing AI systems themselves. The article addresses data poisoning, adversarial attacks, model theft, and integration with XDR platforms for enhanced protection. Learn more

  • AI Agents for Network Monitoring and Security By Jesse Anglen

Jesse Anglen explores how AI agents transform network monitoring through real-time performance analysis, predictive fault detection, and automated threat identification. The comprehensive guide covers machine learning fundamentals, deep learning architectures, and practical implementation strategies for intelligent network management. Explore this.

  • AI as the Foundation of Modern SOC By Michal Mamica

In this article, Michał makes the case for AI as the backbone of future SOCs. He covers how it’s reshaping detection, response, and even the SOC workforce itself. A thought-provoking piece.
 Check it out.

  • OWASP GenAI Red Teaming Guide By OWASP

This guide is a critical resource for GenAI Red Teaming, providing actionable insights for cybersecurity professionals, AI/ML engineers, and business leaders on testing AI systems for vulnerabilities. It emphasizes a holistic approach covering model evaluation, implementation testing, infrastructure assessment, and runtime behavior analysis.
 Read the guide.

  • Securing Agentic AI Framework By Gal Moyal

Gal presents a practical enterprise framework for managing the unique risks introduced by autonomous AI agents, covering visibility, risk prioritization, governance processes, and runtime guardrails. The framework addresses the shift from passive AI tools to active agents that can autonomously perform business-critical tasks.
 Read more here.

  • Microsoft Security for AI Assessment Tool By Microsoft

Curious how secure your AI systems really are? This interactive tool from Microsoft helps you assess vulnerabilities and get tailored recommendations. It’s like a health check.
Take the assessment