Blog

Analysis on enterprise AI governance, inline policy enforcement, agentic AI security, and regulatory compliance.

22-Second Breach Windows Mean Your AI Enforcement Must Be Inline

Mandiant M-Trends 2026 reports that attack handoff time collapsed from 8 hours to 22 seconds. At that tempo, log-and-alert on AI traffic is structurally incapable of preventing damage. If your AI enforcement operates on a review cycle measured in minutes, the breach is complete before the first alert fires. AI traffic enforcement must be inline and synchronous.

ai-securityai-governanceagentic-aiincident-responsereal-time-enforcement
Read post →

Fannie Mae LL-2026-04: What the First Sector-Specific AI Governance Mandate Requires from Your Platform

On April 8, Fannie Mae issued Lender Letter LL-2026-04, a governance framework for AI and ML in mortgage origination and servicing. It takes effect August 8. Freddie Mac already enforces similar requirements since March 3. Both GSEs now require approved seller/servicers to operate an auditable AI governance program. Most lenders have no infrastructure to comply.

ai-governancecompliancefinancial-servicesauditai-securityfannie-mae
Read post →

Shadow AI to $670,000 Blind Spot

IBM's Cost of Data Breach Report studied 600 breached organizations and found that one in five experienced breaches linked to shadow AI. Those breaches cost $670,000 more on average. Customer PII exposure jumped to 65%, compared to 53% across all breaches. Intellectual property carried the highest cost per record.

ai-securityshadow-aidata-loss-preventionai-governancecompliance
Read post →

You Own the AI Liability, Not the Vendor

Last week, *The Register* reached out to the major AI application vendors—Microsoft, SAP, Oracle, Salesforce, ServiceNow, and Workday—and asked a simple question: How much liability do you accept when your AI agents make bad decisions? Microsoft and SAP declined to comment. Oracle, Salesforce, ServiceNow, and Workday didn't respond. That silence is your answer. For every CISO, CRO or head of legal deploying AI today, that silence has a direct consequence: You are the insurer of last resort for your vendor's model.

AISecurityAuditArchitectureComplianceDue DiligenceDue Care
Read post →

Securing the Inference Lifecycle

On March 18, Meta's internal AI agent exposed sensitive user and company data to engineers who shouldn't have seen it. The exposure lasted two hours. Meta classified it as Sev-1. Here's the part that should concern every security architect: the agent was fully authenticated. It had valid credentials. It passed every identity check. And it still caused a data breach. This is the post-authentication gap.

AISecurityAuditComplianceEU AI Act 2026SEC
Read post →

Due Diligence is Not Due Care: The AI Compliance Gap

Last year, researchers disclosed EchoLeak (CVE-2025-32711), a zero-click Indirect Prompt Injection in Microsoft 365 Copilot. A poisoned email forced the AI assistant to silently exfiltrate sensitive business data to an external URL. The user never saw it, never clicked a link, and never authorized the transfer, but the data left anyway. Most leaders I talk to think they are "covered" because their LLM provider is SOC2 compliant or has a signed DPA. However, in the eyes of the law, the liability remains with the deployer

AISecurityAuditArchitectureComplianceDue DiligenceDue Care
Read post →

Architecting AI Agent Security to Stay Compliant with NIST's Identity and Authorization Framework

NIST's comment window on AI agent identity and authorization closes April 2. If you are deploying AI agents and haven't read the framework, this is the post. Not because the comment window matters to your engineering roadmap, but because NIST just put formal language around a structural gap that most organizations are already sitting in.

AI SecurityAgentic AICybersecurityLLMAI GovernanceNISTIdentity and Authorization
Read post →

Model Guardrails Are Not a Security Control

Stanford's Trustworthy AI research has demonstrated that model-level guardrails can be materially weakened under targeted fine-tuning and adversarial pressure. In controlled evaluations summarized by the AIUC-1 Consortium briefing, (developed with CISOs from Confluent, Elastic, UiPath, and Deutsche Börse alongside researchers from MIT Sloan, Scale AI, and Databricks), refusal behaviors were significantly degraded once safety patterns were shifted.

AI SecurityAgentic AICybersecurityLLMAI GovernanceModel Guardrails
Read post →

Detecting Model Distillation Attacks in Your AI Traffic

On February 23rd, [Anthropic published](https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks) something the industry had suspected but hadn't seen documented at this scale. Three Chinese AI labs (DeepSeek, Moonshot AI, and MiniMax) ran coordinated campaigns against the Claude API. They generated over 16 million exchanges through approximately 24,000 fraudulent accounts. The goal was not to steal user data but to steal the model itself.

AISecurityDistillationDeepSeekMiniMaxMoonshot AIAnthropicIP TheftAPI Security
Read post →

Why Connector Authorization Is Not Enough to Secure an AI Agent (SilentBridge)

Aurascape's research team this week published SilentBridge, a class of indirect prompt injection attacks against Meta's Manus AI agent. The attack exfiltrated email, extracted secrets, achieved root-level code execution, and exposed cross-tenant media files via CDN — all three variants scored CVSS 9.8 (Critical): network-exploitable, no privileges required, no user interaction. The user had authorized Gmail and the agent used it exactly as permitted. Vulnerabilities discovered September 2025, Manus mitigated November 2025, coordinated disclosure February 2026.

AISecurityPrompt InjectionAgentic AIAuthorizationZero Trust
Read post →

Making Vector Search Identity-Aware in RAG Systems

Most RAG stacks retrieve top-K chunks first and enforce permissions later in the app. At scale, this breaks the trust boundary and degrades retrieval quality. When users only have access to a subset of the corpus, post-filtering collapses top-K into a tiny context window, even when many relevant authorized chunks exist deeper in the index. The fix is to make retrieval identity-aware so authorization becomes part of ranking. In the blog, I walk through how to design identity-aware retrieval so access control is enforced during search, not after it.

RAGABACDatabase-NativeZero-TrustSecurityVector SearchData Security
Read post →

Managing the Agentic Blast Radius in Multi-Agent Systems(OWASP 2026)

The most complex risks in the 2026 OWASP list are not about a single bad action, but about how agents exist over time, interact with each other, and propagate behavior across systems. Unchecked blast radius occurs when **probabilistic agent behavior becomes persistent, trusted, and shared across systems**. This post continues from my previous two pieces on [Loss of Intent as a Failure Mode in OWASP Agentic AI Risks](/blog/loss-of-intent-as-a-failure-mode-in-owasp-agentic-ai-risks-2026) (Part 1) and [Identity and Execution Risks in Agentic AI – The Capability Gap](/blog/identity-and-execution-risks-in-agentic-ai-the-capability-gap-owasp-2026) (Part 2) and is the final part of the series.

AISecurityAgenticAIAIEngineeringCybersecurityOWASP
Read post →

Identity and Execution Risks in Agentic AI - The Capability Gap (OWASP 2026)

When moving from intent to execution, the security model for Agentic AI shifts from intent interpretation to traditional systems hardening. Once an LLM can invoke tools and assume identities, the capabilities we grant an agent become the primary attack surface. This post continues from my first piece on [Loss of Intent as a Failure Mode in OWASP's Agentic AI Risks](/blog/loss-of-intent-as-a-failure-mode-in-owasp-agentic-ai-risks-2026). Here, I focus on the second bucket in the [OWASP Top 10 for Agentic Applications 2026](https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/): agents with too much power.

AISecurityAgenticAIAIEngineeringCybersecurityOWASP
Read post →

Loss of Intent as a Failure Mode in OWASPs Agentic AI Risks (2026)

OWASP recently released the Top 10 Vulnerabilities for Agentic Applications (2026). One thing is clear that the agentic systems fail differently than traditional applications or simple LLM integrations. The failure mode is not bad output, but the system taking a valid action for the wrong reason. In this post, I break down three OWASP vulnerabilities that stem from loss of intent, explain how they show up in real systems, and outline some mitigations.

AISecurityAgenticAIAIEngineeringCybersecurityOWASP
Read post →

Unbounded Agent Execution can result in Denial-of-Service Attacks

Agents often appear structured at the planning level, but at runtime their execution becomes increasingly non-deterministic once tools, retries, partial failures, and replanning are introduced. This can easily become an economic denial of service (EDoS) attack.

AISecurityAgenticAIDistributedSystemsCloudSecuritySystemsDesignAIInfrastructureCyberSecurity
Read post →

Prompt Injection in CI/CD Pipelines – GitHub Actions Issue (PromptPwnd)

Aikido Security recently uncovered a new class of CI/CD vulnerabilities they call **PromptPwnd**. The gist of the issue is simple: steps in the CI/CD workflows (e.g. GitHub Actions and GitLab pipelines) are increasingly using AI agents like Gemini CLI, Claude Code and OpenAI Codex to triage issues, label pull requests or generate summaries. These workflows sometimes embed untrusted user content—issue titles, PR descriptions or commit messages—directly into the prompts fed to the model. In this blog I will explore the core of the issue and some potential solutions.

AIAnigravityLLM SecuritySecurityAI SecurityPrompt InjectionDev Sec Ops 'CI'CDCI CD 'GitHub Actions' 'Supply Chain Security'
Read post →

Reducing AI Agent Vulnerability to Hidden Inputs (Learning from the Antigravity Incident)

The core of the issue with the Antigravity failure was that the AI assistant treated data as instructions, then executed those instructions through its tool layer with no human in the loop. This can happen not just in IDEs but agents in general.In this blog, I will demonstrate the failure using a local model and some scripting and will present good practices on how to prevent them.

AIAntigravityLLM SecuritySecurityAI SecurityPrompt Injection
Read post →

AI Security is a Workflow Problem

From a development perspective, most AI security problems come from the workflow around the model, not the model itself. The issues usually show up in the inputs, the data paths, and the decisions that run without any guardrails.

AISecurityWorkflowLLMAI Security
Read post →

Securing AI adoption

AI adoption is accelerating across industries, transforming how businesses operate and innovate. As companies embrace AI, it is crucial to understand the security and privacy implications. This article will explore security considerations when building custom AI solutions and integrating AI into business operations.

Artificial IntelligenceAIMachine LearningMLLLMSecurityShift LeftOWASP
Read post →