
AI Agent Supply Chain Security: Malicious Plugins and Model Backdoors
Malicious plugins, backdoored models, and compromised dependencies threaten AI agent security. Learn to vet and isolate third-party components.

AI agents accidentally expose API keys, credentials, and PII through outputs, logs, and memory. Learn zero-trust architecture for secrets in AI systems.

AI agents with excessive tool permissions create catastrophic risks. Learn how to scope agent access using least privilege and prevent destructive actions.

Prompt injection enables attackers to hijack AI agents through malicious instructions. Learn how these attacks work and proven defenses to protect your systems.

We analyzed Open Claw, an AI agent controlling 12+ messaging platforms. Here's every vulnerability class we found and how to fix them.

Prompt injection vulnerabilities are the new SQL injection for AI apps. Learn how they work, see real-world examples, and protect your stack from attacks.

Everything you need to secure AI-generated code, vibe-coded apps, and AI agent systems. Organized by topic with direct links to deep dives, audits, and tooling.

Learn what API keys are, why they matter, and how to use them securely across platforms without exposing your app to major risks.

Weak authentication is an open door for attackers. Learn how to secure logins, sessions, and identities.

Supply chain attacks are on the rise. Learn how to prevent software and data integrity failures in your apps.

Learn how to run a free five-minute security audit on your Base44 project using Rafter. Secure your AI-generated code, fix vulnerabilities fast, and ship safely.

A security-focused debrief of OpenRouter's State of AI report—what modern AI coding tools enable, and the security risks teams must manage.