AI Composition Analysis
Open security intelligence for the AI component ecosystem. We index, scan, and publish risk profiles for MCP servers and Agent Skills so you can make informed decisions before adoption.
AI agents rely on MCP servers and Agent Skills from public registries. Most of these components have no standardized security assessment — and many introduce real risk.
Exposed tools can leak secrets, execute arbitrary code, or exfiltrate data. Registry metadata rarely includes security context.
There is no standard way to assess risk before adding an MCP server to your stack.
SKILL.md and tool definitions can contain prompt injection, data exfiltration, or unsafe syscalls. Skills are distributed across GitHub, SkillsMP, Tessl — with no central security view.
Duplicate skill names can confuse or deceive users about what they are actually running.
We run a continuous pipeline: ingest from 10+ registries, run multi-phase security scans, and publish risk profiles with findings mapped to industry standards.
Findings are categorized and linked to established security frameworks so you can prioritize and remediate with industry context.
Agentic security inflation means too much noise and too little actionable intelligence. AICA provides open, standardized vulnerability data for the AI component ecosystem — comparable to Sonatype OSS Index or Socket.dev, but for MCP servers and Agent Skills. So developers can see risk before they adopt.
CodeThreat AppSec
Full SAST + SCA agentic security analysis for MCP servers and Skills.