davila7/claude-code-templates/transformer-lens-interpretability
Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.
Risk Score: 0 out of 100