davila7/claude-code-templates/knowledge-distillation
Compress large language models by distilling knowledge from a teacher model into a smaller student model. Use when deploying smaller models that retain most of the teacher's performance, transferring GPT-4 capabilities to open-source models, or reducing inference costs. Covers temperature scaling, soft targets, reverse KL divergence, logit distillation, and MiniLLM training strategies.
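Since the skill's own templates aren't shown here, the sketch below illustrates two of the listed objectives in PyTorch: temperature-scaled soft-target (logit) distillation and a reverse KL divergence loss. Function names, default temperature and alpha values, and the token-level treatment of reverse KLD are illustrative assumptions, not the skill's actual code; MiniLLM in particular optimizes a sequence-level reverse KL with policy-gradient methods rather than this per-token form.

```python
import torch
import torch.nn.functional as F


def soft_target_kd_loss(
    student_logits: torch.Tensor,
    teacher_logits: torch.Tensor,
    labels: torch.Tensor,
    temperature: float = 2.0,  # assumed default; tune per task
    alpha: float = 0.5,        # assumed mixing weight between KD and CE
) -> torch.Tensor:
    """Classic logit distillation: forward KL(teacher || student) on
    temperature-softened distributions, mixed with the hard-label loss."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaling by T^2 keeps the KD gradient magnitude comparable to the
    # cross-entropy term as the temperature changes (Hinton et al., 2015).
    kd = F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature**2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce


def reverse_kld_loss(
    student_logits: torch.Tensor,
    teacher_logits: torch.Tensor,
    temperature: float = 1.0,
) -> torch.Tensor:
    """Token-level reverse KL(student || teacher): mode-seeking, so the
    student avoids placing mass where the teacher assigns low probability.
    A simplified stand-in for MiniLLM's sequence-level objective."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_log_probs = F.log_softmax(teacher_logits / temperature, dim=-1)
    student_probs = student_log_probs.exp()
    return (student_probs * (student_log_probs - teacher_log_probs)).sum(dim=-1).mean()


# Usage with dummy logits; in practice the teacher runs frozen (no grad)
# and the student's logits come from its forward pass on the same batch.
batch, vocab = 4, 32000
student_logits = torch.randn(batch, vocab, requires_grad=True)
with torch.no_grad():
    teacher_logits = torch.randn(batch, vocab)
labels = torch.randint(0, vocab, (batch,))
loss = soft_target_kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The choice between the two losses is the main design decision the skill names: forward KL (soft targets) is mass-covering and spreads the student over all teacher modes, while reverse KL concentrates the student on the teacher's high-probability regions, which MiniLLM argues suits small generative students better.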