yonatangross/orchestkit/ollama-local
Local LLM inference with Ollama. Use when setting up local models for development, CI pipelines, or cost reduction. Covers model selection, LangChain integration, and performance tuning.
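As an illustration of the LangChain integration this skill covers, below is a minimal sketch that talks to a locally running Ollama server through the langchain-ollama package. The model name, temperature, and prompt are illustrative assumptions, not values prescribed by the skill.

```python
# Minimal sketch: chatting with a local Ollama model through LangChain.
# Assumes `ollama serve` is running locally and `pip install langchain-ollama`
# has been done; the model must already be pulled (e.g. `ollama pull llama3.2`).
from langchain_ollama import ChatOllama

# Model name and temperature are illustrative choices, not skill defaults.
llm = ChatOllama(model="llama3.2", temperature=0.1)

response = llm.invoke("Summarize why local inference can cut CI costs.")
print(response.content)
```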
Risk Score: 0 out of 100
Popularity: 69 Stars, 5 Forks
Updated: Feb 9, 2026
Findings by Severity (Latest Scan)
CodeThreat AppSec: full SAST + SCA agentic security analysis for MCP servers and Skills.