Running the Latest and Greatest LLM Locally
Step-by-step guide for installing and configuring local LLMs—including Ollama, LiteLLM, Fabric, and Perplexica—for modern AI workflows and toolchains.
- Path: issue-resolution/Running the latest and greatest LLM locally.md
- Authors: Michael Staton
- Augmented with: Windsurf Cascade on GPT 4.1
- Tags: Local-LLM · AI-Toolkit · Home-Labs
[[MSTY]]
[[Tooling/AI-Toolkit/AI Interfaces/OLlama]]
Install [[Tooling/AI-Toolkit/AI Interfaces/OLlama]]:

```sh
curl -fsSL https://ollama.com/install.sh | sh
```
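Once the installer finishes, a quick smoke test confirms the daemon and a model are working; the model tag `llama3.2` below is only an example, substitute whichever model you intend to run:

```shell
# Pull a model from the Ollama registry (llama3.2 is an example tag)
ollama pull llama3.2

# One-off prompt to confirm the model loads and responds
ollama run llama3.2 "Say hello in one sentence."

# List the models installed locally
ollama list
```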
Install [[LiteLLM]]
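LiteLLM ships as a Python package. A minimal sketch of installing it and fronting the local Ollama gateway with its OpenAI-compatible proxy (the `llama3.2` tag is an assumption, use whatever model you pulled):

```shell
# Install LiteLLM with the proxy extras
pip install 'litellm[proxy]'

# Start an OpenAI-compatible proxy in front of the local Ollama model
litellm --model ollama/llama3.2
```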
[[Fabric]]
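[[Fabric]] (the prompt-pattern CLI) is distributed as a Go module; a sketch assuming a Go toolchain is on the path:

```shell
# Install the fabric CLI from upstream
go install github.com/danielmiessler/fabric@latest

# Interactive setup: select Ollama as the local vendor when prompted
fabric --setup

# Example: summarize piped text with a built-in pattern
echo "Ollama serves models locally on port 11434." | fabric --pattern summarize
```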
[[Perplexica]]
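[[Perplexica]] is typically run from its repository with Docker Compose; a sketch under the assumption that the upstream repo URL and sample-config filename are as published by the project:

```shell
# Clone the Perplexica repo and enter it
git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica

# Copy the sample config, then edit it to point at the local Ollama endpoint
cp sample.config.toml config.toml

# Bring the stack up in the background
docker compose up -d
```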
Depends on [[Tooling/AI-Toolkit/AI Interfaces/OLlama]] for the [[Local LLM|local gateway]], but also connects to [[Anthropic]], [[Tooling/AI-Toolkit/Model Producers/Groq|Groq]], and
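The mixed local/hosted setup described above can be sketched as a single LiteLLM proxy config that routes to the Ollama gateway alongside Anthropic and Groq; the model names, aliases, and port are illustrative assumptions, and the hosted entries expect `ANTHROPIC_API_KEY` / `GROQ_API_KEY` in the environment:

```shell
# Write a LiteLLM routing table spanning local and hosted providers
cat > litellm-config.yaml <<'EOF'
model_list:
  - model_name: local-llama          # served by the local Ollama gateway
    litellm_params:
      model: ollama/llama3.2
      api_base: http://localhost:11434
  - model_name: claude               # hosted; reads ANTHROPIC_API_KEY from env
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
  - model_name: groq-llama           # hosted; reads GROQ_API_KEY from env
    litellm_params:
      model: groq/llama-3.1-8b-instant
EOF

# Start the proxy with that routing table
litellm --config litellm-config.yaml
```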