
Ollama

A command-line tool for running large language models locally on your own computer.


Definition

Ollama makes it easy to run open-source LLMs locally with a simple command-line interface.

Key Features:
- One-command model download
- Local inference
- OpenAI-compatible API
- Model customization
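Because Ollama exposes an OpenAI-compatible HTTP API on localhost (port 11434 by default), existing OpenAI-style client code can usually point at it unchanged. The sketch below builds an OpenAI-style chat request for a local Ollama server; the model name and prompt are illustrative, and actually sending the request assumes Ollama is installed and running.

```python
import json

# Ollama's OpenAI-compatible endpoint on a default local install.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Construct an OpenAI-style chat-completions payload for Ollama."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single response rather than a token stream
    }

payload = build_chat_request("llama3", "Explain LRU caching in one sentence.")
print(json.dumps(payload, indent=2))

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

The payload shape is the standard OpenAI chat format, which is what makes switching between a hosted API and a local model a one-line URL change in most clients.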

Supported Models:
- Llama 3
- Mistral / Mixtral
- Phi
- Gemma
- Many community models

Simple Usage:
- ollama run llama3 (start an interactive chat with Llama 3)
- ollama pull mistral (download the Mistral model)
- ollama list (show models installed locally)
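A typical local workflow with the commands above might look like this, assuming Ollama is installed and its background service is running:

```shell
# Download the Mistral model weights to the local cache
ollama pull mistral

# Start an interactive chat session with Llama 3
# (run also pulls the model automatically if it is not cached yet)
ollama run llama3

# List the models currently downloaded on this machine
ollama list

# Remove a model to reclaim disk space
ollama rm mistral
```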

Benefits:
- Privacy: data never leaves your machine
- No API costs
- Works offline
- Easy model switching

Requirements:
- macOS, Linux, or Windows
- A modern CPU; a GPU significantly speeds up inference
- RAM proportional to model size and quantization level
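The RAM requirement can be roughly estimated from the model's parameter count and quantization: each weight takes bits/8 bytes, plus some overhead for runtime buffers. This is a ballpark sizing heuristic, not an official Ollama formula; the 20% overhead factor is an assumption.

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate (in GB) for running a quantized model locally.

    bytes ~= params * bits / 8, then ~20% extra for the KV cache and
    runtime buffers (assumed overhead, not an Ollama specification).
    """
    bytes_per_param = bits_per_weight / 8
    gb = params_billions * bytes_per_param * overhead
    return round(gb, 1)

# A 7B model at 4-bit quantization needs roughly 4 GB of RAM
print(estimate_ram_gb(7, 4))   # -> 4.2
# The same model at full 16-bit precision needs roughly 17 GB
print(estimate_ram_gb(7, 16))  # -> 16.8
```

This is why quantized models dominate local use: the same 7B model that fits comfortably in 8 GB of RAM at 4-bit would not fit at 16-bit.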

Examples

Running "ollama run llama3" downloads the model if it is not already cached, then opens an interactive chat with Llama 3 entirely on your own machine.
