
Ollama

Get up and running with large language models.

macOS

Download

Windows

Download

Linux
curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
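As a minimal sketch of typical usage (the volume and container names here are illustrative; 11434 is Ollama's default port), a CPU-only container can be started and then used to run a model:

# start the server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# run a model inside the running container
docker exec -it ollama ollama run llama3.2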

Libraries
Community

Quickstart

To run and chat with Llama 3.2:

ollama run llama3.2

Model library

Ollama supports a list of models available on ollama.com/library

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| DeepSeek-R1        | 7B         | 4.7GB | ollama run deepseek-r1         |
| DeepSeek-R1        | 671B       | 404GB | ollama run deepseek-r1:671b    |
| Llama 3.3          | 70B        | 43GB  | ollama run llama3.3            |
| Llama 3.2          | 3B         | 2.0GB | ollama run llama3.2            |
| Llama 3.2          | 1B         | 1.3GB | ollama run llama3.2:1b         |
| Llama 3.2 Vision   | 11B        | 7.9GB | ollama run llama3.2-vision     |
| Llama 3.2 Vision   | 90B        | 55GB  | ollama run llama3.2-vision:90b |
| Llama 3.1          | 8B         | 4.7GB | ollama run llama3.1            |
| Llama 3.1          | 405B       | 231GB | ollama run llama3.1:405b       |
| Phi 4              | 14B        | 9.1GB | ollama run phi4                |
| Phi 3 Mini         | 3.8B       | 2.3GB | ollama run phi3                |
| Gemma 2            | 2B         | 1.6GB | ollama run gemma2:2b           |
| Gemma 2            | 9B         | 5.5GB | ollama run gemma2              |
| Gemma 2            | 27B        | 16GB  | ollama run gemma2:27b          |
| Mistral            | 7B         | 4.1GB | ollama run mistral             |
| Moondream 2        | 1.4B       | 829MB | ollama run moondream           |
| Neural Chat        | 7B         | 4.1GB | ollama run neural-chat         |
| Starling           | 7B         | 4.1GB | ollama run starling-lm         |
| Code Llama         | 7B         | 3.8GB | ollama run codellama           |
| Llama 2 Uncensored | 7B         | 3.8GB | ollama run llama2-uncensored   |
| LLaVA              | 7B         | 4.5GB | ollama run llava               |
| Solar              | 10.7B      | 6.1GB | ollama run solar               |

Tip: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

Customize a model

Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

  1. Create a file named Modelfile, with a FROM instruction pointing to the local filepath of the model you want to import:

     FROM ./vicuna-33b.Q4_0.gguf

  2. Create the model in Ollama:

     ollama create example -f Modelfile

  3. Run the model:

     ollama run example

Import from Safetensors

See the guide on importing models for more information.
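If the weights use an architecture Ollama supports, a Safetensors import follows the same pattern as the GGUF case; a minimal sketch, with a placeholder path:

# Modelfile (the path is a placeholder for your Safetensors directory)
FROM /path/to/safetensors/directory

# then create and run as before
ollama create example -f Modelfile
ollama run example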

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama3.2 model:

ollama pull llama3.2

Create a Modelfile:

FROM llama3.2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more information on working with a Modelfile, see the Modelfile documentation.
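Other documented parameters can be set the same way. A small illustrative sketch (the values below are arbitrary, not recommendations):

FROM llama3.2

# size of the context window, in tokens
PARAMETER num_ctx 4096

# nucleus-sampling cutoff
PARAMETER top_p 0.9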

CLI Reference

Create a model

ollama create is used to create a model from a Modelfile.

ollama create mymodel -f ./Modelfile
Pull a model
ollama pull llama3.2

This command can also be used to update a local model. Only the diff will be pulled.

Remove a model
ollama rm llama3.2
Copy a model
ollama cp llama3.2 my-model
Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
Multimodal models
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"

Output: The image features a yellow smiley face, which is likely the central focus of the picture.

Pass the prompt as an argument
ollama run llama3.2 "Summarize this file: $(cat README.md)"

Output: Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

Show model information
ollama show llama3.2
List models on your computer
ollama list
List which models are currently loaded
ollama ps
Stop a model which is currently running
ollama stop llama3.2
Start Ollama

ollama serve is used when you want to start Ollama without running the desktop application.
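
By default the server binds to 127.0.0.1 on port 11434. The OLLAMA_HOST environment variable changes the bind address, for example to listen on all interfaces (assuming that exposure is intended):

OLLAMA_HOST=0.0.0.0 ollama serve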

Building

See the developer guide

Running local builds

After building Ollama (see the developer guide above), start the server:

./ollama serve

Finally, in a separate shell, run a model:

./ollama run llama3.2

REST API

Ollama has a REST API for running and managing models.

Generate a response
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt":"Why is the sky blue?"
}'
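
Responses stream back as a series of JSON objects by default; setting "stream": false in the request returns a single reply object instead:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'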
Chat with a model
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
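
The messages array carries the conversation history, so a system message and prior turns can be included the same way (the example turns below are illustrative):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "why is the sky blue?" },
    { "role": "assistant", "content": "Mostly Rayleigh scattering of sunlight." },
    { "role": "user", "content": "and why do sunsets look red?" }
  ]
}'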

See the API documentation for all endpoints.
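For example, the endpoint behind ollama list is a plain GET on the same port:

curl http://localhost:11434/api/tags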

Community Integrations

Web & Desktop
Cloud
Terminal
Apple Vision Pro
Database
  • pgai - PostgreSQL as a vector database (Create and search embeddings from Ollama models using pgvector)
  • MindsDB (Connects Ollama models with nearly 200 data platforms and apps)
  • chromem-go with example
  • Kangaroo (AI-powered SQL client and admin tool for popular databases)
Package managers
Libraries
Mobile
  • Maid
  • Ollama App (Modern and easy-to-use multi-platform client for Ollama)
  • ConfiChat (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
Extensions & Plugins
Supported backends
  • llama.cpp project founded by Georgi Gerganov.
Observability
  • Lunary is the leading open-source LLM observability platform. It provides a variety of enterprise-grade features such as real-time analytics, prompt templates management, PII masking, and comprehensive agent tracing.
  • OpenLIT is an OpenTelemetry-native tool for monitoring Ollama Applications & GPUs using traces and metrics.
  • HoneyHive is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.
  • Langfuse is an open source LLM observability platform that enables teams to collaboratively monitor, evaluate and debug AI applications.
  • MLflow Tracing is an open source LLM observability tool with a convenient API to log and visualize traces, making it easy to debug and evaluate GenAI applications.