Ollama
Ollama is an open-source tool that allows users to run large language models (LLMs) locally on their machines. It provides a simple interface to download, run, and interact with state-of-the-art language models like LLaMA, Mistral, and others.
== What is Ollama? ==

Ollama is a tool designed to bring large language models to everyone. It enables users to:

* Run powerful AI models locally on their devices.
* Interact with models through a simple command-line interface (CLI) or API.
* Avoid dependency on cloud services for processing.

Ollama supports models such as LLaMA, Mistral, Phi, GPT-J, GPT-NeoX, and more. It is particularly useful for users who prioritize privacy, control, or offline use.
---

== Key Features ==

* [[Local Execution]]: Models run entirely on your machine.
* [[Simple CLI]]: Easy-to-use commands for model interaction.
* [[Model Management]]: Download, update, and manage models seamlessly.
* [[API Integration]]: Expose model capabilities via a REST API.
* [[Cross-Platform]]: Works on Windows, macOS, and Linux.
* [[Privacy-Focused]]: No data sent to external servers.
---

== How Ollama Works ==

1. [[Model Downloading]]: Users can pull models from repositories like Hugging Face or the Ollama Hub.
2. [[Containerization]]: Models are packaged in isolated, container-style bundles for ease of use.
3. [[Inference]]: Users can query models via the CLI or API.
4. [[Quantization]]: Ollama supports quantized models for faster performance on lower-end hardware (see the example after this list).
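Quantized builds are typically published as separate tags of a model. The tag below is illustrative only; check the model's library page for the tags that actually exist:

<pre>
# Pull a 4-bit quantized variant of a model
# (example tag; verify it on the model library page before pulling)
ollama pull llama2:7b-chat-q4_0
</pre>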
---

== Installation ==

=== For Linux/macOS ===

See the [https://ollama.com Ollama website] for installers and full installation details.
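On Linux, the website provides a convenience install script; on macOS, a desktop app download is available, and recent versions can also be installed via Homebrew. A sketch, assuming the standard install paths:

<pre>
# Linux: download and run the install script from the Ollama website
# (review the script before piping it to a shell)
curl -fsSL https://ollama.com/install.sh | sh

# macOS: download the app from https://ollama.com/download,
# or use Homebrew
brew install ollama
</pre>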
== Usage ==

=== List Available Models ===

<pre>
ollama list
</pre>
=== Start the Server and Pull a Model ===

<pre>
ollama serve          # Starts the API server
ollama pull llama2    # Downloads the LLaMA 2 model
</pre>
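By default the server listens on 127.0.0.1:11434. The listen address can be changed with the `OLLAMA_HOST` environment variable (a documented setting; the address below is just an example):

<pre>
# Serve on all interfaces on a non-default port
OLLAMA_HOST=0.0.0.0:11500 ollama serve
</pre>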
=== Interact with a Model ===

<pre>
ollama run llama2
</pre>
=== Use the API ===

Send HTTP requests to `http://localhost:11434/api/generate`.
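A minimal request looks like this (assuming the server started by `ollama serve` is running on the default port):

<pre>
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
</pre>

With `"stream": false`, the server returns a single JSON object whose `response` field holds the generated text.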
---

== Supported Models ==

Ollama supports a wide range of models, including:

* LLaMA (Meta)
* Mistral (Mistral AI)
* Phi (Microsoft)
* GPT-J (EleutherAI)
* GPT-NeoX (EleutherAI)
* Falcon (TII)
* and many more.

Check the [[Ollama Hub]] for the latest list.
---

== Examples ==

=== Generate Text ===

<pre>
ollama run llama2 "Write a poem about artificial intelligence."
</pre>
=== Stream Output ===

`ollama run` streams tokens to the terminal as they are generated. The API streams as well when `"stream"` is set to `true`, returning one JSON object per line:

<pre>
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Explain quantum computing in simple terms.",
  "stream": true
}'
</pre>
=== Use with Python ===

<syntaxhighlight lang="python">
import requests

# Query the local Ollama server's generate endpoint
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "What is the meaning of life?",
        "stream": False
    }
)

# With stream=False the reply is a single JSON object
print(response.json()["response"])
</syntaxhighlight>
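Recent Ollama versions also expose a chat-style endpoint that takes a list of messages instead of a single prompt (the payload below is illustrative; see the Ollama API reference for the full schema):

<pre>
curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'
</pre>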
---

== Community and Resources ==

* [[GitHub]]: [https://github.com/jmorganca/ollama jmorganca/ollama]
* [[Documentation]]: [https://ollama.com/docs ollama.com/docs]
* [[Community]]: Join discussions on the [https://forum.ollama.com Ollama Forum]
---

== Contributing ==

Contributions are welcome! Check the [[GitHub repository]] for guidelines.
---

== License ==

Ollama is released under the [[MIT License]]. See the [https://github.com/jmorganca/ollama/blob/main/LICENSE LICENSE] file for details.
---

This page was last updated on 2025-04-08.