Ollama Memory Embeddings

By vidarbrekke

Description

Configures OpenClaw's memory search to use Ollama as its embeddings server (via the OpenAI-compatible /v1/embeddings endpoint) instead of loading a local GGUF model through the built-in node-llama-cpp. Includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.

Install

npx clawhub@latest install ollama-memory-embeddings
