Chat with local LLMs using n8n and Ollama

Created by Mihai Farcas

This n8n workflow lets you interact with your self-hosted Large Language Models (LLMs) through a user-friendly chat interface. By connecting to Ollama, a tool for running and managing local LLMs, you can send prompts and receive AI-generated responses directly within n8n.

Use cases

  • Private AI Interactions: Ideal for scenarios where data privacy and confidentiality are important.
  • Cost-Effective LLM Usage: Avoid ongoing cloud API costs by running models on your own hardware.
  • Experimentation & Learning: A great way to explore and experiment with different LLMs in a local, controlled environment.
  • Prototyping & Development: Build and test AI-powered applications without relying on external services.

How it works

  1. When chat message received: Captures the user's input from the chat interface.
  2. Chat LLM Chain: Sends the input to the Ollama server and receives the AI-generated response (the underlying API exchange is sketched after this list).
  3. Delivers the LLM's response back to the chat interface.
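
Under the hood, the Chat LLM Chain node talks to Ollama's REST API. The TypeScript sketch below shows roughly what that exchange looks like, assuming Ollama's default address (http://localhost:11434) and a locally pulled model; the llama3 model name and the chat() helper are illustrative, not part of the workflow itself.

```typescript
// Rough sketch of the request/response cycle between n8n and Ollama.
// Assumes Ollama listens on its default port and that "llama3" has been
// pulled locally (`ollama pull llama3`); adjust both to your setup.
const OLLAMA_URL = "http://localhost:11434";

async function chat(userMessage: string): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: userMessage }],
      stream: false, // one complete reply instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = await res.json();
  // Ollama's chat endpoint returns the reply under message.content.
  return data.message.content;
}

chat("Why is the sky blue?").then(console.log);
```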

Set up steps

  • Make sure Ollama is installed and running on your machine before executing this workflow (a quick connectivity check is sketched below).
  • Edit the Ollama address in the workflow's Ollama credentials if it differs from the default (http://localhost:11434).
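
To confirm the server is reachable before wiring up the workflow, you can query Ollama's /api/tags endpoint, which lists the models available locally. A minimal check, again assuming the default address:

```typescript
// Health check: list the models the local Ollama server has pulled.
// http://localhost:11434 is Ollama's default; change it if yours differs.
async function listLocalModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}

listLocalModels().then((models) => console.log("Available models:", models));
```

If the list comes back empty, pull a model first (e.g. `ollama pull llama3`) before pointing the workflow at it.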
