Chat with local LLMs using n8n and Ollama
This n8n workflow lets you interact with your self-hosted Large Language Models (LLMs) through a user-friendly chat interface. By connecting to Ollama, a tool for running and managing local LLMs, you can send prompts and receive AI-generated responses directly within n8n.
Use cases
- Private AI Interactions: Ideal for scenarios where data privacy and confidentiality are important.
- Cost-Effective LLM Usage: Avoid ongoing cloud API costs by running models on your own hardware.
- Experimentation & Learning: A great way to explore and experiment with different LLMs in a local, controlled environment.
- Prototyping & Development: Build and test AI-powered applications without relying on external services.
How it works
- When chat message received: Captures the user's input from the chat interface.
- Chat LLM Chain: Sends the input to the Ollama server and receives the AI-generated response (see the sketch after this list for the equivalent API call).
- The LLM's response is then delivered back to the chat interface.
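For reference, here is a minimal sketch of the kind of request the Chat LLM Chain issues under the hood, assuming Ollama is listening on its default local address (http://localhost:11434) and a model such as llama3 has already been pulled; the model name and prompt are placeholders:

```python
import json
import urllib.request

# Assumption: Ollama runs locally on its default port (11434)
# and the "llama3" model has already been pulled.
OLLAMA_URL = "http://localhost:11434/api/chat"

def chat(prompt: str, model: str = "llama3") -> str:
    """Send a single chat message to Ollama and return the reply text."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # With stream=False, Ollama returns the full reply under message.content.
    return body["message"]["content"]

if __name__ == "__main__":
    print(chat("Why is the sky blue?"))
```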
Set up steps
- Make sure Ollama is installed and running on your machine before executing this workflow (for example, start the server with `ollama serve` and download a model with `ollama pull llama3`).
- Edit the Ollama address in the workflow if it differs from the default (http://localhost:11434); you can verify the address with the snippet below.
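If you are unsure whether Ollama is reachable, this sketch queries its model-listing endpoint at the assumed default address; adjust OLLAMA_BASE if your instance runs elsewhere:

```python
import json
import urllib.request

# Assumption: Ollama's default local address; change if your instance differs.
OLLAMA_BASE = "http://localhost:11434"

# /api/tags lists the models available to the local Ollama instance.
with urllib.request.urlopen(f"{OLLAMA_BASE}/api/tags") as resp:
    models = json.loads(resp.read()).get("models", [])

print("Reachable. Installed models:", [m["name"] for m in models])
```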