πŸ”πŸ¦™πŸ€– Private & Local Ollama Self-Hosted AI Assistant

Created by

Joseph LePage

Transform your local n8n instance into a powerful chat interface using any local, private Ollama model, with zero cloud dependencies ☁️. This workflow creates a structured chat experience that processes messages locally through a language model chain and returns formatted responses πŸ’¬.

How it works πŸ”„

  • πŸ’­ Chat messages trigger the workflow
  • 🧠 Messages are processed through Llama 3.2 via Ollama (or any other Ollama-compatible model)
  • πŸ“Š Responses are formatted as structured JSON
  • ⚑ Error handling ensures robust operation (see the sketch after this list)
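
The snippet below is a minimal sketch of the same flow outside n8n, assuming Ollama is running on its default port (11434) and the llama3.2 model is already pulled; it sends one chat message to Ollama's standard /api/chat endpoint, asks for JSON output, and returns a formatted error object when anything goes wrong.

```python
# Sketch only: send a message to a local Ollama model and get structured JSON back.
# Assumes Ollama's default endpoint and the llama3.2 model; adjust as needed.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

def ask_local_model(message: str) -> dict:
    payload = {
        "model": "llama3.2",
        "messages": [{"role": "user", "content": message}],
        "format": "json",   # ask the model to respond with JSON
        "stream": False,
    }
    try:
        response = requests.post(OLLAMA_URL, json=payload, timeout=120)
        response.raise_for_status()
        content = response.json()["message"]["content"]
        return json.loads(content)  # parsed structured response
    except (requests.RequestException, json.JSONDecodeError, KeyError) as err:
        # mirrors the workflow's error branch: return a formatted error object
        return {"error": str(err)}

if __name__ == "__main__":
    print(ask_local_model("Summarize what n8n is in one sentence, as JSON."))
```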

Set up steps πŸ› οΈ

  • πŸ“₯ Install n8n and Ollama
  • βš™οΈ Download the Llama 3.2 model (or another model of your choice)
  • πŸ”‘ Configure Ollama API credentials in n8n
  • ✨ Import and activate the workflow (the pre-flight check below can confirm your Ollama instance is ready)
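
As a quick sanity check before activating the workflow, the following sketch queries Ollama's /api/tags endpoint to confirm the server is reachable and the model has been pulled. The base URL and model name are assumptions; change them to match your setup.

```python
# Pre-flight check: is Ollama running locally and is the model available?
# Assumes the default endpoint http://localhost:11434 and the llama3.2 model.
import requests

OLLAMA_BASE = "http://localhost:11434"
MODEL = "llama3.2"

def ollama_ready(model: str = MODEL) -> bool:
    try:
        tags = requests.get(f"{OLLAMA_BASE}/api/tags", timeout=5)
        tags.raise_for_status()
    except requests.RequestException:
        print("Ollama is not reachable - is the server running?")
        return False
    names = [m["name"] for m in tags.json().get("models", [])]
    if not any(name.startswith(model) for name in names):
        print(f"Model '{model}' not found; run `ollama pull {model}` first.")
        return False
    print(f"Ollama is up and '{model}' is available.")
    return True

if __name__ == "__main__":
    ollama_ready()
```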

This template provides a foundation for building AI-powered chat applications while maintaining full control over your data and infrastructure πŸš€.

New to n8n?

Need help building n8n workflows? Process automation for you or your company can save time and money, and getting started is completely free!