Build Custom AI Agent with LangChain & Gemini (Self-Hosted)
Overview
This workflow leverages the LangChain Code node to implement a fully customizable conversational agent. It is ideal for users who need granular control over their agent's prompts while avoiding the extra token consumption that n8n's built-in Conversation Agent reserves for tool-calling functionality.
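To give a feel for what the LangChain Code node is doing, here is a minimal standalone sketch of the same pattern in plain LangChain.js: a custom prompt with `{chat_history}` and `{input}` placeholders, a Gemini chat model, and a windowed conversation memory. The model name, window size, and API-key environment variable are assumptions for illustration; in the workflow itself the model comes from the node connected to the language model input rather than being constructed in code.

```javascript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { PromptTemplate } from "@langchain/core/prompts";
import { BufferWindowMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

// Custom prompt: {chat_history} and {input} must remain in the template
// so LangChain can inject past turns and the latest user message.
const prompt = PromptTemplate.fromTemplate(`You are a concise, friendly assistant.

Conversation so far:
{chat_history}

Human: {input}
Assistant:`);

// Gemini chat model (model name and env var are illustrative assumptions).
const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-flash",
  apiKey: process.env.GOOGLE_API_KEY,
});

// Windowed memory: only the last k exchanges are kept in {chat_history}.
const memory = new BufferWindowMemory({ k: 5, memoryKey: "chat_history" });

const chain = new ConversationChain({ llm: model, prompt, memory });

const reply = await chain.invoke({ input: "What can you help me with?" });
console.log(reply.response);
```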
Setup Instructions
- Configure Gemini Credentials: Set up your Google Gemini API key (you can create one in Google AI Studio if needed). Alternatively, you may use another AI provider's chat model node.
- Interaction Methods:
  - Test directly in the workflow editor using the "Chat" button
  - Activate the workflow and access the chat interface via the URL provided by the `When Chat Message Received` node
Customization Options
- Interface Settings: Configure chat UI elements (e.g., title) in the `When Chat Message Received` node
- Prompt Engineering:
  - Define the agent's personality and conversation structure in the `Construct & Execute LLM Prompt` node's template variable (see the sketch after this list)
  - ⚠️ The template must preserve the `{chat_history}` and `{input}` placeholders for proper LangChain operation
- Model Selection: Swap language models through the `language model` input of the `Construct & Execute LLM Prompt` node
- Memory Control: Adjust the conversation history length in the `Store Conversation History` node
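As a rough sketch of where these settings live in code inside the LangChain Code node (variable names and values are illustrative assumptions, not the workflow's exact contents; the `getInputConnectionData` call follows the pattern shown in the LangChain Code node docs):

```javascript
// Illustrative sketch of the customization points inside the LangChain Code node.
const { BufferWindowMemory } = require('langchain/memory');

// Prompt Engineering: the template variable must keep both placeholders.
const template = `You are a helpful support agent. Answer briefly and politely.

{chat_history}
Human: {input}
Assistant:`;

// Model Selection: the chat model is supplied by whatever node is plugged
// into the "language model" input (Gemini here, but any chat model node works).
const llm = await this.getInputConnectionData('ai_languageModel', 0);

// Memory Control: a smaller k keeps fewer past turns in {chat_history}.
const memory = new BufferWindowMemory({ k: 10, memoryKey: 'chat_history' });
```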
Requirements:
⚠️ This workflow uses the LangChain Code node, which only works on self-hosted n8n.
(Refer to the LangChain Code node documentation.)