Message Buffer System with Redis and GPT-4 for Efficient Processing
Description
This workflow implements a message-batching buffer using Redis for temporary storage and GPT-4 for consolidated response generation. Incoming user messages are collected in a Redis list; once a configurable “inactivity” window elapses or a batch size threshold is reached, all buffered messages are sent to GPT-4 in a single prompt. The system then clears the buffer and returns the consolidated reply.
Key Features
- Redis-backed buffer to queue incoming messages per user session
- Dynamic wait time (shorter for long messages, longer for short messages)
- Batch trigger on inactivity timeout or minimum message count
- GPT-4 consolidation: merges all buffered messages into one coherent response
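For orientation, the steps below revolve around four Redis keys scoped to a single session. A minimal JavaScript sketch of the key layout is shown here; the exact role of last_seen is an assumption (a millisecond timestamp refreshed whenever a new message arrives), since the setup steps only read it:

    // Redis keys used per context_id (session), as referenced in the steps below.
    const sessionKeys = (contextId) => ({
      buffer:   `buffer_in:${contextId}`,      // list of buffered messages
      count:    `buffer_count:${contextId}`,   // number of buffered messages
      waiting:  `waiting_reply:${contextId}`,  // flag: a flush is already scheduled
      lastSeen: `last_seen:${contextId}`,      // assumed: timestamp of the most recent message
    });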
Setup Instructions
- Map Input
  - Rename node to “Extract Session & Message”
  - Assign context_id and message from the webhook or manual trigger
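If you prefer a Code node over a Set node for this mapping, a minimal sketch is below; the body.sessionId and body.text fields are hypothetical and must be adapted to whatever your trigger actually emits:

    // Hypothetical webhook payload fields; adjust to your trigger's real output.
    return [{
      json: {
        context_id: $json.body?.sessionId,  // session/conversation identifier
        message: $json.body?.text           // raw user message
      }
    }];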
- Compute Wait Time
  - Rename node to “Determine Inactivity Timeout”
  - JS Code:
      const wordCount = $json.message.split(' ').filter(w => w).length;
      return [{ json: {
        context_id: $json.context_id,
        message: $json.message,
        waitSeconds: wordCount < 5 ? 45 : 30
      }}];
- Buffer Message in Redis
  - Push the message into list buffer_in:{{$json.context_id}}
  - INCR key buffer_count:{{$json.context_id}} with TTL {{$json.waitSeconds + 60}}
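Outside n8n, the two Redis nodes in this step amount to the following operations, sketched here with the ioredis client; the TTL keeps the counter from outliving an abandoned session:

    const Redis = require('ioredis');
    const redis = new Redis();  // defaults to localhost:6379

    async function bufferMessage(contextId, message, waitSeconds) {
      await redis.rpush(`buffer_in:${contextId}`, message);              // append to the buffer list
      await redis.incr(`buffer_count:${contextId}`);                     // bump the message counter
      await redis.expire(`buffer_count:${contextId}`, waitSeconds + 60); // counter TTL = waitSeconds + 60
    }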
- Mark Waiting State
  - GET waiting_reply:{{$json.context_id}} → if null, SET it to true with TTL {{$json.waitSeconds}}
  - Rename nodes to “Check Waiting Flag” / “Set Waiting Flag”
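The GET-then-SET pair can also be expressed as one atomic Redis operation; a sketch with ioredis, where the NX option only sets the flag if it does not already exist and EX applies the TTL:

    const Redis = require('ioredis');
    const redis = new Redis();

    async function markWaiting(contextId, waitSeconds) {
      // Returns 'OK' if the flag was created (no flush scheduled yet), null otherwise.
      const created = await redis.set(`waiting_reply:${contextId}`, 'true', 'EX', waitSeconds, 'NX');
      return created === 'OK';
    }

Doing it atomically avoids the race where two parallel executions both see a missing flag and both schedule a flush.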
- Wait for Inactivity
  - Wait node: pause for {{$json.waitSeconds}} seconds
- Check Batch Trigger
  - GET keys last_seen:{{$json.context_id}} and buffer_count:{{$json.context_id}}
  - IF both conditions hold: buffer_count >= 1 and (now - last_seen) >= waitSeconds * 1000
  - Rename node to “Trigger Batch on Inactivity or Count”
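Because Redis returns string values, both keys need to be parsed to numbers before the comparison. A sketch of the check as it might run in a Code node; the lastSeen and bufferCount field names are hypothetical outputs of the preceding GET nodes, and last_seen is assumed to be a millisecond timestamp:

    // Hypothetical field names from the two Redis GET nodes.
    const lastSeen = Number($json.lastSeen);       // last_seen:<context_id>, ms epoch (assumed)
    const bufferCount = Number($json.bufferCount); // buffer_count:<context_id>
    const waitSeconds = $json.waitSeconds;

    const inactiveLongEnough = (Date.now() - lastSeen) >= waitSeconds * 1000;
    const shouldFlush = bufferCount >= 1 && inactiveLongEnough;

    return [{ json: { ...$json, shouldFlush } }];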
- Fetch & Consolidate
  - GET the entire list buffer_in:{{$json.context_id}}
  - Information Extractor → rename to “Consolidate Messages”
  - System prompt: “You are an expert at merging multiple messages into one clear paragraph without duplicates.”
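Fetching the whole buffer corresponds to an LRANGE over the full list; a sketch with ioredis that joins the buffered messages into one block of text for the consolidation prompt:

    const Redis = require('ioredis');
    const redis = new Redis();

    async function fetchBufferedText(contextId) {
      const messages = await redis.lrange(`buffer_in:${contextId}`, 0, -1);  // entire list
      return messages.join('\n');  // one text block, one original message per line
    }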
- GPT-4 Chat
  - OpenAI Chat Model node (GPT-4) generates the consolidated reply from the merged messages
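Inside n8n this is handled by the OpenAI Chat Model node; for reference, the equivalent raw API call (Node 18+ global fetch, using the consolidation system prompt from the previous step) would look roughly like this:

    async function consolidateWithGpt4(bufferedText) {
      const res = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          model: 'gpt-4',
          messages: [
            { role: 'system', content: 'You are an expert at merging multiple messages into one clear paragraph without duplicates.' },
            { role: 'user', content: bufferedText }
          ]
        })
      });
      const data = await res.json();
      return data.choices[0].message.content;  // consolidated reply
    }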
- Cleanup & Respond
  - Delete Redis keys: buffer_in:{{$json.context_id}}, waiting_reply:{{$json.context_id}}, buffer_count:{{$json.context_id}}
  - Return the consolidated reply to the user
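The cleanup amounts to a single multi-key DEL; with ioredis:

    const Redis = require('ioredis');
    const redis = new Redis();

    async function clearSession(contextId) {
      await redis.del(
        `buffer_in:${contextId}`,
        `waiting_reply:${contextId}`,
        `buffer_count:${contextId}`
      );
    }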
Customization Guidance
- Batch Size Trigger: Add an additional IF to fire when buffer_count reaches your desired batch size (see the sketch after this list).
- Timeout Policy: Adjust the word-count thresholds or replace them with character-count logic.
- Multi-Channel Support: Change the trigger from a manual test node to any webhook (e.g., chat, SMS, email).
- Error Handling: Insert a fallback branch to catch Redis timeouts or OpenAI API errors and notify users.
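For the batch-size trigger, a compact sketch of the extended condition, assuming an example BATCH_SIZE and the same hypothetical GET-node field names used in the batch-trigger step:

    // Flush when enough messages have piled up OR the user has gone quiet.
    const BATCH_SIZE = 5;  // example threshold; choose what fits your traffic
    const shouldFlush =
      Number($json.bufferCount) >= BATCH_SIZE ||
      (Date.now() - Number($json.lastSeen)) >= $json.waitSeconds * 1000;
    return [{ json: { ...$json, shouldFlush } }];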
📣 Connect with Innovatex!
🔗 Find all our links on Linktree: innovatexiot.carrd.co
🔗 Connect with me on LinkedIn: Edison Andrés García Herrera