Evaluation metric example: Categorization
AI evaluation in n8n
This is a template for n8n's evaluation feature.
Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset of different inputs through the workflow.
By calculating a metric (score) for each input, you can see where the workflow performs well and where it doesn't.
How it works
This template shows how to calculate a workflow evaluation metric: whether a category matches the expected one.
The workflow takes support tickets and generates a category and priority, which are then compared with the correct answers in the dataset.
- We use an evaluation trigger to read in our dataset
- It is wired up in parallel with the regular trigger so that the workflow can be started from either one
- Once the category is generated by the agent, we check whether it matches the expected one in the dataset
- Finally we pass this information back to n8n as a metric
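The category-match check described above can be sketched as a simple comparison, for example in an n8n Code node. This is a minimal sketch, not the template's actual implementation; the field names (`category`, `expected_category`) and the metric name are assumptions for illustration:

```javascript
// Hypothetical sketch of the category-match metric for one dataset row.
// Field names ("category", "expected_category") are assumptions, not
// taken from the actual template.
function categoryMatchMetric(item) {
  // Normalize both values so trivial casing/whitespace differences
  // don't count as mismatches.
  const actual = String(item.category || "").trim().toLowerCase();
  const expected = String(item.expected_category || "").trim().toLowerCase();
  // Score 1 when the generated category matches the expected one, else 0.
  return { category_match: actual === expected ? 1 : 0 };
}

// Usage example
console.log(categoryMatchMetric({ category: "Billing", expected_category: "billing" }));
// → { category_match: 1 }
```

Returning 0 or 1 per row means that averaging the metric across the dataset gives the overall match rate for the workflow.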