Evaluate AI Agent Response Relevance using OpenAI and Cosine Similarity


Created by

Jimleuk

Last edited 39 days ago

This n8n template demonstrates how to calculate the evaluation metric "Relevance", which, in this scenario, measures how relevant the agent's response is to the user's question.

The scoring approach is adapted from RAGAS, an open-source evaluations project; the source is available at https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_relevance.py

How it works

  • This evaluation works best for Q&A agents.
  • For our scoring, we analyse the agent's response and ask another AI to generate a question from it. This generated question is then compared to the original question using cosine similarity.
  • A high score indicates the response was relevant and the agent successfully answered the question, whereas a low score suggests the agent may have added too much irrelevant information, gone off-script, or hallucinated.
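The scoring step above can be sketched in a few lines of Python. This is a minimal illustration, not the template's actual implementation: it assumes the original question and the AI-generated questions have already been turned into embedding vectors (in the workflow these would come from an embedding model such as OpenAI's), and the function names and toy vectors here are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def relevance_score(original_embedding, generated_embeddings):
    """Mean cosine similarity between the original question's embedding
    and the embeddings of questions generated from the agent's answer,
    mirroring the averaging used by the RAGAS answer-relevance metric."""
    sims = [cosine_similarity(original_embedding, g) for g in generated_embeddings]
    return sum(sims) / len(sims)

# Toy 3-dimensional vectors stand in for real (much larger) embeddings.
original = [0.9, 0.1, 0.0]
generated = [[0.88, 0.12, 0.01], [0.40, 0.80, 0.20]]
print(round(relevance_score(original, generated), 3))
```

A score near 1.0 means the generated questions closely match the original, i.e. the answer stayed on topic; scores near 0 indicate the answer drifted away from the question.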

Requirements
