Evaluate RAG Response Accuracy with OpenAI: Document Groundedness Metric


Created by

Jimleuk


This n8n template demonstrates how to calculate the evaluation metric "RAG document groundedness", which in this scenario measures whether the agent's response only provides or references information found in the documents retrieved from the vector store. For example, a response that states a figure not present in any retrieved document should be scored as ungrounded.

The scoring approach is adapted from Google's pointwise groundedness metric template: https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_groundedness
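To make the idea concrete, here is a minimal sketch, in TypeScript, of how the grading input could be assembled from the agent's response and the retrieved documents. The rubric wording, the `buildGroundednessPrompt` function and the `RetrievedDocument` shape are illustrative assumptions, not the exact prompt used in the template or in the Vertex AI documentation.

```typescript
// Hypothetical sketch: assemble a groundedness grading prompt from the
// agent's response and the documents returned by the vector store.
interface RetrievedDocument {
  pageContent: string; // text of one retrieved document chunk
}

function buildGroundednessPrompt(
  agentResponse: string,
  documents: RetrievedDocument[],
): string {
  // Wrap each retrieved chunk so the judge can cite it by index.
  const context = documents
    .map((doc, i) => `<document index="${i + 1}">\n${doc.pageContent}\n</document>`)
    .join("\n");

  return [
    "You are grading whether a RAG agent's response is grounded in the retrieved documents.",
    "A response is grounded only if every claim it makes is supported by the documents below.",
    "Give a score of 1 if the response is fully grounded and 0 if it contains any unsupported information,",
    "together with a short justification listing any unsupported claims.",
    "",
    "Retrieved documents:",
    context,
    "",
    "Agent response:",
    agentResponse,
  ].join("\n");
}
```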

How it works

  • This evaluation works best for an agent that retrieves documents from a vector store or a similar source.
  • For scoring, we collect the agent's response along with the retrieved documents and use an LLM to assess whether the former is based on the latter (see the sketch after this list).
  • A key factor is to look out for information in the response that is not mentioned in the documents.
  • A high score indicates adherence and alignment with the retrieved documents, whereas a low score could signal an inadequate prompt or model hallucination.
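The judging step itself can be sketched as a single chat completion call. The snippet below assumes the official `openai` Node SDK, a prompt built as in the earlier sketch, the `gpt-4o-mini` model name and a `{score, reasoning}` JSON reply shape; these are assumptions for illustration, and the template itself implements the same step with n8n nodes rather than custom code.

```typescript
// Minimal LLM-as-judge sketch for the groundedness score.
import OpenAI from "openai";

interface GroundednessResult {
  score: number;     // 1 = grounded, 0 = contains unsupported information
  reasoning: string; // judge's explanation, useful when investigating low scores
}

async function scoreGroundedness(prompt: string): Promise<GroundednessResult> {
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          'Respond only with a JSON object of the form {"score": 0 or 1, "reasoning": "..."}.',
      },
      { role: "user", content: prompt },
    ],
    response_format: { type: "json_object" },
  });

  // Parse the judge's JSON reply; fall back to a zero score if it is malformed.
  const parsed = JSON.parse(completion.choices[0].message.content ?? "{}");
  return {
    score: Number(parsed.score ?? 0),
    reasoning: String(parsed.reasoning ?? ""),
  };
}
```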

Requirements

  • An OpenAI account and API credential for the LLM-based scoring.
  • An existing RAG agent or workflow that retrieves documents from a vector store.
