RAG pipeline evaluation with Ragas and Langfuse

Ragas and Langfuse are a powerful combination for evaluating and monitoring your Retrieval-Augmented Generation (RAG) pipelines.

What is Langfuse?

Langfuse (GitHub) is a platform for LLM tracing, prompt management, and evaluation. It lets you attach scores to traces and spans, giving you insight into how your RAG pipeline performs in production. Langfuse supports various integrations, including OpenAI, LangChain, and more.
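
As a minimal sketch of tracing a RAG call, assuming the Langfuse Python SDK v2 and its `@observe` decorator (with `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, and `LANGFUSE_HOST` set in the environment); the `retrieve` and `rag_pipeline` functions here are hypothetical stand-ins for your own pipeline:

```python
from langfuse.decorators import observe

@observe()  # recorded as a span nested under the trace below
def retrieve(question: str) -> list[str]:
    # Hypothetical retriever; swap in your vector store lookup.
    return ["Langfuse is a platform for LLM tracing and evaluation."]

@observe()  # creates the root trace for the whole RAG call
def rag_pipeline(question: str) -> str:
    contexts = retrieve(question)
    # Call your LLM here; this stub just echoes the retrieved context.
    return f"Based on the docs: {contexts[0]}"

rag_pipeline("What is Langfuse?")
```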

What is Ragas?

Ragas is an open-source tool designed for model-based evaluation of RAG pipelines. It performs reference-free evaluations, meaning you don’t need ground-truth data to assess your system’s performance. Ragas can evaluate various aspects of your RAG pipeline, such as faithfulness, answer relevancy, and context precision.
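
As a minimal sketch, assuming Ragas' classic `evaluate` API and column schema (`question`, `answer`, `contexts`); the sample row is made up, and an LLM key (e.g. `OPENAI_API_KEY`) must be configured since the metrics are model-based:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# One evaluation row: the user question, the pipeline's answer,
# and the retrieved contexts the answer should be grounded in.
data = {
    "question": ["What is Langfuse?"],
    "answer": ["Langfuse is a platform for LLM tracing and evaluation."],
    "contexts": [["Langfuse is an open-source LLM engineering platform."]],
}

result = evaluate(
    Dataset.from_dict(data),
    metrics=[faithfulness, answer_relevancy],  # reference-free metrics
)
print(result)
```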

Common use cases for Ragas include:

  • Evaluate RAG Pipelines: Generate synthetic test sets and assess pipeline performance using metrics like faithfulness and answer relevancy.
  • Custom Prompt Adaptation: Write and optimize custom prompts with automatic adaptation for improved retrieval and generation.
  • CI Pipeline Integration: Run Ragas inside CI pipelines with Pytest for automated evaluation and monitoring (see the sketch after this list).
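
Here is a minimal sketch of the CI pattern: a Pytest test that fails the build when a Ragas score drops below a threshold you choose. The test data and the 0.8 threshold are hypothetical; in practice you would load your own (possibly synthetic) test set:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness

def test_faithfulness_above_threshold():
    # Hypothetical fixed test set; replace with your real evaluation data.
    data = {
        "question": ["What is Ragas?"],
        "answer": ["Ragas is an open-source tool for RAG evaluation."],
        "contexts": [["Ragas evaluates RAG pipelines without ground truth."]],
    }
    result = evaluate(Dataset.from_dict(data), metrics=[faithfulness])

    # Aggregate the per-sample scores and gate the build on the mean.
    score = result.to_pandas()["faithfulness"].mean()
    assert score >= 0.8
```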

Key Benefits of Using Langfuse with Ragas

  • Score Traces: Attach Ragas metric scores to your traces and spans, so each production run carries its own quality signal.
  • Detailed Analytics: Segment and analyze traces to find low-scoring runs and pinpoint where your system needs improvement.
  • Score Reporting: Drill down into detailed reports for specific use cases and user segments.
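
As a minimal sketch of attaching a Ragas result to a trace, assuming the Langfuse Python SDK v2's `langfuse.score(...)` method; the `trace_id` and score value are hypothetical placeholders for values produced by your pipeline run and your Ragas evaluation:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* credentials from the environment

# Hypothetical values: a trace id from your RAG run and a Ragas metric score.
trace_id = "your-trace-id"
faithfulness_score = 0.87

langfuse.score(
    trace_id=trace_id,
    name="faithfulness",
    value=faithfulness_score,
    comment="Computed with Ragas",  # optional context shown in the UI
)
langfuse.flush()  # ensure the score is sent before the process exits
```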

Getting Started

Check out the notebook for end-to-end examples of RAG evaluations with Ragas and Langfuse.
