Easy Learning: How to Test and Evaluate Gen AI, LLM, RAG, and Agentic AI
Category: IT & Software > Other IT & Software
Duration: 2h 47m
Price: £14.99 £12.99
Rating: 4.2
Students: 1,776

Language: English

Practical LLM Evaluation & Gen AI Testing: RAG, Agentic AI with Ragas, DeepEval, LangSmith

What you will learn:

  • Master the complete lifecycle of LLM application evaluation, from defining quality criteria to selecting appropriate evaluation methods and metrics for RAG and Agentic AI.
  • Gain expertise in evaluating RAG systems using the RAGAs framework, understanding RAG components and their specific evaluation needs.
  • Implement RAG evaluation with advanced metrics like context precision and recall, and learn to test RAG applications effectively using Python and RAGAs.
  • Develop skills in testing and evaluating RAG applications with Pytest, including API automation for robust RAG quality assurance (see the sketch after this list).
  • Learn to test and evaluate complex Agentic AI applications using DeepEval, incorporating automated testing with Pytest for multi-agent systems.
  • Utilize LangSmith for comprehensive tracing of RAG applications, create custom evaluation datasets programmatically with Python, and perform AI application evaluations using these datasets within LangSmith.
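
To make the Pytest outcome above concrete, here is a minimal sketch of the kind of API-level check the course works toward. It assumes a hypothetical RAG service running locally at http://localhost:8000 with a /query endpoint that returns the answer and the retrieved contexts; the endpoint name and response fields are illustrative only, not the course's actual application.

```python
# Minimal Pytest sketch for API-level RAG testing (hypothetical endpoint and schema).
import requests

BASE_URL = "http://localhost:8000"  # assumed local RAG service, not from the course


def test_rag_query_returns_grounded_answer():
    # Hypothetical /query endpoint returning the answer plus the retrieved contexts.
    resp = requests.post(
        f"{BASE_URL}/query",
        json={"question": "What is Retrieval-Augmented Generation?"},
        timeout=30,
    )
    assert resp.status_code == 200

    body = resp.json()
    assert body["answer"].strip(), "answer should not be empty"
    assert len(body["contexts"]) > 0, "at least one context chunk should be retrieved"
```

Functional checks like this complement the metric-based evaluation covered later with RAGAs, DeepEval, and LangSmith.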

Description

Ensuring the reliability, accuracy, and trustworthiness of Large Language Model (LLM) applications is paramount as they become integral to modern solutions. This immersive, hands-on course equips you with the essential skills to navigate the entire evaluation lifecycle of LLM-powered systems, with a specialized emphasis on Retrieval-Augmented Generation (RAG) and sophisticated Agentic AI architectures.

You'll start by learning the fundamental principles of the evaluation process and how to assess quality at every critical stage of a RAG pipeline. The course then takes a deep dive into RAGAs, the widely adopted, community-driven evaluation framework, where you'll gain practical experience computing key metrics such as context relevancy, faithfulness, and hallucination rate with open-source tooling.

Through a series of practical, hands-on labs, you will build and automate tests with Pytest, rigorously evaluate complex multi-agent systems, and implement evaluation workflows using DeepEval. You will also learn to trace and debug your LLM workflows with LangSmith, gaining clear visibility into how each component of your RAG or Agentic AI system behaves.

Upon successful completion of this course, you will possess the expertise to engineer custom evaluation datasets and confidently validate LLM outputs against precise ground truth responses. Whether you are an aspiring AI developer, a dedicated quality assurance engineer, or a passionate AI enthusiast eager to delve into advanced concepts, this course will furnish you with the indispensable practical tools and advanced techniques required to deploy trustworthy, production-grade LLM applications.

No prior experience with specific evaluation frameworks is necessary; a foundational understanding of Python and an enthusiastic curiosity for exploring the frontiers of AI quality will suffice. Enroll today and transform your capability to evaluate and rigorously test Generative AI applications with confidence and precision!

Curriculum

Introduction

This introductory section provides a comprehensive overview of the course, outlining the key learning objectives, the relevance of Gen AI and LLM evaluation, and what participants can expect to achieve. It sets the foundation for understanding the critical role of testing and quality assurance in the development of robust AI applications.

Theory - Evaluation process and RAG

Dive into the theoretical underpinnings of LLM evaluation. This section demystifies the core evaluation process, explaining various methodologies and their importance. It then provides a detailed exploration of Retrieval-Augmented Generation (RAG) systems, identifying their key components and pinpointing exactly what aspects require rigorous evaluation to ensure optimal performance and reliability.

RAG Evaluation with RAGAs Framework

Unlock the power of the RAGAs framework for comprehensive RAG evaluation. This section begins with an overview of RAGAs, followed by hands-on coding exercises to implement the framework alongside a sample RAG application. You'll gain practical experience in measuring critical metrics like Context Precision, Context Recall, and Faithfulness. The learning culminates in a complete RAG application evaluation, including understanding its API and implementing robust API testing using Pytest, preparing you for effective RAG quality assurance.
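
As a taste of what the hands-on exercises in this section look like, here is a minimal sketch of scoring a single RAG interaction with Ragas. It assumes the Ragas 0.1-style API (the evaluate function plus prebuilt metric objects) and an OpenAI API key in the environment, since Ragas uses an LLM judge by default; the sample question, answer, and contexts are placeholders, and column or metric names may differ in newer Ragas releases.

```python
# Minimal Ragas sketch: score one RAG interaction on the metrics covered in this section.
# Assumes a Ragas 0.1-style API and OPENAI_API_KEY set in the environment.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_precision, context_recall, faithfulness

samples = {
    "question": ["What does context precision measure?"],          # placeholder query
    "answer": ["The answer produced by your RAG application."],    # placeholder output
    "contexts": [["A retrieved chunk the answer should be grounded in."]],
    "ground_truth": ["The reference answer you expect."],
}

result = evaluate(
    Dataset.from_dict(samples),
    metrics=[context_precision, context_recall, faithfulness],
)
print(result)
```

Each metric comes back as a score between 0 and 1, so results like these can feed directly into a Pytest assertion with a minimum threshold, which is the pattern this section builds toward.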

DeepEval

Explore DeepEval, a powerful framework for AI evaluation. This section starts with an overview and your first hands-on evaluation exercise. You will then delve into a practical use case involving a 'Travel Agent Agentic AI' system. The culmination of this section involves a detailed, hands-on evaluation of this complex Agentic AI using DeepEval, equipping you with skills to assess advanced AI behaviors.
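
For orientation, here is a minimal sketch of the DeepEval-plus-Pytest pattern this section builds on. It uses DeepEval's LLMTestCase and AnswerRelevancyMetric, which rely on an LLM judge (by default an OpenAI key in the environment); the travel-agent prompt and reply shown are placeholders, not the course's actual agent.

```python
# Minimal DeepEval sketch: a Pytest-style test case for an agent's answer.
# Assumes OPENAI_API_KEY is set, since DeepEval's metrics use an LLM judge by default.
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase


def test_travel_agent_answer_relevancy():
    test_case = LLMTestCase(
        input="Plan a two-day trip to Paris on a small budget.",  # placeholder prompt
        actual_output="Here is a two-day Paris itinerary built around free museums...",  # placeholder agent reply
    )
    # Fail the test if the judged relevancy score falls below the threshold.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```

A file like this can be run with plain pytest or with deepeval test run, which layers DeepEval's metric reporting on top of the standard test output.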

LangSmith

Master LangSmith for advanced LLM workflow tracing and evaluation. This section begins with crucial instructions and code setup. You will learn to trace complex RAG applications, gaining deep insights into their operational flow. The course then guides you through creating custom evaluation datasets programmatically using Python and effectively using these datasets to evaluate your AI applications within the LangSmith ecosystem.
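
To preview the workflow, here is a minimal sketch that traces a toy application and builds a small evaluation dataset programmatically with the LangSmith Python client. It assumes a LangSmith API key in the environment and tracing enabled; the dataset name, example pair, and placeholder app are invented for illustration, and the course's actual evaluators are omitted.

```python
# Minimal LangSmith sketch: trace a toy app and build a small evaluation dataset.
# Assumes an API key in the environment (LANGSMITH_API_KEY or LANGCHAIN_API_KEY)
# and tracing enabled (e.g. LANGCHAIN_TRACING_V2=true).
from langsmith import Client, traceable

client = Client()


@traceable  # each call is recorded as a trace in your LangSmith project
def rag_app(question: str) -> str:
    # Placeholder: a real RAG app would retrieve context and call an LLM here.
    return f"Demo answer for: {question}"


# "rag-eval-demo" is an illustrative dataset name, not one used in the course.
dataset = client.create_dataset(
    dataset_name="rag-eval-demo",
    description="Hand-written question/answer pairs for RAG evaluation.",
)
client.create_examples(
    inputs=[{"question": "What does context recall measure?"}],
    outputs=[{"answer": "How much of the ground-truth answer is covered by the retrieved context."}],
    dataset_id=dataset.id,
)

rag_app("What does context recall measure?")  # produces a trace to inspect in LangSmith
```

In the course itself, the traced RAG application and the evaluators run against datasets like this inside LangSmith; the sketch above only shows the plumbing.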

Quizzes

Test your knowledge and reinforce your learning with a set of engaging quizzes. This section includes 5 thought-provoking questions designed to assess your understanding of the core concepts, frameworks, and practical techniques covered throughout the course, ensuring a solid grasp of LLM and Gen AI evaluation.