Easy Learning with Mastering LlamaIndex: Build Smart AI-Powered Data Solutions

LlamaIndex Mastery: Build Powerful AI Data Solutions

What you will learn:

  • Build robust AI-powered data solutions with LlamaIndex.
  • Master diverse data loading techniques.
  • Design, customize, and optimize efficient RAG pipelines.
  • Generate high-quality embeddings using HuggingFace and OpenAI.
  • Develop expertise in advanced query engines and vector indexing.
  • Utilize powerful debugging and monitoring tools.
  • Craft impactful prompts and response synthesizers.
  • Implement rigorous evaluation techniques for system validation.

Description

Unlock the power of AI-driven data solutions with our comprehensive LlamaIndex mastery course. Designed for developers, data scientists, and AI enthusiasts, this program provides practical, hands-on training in building intelligent data workflows using LlamaIndex and its advanced tools. Learn to design, implement, and optimize Retrieval-Augmented Generation (RAG) pipelines, leverage embeddings, and conquer complex data challenges with cutting-edge AI techniques.

This course goes beyond the basics, equipping you with the skills to integrate LLMs seamlessly with various data sources. Master advanced techniques such as data ingestion, efficient indexing, optimized querying, and robust evaluation methods. Explore tools like HuggingFace, OpenAI APIs, vector databases, and debugging tools to refine your AI systems for optimal performance and scalability. Whether you're working with structured or unstructured data, including PDFs and databases, this course empowers you to create sophisticated, production-ready AI applications.

Key Highlights:

  • Comprehensive RAG Pipeline Development: Build efficient and scalable pipelines from data ingestion to optimized querying.
  • Advanced Query Engine Techniques: Master precision querying with tools like JSONQueryEngine and text-to-SQL systems, employing sentence reranking and recency filters.
  • Practical Projects and Case Studies: Build real-world AI solutions through hands-on projects and detailed case studies.
  • Effective Debugging and Observability: Utilize state-of-the-art debugging and monitoring tools for robust system development.
  • Thorough System Evaluation: Implement rigorous evaluation strategies (correctness, relevancy, faithfulness) to ensure high-performing AI systems.

Transform your data handling capabilities and start building intelligent AI applications today!

Curriculum

Introduction: Embarking on Your AI Data Journey

This introductory section sets the stage for your LlamaIndex adventure. You'll be welcomed to the course, gain an understanding of generative AI's landscape, and establish your development environment. Key concepts like the architecture of LLMs, RAG, and the LlamaIndex framework are thoroughly explained, providing a strong foundation for the rest of the course. Lectures cover the course welcome, an exploration of generative AI, Git setup, AI model foundations, LLM architecture and RAG, and an introduction to the LlamaIndex framework.

Installation and Setup: Building Your AI Workspace

This section guides you through setting up your development environment for LlamaIndex projects. You'll learn to configure LlamaIndex in Google Colab, integrate OpenAI API keys, and take your first steps with a beginner-friendly LlamaIndex demo. Lectures include setting up LlamaIndex in Google Colab, configuring OpenAI API keys, and a first LlamaIndex demo.
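
To give a sense of what the first demo looks like, here is a minimal sketch (the "data" folder name and the query are illustrative assumptions, not the course's exact notebook):

    import os
    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

    os.environ["OPENAI_API_KEY"] = "sk-..."  # paste your own key (in Colab, prefer a secret)

    documents = SimpleDirectoryReader("data").load_data()   # load every file in ./data
    index = VectorStoreIndex.from_documents(documents)      # build an in-memory vector index
    query_engine = index.as_query_engine()
    print(query_engine.query("What are these documents about?"))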

Ollama Exploration: Local LLM Power

Dive into Ollama, a powerful tool for running local LLMs. This section covers Ollama's functionalities, its configuration for your local environment, and its integration with Visual Studio Code. Lectures cover an Ollama overview, local environment configuration, and Visual Studio Code integration.
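
As a rough sketch of the integration (the model name "llama3" is an assumption and must already be pulled with Ollama), pointing LlamaIndex at a local Ollama server can look like this:

    from llama_index.core import Settings
    from llama_index.llms.ollama import Ollama

    # Use the local Ollama server (default port 11434) instead of a hosted LLM.
    Settings.llm = Ollama(model="llama3", request_timeout=120.0)
    print(Settings.llm.complete("Summarize retrieval-augmented generation in one sentence."))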

Unpacking RAG: Core Stages of Retrieval-Augmented Generation

This in-depth section dissects the Retrieval-Augmented Generation (RAG) pipeline. You'll learn to load data using the LlamaIndex CLI and SimpleDirectoryReader, master document chunking, explore interactive embeddings, and generate embeddings with HuggingFace and OpenAI APIs. Furthermore, you'll dive into index retrievers, vector databases (including ChromaDB), response synthesizers, and gain a comprehensive understanding of RAG's different stages. Lectures cover RAG stages, data loading with the CLI and SimpleDirectoryReader, node chunking, interactive embeddings, embedding generation (HuggingFace and OpenAI), indexes and VectorStore indexing, index retrievers, vector databases (ChromaDB), response synthesizers, and a RAG recap.
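
To connect those stages, here is a compact end-to-end sketch (the chunk sizes, embedding model, and ChromaDB collection name are illustrative assumptions):

    import chromadb
    from llama_index.core import (
        Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex,
    )
    from llama_index.core.node_parser import SentenceSplitter
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.vector_stores.chroma import ChromaVectorStore

    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
    Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=64)  # chunking stage

    collection = chromadb.PersistentClient(path="./chroma").get_or_create_collection("rag_demo")
    storage_context = StorageContext.from_defaults(
        vector_store=ChromaVectorStore(chroma_collection=collection)
    )

    documents = SimpleDirectoryReader("data").load_data()                      # loading stage
    index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

    query_engine = index.as_query_engine(similarity_top_k=3)                   # retrieval + synthesis
    print(query_engine.query("What does the corpus say about embeddings?"))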

Advanced Data Loading: Mastering Data Ingestion

This section delves into advanced techniques for data loading and processing. You'll learn to use SimpleDirectoryReader efficiently, implement parallel processing, integrate remote file systems, parse HTML, access deep data with DeepLake Reader, and interact with databases using Database Reader. You'll also learn about Google Drive integration, the document and node structure in LlamaIndex, and advanced node customization techniques. Lectures include advanced workflow introduction, efficient SimpleDirectoryReader use, parallel processing, remote file system integration, HTML parsing, DeepLake and Database Readers, Google Drive integration, documents and nodes, and advanced node customization.
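
A small sketch of the kind of SimpleDirectoryReader tuning covered here (the directory, extensions, and worker count are assumptions for illustration):

    from llama_index.core import SimpleDirectoryReader

    reader = SimpleDirectoryReader(
        input_dir="./reports",
        recursive=True,                     # walk subdirectories
        required_exts=[".pdf", ".html"],    # only parse these file types
    )
    documents = reader.load_data(num_workers=4)  # parallel file parsing
    print(len(documents), "documents loaded")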

Building Efficient Ingestion Pipelines

This section focuses on creating and optimizing ingestion pipelines. You’ll learn how to extract metadata from structured and unstructured data, including PDF metadata extraction and entity extraction using various extractors. You'll also learn to design custom transformations and handle multiple extractors within your pipelines. Lectures cover ingestion pipeline overview, pipeline demonstration, metadata extraction (structured and unstructured, including PDFs), summary and entity extraction, custom transformation design, and handling multiple extractors.
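
As a hedged sketch of such a pipeline (the specific extractors and splitter settings are illustrative, not the course's exact configuration):

    from llama_index.core import SimpleDirectoryReader
    from llama_index.core.extractors import SummaryExtractor, TitleExtractor
    from llama_index.core.ingestion import IngestionPipeline
    from llama_index.core.node_parser import SentenceSplitter
    from llama_index.embeddings.openai import OpenAIEmbedding

    pipeline = IngestionPipeline(
        transformations=[
            SentenceSplitter(chunk_size=512),       # split documents into nodes
            TitleExtractor(),                       # add a document_title metadata field
            SummaryExtractor(summaries=["self"]),   # add a per-node summary
            OpenAIEmbedding(),                      # embed the enriched nodes
        ]
    )
    nodes = pipeline.run(documents=SimpleDirectoryReader("data").load_data())
    print(nodes[0].metadata)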

Smart Data Storage: LlamaIndex Storage Techniques

This section covers effective data storage within the LlamaIndex framework. You'll learn about DocStore, efficient storage management, persistent storage on local disk, and leveraging MongoDB and Redis for scalable storage. Lectures include introduction to storage, DocStore, DocStore management, local disk persistence, MongoDB integration (saving and loading), and Redis integration.
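
A minimal persistence sketch (the ./storage path is an assumption; MongoDB and Redis follow the same pattern with their own storage classes):

    from llama_index.core import (
        SimpleDirectoryReader, StorageContext, VectorStoreIndex, load_index_from_storage,
    )

    index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
    index.storage_context.persist(persist_dir="./storage")   # write docstore, index store, vectors

    # Later (or in another process): reload without re-parsing or re-embedding.
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    index = load_index_from_storage(storage_context)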

Mastering Indexing: Efficient Data Organization

This section focuses on indexing strategies for optimal query performance. You'll explore various retrievers and indexes, including vector indexes, summary indexes, keyword table indexes, document summary indexes, and property graph indexes. Lectures include indexing fundamentals, retrievers, vector indexes, summary indexes, keyword table indexes, document summary indexes, and graph indexes.
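
For a flavour of how two of these index types differ in code (the query text is an assumption):

    from llama_index.core import SimpleDirectoryReader, SummaryIndex, VectorStoreIndex

    documents = SimpleDirectoryReader("data").load_data()

    vector_index = VectorStoreIndex.from_documents(documents)   # top-k semantic retrieval
    summary_index = SummaryIndex.from_documents(documents)      # visits every node when queried

    retriever = vector_index.as_retriever(similarity_top_k=5)
    for node in retriever.retrieve("What is said about evaluation?"):
        print(round(node.score, 3), node.node.metadata.get("file_name"))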

Optimized Querying: Precise Information Retrieval

This section covers advanced querying techniques. You'll learn to customize query stages for enhanced precision, utilize sentence reranking and recency filters, work with metadata replacement, query structured data using text-to-SQL systems, explore synthesizer response types, and query JSON data. You'll also learn about retriever techniques, comparing retriever and response modes, query fusion, and dynamic query routing. Lectures cover querying basics, query stage customization, sentence reranking, recency filters, metadata replacement, text-to-SQL, synthesizer response types, JSON querying, real-time streaming, retriever techniques, comparing retriever/response modes, query fusion, and dynamic query routing.
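
One representative customization from this section, sketched with assumed model names and top-k values, is broad retrieval followed by sentence reranking:

    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
    from llama_index.core.postprocessor import SentenceTransformerRerank

    index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())

    rerank = SentenceTransformerRerank(model="cross-encoder/ms-marco-MiniLM-L-2-v2", top_n=3)
    query_engine = index.as_query_engine(
        similarity_top_k=10,              # retrieve broadly first...
        node_postprocessors=[rerank],     # ...then keep only the 3 best-reranked nodes
        response_mode="compact",          # one of the synthesizer response types
    )
    print(query_engine.query("Which storage backends are discussed?"))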

Empowering Conversations: Chat Engine Frameworks

This section introduces chat engine frameworks. You'll learn about chat engines, the ReAct mode, and how to add personality to your chat applications. Lectures include introduction to chat engines, ReAct mode, and adding personality to chat engines.
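
As a brief sketch of adding personality to a chat engine (the system prompt text is an illustrative assumption; swapping chat_mode to "react" gives the ReAct-style engine):

    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
    chat_engine = index.as_chat_engine(
        chat_mode="context",
        system_prompt="You are a cheerful data librarian who answers only from the indexed documents.",
    )
    print(chat_engine.chat("What topics do these files cover?"))
    print(chat_engine.chat("Which of those relate to embeddings?"))  # uses conversation memory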

Agents: Automating AI Workflows

This section covers the use of agents for automating AI-powered workflows. You'll learn about agents, using OpenAI and ReAct agents, and working with Agent Runner APIs and the ReAct framework in a chat REPL environment. Lectures include agent introduction, OpenAI and ReAct agents, Agent Runner APIs, and the ReAct framework.
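
A hedged sketch of a ReAct agent wrapping a plain Python function as a tool (the multiply helper and model name are hypothetical, not taken from the course):

    from llama_index.core.agent import ReActAgent
    from llama_index.core.tools import FunctionTool
    from llama_index.llms.openai import OpenAI

    def multiply(a: float, b: float) -> float:
        """Multiply two numbers and return the product."""
        return a * b

    agent = ReActAgent.from_tools(
        [FunctionTool.from_defaults(fn=multiply)],
        llm=OpenAI(model="gpt-4o-mini"),
        verbose=True,                       # print each reasoning/acting step
    )
    print(agent.chat("What is 12.5 multiplied by 8?"))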

Prompt Engineering: Crafting Effective Prompts

Learn the art of prompt engineering to get the best results from your AI models. This section covers crafting effective prompts, using default templates, defining custom prompts, building dynamic conversations, using partial prompts, and mapping variables and functions. Lectures cover prompt crafting, default templates, custom prompts, dynamic conversations, partial prompts, variable mapping, and function mapping.
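
A short sketch of swapping in a custom question-answering prompt (the template wording is an illustrative assumption; the prompt key is the standard query-engine key):

    from llama_index.core import PromptTemplate, SimpleDirectoryReader, VectorStoreIndex

    qa_prompt = PromptTemplate(
        "Context information is below.\n"
        "---------------------\n{context_str}\n---------------------\n"
        "Using only the context, answer the question in two sentences.\n"
        "Question: {query_str}\nAnswer: "
    )

    index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
    query_engine = index.as_query_engine()
    query_engine.update_prompts({"response_synthesizer:text_qa_template": qa_prompt})
    print(query_engine.query("What is the main topic?"))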

Crafting Intelligent AI Pipelines

This section delves into workflow creation for more complex AI systems. You will cover workflow basics, context management, event handling, human-in-the-loop integration, multistep reasoning, and building long RAG workflows. Lectures include workflow introduction, workflow creation, context and event management, event triggering and streaming, human-in-the-loop concepts, multistep reasoning, and long RAG workflow implementation.
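
A minimal two-step workflow sketch, chained by a custom event (the event and step names are hypothetical; real workflows would add retrieval, streaming, or human-in-the-loop steps):

    import asyncio
    from llama_index.core.workflow import Event, StartEvent, StopEvent, Workflow, step

    class OutlineEvent(Event):
        outline: str

    class DraftWorkflow(Workflow):
        @step
        async def make_outline(self, ev: StartEvent) -> OutlineEvent:
            return OutlineEvent(outline=f"Outline for: {ev.topic}")

        @step
        async def write_draft(self, ev: OutlineEvent) -> StopEvent:
            return StopEvent(result=f"Draft based on '{ev.outline}'")

    async def main():
        print(await DraftWorkflow(timeout=30).run(topic="RAG pipelines"))

    asyncio.run(main())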

Optimizing AI with Precision Metrics

This section focuses on evaluating and optimizing your AI systems. You'll learn various evaluation techniques, including correctness, relevancy, faithfulness, and using tools like Tonic Validate. Lectures cover evaluation techniques, code structuring for evaluation, correctness, relevancy, embedding similarity, faithfulness, guideline, pairwise, and retriever evaluators, and Tonic Validate.
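
One evaluator from this section, sketched end to end (the query and judge model are assumptions): a faithfulness check verifies that the response is actually supported by the retrieved context.

    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
    from llama_index.core.evaluation import FaithfulnessEvaluator
    from llama_index.llms.openai import OpenAI

    index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
    response = index.as_query_engine().query("Which storage options are discussed?")

    evaluator = FaithfulnessEvaluator(llm=OpenAI(model="gpt-4o-mini"))  # LLM acts as the judge
    result = evaluator.evaluate_response(response=response)
    print("faithful:", result.passing, "score:", result.score)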

Observability and Instrumentation: Monitoring and Debugging

This section introduces observability and instrumentation tools for monitoring and debugging your applications. You'll learn about Traceloop, Phoenix Arize, and MLFlow for enhancing system monitoring. Lectures include introduction to observability and instrumentation, Traceloop, Phoenix Arize, and MLFlow.
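
A minimal sketch of switching on tracing with one of the integrations named above (Arize Phoenix; assumes the arize-phoenix and related callback packages are installed):

    import phoenix as px
    import llama_index.core

    px.launch_app()                                        # start the local Phoenix UI
    llama_index.core.set_global_handler("arize_phoenix")   # trace every LlamaIndex call from here on

    # ...build indexes and run queries as usual; traces appear in the Phoenix UI.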

Course Summary and Next Steps

This final section provides a course summary and points you to additional resources. Lectures include a final GitHub code link.