Edocti
Advanced Technical Training for the Software Engineer of Tomorrow

Applied Generative AI: Building RAG Systems for Enterprise

Intermediate
14 h
4.8 (68 reviews)

Scheduled sessions

No sessions are available at the moment.

Beyond the chat box: Move from simple LLM wrappers to robust, enterprise-grade Generative AI applications using your company's private data.

Master the RAG (Retrieval-Augmented Generation) architecture. Learn how to ingest complex documents, apply advanced chunking strategies, and connect semantic search with Large Language Models.

Frameworks & Tooling: Get hands-on with industry-standard orchestration frameworks like LangChain and LlamaIndex to build complex AI pipelines.

Production Focus: ~70% hands-on labs focused not just on building RAG, but on evaluating it. Learn techniques to mitigate hallucinations, measure recall/precision (using RAGAS), and handle prompt injection.

Who it’s for: Software Engineers, AI Developers, and Data Scientists tasked with delivering reliable Generative AI solutions.

Skills You Will Learn

  • RAG Architecture
  • LangChain & LlamaIndex
  • LLM API Integration
  • Prompt Engineering
  • Document Chunking
  • RAG Evaluation (RAGAS)
  • Hallucination Mitigation
  • Vector Database Interfacing

Curriculum

LLM Fundamentals & API Integration

  • Understanding LLMs: Tokens, context windows, temperature, and generation limits
  • Interacting with LLM APIs (OpenAI) and local models (Ollama / HuggingFace)
  • Advanced Prompt Engineering: Few-shot prompting, Chain-of-Thought (CoT), and formatting instructions
  • Lab: Building a structured data extractor using function calling / JSON mode
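The function-calling / JSON-mode lab above hinges on validating the model's structured output before trusting it. A minimal sketch of that validation step in plain Python, assuming a hypothetical invoice-extraction schema (the field names here are illustrative, not taken from the course materials):

```python
import json

# Hypothetical schema: the model is asked, via JSON mode or function
# calling, to return exactly these fields.
EXPECTED_KEYS = {"vendor", "total", "currency"}

def parse_extraction(raw: str) -> dict:
    """Parse and validate a JSON-mode LLM response."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {sorted(missing)}")
    return data

# Simulated model reply (in the lab, this would come from the LLM API):
reply = '{"vendor": "Acme GmbH", "total": 1250.0, "currency": "EUR"}'
record = parse_extraction(reply)
print(record["vendor"])  # Acme GmbH
```

Rejecting incomplete responses early, rather than passing them downstream, is the core habit the lab builds.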

The RAG Architecture & Data Ingestion

  • What is Retrieval-Augmented Generation (RAG) and why is it necessary?
  • Document Loaders: Ingesting PDFs, Confluence pages, and Markdown files
  • Advanced Chunking Strategies: Recursive character splitting, semantic chunking, and handling chunk overlap
  • Lab: Building an automated data ingestion pipeline into a vector store
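To make the chunking terminology concrete, here is a deliberately simple sliding-window chunker in plain Python; production splitters such as LangChain's RecursiveCharacterTextSplitter additionally split on separators (paragraphs, then sentences) before falling back to character counts:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks, each sharing `overlap`
    characters with its predecessor so context spans chunk borders."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "RAG pipelines ingest documents, split them into chunks, " * 10
chunks = chunk_text(doc, chunk_size=120, overlap=30)
print(len(chunks), len(chunks[0]))  # 7 120
```

Chunk size and overlap are exactly the knobs the later evaluation module tunes: chunks too small lose context, chunks too large dilute retrieval precision.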

Orchestration Frameworks: LangChain & LlamaIndex

  • Introduction to LangChain: Chains, Prompts, and Output Parsers
  • LlamaIndex basics: Nodes, Indices, and Query Engines
  • Connecting retrievers to LLMs: Stuffing, Map-Reduce, and Refine document chains
  • Lab: Building a conversational QA bot over a technical documentation repository
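The "stuffing" strategy from this module is easy to see in miniature: retrieve the top-scoring chunks, then stuff them all into a single prompt. The retriever below is a toy word-overlap scorer, standing in for the vector search that LangChain or LlamaIndex would perform:

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: number of query words appearing in the doc."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """First-stage retrieval: top-k documents by the toy score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def stuff_prompt(query: str, docs: list[str]) -> str:
    """'Stuff' document chain: concatenate all retrieved chunks
    into one context block ahead of the question."""
    context = "\n\n".join(retrieve(query, docs))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "The ingestion pipeline loads PDFs and Markdown files.",
    "Chunk overlap preserves context across chunk boundaries.",
    "Temperature controls randomness in LLM generation.",
]
print(stuff_prompt("What does chunk overlap do?", docs))
```

Stuffing only works while the retrieved chunks fit the context window; Map-Reduce and Refine exist precisely for the case where they do not.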

Evaluating and Productionizing RAG

  • The problem of hallucinations and how to mitigate them
  • RAG Evaluation: Measuring context precision, recall, and answer relevancy (using RAGAS/TruLens)
  • Query transformations: Query re-writing, sub-queries, and hybrid search
  • Lab: Implementing an evaluation pipeline and tuning chunk sizes to improve mAP (Mean Average Precision)
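The retrieval metrics in this module reduce to set arithmetic once each query has a labelled set of relevant chunks; a minimal sketch (RAGAS and TruLens compute LLM-judged variants, but the underlying definitions are the same):

```python
def context_precision(retrieved: list[str], relevant: set[str]) -> float:
    """Fraction of retrieved chunks that are actually relevant."""
    if not retrieved:
        return 0.0
    return sum(1 for c in retrieved if c in relevant) / len(retrieved)

def context_recall(retrieved: list[str], relevant: set[str]) -> float:
    """Fraction of the relevant chunks that were retrieved."""
    if not relevant:
        return 0.0
    return sum(1 for c in relevant if c in retrieved) / len(relevant)

retrieved = ["c1", "c4", "c7"]
relevant = {"c1", "c2", "c7"}
print(context_precision(retrieved, relevant))  # 2/3: c1 and c7 are relevant
print(context_recall(retrieved, relevant))     # 2/3: c2 was missed
```

Tuning chunk size trades these off: both metrics, averaged over a query set, are what the lab's evaluation pipeline tracks.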

Optional modules

Optional — Advanced Retrieval Techniques

  • Implementing Re-ranking (Cross-Encoders) to improve top-K results
  • Parent-Document Retrieval and small-to-big retrieval patterns
  • Self-RAG: Teaching the LLM to critique its own retrieved context
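Re-ranking as described above is a two-stage pattern: a cheap first-stage retriever returns top-K candidates, and a more expensive scorer re-orders them. A structural sketch with the cross-encoder stubbed out by a word-overlap function (a real implementation would use an actual cross-encoder model that jointly encodes query and document):

```python
def cross_encoder_score(query: str, doc: str) -> float:
    """Stand-in for a real cross-encoder score on a (query, doc) pair."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Second stage: re-order first-stage candidates by the pair score."""
    return sorted(candidates,
                  key=lambda d: cross_encoder_score(query, d),
                  reverse=True)[:top_k]

candidates = [
    "Vector stores index embeddings for similarity search.",
    "Cross-encoders score query-document pairs jointly.",
    "Chunk overlap preserves context across boundaries.",
]
print(rerank("cross-encoders score documents", candidates, top_k=1))
```

The design point is the split itself: the first stage must be fast enough to scan the whole corpus, while the second stage only has to score K candidates.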

Course Day Structure

  • Part 1: 09:00–10:30
  • Break: 10:30–10:45
  • Part 2: 10:45–12:15
  • Lunch break: 12:15–13:15
  • Part 3: 13:15–15:15
  • Break: 15:15–15:30
  • Part 4: 15:30–17:30

Want to find out more? We are here to help!

Email us directly at training@edocti.com.