Description:
This intensive 3-day course will provide students with an in-depth understanding of Large Language Models (LLMs), their real-world applications, and how to engineer and deploy cutting-edge AI systems. As LLMs continue to revolutionize industries, they enable businesses to automate complex processes, enhance customer experiences, and solve challenges that were previously insurmountable with traditional methods.
In today's rapidly evolving AI landscape, companies are increasingly leveraging LLMs for a variety of applications such as customer support automation, content generation, intelligent search engines, code generation, document processing, and more. Whether it's streamlining workflows, improving decision-making, or enabling personalized experiences, LLMs have become central to business success. This course will equip students with the tools to create and optimize LLM-powered solutions, giving them the practical skills to drive innovation in any business.
Students will explore topics like open-source and frontier models, building multimodal chatbots, automated solutions using Hugging Face, evaluating models for code generation, building advanced retrieval-augmented generation (RAG) systems with vector embeddings, fine-tuning models using LoRA/QLoRA, and building autonomous multi-agent systems. By learning how to use LLMs for these cutting-edge tasks, students will gain the technical know-how to tackle real-world challenges and understand why companies worldwide are adopting LLMs at an accelerated pace.
Duration: 3 Days
Course Code: BDT405
Learning Objectives:
After this course, you will be able to:
- Understand and work with open-source and frontier LLMs
- Build multi-modal AI chatbots using APIs and Gradio
- Utilize HuggingFace models, datasets, and tokenization techniques
- Develop RAG pipelines with LangChain & vector embeddings
- Fine-tune LLMs using LoRA/QLoRA & Parameter-Efficient Fine-Tuning (PEFT)
- Understand Agentic AI & multi-agent autonomous systems
Prerequisites:
- Python programming experience, including libraries such as NumPy & Pandas
- Conceptual understanding of machine learning
- Familiarity with neural networks and transformers (helpful but not mandatory)
Audience:
- AI/ML Engineers & Data Scientists
- Software Developers working with LLMs
- Professionals interested in AI-powered automation
- Researchers & innovators in the AI field
Course Outline:
- Introduction to Open-Source & Frontier Models
- Overview of key open-source LLMs (e.g., Gemma, Llama, Phi, Mistral)
- Understanding frontier (closed-source) LLMs (e.g., GPT, Claude, Gemini)
- Tokenization and output formatting (JSON, Markdown) using HuggingFace
- Lab: Build your first LLM-powered application (see the tokenizer sketch below)
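As a taste of the tokenization material in this module, the sketch below loads a HuggingFace tokenizer and inspects how text is split into tokens; the Phi-3 checkpoint is only an illustrative choice, and any open-source chat model on the Hub works the same way.

```python
# Minimal tokenization sketch (illustrative checkpoint; any HF model works similarly).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

text = "LLM engineering in three days."
token_ids = tokenizer.encode(text)                   # text -> integer token ids
print(token_ids)
print(tokenizer.convert_ids_to_tokens(token_ids))    # ids -> readable subword strings
print(tokenizer.decode(token_ids))                   # round-trip back to text

# Chat templates format a conversation the way the model expects,
# here asking for JSON-formatted output.
messages = [{"role": "user", "content": "Reply only with JSON: name two open-source LLMs."}]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```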
- Building a Multi-Modal Chatbot
- Working with Claude, Gemini, and OpenAI APIs
- Streaming responses for real-time interactions
- Building a chatbot using Gradio and Python
- Lab: Develop a multi-modal chatbot integrating different models (a starter sketch follows)
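As a text-only starting point for this module's lab, the sketch below wires a streaming OpenAI chat completion into a Gradio ChatInterface. It assumes the openai (v1+) and gradio packages plus an OPENAI_API_KEY in the environment, with "gpt-4o-mini" standing in for whichever model you choose.

```python
# Minimal streaming chatbot sketch (model name is illustrative).
import gradio as gr
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat(message, history):
    # history arrives as openai-style {"role", "content"} dicts (type="messages" below)
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    messages += [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": message})

    stream = client.chat.completions.create(model="gpt-4o-mini",
                                            messages=messages, stream=True)
    reply = ""
    for chunk in stream:                      # yield partial text so the UI streams it
        reply += chunk.choices[0].delta.content or ""
        yield reply

gr.ChatInterface(chat, type="messages").launch()
```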
- Working with Open-Source Models & HuggingFace
- Exploring HuggingFace models and datasets
- Building transformer pipelines for NLP tasks (sentiment analysis, translation, etc.)
- Tokenization with AutoTokenizer for different LLMs
- Introduction to Quantization (reducing weights from 32-bit to 8-bit)
- Lab: Experiment with pipelines, tokenizers, and quantization (illustrated below)
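The sketch below previews this module's lab: a high-level transformers pipeline for a standard NLP task, followed by loading a small causal LM with its weights quantized to 8-bit. The checkpoint is illustrative, and the 8-bit load assumes a CUDA GPU with the bitsandbytes and accelerate packages installed (as on Colab).

```python
# Pipeline + 8-bit quantization sketch (illustrative checkpoint).
from transformers import pipeline, AutoModelForCausalLM, BitsAndBytesConfig

# 1. High-level pipeline for a standard NLP task
sentiment = pipeline("sentiment-analysis")
print(sentiment("The labs in this course are excellent."))

# 2. Quantization: load a causal LM with weights reduced to 8-bit
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=quant_config,
    device_map="auto",
)
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```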
- Using RAG to build advanced solutions with LangChain
- Understanding Retrieval Augmented Generation (RAG)
- Chunking optimization for better retrieval performance
- Working with Vector Databases (FAISS, Chroma, Pinecone)
- Lab: Build a RAG pipeline application (see the sketch below)
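A compact RAG sketch in the spirit of this module is shown below. It assumes the langchain-text-splitters, langchain-community, langchain-openai, and faiss-cpu packages (import paths shift slightly between LangChain releases), and the document strings and model name are placeholders.

```python
# Chunk -> embed -> retrieve -> generate (imports may vary by LangChain version).
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

documents = ["...knowledge-base text...", "...more text..."]   # placeholder content

# 1. Chunking: split long documents into overlapping pieces for retrieval
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.create_documents(documents)

# 2. Embed the chunks and index them in a FAISS vector store
store = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Retrieve the most relevant chunks and pass them to the LLM as context
question = "What does the knowledge base say about pricing?"
context = "\n\n".join(d.page_content for d in store.similarity_search(question, k=3))
answer = ChatOpenAI(model="gpt-4o-mini").invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```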
- Curating and working with HuggingFace datasets
- Using HuggingFace datasets to curate your own data
- Predicting product prices with traditional Machine Learning and NLP on the dataset
- Using a frontier LLM and prompt engineering to predict prices
- Lab: Develop a data curation & price prediction pipeline (baseline sketch below)
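The sketch below outlines the lab's two halves under stated assumptions: the dataset name and its price/description columns are hypothetical stand-ins for whatever product-price data the course uses, and the baseline is a simple TF-IDF + ridge regression from scikit-learn.

```python
# Data curation + traditional-ML price baseline (dataset name and columns are hypothetical).
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# 1. Curate: load and filter a dataset of product descriptions and prices
raw = load_dataset("your-org/product-prices", split="train")   # hypothetical dataset name
raw = raw.filter(lambda row: row["price"] is not None and len(row["description"]) > 50)

# 2. Baseline: TF-IDF features + linear model predicting price from text
X_train, X_test, y_train, y_test = train_test_split(
    raw["description"], raw["price"], test_size=0.2, random_state=42)
vectorizer = TfidfVectorizer(max_features=5000)
model = Ridge().fit(vectorizer.fit_transform(X_train), y_train)
preds = model.predict(vectorizer.transform(X_test))
print("Mean absolute error:", mean_absolute_error(y_test, preds))
```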
- Fine-Tuning LLMs with LoRA/QLoRA
- Understanding Transfer Learning & Fine-tuning in LLMs
- Modifying neural network layers for fine-tuning
- Exploring LoRA and QLoRA hyperparameters for efficient fine-tuning
- Using HuggingFace PEFT (Parameter-Efficient Fine-Tuning)
- Lab: Fine-tune a frontier LLM using LoRA/QLoRA (see the PEFT sketch below)
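As a preview of the fine-tuning lab, the sketch below wraps a small open checkpoint with a LoRA adapter via HuggingFace PEFT. The base model, rank, and target modules are illustrative (module names differ by architecture), and QLoRA would additionally load the base weights in 4-bit with BitsAndBytesConfig.

```python
# LoRA adapter sketch with PEFT (checkpoint and hyperparameters are illustrative).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (names vary by model)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()         # only a tiny fraction of weights will train
# Train the wrapped model as usual (e.g. transformers Trainer or trl SFTTrainer),
# then save just the adapter with model.save_pretrained("lora-adapter").
```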
- Agentic AI: Multi-Agent Autonomous Systems
- Understanding Agentic AI concepts
- Building agentic workflow architectures (see the sketch below)
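To make the agentic pattern concrete before the capstone, the sketch below is a deliberately framework-free stand-in: a "planner" agent decomposes a goal and a "worker" agent executes each step, each implemented as a plain OpenAI chat call (model name illustrative).

```python
# Framework-free two-agent sketch (planner + worker); model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt, user_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_prompt}],
    )
    return response.choices[0].message.content

def run_agents(goal):
    # Agent 1: planner decomposes the goal into a short numbered plan
    plan = ask("You are a planning agent. Return a short numbered list of steps.", goal)
    results = []
    # Agent 2: worker executes each step, seeing earlier results as context
    for step in (s for s in plan.splitlines() if s.strip()):
        results.append(ask("You are a worker agent. Complete the step concisely.",
                           f"Step: {step}\nEarlier results: {results}"))
    return plan, results

plan, results = run_agents("Estimate a fair price for a used mountain bike listing.")
print(plan)
print(results[-1])
```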
- Capstone Project: Develop an AI-powered price prediction system
Training material provided: Yes (Digital format)
Hands-on Labs: Labs are performed in Google Colab; instructions for running them on a local machine are also provided.