Description:
This hands-on course introduces the fundamentals of deep learning, neural networks, and PyTorch, one of the most widely used deep learning frameworks. Participants will build foundational knowledge of deep learning principles, explore the PyTorch ecosystem, and apply these skills through practical exercises. The course also introduces large language models (LLMs), explaining the concepts of pre-training and fine-tuning, and demonstrating practical use cases that leverage open-source models from Hugging Face.
By the end of the day, learners will have built and trained deep neural networks, and will know how to load and interact with powerful pretrained large language models using PyTorch.
Duration: 1 Day
Course Code: BDT500
Learning Objectives:
After this course, you will be able to:
- Understand the basics of neural networks and deep learning.
- Write and train deep neural networks using PyTorch.
- Understand the structure and training process of large language models (LLMs).
- Fine-tune a pretrained model for a specific task.
- Use Hugging Face to access and use open-source models.
Prerequisites:
Basic Python programming experience. No prior deep learning knowledge required.
Course Outline:
Introduction to Deep Learning and Neural Networks
- What is deep learning?
- Neural network architecture
- Key components: neurons, layers, activation functions
- How machines "learn" patterns
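The ideas above can be illustrated with a single artificial neuron: a weighted sum of inputs plus a bias, passed through an activation function. This is a minimal sketch in plain Python (the input values and weights are made up for illustration):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias (the "pre-activation" z)
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes z into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Example: two inputs, two weights, one bias
output = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
print(round(output, 4))  # 0.525
```

Layers in a neural network are just many such neurons applied in parallel, with each layer's outputs feeding the next layer's inputs.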
Introduction to PyTorch
- What is PyTorch?
- Core PyTorch concepts:
- Tensors
- Autograd
- Neural network modules
- Setting up your environment (brief walkthrough)
- Hands-on: Tensor operations and basic computations in PyTorch
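The hands-on portion covers operations like the following. This is a minimal sketch of tensor creation, arithmetic, and autograd in PyTorch (the specific values are illustrative):

```python
import torch

# Tensors: multi-dimensional arrays with GPU and autograd support
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.ones(2, 2)

c = a + b        # element-wise addition
d = a @ b        # matrix multiplication
m = a.mean()     # reduction to a scalar

print(m.item())  # 2.5

# Autograd: track operations and compute gradients automatically
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2
y.backward()     # d(x^2)/dx at x=2 is 4
print(x.grad.item())  # 4.0
```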
Building Neural Networks with PyTorch
- Defining models using `nn.Module`
- Loss functions and optimizers
- Training loops explained
- Hands-on: Build and train a simple deep learning model
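The pieces above combine into the standard PyTorch training loop. This is a minimal sketch that fits a small network to a toy regression problem (the data, architecture, and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

# Toy dataset: y = 2x + 1, no noise
X = torch.linspace(-1, 1, 32).unsqueeze(1)
y = 2 * X + 1

# A small model, a loss function, and an optimizer
model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)

# The training loop: forward pass, loss, backward pass, weight update
for epoch in range(200):
    optimizer.zero_grad()      # reset gradients from the previous step
    pred = model(X)            # forward pass
    loss = loss_fn(pred, y)    # measure the error
    loss.backward()            # backpropagate gradients
    optimizer.step()           # update the weights
```

Every supervised training script in the course follows this same four-step pattern; only the data, model, and loss function change.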
Large Language Models (LLMs) Fundamentals
- What are LLMs?
- Transformer architecture
- Popular LLMs: GPT, BERT
- Representation models vs. generative models
- Tokens and embeddings
- Practical challenges: dataset size, computational requirements, ethical issues
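Tokens and embeddings can be demonstrated in a few lines. This is a minimal sketch with a made-up toy vocabulary: real LLM tokenizers use subword vocabularies with tens of thousands of entries, but the id-to-vector lookup works the same way:

```python
import torch
import torch.nn as nn

# A toy vocabulary mapping tokens to integer ids (hypothetical example)
vocab = {"deep": 0, "learning": 1, "with": 2, "pytorch": 3}
token_ids = torch.tensor([vocab[t] for t in ["deep", "learning"]])

# An embedding layer maps each token id to a dense learned vector
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)
vectors = embedding(token_ids)
print(vectors.shape)  # torch.Size([2, 8])
```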
Using Pretrained Models with Hugging Face
- Introduction to the Hugging Face ecosystem
- Exploring the `transformers` library
- Loading and using a model
- Hands-on: Text classification with a pretrained model
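The hands-on exercise follows the shape of this minimal sketch using the `transformers` pipeline API; the default sentiment-analysis checkpoint is downloaded from the Hugging Face Hub on first use, and the example sentence is made up:

```python
from transformers import pipeline

# Load a pretrained sentiment-analysis pipeline
classifier = pipeline("sentiment-analysis")

# Classify a sentence; the result includes a label and a confidence score
result = classifier("This course made deep learning click for me!")[0]
print(result["label"], round(result["score"], 3))
```

The same `pipeline` interface covers other tasks (e.g. `"summarization"`, `"text-generation"`), which is why it is a convenient entry point before working with models and tokenizers directly.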
Fine-tuning Pretrained Models
- Overview of fine-tuning
- Fine-tuning a Hugging Face model on custom data (small dataset)
- Hands-on: Fine-tuning exercise
Best Practices and Real-World Applications
- Choosing the right model for your task
- Monitoring training and evaluating performance
- Real-world examples: Chatbots, document summarization, code generation
Training material provided: Yes (Digital format)