Byte-Sized Deep Learning Series: Training Optimization of Neural Networks
- Created By shambhvi
- Posted on May 2nd, 2025
- Overview
- Prerequisites
- Audience
- Curriculum
Description:
Building a neural network is only the beginning — training it successfully is where real deep learning expertise shines.
In this 90-minute session, you’ll learn the critical hyperparameters that control how your model learns, including optimizers, loss functions, evaluation metrics, and callbacks.
We’ll explore popular Keras optimizers like SGD, Adam, and RMSprop, discuss why choosing the right loss function matters, and learn how to track performance using metrics.
You’ll also see how callbacks (like Early Stopping and ModelCheckpoint) can automate smarter training workflows.
Beyond that, we'll dive into key design decisions: How many epochs to train for? How many layers and neurons should your network have?
By the end, you'll be able to train models more effectively, spot when things are going wrong, and tune hyperparameters for better performance.
If you want to turn "good enough" models into great ones, this session is a must!
Duration: 90 mins
Course Code: BDT496
Learning Objectives:
After this course, you will be able to:
- Tune key training hyperparameters
- Choose appropriate optimizers and loss functions
- Track training progress with metrics
- Make training smarter with callbacks
Prerequisites:
Familiarity with concepts like model layers, activation functions, and compiling a model.
Audience:
Machine learning students and practitioners who know how to build basic Keras models. Ideal for those who want to go beyond "default settings" and fine-tune their training process for better results.
Course Outline:
- Training Hyperparameters
- What are hyperparameters?
- Understanding key hyperparameters for neural networks
- Epochs, number of layers, number of neurons
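The hyperparameters above can be pictured in a minimal pure-Python sketch (illustrative only, not the Keras API): epochs and batch size determine how many weight updates training performs, while layers and neurons shape the model itself.

```python
# Illustrative sketch of where the key training hyperparameters appear.
# Names and values here are made up for demonstration.
hyperparams = {
    "epochs": 20,        # full passes over the training data
    "hidden_layers": 2,  # network depth (shapes the model, not the loop)
    "neurons": 64,       # width of each hidden layer
    "batch_size": 32,    # samples consumed per weight update
}

def count_updates(hp, n_samples=1000):
    """Count gradient updates a training run would perform."""
    batches_per_epoch = n_samples // hp["batch_size"]  # 1000 // 32 = 31
    updates = 0
    for _ in range(hp["epochs"]):
        for _ in range(batches_per_epoch):
            updates += 1  # one gradient step per mini-batch
    return updates

print(count_updates(hyperparams))  # 20 epochs * 31 batches = 620 updates
```

Doubling the batch size halves the number of updates per epoch, which is one reason epochs and batch size are usually tuned together.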
- Choosing Optimizers and the Right Loss Function
- What an optimizer does: updating weights to minimize loss
- Key Keras optimizers: SGD, Adam, RMSprop
- Learning rate: the most important optimizer parameter
- Why does the loss function matter?
- Loss functions: binary cross-entropy, categorical cross-entropy, mean squared error
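The core idea behind every optimizer can be shown in a few lines of plain Python (a hand-rolled sketch, not Keras's SGD class): repeatedly nudge a weight against the gradient of the loss, scaled by the learning rate.

```python
# Minimal SGD sketch on a squared-error loss L(w) = (w - 3)^2,
# whose gradient is 2*(w - 3) and whose minimum is at w = 3.
def sgd(w, learning_rate=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)           # dL/dw for L = (w - 3)^2
        w -= learning_rate * grad    # the SGD update rule
    return w

print(round(sgd(w=0.0), 4))  # converges to 3.0
```

The learning rate controls the step size: here each step shrinks the error by a factor of 0.8, but a learning rate above 1.0 would overshoot and diverge, which is why it is usually the first optimizer parameter to tune.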
- Tracking training progress and metrics
- Metrics ≠ loss: metrics tell you how good the model is from the user's perspective
- Common metrics: accuracy, precision, recall, AUC, MSE
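To make the metrics-versus-loss distinction concrete, here is a pure-Python sketch (toy data, not Keras's metric objects) computing accuracy, precision, and recall from a set of binary predictions.

```python
# Toy labels and predictions for a binary classifier.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
precision = tp / (tp + fp)  # of predicted positives, how many were right
recall = tp / (tp + fn)     # of actual positives, how many were found

print(round(accuracy, 3), precision, recall)  # 0.667 0.75 0.75
```

Unlike the loss, none of these numbers are differentiated during training; they exist purely to tell a human how well the model is doing.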
- Making training smarter with Callbacks
- What are callbacks: functions triggered during training
- Keras callbacks: EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
- Hands-on: Use different callbacks during model training
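The logic behind EarlyStopping can be sketched in plain Python (a simplified stand-in for the Keras callback): stop training once the monitored validation loss has failed to improve for `patience` consecutive epochs.

```python
# Simplified early-stopping logic: return the epoch at which
# training would stop, given a per-epoch validation-loss history.
def early_stop_epoch(val_losses, patience=2):
    best = float("inf")  # best validation loss seen so far
    wait = 0             # epochs since the last improvement
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch  # patience exhausted: stop here
    return len(val_losses) - 1  # ran all epochs without stopping

# Loss improves through epoch 2, then stalls: stop at epoch 4.
print(early_stop_epoch([0.9, 0.7, 0.6, 0.65, 0.64, 0.7]))  # 4
```

The real Keras callback adds conveniences on top of this idea, such as restoring the best weights, but the core decision rule is the same.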
Training material provided: Yes (Digital format)
Hands-on Lab: Instructions will be provided to install Jupyter Notebook and the other required Python libraries. Students can opt to use Google Colaboratory if they prefer not to install these tools locally.