First-order and Stochastic Optimization Methods for Machine Learning

(OPTIMIZE-ML.AU1) / ISBN: 979-8-90059-015-8
Lessons

1. Regularization Techniques for Generalization

  • Linear Regression
  • Logistic Regression
  • Generalized Linear Models
  • Support Vector Machines
  • Regularization, Lasso, and Ridge Regression
  • Population Risk Minimization
  • Neural Networks
  • Exercises
2. Convergence Analysis of Optimization Algorithms

  • Convex Sets
  • Convex Functions
  • Lagrange Duality
  • Legendre–Fenchel Conjugate Duality
  • Exercises
3. Deterministic Convex Optimization

  • Subgradient Descent
  • Mirror Descent
  • Accelerated Gradient Descent
  • Game Interpretation for Accelerated Gradient Descent
  • Smoothing Scheme for Nonsmooth Problems
  • Primal–Dual Method for Saddle-Point Optimization
  • Alternating Direction Method of Multipliers
  • Mirror-Prox Method for Variational Inequalities
  • Accelerated Level Method
  • Exercises
4. Stochastic Convex Optimization

  • Stochastic Mirror Descent
  • Stochastic Accelerated Gradient Descent
  • Stochastic Convex–Concave Saddle Point Problems
  • Stochastic Accelerated Primal–Dual Method
  • Stochastic Accelerated Mirror-Prox Method
  • Stochastic Block Mirror Descent Method
  • Exercises
5. Convex Finite-Sum and Distributed Optimization

  • Random Primal–Dual Gradient Method
  • Random Gradient Extrapolation Method
  • Variance-Reduced Mirror Descent
  • Variance-Reduced Accelerated Gradient Descent
  • Exercises
6. Nonconvex Optimization

  • Unconstrained Nonconvex Stochastic Optimization
  • Nonconvex Stochastic Composite Optimization
  • Nonconvex Stochastic Block Mirror Descent
  • Nonconvex Stochastic Accelerated Gradient Descent
  • Nonconvex Variance-Reduced Mirror Descent
  • Randomized Accelerated Proximal-Point Methods
  • Exercises
7. Advanced Gradient-Based Optimization

  • Conditional Gradient Method
  • Conditional Gradient Sliding Method
  • Nonconvex Conditional Gradient Method
  • Stochastic Nonconvex Conditional Gradient
  • Stochastic Nonconvex Conditional Gradient Sliding
  • Exercises
8. Operator Sliding and Decentralized Optimization

  • Gradient Sliding for Composite Optimization
  • Accelerated Gradient Sliding
  • Communication Sliding and Decentralized Optimization
  • Exercises

Lab

1. Regularization Techniques for Generalization

  • Performing Linear Regression Using OLS
  • Performing Logistic Regression for Binary Classification
  • Performing Classification Using SVM
  • Training a Neural Network Using the Adam Optimizer
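The OLS lab reduces to a single linear-algebra call. A minimal sketch, using hypothetical noise-free data generated from y = 1 + 2x so the recovered coefficients are exact:

```python
import numpy as np

# Hypothetical noise-free data from y = 1 + 2x; real lab data would be noisy.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # [1, x] rows
y = np.array([1.0, 3.0, 5.0, 7.0])

# OLS solves min_beta ||X @ beta - y||^2; lstsq is the numerically stable
# alternative to forming (X^T X)^{-1} X^T y explicitly.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # → [1. 2.]  (intercept, slope)
```

With noisy data the same call returns the least-squares fit rather than an exact recovery.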
2. Convergence Analysis of Optimization Algorithms

  • Exploring and Visualizing Convex Sets Using Python
  • Analyzing and Visualizing Convex Functions with Python
  • Visualizing Legendre–Fenchel Conjugate Duality
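The conjugate-duality lab can be prototyped numerically: approximate f*(s) = sup_x [s·x − f(x)] on a grid and check it against a case whose conjugate is known in closed form. A sketch using f(x) = x²/2, which is its own conjugate:

```python
import numpy as np

def conjugate(f, s, xs):
    # Grid approximation of the Legendre–Fenchel conjugate
    # f*(s) = sup_x [s*x - f(x)].
    return np.max(s * xs - f(xs))

xs = np.linspace(-10.0, 10.0, 100001)   # fine grid; sup must lie inside it
f = lambda x: 0.5 * x**2                # f(x) = x^2/2 is self-conjugate
for s in (0.0, 1.0, 2.5):
    print(s, conjugate(f, s, xs))       # each value ≈ s^2/2
```

The grid must be wide enough to contain the maximizer (here x = s); for a generic f one would also verify that the supremum is attained in the interior of the grid.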
3. Deterministic Convex Optimization

  • Comparing the Convergence of Optimizers on a Loss Landscape
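The optimizer-comparison lab can be sketched on an ill-conditioned quadratic, where plain gradient descent crawls along the flat direction while Nesterov's accelerated method (here the constant-momentum, strongly convex variant) converges far faster. All problem data below are illustrative:

```python
import numpy as np

# Illustrative ill-conditioned quadratic f(x) = 1/2 x^T A x,
# condition number kappa = L/mu = 100.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
L, mu = 100.0, 1.0

def gd(x0, steps):
    # Plain gradient descent with stepsize 1/L.
    x = x0.copy()
    for _ in range(steps):
        x = x - grad(x) / L
    return x

def agd(x0, steps):
    # Nesterov's method for strongly convex f with constant momentum
    # beta = (sqrt(kappa) - 1) / (sqrt(kappa) + 1).
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(steps):
        y = x + beta * (x - x_prev)
        x_prev, x = x, y - grad(y) / L
    return x

x0 = np.array([10.0, 1.0])
print(f(gd(x0, 100)), f(agd(x0, 100)))  # accelerated is orders of magnitude lower
```

The same scaffold extends to the chapter's other methods (subgradient descent, mirror descent) by swapping the update rule.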
4. Stochastic Convex Optimization

  • Applying SMD on a Convex Function
  • Implementing the SAGD Algorithm
  • Optimizing Stochastic Convex–Concave Saddle Points
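Stochastic mirror descent is easiest to see with the entropy mirror map on the probability simplex, where the prox step becomes a multiplicative update. A sketch on a hypothetical linear cost with additive gradient noise (the costs, noise level, and stepsizes are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimize E[<c + xi, x>] over the probability simplex by stochastic mirror
# descent with the entropy mirror map (exponentiated-gradient updates).
c = np.array([0.3, 0.1, 0.5])          # hypothetical expected costs
x = np.full(3, 1 / 3)                  # start at the simplex center
avg = np.zeros(3)                      # averaged iterate, as the theory suggests
T = 5000
for t in range(1, T + 1):
    g = c + 0.1 * rng.standard_normal(3)   # unbiased noisy gradient
    x = x * np.exp(-0.1 * g / np.sqrt(t))  # mirror step, O(1/sqrt(t)) stepsize
    x /= x.sum()                           # Bregman "projection" onto the simplex
    avg += x
avg /= T
print(avg)  # mass concentrates on the cheapest coordinate (index 1)
```

With the Euclidean mirror map the same loop reduces to projected SGD; the entropy map is the standard choice when the feasible set is the simplex.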
5. Convex Finite-Sum and Distributed Optimization

  • Improving Model Performance with Regularization
  • Implementing the RPDG Method on Distributed Data
  • Simulating RGE for Multi-Worker Training
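The multi-worker labs build on the pattern below: shard the data, let each worker compute its local gradient, and average at a server. This sketch is plain synchronous distributed gradient descent, a simplified stand-in for the RPDG/RGE schemes named above; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic least-squares problem whose n samples are split across 4 workers.
n, d, workers = 400, 5, 4
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true                      # noiseless labels, so the fit is exact
shards = np.array_split(np.arange(n), workers)

def worker_grad(w, idx):
    # Local gradient of the shard's average squared loss.
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

w = np.zeros(d)
for _ in range(500):
    # Server averages the workers' gradients (equal shard sizes), then steps.
    g = np.mean([worker_grad(w, idx) for idx in shards], axis=0)
    w -= 0.2 * g
print(np.linalg.norm(w - w_true))   # near machine precision
```

RPDG and RGE improve on this baseline by randomizing which worker is queried and extrapolating gradients, which is what the lab then explores.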
6. Nonconvex Optimization

  • Solving Convex and Nonconvex Optimization Problems
  • Implementing Nonconvex Stochastic Optimization
  • Comparing Nonconvex Mirror Descent and Accelerated Gradient Descent
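In the nonconvex labs the standard success criterion is a small gradient norm (approximate stationarity) rather than a small optimality gap. A sketch with SGD on a one-dimensional double well; the noise level and stepsizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Nonconvex double well f(x) = x^4 - 3x^2 with stationary points at
# x = 0 and x = ±sqrt(3/2); SGD with a decaying stepsize should settle
# near one of the two minima.
grad = lambda x: 4 * x**3 - 6 * x
x = 2.0
for t in range(1, 2001):
    g = grad(x) + 0.5 * rng.standard_normal()   # unbiased noisy gradient
    x -= 0.02 / np.sqrt(t) * g
print(x, abs(grad(x)))   # |f'(x)| is small: an approximate stationary point
```

Which well the iterate lands in depends on the start and the noise; only stationarity, not global optimality, is guaranteed.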
7. Advanced Gradient-Based Optimization

  • Implementing the Conditional Gradient Algorithm
  • Implementing the SCG Algorithm
  • Fine-Tuning a Pretrained Model with Advanced Optimizers
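The conditional gradient (Frank–Wolfe) method replaces projection with a linear subproblem over the feasible set; over the ℓ1 ball that subproblem is solved by a single signed coordinate vector, so iterates stay sparse. A sketch on a synthetic least-squares problem with a sparse target:

```python
import numpy as np

rng = np.random.default_rng(3)

# min 1/2 ||A x - b||^2 over the l1 ball {x : ||x||_1 <= 1}.
A = rng.standard_normal((30, 10))
x_star = np.zeros(10)
x_star[2] = 0.8                     # hypothetical sparse target inside the ball
b = A @ x_star

x = np.zeros(10)
for k in range(500):
    g = A.T @ (A @ x - b)
    i = np.argmax(np.abs(g))
    s = np.zeros(10)
    s[i] = -np.sign(g[i])           # l1-ball vertex minimizing <g, s>
    x += 2 / (k + 2) * (s - x)      # standard gamma_k = 2/(k+2) stepsize
err = 0.5 * np.linalg.norm(A @ x - b) ** 2
print(err, np.abs(x).sum())         # small objective; iterate stays in the ball
```

The stochastic and sliding variants in this chapter keep the same linear oracle but change how the gradient g is estimated.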
8. Operator Sliding and Decentralized Optimization

  • Simulating Communication-Efficient Distributed Optimization
  • Applying Gradient Sliding for Composite Convex Optimization
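The decentralized lab can be simulated without any real network: agents on a ring each hold a local objective, mix their iterates with a doubly stochastic gossip matrix, and take a local gradient step. This is plain decentralized gradient descent, a baseline for the communication-sliding schemes above; the agents' quadratics are illustrative:

```python
import numpy as np

# Four agents, each with local f_i(x) = 1/2 (x - c_i)^2; the global
# minimizer is mean(c). Entry i of x is agent i's current iterate.
c = np.array([1.0, 2.0, 3.0, 6.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # ring gossip matrix, doubly stochastic
x = np.zeros(4)
for t in range(1, 3001):
    # Mix with neighbors, then take a local gradient step f_i'(x_i) = x_i - c_i.
    x = W @ x - (0.5 / np.sqrt(t)) * (x - c)
print(x)   # all entries approach the consensus value mean(c) = 3.0
```

Communication sliding improves on this by skipping gossip rounds between gradient steps, trading computation for communication.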
