Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization

Published on Feb 4, 2025 · 4,263 views

Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization problems which arise in machine learning. For strongly convex problems, its convergence rate was known to be O(log(T)/T), obtained by running SGD for T iterations and returning the average of the iterates. The talk asks whether this rate is tight: for smooth objectives SGD in fact attains the optimal O(1/T) rate, for non-smooth objectives the log(T) factor with standard averaging is unavoidable, and a simple modification of the averaging step recovers the O(1/T) rate.
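As a concrete illustration of the algorithmic comparison in the talk, the sketch below runs SGD with step size 1/(λt) on a simple non-smooth, λ-strongly convex stochastic objective and reports the suboptimality of the standard full average of the iterates against an α-suffix average (averaging only the last αT iterates), the averaging modification discussed under "Fixing SGD". The 1-D objective, λ = 1, the starting point, and α = 1/2 are illustrative choices, not taken from the talk or the paper's experiments.

```python
import numpy as np

# Illustrative non-smooth, 1-strongly-convex stochastic objective (1-D):
#   F(w) = (lam/2) * w**2 + E_z |w - z|,  with z uniform on {-1, +1},
# whose minimizer is w* = 0 and F(w*) = 1.
lam = 1.0
rng = np.random.default_rng(0)

def stochastic_subgradient(w, z):
    # Subgradient of (lam/2) w^2 + |w - z| at w, for one sample z.
    return lam * w + np.sign(w - z)

def F(w):
    # Exact objective value: E_z |w - z| equals 1 for |w| <= 1, else |w|.
    return 0.5 * lam * w**2 + (1.0 if abs(w) <= 1.0 else abs(w))

def sgd(T, alpha=0.5, w0=5.0):
    """Run SGD with step size 1/(lam*t); return the average of all
    iterates and the alpha-suffix average (last alpha*T iterates)."""
    w = w0
    iterates = np.empty(T)
    for t in range(1, T + 1):
        z = rng.choice([-1.0, 1.0])
        w = w - (1.0 / (lam * t)) * stochastic_subgradient(w, z)
        iterates[t - 1] = w
    full_avg = iterates.mean()
    suffix_avg = iterates[int((1 - alpha) * T):].mean()
    return full_avg, suffix_avg

F_star = F(0.0)
for T in [1_000, 10_000, 100_000]:
    full_avg, suffix_avg = sgd(T)
    print(f"T={T:>7}: full-average gap {F(full_avg) - F_star:.2e}, "
          f"suffix-average gap {F(suffix_avg) - F_star:.2e}")
```

On this toy problem both averages converge; the paper's point is that on carefully constructed non-smooth instances the full average really does pay the extra log(T) factor, while suffix averaging does not.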

Presentation

Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization
Stochastic Convex Optimization
Strongly Convex Stochastic Optimization - 01
Strongly Convex Stochastic Optimization - 02
Better Algorithms
This Work - 01
This Work - 02
This Work - 03
This Work - 04
This Work - 05
This Work - 06
Smooth F - 01
Smooth F - 02
Smooth F - 03
Non-Smooth F
Warm-up
Second Example
Fixing SGD - 01
Fixing SGD - 02
Experiments - 01
Experiments - 02
Conclusions and Open Problems