
Wait, what happened? Our training process literally blew up, leading to losses becoming inf. This is a clear sign that params is receiving updates that are too large, and their values start oscillating back and forth as each update overshoots and the next overcorrects even more. The optimization process is unstable: it diverges instead of converging to a minimum. We want to see smaller and smaller updates to params, not larger, as shown in figure 5.8.

Figure 5.8 Top: Diverging optimization on a convex function (parabola-like) due to large steps. Bottom: Converging optimization with small steps.

How can we limit the magnitude of learning_rate * grad? Well, that looks easy. We could simply choose a smaller learning_rate, and indeed, the learning rate is one of the things we typically change when training does not go as well as we would like. We usually change learning rates by orders of magnitude, so we might try 1e-3 or 1e-4, which would decrease the magnitude of the updates by orders of magnitude. Let's go with 1e-4 and see how it works out:
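The original listing is not reproduced here, so what follows is a minimal self-contained sketch of the experiment rather than the exact code: it assumes a linear model w * t_u + b, a mean-squared-error loss, a hand-derived gradient, and small synthetic placeholder data; only the smaller learning rate of 1e-4 comes from the text above.

import torch

# Synthetic placeholder data (not the example's real measurements):
# inputs t_u and targets t_c related roughly linearly.
t_u = torch.linspace(20.0, 90.0, 11)
t_c = 0.5 * t_u - 17.0

def model(t_u, w, b):
    return w * t_u + b

def loss_fn(t_p, t_c):
    return ((t_p - t_c) ** 2).mean()

def grad_fn(t_u, t_c, t_p):
    # Analytical gradient of the mean squared error w.r.t. w and b.
    dloss_dtp = 2 * (t_p - t_c) / t_p.size(0)
    dloss_dw = (dloss_dtp * t_u).sum()
    dloss_db = dloss_dtp.sum()
    return torch.stack([dloss_dw, dloss_db])

params = torch.tensor([1.0, 0.0])
learning_rate = 1e-4  # orders of magnitude smaller than before

for epoch in range(1, 101):
    w, b = params
    t_p = model(t_u, w, b)
    loss = loss_fn(t_p, t_c)
    grad = grad_fn(t_u, t_c, t_p)
    params = params - learning_rate * grad  # the update we are keeping small
    if epoch % 20 == 0:
        print(f"Epoch {epoch}, Loss {loss.item():.4f}")

With the smaller step, the loss should now decrease steadily instead of shooting to inf, at the cost of slower progress per epoch.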

