As we mentioned in the introduction to this section, the regularized decomposition algorithm works with a more general regularizing term of the form (α/2)‖x − x̄‖², where x̄ is the current incumbent and α > 0.
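As a concrete (hypothetical) illustration of the role of α, the sketch below minimizes a one-dimensional regularized master objective c·x + max_j(a_j·x + b_j) + (α/2)(x − x̄)² by grid search. The cut data used in the test are made up for illustration and are not the data of Exercise 2; a real implementation would solve a quadratic program instead of searching a grid.

```python
def solve_regularized_master(c, cuts, x_bar, alpha, lo=-20.0, hi=20.0, n=40001):
    """Grid-search minimization of the 1-D regularized master
        c*x + max_j(a_j*x + b_j) + (alpha/2)*(x - x_bar)**2
    over [lo, hi].  `cuts` is a list of (a_j, b_j) pairs defining
    the piecewise-linear model of the recourse function.
    Purely illustrative; a real implementation would solve a QP.
    """
    best_x, best_f = None, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        theta = max(a * x + b for a, b in cuts)   # outer approximation
        f = c * x + theta + 0.5 * alpha * (x - x_bar) ** 2
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

With a large α the minimizer stays close to the incumbent x̄; as α → 0 it approaches the solution of the unregularized master, which is the behavior exploited in parts (b)–(d) below.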
(a) Observe that the proof of convergence relies on the strict convexity of the objective function (Lemma 5); hence α > 0 is needed. It also relies on a property that is obtained simply by taking a finite α. The algorithm can thus be tuned with any positive α, and α may vary within the algorithm.
(b) Taking the same starting point and data as in Exercise 2, show that by selecting different values of α, any point in ]−20, 20] can be obtained as the solution of the regularized master at the second iteration (where 20 is the upper bound on x, and the first iteration consists only of adding cuts on θ1 and θ2).
(c) Again taking the same starting point and data as in Exercise 2, how would you choose α to reduce the number of iterations? Discuss some alternatives.
(d) Let α = 1 for Iterations 1 and 2. From Iteration 2 on, consider the following rule for changing α dynamically: at each null step, α is doubled; at each exact step, α is halved. Show why this would improve the performance of regularized decomposition in the case of Exercise 2. Consider the starting point x1 = −0.5 as in Example 1 and observe that the same path as before is followed.
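The dynamic rule in part (d) can be sketched as a one-line update; the safeguard bounds `alpha_min` and `alpha_max` are an added assumption, not part of the exercise.

```python
def update_alpha(alpha, step_is_exact, alpha_min=1e-6, alpha_max=1e6):
    """Dynamic rule from part (d): double alpha on a null step
    (tighten the pull toward the incumbent), halve it on an exact
    step (let the iterates move more freely).  The bounds are an
    added safeguard, not part of the exercise statement."""
    if step_is_exact:
        return max(alpha / 2.0, alpha_min)
    return min(alpha * 2.0, alpha_max)
```

For example, starting from α = 1, two consecutive null steps give α = 4, after which a single exact step brings it back to 2.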