It is "well-known" that the BFGS optimization algorithm is superlinearly convergent for strictly convex problems, but is there any analysis for problems that are non-strictly convex. For example, suppose that f(x) is convex for some scalar x. Then, suppose we optimize over g(x1,x2)=f(x1+x2). Will this still always be superlinearly convergent?

A: 

Correct me if I'm wrong, but won't the "solution" in this case actually be a line rather than a single point? If x' is a minimizer of f(x), then the best you can hope for when applying any method to g(x1, x2) is convergence to the line x2 = x' - x1.
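A quick numerical check of this, reusing the stand-in f(x) = (x - 1)^2 from the question (so the line would be x2 = 1 - x1): different starting points give different limit points, but each of them should satisfy x1 + x2 ≈ 1.

    import numpy as np
    from scipy.optimize import minimize

    def g(z):
        # f(x1 + x2) with the stand-in f(x) = (x - 1)**2
        return (z[0] + z[1] - 1.0) ** 2

    for x0 in ([3.0, -5.0], [10.0, 10.0], [-2.0, 0.5]):
        res = minimize(g, np.array(x0), method="BFGS")
        print("start", x0, "->", res.x, "x1 + x2 =", res.x.sum())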

celion
A: 

Whether BFGS converges at all on non-convex problems is still an open problem. In fact, in 1984 Powell gave a counter-example showing that BFGS with an inexact line search may fail to converge. What can be said are local statements, such as: given a local minimum x*, if the iterates eventually enter a region near x*, BFGS will converge superlinearly. The reason is that near x*, the objective function can be accurately modelled by a convex quadratic.
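As an illustration (not a proof) of that local statement, here is a small sketch that tracks the gradient norm of the BFGS iterates on the nonconvex Rosenbrock function, started near its minimizer x* = (1, 1); the tail of the sequence should shrink superlinearly once the local quadratic model becomes accurate:

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    grad_norms = []
    minimize(
        rosen,
        x0=np.array([1.2, 1.2]),  # start in a neighbourhood of x* = (1, 1)
        jac=rosen_der,
        method="BFGS",
        callback=lambda xk: grad_norms.append(np.linalg.norm(rosen_der(xk))),
    )
    print(grad_norms)  # expect roughly superlinear decay towards the end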

As for what is known for the composite function you gave, I am not sure. For a detailed treatment of the properties of BFGS, see either Dennis and Schnabel (Numerical Methods for Unconstrained Optimization and Nonlinear Equations) or Nocedal and Wright (Numerical Optimization).

Best of luck.

SplittingField