Matlab: poor accuracy of optimizers/solvers

I am having difficulty achieving sufficient accuracy in a root-finding problem in Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k) = L0. The basic problem is that the various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.

Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.

I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0, and I expect Lik(k) to increase monotonically from there. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to cross zero smoothly: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ..., Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.

However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...

Using fzero(@(k) Lik(k)-L0): if constrained to the interval (165.95, 173), fzero returns k = 170.96 with Lik(k)-L0 = -0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95, 200), fzero returns k = 167.19, where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
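For concreteness, the call is roughly the following (simplified; the exact option values vary across my tests, and Lik and L0 are defined earlier):

opts = optimset('Display', 'iter', 'TolX', 1e-3);                 % Display = iter, TolX = 0.001 as described above
[k, fval, exitflag] = fzero(@(k) Lik(k) - L0, [165.95, 173], opts);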

I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
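These variants look roughly as follows (the starting point of 170 for fmincon is only illustrative, not the exact value I use):

opts = optimset('TolX', 1e-6, 'TolFun', 1e-6, 'Display', 'iter');

% bounded 1-D minimization of the absolute residual
[k1, fval1] = fminbnd(@(k) abs(Lik(k) - L0), 165.95, 200, opts);

% same objective with fmincon, starting from an initial guess, with a lower bound on k
[k2, fval2] = fmincon(@(k) abs(Lik(k) - L0), 170, [], [], [], [], 165.95, [], [], opts);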

So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)

Thanks a lot!


As you seem to have a 'wild' function that nevertheless appears to be monotone in the region of interest, a fairly small range to search, and no very demanding precision requirement, I think all the criteria are met for recommending the brute-force approach.

Assuming it does not take too much time to evaluate the function at a single point, please try this:

Find an upper bound xmax and a lower bound xmin, choose a preferred stepsize, and evaluate your function at

xmin:stepsize:xmax 

If required (and monotonicity really applies), this gives you a new, tighter upper and lower bound around the sign change, and you can repeat the process for better accuracy.
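A minimal sketch of that idea, assuming your function and target are available as Lik and L0 from the question and using placeholder values for the bounds and step size:

xmin = 165.95;  xmax = 200;  stepsize = 0.5;      % placeholder bounds and step size
kgrid = xmin:stepsize:xmax;
resid = arrayfun(@(k) Lik(k) - L0, kgrid);        % evaluate the residual on the grid
idx   = find(resid > 0, 1);                       % first grid point past the sign change
bracket = [kgrid(idx-1), kgrid(idx)];             % repeat on this smaller interval with a finer step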


I also encountered this problem while using fmincon. Here is how I fixed it.

I needed to find the root of a single-variable function inside an optimization loop over several other variables. Because of this, I had to provide a large search interval for the single-variable problem, and fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop, starting with a huge upper bound (1e200) and checking the fval value returned by the solver: if the resulting fval is not small enough, I decrease the upper bound by a factor and try again. The code looks something like this:

fval = 1;
factor = 1;
while fval > 1e-7
    % shrink the upper bound on each pass until the solver returns a small enough fval
    UB = factor*1e200;
    % myfun(x,...) stands for your single-variable function with its other fixed inputs
    % (note that "function" itself is a reserved word in Matlab and cannot be used as a name)
    [x,fval,exitflag] = fminbnd(@(x) myfun(x,...), LB, UB, options);
    factor = factor * 0.001;
end

The loop exits when the solver finds a good enough solution. You can of course also play with the LB by introducing another factor, and/or change the step by which the factor shrinks. My first language isn't English, so I apologize for any mistakes.

Cheers, Cristian


Why not use a simple bisection method? Evaluate the function at the midpoint of the current interval, then keep the left or right half so that one bound always gives a negative value and the other a positive value. Since the interval is halved at each step, you can reach arbitrary precision very quickly.
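A bare-bones version, assuming the residual is available as Lik(k) - L0 from the question and that the two starting endpoints already give opposite signs:

f = @(k) Lik(k) - L0;          % residual; Lik and L0 taken from the question
a = 165.95;  b = 200;          % bracket with f(a) < 0 and f(b) > 0
while (b - a) > 1e-6           % stop when the bracket is tighter than the desired precision
    m = 0.5*(a + b);
    if f(m) < 0
        a = m;                 % root is in the right half
    else
        b = m;                 % root is in the left half
    end
end
k = 0.5*(a + b);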

I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would work so badly. It is a deterministic function, right?
