
Levenberg-Marquardt Method (the Marquardt method)

The Levenberg-Marquardt method is a popular alternative to the Gauss-Newton method for finding the minimum of a function F(x) that is a sum of squares of nonlinear functions,

    F(x) = (1/2) Σ_{i=1}^{m} [f_i(x)]².

Let the Jacobian of f_i(x) be denoted J_i(x). Then the Levenberg-Marquardt method searches in the direction given by the solution p_k of the equations

    (J_kᵀ J_k + λ_k I) p_k = −J_kᵀ f_k,

where λ_k are nonnegative scalars and I is the identity matrix. The method has the nice property that, for some scalar Δ related to λ_k, the vector p_k is the solution of the constrained subproblem of minimizing ||J_k p + f_k||²/2 subject to ||p|| ≤ Δ (Gill et al. 1981, p. 136).

The method is used by the Mathematica command FindMinimum[f, {x, x0}] when given the Method -> LevenbergMarquardt option.

SEE ALSO: Minimum, Optimization

REFERENCES:

Bates, D. M. and Watts, D. G. Nonlinear Regression Analysis and Its Applications. New York: Wiley, 1988.
Gill, P. E.; Murray, W.; and Wright, M. H. "The Levenberg-Marquardt Method." §4.7.3 in Practical Optimization. London: Academic Press, pp. 136-137, 1981.
Levenberg, K. "A Method for the Solution of Certain Problems in Least Squares." Quart. Appl. Math. 2, 164-168, 1944.
Marquardt, D. "An Algorithm for Least-Squares Estimation of Nonlinear Parameters." SIAM J. Appl. Math. 11, 431-441, 1963.
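As a concrete illustration of the search-direction equation above, the following numpy sketch solves (J_kᵀ J_k + λ_k I) p_k = −J_kᵀ f_k directly. The two-residual test function (the Rosenbrock function written in least-squares form) is invented here for illustration and does not come from the original entry:

```python
import numpy as np

# Residuals f(x) = (f1, f2) for the toy problem F(x) = (1/2)(f1^2 + f2^2),
# here f1 = 10*(x2 - x1^2), f2 = 1 - x1 (the Rosenbrock function in
# least-squares form), with its analytic Jacobian.
def residuals(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def jacobian(x):
    return np.array([[-20.0 * x[0], 10.0],
                     [-1.0, 0.0]])

def lm_direction(x, lam):
    """Solve (J^T J + lam*I) p = -J^T f for the search direction p."""
    J = jacobian(x)
    f = residuals(x)
    A = J.T @ J + lam * np.eye(len(x))
    return np.linalg.solve(A, -J.T @ f)

x = np.array([-1.0, 1.0])
p = lm_direction(x, lam=1.0)
```

For λ → 0 the direction approaches the Gauss-Newton step; for large λ it shrinks toward a short steepest-descent step, which is exactly the interpolation property discussed below.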

Levenberg-Marquardt algorithm
From Wikipedia, the free encyclopedia

In mathematics and computing, the Levenberg-Marquardt algorithm (LMA) provides a numerical solution to the problem of minimizing a function, generally nonlinear, over a space of parameters of the function. These minimization problems arise especially in least squares curve fitting and nonlinear programming.

The LMA interpolates between the Gauss-Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be a bit slower than the GNA. The LMA can also be viewed as Gauss-Newton using a trust region approach.

The LMA is a very popular curve-fitting algorithm used in many software applications for solving generic curve-fitting problems. However, the LMA finds only a local minimum, not a global minimum.

Caveat Emptor

One important limitation that is very often overlooked is that the algorithm only optimises for residual errors in the dependent variable (y). It thereby implicitly assumes that any errors in the independent variable are zero, or at least that the ratio of the two is so small as to be negligible. This is not a defect; it is intentional, but it must be taken into account when deciding whether to use this technique to do a fit. While this may be suitable in the context of a controlled experiment, there are many situations where this assumption cannot be made. In such situations either non-least-squares methods should be used, or the least-squares fit should be done in proportion to the relative errors in the two variables, not simply the vertical "y" error. Failing to recognise this can lead to a fit which is significantly incorrect and fundamentally wrong. It will usually underestimate the slope. This may or may not be obvious to the eye.

Microsoft Excel's chart offers a trend fit that has this limitation, and the limitation is undocumented. Users often fall into this trap, assuming the fit is correctly calculated for all situations. OpenOffice's spreadsheet copied this feature and presents the same problem.
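The slope-underestimation effect described above can be demonstrated numerically. The sketch below (the data values are invented for illustration) fits a straight line to points whose x-values carry measurement error, once by ordinary least squares on vertical residuals only, and once by total least squares via SVD, which treats errors in both variables symmetrically:

```python
import numpy as np

# True relation is y = x; the observed x-values carry deterministic
# "measurement error" (invented data, for illustration only).
x = np.array([0.5, 0.5, 2.0, 3.0, 3.5, 5.5])
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

# Ordinary least squares: minimises vertical (y) residuals only, so
# noise in x attenuates the estimated slope below the true value 1.
ols_slope, ols_icept = np.polyfit(x, y, 1)

# Total least squares: the right singular vector with the smallest
# singular value of the centred data is the normal of the line with
# least total (perpendicular) scatter.
X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(X)
normal = Vt[-1]
tls_slope = -normal[0] / normal[1]
```

For this data the OLS slope comes out below 1 while the TLS slope sits closer to the true value, which is the attenuation the text warns about.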

The problem

The primary application of the Levenberg-Marquardt algorithm is the least squares curve fitting problem: given a set of m empirical datum pairs of independent and dependent variables, (x_i, y_i), optimize the parameters β of the model curve f(x, β) so that the sum of the squares of the deviations

    S(β) = Σ_{i=1}^{m} [y_i − f(x_i, β)]²

becomes minimal.

The solution

Like other numeric minimization algorithms, the Levenberg-Marquardt algorithm is an iterative procedure. To start a minimization, the user has to provide an initial guess for the parameter vector, β.

In many cases, an uninformed standard guess like βᵀ = (1, 1, ..., 1) will work fine; in other cases, the algorithm converges only if the initial guess is already somewhat close to the final solution.

In each iteration step, the parameter vector β is replaced by a new estimate, β + δ. To determine δ, the functions f(x_i, β + δ) are approximated by their linearizations

    f(x_i, β + δ) ≈ f(x_i, β) + J_i δ,

where

    J_i = ∂f(x_i, β) / ∂β

is the gradient (a row vector in this case) of f with respect to β.

At the minimum of the sum of squares, S(β), the gradient of S with respect to δ will be zero. The above first-order approximation of f(x_i, β + δ) gives

    S(β + δ) ≈ Σ_{i=1}^{m} [y_i − f(x_i, β) − J_i δ]²,

or in vector notation,

    S(β + δ) ≈ ||y − f(β) − Jδ||².

Taking the derivative with respect to δ and setting the result to zero gives

    (JᵀJ) δ = Jᵀ [y − f(β)],

where J is the Jacobian matrix whose ith row equals J_i, and where f(β) and y are vectors with ith component f(x_i, β) and y_i, respectively. This is a set of linear equations which can be solved for δ.

Levenberg's contribution is to replace this equation by a "damped version",

    (JᵀJ + λI) δ = Jᵀ [y − f(β)],

where I is the identity matrix, giving the increment δ to the estimated parameter vector β.
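Levenberg's damped equation turns into a single update step in a few lines of numpy. The saturation model f(x, β) = β₁x/(β₂ + x) and the synthetic data below are invented for illustration:

```python
import numpy as np

# Synthetic data from the (hypothetical) model f(x, b) = b0*x / (b1 + x)
# with true parameters (2, 2); the sketch performs one damped step.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x / (2.0 + x)

def model(x, b):
    return b[0] * x / (b[1] + x)

def jac(x, b):
    # Rows J_i = d f(x_i, b) / d b  (one row per data point).
    return np.column_stack([x / (b[1] + x),
                            -b[0] * x / (b[1] + x) ** 2])

def damped_step(b, lam):
    """Solve (J^T J + lam*I) delta = J^T (y - f(b)) for the increment."""
    J = jac(x, b)
    r = y - model(x, b)
    A = J.T @ J + lam * np.eye(len(b))
    return np.linalg.solve(A, J.T @ r)

b = np.array([1.0, 1.0])               # initial guess
delta = damped_step(b, lam=1e-3)
b_new = b + delta
```

With the small λ used here, the step is close to a pure Gauss-Newton step and already reduces the sum of squares substantially.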

The (non-negative) damping factor λ is adjusted at each iteration. If reduction of S is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ can be increased, giving a step closer to the gradient-descent direction. Note that the gradient of S with respect to β equals −2 Jᵀ [y − f(β)]; therefore, for large values of λ, the step will be taken approximately in the direction of the gradient. If either the length of the calculated step δ or the reduction of the sum of squares from the latest parameter vector β + δ falls below predefined limits, iteration stops and the last parameter vector β is considered to be the solution.

Levenberg's algorithm has the disadvantage that if the value of the damping factor λ is large, the inverted JᵀJ + λI is not used at all. Marquardt provided the insight that we can scale each component of the gradient according to the curvature, so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in the direction of small gradient. Therefore, Marquardt replaced the identity matrix I with the diagonal matrix consisting of the diagonal elements of JᵀJ, resulting in the Levenberg-Marquardt algorithm:

    (JᵀJ + λ diag(JᵀJ)) δ = Jᵀ [y − f(β)].

A similar damping factor appears in Tikhonov regularization, which is used to solve linear ill-posed problems, as well as in ridge regression, an estimation technique in statistics.
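Putting the pieces together, a complete iteration with Marquardt's diag(JᵀJ) scaling and a simple accept/reject λ update can be sketched as follows (a minimal sketch, not a production implementation; the exponential model and data are invented):

```python
import numpy as np

# A compact Levenberg-Marquardt loop using Marquardt's diag(J^T J)
# scaling and an accept/reject lambda update.
def lm_fit(model, jac, x, y, b0, lam=1e-2, nu=2.0, iters=200):
    b = np.asarray(b0, dtype=float)
    S = np.sum((y - model(x, b)) ** 2)
    for _ in range(iters):
        J = jac(x, b)
        r = y - model(x, b)
        A = J.T @ J
        delta = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
        S_new = np.sum((y - model(x, b + delta)) ** 2)
        if S_new < S:                  # accept: move toward Gauss-Newton
            b, S, lam = b + delta, S_new, lam / nu
        else:                          # reject: move toward gradient descent
            lam *= nu
        if np.linalg.norm(delta) < 1e-12:
            break
    return b

# Exponential model y = b0 * exp(b1 * x) with exact synthetic data.
model = lambda x, b: b[0] * np.exp(b[1] * x)
jac = lambda x, b: np.column_stack([np.exp(b[1] * x),
                                    b[0] * x * np.exp(b[1] * x)])
x = np.linspace(0.0, 5.0, 20)
y = model(x, [2.0, -0.3])
b_hat = lm_fit(model, jac, x, y, [1.0, 0.0])
```

The accept/reject rule guarantees that S never increases, which is the practical source of the robustness claimed for the LMA in the introduction.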

Choice of damping parameter

Various more-or-less heuristic arguments have been put forward for the best choice of the damping parameter λ. Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties of steepest descent, in particular very slow convergence close to the optimum. The absolute values of any choice depend on how well-scaled the initial problem is.

Marquardt recommended starting with a value λ₀ and a factor ν > 1. Initially set λ = λ₀ and compute the residual sum of squares S(β) after one step from the starting point, once with damping factor λ = λ₀ and once with λ₀/ν. If both of these are worse than the initial point, the damping is increased by successive multiplication by ν until a better point is found, with a new damping factor of λ₀νᵏ for some k.

If use of the damping factor λ/ν results in a reduction in the squared residual, then this is taken as the new value of λ (and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using λ/ν resulted in a worse residual, but using λ resulted in a better residual, then λ is left unchanged and the new optimum is taken as the value obtained with λ as damping factor.
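Marquardt's recommendation above can be sketched as a small helper. The `trial` callback is an assumption of this sketch: it is taken to return the residual sum of squares after one damped step with the given λ:

```python
# Sketch of Marquardt's damping update: try lam/nu first; if that
# fails, try lam; if both fail, multiply by nu until an improvement
# is found.
def update_damping(trial, S_current, lam, nu=2.0, max_doublings=50):
    if trial(lam / nu) < S_current:
        return lam / nu                 # rapid progress: relax damping
    if trial(lam) < S_current:
        return lam                      # keep damping unchanged
    for _ in range(max_doublings):      # increase damping until a
        lam *= nu                       # better point is found
        if trial(lam) < S_current:
            return lam
    raise RuntimeError("no improvement found; lambda may be diverging")

# Toy trial function: pretend the residual S falls below the current
# value 1.0 only once lambda exceeds 10.
new_lam = update_damping(lambda lam: 0.5 if lam > 10 else 2.0, 1.0, lam=1.0)
```

Starting from λ = 1 with ν = 2, the helper doubles λ through 2, 4, 8 and returns 16, the first value that improves on the current residual.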

Example

(Figures 1-3: poor fit, better fit, best fit.)

In this example we try to fit the function y = a cos(bx) + b sin(ax) using the Levenberg-Marquardt algorithm implemented in GNU Octave as the leasqr function. The three graphs (Figures 1-3) show progressively better fitting for the parameters a = 100, b = 102 used in the initial curve. Only when the parameters in Figure 3 are chosen closest to the original do the curves fit exactly. This equation is an example of very sensitive initial conditions for the Levenberg-Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima: the function cos(bx) has minima at multiple parameter values.
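The same curve can be fitted with SciPy's LM-based curve_fit instead of Octave's leasqr (a sketch; the sampling range and the starting guess are chosen here for illustration, not taken from the original figures):

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b):
    return a * np.cos(b * x) + b * np.sin(a * x)

# Exact synthetic data from the parameters used in the figures.
xdata = np.linspace(0.0, 0.3, 200)
ydata = f(xdata, 100.0, 102.0)

# method='lm' selects the MINPACK Levenberg-Marquardt driver; it only
# converges here because the starting guess is already close (the text
# notes how sensitive this model is to initial conditions).
popt, pcov = curve_fit(f, xdata, ydata, p0=(100.5, 101.5), method="lm")
```

A starting guess far from (100, 102) typically lands in one of the many other local minima, reproducing the sensitivity shown in the figures.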

Notes

1. ^ The algorithm was first published by Kenneth Levenberg, while working at the Frankford Army Arsenal. It was rediscovered by Donald Marquardt, who worked as a statistician at DuPont, and independently by Girard, Wynne and Morrison.

See also

• Trust region

References

• Kenneth Levenberg (1944). "A Method for the Solution of Certain Non-Linear Problems in Least Squares". The Quarterly of Applied Mathematics 2: 164-168.
• A. Girard (1958). Rev. Opt. 37: 225, 397.
• C. G. Wynne (1959). "Lens Designing by Electronic Digital Computer". Proc. Phys. Soc. London 73: 777. doi:10.1088/0370-1328/73/5/310.
• Jorge J. Moré and Daniel C. Sorensen (1983). "Computing a Trust-Region Step". SIAM J. Sci. Stat. Comput. 4: 553-572.
• D. D. Morrison (1960). Jet Propulsion Laboratory Seminar proceedings.
• Donald Marquardt (1963). "An Algorithm for Least-Squares Estimation of Nonlinear Parameters". SIAM Journal on Applied Mathematics 11 (2): 431-441. doi:10.1137/0111030.
• Philip E. Gill and Walter Murray (1978). "Algorithms for the solution of the nonlinear least-squares problem". SIAM Journal on Numerical Analysis 15 (5): 977-992. doi:10.1137/0715063.
• Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization, 2nd Edition. Springer. ISBN 0-387-30303-0.

External links

Descriptions
• Detailed description of the algorithm can be found in Numerical Recipes in C, Chapter 15.5: Nonlinear models
• C. T. Kelley, Iterative Methods for Optimization, SIAM Frontiers in Applied Mathematics, no. 18, 1999, ISBN 0-89871-433-8. Online copy
• History of the algorithm in SIAM news
• A tutorial by Ananth Ranganathan
• Methods for Non-Linear Least Squares Problems by K. Madsen, H. B. Nielsen, O. Tingleff is a tutorial discussing non-linear least-squares in general and the Levenberg-Marquardt method in particular
• T. Strutz: Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). Vieweg+Teubner, ISBN 978-3-8348-1022-9.

Implementations
• Levenberg-Marquardt is a built-in algorithm with Mathematica
• Levenberg-Marquardt is a built-in algorithm with MATLAB
• The oldest implementation still in use is lmdif, from MINPACK, in Fortran, in the public domain. See also:
  o lmfit, a translation of lmdif into C/C++ with an easy-to-use wrapper for curve fitting, public domain.
  o The GNU Scientific Library has a C interface to MINPACK.
  o C/C++ Minpack includes the Levenberg-Marquardt algorithm.
  o Several high-level languages and mathematical packages have wrappers for the MINPACK routines, among them the Python library scipy.
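As a sketch of the SciPy route listed last (the model and data are invented for illustration): scipy.optimize.least_squares with method='lm' drives the MINPACK Levenberg-Marquardt routines mentioned above:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit y = b0 * exp(b1 * x) to exact synthetic data through the MINPACK
# Levenberg-Marquardt driver (method='lm').
x = np.linspace(0.0, 5.0, 20)
y = 2.0 * np.exp(-0.3 * x)

def residuals(b):
    return y - b[0] * np.exp(b[1] * x)

result = least_squares(residuals, x0=[1.0, 0.0], method="lm")
```

The caller supplies only the residual vector; the Jacobian is approximated by finite differences unless an analytic one is passed via the jac argument.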
