




Levenberg-Marquardt Method (Wolfram MathWorld)

The Levenberg-Marquardt method is a popular alternative to the Gauss-Newton method for finding the minimum of a function F(x) that is a sum of squares of nonlinear functions,

    F(x) = \frac{1}{2} \sum_{i=1}^{m} [f_i(x)]^2.

Let the Jacobian of f_i(x) be denoted J_i(x); the Levenberg-Marquardt method then searches in the direction given by the solution p_k of the equations

    (J_k^T J_k + \lambda_k I)\, p_k = -J_k^T f_k,

where λ_k are nonnegative scalars and I is the identity matrix. The method has the nice property that, for some scalar Δ related to λ_k, the vector p_k is the solution of the constrained subproblem of minimizing \|J_k p + f_k\|^2 / 2 subject to \|p\| \le Δ (Gill et al. 1981, p. 136).

The method is used by the Mathematica command FindMinimum[f, {x, x0}] when given the option Method -> LevenbergMarquardt.

SEE ALSO: Minimum, Optimization

REFERENCES:
Bates, D. M. and Watts, D. G. Nonlinear Regression and Its Applications. New York: Wiley, 1988.
Gill, P. E.; Murray, W.; and Wright, M. H. "The Levenberg-Marquardt Method." §4.7.3 in Practical Optimization. London: Academic Press, pp. 136-137, 1981.
Levenberg, K. "A Method for the Solution of Certain Problems in Least Squares." Quart. Appl. Math. 2, 164-168, 1944.
Marquardt, D. "An Algorithm for Least-Squares Estimation of Nonlinear Parameters." SIAM J. Appl. Math. 11, 431-441, 1963.


Levenberg-Marquardt algorithm
From Wikipedia, the free encyclopedia

In mathematics and computing, the Levenberg-Marquardt algorithm (LMA)[1] provides a numerical solution to the problem of minimizing a function, generally nonlinear, over a space of parameters of the function. These minimization problems arise especially in least squares curve fitting and nonlinear programming.
The LMA interpolates between the Gauss-Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be a bit slower than the GNA. The LMA can also be viewed as Gauss-Newton using a trust region approach.

The LMA is a very popular curve-fitting algorithm used in many software applications for solving generic curve-fitting problems. However, the LMA finds only a local minimum, not a global minimum.
Caveat Emptor

One important limitation that is very often overlooked is that the LMA only optimises for residual errors in the dependent variable (y). It thereby implicitly assumes that any errors in the independent variable are zero, or at least that the ratio of the two is so small as to be negligible. This is not a defect; it is intentional, but it must be taken into account when deciding whether to use this technique to do a fit. While this may be suitable in the context of a controlled experiment, there are many situations where this assumption cannot be made. In such situations either non-least-squares methods should be used, or the least-squares fit should be done in proportion to the relative errors in the two variables, not simply the vertical "y" error. Failing to recognise this can lead to a fit which is significantly incorrect and fundamentally wrong; it will usually underestimate the slope. This may or may not be obvious to the eye.

Microsoft Excel's chart offers a trend fit that has this limitation, and the limitation is undocumented. Users often fall into this trap, assuming the fit is correctly calculated for all situations. The OpenOffice spreadsheet copied this feature and presents the same problem.
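As a concrete illustration of the alternative just mentioned (fitting with errors accounted for in both variables), here is a minimal sketch using orthogonal distance regression via scipy.odr; it is not part of the original article, and the straight-line model, noise levels and uncertainties are made-up assumptions.

```python
# Hedged sketch: errors-in-variables fitting with orthogonal distance regression.
# The straight-line model and the uncertainties sx, sy are illustrative assumptions.
import numpy as np
from scipy import odr

def line(beta, x):
    # Model y = beta[0] * x + beta[1]; scipy.odr passes the parameter vector first.
    return beta[0] * x + beta[1]

rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 25)
y_true = 2.5 * x_true + 1.0
x_obs = x_true + rng.normal(scale=0.3, size=x_true.size)   # noise in x
y_obs = y_true + rng.normal(scale=0.5, size=y_true.size)   # noise in y

data = odr.RealData(x_obs, y_obs,
                    sx=np.full(x_obs.size, 0.3),            # stated x uncertainties
                    sy=np.full(y_obs.size, 0.5))            # stated y uncertainties
fit = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0]).run()
print("slope, intercept:", fit.beta)
```

Unlike a vertical-error-only least-squares fit, this weights deviations in x and y according to their stated uncertainties, which mitigates the slope underestimation described above.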
The problem

The primary application of the Levenberg-Marquardt algorithm is in the least squares curve fitting problem: given a set of m empirical datum pairs of independent and dependent variables, (x_i, y_i), optimize the parameters β of the model curve f(x, β) so that the sum of the squares of the deviations

    S(\beta) = \sum_{i=1}^{m} [y_i - f(x_i, \beta)]^2

becomes minimal.
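As a small worked illustration of this objective (not from the original article), the following sketch evaluates S(β) for a hypothetical model f(x, β) = β0·exp(β1·x); both the model and the data are assumptions.

```python
# Minimal sketch of the least-squares objective S(beta); the model f is a
# hypothetical choice used only for illustration.
import numpy as np

def f(x, beta):
    # Hypothetical model curve: f(x, beta) = beta0 * exp(beta1 * x)
    return beta[0] * np.exp(beta[1] * x)

def sum_of_squares(beta, x, y):
    # S(beta) = sum_i (y_i - f(x_i, beta))^2
    residuals = y - f(x, beta)
    return np.sum(residuals ** 2)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.3, 5.44, 8.96, 14.78])
print(sum_of_squares(np.array([1.0, 1.0]), x, y))
```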
The solution

Like other numeric minimization algorithms, the Levenberg-Marquardt algorithm is an iterative procedure. To start a minimization, the user has to provide an initial guess for the parameter vector, β. In many cases, an uninformed standard guess like β^T = (1, 1, ..., 1) will work fine; in other cases, the algorithm converges only if the initial guess is already somewhat close to the final solution.

In each iteration step, the parameter vector, β, is replaced by a new estimate, β + δ. To determine δ, the functions f(x_i, β + δ) are approximated by their linearizations

    f(x_i, \beta + \delta) \approx f(x_i, \beta) + J_i \delta,

where

    J_i = \frac{\partial f(x_i, \beta)}{\partial \beta}

is the gradient (a row vector in this case) of f with respect to β.

At a minimum of the sum of squares S(β), the gradient of S with respect to δ is zero. The above first-order approximation of f(x_i, β + δ) gives

    S(\beta + \delta) \approx \sum_{i=1}^{m} [y_i - f(x_i, \beta) - J_i \delta]^2,

or, in vector notation,

    S(\beta + \delta) \approx \| y - f(\beta) - J \delta \|^2.

Taking the derivative with respect to δ and setting the result to zero gives

    (J^T J)\, \delta = J^T [y - f(\beta)],

where J is the Jacobian matrix whose i-th row equals J_i, and where f(β) and y are the vectors with i-th components f(x_i, β) and y_i, respectively. This is a set of linear equations which can be solved for δ.

Levenberg's contribution is to replace this equation by a "damped version",

    (J^T J + \lambda I)\, \delta = J^T [y - f(\beta)],

where I is the identity matrix, giving as the increment, δ, to the estimated parameter vector, β.
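A minimal sketch of one such damped update is shown below; it is not from the original article, and it assumes a hypothetical exponential model and a finite-difference Jacobian.

```python
# Hedged sketch of one Levenberg update:
# solve (J^T J + lambda * I) delta = J^T (y - f(beta)) and return beta + delta.
# The model and the damping value are illustrative assumptions.
import numpy as np

def f(x, beta):
    return beta[0] * np.exp(beta[1] * x)        # hypothetical model curve

def jacobian(x, beta, eps=1e-7):
    # Forward-difference approximation of J, with J[i, j] = d f(x_i, beta) / d beta_j.
    J = np.empty((x.size, beta.size))
    f0 = f(x, beta)
    for j in range(beta.size):
        step = np.zeros_like(beta)
        step[j] = eps
        J[:, j] = (f(x, beta + step) - f0) / eps
    return J

def levenberg_step(x, y, beta, lam):
    J = jacobian(x, beta)
    r = y - f(x, beta)                          # residual vector y - f(beta)
    A = J.T @ J + lam * np.eye(beta.size)       # damped normal matrix
    delta = np.linalg.solve(A, J.T @ r)
    return beta + delta

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.3, 5.44, 8.96, 14.78])
print(levenberg_step(x, y, np.array([1.0, 1.0]), lam=1e-2))
```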
The (non-negative) damping factor, λ, is adjusted at each iteration. If the reduction of S is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ can be increased, giving a step closer to the gradient-descent direction. Note that the gradient of S with respect to β equals

    -2 \left( J^T [y - f(\beta)] \right)^T.

Therefore, for large values of λ, the step will be taken approximately in the direction of the gradient. If either the length of the calculated step, δ, or the reduction of the sum of squares from the latest parameter vector, β + δ, falls below predefined limits, iteration stops and the last parameter vector, β, is considered to be the solution.

Levenberg's algorithm has the disadvantage that if the value of the damping factor, λ, is large, the information in JᵀJ is effectively not used at all. Marquardt provided the insight that we can scale each component of the gradient according to the curvature, so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in the direction of small gradient. Therefore, Marquardt replaced the identity matrix, I, with the diagonal matrix consisting of the diagonal elements of JᵀJ, resulting in the Levenberg-Marquardt algorithm:

    (J^T J + \lambda \, \mathrm{diag}(J^T J))\, \delta = J^T [y - f(\beta)].

A similar damping factor appears in Tikhonov regularization, which is used to solve linear ill-posed problems, as well as in ridge regression, an estimation technique in statistics.
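Putting the pieces together, here is a hedged sketch of a complete iteration that uses Marquardt's diag(JᵀJ) scaling together with the simple rule of shrinking λ after a successful step and growing it otherwise; the model, data, λ schedule and tolerances are illustrative assumptions, not a reference implementation.

```python
# Hedged sketch of a full Levenberg-Marquardt iteration with Marquardt's
# diag(J^T J) scaling. Model, data, lambda schedule and tolerances are assumed.
import numpy as np

def f(x, beta):
    # Hypothetical model curve, as in the earlier single-step sketch.
    return beta[0] * np.exp(beta[1] * x)

def jacobian(x, beta):
    # Analytic Jacobian of the hypothetical model: columns d f/d beta0 and d f/d beta1.
    e = np.exp(beta[1] * x)
    return np.column_stack((e, beta[0] * x * e))

def levenberg_marquardt(x, y, beta0, lam=1e-2, nu=10.0, max_iter=100, tol=1e-10):
    beta = np.asarray(beta0, dtype=float)
    S = np.sum((y - f(x, beta)) ** 2)
    for _ in range(max_iter):
        J = jacobian(x, beta)
        r = y - f(x, beta)
        JTJ = J.T @ J
        A = JTJ + lam * np.diag(np.diag(JTJ))    # Marquardt: damp with diag(J^T J), not I
        delta = np.linalg.solve(A, J.T @ r)
        S_new = np.sum((y - f(x, beta + delta)) ** 2)
        if S_new < S:
            improvement = S - S_new
            beta, S = beta + delta, S_new
            lam /= nu                            # rapid reduction: step toward Gauss-Newton
            if np.linalg.norm(delta) < tol or improvement < tol:
                break                            # step length or reduction below the limits
        else:
            lam *= nu                            # insufficient reduction: step toward gradient descent
    return beta

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.3, 5.44, 8.96, 14.78])
print(levenberg_marquardt(x, y, beta0=[1.0, 1.0]))  # close to (2.0, 0.5) for this toy data
```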
Choice of damping parameter

Various more-or-less heuristic arguments have been put forward for the best choice of the damping parameter λ. Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties of steepest descent, in particular very slow convergence close to the optimum. The absolute values of any choice depend on how well-scaled the initial problem is.

Marquardt recommended starting with a value λ0 and a factor ν > 1. Initially set λ = λ0 and compute the residual sum of squares S(β) after one step from the starting point, once with the damping factor λ = λ0 and once with λ0/ν. If both of these are worse than the initial point, the damping is increased by successive multiplication by ν until a better point is found, with a new damping factor of λ0 ν^k for some k.

If use of the damping factor λ/ν results in a reduction in the squared residual, then this is taken as the new value of λ (and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using λ/ν resulted in a worse residual, but using λ resulted in a better residual, then λ is left unchanged and the new optimum is taken as the value obtained with λ as the damping factor.
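The decision logic just described can be sketched as follows (not from the original article); `trial` stands for an assumed helper that performs one damped step with a given damping factor and returns the resulting residual sum of squares.

```python
# Hedged sketch of the damping update described above. `trial(lam)` is an assumed
# helper that takes one damped step with damping factor lam and returns the new
# sum of squares. A real implementation would cap the number of increases.
def update_damping(S_current, trial, lam, nu):
    if trial(lam / nu) < S_current:
        return lam / nu          # lam / nu already improves the fit: adopt the smaller damping
    if trial(lam) < S_current:
        return lam               # only lam improves: leave the damping unchanged
    while trial(lam) >= S_current:
        lam *= nu                # both trials were worse: grow lambda until a better point appears
    return lam

# Toy demonstration with a fake trial function that only improves once lam >= 4.
print(update_damping(10.0, lambda lam: 9.0 if lam >= 4.0 else 12.0, lam=1.0, nu=2.0))  # 4.0
```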
Example

In this example we try to fit the function y = a·cos(bX) + b·sin(aX) using the Levenberg-Marquardt algorithm implemented in GNU Octave as the leasqr function. The three graphs (Figs. 1-3, not reproduced here) show progressively better fitting for the parameters a = 100, b = 102 used in the initial curve. Only when the parameters in Fig. 3 are chosen closest to the original do the curves fit exactly. This equation is an example of very sensitive initial conditions for the Levenberg-Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima: the function cos(βx) has minima at multiple parameter values β.
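For readers without Octave, a comparable experiment can be sketched with SciPy's curve_fit, which with method="lm" calls the MINPACK Levenberg-Marquardt routines via leastsq; this is not part of the original article, and the synthetic data and starting guesses are assumptions chosen to exhibit the sensitivity to initial conditions discussed above.

```python
# Hedged sketch: fit y = a*cos(b*x) + b*sin(a*x) with the Levenberg-Marquardt
# method via scipy.optimize.curve_fit. The data are synthetic; depending on the
# starting guess, the fit may land in a different local minimum or fail.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.cos(b * x) + b * np.sin(a * x)

a_true, b_true = 100.0, 102.0
x = np.linspace(0.0, 0.1, 200)
rng = np.random.default_rng(1)
y = model(x, a_true, b_true) + rng.normal(scale=0.5, size=x.size)

for guess in [(95.0, 105.0), (100.5, 102.5)]:
    try:
        popt, _ = curve_fit(model, x, y, p0=guess, method="lm", maxfev=5000)
        print(f"start {guess} -> fitted a, b = {popt}")
    except RuntimeError as err:      # curve_fit raises if the fit fails to converge
        print(f"start {guess} -> {err}")
```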
Notes

1. ^ The algorithm was first published by Kenneth Levenberg, while working at the Frankford Army Arsenal. It was rediscovered by Donald Marquardt, who worked as a statistician at DuPont, and independently by Girard, Wynne and Morrison.

See also

Trust region

References

Kenneth Levenberg (1944). "A Method for the Solution of Certain Non-Linear Problems in Least Squares". The Quarterly of Applied Mathematics 2: 164-168.
A. Girard (1958). Rev. Opt. 37: 225, 397.
C. G. Wynne (1959). "Lens Designing by Electronic Digital Computer: I". Proc. Phys. Soc. London 73: 777. doi:10.1088/0370-1328/73/5/310.
Jorge J. Moré and Daniel C. Sorensen (1983). "Computing a Trust-Region Step". SIAM J. Sci. Stat. Comput. 4: 553-572.
D. D. Morrison (1960). Jet Propulsion Laboratory Seminar proceedings.
Donald Marquardt (1963). "An Algorithm for Least-Squares Estimation of Nonlinear Parameters". SIAM Journal on Applied Mathematics 11 (2): 431-441. doi:10.1137/0111030.
Philip E. Gill and Walter Murray (1978). "Algorithms for the solution of the nonlinear least-squares problem". SIAM Journal on Numerical Analysis 15 (5): 977-992. doi:10.1137/0715063.
Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization, 2nd Edition. Springer. ISBN 0-387-30303-0.

External links

Descriptions

- A detailed description of the algorithm can be found in Numerical Recipes in C, Chapter 15.5: Nonlinear models.
- C. T. Kelley, Iterative Methods for Optimization, SIAM Frontiers in Applied Mathematics, no. 18, 1999, ISBN 0-89871-433-8. Online copy.
- History of the algorithm in SIAM News.
- A tutorial by Ananth Ranganathan.
- "Methods for Non-Linear Least Squares Problems" by K. Madsen, H. B. Nielsen and O. Tingleff, a tutorial discussing non-linear least squares in general and the Levenberg-Marquardt method in particular.
- T. Strutz: Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). Vieweg+Teubner, ISBN 978-3-8348-1022-9.

Implementations

- Levenberg-Marquardt is a built-in algorithm in Mathematica.
- Levenberg-Marquardt is a built-in algorithm in MATLAB.
- The oldest implementation still in use is lmdif, from MINPACK, in Fortran, in the public domain. See also:
  - lmfit, a translation of lmdif into C/C++ with an easy-to-use wrapper for curve fitting, public domain.
  - The GNU Scientific Library has a C interface to MINPACK.
  - C/C++ Minpack includes the Levenberg-Marquardt algorithm.
  - Several high-level languages and mathematical packages have wrappers for the MINPACK routines, among them the Python library SciPy (module scipy.optimize).