




Levenberg-Marquardt Method

Levenberg-Marquardt is a popular alternative to the Gauss-Newton method of finding the minimum of a function $F(x)$ that is a sum of squares of nonlinear functions,

$$F(x) = \frac{1}{2}\sum_{i=1}^{m}\left[f_i(x)\right]^2.$$

Let the Jacobian of $f_i(x)$ be denoted $J_i(x)$; then the Levenberg-Marquardt method searches in the direction given by the solution $p_k$ to the equations

$$\left(J_k^{\mathsf T} J_k + \lambda_k I\right) p_k = -J_k^{\mathsf T} f_k,$$

where $\lambda_k$ are nonnegative scalars and $I$ is the identity matrix. The method has the nice property that, for some scalar $\Delta$ related to $\lambda_k$, the vector $p_k$ is the solution of the constrained subproblem of minimizing $\|J_k p + f_k\|_2^2 / 2$ subject to $\|p\|_2 \le \Delta$ (Gill et al. 1981, p. 136). The method is used by the Mathematica command FindMinimum[f, {x, x0}] when given the Method -> "LevenbergMarquardt" option.

SEE ALSO: Minimum, Optimization

REFERENCES:
Bates, D. M. and Watts, D. G. Nonlinear Regression and Its Applications. New York: Wiley, 1988.
Gill, P. E.; Murray, W.; and Wright, M. H. "The Levenberg-Marquardt Method." §4.7.3 in Practical Optimization. London: Academic Press, pp. 136-137, 1981.
Levenberg, K. "A Method for the Solution of Certain Problems in Least Squares." Quart. Appl. Math. 2, 164-168, 1944.
Marquardt, D. "An Algorithm for Least-Squares Estimation of Nonlinear Parameters." SIAM J. Appl. Math. 11, 431-441, 1963.
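The search direction above is just a damped linear solve. Here is a minimal sketch, assuming NumPy; the function name, its arguments, and the example values are illustrative, not part of the entry:

```python
import numpy as np

def lm_direction(J, f, lam):
    """Solve (J^T J + lam*I) p = -J^T f for the search direction p."""
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)
    return np.linalg.solve(A, -J.T @ f)

# Toy usage: a random 10x2 Jacobian and residual vector.
rng = np.random.default_rng(0)
p = lm_direction(rng.standard_normal((10, 2)), rng.standard_normal(10), lam=0.1)
print(p)
```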
Levenberg-Marquardt algorithm
From Wikipedia, the free encyclopedia

In mathematics and computing, the Levenberg-Marquardt algorithm (LMA)[1] provides a numerical solution to the problem of minimizing a function, generally nonlinear, over a space of parameters of the function. These minimization problems arise especially in least squares curve fitting and nonlinear programming.

The LMA interpolates between the Gauss-Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be a bit slower than the GNA. The LMA can also be viewed as Gauss-Newton using a trust region approach.

The LMA is a very popular curve-fitting algorithm used in many software applications for solving generic curve-fitting problems. However, the LMA finds only a local minimum, not a global minimum.
Caveat Emptor

One important limitation that is very often overlooked is that the LMA optimizes only for residual errors in the dependent variable (y). It thereby implicitly assumes that any errors in the independent variable are zero, or at least that the ratio of the two is so small as to be negligible. This is not a defect; it is intentional, but it must be taken into account when deciding whether to use this technique for a fit. While this may be suitable in the context of a controlled experiment, there are many situations where this assumption cannot be made. In such situations, either non-least-squares methods should be used, or the least-squares fit should be done in proportion to the relative errors in the two variables, not simply the vertical "y" error.
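When errors in both variables matter, one commonly used alternative is orthogonal distance regression. Below is a hedged sketch using SciPy's odr package; the linear model, the data, and the uncertainty values are made up purely for illustration:

```python
import numpy as np
from scipy import odr

def model(beta, x):
    # scipy.odr expects the signature f(beta, x).
    return beta[0] * x + beta[1]

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.9, 4.2, 5.8])

# sx and sy carry the (assumed) uncertainties in x and y respectively.
data = odr.RealData(x, y, sx=0.1, sy=0.2)
result = odr.ODR(data, odr.Model(model), beta0=[1.0, 0.0]).run()
print(result.beta)   # fitted slope and intercept
```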
Failing to recognize this can lead to a fit which is significantly incorrect and fundamentally wrong; it will usually underestimate the slope. This may or may not be obvious to the eye. Microsoft Excel's chart offers a trend fit that has this limitation, and the limitation is undocumented. Users often fall into this trap, assuming the fit is correctly calculated for all situations. The OpenOffice spreadsheet copied this feature and presents the same problem.

The problem

The primary application of the Levenberg-Marquardt algorithm is in the least squares curve fitting problem: given a set of $m$ empirical datum pairs of independent and dependent variables, $(x_i, y_i)$, optimize the parameters $\beta$ of the model curve $f(x, \beta)$ so that the sum of the squares of the deviations

$$S(\beta) = \sum_{i=1}^{m}\left[y_i - f(x_i, \beta)\right]^2$$

becomes minimal.
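As a concrete reading of the objective, a tiny sketch assuming NumPy; the exponential model and all names here are hypothetical choices, not part of the article:

```python
import numpy as np

def f(x, beta):
    return beta[0] * np.exp(beta[1] * x)   # a hypothetical model curve

def S(beta, x, y):
    r = y - f(x, beta)                     # deviations y_i - f(x_i, beta)
    return r @ r                           # sum of squares
```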
The solution

Like other numeric minimization algorithms, the Levenberg-Marquardt algorithm is an iterative procedure. To start a minimization, the user has to provide an initial guess for the parameter vector, $\beta$. In many cases, an uninformed standard guess like $\beta^{\mathsf T} = (1, 1, \dots, 1)$ will work fine; in other cases, the algorithm converges only if the initial guess is already somewhat close to the final solution.

In each iteration step, the parameter vector, $\beta$, is replaced by a new estimate, $\beta + \delta$. To determine $\delta$, the functions $f(x_i, \beta + \delta)$ are approximated by their linearizations

$$f(x_i, \beta + \delta) \approx f(x_i, \beta) + J_i \delta,$$

where

$$J_i = \frac{\partial f(x_i, \beta)}{\partial \beta}$$

is the gradient (row-vector in this case) of $f$ with respect to $\beta$.

At its minimum, the sum of squares, $S(\beta)$, has zero gradient with respect to $\beta$. The above first-order approximation of $f(x_i, \beta + \delta)$ gives

$$S(\beta + \delta) \approx \sum_{i=1}^{m}\left[y_i - f(x_i, \beta) - J_i \delta\right]^2,$$

or in vector notation,

$$S(\beta + \delta) \approx \left\|y - f(\beta) - J\delta\right\|^2.$$

Taking the derivative with respect to $\delta$ and setting the result to zero gives

$$\left(J^{\mathsf T} J\right) \delta = J^{\mathsf T}\left[y - f(\beta)\right],$$

where $J$ is the Jacobian matrix whose $i$th row equals $J_i$, and where $f(\beta)$ and $y$ are vectors with $i$th component $f(x_i, \beta)$ and $y_i$, respectively. This is a set of linear equations which can be solved for $\delta$.
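To make the linear system concrete, here is a sketch of one undamped Gauss-Newton increment, with the Jacobian built by forward differences; the model, the data, and the step size eps are illustrative assumptions:

```python
import numpy as np

def model(x, beta):
    return beta[0] * np.exp(beta[1] * x)   # hypothetical model curve

def gauss_newton_step(x, y, beta, eps=1e-7):
    """Solve (J^T J) delta = J^T (y - f(beta)) for the increment delta."""
    f0 = model(x, beta)
    J = np.empty((x.size, beta.size))
    for j in range(beta.size):             # J[i, j] = d f(x_i, beta) / d beta_j
        b = beta.copy()
        b[j] += eps
        J[:, j] = (model(x, b) - f0) / eps
    return np.linalg.solve(J.T @ J, J.T @ (y - f0))

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x)                  # noise-free toy data
print(gauss_newton_step(x, y, np.array([1.8, 1.3])))
```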
Levenberg's contribution is to replace this equation by a "damped version",

$$\left(J^{\mathsf T} J + \lambda I\right) \delta = J^{\mathsf T}\left[y - f(\beta)\right],$$

where $I$ is the identity matrix, giving as the increment $\delta$ to the estimated parameter vector $\beta$.

The (non-negative) damping factor, $\lambda$, is adjusted at each iteration. If reduction of $S$ is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, $\lambda$ can be increased, giving a step closer to the gradient descent direction. Note that the gradient of $S$ with respect to $\beta$ equals $-2\left(J^{\mathsf T}\left[y - f(\beta)\right]\right)^{\mathsf T}$. Therefore, for large values of $\lambda$, the step will be taken approximately in the direction of the gradient. If either the length of the calculated step, $\delta$, or the reduction of the sum of squares from the latest parameter vector, $\beta + \delta$, falls below predefined limits, iteration stops and the last parameter vector, $\beta$, is considered to be the solution.
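A quick numerical check of the large-$\lambda$ limit just noted: the damped step collapses to the scaled gradient direction $(1/\lambda)\,J^{\mathsf T}[y - f(\beta)]$. A random Jacobian and residual stand in for a real problem here:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 3))
r = rng.standard_normal(20)        # plays the role of y - f(beta)
lam = 1e8

step = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
# For lam this large, the step is essentially the gradient direction / lam.
print(np.allclose(step, J.T @ r / lam, rtol=1e-4, atol=0.0))   # expect: True
```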
Levenberg's algorithm has the disadvantage that if the value of the damping factor, $\lambda$, is large, the curvature information in $J^{\mathsf T} J$ goes essentially unused. Marquardt provided the insight that we can scale each component of the gradient according to the curvature, so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in the direction of small gradient. Therefore, Marquardt replaced the identity matrix, $I$, with the diagonal matrix consisting of the diagonal elements of $J^{\mathsf T} J$, resulting in the Levenberg-Marquardt algorithm:

$$\left(J^{\mathsf T} J + \lambda\,\mathrm{diag}\!\left(J^{\mathsf T} J\right)\right) \delta = J^{\mathsf T}\left[y - f(\beta)\right].$$

A similar damping factor appears in Tikhonov regularization, which is used to solve linear ill-posed problems, as well as in ridge regression, an estimation technique in statistics.
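In code, Marquardt's modification is a one-line change to the earlier damped solve; again a sketch, with a made-up function name:

```python
import numpy as np

def lm_direction_scaled(J, r, lam):
    """Solve (J^T J + lam*diag(J^T J)) delta = J^T r for the increment."""
    A = J.T @ J
    # np.diag(np.diag(A)) keeps only the diagonal of J^T J as a matrix.
    return np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
```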
Choice of damping parameter

Various more-or-less heuristic arguments have been put forward for the best choice for the damping parameter $\lambda$. Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties of steepest descent, in particular very slow convergence close to the optimum.

The absolute values of any choice depend on how well-scaled the initial problem is. Marquardt recommended starting with a value $\lambda_0$ and a factor $\nu > 1$. Initially, one sets $\lambda = \lambda_0$ and computes the residual sum of squares $S(\beta)$ after one step from the starting point, once with damping factor $\lambda = \lambda_0$ and a second time with $\lambda_0 / \nu$. If both of these are worse than the initial point, then the damping is increased by successive multiplication by $\nu$ until a better point is found with a new damping factor of $\lambda_0 \nu^k$ for some $k$.

If use of the damping factor $\lambda / \nu$ results in a reduction in the squared residual, then this is taken as the new value of $\lambda$ (and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using $\lambda / \nu$ resulted in a worse residual, but using $\lambda$ resulted in a better residual, then $\lambda$ is left unchanged and the new optimum is taken as the value obtained with $\lambda$ as damping factor.
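Putting the pieces together, here is a compact, self-contained sketch of the whole iteration with the damping update just described. The finite-difference Jacobian and the defaults for $\lambda$, $\nu$, and the tolerances are all illustrative choices, not prescribed by the article:

```python
import numpy as np

def jacobian(f, x, beta, eps=1e-7):
    """Forward-difference Jacobian, J[i, j] = d f(x_i, beta) / d beta_j."""
    f0 = f(x, beta)
    J = np.empty((f0.size, beta.size))
    for j in range(beta.size):
        b = beta.copy()
        b[j] += eps
        J[:, j] = (f(x, b) - f0) / eps
    return J

def levenberg_marquardt(f, x, y, beta, lam=1e-2, nu=2.0, tol=1e-10, max_iter=200):
    """Marquardt's update rule: try lam/nu first, then lam, else grow lam by nu."""
    def sse(b):
        r = y - f(x, b)
        return float(r @ r)

    s = sse(beta)
    for _ in range(max_iter):
        J = jacobian(f, x, beta)
        g = J.T @ (y - f(x, beta))
        A = J.T @ J
        solve = lambda l: np.linalg.solve(A + l * np.eye(beta.size), g)

        delta = solve(lam / nu)
        if sse(beta + delta) < s:
            lam /= nu                      # smaller damping worked: keep it
        else:
            delta = solve(lam)
            while sse(beta + delta) >= s:  # inflate damping until S decreases
                lam *= nu
                delta = solve(lam)
                if np.linalg.norm(delta) < tol:
                    return beta            # step shrank to nothing: converged
        beta = beta + delta
        s_new = sse(beta)
        if s - s_new < tol:                # insufficient reduction: stop
            return beta
        s = s_new
    return beta
```

The inner while loop cannot run forever: as $\lambda$ grows geometrically the step shrinks toward zero and the tolerance check exits.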
Example

[Figures 1-3: "Poor Fit", "Better Fit", "Best Fit".]

In this example we try to fit the function $y = a\cos(bX) + b\sin(aX)$ using the Levenberg-Marquardt algorithm implemented in GNU Octave as the leasqr function. The three graphs (Figs. 1-3) show progressively better fitting for the parameters $a = 100$, $b = 102$ used in the initial curve. Only when the parameters in Fig. 3 are chosen closest to the original do the curves fit exactly. This equation is an example of very sensitive initial conditions for the Levenberg-Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima: the function $\cos(\beta x)$ has minima at parameter value $\hat\beta$ and $\hat\beta + 2n\pi / x$.
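The flavour of this experiment can be reproduced with SciPy instead of Octave's leasqr; curve_fit uses Levenberg-Marquardt by default for unconstrained problems. The noise level and the starting guesses below are assumptions, not the original figures' settings:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.cos(b * x) + b * np.sin(a * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = model(x, 100.0, 102.0) + rng.normal(scale=1.0, size=x.size)

# Converges from a guess already near (100, 102); a far-off start such as
# (90, 95) typically lands in a different local minimum instead.
popt, pcov = curve_fit(model, x, y, p0=[100.5, 102.5])
print(popt)
```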
Notes

1. The algorithm was first published by Kenneth Levenberg, while working at the Frankford Army Arsenal. It was rediscovered by Donald Marquardt, who worked as a statistician at DuPont, and independently by Girard, Wynne, and Morrison.

See also

· Trust region
References

· Kenneth Levenberg (1944). "A Method for the Solution of Certain Non-Linear Problems in Least Squares". The Quarterly of Applied Mathematics 2: 164-168.
· A. Girard (1958). Rev. Opt. 37: 225, 397.
· C. G. Wynne (1959). "Lens Designing by Electronic Digital Computer: I". Proc. Phys. Soc. London 73 (5): 777. doi:10.1088/0370-1328/73/5/310.
· Jorge J. Moré and Daniel C. Sorensen (1983). "Computing a Trust-Region Step". SIAM J. Sci. Stat. Comput. 4: 553-572.
· D. D. Morrison (1960). Jet Propulsion Laboratory Seminar proceedings.
· Donald Marquardt (1963). "An Algorithm for Least-Squares Estimation of Nonlinear Parameters". SIAM Journal on Applied Mathematics 11 (2): 431-441. doi:10.1137/0111030.
· Philip E. Gill and Walter Murray (1978). "Algorithms for the solution of the nonlinear least-squares problem". SIAM Journal on Numerical Analysis 15 (5): 977-992. doi:10.1137/0715063.
· Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization, 2nd Edition. Springer. ISBN 0-387-30303-0.

External links

Descriptions
· A detailed description of the algorithm can be found in Numerical Recipes in C, Chapter 15.5: Nonlinear models
· C. T. Kelley, Iterative Methods for Optimization, SIAM Frontiers in Applied Mathematics, no. 18, 1999, ISBN 0-89871-433-8. Online copy
· History of the algorithm in SIAM news
· A tutorial by Ananth Ranganathan
· Methods for Non-Linear Least Squares Problems by K. Madsen, H. B. Nielsen, O. Tingleff is a tutorial discussing non-linear least-squares in general and the Levenberg-Marquardt method in particular
· T. Strutz: Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). Vieweg+Teubner, ISBN 978-3-8348-1022-9.

Implementations

· Levenberg-Marquardt is a built-in algorithm in Mathematica
· Levenberg-Marquardt is a built-in algorithm in MATLAB
· The oldest implementation still in use is lmdif, from MINPACK, in Fortran, in the public domain. See also:
  o lmfit, a translation of lmdif into C/C++ with an easy-to-use wrapper for curve fitting, public domain.
  o The GNU Scientific Library has a C interface to MINPACK.
  o C/C++ Minpack includes the Levenberg-Marquardt algorithm.
  o Several high-level languages and mathematical packages have wrappers for the MINPACK routines, among them:
    § Python library scipy, module scipy.optimize.leastsq,
    § IDL, add-on MPFIT.
    § R (programming language) has the minpack.lm package.
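For reference, a sketch of calling the MINPACK-derived solver through SciPy's residual-vector interface (method='lm' wraps lmdif/lmder). The data and model repeat the assumptions of the fitting example above:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(beta, x, y):
    a, b = beta
    return y - (a * np.cos(b * x) + b * np.sin(a * x))

x = np.linspace(0.0, 1.0, 200)
y = 100.0 * np.cos(102.0 * x) + 102.0 * np.sin(100.0 * x)

fit = least_squares(residuals, x0=[100.5, 102.5], args=(x, y), method='lm')
print(fit.x)   # should recover approximately (100, 102)
```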