6. Markov Chain

State Space
The state space is the set of values a random variable X can take. Examples: the integers 1 to 6 in a dice experiment, the locations of a random walker, the coordinates of a set of molecules, or the spin configurations of the Ising model.

Markov Process
A stochastic process is a sequence of random variables X_0, X_1, ..., X_n, ... The process is characterized by the joint probability distribution P(X_0, X_1, ...). If P(X_{n+1} | X_0, X_1, ..., X_n) = P(X_{n+1} | X_n), then it is a Markov process.

Markov Chain
A Markov chain is completely characterized by an initial probability distribution P_0(X_0) and the transition matrix W(X_n → X_{n+1}) = P(X_{n+1} | X_n). Thus the probability that a particular sequence X_0 = a, X_1 = b, ..., X_n = z appears is P_0(a) W(a→b) W(b→c) ... W(y→z).

Properties of the Transition Matrix
Since W(x→y) = P(y|x) is a conditional probability, we must have W(x→y) ≥ 0. The probability of going somewhere is 1, so Σ_y W(x→y) = 1.
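These two properties say that W is a stochastic matrix: every row is a probability distribution over the possible next states. A minimal sketch of such a check (not from the original notes; the helper name is ours):

```python
import numpy as np

def is_stochastic(W, tol=1e-12):
    """Check the two transition-matrix properties:
    W(x->y) >= 0 for all x, y, and sum_y W(x->y) = 1 for every x."""
    W = np.asarray(W, dtype=float)
    nonnegative = np.all(W >= -tol)
    rows_sum_to_one = np.allclose(W.sum(axis=1), 1.0, atol=1e-9)
    return bool(nonnegative and rows_sum_to_one)

# A two-state example: stay with probability 0.7, switch with 0.3.
W = np.array([[0.7, 0.3],
              [0.3, 0.7]])
print(is_stochastic(W))  # True
```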
Evolution
Given the current distribution P_n(X), the distribution at the next step, n + 1, is obtained from
P_{n+1}(Y) = Σ_X P_n(X) W(X→Y).
In matrix form, this is P_{n+1} = P_n W.

Chapman-Kolmogorov Equation
We note that the conditional probability of the state after k steps is P(X_k = b | X_0 = a) = (W^k)_{ab}. We have
(W^{k+s})_{ac} = Σ_b (W^k)_{ab} (W^s)_{bc},
which, in matrix notation, is W^{k+s} = W^k W^s.

Probability Distribution of States at Step n
Given the probability distribution P_0 initially at n = 0, the distribution at step n is
P_n = P_0 W^n   (the n-th matrix power of W).
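As an illustration (the 3-state matrix below is made up for the example, not taken from the notes), the following sketch propagates a distribution with P_{n+1} = P_n W and checks the Chapman-Kolmogorov relation numerically:

```python
import numpy as np

# An arbitrary 3-state stochastic matrix, used only for illustration.
W = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

P0 = np.array([1.0, 0.0, 0.0])      # start in state 0 with certainty

# Evolution P_{n+1} = P_n W, iterated n times, equals P_0 W^n.
Pn = P0.copy()
for _ in range(5):
    Pn = Pn @ W
print(Pn)
print(P0 @ np.linalg.matrix_power(W, 5))      # same result

# Chapman-Kolmogorov: W^{k+s} = W^k W^s.
k, s = 2, 3
lhs = np.linalg.matrix_power(W, k + s)
rhs = np.linalg.matrix_power(W, k) @ np.linalg.matrix_power(W, s)
print(np.allclose(lhs, rhs))                  # True
```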
Example: Random Walker
A drinking walker walks in discrete steps. In each step he moves to the right with some probability p and to the left with probability 1 − p. He does not remember his previous steps.
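A minimal simulation sketch of such a walker (the step probability p is left as a parameter, since the notes do not fix its value):

```python
import numpy as np

def random_walk(n_steps, p, x0=0, seed=0):
    """1D walker: each step is +1 with probability p and -1 with
    probability 1 - p, independent of all earlier steps (Markov)."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([1, -1], size=n_steps, p=[p, 1.0 - p])
    return x0 + np.cumsum(steps)

trajectory = random_walk(1000, p=0.5)
print(trajectory[-1])       # position after 1000 steps
```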
The Questions
Under what conditions is P_n(X) independent of the time (or step) n and of the initial condition P_0, and does it approach a limit P(X)?
Given W(X→X'), compute P(X).
Given P(X), how does one construct W(X→X')?

Some Definitions: Recurrence and Transience
A state i is recurrent if we visit it an infinite number of times as n → ∞, i.e., P(X_n = i for infinitely many n) = 1. For a transient state j, we visit it only a finite number of times as n → ∞.

Irreducible
From any state I to any other state J, there is a nonzero probability of going from I to J after some n steps, i.e., (W^n)_{IJ} > 0 for some n.

Absorbing State
An absorbing state, once entered, cannot be left. A closed subset: once the chain is in the set, there is no escape from it.
Example (five-state diagram, states 1-5, figure omitted): the subset {1, 5} is closed and state 3 is closed/absorbing, so the chain is not irreducible.

Aperiodic State
A state I is called aperiodic if (W^n)_{II} > 0 for all sufficiently large n. This means that the probability for state I to return to I after n steps is nonzero for all n ≥ n_max.

Invariant or Equilibrium Distribution
If
P(y) = Σ_x P(x) W(x→y), i.e., P = P W,
we say that the probability distribution P(x) is invariant with respect to the transition matrix W(x→x').

Convergence to Equilibrium
Let W be irreducible and aperiodic, and suppose that W has an invariant distribution p. Then for any initial distribution, P(X_n = j) → p_j as n → ∞, for all j. This theorem tells us when to expect a unique limiting distribution.

Limit Distribution
One also has
lim_{n→∞} (W^n)_{ij} = p_j,
independent of the initial state i, so that P = P W with P_j = p_j.

Condition for Approaching Equilibrium
The irreducibility and aperiodicity conditions can be combined into: for all states j and k, (W^n)_{jk} > 0 for sufficiently large n. Such a chain is also referred to as ergodic.
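For a finite chain this ergodicity condition can be tested directly by checking whether some power of W has all entries positive; the sketch below (our own helper, not part of the notes) does exactly that:

```python
import numpy as np

def is_ergodic(W):
    """True if some power W^n has all entries > 0, i.e. the chain is
    irreducible and aperiodic. For an m-state chain it suffices to
    check powers up to n = (m - 1)^2 + 1 (Wielandt's bound)."""
    W = np.asarray(W, dtype=float)
    m = W.shape[0]
    Wn = np.eye(m)
    for _ in range((m - 1) ** 2 + 1):
        Wn = Wn @ W
        if np.all(Wn > 0):
            return True
    return False

# A chain that always swaps its two states is periodic, hence not ergodic.
print(is_ergodic(np.array([[0.0, 1.0], [1.0, 0.0]])))   # False
# A small self-transition probability removes the periodicity.
print(is_ergodic(np.array([[0.1, 0.9], [0.9, 0.1]])))   # True
```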
Urn Example
There are two urns. Urn A holds two balls and urn B holds three balls. In each step one draws a ball from each urn and switches them. There are two white balls and three red balls in total. What are the states, the transition matrix W, and the equilibrium distribution P?

The Transition Matrix
Label the states by the contents of urn A: in state 1 urn A holds the two white balls, in state 2 one white and one red ball, and in state 3 two red balls. The transition matrix (rows = current state, columns = next state) is
W = ( 0    1    0  )
    ( 1/6  1/2  1/3)
    ( 0    2/3  1/3).
Note that the elements of W^2 are all positive.

Eigenvalue Problem
Determining P is an eigenvalue problem:
P = P W.
The solution is P_1 = 1/10, P_2 = 6/10, P_3 = 3/10. What is the physical meaning of the above numbers?

Convergence to the Equilibrium Distribution
Let P_0 = (1, 0, 0). Then
P_1 = P_0 W = (0, 1, 0)
P_2 = P_1 W = P_0 W^2 = (1/6, 1/2, 1/3)
P_3 = P_2 W = P_0 W^3 = (1/12, 23/36, 5/18)
P_4 = P_3 W = P_0 W^4 ≈ (0.106, 0.588, 0.306)
P_5 = P_4 W = P_0 W^5 ≈ (0.098, 0.604, 0.298)
P_6 = P_5 W = P_0 W^6 ≈ (0.1007, 0.5986, 0.3007)
. . .
P_0 W^∞ = (0.1, 0.6, 0.3).
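The same numbers can be reproduced with a short sketch (NumPy; W is the urn-example matrix written out above): the equilibrium distribution is the left eigenvector of W with eigenvalue 1, and repeated multiplication by W shows the convergence.

```python
import numpy as np

# Urn-example transition matrix, states ordered as in the text.
W = np.array([[0.0, 1.0, 0.0],
              [1/6, 1/2, 1/3],
              [0.0, 2/3, 1/3]])

# Solve P = P W: left eigenvector of W for eigenvalue 1,
# i.e. right eigenvector of W^T, normalized to sum to 1.
vals, vecs = np.linalg.eig(W.T)
P = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
P = P / P.sum()
print(P)                       # [0.1 0.6 0.3]

# Convergence from the deterministic start P0 = (1, 0, 0).
Pn = np.array([1.0, 0.0, 0.0])
for n in range(1, 7):
    Pn = Pn @ W
    print(n, Pn)               # approaches (0.1, 0.6, 0.3)
```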
Time Reversal
Suppose X_0, X_1, ..., X_N is a Markov chain with an (irreducible) transition matrix W(x→x') and an equilibrium distribution P(x). What transition probability would result in the time-reversed process Y_0 = X_N, Y_1 = X_{N-1}, ..., Y_N = X_0?

Answer
The new W_R should be such that
P(x) W_R(x→x') = P(x') W(x'→x).   (*)
The original-process probability P(x_0, x_1, ..., x_N) = P(x_0) W(x_0→x_1) W(x_1→x_2) ... W(x_{N-1}→x_N) must be equal to the reversed-process probability P(x_N, x_{N-1}, ..., x_0) = P(x_N) W_R(x_N→x_{N-1}) W_R(x_{N-1}→x_{N-2}) ... W_R(x_1→x_0). Equation (*) guarantees this.

Reversible Markov Chain
A Markov chain is said to be reversible if it satisfies detailed balance:
P(X) W(X→Y) = P(Y) W(Y→X).
Nearly all the Markov chains used in the Monte Carlo method satisfy this condition by construction.
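To make equation (*) and the detailed-balance condition concrete, here is an illustrative sketch (the helper names are ours): it constructs W_R(x→x') = P(x') W(x'→x) / P(x) and checks detailed balance; for a reversible chain such as the urn example above, W_R coincides with W.

```python
import numpy as np

def reversed_chain(W, P):
    """Time-reversed transition matrix W_R(x->x') = P(x') W(x'->x) / P(x)."""
    W = np.asarray(W, dtype=float)
    P = np.asarray(P, dtype=float)
    return (P[:, None] * W).T / P[:, None]

def detailed_balance(W, P, tol=1e-12):
    """Check P(x) W(x->y) == P(y) W(y->x) for all pairs of states."""
    W = np.asarray(W, dtype=float)
    P = np.asarray(P, dtype=float)
    flow = P[:, None] * W          # flow[x, y] = P(x) W(x->y)
    return bool(np.allclose(flow, flow.T, atol=tol))

# Urn example: reversible, so the reversed chain equals the original one.
W = np.array([[0.0, 1.0, 0.0],
              [1/6, 1/2, 1/3],
              [0.0, 2/3, 1/3]])
P = np.array([0.1, 0.6, 0.3])
print(detailed_balance(W, P))                   # True
print(np.allclose(reversed_chain(W, P), W))     # True
```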
An Example of a Chain That Does Not Satisfy Detailed Balance
(Figure omitted: a three-state cycle in which each state moves one step around the cycle 1→2→3→1 with probability 2/3 and one step in the reverse direction with probability 1/3.) The equilibrium distribution is P = (1/3, 1/3, 1/3). The reverse chain has transition matrix W_R = W^T (the transpose of W), and W_R ≠ W.

Realization of Samples in Monte Carlo and Markov Chain