Machine Learning Foundations
Lecture 2: Learning to Answer Yes/No
Hsuan-Tien Lin (NTU CSIE), Department of Computer Science & Information Engineering, National Taiwan University

Course roadmap:
1 When Can Machines Learn?
2 Why Can Machines Learn?
3 How Can Machines Learn?
4 How Can Machines Learn Better?

Lecture 2 outline:
- Perceptron Hypothesis Set
- Perceptron Learning Algorithm (PLA)
- Guarantee of PLA
- Non-Separable Data

Recap of Lecture 1 (The Learning Problem): A takes D and H to get g.

Perceptron Hypothesis Set
Credit Approval Problem Revisited

- unknown target function f : X → Y (the ideal credit approval formula)
- training examples D : (x1, y1), · · · , (xN, yN) (historical records in the bank)
- learning algorithm A
- hypothesis set H (the set of candidate formulas)
- final hypothesis g ≈ f (the learned formula to be used)

Example applicant information:
  age: 23 years
  gender: female
  annual salary: NTD 1,000,000
  year in residence: 1 year
  year in job: 0.5 year
  current debt: 200,000

Question: what hypothesis set can we use?
A Simple Hypothesis Set: the Perceptron

For features x = (x1, x2, · · · , xd) of a customer, compute a weighted score, and
- approve credit if Σ_{i=1}^{d} w_i x_i > threshold
- deny credit if Σ_{i=1}^{d} w_i x_i < threshold

With Y : {+1 (good), −1 (bad)} — the boundary case 0 is ignored — the linear formulas h ∈ H are

  h(x) = sign( ( Σ_{i=1}^{d} w_i x_i ) − threshold ),

historically called the perceptron hypothesis.

Vector Form of Perceptron Hypothesis

  h(x) = sign( ( Σ_{i=1}^{d} w_i x_i ) − threshold )
       = sign( ( Σ_{i=1}^{d} w_i x_i ) + (−threshold) · (+1) )    [take w_0 = −threshold, x_0 = +1]
       = sign( Σ_{i=0}^{d} w_i x_i )
       = sign( wᵀx )

Each (tall) vector w represents a hypothesis h and is multiplied with the (tall) augmented vector x; we will use the vector form to simplify notation.

Question: what do perceptrons h look like?
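The vector form h(x) = sign(wᵀx) is a one-liner in code. A minimal sketch — the helper name and the toy numbers are illustrative, not from the lecture:

```python
import numpy as np

def perceptron_h(w, x):
    """Perceptron hypothesis h(x) = sign(w^T x).

    Assumes x already carries the constant coordinate x0 = +1,
    so w[0] plays the role of -threshold. The boundary case
    w^T x = 0 is mapped to -1 here purely as a tie-breaking
    convention (the slide leaves it ignored).
    """
    return 1 if np.dot(w, x) > 0 else -1

# hypothetical 2-feature customer: w0 = -threshold = -0.5
w = np.array([-0.5, 1.0, 1.0])
x = np.array([1.0, 0.3, 0.4])   # x0 = +1 prepended
print(perceptron_h(w, x))       # -0.5 + 0.3 + 0.4 = 0.2 > 0, so +1
```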
Perceptrons in R²

  h(x) = sign( w0 + w1 x1 + w2 x2 )

- customer features x: points on the plane (or points in R^d)
- labels y: ○ (+1), × (−1)
- hypotheses h: lines (or hyperplanes in R^d), positive on one side of a line, negative on the other side
- different lines classify customers differently

Perceptrons are linear (binary) classifiers.

Fun Time

Consider using a perceptron to detect spam messages. Assume that each message is represented by the frequency of keyword occurrence, and output +1 indicates spam. Which keywords below should have large positive weights in a good perceptron for the task?
1 coffee, tea, hamburger, steak
2 free, drug, fantastic, deal
3 machine, learning, statistics, textbook
4 national, university, coursera

Reference answer: 2. The occurrence of keywords with positive weights increases the spam score, and hence those keywords should
often appear in spam messages.

Select g from H

- H = all possible perceptrons; which g to pick?
- want: g ≈ f (hard, since f is unknown)
- almost necessary: g ≈ f on D, ideally g(xn) = f(xn) = yn
- difficult: H is of infinite size
- idea: start from some g0, and correct its mistakes on D

We will represent g0 by its weight vector w0.

Perceptron Learning Algorithm (PLA)

Start from some w0 (say, 0), and correct its mistakes on D.

For t = 0, 1, . . .
  1 find a mistake of wt, i.e. an example (x_{n(t)}, y_{n(t)}) with
      sign( wtᵀ x_{n(t)} ) ≠ y_{n(t)}
  2 (try to) correct the mistake by
      w_{t+1} ← wt + y_{n(t)} x_{n(t)}
. . . until no more mistakes; return the last w (called w_PLA) as g.

That's it! A fault confessed is half redressed. :-)
(figure: update-rule illustration — when y = +1 the update w + y·x rotates w toward x; when y = −1 it rotates w away from x)

Practical Implementation of PLA

Start from some w0 (say, 0), and correct its mistakes on D.

Cyclic PLA
For t = 0, 1, . . .
  1 find the next mistake of wt, i.e. (x_{n(t)}, y_{n(t)}) with sign( wtᵀ x_{n(t)} ) ≠ y_{n(t)}
  2 correct the mistake by w_{t+1} ← wt + y_{n(t)} x_{n(t)}
. . . until a full cycle of not encountering mistakes.

"Next" can follow the naïve cycle (1, · · · , N) or a precomputed random cycle.
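The cyclic PLA above fits comfortably in a few lines of code. A minimal sketch, assuming inputs X with x0 = +1 prepended and labels y in {+1, −1}; the toy dataset and the safety cap `max_cycles` are assumptions, not part of the lecture:

```python
import numpy as np

def cyclic_pla(X, y, max_cycles=1000):
    """Cyclic PLA: visit examples in the naive cycle 1..N; on each
    mistake apply w <- w + y_n x_n; halt after a full mistake-free cycle.

    max_cycles is a safety cap so the sketch terminates even on
    non-separable data (the algorithm itself would loop forever there).
    """
    N, d = X.shape
    w = np.zeros(d)                        # start from w0 = 0
    for _ in range(max_cycles):
        mistake = False
        for n in range(N):
            if np.sign(w @ X[n]) != y[n]:  # sign(0) = 0 also counts as a mistake
                w = w + y[n] * X[n]        # (try to) correct it
                mistake = True
        if not mistake:
            return w                       # w_PLA
    return w

# hypothetical separable toy data (x0 = +1 in the first column)
X = np.array([[1., 2., 3.], [1., 3., 1.], [1., -1., -2.], [1., -2., -1.]])
y = np.array([1, 1, -1, -1])
w = cyclic_pla(X, y)
assert all(np.sign(X @ w) == y)            # no mistakes left on D
```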
Seeing is Believing

(figure sequence: a PLA run on 2-D data, one frame per correction, ending with a separating line)

Worked like a charm with < 20 lines! (note: set x0 = 1 for visual purposes)
Some Remaining Issues of PLA

"Correct mistakes on D until no mistakes."

Algorithmic: does it halt (reach no mistake)?
- naïve cyclic: ?
- random cyclic: ?
- other variant: ?

Learning: is g ≈ f ?
- on D: if it halts, yes (no mistake)
- outside D: ?
- if it does not halt: ?

To be shown: if (. . .), after enough corrections, any PLA variant halts.
Fun Time

Let's think about why PLA may work. Let n = n(t). According to the PLA rule

  sign( wtᵀ xn ) ≠ yn,   w_{t+1} ← wt + yn xn,

which formula is true?
1 w_{t+1}ᵀ xn = yn
2 sign( w_{t+1}ᵀ xn ) = yn
3 yn w_{t+1}ᵀ xn ≥ yn wtᵀ xn
4 yn w_{t+1}ᵀ xn < yn wtᵀ xn

Reference answer: 3. Simply multiply the second part of the rule by yn xn: yn w_{t+1}ᵀ xn = yn wtᵀ xn + yn² ‖xn‖² ≥ yn wtᵀ xn. The result shows that the rule somewhat tries to correct the mistake.
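Choice 3 can also be checked numerically: the update shifts the score yn wᵀxn by exactly ‖xn‖², i.e. toward the correct side. A minimal sketch with random numbers (all names are illustrative):

```python
import numpy as np

# build a random mistaken example: label chosen opposite to the current sign
rng = np.random.default_rng(0)
w_t = rng.normal(size=5)
x_n = rng.normal(size=5)
y_n = -1.0 if w_t @ x_n > 0 else 1.0       # forces sign(w_t^T x_n) != y_n

w_next = w_t + y_n * x_n                   # the PLA update
before = y_n * (w_t @ x_n)
after = y_n * (w_next @ x_n)

# after = before + y_n^2 * ||x_n||^2 = before + ||x_n||^2  (y_n in {+1, -1})
assert np.isclose(after, before + x_n @ x_n)
assert after >= before                     # the score moved toward correctness
```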
Guarantee of PLA
Linear Separability

- if PLA halts (i.e. no more mistakes), then — a necessary condition — D allows some w to make no mistake
- call such D linear separable

(figures: one linear separable D; two D that are not linear separable)

Assuming a linear separable D, does PLA always halt?
PLA Fact: wt Gets More Aligned with wf

- linear separable D ⟺ there exists a perfect wf such that yn = sign( wfᵀ xn )
- wf is perfect, hence every xn is correctly away from the boundary:
    min_n yn wfᵀ xn > 0
- wfᵀ wt grows by updating with any (x_{n(t)}, y_{n(t)}):
    wfᵀ w_{t+1} = wfᵀ ( wt + y_{n(t)} x_{n(t)} )
                ≥ wfᵀ wt + min_n yn wfᵀ xn
                > wfᵀ wt + 0
- wt appears more aligned with wf after the update (really? — the length of wt may have grown too)

PLA Fact: wt Does Not Grow Too Fast

- wt changes only when there is a mistake, i.e. sign( wtᵀ x_{n(t)} ) ≠ y_{n(t)} ⟺ y_{n(t)} wtᵀ x_{n(t)} ≤ 0
- a mistake limits the growth of ‖wt‖², even when updating with the longest xn:
    ‖w_{t+1}‖² = ‖wt + y_{n(t)} x_{n(t)}‖²
               = ‖wt‖² + 2 y_{n(t)} wtᵀ x_{n(t)} + ‖y_{n(t)} x_{n(t)}‖²
               ≤ ‖wt‖² + 0 + ‖y_{n(t)} x_{n(t)}‖²
               ≤ ‖wt‖² + max_n ‖yn xn‖²
- starting from w0 = 0, after T mistake corrections:
    wfᵀ wT / ( ‖wf‖ ‖wT‖ ) ≥ √T · constant

Fun Time

Let's upper-bound T, the number of mistakes that PLA corrects. Define

  R² = max_n ‖xn‖²,   ρ = min_n yn ( wfᵀ / ‖wf‖ ) xn.

We want to show that T ≤ □. Express the upper bound □ by the two terms above.
1 R/ρ
2 R²/ρ²
3 R/ρ²
4 ρ²/R²
Reference answer: 2. The maximum value of wfᵀ wT / ( ‖wf‖ ‖wT‖ ) is 1. Since T mistake corrections increase the inner product by √T · constant, the maximum number of corrected mistakes is 1/constant².
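The answer can be made fully explicit by combining the two PLA facts, in the slide's notation (with ρ and R as defined in the quiz, and w0 = 0):

```latex
% Fact 1: each correction grows the inner product by at least rho*||w_f||,
%         so  w_f^T w_T >= T * rho * ||w_f||.
% Fact 2: each correction grows the squared length by at most R^2,
%         so  ||w_T||^2 <= T * R^2.
\[
1 \;\ge\; \frac{w_f^{\mathsf T} w_T}{\|w_f\|\,\|w_T\|}
  \;\ge\; \frac{T\,\rho\,\|w_f\|}{\|w_f\|\,\sqrt{T}\,R}
  \;=\; \sqrt{T}\,\frac{\rho}{R}
\quad\Longrightarrow\quad
T \;\le\; \frac{R^2}{\rho^2}.
\]
```

Here the "constant" of the previous slide is ρ/R, matching 1/constant² = R²/ρ².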
More about PLA

Guarantee: as long as D is linear separable and we correct by mistakes,
- the inner product of wf and wt grows fast, while the length of wt grows slowly
- PLA lines become more and more aligned with wf ⇒ PLA halts

Pros: simple to implement, fast, works in any dimension d.

Cons:
- "assumes" a linear separable D in order to halt — a property unknown in advance (and no need for PLA if we already knew wf)
- not fully sure how long halting takes (ρ depends on wf), though it is practically fast

What if D is not linear separable?

Learning with Noisy Data

- unknown target function f : X → Y plus noise (the ideal credit approval formula)
- training examples D : (x1, y1), · · · , (xN, yN) (historical records in the bank)
- learning algorithm A
- hypothesis set H (the set of candidate formulas)
- final hypothesis g ≈ f (the learned formula to be used)

Question: how can we at least get g ≈ f on a noisy D?

Line with Noise Tolerance

Assume little noise: yn = f(xn) usually. If so, g ≈ f on D means yn = g(xn) usually. How about directly finding

  wg ← argmin_w Σ_{n=1}^{N} [ yn ≠ sign( wᵀ xn ) ] ?

Unfortunately, this problem is NP-hard to solve. Can we modify PLA to get an approximately good g?
Pocket Algorithm

Modify the PLA algorithm by keeping the best weights seen so far in a "pocket".

initialize pocket weights ŵ
For t = 0, 1, · · ·
  1 find a (random) mistake of wt, called (x_{n(t)}, y_{n(t)})
  2 (try to) correct the mistake by w_{t+1} ← wt + y_{n(t)} x_{n(t)}
  3 if w_{t+1} makes fewer mistakes than ŵ, replace ŵ by w_{t+1}
. . . until enough iterations; return ŵ (called w_POCKET) as g.

A simple modification of PLA to find the (somewhat) best weights.
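The pocket algorithm can be sketched as below. This is a minimal illustration, not the course's reference code: the toy dataset (with one deliberately flipped label) is made up, and the sketch takes the first mistake instead of the slide's random one, purely to keep the run deterministic:

```python
import numpy as np

def num_mistakes(w, X, y):
    """0/1 in-sample error count: sum over n of [y_n != sign(w^T x_n)]."""
    return int(np.sum(np.sign(X @ w) != y))

def pocket(X, y, iters=200):
    """Pocket algorithm sketch: PLA updates, keeping the best-so-far
    weights in the 'pocket'. Deviation from the slide: picks the first
    mistake rather than a random one, for determinism."""
    N, d = X.shape
    w = np.zeros(d)
    w_pocket, best = w.copy(), num_mistakes(w, X, y)
    for _ in range(iters):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size == 0:
            return w                       # separable after all: no mistakes left
        n = wrong[0]                       # find a mistake of w_t
        w = w + y[n] * X[n]                # (try to) correct it
        m = num_mistakes(w, X, y)
        if m < best:                       # fewer mistakes: update the pocket
            w_pocket, best = w.copy(), m
    return w_pocket                        # w_POCKET

# hypothetical noisy toy data (x0 = +1 in the first column);
# the 5th label is flipped noise, so D is not linear separable
X = np.array([[1., 2., 2.], [1., 3., 1.], [1., -2., -1.],
              [1., -1., -3.], [1., 2.5, 1.5], [1., -2., -2.]])
y = np.array([1, 1, -1, -1, -1, -1])
w = pocket(X, y)
assert num_mistakes(w, X, y) <= 1          # at most the one noisy point is wrong
```

Note the design cost versus plain PLA: every accepted update requires counting mistakes over all of D, which is why pocket is slower per iteration.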
Fun Time

Should we use pocket or PLA? Since we do not know whether D is linear separable in advance, we may decide to just go with pocket instead of PLA. If D is actually linear separable, what's the difference between the two?
1 pocket on D is slower than PLA
2 pocket on D is faster than PLA
3 pocke