




Foundations of Machine Learning
Model Evaluation

Q: How do we estimate the performance of these models using different machine learning algorithms?
Q: How do we estimate the performance of these models with different parameters?

[Figure: polynomial curve fitting. Blue: observed data; green: true distribution; red: predicted curve]

Q: How do we estimate the performance of a machine learning model?
Answer:
① We want to estimate the generalization performance, i.e. the predictive performance of our model on future (unseen) data.
② We want to increase the predictive performance by tweaking the learning algorithm and selecting the best-performing model from a given hypothesis space.
③ We want to compare different algorithms, selecting the best-performing one as well as the best-performing model from that algorithm's hypothesis space.

Basic Concepts
i.i.d. (independent and identically distributed): all samples have been drawn from the same probability distribution and are statistically independent of each other.
Accuracy: the number of correct predictions a divided by the number of samples m, i.e. a/m.
Error rate: the number of wrong predictions b divided by the number of samples m, i.e. b/m.

Error (誤差): generally speaking, the difference between the output predicted by the model and the true sample value.
Training error (訓(xùn)練誤差): also called empirical error (經(jīng)驗(yàn)誤差), the error we get when applying the model to the same data it was trained on.
Test error (測(cè)試誤差): the error that we incur on new data.
Generalization error (泛化誤差): also called out-of-sample error, a measure of how accurately an algorithm can predict outcome values for unseen data.
Practically, the test error is used to estimate the generalization error; theoretically, generalization error bounds are employed.

Overfitting (過(guò)擬合): low error on training data and high error on test data. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations.
Underfitting (欠擬合): high error on training data. Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data, e.g. when fitting a linear model to non-linear data.

Evaluation Methods
- Holdout method (留出法)
- K-fold cross-validation (K折交叉驗(yàn)證法)
- Bootstrapping (自助法)

Holdout Method (留出法)

The holdout method is inarguably the simplest model evaluation technique: split the dataset into two disjoint parts, a training set and a test set.
Keep in mind:
- There are many ways to split the dataset, and different splits yield different performance estimates.
- The change in the underlying sample statistics along the feature axes remains a problem, and it becomes more pronounced when we work with small datasets; stratified sampling (分層采樣) mitigates it.
- Repeat the holdout method k times with different random seeds and compute the average performance over these k repetitions (see the sketch below).
- The size of the training set affects performance; take about 2/3 to 4/5 of the dataset as training data.
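A minimal sketch of the repeated, stratified holdout procedure just described, assuming scikit-learn is available; the synthetic dataset and the logistic-regression learner are illustrative choices, not part of the original slides:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic dataset; any (X, y) classification data works here.
X, y = make_classification(n_samples=500, random_state=0)

scores = []
for seed in range(10):  # repeat the holdout split k = 10 times
    # stratify=y performs stratified sampling: class proportions are
    # preserved in both the training set and the test set.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))  # accuracy on the held-out test set

# Average performance over the k repetitions, as recommended above.
print(f"mean accuracy: {np.mean(scores):.3f} (std {np.std(scores):.3f})")
```

Here test_size=0.25 keeps 3/4 of the data for training, within the 2/3 to 4/5 range suggested above.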
K-fold Cross-validation (K折交叉驗(yàn)證法)

K-fold cross-validation is probably the most common, but more computationally intensive, approach: split the dataset into k disjoint parts, called folds. Typical choices for k are 5, 10 or 20.
K-fold cross-validation is a special case of cross-validation where we iterate over a dataset k times. In each round, one fold is used for validation, and the remaining k-1 folds are merged into a training subset for fitting the model.

[Figure: 5-fold cross-validation]

Keep in mind: the larger the number of folds used in k-fold CV, the better the error estimates will be, but the longer your program will take to run.
Solution: use at least 10 folds (or more) when you can.
Leave-One-Out (留一法): LOO is the special case where k equals the number of data points. LOOCV can be useful for small datasets. Both variants are sketched below.
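A sketch of k-fold cross-validation and its leave-one-out special case, again assuming scikit-learn; the dataset and estimator are the same kind of illustrative placeholders as before:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000)

# 10-fold CV: each round trains on 9 folds and validates on the remaining one.
kfold_scores = cross_val_score(
    model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(f"10-fold CV accuracy: {kfold_scores.mean():.3f}")

# Leave-One-Out: k equals the number of samples, so each validation set
# contains exactly one point; feasible here because n is small.
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {loo_scores.mean():.3f}")
```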
Bootstrapping (自助法)

Bootstrapping is a bootstrap sampling technique for estimating a sampling distribution. The idea of the bootstrap method is to generate new data from a population by repeated sampling from the original dataset with replacement. In each iteration, approximately 0.632×n samples are selected as the bootstrap training set, and the remaining ≈0.368×n out-of-bag samples are reserved for testing (see the sketch below).
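A sketch of bootstrap evaluation under the assumptions above: draw n indices with replacement for training, then test on the out-of-bag points that were never drawn. The 0.632/0.368 split is not hard-coded anywhere; it emerges because the chance that a given point is never drawn is (1 - 1/n)^n ≈ 1/e ≈ 0.368.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, random_state=0)
n = len(X)

scores = []
for _ in range(100):  # 100 bootstrap iterations
    boot = rng.integers(0, n, size=n)         # sample n indices with replacement
    oob = np.setdiff1d(np.arange(n), boot)    # out-of-bag points, about 0.368 * n
    model = LogisticRegression(max_iter=1000).fit(X[boot], y[boot])
    scores.append(model.score(X[oob], y[oob]))  # evaluate on unseen points only

print(f"mean out-of-bag accuracy: {np.mean(scores):.3f}")
```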
Evaluation Metrics

Metrics for Binary Classification

Measuring model performance with accuracy, the fraction of correctly classified samples:
- Accuracy is really only suitable when there is an equal number of observations in each class (which is rarely the case) and when all predictions and prediction errors are equally important, which is often not the case.
- It is not always a useful metric and can be misleading. Example: email spam classification, where 99% of emails are real and 1% are spam. A model that predicts that every email is real achieves accuracy = 99%, but it is horrible at actually classifying spam and fails at its original purpose.

Confusion matrix: one of the most comprehensive ways to represent the result of evaluating a binary classifier, tabulating true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN).

Error rate & accuracy: the error rate can be understood as the sum of all false predictions divided by the total number of predictions, and the accuracy is calculated as the sum of correct predictions divided by the total number of predictions, respectively:
ERR = (FP + FN) / (TP + FP + TN + FN),  ACC = (TP + TN) / (TP + FP + TN + FN).

Metrics from the Confusion Matrix

Precision (查準(zhǔn)率): measures how many of the samples predicted as positive are actually positive, P = TP / (TP + FP). Precision is used as a performance metric when the goal is to limit the number of false positives. Example: predicting whether a new drug will be effective in treating a disease in clinical trials.

Recall (查全率, 召回率): measures how many of the positive samples are captured by the positive predictions, R = TP / (TP + FN). Recall is used as a performance metric when we need to identify all positive samples. Example: finding people that are sick.

Tradeoff between precision and recall: increase the decision threshold to get higher precision; reduce the threshold to get higher recall. A sketch of these metrics on the spam example follows.
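The sketch below makes the spam example and the confusion-matrix metrics concrete, with illustrative synthetic labels: a degenerate model that predicts "real" for every email gets 99% accuracy yet zero precision and recall on the spam class.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)

# 990 real emails (class 0) and 10 spam emails (class 1).
y_true = np.array([0] * 990 + [1] * 10)
y_pred = np.zeros_like(y_true)  # degenerate model: everything is "real"

print(accuracy_score(y_true, y_pred))                    # 0.99, looks great
print(confusion_matrix(y_true, y_pred))                  # [[990   0]
                                                         #  [ 10   0]] = [[TN FP], [FN TP]]
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0, no positive predictions
print(recall_score(y_true, y_pred, zero_division=0))     # 0.0, no spam is caught
```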
F1: F-score or F-measure

The F-score is the harmonic mean (調(diào)和平均數(shù)) of precision and recall, F1 = 2PR / (P + R):

Algorithm | P    | R   | Average | F1
A1        | 0.5  | 0.4 | 0.45    | 0.444
A2        | 0.7  | 0.1 | 0.4     | 0.175
A3        | 0.02 | 1   | 0.51    | 0.0392
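The table values can be reproduced from the harmonic-mean formula; a quick check in plain Python:

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

for name, p, r in [("A1", 0.5, 0.4), ("A2", 0.7, 0.1), ("A3", 0.02, 1.0)]:
    print(f"{name}: average = {(p + r) / 2:.2f}, F1 = {f1(p, r):.4f}")
# A1: average = 0.45, F1 = 0.4444
# A2: average = 0.40, F1 = 0.1750
# A3: average = 0.51, F1 = 0.0392
```

Note how the arithmetic average rewards A3's degenerate predict-everything-positive behavior, while the harmonic mean punishes its near-zero precision.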
General F-measure: Fβ = (1 + β²)·P·R / (β²·P + R)
- When β = 1, Fβ becomes F1.
- When β > 1, Fβ places more emphasis on false negatives, weighing recall higher than precision.
- When β < 1, Fβ attenuates the influence of false negatives, weighing recall lower than precision.

Receiver Operating Characteristic (ROC, 受試者工作特征): considers all possible thresholds for a given classifier and plots the false positive rate (FPR) against the true positive rate (TPR).
Area Under the ROC Curve (AUC): summarizes the whole ROC curve as a single number (see the sketch below).
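A sketch of computing the ROC curve and its AUC, assuming scikit-learn and the same illustrative setup as earlier; roc_curve sweeps over all thresholds of the classifier's scores:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ROC needs continuous scores, not hard labels: every threshold over
# these probabilities yields one (FPR, TPR) point on the curve.
probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, probs)
print(f"AUC = {roc_auc_score(y_te, probs):.3f}")
```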
Model Selection

With evaluation methods and performance metrics in hand, it may seem that we can already evaluate and compare learners: measure some performance metric of each learner with an evaluation method, then compare the results. However: first, what we want to compare is generalization performance, whereas the evaluation method gives us performance on a test set, and the two comparisons may not agree. Second, test-set performance depends strongly on the choice of the test set itself: test sets of different sizes give different results, and even test sets of the same size give different results when they contain different samples. Third, many machine learning algorithms are inherently stochastic, so even with the same parameter settings on the same test set, repeated runs can produce different results.

Statistical hypothesis testing (假設(shè)檢驗(yàn)) therefore provides an important basis for comparing learners. Based on the result of a hypothesis test we can test hypotheses about the generalization performance of a single learner, and compare the performance of multiple learners: if learner A is observed to beat learner B on a test set, is A's generalization performance statistically better than B's, and how much confidence do we have in that conclusion?

A hypothesis testing problem

- Consider a model evaluated with the holdout method. Suppose the evaluation was performed 5 times and the accuracies are [0.99, 0.98, 0.99, 0.94, 0.95]. Can we say that the mean accuracy is different from 0.97?
- Consider the grades of two models: A had {15, 10, 12, 19, 5, 7} and B had {14, 11, 11, 12, 6, 7}. Can we say A had better grades than B?
A statistical test aims to answer such questions; one possible treatment is sketched below.
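A sketch of how such questions can be answered with scipy; the slides only pose the questions, so the specific choices below (a one-sample t-test for the first, an independent two-sample t-test for the second) are illustrative assumptions:

```python
from scipy import stats

# Q1: is the mean holdout accuracy different from 0.97?
accs = [0.99, 0.98, 0.99, 0.94, 0.95]
t, p = stats.ttest_1samp(accs, popmean=0.97)
print(f"one-sample t-test: t = {t:.3f}, p = {p:.3f}")

# Q2: did model A get better grades than B? (one-sided alternative)
a = [15, 10, 12, 19, 5, 7]
b = [14, 11, 11, 12, 6, 7]
t, p = stats.ttest_ind(a, b, alternative="greater")
print(f"two-sample t-test: t = {t:.3f}, p = {p:.3f}")
```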
Confidence Interval (置信區(qū)間)

Point estimation vs. interval estimation:
- Point estimation (點(diǎn)估計(jì)): estimate a population parameter with a sample statistic. Because the sample statistic is a single value on the number line, the result of the estimate is also expressed as a single point value, hence the name. A point estimate gives a value for the unknown parameter but says nothing about the reliability of that value, i.e. how far the estimate may deviate from the parameter's true value.
- Interval estimation (區(qū)間估計(jì)): given a confidence level, determine from the estimate a range in which the true value is likely to fall. This range is usually centered on the estimate and is called the confidence interval.

Standard deviation (標(biāo)準(zhǔn)差) vs. standard error (標(biāo)準(zhǔn)誤差), and the 95% confidence interval: suppose X follows a normal distribution, X ~ N(μ, σ²). Draw repeated samples of size n; the sample mean is M = (X? + X? + ... + X?) / n. By the law of large numbers and the central limit theorem, M follows M ~ N(μ, σ²/n), so the standard error of the mean is σ/√n and the 95% confidence interval is approximately M ± 1.96·σ/√n (see the sketch below).
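A sketch of the resulting 95% confidence interval for a mean, using the normal approximation M ± 1.96·SE; the data is reused from the holdout-accuracy question above (for n as small as 5, a t-distribution quantile would be the more careful choice):

```python
import numpy as np

accs = np.array([0.99, 0.98, 0.99, 0.94, 0.95])
m = accs.mean()
se = accs.std(ddof=1) / np.sqrt(len(accs))  # standard error of the mean

# Normal-approximation 95% confidence interval: mean +/- 1.96 * SE.
print(f"95% CI: [{m - 1.96 * se:.4f}, {m + 1.96 * se:.4f}]")
```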
Hypothesis Testing and Statistical Significance

The process of hypothesis testing:
- Null hypothesis: the null hypothesis is a model of the system based on the assumption that the apparent effect was actually due to chance.
- p-value: the p-value is the probability of the apparent effect under the null hypothesis.
- Interpretation: based on the p-value, we judge whether the apparent effect is statistically significant, rejecting the null hypothesis when the p-value falls below a chosen significance level.
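One way to make the definition of the p-value operational is a permutation test, which is not in the original slides but simulates the null hypothesis directly: shuffle the group labels (so any apparent effect is due to chance alone) and count how often chance produces an effect at least as large as the observed one. The data is reused from the grades example:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([15, 10, 12, 19, 5, 7])
b = np.array([14, 11, 11, 12, 6, 7])
observed = a.mean() - b.mean()  # the apparent effect

pooled = np.concatenate([a, b])
n_iter = 10_000
count = 0
for _ in range(n_iter):
    rng.shuffle(pooled)  # under the null, group labels are arbitrary
    if pooled[:6].mean() - pooled[6:].mean() >= observed:
        count += 1

# p-value: probability of an effect at least this large under the null.
print(f"permutation p-value: {count / n_iter:.3f}")
```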