Chapter 9. Unsupervised Learning and Semi-supervised Learning: Clustering
Jiafeng Guo
Unsupervised Learning: Clustering

Outline
- Introduction
- Applications of Clustering
- Distance Functions
- Evaluation Metrics
- Clustering Algorithms
  - K-means
  - Gaussian Mixture Models and the EM Algorithm
  - K-medoids
  - Hierarchical Clustering
  - Density-based Clustering

Supervised vs. Unsupervised Learning

Why do Unsupervised Learning?
- Raw data is cheap; labeled data is expensive.
- Saves memory/computation.
- Reduces noise in high-dimensional data.
- Useful in exploratory data analysis.
- Often a pre-processing step for supervised learning.

Cluster Analysis
- Discover groups such that samples within a group are more similar to each other than samples across groups.

Unobserved Variables
- A variable can be unobserved (latent):
  - It is an imaginary quantity meant to provide a simplified and abstract view of the data-generation process. E.g., speech recognition models, mixture models.
  - It is a real-world object and/or phenomenon that is difficult or impossible to measure. E.g., the temperature of a star, causes of a disease, evolutionary ancestors.
  - It is a real-world object and/or phenomenon that was not measured (because of faulty sensors) or was measured through a noisy channel. E.g., traffic radio, an aircraft signal on a radar screen.
- Discrete latent variables can be used to partition/cluster data into sub-groups.
- Continuous latent variables can be used for dimensionality reduction.

Applications of Clustering
- Image segmentation (/pff/segment)
- Human population (Eran Elhaik et al., Nature)
- Clustering graphs (Newman, 2008)
- Vector quantization to compress images (Bishop, PRML)

Ingredients of Cluster Analysis
- A dissimilarity/distance function between samples.
- A loss function to evaluate clusters.
- An algorithm that optimizes this loss function.

Dissimilarity/Distance Function
- The choice of dissimilarity/distance function is application dependent.
- Need to consider the type of features: categorical, ordinal, or quantitative.
- It is possible to learn the dissimilarity from data.

Distance Function
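As an illustration, a few standard dissimilarity functions for quantitative features (a minimal sketch; the particular distances and function names are illustrative choices, not necessarily the ones on the original slide):

```python
import numpy as np

def euclidean(x, y):
    # L2 distance: sqrt of the sum of squared coordinate differences.
    return np.sqrt(np.sum((x - y) ** 2))

def manhattan(x, y):
    # L1 distance: sum of absolute coordinate differences.
    return np.sum(np.abs(x - y))

def cosine_dissimilarity(x, y):
    # 1 - cosine similarity; useful when only direction matters.
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
```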

Standardization
[Figure: clustering without standardization vs. with standardization]

Standardization is not always helpful
[Figure: clustering without standardization vs. with standardization]
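The standardization referred to above is typically the z-score transform; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def standardize(X):
    # Z-score each feature column: zero mean, unit variance.
    # X: (n_samples, n_features) array.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0   # leave constant features unchanged
    return (X - mu) / sigma
```

As the slides illustrate, standardization helps when features have incomparable scales, but it can also wash out exactly the differences that separate the clusters.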

Evaluation of Clustering
- Performance evaluation of clustering uses a validity index.
- Evaluation metrics:
  - Reference model (external index): compare the clustering with a reference.
  - Non-reference model (internal index): measure intra-cluster and inter-cluster distances.

Reference Model
For m samples there are m(m-1)/2 pairs. Each pair is counted according to whether its two samples fall in the same group in the clustering and in the reference:

                        reference: same    reference: not same
  clustering: same             a                   b
  clustering: not same         c                   d

External Index
External indices compare the clustering with the reference model through the pair counts a, b, c, d above; common choices include the Jaccard coefficient, the Fowlkes-Mallows index, and the Rand index.
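A minimal sketch of these pair-counting indices (all three formulas are the standard ones; the function names are illustrative):

```python
import numpy as np
from itertools import combinations

def pair_counts(pred, ref):
    # a: same cluster in both; b: same in pred only;
    # c: same in ref only;     d: different in both.
    a = b = c = d = 0
    for i, j in combinations(range(len(pred)), 2):
        same_pred = pred[i] == pred[j]
        same_ref = ref[i] == ref[j]
        if same_pred and same_ref:
            a += 1
        elif same_pred:
            b += 1
        elif same_ref:
            c += 1
        else:
            d += 1
    return a, b, c, d

def external_indices(pred, ref):
    a, b, c, d = pair_counts(pred, ref)
    n_pairs = a + b + c + d              # = m(m-1)/2
    jaccard = a / (a + b + c)
    fowlkes_mallows = np.sqrt((a / (a + b)) * (a / (a + c)))
    rand = (a + d) / n_pairs
    return jaccard, fowlkes_mallows, rand
```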

Non-reference Model
- Having only the result of clustering, how can we evaluate it?
- Intra-cluster similarity: larger is better.
- Inter-cluster similarity: smaller is better.

Internal Index
Internal indices are built from these intra- and inter-cluster quantities; typical examples are the Davies-Bouldin index (smaller is better) and the Dunn index (larger is better).
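As one hedged example, a Davies-Bouldin-style score (the helper name and the use of centroid distances are illustrative; the original slide's exact formula may differ):

```python
import numpy as np

def davies_bouldin(X, labels):
    # Smaller is better: compact clusters (small s[i]) that lie far
    # apart (large centroid distances) yield a low score.
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    # s[i]: mean distance of cluster i's points to its centroid.
    s = np.array([np.mean(np.linalg.norm(X[labels == k] - cents[i], axis=1))
                  for i, k in enumerate(ks)])
    db = 0.0
    for i in range(len(ks)):
        ratios = [(s[i] + s[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)
```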


K-means: Idea
Represent each of the K clusters by a prototype, and assign every sample to exactly one cluster so as to minimize the total within-cluster squared distance.

K-means: Minimizing the Loss Function
How do we minimize J = sum_i sum_k r_ik ||x_i - u_k||^2 with respect to (r_ik, u_k), where r_ik is the hard assignment of sample i to cluster k and u_k is the prototype of cluster k?
- Chicken-and-egg problem:
  - If the prototypes are known, we can assign responsibilities.
  - If the responsibilities are known, we can compute the prototypes.
- We use an iterative procedure (sketched below).
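A minimal sketch of this iterative coordinate descent (assignment step, then prototype update; the function name and defaults are illustrative):

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize prototypes with K randomly chosen data points.
    mu = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iters):
        # Assignment step: give each point to its nearest prototype.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        r = d2.argmin(axis=1)
        # Update step: each prototype becomes the mean of its points.
        new_mu = np.array([X[r == k].mean(axis=0) if np.any(r == k) else mu[k]
                           for k in range(K)])
        if np.allclose(new_mu, mu):   # no prototype moved: converged
            break
        mu = new_mu
    return r, mu
```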

K-means Algorithm

How do we initialize K-means? Some heuristics:
- Randomly pick K data points as prototypes.
- Pick prototype i+1 to be the farthest from prototypes {1, 2, ..., i}.

Evolution of k-means
[Figure: (a) original data set; (b) random initialization; (c-f) two iterations of k-means. Images from Michael Jordan]
[Figure: loss function J after each iteration]

Convergence of K-means
- k-means is exactly coordinate descent on the reconstruction error E.
- E monotonically decreases and converges, and so do the clustering results.
- It is possible for k-means to oscillate between a few different clusterings, but this almost never happens in practice.
- E is non-convex, so coordinate descent on E is not guaranteed to converge to the global minimum. A common remedy is to run k-means many times and pick the best result.

How to choose K?
- Like choosing K in kNN: the loss function J generally decreases with K.
- Gap statistic.
- Cross-validation: partition the data into two sets; estimate prototypes on one set and use them to compute the loss function on the other.
- Stability of clusters: measure the change in the clusters obtained by resampling or splitting the data.
- Non-parametric approach: place a prior on K (more details in the Bayesian non-parametrics lecture).

Limitations of K-means
- Hard assignments: a small perturbation of a data point can flip it to another cluster. Solution: GMM.
- Assumes spherical clusters and equal probabilities for each cluster. Solution: GMM.
- Clusters change arbitrarily for different K. Solution: hierarchical clustering.
- Sensitive to outliers. Solution: use a robust loss function.
- Works poorly on non-convex clusters. Solution: spectral clustering.

Multivariate Normal Distribution
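For reference, the standard d-dimensional multivariate normal density, the building block of the mixture model below:

```latex
\mathcal{N}(x \mid \mu, \Sigma)
  = \frac{1}{(2\pi)^{d/2} \, |\Sigma|^{1/2}}
    \exp\!\left( -\tfrac{1}{2} (x - \mu)^\top \Sigma^{-1} (x - \mu) \right)
```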

Gaussian Mixture Model
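A Gaussian mixture models the density as a convex combination of K Gaussian components (standard form):

```latex
p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
\qquad \pi_k \ge 0, \qquad \sum_{k=1}^{K} \pi_k = 1
```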

The Learning is Hard
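The difficulty is visible in the log-likelihood: the sum over components sits inside the logarithm, so setting the derivatives to zero gives no closed-form maximum-likelihood solution:

```latex
\ln p(X \mid \pi, \mu, \Sigma)
  = \sum_{n=1}^{N} \ln \left( \sum_{k=1}^{K} \pi_k \,
      \mathcal{N}(x_n \mid \mu_k, \Sigma_k) \right)
```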

How to Solve it?

The Expectation-Maximization (EM) Algorithm
This section gives:
- a very general treatment of the EM algorithm;
- in the process, a proof that the EM algorithm derived heuristically before for Gaussian mixtures does indeed maximize the likelihood function; and
- the basis for the derivation of the variational inference framework.

The EM Algorithm in General


EM: Variational Viewpoint
- Maximizing over q(Z) would give the true posterior.
[Figure: the E step and M step as alternating updates of the lower bound]
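In the variational view (as in Bishop, PRML, ch. 9), the log-likelihood decomposes into a lower bound plus a KL divergence; the E step maximizes the bound over q(Z), attained at the true posterior, and the M step maximizes it over the parameters theta:

```latex
\ln p(X \mid \theta) = \mathcal{L}(q, \theta)
  + \mathrm{KL}\!\left( q(Z) \,\|\, p(Z \mid X, \theta) \right),
\qquad
\mathcal{L}(q, \theta) = \sum_{Z} q(Z) \ln \frac{p(X, Z \mid \theta)}{q(Z)}
```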

The EM Algorithm

[Figure: EM on a Gaussian mixture: initial configuration, E-step, M-step]
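A minimal sketch of EM for a Gaussian mixture (the E step computes responsibilities, the M step re-estimates pi, mu, Sigma; the function name and the small ridge that keeps covariances invertible are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                       # mixing weights
    mu = X[rng.choice(n, size=K, replace=False)]   # component means
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    for _ in range(n_iters):
        # E step: gamma[i, k] = responsibility of component k for x_i.
        gamma = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], sigma[k])
                          for k in range(K)], axis=1)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M step: weighted re-estimation of the parameters.
        Nk = gamma.sum(axis=0)
        pi = Nk / n
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            sigma[k] = ((gamma[:, k, None] * diff).T @ diff) / Nk[k] \
                       + 1e-6 * np.eye(d)
    return pi, mu, sigma, gamma
```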

Convergence

GMM: Relation to K-means
[Figure: illustration]

K-means vs GMM
K-means:
- Loss function: minimize the sum of squared distances.
- Hard assignment of points to clusters.
- Assumes spherical clusters with equal probability for each cluster.
GMM:
- Minimizes the negative log-likelihood.
- Soft assignment of points to clusters.
- Can be used for non-spherical clusters with different probabilities.

K-medoids
- The squared Euclidean distance loss function of K-means is not robust.
- Only the dissimilarity matrix may be given.
- Attributes may not be quantitative.
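A hedged sketch of K-medoids in the alternating style of K-means (this is the simple Voronoi-iteration variant rather than the full PAM swap search; it needs only a dissimilarity matrix D, so attributes need not be quantitative):

```python
import numpy as np

def kmedoids(D, K, n_iters=100, seed=0):
    # D: (n, n) dissimilarity matrix; medoids are actual data points.
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=K, replace=False)
    for _ in range(n_iters):
        # Assign every point to its nearest medoid.
        labels = D[:, medoids].argmin(axis=1)
        # Within each cluster, pick the point with the smallest total
        # dissimilarity to the other members as the new medoid.
        new_medoids = medoids.copy()
        for k in range(K):
            members = np.where(labels == k)[0]
            if len(members) > 0:
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[k] = members[costs.argmin()]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break
        medoids = new_medoids
    return labels, medoids
```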

Hierarchical Clustering
- Organize the clusters in a hierarchical way.
- Produces a rooted binary tree (dendrogram).
- Bottom-up (agglomerative): recursively merge the two groups with the smallest between-cluster dissimilarity.
- Top-down (divisive): recursively split a least-coherent (e.g., largest-diameter) cluster.
- Users can then choose a cut through the hierarchy to represent the most natural division into clusters, e.g., where the between-group dissimilarity exceeds some threshold.
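One way to run the bottom-up variant and cut the resulting dendrogram, using SciPy (the linkage method and the cut threshold are illustrative choices):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# X: (n_samples, n_features) toy data.
X = np.random.default_rng(0).normal(size=(100, 2))

# Agglomerative clustering with average linkage; Z encodes the
# rooted binary tree (dendrogram) as a sequence of merges.
Z = linkage(X, method="average")

# Cut the hierarchy where the merge distance exceeds a threshold.
labels = fcluster(Z, t=1.5, criterion="distance")
```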

Density-based Clustering

DBSCAN [1]
- Two points p and q are density-connected if there is a point o such that both p and q are reachable from o.
- A cluster satisfies two properties:
  - All points within the cluster are mutually density-connected.
  - If a point is density-reachable from any point of the cluster, it is part of the cluster as well.

[1] Ester et al. A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD). 1996.
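A compact sketch following these definitions (eps-neighborhoods with a min_pts core condition; label -1 marks noise; the names and the brute-force distance matrix are illustrative):

```python
import numpy as np

def dbscan(X, eps, min_pts):
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    neighbors = [np.where(D[i] <= eps)[0] for i in range(n)]
    labels = np.full(n, -1)          # -1 = noise / not yet claimed
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                 # already assigned, or not a core point
        # Grow a new cluster from core point i by density-reachability.
        labels[i] = cluster
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:   # j is core: expand on
                    frontier.extend(neighbors[j])
        cluster += 1
    return labels
```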

Analysis of DBSCAN
Advantages:
- No need to specify the number of clusters.
- Finds clusters of arbitrary shape.
- Robust to outliers.
Disadvantages:
- Difficult parameter selection.
- Not suitable for data sets with large differences in density.

Mean-Shift Clustering [2]

[2] Fukunaga, Keinosuke; Larry D. Hostetler. The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition. IEEE Transactions on Information Theory 21(1): 32–40. Jan. 1975.
Cheng, Yizong. Mean Shift, Mode Seeking, and Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(8): 790–799. 1995.
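A minimal mean-shift sketch with a flat kernel: each point iteratively moves to the mean of the data within a bandwidth around it, climbing to a mode of the density estimate, and nearby modes are then merged into clusters (all names and the merge rule are illustrative):

```python
import numpy as np

def mean_shift(X, bandwidth, n_iters=50, tol=1e-4):
    modes = X.copy()
    for _ in range(n_iters):
        shifted = np.empty_like(modes)
        for i, x in enumerate(modes):
            # Flat kernel: average all data points within the bandwidth.
            mask = np.linalg.norm(X - x, axis=1) <= bandwidth
            shifted[i] = X[mask].mean(axis=0)
        if np.max(np.linalg.norm(shifted - modes, axis=1)) < tol:
            break                     # all points have stopped moving
        modes = shifted
    # Points whose modes are closer than the bandwidth share a cluster.
    labels = np.full(len(X), -1)
    centers = []
    for i, m in enumerate(modes):
        for c, center in enumerate(centers):
            if np.linalg.norm(m - center) < bandwidth:
                labels[i] = c
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)
```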
