Chapter 8
kNN: k-Nearest Neighbors
Michael Steinbach and Pang-Ning Tan

Contents
8.1 Introduction
8.2 Description of the Algorithm
    8.2.1 High-Level Description
    8.2.2 Issues
    8.2.3 Software Implementations
8.3 Examples
8.4 Advanced Topics
8.5 Exercises
Acknowledgments
References

8.1 Introduction

One of the simplest and rather trivial classifiers is the Rote classifier, which memorizes the entire training data and performs classification only if the attributes of the test object exactly match the attributes of one of the training objects. An obvious problem with this approach is that many test records will not be classified because they do not exactly match any of the training records. Another issue arises when two or more training records have the same attributes but different class labels. A more sophisticated approach, k-nearest neighbor (kNN) classification [10, 11, 21], finds a group of k objects in the training set that are closest to the test object, and bases the assignment of a label on the predominance of a particular class in this neighborhood. This addresses the issue that, in many data sets, it is unlikely that one object will exactly match another, as well as the fact that conflicting information about the class of an object may be provided by the objects closest to it.

There are several key elements of this approach: (i) the set of labeled objects to be used for evaluating a test object's class (this need not be the entire training set), (ii) a distance or similarity metric that can be used to compute the closeness of objects, (iii) the value of k, the number of nearest neighbors, and (iv) the method used to determine the class of the target object based on the classes and distances of the k nearest neighbors. In its simplest form, kNN can involve assigning an object the class of its nearest neighbor or of the majority of its nearest neighbors, but a variety of enhancements are possible and are discussed below.

More generally, kNN is a special case of instance-based learning [1]. This includes case-based reasoning [3], which deals with symbolic data. The kNN approach is also an example of a lazy learning technique, that is, a technique that waits until the query arrives to generalize beyond the training data.

Although kNN classification is a classification technique that is easy to understand and implement, it performs well in many situations. In particular, a well-known result by Cover and Hart [6] shows that the classification error (the percentage of instances a classifier labels incorrectly) of the nearest neighbor rule is bounded above by twice the optimal Bayes error under certain reasonable assumptions. (The Bayes error is the classification error of a Bayes classifier, that is, a classifier that knows the underlying probability distribution of the data with respect to class and assigns each data point to the class with the highest probability density for that point; for more detail, see [9].) Furthermore, the error of the general kNN method asymptotically approaches that of the Bayes error and can be used to approximate it.

Also, because of its simplicity, kNN is easy to modify for more complicated classification problems. For instance, kNN is particularly well-suited for multimodal classes (classes whose objects are concentrated in several distinct areas of the data space, not just one; in statistical terms, the probability density function for the class does not have a single "bump" like a Gaussian, but rather a number of peaks), as well as for applications in which an object can have many class labels. To illustrate the last point, for the assignment of functions to genes based on microarray expression profiles, some researchers found that kNN outperformed a support vector machine (SVM) approach, which is a much more sophisticated classification scheme [17].

The remainder of this chapter describes the basic kNN algorithm, including various issues that affect both classification and computational performance. Pointers are given to implementations of kNN, and examples of using the Weka machine learning package to perform nearest neighbor classification are also provided. Advanced techniques are discussed briefly and the chapter concludes with a few exercises.
8.2 Description of the Algorithm

8.2.1 High-Level Description

Algorithm 8.1 provides a high-level summary of the nearest-neighbor classification method. Given a training set D and a test object z, which is a vector of attribute values and has an unknown class label, the algorithm computes the distance (or similarity) between z and all the training objects to determine its nearest-neighbor list. It then assigns a class to z by taking the class of the majority of neighboring objects. Ties are broken in an unspecified manner, for example, randomly or by taking the most frequent class in the training set.

Algorithm 8.1 Basic kNN Algorithm
Input: D, the set of training objects; the test object z, which is a vector of attribute values; and L, the set of classes used to label the objects
Output: c_z ∈ L, the class of z
foreach object y ∈ D do
    Compute d(z, y), the distance between z and y;
end
Select N ⊆ D, the set (neighborhood) of the k closest training objects for z;
c_z = argmax_{v ∈ L} Σ_{y ∈ N} I(v = class(c_y));
where I(·) is an indicator function that returns the value 1 if its argument is true and 0 otherwise.

The storage complexity of the algorithm is O(n), where n is the number of training objects. The time complexity is also O(n), since the distance needs to be computed between the target and each training object. However, there is no time taken for the construction of the classification model, for example, a decision tree or separating hyperplane. Thus, kNN is different from most other classification techniques which have moderately to quite expensive model-building stages, but very inexpensive O(constant) classification steps.
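To make the preceding description concrete, the following is a minimal Python sketch of Algorithm 8.1 using Euclidean distance and an unweighted majority vote; the function and variable names (knn_classify, train, z) are illustrative, not taken from any particular library.

    import math
    from collections import Counter

    def euclidean(x, y):
        # Equation 8.2: square root of the sum of squared attribute differences.
        return math.sqrt(sum((xk - yk) ** 2 for xk, yk in zip(x, y)))

    def knn_classify(train, z, k):
        """Classify test object z from a list of (attributes, label) pairs.

        Follows Algorithm 8.1: compute the distance from z to every training
        object, keep the k closest, and return the majority class among them.
        """
        distances = [(euclidean(x, z), label) for x, label in train]
        neighbors = sorted(distances, key=lambda d: d[0])[:k]
        votes = Counter(label for _, label in neighbors)
        # Counter.most_common breaks ties arbitrarily, matching the
        # "unspecified" tie-breaking described in the text.
        return votes.most_common(1)[0][0]

    # Toy usage with made-up two-dimensional points.
    train = [((1.0, 1.0), '+'), ((1.2, 0.8), '+'),
             ((3.0, 3.2), '-'), ((3.1, 2.9), '-')]
    print(knn_classify(train, (1.1, 1.0), k=3))   # prints '+'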
8.2.2 Issues

There are several key issues that affect the performance of kNN. One is the choice of k. This is illustrated in Figure 8.1, which shows an unlabeled test object, x, and training objects that belong to either a "+" or "-" class. If k is too small, then the result can be sensitive to noise points. On the other hand, if k is too large, then the neighborhood may include too many points from other classes. An estimate of the best value for k can be obtained by cross-validation. However, it is important to point out that k = 1 may be able to outperform other values of k, particularly for small data sets, including those typically used in research or for class exercises. However, given enough samples, larger values of k are more resistant to noise.

Figure 8.1 k-nearest neighbor classification with small, medium, and large k: (a) neighborhood too small, (b) neighborhood just right, (c) neighborhood too large.
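As a rough illustration of choosing k by cross-validation, the sketch below uses leave-one-out estimation over a few candidate values of k. It reuses the knn_classify function from the earlier sketch, and the candidate list is an arbitrary choice for illustration.

    def loocv_error(data, k):
        # Leave-one-out cross-validation: classify each object using all the
        # other objects as the training set and count the mistakes.
        errors = 0
        for i, (x, label) in enumerate(data):
            train = data[:i] + data[i + 1:]
            if knn_classify(train, x, k) != label:
                errors += 1
        return errors / len(data)

    def select_k(data, candidates=(1, 3, 5, 7)):
        # Return the candidate k with the lowest estimated error.
        return min(candidates, key=lambda k: loocv_error(data, k))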
Another issue is the approach to combining the class labels. The simplest method is to take a majority vote, but this can be a problem if the nearest neighbors vary widely in their distance and the closer neighbors more reliably indicate the class of the object. A more sophisticated approach, which is usually much less sensitive to the choice of k, weights each object's vote by its distance. Various choices are possible; for example, the weight factor is often taken to be the reciprocal of the squared distance: w_i = 1/d(y, z)^2. This amounts to replacing the last step of Algorithm 8.1 with the following:

Distance-Weighted Voting:   c_z = argmax_{v ∈ L} Σ_{y ∈ N} w_i × I(v = class(c_y))   (8.1)

The choice of the distance measure is another important consideration. Commonly, Euclidean or Manhattan distance measures are used [21]. For two points, x and y, with n attributes, these distances are given by the following formulas:

d(x, y) = √( Σ_{k=1}^{n} (x_k − y_k)^2 )   Euclidean distance   (8.2)

d(x, y) = Σ_{k=1}^{n} |x_k − y_k|          Manhattan distance   (8.3)

where x_k and y_k are the kth attributes (components) of x and y, respectively.
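The following sketch shows how Equations 8.1 through 8.3 fit together: the Manhattan distance, plus a distance-weighted variant of the voting step that replaces the plain majority vote of Algorithm 8.1. It reuses the euclidean function from the earlier sketch; the helper names and the small epsilon guard against division by zero are illustrative choices, not part of the original formulation.

    from collections import defaultdict

    def manhattan(x, y):
        # Equation 8.3: sum of absolute attribute differences.
        return sum(abs(xk - yk) for xk, yk in zip(x, y))

    def weighted_knn_classify(train, z, k, dist=euclidean, eps=1e-12):
        # Distance-weighted voting (Equation 8.1): each of the k nearest
        # neighbors votes with weight w_i = 1 / d(y, z)^2.
        nearest = sorted(((dist(x, z), label) for x, label in train),
                         key=lambda d: d[0])[:k]
        votes = defaultdict(float)
        for d, label in nearest:
            votes[label] += 1.0 / (d * d + eps)
        return max(votes, key=votes.get)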
Although these and various other measures can be used to compute the distance between two points, conceptually, the most desirable distance measure is one in which a smaller distance between two objects implies a greater likelihood of the two objects having the same class. Thus, for example, if kNN is being applied to classify documents, it may be better to use the cosine measure rather than Euclidean distance. kNN can also be used for data with categorical or mixed categorical and numerical attributes as long as a suitable distance measure can be defined [21].

Some distance measures can also be affected by the high dimensionality of the data. In particular, it is well known that the Euclidean distance measure becomes less discriminating as the number of attributes increases. Also, attributes may have to be scaled to prevent distance measures from being dominated by one of the attributes. For example, consider a data set where the height of a person varies from 1.5 to 1.8 m, the weight of a person varies from 90 to 300 lb, and the income of a person varies from $10,000 to $1,000,000. If a distance measure is used without scaling, the income attribute will dominate the computation of distance, and thus the assignment of class labels.
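A common remedy is to rescale every attribute to a comparable range before computing distances; the min-max scaling below is one standard choice. This is a minimal sketch: the function name is ours, and the example rows simply echo the height/weight/income illustration above.

    def min_max_scale(data):
        # Rescale each attribute (column) to [0, 1] so that no single attribute,
        # such as income, dominates the distance computation.
        n_attrs = len(data[0])
        lo = [min(row[j] for row in data) for j in range(n_attrs)]
        hi = [max(row[j] for row in data) for j in range(n_attrs)]
        return [[(row[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
                 for j in range(n_attrs)]
                for row in data]

    # Height (m), weight (lb), income ($): widely different ranges.
    people = [[1.5, 90, 10000], [1.8, 300, 1000000], [1.7, 150, 50000]]
    print(min_max_scale(people))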
8.2.3 Software Implementations

Algorithm 8.1 is easy to implement in almost any programming language. However, this section contains a short guide to some readily available implementations of this algorithm and its variants for those who would rather use an existing implementation. One of the most readily available kNN implementations can be found in Weka [26]. The main function of interest is IBk, which is basically Algorithm 8.1. However, IBk also allows you to specify a couple of choices of distance weighting and the option to determine a value of k by using cross-validation.

Another popular nearest neighbor implementation is PEBLS (Parallel Exemplar-Based Learning System) [5, 19] from the CMU Artificial Intelligence repository [20]. According to the site, "PEBLS (Parallel Exemplar-Based Learning System) is a nearest-neighbor learning system designed for applications where the instances have symbolic feature values."

8.3 Examples

In this section we provide a couple of examples of the use of kNN. For these examples, we will use the Weka package described in the previous section.
Specifically, we used Weka 3.5.6.

To begin, we applied kNN to the Iris data set that is available from the UCI Machine Learning Repository [2] and is also available as a sample data file with Weka. This data set consists of 150 flowers split equally among three Iris species: Setosa, Versicolor, and Virginica. Each flower is characterized by four measurements: petal length, petal width, sepal length, and sepal width.

The Iris data set was classified using the IB1 algorithm, which corresponds to the IBk algorithm with k = 1. In other words, the algorithm looks at the closest neighbor, as computed using the Euclidean distance from Equation 8.2. The results are quite good, as the reader can see by examining the confusion matrix (a confusion matrix tabulates how the actual classes of various data instances, shown in the rows, compare to their predicted classes, shown in the columns) given in Table 8.1. However, further investigation shows that this is a quite easy data set to classify since the different species are relatively well separated in the data space. To illustrate, we show a plot of the data with respect to petal length and petal width in Figure 8.2. There is some mixing between the Versicolor and Virginica species with respect to their petal lengths and widths, but otherwise the species are well separated. Since the other two variables, sepal width and sepal length, add little if any discriminating information, the performance seen with the basic kNN approach is about the best that can be achieved with a kNN approach or, indeed, any other approach.

TABLE 8.1 Confusion Matrix for Weka kNN Classifier IB1 on the Iris Data Set
Actual/Predicted   Setosa   Versicolor   Virginica
Setosa             50       0            0
Versicolor         0        47           3
Virginica          0        4            46

Figure 8.2 Plot of Iris data using petal length and width.
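Confusion matrices such as Table 8.1 (and Tables 8.2 through 8.4 below) can be tallied directly from any classifier's predictions. The short sketch below does this for illustrative lists of actual and predicted labels; the label values shown are placeholders, not Weka output.

    from collections import Counter

    def confusion_matrix(actual, predicted, classes):
        # Rows are actual classes, columns are predicted classes,
        # as in Tables 8.1-8.4.
        counts = Counter(zip(actual, predicted))
        return [[counts[(a, p)] for p in classes] for a in classes]

    classes = ['Setosa', 'Versicolor', 'Virginica']
    actual = ['Setosa', 'Versicolor', 'Virginica', 'Virginica']
    predicted = ['Setosa', 'Versicolor', 'Versicolor', 'Virginica']
    for cls, row in zip(classes, confusion_matrix(actual, predicted, classes)):
        print(cls, row)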
The second example uses the ionosphere data set from UCI. The data objects in this data set are radar signals sent into the ionosphere, and the class value indicates whether or not the signal returned information on the structure of the ionosphere. There are 34 attributes that describe the signal and 1 class attribute. The IB1 algorithm applied to the original data set gives an accuracy of 86.3% evaluated via tenfold cross-validation, while the same algorithm applied to the first nine attributes gives an accuracy of 89.4%. In other words, using fewer attributes gives better results. Using cross-validation to select the number of nearest neighbors gives an accuracy of 90.8% with two nearest neighbors. The confusion matrices for these cases are given in Tables 8.2, 8.3, and 8.4, respectively. Adding weighting for nearest neighbors actually results in a modest drop in accuracy. The biggest improvement is due to reducing the number of attributes.

TABLE 8.2 Confusion Matrix for Weka kNN Classifier IB1 on the Ionosphere Data Set Using All Attributes
Actual/Predicted   Good Signal   Bad Signal
Good Signal        85            41
Bad Signal         7             218

TABLE 8.3 Confusion Matrix for Weka kNN Classifier IB1 on the Ionosphere Data Set Using the First Nine Attributes
Actual/Predicted   Good Signal   Bad Signal
Good Signal        100           26
Bad Signal         11            214

TABLE 8.4 Confusion Matrix for Weka kNN Classifier IBk on the Ionosphere Data Set Using the First Nine Attributes with k = 2
Actual/Predicted   Good Signal   Bad Signal
Good Signal        103           9
Bad Signal         23            216
8.4 Advanced Topics

To address issues related to the distance function, a number of schemes have been developed that try to compute the weights of each individual attribute or in some other way determine a more effective distance metric based upon a training set [13, 15]. In addition, weights can be assigned to the training objects themselves. This can give more weight to highly reliable training objects, while reducing the impact of unreliable objects. The PEBLS system by Cost and Salzberg [5] is a well-known example of such an approach.

As mentioned, kNN classifiers are lazy learners, that is, models are not built explicitly, unlike eager learners (e.g., decision trees, SVM, etc.). Thus, building the model is cheap, but classifying unknown objects is relatively expensive since it requires the computation of the k-nearest neighbors of the object to be labeled. This, in general, requires computing the distance of the unlabeled object to all the objects in the labeled set, which can be expensive, particularly for large training sets. A number of techniques, e.g., multidimensional access methods [12] or fast approximate similarity search [16], have been developed for efficient computation of k-nearest neighbor distance that make use of the structure in the data to avoid having to compute the distance to all objects in the training set. These techniques, which are particularly applicable for low-dimensional data, can help reduce the computational cost without affecting classification accuracy. The Weka package provides a choice of some of the multidimensional access methods in its IBk routine. (See Exercise 4.)
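As a sketch of how a multidimensional access method cuts down the search, the example below builds a k-d tree with SciPy's cKDTree and queries the k nearest neighbors without an explicit scan over the whole training set. This is an independent illustration in Python, not the mechanism Weka's IBk uses internally, and it assumes SciPy and NumPy are available; the data and labels are synthetic.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    train_x = rng.random((10000, 3))          # 10,000 synthetic 3-D training points
    train_y = rng.integers(0, 2, size=10000)  # made-up binary labels

    tree = cKDTree(train_x)                   # build the k-d tree once

    z = np.array([0.5, 0.5, 0.5])
    dist, idx = tree.query(z, k=5)            # 5 nearest neighbors of the test point
    votes = np.bincount(train_y[idx])
    print(votes.argmax())                     # majority class among the 5 neighbors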
Although the basic kNN algorithm and some of its variations, such as weighted kNN and assigning weights to objects, are relatively well known, some of the more advanced techniques for kNN are much less known. For example, it is typically possible to eliminate many of the stored data objects, but still retain the classification accuracy of the kNN classifier. This is known as "condensing" and can greatly speed up the classification of new objects [14]. In addition, data objects can be removed to improve classification accuracy, a process known as "editing" [25]. There has also been a considerable amount of work on the application of proximity graphs (nearest neighbor graphs, minimum spanning trees, relative neighborhood graphs, Delaunay triangulations, and Gabriel graphs) to the kNN problem. Recent papers by Toussaint [22-24], which emphasize a proximity graph viewpoint, provide an overview of work addressing these three areas and indicate some remaining open problems. Other important resources include the collection of papers by Dasarathy [7] and the book by Devroye, Gy?rfi, and Lugosi [8]. Also, a fuzzy approach to kNN can be found in the work of Bezdek [4]. Finally, an extensive bibliography on this subject is also available online as part of the Annotated Computer Vision Bibliography [18].
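The "condensing" idea mentioned above can be illustrated with a minimal sketch in the spirit of the classic condensed nearest neighbor rule: keep only a subset of training objects that still classifies the remaining objects correctly with 1NN. This is a simplified illustration reusing the earlier knn_classify helper, not the exact procedure of reference [14].

    def condense(train):
        # Start with one stored object and repeatedly add any object that the
        # current store misclassifies with 1NN, until a full pass adds nothing.
        store = [train[0]]
        changed = True
        while changed:
            changed = False
            for x, label in train:
                if knn_classify(store, x, k=1) != label:
                    store.append((x, label))
                    changed = True
        return store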
8.5 Exercises

1. Download the Weka machine learning package from the Weka project homepage and the Iris and ionosphere data sets from the UCI Machine Learning Repository. Repeat the analyses performed in this chapter.

2. Prove that the error of the nearest neighbor rule is bounded above by twice the Bayes error under certain reasonable assumptions.

3. Prove that the error of the general kNN method asymptotically approaches that of the Bayes error and can be used to approximate it.

4. Various spatial or multidimensional access methods can be used to speed up the nearest neighbor computation. For the k-d tree, which is one such method, estimate how much the saving would be. Comment: The IBk Weka classification algorithm allows you to specify the method of finding nearest neighbors. Try this on one of the larger UCI data sets, for example, predicting sex on the abalone data set.

5. Consider the one-dimensional data set shown in Table 8.5.

TABLE 8.5 Data Set for Exercise 5
x   1.5   2.5   3.5   4.5   5.0   5.5   5.75   6.5   7.5   10.5
y   +     -     +

(a) Given the data points listed in Table 8.5, compute the class of x = 5.5 according to its 1-, 3-, 6-, and 9-nearest neighbors (using majority vote).
(b) Repeat the previous exercise, but use the distance-weighted voting approach described in Section 8.2.2.