English Original:
Fault Diagnosis of Three Phase Induction Motor Using Neural Network Techniques

Abstract: Fault diagnosis of induction motors is gaining importance in industry because of the need to increase reliability and to decrease possible loss of production due to machine breakdown. Due to environmental stress and many other reasons, different faults occur in induction motors. Many researchers have proposed different techniques for fault detection and diagnosis. However, many techniques available at present require a good deal of expertise to apply them successfully. Simpler approaches are needed which allow relatively unskilled operators to make reliable decisions without a diagnosis specialist to examine data and diagnose problems. In this paper a simple, reliable and economical neural network (NN) based fault classifier is proposed, in which the stator current is used as the input signal from the motor. Thirteen statistical parameters are extracted from the stator current and PCA is used to select the proper inputs. Data is generated from experimentation on a specially designed 2 hp, 4 pole, 50 Hz three phase induction motor. For classification, NNs like MLP and SVM, and statistical classifiers based on CART and discriminant analysis, are verified. Robustness of the classifier to noise is also verified on unseen data by introducing controlled Gaussian and uniform noise in input and output.

Index Terms: induction motor, fault diagnosis, MLP, SVM, CART, discriminant analysis, PCA

I. INTRODUCTION
Induction motors play an important role as prime movers in manufacturing, process industry and transportation due to their reliability and simplicity in construction. In spite of their robustness and reliability, they do occasionally fail, and unpredicted downtime is obviously costly; hence they require constant attention. The faults of induction motors may not only cause the interruption of production but also increase costs, decrease product quality and affect the safety of operators. If the lifetime of induction machines were extended and the efficiency of manufacturing lines improved, it would lead to smaller production expenses and
lower prices for the end user. In order to keep machines in good condition, techniques such as fault monitoring, fault detection, and fault diagnosis have become increasingly essential. The most common faults of induction motors are bearing failures, stator phase winding failures, broken rotor bars or cracked rotor end-rings, and air-gap irregularities. The objective of this research is to develop an alternative neural network based incipient fault-detection scheme that overcomes the limitations of the present schemes, in the sense that those are costly, applicable mainly to large motors, and require many design parameters which, especially for machines that have been operating for a long time, are not easily available. Compared to existing schemes, the proposed scheme is simple, accurate, reliable and economical. This research work is based on real-time data, and so the proposed neural
network based classifier demonstrates the actual feasibility in a real industrial situation. Four different neural network structures are presented in this paper with their performances, and about 100% classification accuracy is achieved.

II. FAULT CLASSIFICATION USING NN
The proposed fault detection and diagnosis scheme consists of four procedures, as shown in Fig. 1:
1. data collection & acquisition
2. feature extraction
3. feature selection
4. fault classification

A. Data Collection and Data Acquisition
In this paper the most common faults, namely stator winding interturn short (I), rotor dynamic eccentricity (E) and both of them together (B), are considered, along with the healthy condition (H).
Fig. 1. General block diagram of proposed classifier
For experimentation and data generation a specially designed 2 hp, three phase, 4 pole, 415 V, 50 Hz induction motor is selected. The experimental set up is shown in Fig. 2.
Fig. 2. Experimental setup
The
load of the motor was changed by adjusting the spring balance and belt. Three AC current probes were used to measure the stator current signals for testing the fault diagnosis system. The maximum frequency of the used signal was 5 kHz and the number of sampled data points was 2500. From the time waveforms of stator currents shown in Fig. 3, no conspicuous difference exists among the different conditions.
Fig. 3. Experimental waveforms of stator current

B. Feature Extraction
There is a need for a feature extraction method to classify faults. In order to classify the different faults, statistical parameters are used; to be precise, 'sample' statistics are calculated for the current data. Overall, thirteen parameters are calculated as the input feature space. The minimum set of statistics to be examined includes the root mean square (RMS) of the zero-mean signal (which is the standard deviation), the maximum and minimum values, the skewness coefficient and the kurtosis coefficient. Pearson's coefficient of skewness is defined by:

sk = 3(x̄ − x̃) / s    (1)

where x̄ denotes the mean, x̃ denotes the median and s denotes the sample standard deviation. The sample coefficient of variation V is defined by:

V = s / x̄    (2)

The rth sample moment about the sample mean for a data set is given by:

m_r = (1/n) Σ_{i=1}^{n} (x_i − x̄)^r    (3)

m2 denotes spread about the center, m3 refers to skewness about the center, and m4 denotes how much data is massed at the center. The second, third and fourth moments are used to define the sample coefficient of skewness √b1 and the sample coefficient of kurtosis b2 as follows:

√b1 = m3 / m2^(3/2)    (4)

b2 = m4 / m2^2    (5)

The sample covariance between dimensions j and k is defined as:

s_jk = (1/(n−1)) Σ_{i=1}^{n} (x_ij − x̄_j)(x_ik − x̄_k)    (6)

The ordinary correlation coefficient for dimensions j and k is defined as:

r_jk = s_jk / (s_j s_k)    (7)
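As an aside, the sample statistics defined above are straightforward to compute. The sketch below is our own illustrative code, not the authors'; the function and key names are hypothetical, it covers only a subset of the thirteen parameters, and it uses just the Python standard library.

```python
import math

def sample_features(x):
    """Compute a few of the sample statistics described above for one
    signal record x (a list of floats). Names are illustrative."""
    n = len(x)
    mean = sum(x) / n
    xs = sorted(x)
    median = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    # central moments m_r = (1/n) * sum((x_i - mean)^r), as in eq. (3)
    m = lambda r: sum((v - mean) ** r for v in x) / n
    m2, m3, m4 = m(2), m(3), m(4)
    std = math.sqrt(m2)  # RMS of the zero-mean signal
    return {
        "max": max(x),
        "min": min(x),
        "std": std,
        "pearson_skew": 3.0 * (mean - median) / std,  # eq. (1)
        "coef_variation": std / mean,                 # eq. (2)
        "skewness": m3 / m2 ** 1.5,                   # eq. (4)
        "kurtosis": m4 / m2 ** 2,                     # eq. (5)
    }

feats = sample_features([1.0, 2.0, 2.0, 3.0, 4.0])
```

In a setting like the paper's, such a function would be applied to each 2500-point stator-current record to build the feature matrix.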
C. Feature Selection
Before a feature set is fed into a classifier, the most superior features providing dominant fault-related information should be selected from the feature set, and irrelevant or redundant features must be discarded to improve the classifier performance and avoid the curse of dimensionality. Here the principal component analysis (PCA) technique is used to select the most superior features from the original feature set. The principal components (PCs) are computed by the Pearson rule. Fig. 4 relates to a mathematical object, the eigenvalues, which reflect the quality of the projection from the 13-dimensional space to a lower number of dimensions.
Fig. 4. Principal components, eigenvalues and percent variability
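The PCA step can be sketched as follows. This is an illustrative reconstruction, assuming the "Pearson rule" refers to PCA on the correlation matrix (i.e. standardized features), with an arbitrary toy matrix standing in for the paper's 13-feature data.

```python
import numpy as np

# Toy feature matrix (rows = samples, columns = features); the shape
# and the injected correlation are our assumptions, not the paper's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
X[:, 1] = 2.0 * X[:, 0] + 0.1 * X[:, 1]  # make two features correlated

# Standardize, then form the correlation matrix ("Pearson" PCA).
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
C = (Z.T @ Z) / (len(X) - 1)

evals, evecs = np.linalg.eigh(C)      # symmetric eigendecomposition
order = np.argsort(evals)[::-1]       # sort by decreasing variance
evals, evecs = evals[order], evecs[:, order]

explained = evals / evals.sum()       # percent variability per PC
scores = Z @ evecs[:, :2]             # project onto the first 2 PCs
```

The `explained` vector corresponds to the percent-variability bars of Fig. 4; one keeps as many PCs as needed (five in the paper) and feeds `scores` to the classifier.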
D. Fault Classifier
(1) MLP NN based classifier
A simple multilayer perceptron (MLP) neural network is proposed as the fault classifier. Four processing elements are used in the output layer for the four conditions of the motor, namely healthy, interturn fault, eccentricity and both faults. From the results shown in Fig. 5, five PCs are selected as inputs; hence the number of PEs in the input layer is five.
Fig. 5(a). Variation of average MSE on training and CV with number of PCs as input
Fig. 5(b). Variation of average classification accuracy on
testing on test data, training data and CV data with number of PCs as input
The randomized data is fed to the neural network, which is retrained five times with different random weight initialization so as to remove biasing and to ensure true learning and generalization for different hidden layers. This also removes any affinity or dependence of the performance of the NN on the choice of initial connection weights. It is observed that an MLP with a single hidden layer gives better performance. The number of processing elements (PEs) in the hidden layer is varied; the network is trained and the minimum MSE is obtained when 5 PEs are used in the hidden layer, as indicated in Fig. 6.
Fig. 6. Variation of average MSE with number of PEs in hidden layer
Various transfer functions, namely tanh, sigmoid, linear-tanh, linear-sigmoid, softmax, bias axon, linear axon, and learning rules, namely momentum, conjugate-gradient, quick propagation,
delta bar delta, and step, are verified for training, cross validation and testing. The minimum MSE and average classification accuracy on the training and CV data sets are compared. With the above experimentation, the MLP NN classifier is finally designed with the following specifications:
number of inputs: 5; number of hidden layers: 1; number of PEs in hidden layer: 4
hidden layer: transfer function: tanh, learning rule: momentum, step size: 0.6, momentum: 0.5
output layer: transfer function: tanh, learning rule: momentum, step size: 0.1, momentum: 0.5
number of connection weights: 44
training time required per epoch per exemplar: 0.0063 ms
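A minimal sketch of an MLP with the stated topology (5 inputs, one hidden layer of 4 tanh PEs, 4 tanh outputs) trained by gradient descent with momentum is shown below. Everything beyond the topology — the toy data, targets, seed, step size and epoch count — is our assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 5))
# Toy 4-class labels derived from the first two inputs (our assumption).
labels = (X[:, 0] > 0).astype(int) * 2 + (X[:, 1] > 0).astype(int)
T = -np.ones((80, 4))
T[np.arange(80), labels] = 1.0       # tanh-style +1/-1 targets

W1 = rng.normal(scale=0.3, size=(5, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.3, size=(4, 4)); b2 = np.zeros(4)
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)
step, mom = 0.1, 0.5
losses = []

for epoch in range(300):
    H = np.tanh(X @ W1 + b1)                      # hidden layer
    Y = np.tanh(H @ W2 + b2)                      # output layer
    losses.append(float(np.mean((Y - T) ** 2)))
    dY = 2.0 * (Y - T) * (1.0 - Y ** 2) / len(X)  # dMSE/dnet at output
    dH = (dY @ W2.T) * (1.0 - H ** 2)             # backprop to hidden
    vW2 = mom * vW2 - step * (H.T @ dY)           # momentum updates
    vW1 = mom * vW1 - step * (X.T @ dH)
    W2 += vW2; b2 -= step * dY.sum(axis=0)
    W1 += vW1; b1 -= step * dH.sum(axis=0)

acc = float((Y.argmax(axis=1) == labels).mean())  # training accuracy
```

The predicted class is the output PE with the largest activation, mirroring the one-PE-per-condition output layer described above.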
(2) SVM NN based classifier
The support vector machine (SVM) is a new kind of classifier that is motivated by two concepts. First, transforming data into a high-dimensional space can transform complex problems (with complex decision surfaces) into simpler problems that can use linear discriminant functions. Second, SVMs are motivated by the concept of training and using only those inputs that are near the decision surface, since they provide the most information about the classification. The approach can be extended to multi-class problems. SVM training always seeks a globally optimized solution and avoids overfitting, so it has the ability to deal with a large number of features.
Generalized algorithm for the classifier: for n-dimensional space data x_i (i = 1..N), the algorithm can be extended to a network by substituting the inner product of patterns in the input space by a kernel function, leading to the following quadratic optimization problem:

maximize J(α) = Σ_{i=1}^{N} α_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j d_i d_j G(x_i − x_j, 2σ²)    (8)

subject to Σ_{i=1}^{N} d_i α_i = 0, α_i ≥ 0, i = 1..N    (9)

where G(x, σ²) represents a Gaussian function, N is the number of samples, and the α_i are a set of multipliers (one for each sample). Define

M_i = d_i ( Σ_{j=1}^{N} α_j d_j G(x_i − x_j, 2σ²) + b )    (10)

and

M = min_i M_i    (11)

Choose a common starting multiplier α, a learning rate η, and a small threshold t. Then, while M > t, choose a pattern x_i, calculate an update Δα_i = η(1 − M_i) and perform the update

if α_i(n) + Δα_i > 0:  α_i(n+1) = α_i(n) + Δα_i,  b(n+1) = b(n) + d_i Δα_i    (12)

if α_i(n) + Δα_i ≤ 0:  α_i(n+1) = α_i(n),  b(n+1) = b(n)    (13)

After adaptation only some of the α_i are different from zero; the corresponding samples are called the support vectors. It is easy to implement the kernel Adatron algorithm, since G(x_i − x_j, 2σ²) can be computed locally to each multiplier, provided that the desired response is available in the input file. In fact, the expression for Δα_i resembles the multiplication of an error with an activation, so it can be included in the framework of neural network learning. The Adatron algorithm essentially prunes the RBF network so that its output for testing is given by

f(x) = sgn( Σ_{i ∈ support vectors} α_i d_i G(x − x_i, 2σ²) + b )

and the cost function in the error criterion is

J(α) = Σ_{i=1}^{N} α_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j d_i d_j G(x_i − x_j, 2σ²)
The number of PCs as input and the step size are selected by checking the average minimum MSE and average classification accuracy; results are shown in Fig. 7.
Fig. 7(a). Variation of average MSE on training and CV with number of PCs as input
Fig. 7(b). Variation of average classification accuracy on testing on test data, training data and CV data with number of PCs as input
Finally the SVM based classifier is designed with the following specifications: number of inputs: 5; step size: 0.7; time required per epoch per exemplar: 0.693 ms; number of connection weights: 264.
The designed classifier is trained and tested using the similar data sets and the results are shown in Fig. 8 and Fig. 9.
Fig. 8. Variation of average minimum MSE on testing on test data, CV data and training data with number of rows shifted (n)
Fig. 9. Variation of average minimum MSE on training and CV with various groups

(3) Classification and Regression Trees (CART)
CART induces strictly binary trees through a process of recursive binary partitioning of the feature space of a data set. The first phase is called tree building, and the other is tre
e pruning. The classification tree is developed using XLSTAT-2009. Various methods, measures and maximum tree depths are checked and the results are shown in Fig. 10. It is observed that the optimum average classification accuracy on testing on test data and CV data is found to be 90.91 and 80 percent, respectively.
Fig. 10(a). Variation of average classification accuracy on testing on test data and CV data with method and measure of trees (1: CHAID Pearson, 2: CHAID likelihood, 3: Ex-CHAID Pearson, 4: Ex-CHAID likelihood, 5: C&RT Gini, 6: C&RT twoing, 7: QUEST)
Fig. 10(b). Variation of average classification accuracy on testing on test data and CV data with depth of trees
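As an illustration of the tree-building phase, a minimal Gini-based recursive binary partitioner might look like the sketch below. The paper itself used XLSTAT-2009; the function names and toy data here are ours.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def build(rows, labels, depth=2):
    """Recursively split on the (feature, threshold) minimizing
    the weighted Gini impurity; depth-limited, no pruning phase."""
    if depth == 0 or len(set(labels)) == 1:
        return max(set(labels), key=labels.count)   # leaf: majority class
    best = None
    for f in range(len(rows[0])):                   # try every feature
        for t in sorted({r[f] for r in rows}):      # and every threshold
            left = [i for i, r in enumerate(rows) if r[f] <= t]
            right = [i for i in range(len(rows)) if i not in left]
            if not left or not right:
                continue
            score = (len(left) * gini([labels[i] for i in left]) +
                     len(right) * gini([labels[i] for i in right])) / len(rows)
            if best is None or score < best[0]:
                best = (score, f, t, left, right)
    if best is None:
        return max(set(labels), key=labels.count)
    _, f, t, left, right = best
    return (f, t,
            build([rows[i] for i in left], [labels[i] for i in left], depth - 1),
            build([rows[i] for i in right], [labels[i] for i in right], depth - 1))

def classify(node, row):
    while isinstance(node, tuple):                  # descend to a leaf
        f, t, lo, hi = node
        node = lo if row[f] <= t else hi
    return node

tree = build([[0.0], [1.0], [2.0], [3.0]], ["H", "H", "E", "E"], depth=2)
```

A production CART additionally grows the tree out and then prunes it back by cross-validated cost-complexity, which is the second phase mentioned above.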
(4) Discriminant Analysis
Discriminant analysis is a technique for classifying a set of observations into predefined classes. The purpose is to determine the class of an observation based on a set of variables known as predictors or input variables. The model is built from a set of observations for which the classes are known. Based on the training set, the technique constructs a set of linear functions of the predictors, known as discriminant functions, such that

L = b1 x1 + b2 x2 + … + bn xn + c

where the b_i are the discriminant coefficients, the x_i are the input variables or predictors and c is a constant. Discriminant analysis is done using XLSTAT-2009. Various models are checked and the results are shown in Fig. 11. It is observed that the optimum average classification accuracy on testing on test data and CV data is found to be 91.77 and 80 percent, respectively.
Fig. 11. Variation of average classification accuracy on testing on test data and CV data with model of DA (1: stepwise forward, 2: stepwise backward, 3: forward, 4: backward)
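A discriminant function of the form L = b1*x1 + b2*x2 + c can be fitted Fisher-style from class means and a pooled covariance. The sketch below is illustrative only — toy two-class data and a hand-inverted 2x2 covariance — not the XLSTAT models compared in Fig. 11.

```python
def mean(col):
    return sum(col) / len(col)

A = [(1.0, 1.2), (0.8, 1.0), (1.2, 0.9)]   # toy class "healthy"
B = [(3.0, 3.1), (3.2, 2.9), (2.9, 3.0)]   # toy class "faulty"

ma = (mean([p[0] for p in A]), mean([p[1] for p in A]))
mb = (mean([p[0] for p in B]), mean([p[1] for p in B]))

# Pooled within-class covariance S (2x2).
S = [[0.0, 0.0], [0.0, 0.0]]
for pts, m in ((A, ma), (B, mb)):
    for x1, x2 in pts:
        S[0][0] += (x1 - m[0]) ** 2
        S[0][1] += (x1 - m[0]) * (x2 - m[1])
        S[1][1] += (x2 - m[1]) ** 2
S[1][0] = S[0][1]
n = len(A) + len(B) - 2
S = [[v / n for v in row] for row in S]

# Invert the 2x2 matrix by hand.
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det],
        [-S[1][0] / det, S[0][0] / det]]

# Discriminant coefficients b = Sinv @ (ma - mb); threshold at the midpoint.
dm = (ma[0] - mb[0], ma[1] - mb[1])
b1 = Sinv[0][0] * dm[0] + Sinv[0][1] * dm[1]
b2 = Sinv[1][0] * dm[0] + Sinv[1][1] * dm[1]
mid = ((ma[0] + mb[0]) / 2, (ma[1] + mb[1]) / 2)
c = -(b1 * mid[0] + b2 * mid[1])

def classify(x1, x2):
    L = b1 * x1 + b2 * x2 + c   # the linear discriminant function above
    return "healthy" if L > 0 else "faulty"
```

With more classes, one discriminant function per class (or per class pair) is built in the same way, and the observation is assigned to the class with the largest score.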
III. NOISE SUSTAINABILITY OF CLASSIFIER
Since the proposed classifier is to be used in real time, where measurement noise is anticipated, it is necessary to check the robustness of the classifier to noise. To check the robustness, uniform and Gaussian noise with zero mean and variance varying from 1 to 20% is introduced in the input and output, and the average classification accuracy on testing data, i.e. unseen data, is checked. It is seen that the SVM based classifier is the most robust classifier, in the sense that it can sustain both uniform and Gaussian noise with 14% and 20% variance in input and output, respectively. Results are shown in Table I.
Table I. Effect of noise on average classification accuracy when classifier is tested on testing data (G: Gaussian noise, U: uniform noise; rows: % variance from 1 to 20; columns: average classification accuracy of the MLP and SVM classifiers for G and U noise in input and in output)
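The robustness check can be sketched as follows: perturb each input with zero-mean noise at a given variance percentage and measure how often a fixed classifier's decision changes. Only the noise model follows the text; the stand-in threshold classifier, the data and the variance grid are our assumptions.

```python
import random, math

random.seed(42)

def add_noise(x, variance_pct, kind="gaussian"):
    """Add zero-mean noise with the given variance (as a percentage)."""
    var = variance_pct / 100.0
    if kind == "gaussian":
        return [v + random.gauss(0.0, math.sqrt(var)) for v in x]
    half = math.sqrt(3.0 * var)   # uniform on [-a, a] has variance a^2 / 3
    return [v + random.uniform(-half, half) for v in x]

classify = lambda x: 1 if sum(x) > 0 else -1   # stand-in classifier

clean = [[1.0, 1.0], [-1.0, -1.0]] * 50        # 100 easy samples
labels = [classify(x) for x in clean]

accs = {}
for pct in (1, 14, 20):
    noisy = [add_noise(x, pct) for x in clean]
    accs[pct] = sum(classify(n) == l
                    for n, l in zip(noisy, labels)) / len(labels)
```

Repeating the loop for the uniform case and for noise on the outputs reproduces the structure of Table I: one accuracy figure per (noise kind, injection point, variance) cell.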
IV. RESULTS AND DISCUSSION
In this paper, the authors evaluated the performance of the developed ANN based classifiers for the detection of four fault conditions of a three phase induction motor and examined the results. The MLP NN and SVM are optimally designed; after completion of training, the learned network is tested to detect the different types of faults. Similarly, the step size is varied in the SVM and a step size of 0.7 is found to be optimum. These results confirm the idea that the proposed feature selection method based on PCA can select the most superior features from the original feature set, and therefore is a powerful feature selection method. The proposed classifier is also robust enough to noise, in the sense that the classifier gives satisfactory results for uniform and Gaussian noise with 14% variance in input and with 20% variance in output. Comparative results are shown in Fig. 12 and Table II.
Fig. 12. Comparative analysis of various classifiers w.r.t. average classification accuracy
Table II. Comparative results of NN based classifiers (T: time required per epoch per exemplar in ms; W: number of connection weights; maximum, minimum, average and SD of minimum MSE and of percent correctness on testing, CV and training data for the MLP and SVM)

中文譯文 (Chinese translation, rendered here in English):
Fault Diagnosis of Three Phase Induction Motor Using Neural Network Techniques
Abstract: Fault diagnosis of induction motors is very important in industry, because of the need to improve reliability and to reduce the production losses caused by machine breakdown. Environmental stress and many other causes lead to different faults in induction motors. Many researchers have proposed different fault detection and diagnosis techniques; however, many current techniques require good specialist knowledge to be applied successfully. Simpler methods are needed that allow relatively unskilled operators to make reliable judgments without a diagnosis expert examining the data and diagnosing the problem. This paper proposes a simple, reliable and economical neural network (NN) based fault classifier, in which the motor stator current serves as the input signal. Thirteen parameters are extracted from the stator current and PCA is used to select the appropriate inputs. The data comes from tests on a specially designed 2 hp, 4 pole, 50 Hz three phase induction motor. For classification, neural networks such as MLP and SVM, a CART based statistical classifier and discriminant analysis were verified. The robustness of the classifier to noise was also verified by introducing controlled Gaussian and uniform noise in the input and output.