Sensitivity Study of Feedforward Artificial Neural Networks
Zeng Xiaoqin (曾曉勤)
College of Computer and Information Engineering, Hohai University
October 2003

I. Introduction

1. Feedforward Neural Networks (FNNs)
● Neuron
– Discrete type: Adaline (adaptive linear element)
– Continuous type: Perceptron
● Neural network
– Discrete type: Madaline (multilayer Adaline network)
– Continuous type: multilayer perceptron (BP network, or MLP)

2. Research Problem
● Problems
– the effect of hardware precision on the weights
– the effect of environmental noise on the inputs
● Motivation
– How do perturbations of the parameters affect the network?
– How can the magnitude of the network's output deviation be measured?

3. Research Content
● Establish the relationship between the network output and perturbations of the network parameters
● Analyze this relationship to reveal the behavioral regularities of the network
● Quantify the network's output deviation

4. Research Significance
● Guide network design and strengthen the network's resistance to interference
● Measure network performance, such as fault tolerance and generalization ability
● Provide a basis for other network research topics, such as pruning the network architecture and selecting parameters

II. Research Overview (typical approaches and literature)

1. Sensitivity of Madaline
● n-dimensional geometric model (hypersphere)
M. Stevenson, R. Winter, and B. Widrow, "Sensitivity of Feedforward Neural Networks to Weight Errors," IEEE Trans. on Neural Networks, vol. 1, no. 1, 1990.
● Statistical model (variance)
S. W. Piché, "The Selection of Weight Accuracies for Madalines," IEEE Trans. on Neural Networks, vol. 6, no. 2, 1995.

2. Sensitivity of MLPs
● Analytical approach (partial derivatives)
S. Hashem, "Sensitivity Analysis for Feed-Forward Artificial Neural Networks with Differentiable Activation Functions," Proc. of IJCNN, vol. 1, 1992.
● Statistical approach (standard deviation)
J. Y. Choi & C. H. Choi, "Sensitivity Analysis of Multilayer Perceptron with Differentiable Activation Functions," IEEE Trans. on Neural Networks, vol. 3, no. 1, 1992.

3. Applications of Sensitivity
● Input feature selection
J. M. Zurada, A. Malinowski, S. Usui, "Perturbation Method for Deleting Redundant Inputs of Perceptron Networks," Neurocomputing, vol. 14, 1997.
● Network architecture pruning
A. P. Engelbrecht, "A New Pruning Heuristic Based on Variance Analysis of Sensitivity Information," IEEE Trans. on Neural Networks, vol. 12, no. 6, 2001.
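The analytical approach treats sensitivity through the partial derivatives of the network output with respect to its inputs (or weights). The following Python/NumPy sketch illustrates that idea for a small 2-3-1 tanh MLP and checks the analytic derivatives against finite differences; the architecture, random weights and tanh activation are assumptions of this sketch, not the formulation used in the papers cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-3-1 MLP: y = v . tanh(W x + b) + c  (illustrative random weights)
W = rng.normal(size=(3, 2))   # hidden-layer weights
b = rng.normal(size=3)        # hidden-layer biases
v = rng.normal(size=3)        # output-layer weights
c = 0.0                       # output bias

def forward(x):
    return v @ np.tanh(W @ x + b) + c

def input_sensitivity(x):
    """Partial derivatives of the output with respect to each input."""
    dh = 1.0 - np.tanh(W @ x + b) ** 2   # derivative of tanh
    return (v * dh) @ W                  # dy/dx, shape (2,)

x0 = np.array([0.3, -0.7])
analytic = input_sensitivity(x0)

# Finite-difference check of the analytic derivatives.
eps = 1e-6
numeric = np.array([(forward(x0 + eps * e) - forward(x0 - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
print("analytic:", analytic)
print("numeric :", numeric)
```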

● Fault tolerance and generalization
J. L. Bernier et al., "A Quantitative Study of Fault Tolerance, Noise Immunity and Generalization Ability of MLPs," Neural Computation, vol. 12, 2000.

III. Research Approach

1. Bottom-up approach

● Single neuron
● Whole network

2. Probabilistic and statistical approach
● Probability (discrete type)
● Expectation (continuous type)

3. n-dimensional geometric model
● Vertices of a hyperrectangle (discrete type)
● The body (volume) of a hyperrectangle (continuous type)
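To make the bottom-up, statistical idea concrete, the sketch below estimates a single neuron's sensitivity by Monte Carlo under assumptions of my own (uniformly distributed inputs and Gaussian perturbations): for a discrete, hard-limit (Adaline-style) neuron the measure is the probability that the output flips, and for a continuous (logistic) neuron it is the expected magnitude of the output deviation. A network-level measure would then be assembled layer by layer from such neuron-level estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaline_flip_probability(w, sigma_w, n_samples=100_000):
    """Estimate P(output flips) for a hard-limit neuron under weight noise."""
    n = w.size
    x = rng.uniform(-1.0, 1.0, size=(n_samples, n))     # random inputs
    dw = rng.normal(0.0, sigma_w, size=(n_samples, n))  # weight perturbations
    y = np.sign(x @ w)
    y_perturbed = np.sign(np.sum(x * (w + dw), axis=1))
    return float(np.mean(y != y_perturbed))

def logistic_expected_deviation(w, sigma_x, n_samples=100_000):
    """Estimate E|f(w.(x+dx)) - f(w.x)| for a logistic neuron under input noise."""
    f = lambda z: 1.0 / (1.0 + np.exp(-z))
    n = w.size
    x = rng.uniform(-1.0, 1.0, size=(n_samples, n))
    dx = rng.normal(0.0, sigma_x, size=(n_samples, n))
    return float(np.mean(np.abs(f((x + dx) @ w) - f(x @ w))))

w = np.array([0.8, -0.5, 0.3])
print("flip probability  :", adaline_flip_probability(w, sigma_w=0.1))
print("expected deviation:", logistic_expected_deviation(w, sigma_x=0.1))
```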

IV. Results Obtained (representative papers)

● Sensitivity analysis:
"Sensitivity Analysis of Multilayer Perceptron to Input and Weight Perturbations," IEEE Trans. on Neural Networks, vol. 12, no. 6, pp. 1358-1366, Nov. 2001.

● Sensitivity quantification:
"A Quantified Sensitivity Measure for Multilayer Perceptron to Input Perturbation," Neural Computation, vol. 15, no. 1, pp. 183-212, Jan. 2003.

● Hidden neuron pruning (an application of sensitivity):
"Hidden Neuron Pruning for Multilayer Perceptrons Using Sensitivity Measure," Proc. of IEEE ICMLC 2002, pp. 1751-1757, Nov. 2002.

● Determining the relevance of input features (an application of sensitivity):
"Determining the Relevance of Input Features for Multilayer Perceptrons," Proc. of IEEE SMC 2003, Oct. 2003.
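Both applications follow the same recipe: assign each hidden neuron (or input feature) a sensitivity-derived relevance value, remove the least relevant one, and retrain the smaller network. The sketch below is only an illustration of that recipe: it estimates each hidden neuron's sensitivity by Monte Carlo and uses sensitivity scaled by the magnitude of the outgoing weight as the relevance measure; this stand-in measure is an assumption of the sketch, not the exact measure defined in the papers above.

```python
import numpy as np

rng = np.random.default_rng(2)

def hidden_sensitivities(W, b, sigma_x=0.1, n_samples=50_000):
    """E|tanh(w_j.(x+dx) + b_j) - tanh(w_j.x + b_j)| for each hidden neuron j."""
    n_in = W.shape[1]
    x = rng.uniform(-1.0, 1.0, size=(n_samples, n_in))
    dx = rng.normal(0.0, sigma_x, size=(n_samples, n_in))
    h = np.tanh(x @ W.T + b)
    h_perturbed = np.tanh((x + dx) @ W.T + b)
    return np.mean(np.abs(h_perturbed - h), axis=0)

def prune_least_relevant(W, b, v):
    """Drop the hidden neuron with the smallest relevance (retrain afterwards)."""
    relevance = hidden_sensitivities(W, b) * np.abs(v)   # stand-in relevance measure
    j = int(np.argmin(relevance))
    keep = np.arange(W.shape[0]) != j
    print(f"relevance = {np.round(relevance, 4)}, pruning hidden neuron {j + 1}")
    return W[keep], b[keep], v[keep]

# Example with a randomly initialised (i.e. untrained) 2-5-1 network.
W = rng.normal(size=(5, 2)); b = rng.normal(size=5); v = rng.normal(size=5)
W, b, v = prune_least_relevant(W, b, v)
print("architecture after pruning: 2-%d-1" % W.shape[0])
```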

V. Future Work
● Further refine the existing results to make them more practical
– relax the restrictive conditions
– broaden the scope of the analysis
– make the quantitative computation more precise
● Further apply the results obtained to solve practical problems
● Explore new approaches and study new types of networks

The End. Thank you, everyone!

Effects of input & weight deviations on neurons' sensitivity
Sensitivity increases with input and weight deviations, but the increase has an upper bound.

Effects of input dimension on neurons' sensitivity

There exists an optimal value for the dimension of the input, which yields the highest sensitivity value.

Effects of input & weight deviations on MLPs' sensitivity
Sensitivity of an MLP increases with the input and weight deviations.

Effects of the number of neurons in a layer
Sensitivity of MLPs {n-2-2-1 | 1 ≤ n ≤ 10} to the dimension of the input.
Sensitivity of MLPs {2-n-2-1 | 1 ≤ n ≤ 10} to the number of neurons in the 1st layer.
Sensitivity of MLPs {2-2-n-1 | 1 ≤ n ≤ 10} to the number of neurons in the 2nd layer.
There exists an optimal value for the number of neurons in a layer, which yields the highest sensitivity value. The nearer a layer is to the output layer, the more effect the number of neurons in that layer has.

Effects of the number of layers
Sensitivity of MLPs {2-1, 2-2-1, ..., 2-2-2-2-2-2-2-2-2-2-1} to the number of layers.
Sensitivity decreases as the number of layers increases, and the decrease almost levels off when the number becomes large.

Sensitivity curves (figures):
Sensitivity of the neurons with 2-dimensional input
Sensitivity of the neurons with 3-dimensional input
Sensitivity of the neurons with 4-dimensional input
Sensitivity of the neurons with 5-dimensional input
Sensitivity of the MLPs: 2-2-1, 2-3-1, 2-2-2-1
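Curves of the kind summarized above can be reproduced in shape by a simple empirical sweep. The sketch below follows the {2-n-2-1 | 1 ≤ n ≤ 10} notation of the slides and estimates each architecture's sensitivity by Monte Carlo as the expected output deviation under input perturbation; the random weights, tanh activations and perturbation magnitude are assumptions of this sketch, so it only shows the form of such an experiment rather than reproducing the reported results.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_mlp(layer_sizes):
    """Random weights and biases for an MLP such as [2, n, 2, 1]."""
    return [(rng.normal(size=(m, k)), rng.normal(size=m))
            for k, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    for W, b in params:
        x = np.tanh(x @ W.T + b)
    return x

def mc_sensitivity(params, n_in, sigma_x=0.1, n_samples=20_000):
    """Expected |output deviation| under Gaussian input perturbation."""
    x = rng.uniform(-1.0, 1.0, size=(n_samples, n_in))
    dx = rng.normal(0.0, sigma_x, size=(n_samples, n_in))
    return float(np.mean(np.abs(forward(params, x + dx) - forward(params, x))))

for n in range(1, 11):
    arch = [2, n, 2, 1]
    s = np.mean([mc_sensitivity(make_mlp(arch), n_in=2) for _ in range(20)])
    print(f"2-{n}-2-1: estimated sensitivity ~ {s:.4f}")
```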


Simulation 1 (Function Approximation)
Implement an MLP to approximate the function:
(the target two-variable function and its domain are defined on the slide)

Implementation considerations
● The MLP architecture is restricted to 2-n-1.
● The convergence condition is MSE goal = 0.01 & Epoch ≤ 10^5.
● The lowest trainable number of hidden neurons is n = 5.
● The pruning processes start with MLPs of 2-5-1 and stop at an architecture of 2-4-1.
● The relevant data used by and resulting from the pruning process are listed in Table 1 and Table 2.
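A training set-up matching these considerations might look like the sketch below: a 2-5-1 MLP trained by plain batch gradient descent until the MSE goal (0.01) is met or 10^5 epochs are exceeded. The target function here is only a placeholder (the actual function is the one defined on the slide), and gradient descent with a fixed learning rate stands in for whatever training algorithm was actually used.

```python
import numpy as np

rng = np.random.default_rng(4)

def target(x):                          # placeholder two-variable function
    return np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])

X = rng.uniform(-1.0, 1.0, size=(200, 2))   # training inputs on [-1, 1]^2
y = target(X)

# 2-5-1 MLP: yhat = v . tanh(W x + b) + c
W = rng.normal(scale=0.5, size=(5, 2)); b = np.zeros(5)
v = rng.normal(scale=0.5, size=5);      c = 0.0
lr, mse_goal, max_epoch = 0.05, 0.01, 100_000

for epoch in range(1, max_epoch + 1):
    H = np.tanh(X @ W.T + b)            # hidden activations, shape (200, 5)
    err = H @ v + c - y
    mse = float(np.mean(err ** 2))
    if mse <= mse_goal:
        break
    # gradients of the mean squared error
    g_v = 2 * (err @ H) / len(X)
    g_c = 2 * err.mean()
    g_H = 2 * np.outer(err, v) * (1 - H ** 2) / len(X)   # back-prop through tanh
    W -= lr * (g_H.T @ X); b -= lr * g_H.sum(axis=0)
    v -= lr * g_v;         c -= lr * g_c

print(f"stopped at epoch {epoch}, training MSE = {mse:.6f}")
```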

TABLE 1. Data for 3 MLPs with 5 hidden neurons to realize the function
(MSE goal = 0.01 & epoch <= 100000)

Network 1 (MLP 2-5-1)
Epoch: 30586
MSE (training): 0.000999816
MSE (testing): 0.0117005
Trained weights and bias: hidden [-12.9212, -0.2999] [33.7943, -34.6057] [31.4768, -31.0169] [-0.5607, -0.8140] [1.1737, -1.1026]; output [-5.4507, 12.7341, -13.0816, -12.0171, 8.7152]; bias = 0
Sensitivity (per hidden neuron): 0.031794, 0.002272, 0.001406, 0.027066, 0.001815
Relevance (per hidden neuron): 0.1733, 0.0289, 0.0184, 0.3253, 0.0158

Network 2 (MLP 2-5-1)
Epoch: 65209
MSE (training): 0.000999959
MSE (testing): 0.0124573
Trained weights and bias: hidden [32.6223, -33.3731] [-0.7361, 0.7202] [-31.8412, 31.2399] [-15.1872, -0.0937] [-0.3989, -1.0028]; output [11.9959, -15.4905, 12.2103, -6.0877, -12.5057]; bias = 0
Sensitivity (per hidden neuron): 0.002176, 0.000463, 0.001821, 0.031017, 0.027068
Relevance (per hidden neuron): 0.0261, 0.0072, 0.0222, 0.1888, 0.3385

Network 3 (MLP 2-5-1)
Epoch: 26094
MSE (training): 0.000999944
MSE (testing): 0.0120354
Trained weights and bias: hidden [-15.0940, 17.6184] [-19.9163, 21.4109] [-14.0535, -0.8460] [1.0263, -0.1258] [26.7757, -26.1259]; output [8.8172, -18.6532, -6.8307, 16.8506, -10.4671]; bias = 0
Sensitivity (per hidden neuron): 0.013547, 0.006661, 0.026220, 0.028352, 0.002324
Relevance (per hidden neuron): 0.1194, 0.1242, 0.1791, 0.4777, 0.0243

TABLE 2. Data for the 3 pruned MLPs with 4 hidden neurons to realize the function
(MSE goal = 0.01 & epoch <= 100000)

Network 1 (MLP 2-4-1, obtained by removing the 5th neuron from the MLP of 2-5-1)
Epoch: 2251
MSE (training): 0.000999998
MSE (testing): 0.0114834
Retrained weights and bias: hidden [-14.4387, -0.7003] [34.8366, -35.6080] [33.1285, -32.6271] [-1.5065, 0.0184]; output [-5.7036, 13.0579, -13.2457, -12.1803]; bias = 4.2349
Sensitivity (per hidden neuron): 0.027014, 0.002100, 0.001460, 0.031343
Relevance (per hidden neuron): 0.1541, 0.0274, 0.0193, 0.3818

Network 2 (MLP 2-4-1, obtained by removing the 2nd neuron from the MLP of 2-5-1)
Epoch: 1945
MSE (training): 0.000999921
MSE (testing): 0.0119645
Retrained weights and bias: hidden [33.5805, -34.2727] [-32.9313, 32.3172] [-15.8016, -0.5610] [-1.3318, 0.0103]; output [12.6267, 12.7961, -6.1782, -13.3652]; bias = -7.9468
Sensitivity (per hidden neuron): 0.001954, 0.001800, 0.026902, 0.029283
Relevance (per hidden neuron): 0.0247, 0.0230, 0.1662, 0.3914

Network 3 (MLP 2-4-1, obtained by removing the 5th neuron from the MLP of 2-5-1)
Epoch: 13253
MSE (training): 0.000999971
MSE (testing): 0.011926
Retrained weights and bias: hidden [-34.3974, 33.8148] [-34.3250, 34.7990] [-1.2909, 0.0198] [11.8097, 0.8879]; output [15.7984, -15.6503, -12.9606, 6.0722]; bias = -1.4194
Sensitivity (per hidden neuron): 0.001637, 0.001316, 0.028834, 0.028122
Relevance (per hidden neuron): 0.0259, 0.0206, 0.3737, 0.1708

Simulation 2 (Classification)
Implement an MLP to solve the XOR problem:

x1  x2  |  y
0   0   |  0
0   1   |  1
1   0   |  1
1   1   |  0

Implementation considerations
● The MLP architecture is restricted to 2-n-1.
● The convergence condition is MSE goal = 0.1 & Epoch ≤ 10^5.
● The pruning processes start with MLPs of 2-5-1 and stop at an architecture of 2-4-1.
● The relevant data used by and resulting from the pruning process are listed in Table 3 and Table 4.
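For reference, a minimal training sketch for this set-up is given below: a 2-5-1 MLP with tanh hidden units and a logistic output unit, trained on the four XOR patterns by gradient descent until the MSE goal (0.1) is met or 10^5 epochs pass. The activations, learning rate and training algorithm are assumptions of this sketch rather than the exact choices used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(5)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets

W = rng.normal(size=(5, 2)); b = np.zeros(5)  # 2-5-1 MLP
v = rng.normal(size=5);      c = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, mse_goal, max_epoch = 0.5, 0.1, 100_000

for epoch in range(1, max_epoch + 1):
    H = np.tanh(X @ W.T + b)                  # hidden activations, shape (4, 5)
    out = sigmoid(H @ v + c)
    err = out - y
    mse = float(np.mean(err ** 2))
    if mse <= mse_goal:
        break
    d_out = 2 * err * out * (1 - out) / len(X)    # dMSE/d(output pre-activation)
    d_H = np.outer(d_out, v) * (1 - H ** 2)       # back-prop through tanh
    v -= lr * (d_out @ H); c -= lr * d_out.sum()
    W -= lr * (d_H.T @ X); b -= lr * d_H.sum(axis=0)

print(f"stopped at epoch {epoch}, MSE = {mse:.4f}, outputs = {np.round(out, 2)}")
```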

TABLE 3. Data for 3 MLPs with 5 hidden neurons to realize the function
(MSE goal = 0.1 & epoch <= 100000)

Network 1 (MLP 2-5-1)
Epoch: 44518
MSE (training): 0.0999997
MSE (testing): 0.109217
Trained weights and bias: hidden [2.8188, -8.1143] [2.4420, -0.5450] [2.5766, 3.7037] [1.4955, -2.9245] [-2.5714, -3.7124]; output [14.0153, -43.9907, 28.0636, 19.5486, -68.6432]; bias = 0
Sensitivity (per hidden neuron): 0.047599, 0.035747, 0.031518, 0.027355, 0.031513
Relevance (per hidden neuron): 0.6671, 1.5725, 0.8845, 0.5348, 2.1632

Network 2 (MLP 2-5-1)
Epoch: 51098
MSE (training): 0.0999998
MSE (testing): 0.113006
Trained weights and bias: hidden [1.4852, -3.8902] [1.0692, 0.1466] [-1.0723, -0.1455] [-7.0301, 2.5695] [-3.1382, -2.8094]; output [23.9314, -19.1824, 27.1565, 14.9694, -91.6363]; bias = 0
Sensitivity (per hidden neuron): 0.037593, 0.020170, 0.020178, 0.045504, 0.032550
Relevance (per hidden neuron): 0.8997, 0.3869, 0.5480, 0.6812, 2.9828

Network 3 (MLP 2-5-1)
Epoch: 33631
MSE (training): 0.0999994
MSE (testing): 0.11369
Trained weights and bias: hidden [3.2920, 2.9094] [-1.0067, 3.4724] [-7.0578, 2.4377] [-3.2921, -2.9096] [1.5303, -0.0606]; output [45.7579, -30.0598, 16.5386, -52.2874, -29.7040]; bias = 0
Sensitivity (per hidden neuron): 0.031498, 0.039166, 0.046210, 0.031497, 0.031715
Relevance (per hidden neuron): 1.4413, 1.1773, 0.7642, 1.6469, 0.9421

TABLE 4. Data for the 3 pruned MLPs with 4 hidden neurons to realize the function
MLP 2-4-1 | Epoch | MSE (training) | MSE (testing) | Retrained weights and bias
