SHANGHAI JIAO TONG UNIVERSITY
Project Title: Playing the Game of Flappy Bird with Deep Reinforcement Learning
Group Number: G-07
Group Members: Wang Wenqing, Gao Xiaoning

        end for
        Every C steps reset Q̂ := Q
    end for
end for

Experiments

This section describes our algorithm's parameter settings and the analysis of the experimental results.

Parameters Settings

Figure 6 illustrates the layer settings of our CNN. The network has 3 convolutional hidden layers followed by 2 fully connected hidden layers, and Table 1 shows the detailed parameters of every layer. We use max pooling only after the first convolutional layer, and the ReLU activation function for the hidden-layer outputs. For training, we use the Adam optimizer to update the CNN's parameters.

Figure 6: The layer setting of the CNN: 3 convolutional layers followed by 2 fully connected layers.

Table 1: The detailed layer settings of the CNN

Layer      Input      Filter size   Stride   Num filters   Activation   Output
conv1      80×80×4    8×8           4        32            ReLU         20×20×32
max_pool   20×20×32   2×2           2        –             –            10×10×32
conv2      10×10×32   4×4           2        64            ReLU         5×5×64
conv3      5×5×64     3×3           1        64            ReLU         5×5×64
fc4        5×5×64     –             –        512           ReLU         512
fc5        512        –             –        2             Linear       2

Table 2 lists the training parameters of the DQN. We use an ε that decays from 0.1 to 0.001 to balance exploration and exploitation. The batch stochastic gradient descent optimizer is Adam with a batch size of 32. Finally, we also allocate a large replay memory.

Table 2: The training parameters of the DQN

Parameter                Value
Observe steps            100000
Explore steps            3000000
Initial epsilon          0.1
Final epsilon            0.001
Replay memory size       50000
Batch size               32
Learning rate            0.000001
FPS                      30
Optimization algorithm   Adam
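To make Tables 1 and 2 concrete, the following is a minimal sketch of a network with the layer settings of Table 1 and an Adam optimizer with the learning rate of Table 2. The report does not state which framework was used; the sketch assumes PyTorch, and the padding values are our own assumptions, chosen so that each layer reproduces the output sizes listed in Table 1.

```python
import torch
import torch.nn as nn

class FlappyBirdDQN(nn.Module):
    """CNN from Table 1: three convolutional layers followed by two fully connected layers."""

    def __init__(self, num_actions: int = 2):
        super().__init__()
        # conv1: 80x80x4 -> 20x20x32 (8x8 kernel, stride 4), then 2x2 max pooling -> 10x10x32
        self.conv1 = nn.Conv2d(4, 32, kernel_size=8, stride=4, padding=2)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # conv2: 10x10x32 -> 5x5x64 (4x4 kernel, stride 2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1)
        # conv3: 5x5x64 -> 5x5x64 (3x3 kernel, stride 1)
        self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
        # fc4: flattened 5*5*64 -> 512 (ReLU); fc5: 512 -> 2 Q-values (linear output)
        self.fc4 = nn.Linear(5 * 5 * 64, 512)
        self.fc5 = nn.Linear(512, num_actions)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: a batch of stacked game frames, shape (N, 4, 80, 80)
        x = self.pool(self.relu(self.conv1(x)))
        x = self.relu(self.conv2(x))
        x = self.relu(self.conv3(x))
        x = x.flatten(start_dim=1)
        x = self.relu(self.fc4(x))
        return self.fc5(x)  # one Q-value per action: flap / do nothing

model = FlappyBirdDQN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)  # learning rate from Table 2
```

The output layer fc5 is kept linear, matching the "Linear" activation in Table 1, so that the two Q-values are unconstrained regression targets.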
Results Analysis

We trained our model for about 4 million epochs. Figure 7 shows the weights and biases of the CNN's first hidden layer. They eventually concentrate around 0 with low variance, which stabilizes the CNN's output Q-values and reduces the probability of random actions. The stability of the CNN's parameters leads to an optimal policy.

Figure 7: The left (right) plot is the histogram of the weights (biases) of the CNN's first hidden layer.

Figure 8 shows the cost value of the DQN during training. The cost has a slow downward trend and is close to 0 after 3.5 million epochs. This means that the DQN has learned the most common part of the state space and will perform the optimal action when it encounters a known state. In short, the DQN has obtained its best action policy.

Figure 8: The DQN's cost function: the plot shows the training progress of the DQN, trained for about 4 million epochs.

When playing Flappy Bird, the agent receives a reward of 1 if the bird gets through a pipe, -1 if it dies, and 0.1 otherwise. Figure 9 shows the average reward returned by the environment. Its stability in the final stage of training means that the agent automatically chooses the best action and the environment returns the best reward in turn. The agent and the environment have entered a friendly interaction, guaranteeing the maximal total reward.

Figure 9: The average reward returned by the environment, averaged over every 1000 epochs.

As Figure 10 shows, the maximal Q-value predicted by the CNN converges and stabilizes after about 100,000 epochs. This means that the CNN can accurately predict the quality of the actions in a given state, so we can steadily perform the action with the maximal Q-value. The convergence of the maximal Q-values indicates that the CNN has explored the state space widely and approximates the environment well.

Figure 10: The average maximal Q-value obtained from the CNN's output, averaged over every 1000 epochs.

Figure 11 illustrates the DQN's action strategy. If the predicted maximal Q-value is high, we are confident that the bird will get through the gap when performing the action with the maximal Q-value, as at points A and C. If the maximal Q-value is relatively low and we perform that action, the bird might hit the pipe, as at point B. In the final stage of training the maximal Q-value is dramatically high, meaning that we are confident the bird will get through the gaps when performing the actions with the maximal Q-value (a code sketch of this action-selection rule is given after the Conclusion).

Figure 11: The leftmost plot shows the CNN's predicted maximal Q-value for a 100-frame segment of the game; the three screenshots correspond to the frames labeled A, B, and C, respectively.

Conclusion

We successfully used a DQN to play Flappy Bird, and it can outperform human players. The DQN automatically learns from the environment using only raw images, without prior knowledge, which gives it the power to play almost any simple game. Moreover, the use of a CNN as a function approximator allows the DQN to deal with large environments with an almost infinite state space. Last but not least, the CNN represents the feature space well without handcrafted feature extraction, greatly reducing manual work.
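For completeness, the following is a minimal sketch of the reward scheme and the action-selection rule discussed in the Results Analysis, together with the ε decay of Table 2. It again assumes PyTorch, and the constant and function names are hypothetical; they do not appear in the report.

```python
import random
import torch

# Reward scheme described in the Results Analysis
REWARD_PIPE_PASSED = 1.0   # the bird gets through a pipe
REWARD_DEAD = -1.0         # the bird dies
REWARD_ALIVE = 0.1         # every other frame

def anneal_epsilon(step: int, observe: int = 100_000, explore: int = 3_000_000,
                   eps_initial: float = 0.1, eps_final: float = 0.001) -> float:
    """Linearly decay epsilon from 0.1 to 0.001 over the explore phase (Table 2)."""
    if step < observe:
        return eps_initial
    fraction = min(1.0, (step - observe) / explore)
    return eps_initial + fraction * (eps_final - eps_initial)

def select_action(model: torch.nn.Module, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy selection: explore with probability epsilon, otherwise
    take the action with the maximal predicted Q-value (cf. Figure 11)."""
    if random.random() < epsilon:
        return random.randrange(2)            # 0 = do nothing, 1 = flap
    with torch.no_grad():
        q_values = model(state.unsqueeze(0))  # state: (4, 80, 80) stacked frames
    return int(q_values.argmax(dim=1).item())
```

In this sketch the observe phase of Table 2 keeps ε at its initial value; afterwards ε is annealed linearly toward the final value over the explore steps.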

