SHANGHAI JIAO TONG UNIVERSITY

Project Title: Playing the Game of Flappy Bird with Deep Reinforcement Learning
Group Number: G-07
Group Members: Wang Wenqing, Gao Xiaoning

Contents
1 Introduction
[table of contents and earlier sections are missing from this extract; the text resumes at the end of the DQN training algorithm]

        end for
        Every C steps reset Q̂ := Q
    end for
end for

Experiments

This section describes our algorithm's parameter settings and the analysis of the experimental results.

Parameter Settings

Figure 6 illustrates our CNN's layer settings. The neural network has 3 convolutional hidden layers followed by 2 fully connected hidden layers. Table 1 shows the detailed parameters of every layer. We use max pooling only after the first convolutional layer, and we use the ReLU activation function to produce the neural output.

Figure 6: The layer setting of the CNN: this CNN has 3 convolutional layers followed by 2 fully connected layers.

As for training, we use the Adam optimizer to update the CNN's parameters.

Table 1: The detailed layer settings of the CNN

Layer     Input      Filter size  Stride  Num filters (units)  Activation  Output
conv1     80×80×4    8×8          4       32                   ReLU        20×20×32
max_pool  20×20×32   2×2          2       -                    -           10×10×32
conv2     10×10×32   4×4          2       64                   ReLU        5×5×64
conv3     5×5×64     3×3          1       64                   ReLU        5×5×64
fc4       5×5×64     -            -       512                  ReLU        512
fc5       512        -            -       2                    Linear      2

Table 2 lists all the parameter settings of DQN. We use a decayed ε ranging from 0.1 to 0.001 to balance exploration and exploitation. What's more, Table 2 shows that the batch stochastic gradient descent optimizer is Adam, with a batch size of 32. Finally, we also allocate a large replay memory.

Table 2: The training parameters of DQN

Parameter               Value
Observe steps           100,000
Explore steps           3,000,000
Initial epsilon         0.1
Final epsilon           0.001
Replay memory           50,000
Batch size              32
Learning rate           0.000001
FPS                     30
Optimization algorithm  Adam

Results Analysis

We train our model for about 4 million epochs. Figure 7 shows the weights and biases of the CNN's first hidden layer. The weights and biases finally concentrate around 0 with low variance, which directly stabilizes the CNN's output Q-values and reduces the probability of random actions. The stability of the CNN's parameters leads to obtaining the optimal policy.

Figure 7: The left (right) figure is the histogram of the weights (biases) of the CNN's first hidden layer.

Figure 8 shows the cost value of DQN during training. The cost function has a slow downward trend and is close to 0 after 3.5 million epochs. This means that DQN has learned the most common state subspace and will perform the optimal action when coming across a known state. In a word, DQN has obtained its best action policy.

Figure 8: DQN's cost function: the plot shows the training progress of DQN. We trained our model for about 4 million epochs.

When playing Flappy Bird, if the bird gets through a pipe, we give a reward of 1; if it dies, −1; otherwise, 0.1. Figure 9 shows the average reward returned from the environment. The stability in the final training stage means that the agent can automatically choose the best action, and the environment in turn gives the best reward. The agent and the environment have entered into a friendly interaction, guaranteeing the maximal total reward.

Figure 9: The average reward returned from the environment. We average the returned reward every 1000 epochs.

As Figure 10 shows, the predicted max Q-value from the CNN converges and stabilizes at a value after about 100,000 epochs. This means that the CNN can accurately predict the quality of actions in a specific state, and we can steadily perform the actions with the max Q-value. The convergence of the max Q-values indicates that the CNN has explored the state space widely and approximated the environment well.

Figure 10: The average max Q-value obtained from the CNN's output. We average the max Q-value every 1000 epochs.

Figure 11 illustrates DQN's action strategy. If the predicted max Q-value is high, we are confident that the bird will get through the gap when performing the action with the max Q-value, as at points A and C. If the max Q-value is relatively low and we perform the action, we might hit the pipe, as at point B. In the final stage of training, the max Q-value is dramatically high, meaning that we are confident of getting through the gaps when performing the actions with the max Q-value.

Figure 11: The leftmost plot shows the CNN's predicted max Q-value for a 100-frame segment of the game Flappy Bird. The three screenshots correspond to the frames labeled A, B, and C respectively.

Conclusion

We successfully use DQN to play Flappy Bird, and it can outperform human beings. DQN can automatically learn knowledge from the environment using only raw images to play games, without prior knowledge. This feature gives DQN the power to play almost any simple game. Moreover, the use of a CNN as a function approximator allows DQN to deal with large environments that have almost infinite state spaces. Last but not least, a CNN can also represent the feature space well without handcrafted feature extraction, reducing massive manual work.
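As an illustration of the layer settings in Table 1 above, the spatial dimensions (80×80 → 20×20 → 10×10 → 5×5 → 5×5) are consistent with TensorFlow-style 'SAME' padding, where the output size is ceil(input / stride). The padding scheme is an assumption here; the report does not state it explicitly.

```python
import math


def same_out(size: int, stride: int) -> int:
    """Output spatial size of a conv/pool layer under 'SAME' padding."""
    return math.ceil(size / stride)


# Walk one spatial dimension through the layers of Table 1 (input frames are 80x80x4).
s = same_out(80, 4)   # conv1, 8x8 filter, stride 4
assert s == 20        # -> 20x20x32
s = same_out(s, 2)    # max_pool, 2x2, stride 2
assert s == 10        # -> 10x10x32
s = same_out(s, 2)    # conv2, 4x4 filter, stride 2
assert s == 5         # -> 5x5x64
s = same_out(s, 1)    # conv3, 3x3 filter, stride 1
assert s == 5         # -> 5x5x64, flattened to 1600 units feeding fc4
```

Under 'VALID' padding the same filters and strides would not reproduce these output sizes, which is why 'SAME' is the natural reading of Table 1.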
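The exploration schedule in Table 2 (ε held at 0.1 during the 100,000 observe steps, then decayed to 0.001 over the 3,000,000 explore steps) could be sketched as below. A linear anneal is an assumption; the report only says the ε is "decayed".

```python
def epsilon_at(step: int,
               observe_steps: int = 100_000,
               explore_steps: int = 3_000_000,
               initial_epsilon: float = 0.1,
               final_epsilon: float = 0.001) -> float:
    """Linearly anneal epsilon from initial to final over the explore phase.

    During the observe phase the agent acts with fixed epsilon while the
    replay memory fills; afterwards epsilon decays linearly, then stays
    at its final value.
    """
    if step <= observe_steps:
        return initial_epsilon
    frac = min(1.0, (step - observe_steps) / explore_steps)
    return initial_epsilon + frac * (final_epsilon - initial_epsilon)
```

At each step the agent would take a random action with probability `epsilon_at(step)` and otherwise the action with the max predicted Q-value.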
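The "large replay memory" of Table 2 (capacity 50,000, batches of 32) could be sketched as a fixed-size buffer with uniform sampling; uniform sampling is the standard DQN choice and an assumption here, since the report does not describe its sampling scheme.

```python
import random
from collections import deque


class ReplayMemory:
    """Fixed-capacity experience buffer; old transitions are evicted first."""

    def __init__(self, capacity: int = 50_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, terminal):
        """Store one transition (s, a, r, s', done)."""
        self.buffer.append((state, action, reward, next_state, terminal))

    def sample(self, batch_size: int = 32):
        """Draw a uniform random minibatch without replacement."""
        return random.sample(self.buffer, batch_size)
```

Training would only begin sampling once the observe phase has filled the buffer past one batch.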
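The reward scheme described in the Results Analysis (+1 for passing a pipe, −1 on death, 0.1 otherwise) is simple enough to state directly; the argument names below are illustrative, not taken from the report's code.

```python
def reward(passed_pipe: bool, dead: bool) -> float:
    """Reward shaping from the report: +1 through a pipe, -1 on death, 0.1 per surviving frame."""
    if dead:
        return -1.0
    if passed_pipe:
        return 1.0
    return 0.1
```

The small per-frame reward of 0.1 encourages the agent to stay alive between pipes rather than receiving a signal only at pipes and deaths.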