2024 Microsoft AI Systems: Reinforcement Learning


[Figure: the agent-environment interaction loop. The agent acts on the environment (the real world or a simulator) and receives an observation & reward signal; the accumulated history informs its future actions.]

At each time step t, the agent takes an action a_t; the world updates given a_t and emits an observation o_t and a reward r_t; the agent receives this observation and reward and uses the experience to guide future decisions (exploit).
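A minimal sketch of this loop, assuming the classic Gym API (env.step returning obs, reward, done, info; Gym appears in the framework list later) and a random action standing in for a learned policy:

    import gym

    env = gym.make("CartPole-v1")
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()        # agent picks action a_t
        obs, reward, done, _ = env.step(action)   # world emits o_{t+1}, r_{t+1}
        total_reward += reward                    # experience that guides future decisions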


The history h_t = (a_1, o_1, r_1, ..., a_t, o_t, r_t) records the interaction so far, and the agent chooses its action based on this history. The state is the information assumed to determine what happens next; it is a function of the history, s_t = f(h_t). A state is Markov if and only if p(s_{t+1} | s_t, a_t) = p(s_{t+1} | h_t, a_t). The goal is to select actions that maximize the total expected future reward, balancing immediate and long-term rewards. A policy π determines how the agent chooses actions; it can be deterministic, a = π(s), or stochastic, π(a|s) = P(a_t = a | s_t = s). A value function is the expected discounted sum of future rewards under a policy π.

The basic training loop: initialize the environment and the policy model, run policy inference to generate rollout data, use that data to update the policy, and repeat. Modern deep RL algorithms stack many improvements on top of this loop (Hessel, Matteo, et al. "Rainbow: Combining improvements in deep reinforcement learning.").

For PPO in particular, the real performance gains, and the effect of keeping the policy inside a trust region, come not from clipping the ratio between the new and old policies as proposed in the PPO paper, but from code-level implementation tricks (Engstrom, Logan, et al. "Implementation matters in deep policy gradients: A case study on PPO and TRPO."); a sketch of the clipped objective follows below. New algorithms and new architectures keep appearing, yet reinforcement learning code remains hard to reuse (Liang, Eric, et al. "Ray RLlib: A composable and scalable reinforcement learning library.").
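A hedged sketch of the clipped surrogate objective mentioned above, written with NumPy; the argument names (logp_new, logp_old, advantages) are illustrative assumptions, not any specific library's API:

    import numpy as np

    def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
        ratio = np.exp(logp_new - logp_old)   # pi_new(a|s) / pi_old(a|s)
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # PPO maximizes the minimum of the two surrogates; negate it to use as a loss.
        return -np.mean(np.minimum(unclipped, clipped))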

This motivates scalable reinforcement learning frameworks. [Figure: a training pipeline in which training data feeds an ML model whose parameters θ are updated by a training signal.]
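The per-iteration structure such a framework must scale is the rollout-then-update loop described earlier. A hedged sketch, with collect_rollout and update_policy as hypothetical placeholders passed in by the caller and the discounted-return computation written out:

    def discounted_returns(rewards, gamma=0.99):
        # Expected discounted sum of future rewards, accumulated backwards.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        return list(reversed(returns))

    def train(env, policy, collect_rollout, update_policy, num_iterations=100):
        for _ in range(num_iterations):
            rollout = collect_rollout(env, policy)            # policy inference -> rollout data
            returns = discounted_returns(rollout["rewards"])  # credit assignment
            update_policy(policy, rollout, returns)           # policy update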


Problems and new requirements: distributed RL architectures such as distributed prioritized experience replay separate GPU-based learners from CPU-based actors and may need to move large amounts of data between them (Horgan, Dan, et al. "Distributed prioritized experience replay."). A possible solution is a framework that provides general RL algorithms, supports development against custom environments, and supports distributed execution. Existing frameworks, compared on those criteria, with GitHub star counts at the time:

    Framework      Repo                            Stars
    ACME + Reverb  /deepmind/acme                  2.1k
    ELF            /facebookresearch/ELF           2k
    Ray + RLlib    /ray-project/ray                16.4k
    Gym            /openai/gym                     24.5k
    Baselines      /openai/baselines               11.6k
    TorchBeast     /facebookresearch/torchbeast    553
    SeedRL         /google-research/seed_rl        617
    Tianshou       /thu-ml/tianshou                3.2k
    Keras-RL       /keras-rl/keras-rl              5.1k

"Ray is a fast and simple framework for building and running distributed applications." (/ray-project/ray) Its components: the driver, a process executing the user program; workers, stateless processes that execute remote functions invoked by a driver; actors, stateful processes that execute the methods invoked on them; a distributed object store, in-memory distributed storage for the inputs and outputs of stateless computation, implemented via shared memory with Apache Arrow as the data format; a distributed scheduler, where tasks are submitted first to the local scheduler and a global scheduler takes each node's load and task constraints into account when making scheduling decisions; and the global control store, a key-value store with pub-sub functionality.

"RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications." (/ray-project/ray/tree/master/rllib) RLlib is built on Ray, and Ray's distributed scheduler is a natural fit for RLlib's hierarchical control model, as nested computation can be implemented in Ray with no central task-scheduling bottleneck. A typical run: the script initializes Ray, uses the remote decorator to execute the trainer and the actors (workers) remotely, and starts a thread for asynchronous training; see the sketch below.
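A minimal sketch of that pattern using the real Ray primitives (ray.init, the @ray.remote decorator, .remote() calls, ray.get); the RolloutWorker actor and its sample method are illustrative stand-ins, not RLlib classes:

    import ray

    ray.init()  # the driver process starts the Ray runtime

    @ray.remote
    class RolloutWorker:
        """Stateful Ray actor: would hold its own environment and policy copy."""
        def __init__(self, worker_id):
            self.worker_id = worker_id

        def sample(self, weights):
            # A real worker would run the policy in its environment;
            # here we return a dummy batch tagged with the worker id.
            return {"worker": self.worker_id, "steps": list(range(4))}

    workers = [RolloutWorker.remote(i) for i in range(4)]  # actors run remotely
    weights = None
    for iteration in range(3):
        # Fan sampling out to all workers, then gather the batches on the driver.
        batches = ray.get([w.sample.remote(weights) for w in workers])
        # ... aggregate the batches and update the trainer's policy here ...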


RLlib structures an algorithm around a PolicyGraph, a PolicyModel, and a PolicyOptimizer. The policy optimizer is responsible for the performance-critical tasks of distributed sampling, parameter updates, and managing replay buffers. The RLlib paper gives pseudocode for four policy optimizer step methods; each step() operates on a local policy graph and an array of remote evaluator replicas (a sketch of one such step follows below).

Serialization and deserialization are bottlenecks in parallel and distributed computing, especially in machine learning applications with large objects and large quantities of data. The goals of Ray's serialization layer: be efficient with large numerical data (e.g. NumPy arrays and Pandas dataframes); be as fast as Pickle for general Python types; be compatible with shared memory, allowing multiple processes to use the same data without copying it; make deserialization extremely fast; and be language independent.

Making deserialization fast matters because an object may be serialized once and then deserialized many times, and a common pattern is for many objects to be serialized in parallel, then aggregated and deserialized one at a time on a single worker, making deserialization the bottleneck. With Apache Arrow, deserialization is fast and barely visible: using only the schema, the offset of each value in the data blob can be computed without scanning through the blob (unlike Pickle, and this is what enables fast deserialization), and large arrays and other values are not copied or otherwise converted during deserialization.
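A hedged sketch of one such step(), in the spirit of the paper's synchronous-gradients pseudocode; the method names on the local graph and the remote evaluators (sample_and_compute_gradients, apply_gradients, get_weights, set_weights) are assumptions for illustration, not the exact RLlib API:

    import numpy as np
    import ray

    class SyncGradientsOptimizer:
        def __init__(self, local_graph, remote_evaluators):
            self.local_graph = local_graph              # learner-side policy graph
            self.remote_evaluators = remote_evaluators  # Ray actor handles

        def step(self):
            # 1. Distributed sampling and gradient computation on the evaluators.
            grads = ray.get([ev.sample_and_compute_gradients.remote()
                             for ev in self.remote_evaluators])
            # 2. Average the per-evaluator gradients and apply them locally.
            avg_grad = [np.mean(layer, axis=0) for layer in zip(*grads)]
            self.local_graph.apply_gradients(avg_grad)
            # 3. Broadcast the updated weights back to every evaluator.
            weights = ray.put(self.local_graph.get_weights())
            for ev in self.remote_evaluators:
                ev.set_weights.remote(weights)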

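A small illustration of the shared-memory object store behind this: ray.put stores a NumPy array once (in Arrow-compatible form), and a remote task receives a zero-copy, read-only view rather than a pickled copy. The array size and function are arbitrary examples:

    import numpy as np
    import ray

    ray.init()

    array = np.zeros((1024, 1024), dtype=np.float32)
    ref = ray.put(array)  # serialized once into the shared-memory object store

    @ray.remote
    def total(arr):
        # arr arrives as a read-only view backed by shared memory; no copy is made.
        return float(arr.sum())

    print(ray.get(total.remote(ref)))  # the ObjectRef is de-referenced automatically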
