A Method for Constructing Embedded FPGA Convolutional Neural Networks for Edge Computing
1. Overview of This Article

With the rapid development of the Internet of Things and related technologies, edge computing, a computing model that pushes data processing and analysis tasks to the edge of the network, is receiving growing attention. Edge computing reduces data transmission latency, improves data processing efficiency, and plays an important role in protecting data privacy and security. Within edge computing, the embedded FPGA (Field-Programmable Gate Array), with its high degree of parallelism and reprogrammability, has become an ideal platform for efficient convolutional neural network (CNN) inference.

This paper explores methods for constructing CNNs on embedded FPGAs for edge computing. We first analyze the advantages of combining edge computing with embedded FPGAs, then describe in detail how to design and implement an efficient CNN architecture on an FPGA. We also discuss how to optimize network parameters and algorithms to achieve the best performance and efficiency under limited hardware resources. Finally, we summarize the main contributions of this paper and look ahead to future research directions and application prospects.

By reading this article, readers will gain a deeper understanding of the role of embedded FPGAs in edge computing, and of how to use FPGAs to build and optimize convolutional neural networks, thereby promoting the application and development of edge computing in the Internet of Things and related fields.

2. Introduction to Relevant Technologies

With the rapid development of technology, convolutional neural networks have achieved remarkable results in fields such as image recognition, speech recognition, and natural language processing. However, traditional CNN models typically run on high-performance servers; resource-constrained edge devices struggle to provide the compute and storage they require. How to achieve efficient CNN inference on edge devices has therefore become an urgent problem.

Embedded FPGA technology for edge computing offers a way to solve this problem. An FPGA is a programmable logic device with high flexibility and parallel processing capability, well suited to accelerating CNN inference. By mapping a CNN model onto an FPGA, the device's parallelism and configurability can be fully exploited to perform inference efficiently.

The key to building a CNN on an embedded FPGA is mapping the model onto the device efficiently and making full use of its hardware resources. This involves a series of technical issues, including compression and optimization of the CNN model, allocation and scheduling of FPGA hardware resources, and the parallel computing strategy of the CNN on the FPGA.

Model compression and optimization is one of the key steps. Because FPGA hardware resources are limited, the CNN must be compressed and optimized to reduce its size and computational complexity so that it can run on the device. Techniques such as pruning, quantization, and model distillation can effectively reduce model size and computational complexity while largely preserving accuracy.

Allocation and scheduling of FPGA hardware resources is another key step. When mapping a CNN onto an FPGA, its hardware resources, including compute units, on-chip memory, and I/O, must be allocated and scheduled sensibly. This requires jointly considering the characteristics of the CNN model and the hardware characteristics of the FPGA to achieve the best performance and resource utilization.

The parallel computing strategy of the CNN on the FPGA is also a key step. FPGAs offer a high degree of parallelism that can be exploited to accelerate inference. When mapping a CNN onto an FPGA, a sensible parallelization strategy must be designed, including data-level parallelism and parallelization of the computation itself.

In summary, constructing an embedded FPGA CNN for edge computing involves model compression and optimization, FPGA hardware resource allocation and scheduling, and parallel computing strategies. By applying these techniques together, efficient CNN inference can be achieved on edge devices, promoting the application and development of these technologies in edge computing.

3. FPGA CNN Construction Method for Edge Computing

Edge computing is an important trend in computing: it pushes computing tasks from centralized data centers to the edge of the network to provide faster responses and lower latency. In this context, the FPGA, as a highly flexible and configurable hardware platform, provides strong support for edge computing. In CNN applications in particular, the FPGA's parallel processing capability and customizability make it an ideal implementation platform. The construction method proceeds through the following steps.

CNN model selection and optimization: choose a suitable CNN model for the specific application scenario. Given the compute and power constraints of edge devices, lightweight models such as MobileNet or ShuffleNet are usually preferred. To improve the model's efficiency on the FPGA, it should also be optimized, for example through pruning and quantization.

Hardware architecture design: based on the selected CNN model, design a hardware architecture suited to FPGA implementation. This includes determining the number and type of compute units and the interconnections between them, and considering how to make effective use of the FPGA's parallelism and on-chip storage.

High-level synthesis (HLS): use an HLS tool, such as Xilinx Vivado HLS or the Intel HLS Compiler, to convert the CNN model into hardware description language (HDL) code that can run on the FPGA. HLS tools automatically translate C/C++ code into HDL, greatly simplifying hardware design.

Hardware implementation and verification: deploy the generated HDL code onto the FPGA and carry out implementation, including hardware resource allocation and timing optimization. After implementation, hardware verification is required to ensure the correctness and performance of the CNN on the FPGA.

Performance evaluation and optimization: evaluate the implemented CNN with profiling tools such as the Xilinx Vivado profiler or Intel VTune Amplifier. Based on the results, optimize the hardware architecture or code to improve throughput and energy efficiency.

Through the above steps, an FPGA CNN system for edge computing can be built. Such a system fully exploits the FPGA's parallelism and customizability, performs CNN inference efficiently, and provides strong support for edge computing applications.

4. Experiments and Performance Analysis

To verify the effectiveness of the proposed embedded FPGA CNN construction method for edge computing, we conducted a series of experiments and performance analyses. This section describes the experimental environment, datasets, network models, and baseline methods, presents the experimental results, and discusses them in depth.

The experimental environment consists of a server equipped with an Intel Xeon Silver 4216 processor and a development board based on a Xilinx Zynq-7000 series FPGA. The server is used to train the CNN models, while the FPGA board is used to deploy and test them. We also used the Xilinx Vivado HLS and Vivado High-Level Synthesis tool suite for hardware design and optimization.

To verify the generality of our method, we selected two classic image classification datasets: CIFAR-10 and ImageNet. CIFAR-10 contains 60,000 32x32 color images in 10 classes, of which 50,000 are used for training and 10,000 for testing. ImageNet contains 1.28 million images in 1,000 classes for training and validation.

For network models we chose two representative CNNs: LeNet-5 and ResNet-50. LeNet-5 is a lightweight network suited to small datasets such as CIFAR-10, while ResNet-50 is a deep network suited to large datasets such as ImageNet.

We compare against three baseline methods:

(1) CPU baseline: run CNN inference on the server's CPU to assess the effect of FPGA acceleration.

(2) GPU baseline: run CNN inference on the server's GPU to assess the FPGA's performance relative to a GPU.

(3) Traditional FPGA method: map the CNN onto the FPGA using a conventional FPGA design flow to assess the performance difference between our method and traditional approaches.

The experimental results for LeNet-5 on CIFAR-10 are shown in Table 1. As Table 1 shows, the inference speed of our method on the FPGA is significantly better than the CPU and GPU baselines, with lower power consumption. Compared with the traditional FPGA method, our method improves both inference speed and power consumption.

The experimental results for ResNet-50 on ImageNet are shown in Table 2. As Table 2 shows, the inference speed of our method on the FPGA again exceeds the CPU and GPU baselines with lower power consumption, and it likewise outperforms the traditional FPGA method in both speed and power.

To further analyze these advantages, we discuss the results in depth. Through hardware optimization and parallelization strategies, the method fully utilizes the FPGA's parallel computing power, achieving high inference speed. Through hardware resource sharing and dynamic scheduling, it effectively reduces power consumption and resource usage. And through a flexible hardware design flow, the model can be deployed and optimized on different FPGA platforms, improving the method's generality and scalability.

The experimental results show that the proposed embedded FPGA CNN construction method for edge computing offers significant performance advantages and application value.

5. Conclusion and Outlook

With the growing demand for edge computing, the need for high-performance, low-power embedded FPGA convolutional neural networks is increasingly prominent. This paper has discussed in depth the construction of embedded FPGA CNNs for edge computing, aiming to provide a useful reference for researchers and practitioners in related fields.

In the conclusion, we first summarize the research results. By studying the algorithmic characteristics of CNNs and combining them with the hardware characteristics of FPGAs, we proposed an embedded FPGA CNN construction method for edge computing. The method effectively exploits the FPGA's parallel computing power and reconfigurability to achieve efficient, low-power CNN inference. Experimental results show that, compared with traditional CPU and GPU implementations, the method significantly improves performance while keeping power consumption effectively under control.
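Section 2 names quantization as one of the key compression steps. As a minimal sketch (not the paper's actual scheme), the following shows symmetric per-tensor 8-bit post-training quantization of a weight tensor; the function names `quantize_int8` and `dequantize` are hypothetical, introduced only for illustration:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Symmetric per-tensor quantization: map float weights in [-max|w|, +max|w|]
// onto int8 values in [-127, 127]. `scale` converts back: w ~= q * scale.
struct QuantResult {
    std::vector<int8_t> q;
    float scale;
};

QuantResult quantize_int8(const std::vector<float>& w) {
    float max_abs = 0.0f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    QuantResult r;
    r.scale = scale;
    r.q.reserve(w.size());
    for (float v : w) {
        // Round to nearest integer, then clamp to the int8 range.
        int iv = static_cast<int>(std::lround(v / scale));
        iv = std::max(-127, std::min(127, iv));
        r.q.push_back(static_cast<int8_t>(iv));
    }
    return r;
}

float dequantize(int8_t q, float scale) { return q * scale; }
```

Per-channel scales and asymmetric zero-points are common refinements; the resulting int8 weights then feed the FPGA's integer multiply-accumulate units directly.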
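Pruning, the other compression technique named in Section 2, can be sketched as one-shot magnitude pruning: zero out the smallest-magnitude weights until a target sparsity is reached. This is an illustrative baseline, not the pruning scheme used in the paper:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Magnitude pruning: zero out the fraction `sparsity` of weights with the
// smallest absolute values. Returns the number of weights actually pruned.
size_t prune_by_magnitude(std::vector<float>& w, float sparsity) {
    if (w.empty() || sparsity <= 0.0f) return 0;
    size_t n_prune = static_cast<size_t>(w.size() * sparsity);
    if (n_prune == 0) return 0;
    // Find the magnitude threshold with a partial sort over a copy of |w|.
    std::vector<float> mags(w.size());
    std::transform(w.begin(), w.end(), mags.begin(),
                   [](float v) { return std::fabs(v); });
    std::nth_element(mags.begin(), mags.begin() + n_prune - 1, mags.end());
    float threshold = mags[n_prune - 1];
    size_t pruned = 0;
    for (float& v : w) {
        // Cap at n_prune so ties at the threshold do not over-prune.
        if (std::fabs(v) <= threshold && pruned < n_prune) {
            v = 0.0f;
            ++pruned;
        }
    }
    return pruned;
}
```

In practice pruning is interleaved with fine-tuning to recover accuracy, and structured (filter-level) pruning maps better onto FPGA datapaths than the unstructured variant shown here.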
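Sections 2 and 3 describe expressing the network in C/C++ for an HLS tool and parallelizing the convolution loops. The following is a minimal HLS-style sketch (a hypothetical kernel, not the paper's design) of a 3x3 single-channel fixed-point convolution with Vivado HLS-style PIPELINE/UNROLL directives; a standard compiler ignores the pragmas, so the same function can be verified on a CPU:

```cpp
#include <cstdint>

// 3x3 "valid" convolution on one channel: int16 data, int32 accumulation.
// Under Vivado HLS, the UNROLL pragmas ask the tool to instantiate all nine
// multiply-accumulates as parallel hardware, and PIPELINE II=1 targets one
// output pixel per clock cycle. A plain C++ compiler ignores these pragmas.
constexpr int H = 6, W = 6, K = 3;

void conv3x3(const int16_t in[H][W], const int16_t kernel[K][K],
             int32_t out[H - K + 1][W - K + 1]) {
    for (int r = 0; r < H - K + 1; ++r) {
        for (int c = 0; c < W - K + 1; ++c) {
#pragma HLS PIPELINE II = 1
            int32_t acc = 0;
            for (int kr = 0; kr < K; ++kr) {
#pragma HLS UNROLL
                for (int kc = 0; kc < K; ++kc) {
#pragma HLS UNROLL
                    acc += static_cast<int32_t>(in[r + kr][c + kc]) *
                           static_cast<int32_t>(kernel[kr][kc]);
                }
            }
            out[r][c] = acc;
        }
    }
}
```

A real design would add streaming input/output interfaces and tile the feature maps to fit in on-chip BRAM; the unroll factor is the main knob trading DSP usage for throughput.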
