
Navigating The Path to Autonomous Mobility

Prof. Amnon Shashua, CEO
Prof. Shai Shalev-Shwartz, CTO

How to Solve Autonomy

- Reaching a real "full self driving" system (eyes-off)*
- While maintaining a sustainable business

* Subject to defined Operational Design Domain and product specifications

How to Solve Autonomy

          | Sensors        | AI Approach | Cost / Modularity / Geographic Scalability / MTBF
Waymo     | Lidar-centric  | CAIS        | (rated graphically in the original slide)
Tesla     | Camera only    | End-to-end  | (rated graphically in the original slide)
Mobileye  | Camera-centric | CAIS        | (rated graphically in the original slide)

Which is more likely to succeed?

End-to-End Approach

Premise: no glue code.
Reality: glue code shifted to offline; "rare & correct" vs. "common & incorrect" (the "AV alignment" problem).

Premise: unsupervised data alone can reach sufficient MTBF.
Reality: really?
- Calculator
- Shortcut learning problem
- Long tail problem

"No Glue Code": The AV Alignment Problem

End-to-end aims to maximize P[y|x], where y is the future trajectory a human would take, given the previous video, denoted x.

This learning objective prefers 'common & incorrect' over 'rare & correct'.

Examples:

1. Most drivers slow down at a stop sign but do not come to a full stop
   - Rolling stop ≡ common & incorrect
   - Full stop ≡ rare & correct
2. "Rude drivers" that cut in line
3. Reckless drivers

This is why RLHF is used in LLMs: the reward mechanism differentiates between 'correct' and 'incorrect'.

Glue code shifted to offline.
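To make the preference concrete, here is a toy sketch (invented counts and reward labels, not Mobileye's pipeline): a pure maximum-likelihood objective picks the most frequent demonstration, while an RLHF-style reward term flips the choice.

```python
# Toy numbers: 97% of demonstrations at a stop sign are rolling stops.
from collections import Counter

demos = ["rolling_stop"] * 97 + ["full_stop"] * 3

# Pure maximum likelihood (the end-to-end objective) picks the common behavior.
mle = Counter(demos)
print(max(mle, key=mle.get))  # -> rolling_stop  (common & incorrect)

# An RLHF-style reward separates correct from incorrect (labels assumed here).
reward = {"rolling_stop": 0.0, "full_stop": 1.0}
score = {a: c * reward[a] for a, c in mle.items()}  # likelihood x reward
print(max(score, key=score.get))  # -> full_stop  (rare & correct)
```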

Can Unsupervised Data Alone Reach High MTBF?

Calculators

End-to-end learning from data often misses important abstractions and therefore doesn't generalize well.

Example: learning to multiply 2 numbers, a task where even the largest LLMs struggle.

/yuntiandeng/status/1836114401213989366

Can Unsupervised Data Alone Reach High MTBF?

Calculators: what can be done? Call a tool (calculator), as ChatGPT does.

- Provide tools to LLMs
- → Compound AI Systems (CAIS)
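A minimal sketch of the tool-calling idea, with hypothetical stand-ins (`llm_answer`, `calculator`, and the routing are invented for illustration, not a real LLM API):

```python
def llm_answer(prompt: str) -> str:
    # Stand-in for a learned model: unreliable at long multiplication.
    return "roughly 7,000,000?"  # plausible-but-wrong guess

def calculator(a: int, b: int) -> int:
    return a * b  # exact arithmetic tool

def answer(prompt: str, mul_args=None) -> str:
    # A compound system routes the sub-task the model is bad at to the tool.
    if mul_args is not None:
        return str(calculator(*mul_args))
    return llm_answer(prompt)

print(answer("What is 2347 * 2986?", mul_args=(2347, 2986)))  # -> 7008142
```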

Can Unsupervised Data Alone Reach High MTBF?

Shortcut Learning Problem

Relying on different sensor modalities is a well-established methodology for increasing MTBF. The question: how to fuse the different sensors?

The "end-to-end approach": just feed all sensors into one big network and train it.

"The Shortcut Learning Problem": when different input modalities have different sample complexities, end-to-end Stochastic Gradient Descent struggles to leverage the advantages of all modalities.

Can Unsupervised Data Alone Reach High MTBF?

Shortcut Learning Problem

Consider 3 types of sensors: Radar, Lidar, Camera.

Suppose that each system has inherent limitations that cause a failure probability of ε, where ε is small (e.g., one in 1000 hours). Additionally, assume that the failures of the different sensors are independent.

We compare two options:
- Low-level, end-to-end fusion (train a system based on the combined input)
- CAIS: decomposable training of a system per modality, followed by high-level fusion

Which option is better?

Shortcut Learning Problem: A Simple Synthetic Example

Distribution: all variables are over {+1, -1}, and data is created by the following simple generative model:

y ~ B(1/2);  r1, r2, r3 ~ i.i.d. B(ε);  x4, x5 ~ i.i.d. B(1/2)
x1 = y·r1,  x2 = y·r2,  x3 = y·r3·x4·x5

This is a simple model of fusion between Lidar, Radar, and Camera systems, with the following properties:

- The 3 systems have uncorrelated errors (modeled by r1, r2, r3) of level ε
- x1 and x2 are "simpler" systems (modeling radar and lidar), while the product x3·x4·x5 equals y·r3 and is therefore a "complicated to learn" system (modeling the camera)

Theorem:
- Decomposable training of a 1-hidden-layer FCN per system, followed by majority, easily reaches an error of O(ε²)
- End-to-end SGD training will be "stuck" at an error of ε for T/ε steps, where T is the time complexity of learning the complicated system (camera) individually

What happened? Isn't end-to-end always better?

Shortcut learning problem: end-to-end SGD struggles to leverage systems with different sample complexities.
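A quick Monte-Carlo sketch of this generative model and of the decomposable route (the end-to-end SGD dynamics are the theorem's claim and are not reproduced here); B(p) is taken to mean -1 with probability p, +1 otherwise:

```python
import numpy as np

rng = np.random.default_rng(0)
eps, n = 0.05, 200_000

def pm1(p_minus, size):
    # +1 with probability 1 - p_minus, -1 with probability p_minus
    return np.where(rng.random(size) < p_minus, -1, 1)

y = pm1(0.5, n)
r1, r2, r3 = pm1(eps, n), pm1(eps, n), pm1(eps, n)
x4, x5 = pm1(0.5, n), pm1(0.5, n)
x1, x2, x3 = y * r1, y * r2, y * r3 * x4 * x5

# Decomposable route: each subsystem recovers y up to its own noise r_i,
# then a majority vote fuses the three predictions.
s1, s2, s3 = x1, x2, x3 * x4 * x5          # s_i = y * r_i
majority = np.sign(s1 + s2 + s3)

print(np.mean(s1 != y))        # ~ eps       (a single system)
print(np.mean(majority != y))  # ~ 3*eps**2  (O(eps^2), per the theorem)
```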

Can Unsupervised Data Alone Reach High MTBF?

The Long Tail Problem

In the optimistic scenario, a few rare events reduce the probability mass considerably. In the pessimistic scenario, each rare event has minimal impact on the probability mass.

[Plot: P(event) vs. events, contrasting the pessimistic scenario (too many rare events, where each one does not reduce P(event) noticeably) with the optimistic scenario.]

Long Tail of Tesla FSD

- The Tesla FSD tracker indicates that reducing variance solely through a data pipeline results in incremental progress

/news/735038/tesla-fsd-occasionally-dangerously-inept-independent-test/

* Public data on Tesla's recent 12.5.x

How to Solve Autonomy

          | Sensors        | AI Approach | Cost / Modularity / Geographic Scalability / MTBF
Waymo     | Lidar-centric  | CAIS        | (rated graphically in the original slide)
Tesla     | Camera only    | End-to-end  | (rated graphically in the original slide)
Mobileye  | Camera-centric | CAIS        | (rated graphically in the original slide)

The Bias-Variance Tradeoff in Machine Learning

Bias ('approximation error'): the learning system cannot reflect the full richness of reality.

Variance ('generalization error'): the learning system overfits to the observed data and fails to generalize to unseen examples.

[Plot: error ε, with bias and variance curves trading off and their sum giving the total error; annotated "Abstraction Injections".]

Mobileye Compound AI System (CAIS)

AV Alignment
- RSS: separates correct from incorrect

Reaching Sufficient MTBF
- Abstractions
  - Sense / Plan / Act
  - Analytic calculations: RSS, time-to-contact, …
- Redundancies
  - Sensors
  - Algo
  - High-level fusion

Mobileye Compound AI System (CAIS)

[The same diagram as above, with two additions: "Extremely Efficient AI (Shai will cover)" and PGF for the high-level fusion.]

High-Level Fusion: How to Perform

Consider a simple case: we are following a lead vehicle, and we have 3 sensors: Camera, Radar, Lidar.

If there are contradictions between the sensors, where some dictate strong braking while others do not, what should we do?

Majority: 2 out of 3 (2oo3).

Property of majority: if each modality has an error probability of at most ε, and the errors are independent, then a majority vote has an error probability of O(ε²).
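The arithmetic behind this property, as a short check (assuming exactly independent errors):

```python
eps = 1e-3  # per-sensor failure probability (e.g., one in 1000 hours)

# The 2oo3 vote errs only if at least 2 of the 3 independent sensors err:
p_2oo3 = 3 * eps**2 * (1 - eps) + eps**3
print(p_2oo3)  # ~3e-06, i.e., O(eps^2)
```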

Majority is Not Always Applicable

Now consider 3 systems, each one predicting where our lane is.

Majority is not defined for non-binary decisions, so what can be done?

The Primary-Guardian-Fallback (PGF) Fusion

We propose a general approach for extending the majority rule to non-binary decisions.

We build 3 systems:
- Primary (P): predicts where the lane is
- Guardian (G): checks whether the prediction of the primary system is valid
- Fallback (F): predicts where the lane is

Fusion:
- If the Guardian deems the Primary valid, choose the Primary
- Otherwise, choose the Fallback

Theorem: PGF has the same property as the majority rule. If the failure probability of each system is at most ε, and these probabilities are independent, then the fused system has an error of O(ε²).
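A Monte-Carlo sketch of the theorem under its stated assumptions; the 5-way lane label and the failure model are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
eps, n = 0.01, 1_000_000

truth = rng.integers(0, 5, size=n)  # non-binary ground truth (e.g., a lane id)

def system(correct):
    # An independent system that is wrong with probability eps
    wrong = (correct + 1) % 5
    return np.where(rng.random(n) < eps, wrong, correct)

primary, fallback = system(truth), system(truth)

# The guardian's validity verdict is itself wrong with probability eps
correct_verdict = primary == truth
guardian_says_valid = np.where(rng.random(n) < eps, ~correct_verdict, correct_verdict)

fused = np.where(guardian_says_valid, primary, fallback)
print(np.mean(primary != truth))  # ~ eps        (a single system)
print(np.mean(fused != truth))    # ~ 3*eps**2   (O(eps^2))
```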

Mobileye Compound AI System (CAIS)

[The same diagram as above, now highlighting "Extremely Efficient AI".]

Extremely Efficient AI

- Transformers for sensing and planning at x100 efficiency
- Inference chip (EyeQ6H): designed for efficiency
- Efficient labeling by Auto Ground Truth
- Efficient modularity by teacher-student architecture

Prologue: 6 AI Revolutions

[Timeline: Machine Learning → Deep Learning → Transformers → Generative AI → Universal Learning → Sim2Real → Reasoning]

Pre-Transformers: Object Detection Pipeline

[Pipeline diagram; stages include clustering and max suppression, and 2D-to-3D.]

Three Revolutions of Generative Pretrained Transformers (GPTs)

1. Tokenize everything
2. Generative, auto-regressive
3. Transformer architecture: 'Attention is all you need'

Three Revolutions of Generative Pretrained Transformers

Tokenize everything

- Input: transcribe each input modality (e.g., text, images) into a sequence of tokens
- Output: transcribe each output modality as a sequence of tokens and employ generative, auto-regressive models with a suitable loss function
- Accommodates: complex input and output structures (e.g., sets, sequences, trees)

Object detection pipeline example:
- Input: single image
- 'Tokenized' input: sequence of image patches
- 'Tokenized' output: sequence of 4 coordinates determining the location of the objects in the image

Three Revolutions of Generative Pretrained Transformers

Generative, Auto-regressive

Previous approach: classification or regression with fixed, small-size outputs (e.g., ImageNet).
Current approach: learn probabilities for sequences of arbitrary length (e.g., sentence generation).

Key features:
- Chain rule: models sequence dependencies
- Generative: fits data using maximum likelihood

Enables:
- Self-supervision (e.g., future words in a document)
- Handling uncertainty (multiple valid outputs, by learning P[y|x])

Three Revolutions of Generative Pretrained Transformers

Example: consider a 1000x1000-pixel image containing 4 vehicles, with the image divided into 10x10-pixel patches. What are the sizes of the probability distributions for identifying vehicle positions without the chain rule compared to with the chain rule?

Output: x_{1,1}, y_{1,1}, x_{1,2}, y_{1,2}, …, x_{4,1}, y_{4,1}, x_{4,2}, y_{4,2} (a list of 4 coordinates per vehicle)

Using the chain rule:
P[Vehicles] = P[x_{1,1}] · P[y_{1,1} | x_{1,1}] · … · P[y_{4,2} | x_{1,1}, …, x_{4,2}]
Each factor is a distribution of dim = 100.

Without the chain rule:
P[Vehicles] = P[x_{1,1}, y_{1,1}, x_{1,2}, y_{1,2}, …, x_{4,1}, y_{4,1}, x_{4,2}, y_{4,2}]
A single joint distribution of dim = 100^16 = 10^32.
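The same counting, as a two-line check:

```python
patch_positions = 1000 // 10   # 100 possible values per coordinate
num_coords = 4 * 4             # 4 vehicles x 4 coordinates each

# With the chain rule: 16 factors, each a distribution over 100 values.
print(patch_positions)                          # 100
# Without: one joint distribution over all coordinate combinations.
print(f"{patch_positions ** num_coords:.0e}")   # 1e+32
```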

Three Revolutions of Generative Pretrained Transformers

Transformer architecture: 'Attention is all you need'

Tailored for the problem of predicting P[token_{n+1} | token_n, token_{n-1}, …, token_0]

[Diagram: transformer layer n, combining self-attention across tokens with a per-token FCN ("self-reflection").]

Transformers Layer: Group Thinking Analogy

Imagine a team discussing a project:
- Each person has their own area of expertise
- They all contribute to the overall outcome
- Everyone works simultaneously, rather than one after another

Self-attention: each member listens to the others and responds in real time, adjusting their input based on important points raised.

[Illustration: "Something is fully blocking my view, maybe a truck" / "Does anyone see a close truck on our left side?" / "I partially saw a very big wheel" / "I have no idea" / "No"]

Self-reflection: each participant takes time alone to process ideas and organize their thoughts.

Transformers Layer: Self-Reflection

- Each token individually processes its 'knowledge' using a multi-layer perceptron, without interacting with other tokens

[Diagram: n input tokens of dimension d, each passed through an FCN; cost d²·n.]

Transformers Layer: Self-Attention

- Each token sends a 'query' to the other tokens, which respond with values if their 'key' matches the 'query'
- The querying token then averages the received values, facilitating inter-token connectivity

[Diagram: each token produces Query, Key, and Value vectors; relevancy scores query_i · key_j over all pairs i, j; cost n²·d.]

Example from the group thinking analogy:
- Person i asks: "Does anyone know something about x?"
- Person j responds: "Yes, I have something to say about it"
- Person j' responds: "No, I don't know anything about it"

Transformers Layer: Self-Attention

- Normalizes scores: converts raw attention scores into normalized probabilities
- Probability distribution: each set of attention scores is transformed so that its probabilities sum to 1
- Focus mechanism: this allows the model to weigh different parts of the input differently, focusing more on relevant parts based on the probabilities

[Diagram: each row of raw scores is normalized by SoftMax into α_{i,j}, which indicates how much token i wants to pay attention to token j; the message token i gets from the group is Σ_j α_{i,j}·v_j.]
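Putting the last two slides together, a minimal single-head self-attention sketch (random matrices stand in for learned projections; no masking or multi-head):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                              # n tokens, embedding dimension d
X = rng.normal(size=(n, d))              # input tokens

Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys, values per token

scores = Q @ K.T / np.sqrt(d)            # how much token i "asks" token j
alpha = np.exp(scores)
alpha /= alpha.sum(axis=1, keepdims=True)  # SoftMax: each row sums to 1

out = alpha @ V                          # message i gets: sum_j alpha_ij * v_j
print(np.allclose(alpha.sum(axis=1), 1.0), out.shape)  # True (6, 8)
```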

Transformers: Complexity

L · (n·d² + n²·d), where L is the number of layers, n·d² is the self-reflection cost, and n²·d is the self-attention cost.

Cost per layer for alternative architectures:
- Fully Connected Network (FCN): flattens the n·d values; connections: d²·n²
- Recurrent Neural Network (RNN): 'talks' only with the previous token; connections: n·d²

'Effective Sparsity' of Transformers

- Fully Connected Network (FCN): d²·n² connections; handles any modality
- Convolutional Neural Networks (CNNs): sparsity specific to images
- Recurrent Neural Networks (RNN) / Long Short-Term Memory (LSTM): Markov sparsity; context represented by a state vector
- Transformers: sparser, d²·n + n²·d; denser than the above, but effectively selects only a few past tokens for communication

The 3 Revolutions Enable a Universal Solution

- Handles all types of inputs
- Deals with uncertainty (by learning probability)
- Enables all types of outputs

The ultimate learning machine?

A Transformer End-to-End Object Detection Network

Input: images
Output: all objects

A Transformer End-to-End Object Detection Network

The 5 "Multi" problems:
- Multi-camera: surround
- Multi-frame: from multiple timestamps
- Multi-object: needs to output all (vehicles, pedestrians, hazards, …)
- Multi-scale: needs to detect far and close objects at different resolutions
- Multi-lanes: needs to assign objects to relevant lanes/crosswalks

Universality of Transformers:
- Encode image patches (from different cameras, different frames, and different resolutions) as tokens
- Encode objects as a sequence of tokens (for each object: position, velocity, dimensions, type)
- Apply a Transformer to generate the probability of output tokens given input tokens in an auto-regressive manner

Network Architecture: Vanilla Transformer

- CNN backbone for creating image tokens:
  - C = 32 high-resolution images are converted to 32 images of resolution 20x15, yielding Np = 300 "pixels" per image, with d = 256 channels
- Encoder:
  - We have N = C·Np = 9600 "image tokens", each of dimension d = 256
  - A vanilla transformer network with L layers requires O(L·(N²·d + d²·N))
  - The encoder alone requires around 100 TOPs (assuming 10 Hz, L = 32)
- Decoder:
  - Predicts a sequence of tokens representing all the objects (hundreds of tokens)
  - Vanilla AR decoding is sequential; with a KV cache, each iteration involves compute of at least O(L·N·d) per token prediction (but the real issue here is the IO of L·N·d)
  - Around 100 MB per token prediction!

Vanilla Transformers Are Not Efficient

Transformers are a brute-force approach with limited ways to utilize prior knowledge. This is the "dark side" of universality.

- Self-connectivity: n·d²
- Inter-connectivity: n²·d

In AV, n ≈ 10^4, so the n²·d term becomes a bottleneck. For comparison, GPT-3 has d = 12288 and n = 2048, so n·d² = 317B.

We pay both:
- Sample complexity (d is large, as it needs to handle all the information in each token)
- Computational complexity of inference (n and d are large)

(Both issues are known in the literature, and general mitigations such as "mixture-of-experts" and "state-space models" have been proposed.)
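A quick check of which term dominates, taking d = 256 from the earlier vanilla-transformer slide for the AV case; the GPT-3 product lands near the slide's 317B figure, up to rounding:

```python
# Comparing the two per-layer terms with the slide's numbers.
n_av, d_av = 10_000, 256                 # AV encoder: many tokens, modest dim
print(n_av**2 * d_av, n_av * d_av**2)    # 2.56e10 vs 6.6e8: attention dominates

n_gpt, d_gpt = 2048, 12288               # GPT-3: few tokens, huge dim
print(n_gpt * d_gpt**2)                  # ~3.1e11 (the slide rounds to "317B")
```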

What About End-to-End From Pixels to Control Commands?

Weaknesses of transformers:
- Brute force
- The learning objective (of learning P[y|x]) prefers 'common & incorrect' y over 'rare & correct' y
- Questionable whether it can reach a sufficiently high MTBF
- Misses important abstractions and therefore doesn't generalize well
- The Shortcut Learning Problem

(As part of CAIS, our e2e architecture has an additional head that outputs control commands directly as well, which is fine as a low-MTBF redundant component.)

Mobileye Compound AI System (CAIS)

[The same diagram as above.]

Implications:
- Must output a Sensing State
- Each subsystem must be super efficient, because we don't have a single system

Extremely Efficient AI
- Transformers for sensing and planning at x100 efficiency
- Efficient labeling by Auto Ground Truth

STAT: Sparse Typed Attention

Vanilla transformer: n²·d + d²·n

STAT:
- Token types: each token has a "type"
- Dimensionality: the embedding and self-reflection matrices may vary based on the token type
- Token connectivity: the connectivity between tokens is sparse and depends on their types
- Link tokens: we add "link" tokens for controlling the connectivity
- Inference efficiency: for our end-to-end object detection task, STAT is x100 faster at inference time and at the same time slightly improves performance

STAT: Sparse Typed Attention

Vanilla transformer: n²·d + d²·n

STAT encoder for object detection:
- Token types:
  - Image tokens: recall, we have C = 32 images, each with Np = 300 "pixels", yielding 9600 image tokens
  - We add NL = 32 "link" tokens per image
- STAT block:
  - Within each image: cross-attention between the 300 image tokens and the 32 link tokens, costing C·Np·NL·d
  - Across images: full self-attention between all link tokens, costing (C·NL)²·d
- Compared to (C·Np)²·d in vanilla transformers, we get a factor improvement of (C·Np)² / (C·Np·NL + (C·NL)²), which is approximately x100 faster in our case
- Performance: for our end-to-end object detection task, STAT is not only x100 faster, but also improves performance (we enlarge the expressivity of the network while making it much faster at inference time)

[Diagram: C = 32 images, each with 300 image tokens and 32 link tokens; cross-attention between image tokens and link tokens within each image, then cross-image self-attention among all link tokens.]
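Plugging the slide's numbers into the two cost expressions (attention terms only, constants dropped; the (C·NL)² term follows the reconstruction above):

```python
C, Np, NL, d = 32, 300, 32, 256  # images, pixels/image, link tokens/image, dim

vanilla = (C * Np) ** 2 * d                # full self-attention over 9600 tokens
stat = (C * Np * NL + (C * NL) ** 2) * d   # per-image cross-attn + link self-attn

print(f"speedup ~x{vanilla / stat:.0f}")   # ~x68 here; the slide rounds to ~x100
```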

Parallel Auto-Regressive (PAR)

We need to detect all objects in the scene: what is the order? Auto-regressive: it doesn't matter, thanks to the chain rule!

The price of sequential decoding:
- Sequential decoding is costly on all modern deep-learning chips (due to IO)
- We added un-needed "fake uncertainty" (what is the order?)

DETR (DEtection TRansformer, Facebook AI, May 2020):
- Outputs all objects in parallel
- Hungarian matching determines the relative order between the network's predictions and the order of the ground truth
- Problem: doesn't deal well with true uncertainty
  - The "truck and trailer" problem
  - Streets which can be 1 or 2 lanes, etc.

[Photo: Paris streets]

Parallel Auto-Regressive (PAR)

- The decoder contains query heads which perform cross-attention with the encoder's link tokens, entirely in parallel
- Each query head outputs, auto-regressively, 0/1/2 objects (independently of, and in parallel to, the other query heads)
- → dealing only with "true uncertainties" and not with "fake uncertainties"

[Diagram: input images → CNN tokenization → STAT encoder → query heads → output tokens]
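A toy sketch of the PAR decoding pattern; the shapes, the emit rule, and the update step are all invented for illustration. The point is H short auto-regressive loops running in parallel instead of one long sequential chain:

```python
import numpy as np

rng = np.random.default_rng(2)
H, d = 16, 32                            # query heads, embedding dimension
link_tokens = rng.normal(size=(64, d))   # encoder outputs (stand-in values)

def cross_attend(Q, K):
    # Toy cross-attention block shared by all heads
    alpha = np.exp(Q @ K.T / np.sqrt(d))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ K

queries = rng.normal(size=(H, d))
objects = [[] for _ in range(H)]
for _ in range(2):                       # each head emits at most 2 objects
    ctx = cross_attend(queries, link_tokens)  # all H heads run in parallel
    emit = ctx[:, 0] > 0                 # toy per-head "another object?" decision
    for h in np.flatnonzero(emit):
        objects[h].append(ctx[h])        # toy object token
    queries = queries + ctx              # AR step: condition on what was emitted

print([len(o) for o in objects])         # each head produced 0, 1, or 2 objects
```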

Intermediate Summary

Transformers revolutionized AI:
- The good
  - Universal, generative AI
- The bad
  - Can't separate "correct & rare" from "wrong & common"
  - Misses important abstractions
  - Questionable when very high accuracy is required
- The ugly
  - Brute-force approach, unnecessarily expensive

Working smarter with transformers:
- STAT: x100 faster & better accuracy
- PAR: x10 faster & embraces uncertainty only when it is needed

[Sidebar: the AI-revolutions timeline: Machine Learning → Deep Learning → Transformers → Generative AI → Universal Learning → Sim2Real → Reasoning]

Extremely Efficient AI

- Inference chip (EyeQ6H): designed for efficiency
- Efficient labeling by Auto Ground Truth

[Graphic: a Low efficiency ↔ High efficiency axis]

Hardware Architectures Tradeoff: Flexibility vs. Efficiency

[Spectrum from special purpose to general purpose: fixed-function, GPU, CPU; efficiency increases toward special purpose, flexibility toward general purpose.]

EyeQ6 High: 5 Distinct Architectures

[The EyeQ6H cores placed on the same special-purpose ↔ general-purpose spectrum: XNN, PMA, VMP, MPC, MIPS.]

- XNN: addresses Mobileye's high-efficiency and flexibility needs
- PMA: enables accelerating a range of parallel compute paradigms

5 Distinct Architectures: Enhanced Parallel Processing

- MIPS: a general-purpose CPU
- MPC: a CPU specialized for thread-level parallelism
- VMP:
  - Very Long Instruction Word (VLIW), Single Instruction Multiple Data (SIMD)
  - Designed for data-level parallelism of fixed-point arithmetic (e.g., converting the 12-bit raw image into a set of 8-bit images of different resolutions and tone maps)
  - Basically, performs operations on vectors of integers
- PMA:
  - Coarse-Grain Reconfigurable Array (CGRA)
  - Designed for data-level parallelism, including floating-point arithmetic
  - Basically, performs operations on vectors of floats
- XNN:
  - Dedicated to fixed functions for deep learning: convolutions, matrix multiplication / fully-connected, and related activation post-processing computations; excels at CNNs, FCNs, and Transformers

EyeQ6H vs. EyeQ5H: 2x in TOPS, But 10x in FPS!

EyeQ5H: 16 TOPS (int8), 27 W (max)
EyeQ6H: 34 TOPS (int8), 33 W (max)

[Bar chart: frames per second per neural network, across Weighted Average, Pixel Labeling, Road, and Multi Object Detection. EyeQ6H bar values: 1151, 1062, 975, 252; EyeQ5H bar values: 126, 91, 82, 25; roughly 10x the FPS from about 2x the TOPS.]

EyeQ6H vs. Orin: It's Not All About TOPS

Theoretical TOPS: 34 TOPS (int8)
