MOOC: Data Mining for Transportation (交通數(shù)據(jù)挖掘技術(shù)), Southeast University, China University MOOC. Answer key for Tests 1-7.

Test 1

1. Which one is not a description of data mining?
A. Extraction of interesting patterns or knowledge
B. Exploration and analysis by automatic or semi-automatic means
C. Discovering meaningful patterns from large quantities of data
D. Appropriate statistical analysis methods to analyze the data collected
Answer: D

2. Which one describes the right process of knowledge discovery?
A. Selection - Preprocessing - Transformation - Data mining - Interpretation/Evaluation
B. Preprocessing - Transformation - Data mining - Selection - Interpretation/Evaluation
C. Data mining - Selection - Interpretation/Evaluation - Preprocessing - Transformation
D. Transformation - Data mining - Selection - Preprocessing - Interpretation/Evaluation
Answer: A

3. Which one does not belong to the process of KDD?
A. Data mining
B. Data description
C. Data cleaning
D. Data selection
Answer: B

4. Which one is not a correct alternative name for data mining?
A. Knowledge extraction
B. Data archeology
C. Data dredging
D. Data harvesting
Answer: D

5. Which one is not a nominal variable?
A. Occupation
B. Education
C. Age
D. Color
Answer: C

6. Which one is wrong about classification and regression?
A. Regression analysis is a statistical methodology that is most often used for numeric prediction.
B. We can construct classification models (functions) without some training examples.
C. Classification predicts categorical (discrete, unordered) labels.
D. Regression models predict continuous-valued functions.
Answer: B

7. Which one is wrong about clustering and outliers?
A. Clustering belongs to supervised learning.
B. Principles of clustering include maximizing intra-class similarity and minimizing inter-class similarity.
C. Outlier analysis can be useful in fraud detection and rare-events analysis.
D. An outlier is a data object that does not comply with the general behavior of the data.
Answer: A

8. About data processing, which one is wrong?
A. When making data discrimination, we compare the target class with one or a set of comparative classes (the contrasting classes).
B. When making data classification, we predict categorical labels excluding unordered ones.
C. When making data characterization, we summarize the data of the class under study (the target class) in general terms.
D. When making data clustering, we group data to form new categories.
Answer: B

9. Outlier mining such as the density-based method belongs to supervised learning. (True/False)
Answer: False

10. Support vector machines can be used for classification and regression. (True/False)
Answer: True

Test 2

1. Which is not a reason we need to preprocess the data?
A. To save time
B. To make the result meet our hypothesis
C. To avoid unreliable output
D. To eliminate noise
Answer: B

2. Which is not one of the major tasks in data preprocessing?
A. Cleaning
B. Integration
C. Transition
D. Reduction
Answer: C

3. How is a new feature space constructed by PCA?
A. By choosing the most important features you think.
B. By normalizing input data.
C. By selecting features randomly.
D. By eliminating the weak components to reduce the size of the data.
Answer: D

4. Which one is wrong about methods for discretization?
A. Histogram analysis and binning are both unsupervised methods.
B. Clustering analysis only belongs to top-down split.
C. Interval merging by χ2 analysis can be applied recursively.
D. Decision-tree analysis is entropy-based discretization.
Answer: B

5. Which one is wrong about equal-width (distance) partitioning and equal-depth (frequency) partitioning?
A. Equal-width partitioning is the most straightforward, but outliers may dominate the presentation.
B. Equal-depth partitioning divides the range into N intervals, each containing approximately the same number of samples.
C. The intervals of the former are not equal.
D. The number of tuples is the same when using the latter.
Answer: C

6. Which one is a wrong way to normalize data?
A. Min-max normalization
B. Simple scaling
C. Z-score normalization
D. Normalization by decimal scaling
Answer: B

7. Which are right ways to fill in missing values?
A. Smart mean
B. Probable value
C. Ignore
D. Falsify
Answer: A, B, C

8. Which are right ways to handle noisy data?
A. Regression
B. Clustering
C. WT (wavelet transform)
D. Manual
Answer: A, B, C, D

9. Which is right about wavelet transforms?
A. Wavelet transforms store large fractions of the strongest wavelet coefficients.
B. The DWT decomposes each segment of a time series via the successive use of low-pass and high-pass filtering at appropriate levels.
C. Wavelet transforms can be used for reducing data and smoothing data.
D. Wavelet transforms mean applying to pairs of data, resulting in two sets of data of the same length.
Answer: B, C

10. Which are the commonly used ways of sampling?
A. Simple random sample without replacement
B. Simple random sample with replacement
C. Stratified sample
D. Cluster sample
Answer: A, B, C, D

11. Discretization means dividing the range of a continuous attribute into intervals. (True/False)
Answer: True

Test 3

1. What is the difference between an eager learner and a lazy learner?
A. Eager learners generate a model for classification while lazy learners do not.
B. Eager learners classify the tuple based on its similarity to the stored training tuples while lazy learners do not.
C. Eager learners simply store data (or do only a little minor processing) while lazy learners do not.
D. Lazy learners generate a model for classification while eager learners do not.
Answer: A

2. How to choose the optimal value for K?
A. Cross-validation can be used to determine a good value by using an independent dataset to validate the K values.
B. Low values for K (like K=1 or K=2) can be noisy and subject to the effect of outliers.
C. A large K value can reduce the overall noise, so the value for K can be as big as possible.
D. Historically, the optimal K for most datasets has been between 3 and 10.
Answer: A, B, D

3. What are the major components in KNN?
A. How to measure similarity?
B. How to choose K?
C. How are class labels assigned?
D. How to decide the distance?
Answer: A, B, C

4. Which of the following ways can be used to obtain attribute weights for attribute-weighted KNN?
A. Prior knowledge/experience.
B. PCA, FA (factor analysis method).
C. Information gain.
D. Gradient descent, simplex methods and genetic algorithms.
Answer: A, B, C, D

5. At the learning stage, KNN finds the K closest neighbors and then decides the class from the K identified nearest labels. (True/False)
Answer: False

6. At the classification stage, KNN stores all instances or some typical ones. (True/False)
Answer: False

7. Normalizing the data can solve the problem that different attributes have different value ranges. (True/False)
Answer: True

8. By Euclidean distance or Manhattan distance, we can calculate the distance between two instances. (True/False)
Answer: True

9. Data normalization before measuring distance can avoid errors caused by different dimensions, self-variations, or large numerical differences. (True/False)
Answer: True

10. The way to obtain the regression value for a new instance from the k nearest neighbors is to calculate the average value of the k neighbors. (True/False)
Answer: True

11. The way to obtain the classification for a new instance from the k nearest neighbors is to take the majority class of the k neighbors. (True/False)
Answer: True

12. The way to obtain instance weights for distance-weighted KNN is to calculate the reciprocal of the squared distance between the object and its neighbors. (True/False)
Answer: True

Test 4

1. Which descriptions are right about nodes in a decision tree?
A. Internal nodes test the value of particular features.
B. Leaf nodes specify the class.
C. Branch nodes decide the result.
D. Root nodes decide the start point.
Answer: A, B

2. Computing information gain for a continuous-valued attribute when using ID3 consists of the following procedure:
A. Sort the values of A in increasing order.
B. Consider the midpoint between each pair of adjacent values as a possible split point.
C. Select the minimum expected information requirement as the split point.
D. Split.
Answer: A, B, C, D

3. Which are typical algorithms to generate trees?
A. ID3
B. C4.5
C. CART
D. PCA
Answer: A, B, C

4. Which is right about underfitting and overfitting?
A. Underfitting means poor accuracy both for training data and unseen samples.
B. Overfitting means high accuracy for training data but poor accuracy for unseen samples.
C. Underfitting implies the model is too simple, so we need to increase the model complexity.
D. Overfitting occurs with too many branches, so we need to decrease the model complexity.
Answer: A, B, C, D

5. Which is right about pre-pruning and post-pruning?
A. Both of them are methods to deal with the overfitting problem.
B. Pre-pruning does not split a node if this would result in the goodness measure falling below a threshold.
C. Post-pruning removes branches from a "fully grown" tree.
D. There is no need to choose an appropriate threshold when doing pre-pruning.
Answer: A, B, C

6. Post-pruning in CART consists of the following procedure:
A. First, consider the cost complexity of a tree.
B. Then, for each internal node N, compute the cost complexity of the subtree at N.
C. Also compute the cost complexity of the subtree at N if it were to be pruned.
D. Finally, compare the two values. If pruning the subtree at node N would result in a smaller cost complexity, the subtree is pruned; otherwise, the subtree is kept.
Answer: A, B, C, D

7. The cost-complexity pruning algorithm used in CART evaluates cost complexity by the number of leaves in the tree and the error rate. (True/False)
Answer: True

8. Gain ratio is used as the attribute selection measure in C4.5, and the formula is GainRatio(A) = Gain(A) / SplitInfo(A). (True/False)
Answer: True

9. A rule is created for each path from the root to a leaf node. (True/False)
Answer: True

10. ID3 uses information gain as its attribute selection measure, and the attribute with the lowest information gain is chosen as the splitting attribute for node N. (True/False)
Answer: False

Test 5

1. What are the features of SVM?
A. Extremely slow, but highly accurate.
B. Much less prone to overfitting than other methods.
C. Black box model.
D. Provides a compact description of the learned model.
Answer: A, B, D

2. Which are the typical common kernels?
A. Linear
B. Polynomial
C. Radial basis function (Gaussian kernel)
D. Sigmoid kernel
Answer: A, B, C, D

3. What adaptations can be made to allow SVM to deal with the multiclass classification problem?
A. One versus rest (OVR).
B. One versus one (OVO).
C. Error-correcting input codes (ECIC).
D. Error-correcting output codes (ECOC).
Answer: A, B, D

4. What are the problems of OVR?
A. Sensitive to the accuracy of the confidence figures produced by the classifiers.
B. The scale of the confidence values may differ between the binary classifiers.
C. The binary classification learners see unbalanced distributions.
D. Only when the class distribution is balanced can balanced distributions be attained.
Answer: A, B, C

5. Which is right about the advantages of SVM?
A. They are accurate in high-dimensional spaces.
B. They are memory efficient.
C. The algorithm is not prone to overfitting compared to other classification methods.
D. The support vectors are the essential or critical training tuples.
Answer: A, B, C, D

6. The kernel trick is used to avoid costly computation and deal with mapping problems. (True/False)
Answer: True

7. There is no structured way and no golden rule for setting the parameters in SVM. (True/False)
Answer: True

8. Error-correcting output codes (ECOC) is a kind of problem transformation technique. (True/False)
Answer: False

9. Regression formulas include three types: linear, nonlinear and general form. (True/False)
Answer: True

10. If you have a big dataset, SVM is suitable for efficient computation. (True/False)
Answer: False

Test 6

1. Which descriptions are right about outliers?
A. Outliers caused by measurement error
B. Outliers reflecting ground truth
C. Outliers caused by equipment failure
D. Outliers always need to be dropped
Answer: A, B, C

2. What are application cases of outlier mining?
A. Traffic incident detection
B. Credit card fraud detection
C. Network intrusion detection
D. Medical analysis
Answer: A, B, C, D

3. Which are methods to detect outliers?
A. Statistics-based approach
B. Distance-based approach
C. Bulk-based approach
D. Density-based approach
Answer: A, B, D

4. How to pick the right k by a heuristic method for the density-based outlier mining method?
A. K should be at least 10 to remove unwanted statistical fluctuations.
B. Picking 10 to 20 appears to work well in general.
C. Pick the upper bound value for k as the maximum number of "close by" objects that can potentially be global outliers.
D. Pick the upper bound value for k as the maximum number of "close by" objects that can potentially be local outliers.
Answer: A, B, D

5. Which is right about the three methods of outlier mining?
A. The statistics-based approach is simple and fast but has difficulty dealing with periodic data and categorical data.
B. The efficiency of the distance-based approach is low for large datasets in high-dimensional space.
C. The distance-based approach cannot be used on multidimensional datasets.
D. The density-based approach spends little cost on searching the neighborhood.
Answer: A, B

6. Distance-based outlier mining is not suitable for datasets that do not fit any standard distribution model. (True/False)
Answer: False

7. The statistics-based method requires knowing the distribution of the data and the distribution parameters in advance. (True/False)
Answer: True

8. When identifying outliers with a discordancy test, a data point is considered an outlier if it falls within the confidence interval. (True/False)
Answer: False

9. Mahalanobis distance accounts for the relative dispersions and inherent correlations among vector elements, which is different from Euclidean distance. (True/False)
Answer: True

10. An outlier is a data object that deviates significantly from the rest of the objects, as if it were generated by a different mechanism. (True/False)
Answer: True

Test 7

1. How to deal with imbalanced data in 2-class classification?
A. Oversampling
B. Undersampling
C. Threshold-moving
D. Ensemble techniques
Answer: A, B, C, D

2. Which is right when dealing with the class-imbalance problem?
A. Oversampling works by decreasing the number of minority positive tuples.
B. Undersampling works by increasing the number of majority negative tuples.
C. The SMOTE algorithm adds synthetic tuples that are close to the minority tuples in tuple space.
D. Threshold-moving and ensemble methods were empirically observed to outperform oversampling and undersampling.
Answer: C, D

3. Which steps are necessary when constructing an ensemble model?
A. Creating multiple datasets
B. Constructing a set of classifiers from the training data
C. Combining predictions made by multiple classifiers to obtain the final class label
D. Finding the best-performing prediction to obtain the final class label
Answer: A, B, C
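Test 2 names min-max, z-score, and decimal-scaling normalization as valid methods. A minimal pure-Python sketch of all three; the function names and the population (rather than sample) standard deviation are my own choices, not from the course:

```python
def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Map values linearly from [min, max] onto [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

def z_score_normalize(values):
    """Center on the mean and scale by the (population) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

def decimal_scaling_normalize(values):
    """Divide by 10^j, with j the smallest integer making every |v'| < 1."""
    j = 0
    while max(abs(v) for v in values) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in values]
```

For example, `min_max_normalize([10, 20, 30])` gives `[0.0, 0.5, 1.0]`, and decimal scaling of `[-991, 45, 12]` divides by 1000.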
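Test 2, question 5 contrasts equal-width and equal-depth partitioning. The sketch below illustrates both on an arbitrary example dataset of mine; note how two large values pull most of the data into the first equal-width bin, while equal-depth keeps the same number of tuples per bin:

```python
def equal_width_partition(values, n_bins):
    """Split the value range into n_bins intervals of equal width."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    bins = [[] for _ in range(n_bins)]
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)  # clamp the max into the last bin
        bins[i].append(v)
    return bins

def equal_depth_partition(values, n_bins):
    """Split the sorted values into n_bins runs of (approximately) equal size."""
    s = sorted(values)
    size, extra = divmod(len(s), n_bins)
    bins, start = [], 0
    for i in range(n_bins):
        end = start + size + (1 if i < extra else 0)
        bins.append(s[start:end])
        start = end
    return bins

data = [5, 10, 11, 13, 15, 35, 50, 55, 72, 92, 204, 215]
```

With 3 bins, equal-depth yields four tuples per bin, while equal-width yields bins of 9, 1, and 2 tuples: outliers dominate the presentation, exactly as option A of that question states.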
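Several Test 3 items describe how KNN predicts: majority vote for classification, the average of the k neighbors for regression, and instance weights equal to the reciprocal of the squared distance. A minimal sketch of those rules (the tiny training sets are made up for illustration):

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, query, k):
    """Majority class among the k nearest training tuples."""
    nearest = sorted(train, key=lambda t: euclidean(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def knn_regress_weighted(train, query, k):
    """Distance-weighted regression with weights 1/d^2 (Test 3, question 12)."""
    nearest = sorted(train, key=lambda t: euclidean(t[0], query))[:k]
    num = den = 0.0
    for x, y in nearest:
        d = euclidean(x, query)
        if d == 0:                 # exact match: return its target directly
            return y
        w = 1.0 / d ** 2
        num += w * y
        den += w
    return num / den
```

Normalizing the attributes before calling these (see the normalization sketch above the answer key would not provide; Test 3, questions 7 and 9) avoids one attribute's range dominating the distance.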
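Test 4 cites information gain (ID3) and GainRatio(A) = Gain(A) / SplitInfo(A) (C4.5) as attribute selection measures. A compact sketch on a toy dataset of my own; note SplitInfo is just the entropy of the attribute's value distribution:

```python
import math
from collections import Counter

def entropy(labels):
    """Info(D) = -sum p_i * log2(p_i) over the class proportions."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attr_values, labels):
    """Gain(A) = Info(D) - sum_j |D_j|/|D| * Info(D_j), partitioning on A."""
    n = len(labels)
    expected = 0.0
    for v in set(attr_values):
        subset = [l for a, l in zip(attr_values, labels) if a == v]
        expected += len(subset) / n * entropy(subset)
    return entropy(labels) - expected

def gain_ratio(attr_values, labels):
    """GainRatio(A) = Gain(A) / SplitInfo(A), as used by C4.5."""
    split_info = entropy(attr_values)  # entropy of the partition sizes
    return information_gain(attr_values, labels) / split_info
```

An attribute that splits the classes perfectly, e.g. `attr = ["hot", "hot", "cold", "cold"]` against `labels = ["yes", "yes", "no", "no"]`, has gain 1.0, which is why ID3 picks the attribute with the highest (not lowest) information gain.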
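The four common kernels listed in Test 5 are each one line of arithmetic; the `gamma`, `degree`, `c` and `alpha` defaults below are arbitrary choices of mine, not values from the course:

```python
import math

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, degree=2, c=1.0):
    return (linear_kernel(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function (Gaussian) kernel: exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid_kernel(x, y, alpha=0.01, c=0.0):
    return math.tanh(alpha * linear_kernel(x, y) + c)
```

Each computes the inner product in an implicit feature space without ever mapping the points there, which is the kernel trick named in question 6 of that test.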
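Test 6 contrasts Mahalanobis distance (accounts for dispersion and correlation) with Euclidean distance. A self-contained 2-D sketch, with the 2x2 covariance inverse done by hand; the helper name and example data are mine:

```python
import math

def mahalanobis_2d(point, data):
    """Distance of a 2-D point from the mean of `data`, scaled by the
    (population) covariance matrix, so that spread along each axis and
    the correlation between axes are both accounted for."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    det = sxx * syy - sxy ** 2
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det  # inverse of 2x2 covariance
    dx, dy = point[0] - mx, point[1] - my
    return math.sqrt(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)
```

For data with unit variance on both axes and no correlation the result coincides with the Euclidean distance from the mean; stretching one axis shrinks the Mahalanobis distance along it, which is exactly the difference question 9 points at.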
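Test 7 describes SMOTE as adding synthetic tuples close to the minority tuples in tuple space. The sketch below is my simplified reading of that idea (interpolate between a minority tuple and one of its k nearest minority-class neighbors), not the published algorithm in full detail:

```python
import math
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic tuples, each interpolated between a random
    minority tuple and one of its k nearest minority-class neighbors."""
    rng = random.Random(seed)

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not base),
                           key=lambda p: dist(p, base))[:k]
        nb = rng.choice(neighbors)
        t = rng.random()  # interpolation point in [0, 1)
        synthetic.append(tuple(b + t * (v - b) for b, v in zip(base, nb)))
    return synthetic
```

Because every synthetic tuple lies on a segment between two minority tuples, it stays inside the minority region, which is what distinguishes this from oversampling by plain duplication.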