




Contents
1. Principles for AI safety governance
2. Framework for AI safety governance
3. Classification of AI safety risks
3.1 AI's inherent safety risks
3.2 Safety risks in AI applications
4. Technological measures to address risks
4.1 Addressing AI's inherent safety risks
4.2 Addressing safety risks in AI applications
5. Comprehensive governance measures
6. Safety guidelines for AI development and application
6.1 Safety guidelines for model algorithm developers
6.2 Safety guidelines for AI service providers
6.3 Safety guidelines for users in key areas
6.4 Safety guidelines for general users
AI Safety Governance Framework
(V1.0)

Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.
1. Principles for AI safety governance

- Commit to a vision of common, comprehensive, cooperative, and sustainable security while putting equal emphasis on development and security
- Prioritize the innovative development of AI
- Take effectively preventing and defusing AI safety risks as the starting point and ultimate goal
- Establish governance mechanisms that engage all stakeholders, integrate technology and management, and ensure coordinated efforts and collaboration among them
- Ensure that all parties involved fully shoulder their responsibilities for AI safety
- Create a whole-process, all-element governance chain
- Foster a safe, reliable, equitable, and transparent environment for AI technical research, development, and application
- Promote the healthy development and regulated application of AI
- Effectively safeguard national sovereignty, security, and development interests
- Protect the legitimate rights and interests of citizens, legal persons, and other organizations
- Guarantee that AI technology benefits humanity

1.1 Be inclusive and prudent to ensure safety

We encourage development and innovation and take an inclusive approach to AI research, development, and application. We make every effort to ensure AI safety, and will take timely measures to address any risks that threaten national security, harm the public interest, or infringe upon the legitimate rights and interests of individuals.
1.2 Identify risks with agile governance

By closely tracking trends in AI research, development, and application, we identify AI safety risks from two perspectives: the technology itself and its application. We propose tailored preventive measures to mitigate these risks. We follow the evolution of safety risks, swiftly adjusting our governance measures as needed. We are committed to improving governance mechanisms and methods while promptly responding to issues warranting government oversight.

1.3 Integrate technology and management for coordinated response

We adopt a comprehensive safety governance approach that integrates technology and management to prevent and address various safety risks throughout the entire process of AI research, development, and application. Within the AI research, development, and application chain, it is essential to ensure that all relevant parties, including model and algorithm researchers and developers, service providers, and users, assume their respective responsibilities for AI safety. This approach fully leverages the roles of governance mechanisms involving government oversight, industry self-regulation, and public scrutiny.

1.4 Promote openness and cooperation for joint governance and shared benefits
We promote international cooperation on AI safety governance, with best practices shared worldwide. We advocate establishing open platforms and advance efforts to build broad consensus on a global AI governance system through dialogue and cooperation across various disciplines, fields, regions, and nations.

2. Framework for AI safety governance

Based on the notion of risk management, this framework outlines control measures to address different types of AI safety risks through technological and managerial strategies. As AI research, development, and application rapidly evolve, changing the forms and impacts of safety risks as well as our perception of them, it is necessary to continuously update control measures and invite all stakeholders to refine the governance framework.

2.1 Safety and security risks

By examining the characteristics of AI technology and its application scenarios across various industries and fields, we pinpoint safety and security risks and potential dangers that are inherently linked to the technology itself and its application.

2.2 Technical countermeasures

Regarding models and algorithms, training data, computing facilities, products and services, and application scenarios, we propose targeted technical measures to improve the safety, fairness, reliability, and robustness of AI products and applications. These measures include secure software development, data quality improvement, construction and operations security enhancement, and evaluation, monitoring, and reinforcement activities.
2.3 Comprehensive governance measures

In accordance with the principle of coordinated efforts and joint governance, we clarify the measures that all stakeholders, including technology research institutions, product and service providers, users, government agencies, industry associations, and social organizations, should take to identify, prevent, and respond to AI safety risks.

2.4 Safety guidelines for AI development and application

We propose several safety guidelines for developing and applying AI technology, aimed at AI model and algorithm developers, AI service providers, users in key areas, and general users.

3. Classification of AI safety risks

Safety risks exist at every stage throughout the AI chain, from system design to research and development (R&D), training, testing, deployment, utilization, and maintenance. These risks stem from inherent technical flaws as well as misuse, abuse, and malicious use of AI.
3.1 AI's inherent safety risks

3.1.1 Risks from models and algorithms

(a) Risks of explainability

AI algorithms, represented by deep learning, have complex internal workings. Their black-box or grey-box inference processes result in unpredictable and untraceable outputs, making it challenging to quickly rectify them or trace their origins for accountability should any anomalies arise.

(b) Risks of bias and discrimination

During the algorithm design and training process, personal biases may be introduced, either intentionally or unintentionally. Additionally, poor-quality datasets can lead to biased or discriminatory outcomes in the algorithm's design and outputs, including discriminatory content regarding ethnicity, religion, nationality, and region.

(c) Risks of robustness

As deep neural networks are normally non-linear and large in size, AI systems are susceptible to complex and changing operational environments or malicious interference and inducements, possibly leading to various problems like reduced performance and decision-making errors.

(d) Risks of stealing and tampering

Core algorithm information, including parameters, structures, and functions, faces risks of inversion attacks, stealing, modification, and even backdoor injection, which can lead to infringement of intellectual property rights (IPR) and leakage of business secrets. It can also lead to unreliable inference, wrong decision outputs, and even operational failures.
(e) Risks of unreliable output

Generative AI can produce hallucinations, meaning that an AI model generates untruthful or unreasonable content but presents it as if it were fact, leading to biased and misleading information.

(f) Risks of adversarial attack

Attackers can craft well-designed adversarial examples to subtly mislead, influence, and even manipulate AI models, causing incorrect outputs and potentially leading to operational failures.
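As a minimal illustration of this risk, the sketch below applies the classic fast gradient sign method (FGSM) to a toy logistic model. The weights, input, and perturbation size are hypothetical values chosen only so that a small, targeted nudge to the input flips the model's prediction; a real attack would target a trained network in the same way.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under a toy logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method: move x by eps in the direction that
    increases the loss. For logistic loss, dL/dx = (p - y) * w."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * si for xi, si in zip(x, sign)]

# Hypothetical weights and input, chosen only for illustration.
w, b = [2.0, -1.0], 0.0
x, y = [0.3, 0.1], 1

p_clean = predict(w, b, x)           # ~0.62: correctly classified as class 1
x_adv = fgsm(w, b, x, y, eps=0.4)
p_adv = predict(w, b, x_adv)         # ~0.33: prediction flips to class 0

print(f"clean p={p_clean:.2f}, adversarial p={p_adv:.2f}")
```

The perturbation is bounded per coordinate by eps, which is why such examples can stay visually or semantically close to the original input while still changing the output.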
3.1.2 Risks from data

(a) Risks of illegal collection and use of data

The collection of AI training data and the interaction with users during service provision pose security risks, including collecting data without consent and improper use of data and personal information.

(b) Risks of improper content and poisoning in training data

If the training data includes illegal or harmful information like false, biased, and IPR-infringing content, or lacks diversity in its sources, the output may include harmful content like illegal, malicious, or extreme information. Training data is also at risk of being poisoned through tampering, error injection, or misleading actions by attackers. This can interfere with the model's probability distribution, reducing its accuracy and reliability.
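The poisoning mechanism described above can be illustrated with a toy nearest-centroid classifier, a hypothetical stand-in for a real model: a handful of mislabeled points injected into the training set shifts the learned statistics enough to flip a prediction. All data values here are invented for illustration.

```python
def centroid(points):
    """Mean of a set of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, class0, class1):
    """Nearest-centroid classifier: return 0 or 1 for point x."""
    c0, c1 = centroid(class0), centroid(class1)
    d0 = (x[0] - c0[0]) ** 2 + (x[1] - c0[1]) ** 2
    d1 = (x[0] - c1[0]) ** 2 + (x[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

# Hypothetical clean training sets (labels trusted).
class0 = [(0, 0), (1, 0), (0, 1)]
class1 = [(4, 4), (5, 4), (4, 5)]
test = (2, 2)
clean_pred = classify(test, class0, class1)        # 0: near the class-0 centroid

# An attacker injects mislabeled points into class 0, dragging its
# learned centroid away from the true cluster.
poisoned0 = class0 + [(8, 8), (9, 9), (10, 10)]
poisoned_pred = classify(test, poisoned0, class1)  # now misclassified as 1

print(clean_pred, poisoned_pred)
```

Only three poisoned samples suffice to move the class statistics and change the decision, which is why data provenance and integrity checks matter at training time.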
(c) Risks of unregulated training data annotation

Issues with training data annotation, such as incomplete annotation guidelines, incapable annotators, and errors in annotation, can affect the accuracy, reliability, and effectiveness of models and algorithms. Moreover, they can introduce training biases, amplify discrimination, reduce generalization abilities, and result in incorrect outputs.

(d) Risks of data leakage

In AI research, development, and applications, issues such as improper data processing, unauthorized access, malicious attacks, and deceptive interactions can lead to data and personal information leaks.
3.1.3 Risks from AI systems

(a) Risks of exploitation through defects and backdoors

The standardized APIs, feature libraries, and toolkits used in the design, training, and verification stages of AI algorithms and models, as well as development interfaces and execution platforms, may contain logical flaws and vulnerabilities. These weaknesses can be exploited, and in some cases, backdoors can be intentionally embedded, posing significant risks of being triggered and used for attacks.

(b) Risks of computing infrastructure security

The computing infrastructure underpinning AI training and operations, which relies on diverse and ubiquitous computing nodes and various types of computing resources, faces risks such as malicious consumption of computing resources and cross-boundary transmission of security threats at the computing infrastructure layer.

(c) Risks of supply chain security

The AI industry relies on a highly globalized supply chain. However, certain countries may use unilateral coercive measures, such as technology barriers and export restrictions, to create development obstacles and maliciously disrupt the global AI supply chain. This can lead to significant risks of supply disruptions for chips, software, and tools.
3.2 Safety risks in AI applications

3.2.1 Cyberspace risks

(a) Risks of information and content safety

AI-generated or synthesized content can lead to the spread of false information, discrimination and bias, privacy leakage, and infringement issues, threatening the safety of citizens' lives and property, national security, and ideological security, and causing ethical risks. If users' inputs contain harmful content, the model may output illegal or damaging information in the absence of robust security mechanisms.

(b) Risks of confusing facts, misleading users, and bypassing authentication

AI systems and their outputs, if not clearly labeled, can make it difficult for users to discern whether they are interacting with AI and to identify the source of generated content. This can impede users' ability to determine the authenticity of information, leading to misjudgment and misunderstanding. Additionally, highly realistic AI-generated images, audio, and videos may circumvent existing identity verification mechanisms, such as facial recognition and voice recognition, rendering these authentication processes ineffective.
(c) Risks of information leakage due to improper usage

Staff of government agencies and enterprises, if failing to use AI services in a regulated and proper manner, may input internal data and industrial information into AI models, leading to leakage of work secrets, business secrets, and other sensitive business data.

(d) Risks of abuse for cyberattacks

AI can be used to launch automated cyberattacks or increase attack efficiency, including exploring and making use of vulnerabilities, cracking passwords, generating malicious code, sending phishing emails, network scanning, and social engineering attacks. All of these lower the threshold for cyberattacks and increase the difficulty of security protection.

(e) Risks of security flaw transmission caused by model reuse

Re-engineering or fine-tuning based on foundation models is common in AI applications. If security flaws exist in a foundation model, the risk is transmitted to downstream models.
3.2.2 Real-world risks

(a) Inducing traditional economic and social security risks

AI is used in finance, energy, telecommunications, traffic, and people's livelihoods, in applications such as self-driving and smart diagnosis and treatment. Hallucinations and erroneous decisions of models and algorithms, along with issues such as system performance degradation, interruption, and loss of control caused by improper use or external attacks, will pose security threats to users' personal safety, property, and socioeconomic security and stability.

(b) Risks of using AI in illegal and criminal activities

AI can be used in traditional illegal or criminal activities related to terrorism, violence, gambling, and drugs, such as teaching criminal techniques, concealing illicit acts, and creating tools for illegal and criminal activities.

(c) Risks of misuse of dual-use items and technologies

Due to improper use or abuse, AI can pose serious risks to national security, economic security, and public health security, for example by greatly reducing the capability requirements for non-experts to design, synthesize, acquire, and use nuclear, biological, and chemical weapons and missiles, or by enabling the design of cyber weapons that launch network attacks on a wide range of potential targets through methods like automatic vulnerability discovery and exploitation.
3.2.3 Cognitive risks

(a) Risks of amplifying the effects of "information cocoons"

AI can be extensively utilized for customized information services, collecting user information and analyzing user types, needs, intentions, preferences, habits, and even mainstream public awareness over a certain period. It can then be used to offer formulaic and tailored information and services, aggravating the effects of "information cocoons."

(b) Risks of usage in launching cognitive warfare

AI can be used to make and spread fake news, images, audio, and videos, propagate content of terrorism, extremism, and organized crime, interfere in the internal affairs, social systems, and social order of other countries, and jeopardize the sovereignty of other countries. AI can shape public values and cognitive thinking through social media bots gaining discourse power and agenda-setting power in cyberspace.
3.2.4 Ethical risks

(a) Risks of exacerbating social discrimination and prejudice, and widening the intelligence divide

AI can be used to collect and analyze human behaviors, social status, economic status, and individual personalities, labeling and categorizing groups of people in order to treat them discriminatorily, thus causing systematic and structural social discrimination and prejudice. At the same time, the intelligence divide would widen among regions.

(b) Risks of challenging the traditional social order

The development and application of AI may lead to tremendous changes in production tools and relations, accelerating the reconstruction of traditional industry modes, transforming traditional views on employment, fertility, and education, and challenging the stable functioning of the traditional social order.

(c) Risks of AI becoming uncontrollable in the future

With the fast development of AI technologies, there is a risk of AI autonomously acquiring external resources, conducting self-replication, becoming self-aware, seeking external power, and attempting to seize control from humans.
4. Technological measures to address risks

In response to the above risks, AI developers, service providers, and system users should prevent risks by taking technological measures in the fields of training data, computing infrastructure, models and algorithms, products and services, and application scenarios.

4.1 Addressing AI's inherent safety risks

4.1.1 Addressing risks from models and algorithms

(a) The explainability and predictability of AI should be constantly improved to provide clear explanations of the internal structure, reasoning logic, technical interfaces, and output results of AI systems, accurately reflecting the process by which AI systems produce outcomes.

(b) Secure development standards should be established and implemented in the design, R&D, deployment, and maintenance processes to eliminate as many security flaws and discriminatory tendencies in models and algorithms as possible and to enhance robustness.
4.1.2 Addressing risks from data

(a) Security rules on data collection and usage, and on processing personal information, should be abided by in all procedures involving training data and user interaction data, including data collection, storage, usage, processing, transmission, provision, publication, and deletion. This aims to fully ensure users' legitimate rights stipulated by laws and regulations, such as their rights to control, to be informed, and to choose.

(b) Protection of IPR should be strengthened to prevent infringement of IPR at stages such as selecting training data and outputting results.

(c) Training data should be strictly selected to ensure the exclusion of sensitive data in high-risk fields such as nuclear, biological, and chemical weapons and missiles.

(d) Data security management should be strengthened to comply with data security and personal information protection standards and regulations if training data contains sensitive personal information and important data.

(e) To use truthful, precise, objective, and diverse training data from legitimate sources, and to filter out ineffective, wrong, and biased data in a timely manner.

(f) The cross-border provision of AI services should comply with regulations on cross-border data flows. The external provision of AI models and algorithms should comply with export control requirements.
4.1.3 Addressing risks from AI systems

(a) To properly disclose the principles, capacities, application scenarios, and safety risks of AI technologies and products, to clearly label outputs, and to constantly make AI systems more transparent.

(b) To enhance the risk identification, detection, and mitigation capabilities of platforms where multiple AI models or systems congregate, so as to prevent malicious acts, attacks, and intrusions that target the platforms from impacting the AI models or systems they support.

(c) To strengthen the capacity for constructing, managing, and operating AI computing platforms and AI system services safely, with the aim of ensuring uninterrupted infrastructure operation and service provision.

(d) To fully consider the supply chain security of the chips, software, tools, computing infrastructure, and data sources adopted for AI systems. To track the vulnerabilities and flaws of both software and hardware products and make timely repairs and reinforcements to ensure system security.
4.2 Addressing safety risks in AI applications

4.2.1 Addressing cyberspace risks

(a) A security protection mechanism should be established to prevent models from being interfered with or tampered with during operation, so as to ensure reliable outputs.

(b) A data safeguard should be set up to make sure that AI systems comply with applicable laws and regulations when outputting sensitive personal information and important data.
4.2.2 Addressing real-world risks

(a) To establish service limitations according to users' actual application scenarios and cut AI system features that might be abused. AI systems should not provide services that go beyond the preset scope.

(b) To improve the ability to trace the end use of AI systems to prevent high-risk application scenarios such as the manufacturing of weapons of mass destruction, like nuclear, biological, and chemical weapons and missiles.
4.2.3 Addressing cognitive risks

(a) To identify unexpected, untruthful, and inaccurate outputs via technological means, and to regulate them in accordance with laws and regulations.

(b) Strict measures should be taken to prevent the abuse of AI systems that collect, connect, gather, analyze, and dig into users' inquiries to profile their identity, preferences, and personal mindset.

(c) To intensify R&D of AI-generated content (AIGC) testing technologies, aiming to better prevent, detect, and counter cognitive warfare.

4.2.4 Addressing ethical risks

(a) Training data should be filtered and outputs should be verified during algorithm design, model training and optimization, service provision, and other processes, in an effort to prevent discrimination based on ethnicity, belief, nationality, region, gender, age, occupation, health, and other factors.
(b) AI systems applied in key sectors, such as government departments, critical information infrastructure, and areas directly affecting public safety and people's health and safety, should be equipped with highly efficient emergency management and control measures.
5. Comprehensive governance measures

While adopting technological controls, we should formulate and refine comprehensive AI safety and security risk governance mechanisms and regulations that engage multi-stakeholder participation, including technology R&D institutions, service providers, users, government authorities, industry associations, and social organizations.

5.1 To implement tiered and category-based management for AI applications. We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, and impose requirements on the adoption of AI technologies by specific users and in specific scenarios, thereby preventing AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold or that are applied in specific industries and sectors, and demand that such systems possess safety protection capacity throughout their lifecycle, including design, R&D, testing, deployment, utilization, and maintenance.
5.2 To develop a traceability management system for AI services.

We should use digital certificates to label AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels throughout key stages including creation sources, transmission paths, and distribution channels, with a view to enabling users to identify and judge information sources and credibility.
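As one hypothetical illustration of such implicit provenance labeling, the sketch below binds generated content to its provider with a keyed digest. The field names (`content`, `provider`, `label`) and the HMAC-based scheme are invented for illustration and are not prescribed by this framework; production systems would more likely use public-key signatures tied to digital certificates.

```python
import hashlib
import hmac
import json

def attach_provenance(content: str, provider_id: str, secret_key: bytes) -> dict:
    """Wrap AI-generated content with an implicit provenance label:
    a keyed digest binds the text to the provider that produced it."""
    record = {"content": content, "provider": provider_id}
    payload = json.dumps(record, sort_keys=True).encode()
    record["label"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(record: dict, secret_key: bytes) -> bool:
    """Recompute the digest over content and provider and compare it
    to the attached label in constant time."""
    payload = json.dumps(
        {"content": record["content"], "provider": record["provider"]},
        sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["label"])

key = b"demo-signing-key"  # a real deployment would use a managed key
rec = attach_provenance("An AI-generated summary.", "provider-001", key)
ok = verify_provenance(rec, key)                    # True: label intact

tampered = dict(rec, content="Edited text.")
still_ok = verify_provenance(tampered, key)         # False: content changed
```

Any edit to the content or provider field invalidates the label, which is the property a traceability system needs so that transmission and distribution platforms can detect stripped or altered provenance.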
5.3 To improve AI data security and personal information protection regulations. We should explicate the requirements for data security and personal information protection in various stages such as AI training, labeling, utilization, and output, based on the features of AI technologies and applications.

5.4 To create a responsible AI R&D and application system. We should propose pragmatic instructions and best practices to uphold the people-centered approach and adhere to the principle of developing AI for good in AI R&D and application, and continuously align AI's design, R&D, and application processes with such values and ethics. We should explore copyright protection, development, and utilization systems adapted to the AI era, and continuously advance the construction of high-quality foundational corpora and datasets to provide premium resources for the safe development of AI. We should establish AI-related ethical review standards, norms, and guidelines to improve the ethical review system.

5.5 To strengthen AI supply chain security. We should promote knowledge sharing in AI, make AI technologies available to the public under open-source terms, and jointly develop AI chips, frameworks, and software. We should guide the industry to build an open ecosystem, enhance the diversity of supply chain sources, and ensure the security and stability of the AI supply chain.
5.6 To advance research on AI explainability. We should organize and conduct research on transparency, trustworthiness, and error-correction mechanisms in AI decision-making from the perspectives of machine learning theory, training methods, and human-computer interaction. Continuous efforts should be made to enhance the explainability and predictability of AI to prevent malicious consequences resulting from unintended decisions made by AI systems.

5.7 To share information on, and conduct emergency response to, AI safety risks and threats. We should continuously track and analyze security vulnerabilities, defects, risks, threats, and safety incidents related to AI technologies, software and hardware products, services, and other aspects. We should coordinate with relevant developers and service providers to establish a mechanism for reporting and sharing information on risks and threats. We should establish an emergency response mechanism for AI safety and security incidents, formulate emergency plans, conduct emergency drills, and handle AI safety hazards, AI security threats, and related events in a timely, rapid, and effective manner.

5.8 To enhance the training of AI safety talent. We should promote the development of AI safety education in parallel with the AI discipline. We should leverage schools and research institutions to strengthen talent cultivation in the fields of design, development