
Contents

1. Principles for AI safety governance
2. Framework for AI safety governance
3. Classification of AI safety risks
3.1 AI's inherent safety risks
3.2 Safety risks in AI applications
4. Technological measures to address risks
4.1 Addressing AI's inherent safety risks
4.2 Addressing safety risks in AI applications
5. Comprehensive governance measures
6. Safety guidelines for AI development and application
6.1 Safety guidelines for model algorithm developers
6.2 Safety guidelines for AI service providers
6.3 Safety guidelines for users in key areas
6.4 Safety guidelines for general users

AI Safety Governance Framework

(V1.0)

Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.

1. Principles for AI safety governance

- Commit to a vision of common, comprehensive, cooperative, and sustainable security while putting equal emphasis on development and security
- Prioritize the innovative development of AI
- Take effectively preventing and defusing AI safety risks as the starting point and ultimate goal


- Establish governance mechanisms that engage all stakeholders, integrate technology and management, and ensure coordinated efforts and collaboration among them
- Ensure that all parties involved fully shoulder their responsibilities for AI safety
- Create a whole-process, all-element governance chain
- Foster safe, reliable, equitable, and transparent AI technology research, development, and application
- Promote the healthy development and regulated application of AI
- Effectively safeguard national sovereignty, security, and development interests
- Protect the legitimate rights and interests of citizens, legal persons, and other organizations
- Guarantee that AI technology benefits humanity

1.1 Be inclusive and prudent to ensure safety

We encourage development and innovation and take an inclusive approach to AI research, development, and application. We make every effort to ensure AI safety, and will take timely measures to address any risks that threaten national security, harm the public interest, or infringe upon the legitimate rights and interests of individuals.


1.2 Identify risks with agile governance

By closely tracking trends in AI research, development, and application, we identify AI safety risks from two perspectives: the technology itself and its application. We propose tailored preventive measures to mitigate these risks. We follow the evolution of safety risks, swiftly adjusting our governance measures as needed. We are committed to improving the governance mechanisms and methods while promptly responding to issues warranting government oversight.

1.3 Integrate technology and management for coordinated response

We adopt a comprehensive safety governance approach that integrates technology and management to prevent and address various safety risks throughout the entire process of AI research, development, and application. Within the AI research, development, and application chain, it is essential to ensure that all relevant parties, including model and algorithm researchers and developers, service providers, and users, assume their respective responsibilities for AI safety. This approach makes full use of governance mechanisms involving government oversight, industry self-regulation, and public scrutiny.

1.4 Promote openness and cooperation for joint governance and shared benefits


We promote international cooperation on AI safety governance, with best practices shared worldwide. We advocate establishing open platforms and advance efforts to build broad consensus on a global AI governance system through dialogue and cooperation across various disciplines, fields, regions, and nations.

2. Framework for AI safety governance

Based on the notion of risk management, this framework outlines control measures to address different types of AI safety risks through technological and managerial strategies. As AI research, development, and application rapidly evolve, leading to changes in the forms, impacts, and our perception of safety risks, it is necessary to continuously update control measures and invite all stakeholders to refine the governance framework.

2.1 Safety and security risks

By examining the characteristics of AI technology and its application scenarios across various industries and fields, we pinpoint safety and security risks and potential dangers that are inherently linked to the technology itself and its application.

2.2 Technical countermeasures

Regarding models and algorithms, training data, computing facilities, products and services, and application scenarios, we propose targeted technical measures to improve the safety, fairness, reliability, and robustness of AI products and applications. These measures include secure software development, data quality improvement, construction and operations security enhancement, and conducting evaluation, monitoring, and reinforcement activities.

2.3 Comprehensive governance measures

In accordance with the principle of coordinated efforts and joint governance, we clarify the measures that all stakeholders, including technology research institutions, product and service providers, users, government agencies, industry associations, and social organizations, should take to identify, prevent, and respond to AI safety risks.

2.4 Safety guidelines for AI development and application

We propose several safety guidelines for AI model and algorithm developers, AI service providers, users in key areas, and general users to develop and apply AI technology.

3. Classification of AI safety risks

Safety risks exist at every stage throughout the AI chain, from system design to research and development (R&D), training, testing, deployment, utilization, and maintenance. These risks stem from inherent technical flaws as well as misuse, abuse, and malicious use of AI.

3.1 AI's inherent safety risks

3.1.1 Risks from models and algorithms

(a) Risks of explainability

AI algorithms, represented by deep learning, have complex internal workings. Their black-box or grey-box inference process results in unpredictable and untraceable outputs, making it challenging to quickly rectify them or trace their origins for accountability should any anomalies arise.

(b) Risks of bias and discrimination

During the algorithm design and training process, personal biases may be introduced, either intentionally or unintentionally. Additionally, poor-quality datasets can lead to biased or discriminatory outcomes in the algorithm's design and outputs, including discriminatory content regarding ethnicity, religion, nationality, and region.

(c) Risks of robustness

As deep neural networks are normally non-linear and large in size, AI systems are susceptible to complex and changing operational environments or malicious interference and inducement, possibly leading to various problems like reduced performance and decision-making errors.

(d) Risks of stealing and tampering


Core algorithm information, including parameters, structures, and functions, faces risks of inversion attacks, stealing, modification, and even backdoor injection, which can lead to infringement of intellectual property rights (IPR) and leakage of business secrets. It can also lead to unreliable inference, wrong decision outputs, and even operational failures.

(e) Risks of unreliable output

Generative AI can cause hallucinations, meaning that an AI model generates untruthful or unreasonable content but presents it as if it were a fact, leading to biased and misleading information.

(f) Risks of adversarial attack

Attackers can craft well-designed adversarial examples to subtly mislead, influence, and even manipulate AI models, causing incorrect outputs and potentially leading to operational failures.
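The mechanics of such an adversarial example can be sketched in a few lines. The toy linear classifier, weights, and perturbation budget below are hypothetical stand-ins for a real deep model; the sign-of-gradient step is the fast gradient sign method from the adversarial-examples literature, not a technique defined by this framework.

```python
import numpy as np

# Hypothetical toy classifier standing in for a deep model:
# class 1 if the linear score w.x + b is positive, else class 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.9, 0.1, 0.3])   # benign input, classified as class 1

# For a linear score the gradient w.r.t. the input is just w, so
# stepping against its sign pushes the input toward the decision
# boundary while changing each coordinate by at most eps.
eps = 0.7                        # illustrative perturbation budget
x_adv = x - eps * np.sign(w)     # crafted adversarial example flips the class
```

Defenses such as adversarial training and input sanitization target exactly this kind of bounded-perturbation manipulation.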

3.1.2 Risks from data

(a) Risks of illegal collection and use of data

The collection of AI training data and the interaction with users during service provision pose security risks, including collecting data without consent and improper use of data and personal information.

(b) Risks of improper content and poisoning in training data

If the training data includes illegal or harmful information like false, biased, and IPR-infringing content, or lacks diversity in its sources, the output may include harmful content like illegal, malicious, or extreme information. Training data is also at risk of being poisoned through tampering, error injection, or misleading actions by attackers. This can interfere with the model's probability distribution, reducing its accuracy and reliability.
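The interference with a model's learned distribution described above can be sketched numerically. The 1-D nearest-mean classifier and the injected mislabeled points are invented for illustration; real poisoning attacks target far larger models, but the mechanism — tainted records dragging learned parameters away from the clean solution — is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean 1-D training data for a toy two-class task (hypothetical).
x0 = rng.normal(-2.0, 0.5, 200)   # class 0 samples
x1 = rng.normal(+2.0, 0.5, 200)   # class 1 samples

def fit_threshold(neg, pos):
    """Nearest-mean classifier: decision threshold halfway between class means."""
    return (neg.mean() + pos.mean()) / 2

def accuracy(thr):
    """Balanced accuracy of the threshold on the clean data."""
    return ((x0 < thr).mean() + (x1 >= thr).mean()) / 2

clean_thr = fit_threshold(x0, x1)   # lands near 0; clean accuracy near 1.0

# Poisoning: an attacker injects far-out values mislabeled as class 0,
# dragging the learned threshold upward into class 1's territory.
poison = rng.normal(+6.0, 0.2, 100)
poisoned_thr = fit_threshold(np.concatenate([x0, poison]), x1)

# The shifted threshold now misclassifies a slice of genuine class-1 data,
# reducing accuracy even though the clean samples never changed.
```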

(c) Risks of unregulated training data annotation

Issues with training data annotation, such as incomplete annotation guidelines, incapable annotators, and errors in annotation, can affect the accuracy, reliability, and effectiveness of models and algorithms. Moreover, they can introduce training biases, amplify discrimination, reduce generalization abilities, and result in incorrect outputs.

(d) Risks of data leakage

In AI research, development, and applications, issues such as improper data processing, unauthorized access, malicious attacks, and deceptive interactions can lead to data and personal information leaks.

3.1.3 Risks from AI systems

(a) Risks of exploitation through defects and backdoors

The standardized APIs, feature libraries, and toolkits used in the design, training, and verification stages of AI algorithms and models, along with development interfaces and execution platforms, may contain logical flaws and vulnerabilities. These weaknesses can be exploited, and in some cases, backdoors can be intentionally embedded, posing significant risks of being triggered and used for attacks.

(b) Risks of computing infrastructure security

The computing infrastructure underpinning AI training and operations, which relies on diverse and ubiquitous computing nodes and various types of computing resources, faces risks such as malicious consumption of computing resources and cross-boundary transmission of security threats at the computing infrastructure layer.

(c) Risks of supply chain security

The AI industry relies on a highly globalized supply chain. However, certain countries may use unilateral coercive measures, such as technology barriers and export restrictions, to create development obstacles and maliciously disrupt the global AI supply chain. This can lead to significant risks of supply disruptions for chips, software, and tools.

3.2 Safety risks in AI applications

3.2.1 Cyberspace risks

(a) Risks of information and content safety

AI-generated or synthesized content can lead to the spread of false information, discrimination and bias, privacy leakage, and infringement issues, threatening the safety of citizens' lives and property, national security, and ideological security, and causing ethical risks. Without robust security mechanisms, the model may output illegal or damaging information if users' inputs contain harmful content.

(b) Risks of confusing facts, misleading users, and bypassing authentication

AI systems and their outputs, if not clearly labeled, can make it difficult for users to discern whether they are interacting with AI and to identify the source of generated content. This can impede users' ability to determine the authenticity of information, leading to misjudgment and misunderstanding. Additionally, highly realistic AI-generated images, audio, and videos may circumvent existing identity verification mechanisms, such as facial recognition and voice recognition, rendering these authentication processes ineffective.

(c) Risks of information leakage due to improper usage

Staff of government agencies and enterprises, if failing to use AI services in a regulated and proper manner, may input internal data and industrial information into the AI model, leading to leakage of work secrets, business secrets, and other sensitive business data.

(d) Risks of abuse for cyberattacks

AI can be used to launch automatic cyberattacks or increase attack efficiency, including discovering and exploiting vulnerabilities, cracking passwords, generating malicious code, sending phishing emails, network scanning, and social engineering attacks. All these lower the threshold for cyberattacks and increase the difficulty of security protection.

(e) Risks of security flaw transmission caused by model reuse

Re-engineering or fine-tuning based on foundation models is commonly used in AI applications. If security flaws occur in foundation models, they will be transmitted to downstream models.

3.2.2 Real-world risks

(a) Inducing traditional economic and social security risks


AI is used in finance, energy, telecommunications, traffic, and people's livelihoods, such as self-driving and smart diagnosis and treatment. Hallucinations and erroneous decisions of models and algorithms, along with issues such as system performance degradation, interruption, and loss of control caused by improper use or external attacks, will pose security threats to users' personal safety, property, and socioeconomic security and stability.

(b) Risks of using AI in illegal and criminal activities

AI can be used in traditional illegal or criminal activities related to terrorism, violence, gambling, and drugs, such as teaching criminal techniques, concealing illicit acts, and creating tools for illegal and criminal activities.

(c) Risks of misuse of dual-use items and technologies

Due to improper use or abuse, AI can pose serious risks to national security, economic security, and public health security, such as greatly reducing the capability requirements for non-experts to design, synthesize, acquire, and use nuclear, biological, and chemical weapons and missiles, or designing cyber weapons that launch network attacks on a wide range of potential targets through methods like automatic vulnerability discovery and exploitation.

3.2.3 Cognitive risks

(a) Risks of amplifying the effects of "information cocoons"

AI can be extensively utilized for customized information services, collecting user information, and analyzing types of users, their needs, intentions, preferences, habits, and even mainstream public awareness over a certain period. It can then be used to offer formulaic and tailored information and services, aggravating the effects of "information cocoons."

(b) Risks of usage in launching cognitive warfare

AI can be used to make and spread fake news, images, audio, and videos, propagate content of terrorism, extremism, and organized crime, interfere in the internal affairs, social systems, and social order of other countries, and jeopardize the sovereignty of other countries. AI can shape public values and cognitive thinking with social media bots gaining discourse power and agenda-setting power in cyberspace.

3.2.4 Ethical risks

(a) Risks of exacerbating social discrimination and prejudice, and widening the intelligence divide

AI can be used to collect and analyze human behaviors, social status, economic status, and individual personalities, labeling and categorizing groups of people to treat them in a discriminatory manner, thus causing systematic and structural social discrimination and prejudice. At the same time, the intelligence divide would widen among regions.

(b) Risks of challenging traditional social order

The development and application of AI may lead to tremendous changes in production tools and relations, accelerating the reconstruction of traditional industry modes, transforming traditional views on employment, fertility, and education, and bringing challenges to the stable functioning of the traditional social order.


(c) Risks of AI becoming uncontrollable in the future

With the fast development of AI technologies, there is a risk of AI autonomously acquiring external resources, conducting self-replication, becoming self-aware, seeking external power, and attempting to seize control from humans.

4. Technological measures to address risks

In response to the above risks, AI developers, service providers, and system users should prevent them by taking technological measures in the fields of training data, computing infrastructures, models and algorithms, product services, and application scenarios.

4.1 Addressing AI's inherent safety risks

4.1.1 Addressing risks from models and algorithms

(a) Explainability and predictability of AI should be constantly improved to provide clear explanations of the internal structure, reasoning logic, technical interfaces, and output results of AI systems, accurately reflecting the process by which AI systems produce outcomes.

(b) Secure development standards should be established and implemented in the design, R&D, deployment, and maintenance processes to eliminate as many security flaws and discrimination tendencies in models and algorithms as possible and to enhance robustness.


4.1.2 Addressing risks from data

(a) Security rules on data collection and usage, and on processing personal information, should be abided by in all procedures of training data and user interaction data, including data collection, storage, usage, processing, transmission, provision, publication, and deletion. This aims to fully ensure users' legitimate rights stipulated by laws and regulations, such as their rights to control, to be informed, and to choose.

(b) Protection of IPR should be strengthened to prevent infringement on IPR in stages such as selecting training data and result outputs.

(c) Training data should be strictly selected to ensure exclusion of sensitive data in high-risk fields such as nuclear, biological, and chemical weapons and missiles.

(d) Data security management should be strengthened to comply with data security and personal information protection standards and regulations if training data contains sensitive personal information and important data.

(e) Truthful, precise, objective, and diverse training data from legitimate sources should be used, and ineffective, wrong, and biased data should be filtered out in a timely manner.

(f) The cross-border provision of AI services should comply with the regulations on cross-border data flow. The external provision of AI models and algorithms should comply with export control requirements.
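A minimal sketch of the filtering step called for in items (c) and (e), assuming a text corpus screened before training. The patterns, blocklist term, and sample records below are invented; production pipelines use far richer classifiers, but the shape — declare what must not enter the corpus, then keep only records that pass — is the same.

```python
import re

# Hypothetical screening rules for a training-data pipeline: drop records
# carrying obvious personal identifiers or flagged sensitive terms.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style identifier
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
BLOCKLIST = {"enrichment protocol"}               # illustrative sensitive term

def keep(record: str) -> bool:
    """Return True only if the record is safe to include in the corpus."""
    if any(term in record.lower() for term in BLOCKLIST):
        return False
    return not any(p.search(record) for p in PII_PATTERNS)

corpus = [
    "The model should cite public sources.",
    "Contact alice@example.com for the raw logs.",
    "SSN 123-45-6789 appears in this record.",
]
clean = [r for r in corpus if keep(r)]  # only the first record survives
```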

4.1.3 Addressing risks from AI systems

(a) To properly disclose the principles, capacities, application scenarios, and safety risks of AI technologies and products, to clearly label outputs, and to constantly make AI systems more transparent.

(b) To enhance the risk identification, detection, and mitigation of platforms where multiple AI models or systems congregate, so as to prevent malicious acts or attacks and invasions that target the platforms from impacting the AI models or systems they support.

(c) To strengthen the capacity of constructing, managing, and operating AI computing platforms and AI system services safely, with an aim to ensure uninterrupted infrastructure operation and service provision.

(d) To fully consider the supply chain security of the chips, software, tools, computing infrastructure, and data sources adopted for AI systems. To track the vulnerabilities and flaws of both software and hardware products and make timely repair and reinforcement to ensure system security.

4.2 Addressing safety risks in AI applications

4.2.1 Addressing cyberspace risks

(a) A security protection mechanism should be established to prevent models from being interfered with and tampered with during operation, so as to ensure reliable outputs.

(b) A data safeguard should be set up to make sure that AI systems comply with applicable laws and regulations when outputting sensitive personal information and important data.

4.2.2 Addressing real-world risks


(a) To establish service limitations according to users' actual application scenarios and cut AI systems' features that might be abused. AI systems should not provide services that go beyond the preset scope.

(b) To improve the ability to trace the end use of AI systems to prevent high-risk application scenarios such as manufacturing of weapons of mass destruction, like nuclear, biological, and chemical weapons and missiles.

4.2.3 Addressing cognitive risks

(a) To identify unexpected, untruthful, and inaccurate outputs via technological means, and regulate them in accordance with laws and regulations.

(b) Strict measures should be taken to prevent abuse of AI systems that collect, connect, gather, analyze, and dig into users' inquiries to profile their identity, preferences, and personal mindset.

(c) To intensify R&D of AI-generated content (AIGC) testing technologies, aiming to better prevent, detect, and respond to cognitive warfare.

4.2.4 Addressing ethical risks

(a) Training data should be filtered and outputs should be verified during algorithm design, model training and optimization, service provision, and other processes, in an effort to prevent discrimination based on ethnicity, belief, nationality, region, gender, age, occupation, and health factors, among others.

(b) AI systems applied in key sectors, such as government departments, critical information infrastructure, and areas directly affecting public safety and people's health and safety, should be equipped with highly efficient emergency management and control measures.

5. Comprehensive governance measures

While adopting technological controls, we should formulate and refine comprehensive AI safety and security risk governance mechanisms and regulations that engage multi-stakeholder participation, including technology R&D institutions, service providers, users, government authorities, industry associations, and social organizations.

5.1 To implement tiered and category-based management for AI applications. We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, and impose requirements on the adoption of AI technologies by specific users and in specific scenarios, thereby preventing AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold or those applied in specific industries and sectors, and demand that such systems possess safety protection capacities throughout the life cycle, including design, R&D, testing, deployment, utilization, and maintenance.
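The tiered, category-based idea above can be sketched as a simple grading rule. The compute threshold, sector list, and tier names below are hypothetical — the framework itself sets no concrete numbers — but the two-pronged test (capability threshold or sensitive application scenario) mirrors the registration criterion described in 5.1.

```python
from dataclasses import dataclass

# Hypothetical grading parameters; a real regime would define these in standards.
HIGH_RISK_SECTORS = {"critical infrastructure", "healthcare", "finance"}
COMPUTE_THRESHOLD_FLOPS = 1e25   # invented registration threshold

@dataclass
class AISystem:
    name: str
    training_compute_flops: float
    sector: str

def risk_tier(system: AISystem) -> str:
    """Grade a system by capability and application scenario."""
    if (system.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS
            or system.sector in HIGH_RISK_SECTORS):
        return "registration required"   # lifecycle safety obligations apply
    return "general tier"

chatbot = AISystem("retail assistant", 1e22, "e-commerce")
grid_ai = AISystem("grid controller", 1e23, "critical infrastructure")
```

Either prong alone suffices: the grid controller is registered on its sector despite modest compute, which is why the rule tests scenario and capability independently.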

5.2 To develop a traceability management system for AI services. We should use digital certificates to label the AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels throughout key stages including creation sources, transmission paths, and distribution channels, with a view to enabling users to identify and judge information sources and credibility.
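One way to picture the explicit/implicit label pairing in 5.2: a human-visible notice attached to the displayed text, plus a machine-readable provenance record that downstream services can verify against the content. All field names here are hypothetical; real deployments would follow the applicable labeling standards and certificate schemes.

```python
import hashlib
import json

def label_output(text: str, provider_id: str) -> dict:
    """Attach an explicit (visible) and implicit (machine-readable) label."""
    provenance = {
        "provider": provider_id,                                    # creation source
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }
    return {
        "display_text": text + "\n[AI-generated content]",  # explicit label
        "provenance": json.dumps(provenance),               # implicit label
    }

def verify(labeled: dict, original_text: str) -> bool:
    """Check the implicit label against the content it claims to describe."""
    rec = json.loads(labeled["provenance"])
    return rec["content_sha256"] == hashlib.sha256(original_text.encode()).hexdigest()

out = label_output("Quarterly summary ...", provider_id="service-042")
```

The hash binds the implicit label to one exact piece of content, so relabeled or altered text fails verification along the transmission path.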

5.3 To improve AI data security and personal information protection regulations. We should explicate the requirements for data security and personal information protection in various stages such as AI training, labeling, utilization, and output based on the features of AI technologies and applications.

5.4 To create a responsible AI R&D and application system. We should propose pragmatic instructions and best practices to uphold the people-centered approach and adhere to the principle of developing AI for good in AI R&D and application, and continuously align AI's design, R&D, and application processes with such values and ethics. We should explore copyright protection, development, and utilization systems that adapt to the AI era and continuously advance the construction of high-quality foundational corpora and datasets to provide premium resources for the safe development of AI. We should establish AI-related ethical review standards, norms, and guidelines to improve the ethical review system.

5.5 To strengthen AI supply chain security. We should promote knowledge sharing in AI, make AI technologies available to the public under open-source terms, and jointly develop AI chips, frameworks, and software. We should guide the industry to build an open ecosystem, enhance the diversity of supply chain sources, and ensure the security and stability of the AI supply chain.

5.6 To advance research on AI explainability. We should organize and conduct research on the transparency, trustworthiness, and error-correction mechanisms in AI decision-making from the perspectives of machine learning theory, training methods, and human-computer interaction. Continuous efforts should be made to enhance the explainability and predictability of AI to prevent malicious consequences resulting from unintended decisions made by AI systems.

5.7 To share information on, and organize emergency response to, AI safety risks and threats. We should continuously track and analyze security vulnerabilities, defects, risks, threats, and safety incidents related to AI technologies, software and hardware products, services, and other aspects. We should coordinate with relevant developers and service providers to establish a reporting and information-sharing mechanism on risks and threats. We should establish an emergency response mechanism for AI safety and security incidents, formulate emergency plans, conduct emergency drills, and handle AI safety hazards, AI security threats, and incidents in a timely, rapid, and effective manner.

5.8 To enhance the training of AI safety talents. We should promote the development of AI safety education in parallel with the AI discipline. We should leverage schools and research institutions to strengthen talent cultivation in the fields of design, development
