Securing Critical Infrastructure in the Age of AI

This workshop and the production of the final report were made possible by a generous contribution from the Microsoft Corporation. The views in this document are strictly the authors' and do not necessarily represent the views of the U.S. government, the Microsoft Corporation, or of any institution, organization, or entity with which the authors may be affiliated.

Reference to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply an endorsement, recommendation, or favoring by the U.S. government, including the U.S. Department of the Treasury, the U.S. Department of Homeland Security, and the Cybersecurity and Infrastructure Security Agency, or any other institution, organization, or entity with which the authors may be affiliated.


Executive Summary

As artificial intelligence capabilities continue to improve, critical infrastructure (CI) operators and providers seek to integrate new AI systems across their enterprises; however, these capabilities come with attendant risks and benefits. AI adoption may lead to more capable systems, improvements in business operations, and better tools to detect and respond to cyber threats. At the same time, AI systems will also introduce new cyber threats that CI providers must contend with. Last year's AI executive order directed the various Sector Risk Management Agencies (SRMAs) to "evaluate and provide…an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber-attacks."

Despite the executive order's recent direction, AI use in critical infrastructure is not new. AI tools that excel in prediction and anomaly detection have been used for cyber defense and other business activities for many years. For example, providers have long relied on commercial information technology solutions that are powered by AI to detect malicious activity. What has changed is that new generative AI techniques have become more capable and offer novel opportunities for CI operators. Potential uses include more capable chatbots for customer interaction, enhanced threat intelligence synthesis and prioritization, faster code production processes, and, more recently, AI agents that can perform actions based on user prompts.

CI operators and sectors are attempting to navigate this rapidly changing and uncertain landscape. Fortunately, there are analogues from cybersecurity that we can draw on. Years ago, innovations in network connectivity provided CI operators with a way to remotely monitor and operate many systems. However, this also created new attack vectors for malicious actors. Past lessons can help inform how organizations approach the integration of AI systems. Today, risk may arise in two ways: from AI vulnerabilities or failures in systems deployed within CI and from the malicious use of AI systems against CI sectors.

This workshop report provides technical mitigations and policy recommendations for managing the use of AI in critical infrastructure. Several findings and recommendations emerged from this discussion.

● Resource disparities between CI providers within and across sectors have a major impact on the prospects of AI adoption and management of AI-related risks. Further programs are needed to support less well-resourced providers with AI-related assistance, including financial resources, data for training models, requisite talent and staff, forums for communication, and a voice in the broader AI discourse. Expanding formal and informal means of mutual assistance could help close the disparity gap. These initiatives share resources, talent, and knowledge across organizations to improve the security and resiliency of the sector as a whole. They include formal programs, such as sharing personnel in response to incidents or emergencies, and informal efforts such as developing best practices or vetting products and services.

● There is a recognized need to integrate AI risk management into existing enterprise risk management practices; however, ownership of AI risk can be ambiguous within current corporate structures. This risk was referred to by one participant as the AI "hot potato" being tossed around the C-suite. A clear designation of responsibility for AI risk within the corporate structure is needed.

● Ambiguity between AI safety and AI security also poses substantial challenges to operationalizing AI risk management. Organizations are often unsure how to apply guidance from the National Institute of Standards and Technology's recently published AI risk management framework alongside the cybersecurity framework. Further guidance on how to implement a unified approach to AI risk is needed. Tailoring and prioritizing this guidance would help make it more accessible to less well-resourced providers and those with specific, often bespoke, needs.

● While there are well-established channels for cybersecurity information sharing, there is no analogue in the context of AI. SRMAs should leverage existing venues, such as the Information Sharing and Analysis Centers, for AI security information sharing. Sharing AI safety issues, mitigations, and best practices is also critical, but the channels to do so are unclear. Clarity on what constitutes an AI incident, which incidents should be reported, the thresholds for reporting, and whether existing cyber-incident reporting channels are sufficient would be valuable. To promote cross-sector visibility and analysis that spans both AI safety and security, the sectors should consider establishing a centralized analysis center for AI safety and security.

● Skills to manage cyber and AI risks are similar but not identical. The implementation of AI systems will require expertise that many CI providers do not currently have. As such, providers and operators should actively upskill their current workforces and seek opportunities to cross-train staff with relevant cybersecurity skills to effectively address the range of AI- and cyber-related risks.

● Generative AI introduces new issues that can be more difficult to manage and that warrant close examination. CI providers should remain cautious and informed before adopting newer AI technologies, particularly for sensitive or mission-critical tasks. Assessing whether an organization is even ready to adopt these systems is a critical first step.


Table of Contents

Executive Summary
Introduction
Background
Research Methodology
The Current and Future Use of AI in Critical Infrastructure
Figure 1. Examples of AI Use Cases in Critical Infrastructure by Sector
Risks, Opportunities, and Barriers Associated with AI
Risks
Opportunities
Barriers to Adoption
Observations
Disparities Between and Within Sectors
Unclear Boundary Between AI and Cybersecurity
Challenges in AI Risk Management
Fractured Guidance and Regulation
Recommendations
Cross-Cutting Recommendations
Responsible Government Departments and Agencies
Sectors
Organizations
Critical Infrastructure Operators
AI Developers
Authors
Appendix A: Background Research Sources
Government/Intergovernmental
Science/Academia/Nongovernmental Organizations/Federally Funded Research and Development Centers/Industry
Documents Mentioned During Workshop
Endnotes


Introduction

In October 2023, the White House released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Section 4.3 of the order specifically focuses on the management of AI in critical infrastructure and cybersecurity.1 While regulators debate strategies for governing AI at the state, federal, and international levels, protecting CI remains a top priority for many stakeholders. However, there are numerous outstanding questions on how best to address AI-related risks to CI, given the fractured regulatory landscape and the diversity among the 16 CI sectors.

To address some of these questions, the Center for Security and Emerging Technology (CSET) hosted an in-person workshop in June 2024 that brought together representatives from the U.S. federal government, think tanks, industry, academia, and five CI sectors (communications, information technology, water, energy, and financial services). The discussion was framed around the issue of security in CI, including the risk from both AI-enabled cyber threats and potential vulnerabilities or failures in deployed AI systems. The intention of the workshop was to foster a candid conversation about the current state of AI in critical infrastructure, identify opportunities and risks—particularly related to cybersecurity—presented by AI adoption, and recommend technical mitigations and policy options for managing the use of AI and machine learning in critical systems.

The discussion focused on CI in the United States, with some limited conversation on the global regulatory landscape. This report summarizes the workshop's findings in four primary sections. The Background section contains CSET research on the current and potential future use of AI technologies in various CI sectors. The Risks, Opportunities, and Barriers section addresses these issues associated with AI that participants raised over the course of the workshop. The third section, Observations, categorizes various themes from the discussion, and the report concludes with Recommendations, which are organized by target audience (government, CI sectors, and individual organizations within both the sectors and the AI industry).


Background

In preparation for this workshop, CSET researchers examined the reports submitted by various federal departments and agencies in response to the White House AI executive order, section 4.3. These reports provided insight into how some CI owners and operators are already using AI within their sector, but it was sometimes unclear what types of AI systems CI providers were employing or considering. For example, the U.S. Department of Energy (DOE) summary report overviewed the potential for using AI-directed or AI-assisted systems to support the control of energy infrastructure, but it did not specify whether these were generative AI or traditional models. This was the case for many of the sources and use cases assessed for the background research, spanning information technology (IT), operational technology (OT), and sector-specific use cases. This ambiguity reduces visibility into the current state of AI adoption across the CI sectors, limiting the effectiveness of ecosystem monitoring and risk assessment.

This section summarizes CSET's preliminary research for the workshop and provides examples of many of the current and potential future AI use cases in three sectors—financial services, water, and energy—based on federal agency reporting.

Research Methodology

The U.S. Department of Homeland Security (DHS) recently released guidelines for CI owners and operators that categorize over 150 individual AI use cases into 10 categories.2 While the report encompassed all 16 CI sectors, the use cases were not specified. To identify AI use cases for the sectors that participated in the workshop, we assessed reports from the U.S. Department of the Treasury (financial services), DOE (energy), and the U.S. Environmental Protection Agency (EPA, water). We also examined the AI inventories for each department and agency, but they only included use cases internal to those organizations, not the sectors generally.

The Treasury and DOE reports were written following the AI executive order, were relatively comprehensive, and considered many AI use cases.3 Further use cases in the finance and energy sectors were pulled from nongovernmental sources (e.g., the Journal of Risk and Financial Management and Indigo Advisory Group).4 The EPA sources were dated and lacked details on AI use cases.5 To identify more use cases in the water sector, we assessed literature reviews from Water Resources Management (a forum for publications on the management of water resources) and Water (a journal on water science and technology).6 Although we primarily focused on sources covering U.S. CI, some research encompassed CI abroad. A full list of sources can be found in Appendix A.


The Current and Future Use of AI in Critical Infrastructure

We classify AI use cases in CI into three broad categories: IT, OT, and sector-specific use cases. IT encompasses the use of AI for "traditional" cybersecurity tasks such as network monitoring, anomaly detection, and classification of suspicious emails. All CI sectors use IT, and therefore they all have the potential to use AI in this category. OT encompasses AI use in monitoring or controlling physical systems and infrastructure, such as industrial control systems. Sector-specific use cases include the use of AI for detecting fraud in the financial sector or forecasting power demand in the energy sector. These broad categories provide a shared frame of reference and capture the breadth of AI use cases across sectors. However, they are not meant to be comprehensive or convey the depth of AI use (or lack thereof) across organizations within sectors.

When discussing use cases for CI, we consider a broad spectrum of AI applications. While newer technologies such as generative AI (e.g., large language models) have recently been top of mind for many policymakers, more traditional types of machine learning systems, including predictive AI systems that forecast and identify patterns within data (as opposed to generating content), have long been used in CI. The various AI systems present differing opportunities and challenges, but generative AI introduces new issues that can be more difficult to manage and that warrant close examination. This includes difficulties in interpreting how models process inputs, explaining their outputs, managing unpredictable behaviors, and identifying hallucinations and false information. Even more recently, generative models have been used to power AI agents, enabling these models to take more direct action in the real world. Although these systems are still nascent, their potential to automate tasks—whether routine workstreams or cyberattacks—deserves close watching.

Themes in AI-CI use cases from the reports examined include:

● Many IT use cases employ AI to supplement existing cybersecurity practices and have commonalities across sectors. For example, AI is often used to detect malicious events or threats in IT, be it at a financial firm or water facility. Some AI IT use cases, such as scanning security logs for anomalies, go back to the 1990s. Others have emerged over the past 20 years, such as anomalous or malicious event detection. New potential use cases have surfaced with the recent advent of generative AI, such as mitigating code vulnerabilities and analyzing threat actor behavior.


● Based on reported use cases, there are no explicit examples of generative AI being used in OT. While some applications of traditional AI are being used, such as in infrastructure operational awareness, broader adoption is still fairly limited. This is in part due to concerns over causing errors in critical OT. However, future use cases are being actively considered, such as real-time control of energy infrastructure with humans in the loop.

● Many sector-specific AI use cases seek to improve the reliability, robustness, and efficiency of CI. However, they also raise concerns about data privacy, cybersecurity, AI security, and the need for governance frameworks to ensure responsible AI deployment. It can be more challenging to implement a common risk management framework for these use cases because they are specialized and have limited overlap across sectors.

● AI adoption varies widely across CI sectors. Organizations across each sector have varying technical expertise, funding, experience integrating new technologies, regulatory or legal constraints, and data availability. Moreover, it is not clear whether certain AI use cases were actively being implemented, considered in the near term, or feasible in the long term. Many of the potential AI use cases highlighted in relevant literature are theoretical, with experiments conducted only in laboratory, controlled, or limited settings. One example is a proposed intelligent irrigation system prototype for efficient water usage in agriculture, which was developed using data collected from real-world environments but not tested in the field.7 The feasibility of implementing these applications in practice and across organizations is currently unclear.

● The depth of AI use across organizations within sectors is difficult to assess. There are thousands of organizations across the financial, energy, and water sectors. It is unknown how many organizations within these sectors are using or will use AI, for what purposes, and how the risks from those different use cases vary.

Figure 1 aggregates all AI use cases identified in the preliminary research.* Each sector is divided into IT, OT, and sector-specific use cases and subdivided into current/near-term and long-term use cases.

Figure 1. Examples of AI Use Cases in Critical Infrastructure by Sector

Source: CSET (See Appendix A).

* The sources examined during our preliminary research did not contain any current, near-term, or future examples of AI use cases in financial sector OT, current or near-term examples of AI use cases in water sector OT or IT, nor any future AI use cases in energy sector IT.


Risks, Opportunities, and Barriers Associated with AI

As evidenced by the wide range of current and potential use cases for AI in critical infrastructure, many workshop participants expressed interest in adopting AI technologies in their respective sectors. However, many were also concerned about the broad and uncharted spectrum of risks associated with AI adoption, both from external malicious actors and from internal deployment of AI systems. CI sectors also face a variety of barriers to AI adoption, even for use cases that may be immediately beneficial to them. This section will briefly summarize the discussion concerning these three topics: risks, opportunities, and barriers to adoption.

Risks

AI risk is twofold, encompassing both malicious use of AI systems and AI system vulnerabilities or failures. This subsection will address both of these categories, starting with risks from malicious use, which several workshop participants raised concerns about given the current prevalence of cyberattacks on U.S. critical infrastructure. These concerns included how AI might help malicious actors discover new attack vectors, conduct reconnaissance and mapping of complex CI networks, and make cyberattacks more difficult to detect or defend against. AI-powered tools lower the barrier to entry for malicious actors, giving them a new (and potentially low-cost) way to synthesize vast amounts of information to conduct cyber and physical security attacks. However, the addition of AI alone does not necessarily present a novel threat, as CI systems are already targets for various capable and motivated cyber actors.8

Most concerns about AI in this context centered on its potential to enable attacks that may not currently be possible or increase the severity of future attacks. A more transformative use of AI by attackers could involve seeking improved insights as to what systems and data flows to disrupt or corrupt to achieve the greatest impact.

Generative AI capabilities are currently increasing threats to CI providers in certain cases. These threats include enhanced spearphishing, enabled by large language models. Researchers have observed threat actors exploring the capabilities of generative AI systems, which are not necessarily game-changing but can be fairly useful across a wide range of tasks such as scripting, reconnaissance, translation, and social engineering.9 Furthermore, as AI developers strive to improve generative models' capabilities by enabling the model to use external software tools and interact with other digital systems, digital "agents" that can translate general human instructions into executable subtasks may soon be used for cyber offense.


The other risk category participants identified was related to AI adoption, such as the potential for data leakage, a larger cybersecurity attack surface, and greater system complexity. Data leakage was a significant concern, regarding both the possibility of a CI operator's data being stored externally (such as by an AI provider) and the potential for sensitive information to accidentally leak due to employee usage of AI (such as by prompting an external large language model).
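
One mitigation sometimes discussed for the second leakage path is screening or redacting prompts before they leave the enterprise boundary. The sketch below is purely illustrative and not drawn from the workshop: the regular-expression patterns, the PLC-style tag format, and the idea of gating prompts this way are assumptions made for the example, not a recommended or complete control.

```python
import re

# Hypothetical patterns; a real deployment would tailor these to the operator's
# own asset naming conventions, data classifications, and secret formats.
SENSITIVE_PATTERNS = {
    "ip_address": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "plc_tag":    re.compile(r"\bPLC[-_]\w+\b", re.IGNORECASE),
    "credential": re.compile(r"(?:password|api[_-]?key)\s*[:=]\s*\S+", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact matches and report which sensitive categories were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

redacted, findings = screen_prompt(
    "Why does PLC_pump_7 at 10.0.8.12 keep rejecting password: hunter2?"
)
if findings:
    print(f"sensitive categories detected: {findings}")
print(redacted)  # only the redacted text would be forwarded to an external model
```

Pattern matching of this kind can only catch known formats, so in practice it would be paired with usage policy, access controls, and contractual limits on how an AI provider stores submitted data.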

Incorporating AI systems could also increase a CI operator's cybersecurity attack surface in new—or unknown—ways, especially if the AI system is used for either OT or IT. (A use case encompassing OT and IT, which are typically strictly separated with firewalls to limit the risk of compromise, would increase the attack surface even further.) For certain sectors, participants pointed out that even mapping an operator's networks to evaluate an AI system's usefulness—and subsequently storing or sharing that sensitive information—could present a target for motivated threat actors. CI operators face more constraints than organizations in other industries and therefore need to be extra cautious about disclosing information about their systems. Newer AI products, especially generative AI systems, may also fail unexpectedly because it is impossible to thoroughly test the entire range of inputs they might receive.

Finally, AI systems' complexity presents a challenge for testing and evaluation, especially given that some systems are not fully explainable (in the sense of not being able to trace the processes that lead to the relationship between inputs and outputs). Risks associated with complexity are compounded by the fact that there is a general lack of expertise at the intersection of AI and critical infrastructure, both within the CI community and on the part of AI providers.

Opportunities

Despite acknowledgment of the risks associated with the use of AI, there was general agreement among participants that there are many benefits to using AI technologies in critical infrastructure.

AI technologies are already in use in several sectors for tasks such as anomaly detection, operational awareness, and predictive analytics. These are relatively mature use cases that rely on older, established forms of AI and machine learning (such as classification systems) rather than newer generative AI tools.
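
To make the distinction concrete, the short sketch below shows the flavor of these older, established techniques: a classical unsupervised anomaly detector (an Isolation Forest) flagging outliers in log-derived features. It is a minimal illustration, not a system described by any participant; the feature names, synthetic data, and contamination setting are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for log-derived features, e.g.
# [requests_per_minute, failed_logins, megabytes_sent]
normal = rng.normal(loc=[60.0, 1.0, 5.0], scale=[10.0, 1.0, 2.0], size=(500, 3))
suspicious = np.array([
    [400.0, 30.0, 250.0],  # burst of failed logins plus a large outbound transfer
    [5.0, 0.0, 900.0],     # quiet host suddenly sending an unusual volume of data
])
events = np.vstack([normal, suspicious])

# Fit on the observed window; "contamination" is the assumed share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
labels = model.predict(events)  # -1 = flagged as anomalous, 1 = treated as normal

for idx in np.where(labels == -1)[0]:
    print(f"event {idx} flagged for analyst review: {events[idx]}")
```

Mature deployments wrap this kind of model with feature pipelines, retraining schedules, and analyst review, but the underlying learning technique predates generative AI by decades.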

Other opportunities for AI adoption across CI sectors include issue triage or prioritization (such as for first responders), the facilitation of information sharing in the cybersecurity or fraud contexts, forecasting, threat hunting, Security Operations Center (SOC) operations, and predictive maintenance of OT systems. More generally, participants were interested in AI's potential to help users navigate complex situations and help operators provide more tailored information to customers or stakeholders with specific needs.

Barriers to Adoption

Even after considering the risk-opportunity trade-offs, however, several participants noted that CI operators face a variety of barriers that could prevent them from adopting an AI system even when it may be fully beneficial.

Some of these barriers to adoption are related to hesitancy around AI-related risks, such as data privacy and the potential broadening of one's cybersecurity attack surface. Some operators are particularly hesitant to adopt AI in OT (where it might affect physical systems) or customer-facing applications. The trustworthiness—or lack thereof—of AI systems is also a source of hesitancy.

Other barriers are due to the unique constraints faced by CI operators. For instance, the fact that some systems have to be constantly available is a challenge unique to CI. Operators in sectors with important dependencies—such as energy, water, and communications—have limited windows in which they can take their systems offline. OT-heavy sectors also must contend with additional technical barriers to entry, such as a general lack of useful data or a reliance on legacy systems that do not produce usable digital outputs. In certain cases, it may also be prohibitively expensive—or even technically impossible—to conduct thorough testing and evaluation of AI applications when control of physical systems is involved.

A third category of barriers concerns compliance, liability, and regulatory requirements. CI operators are concerned about risks stemming from the use of user data in AI models and the need to comply with fractured regulatory requirements across different states or different countries. For example, multinational corporations in sectors such as IT or communications are beholden to the laws of multiple jurisdictions and need to adhere to regulations such as the European Union's General Data Protection Regulation (GDPR), which may not apply to more local CI operators.

Finally, a significant barrier to entry across almost all sectors is the need for workers with AI-relevant skills. Participants noted that alleviating workforce shortages by hiring new workers or skilling up current employees is a prerequisite for adopting AI in any real capacity.


Observations

Throughout the workshop, four common trends emerged from the broader discussion. Different participants, each representing different sectors or government agencies, raised them at multiple points during the conversation, an indicator of their saliency. These topics include the disparities between large and small CI providers, the difficulty in defining lines between AI- and cyber-related issues, the lack of clear owners of AI risk, and the fractured landscape of AI guidance and regulation.
