OECD Publishing

USING AI IN THE WORKPLACE
OPPORTUNITIES, RISKS AND POLICY RESPONSES

OECD ARTIFICIAL INTELLIGENCE PAPERS
March 2024, No. 11


This paper is published under the responsibility of the Secretary-General of the OECD. The opinions expressed and the arguments employed herein do not necessarily reflect the official views of OECD member countries.

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area.

Cover image: © Kjpargeter/S

© OECD 2024

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at /termsandconditions.


Using AI in the workplace: Opportunities, risks and policy responses

Introduction and purpose

Policy makers across the globe are grappling with the rapid developments in artificial intelligence (AI) technologies and their adoption in the workplace. Even before the advent of generative AI, impressive progress had been made in a range of domains, including computer vision, reasoning, problem solving, as well as reading comprehension and learning. Employers are beginning to use AI applications to sift through CVs, interact with customers, allocate, direct and evaluate work, and to identify and provide training. Workers are using AI in an increasing number of tasks. The advent of generative AI has resulted in a shift and acceleration in the use and impact of AI, which is now a general-purpose technology that is likely to affect every occupation and sector of the economy.

AI can bring significant benefits to the workplace. In the OECD AI surveys, four in five workers said that AI had improved their performance at work and three in five said it had increased their enjoyment of work (Lane, Williams and Broecke, 2023[1]). Workers were also positive about the impact of AI on their physical and mental health, as well as its usefulness in decision making (Lane, Williams and Broecke, 2023[1]). Not investing in AI and not adopting it in the workplace would be a missed opportunity to boost productivity and improve job quality, amongst other benefits. Unequal access to and use of AI in the workplace could lead to increased disparities between firms and workers, as well as across countries.

To realise these opportunities, it is however necessary to address the risks raised by AI for the labour market. The OECD AI surveys show that 3 in 5 workers are worried about losing their job to AI in the next 10 years, and 2 in 5 expect AI to reduce wages in their sector. Workers also express concerns around increased work intensity and the collection and use of data, amongst others (Lane, Williams and Broecke, 2023[1]). Other risks include: bias and discrimination, unequal impact on workers, lack of human oversight, as well as lack of transparency, explainability and accountability.

Box 1. The OECD AI surveys

Wishing to capture workers' and employers' own perceptions of the current and future impact of AI on their workplaces, the OECD surveyed a total of 5 334 workers and 2 053 firms in the manufacturing and financial sectors in Austria, Canada, France, Germany, Ireland, the United Kingdom and the United States. The surveys examine how and why AI is being implemented in the workplace; its impact on management, working conditions and skill needs; its impact on worker productivity, wages and employment; what measures are being put in place to manage transitions; and concerns and attitudes surrounding AI. The most frequently reported uses of AI include data analytics and fraud detection in the finance sector, and production processes and maintenance tasks in manufacturing.


The survey reveals that both workers and employers are generally very positive about the impact of AI on worker productivity and working conditions. Around 80% of AI users said that AI had improved their performance at work, and AI users were more than four times as likely to say that AI had improved working conditions as to say that AI had worsened them.

However, there are also concerns, including about job loss, an issue that should be closely monitored. The surveys also indicate that, while many workers trust their employers when it comes to the implementation of AI in the workplace, more can be done to improve trust. In particular, the surveys show that both training and worker consultation are associated with better outcomes for workers.

Source: Lane, M., M. Williams and S. Broecke (2023[1]), "The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers", https://doi.org/10.1787/ea0a0fe1-en.

A risk-based approach has been common in thinking about the policy and regulatory response to AI. In December 2023, the European Parliament and Council reached a provisional agreement on the Artificial Intelligence Act, which will establish rules for AI based on its potential risks and level of impact, with some applications being banned and obligations imposed for applications that are deemed to be high risk, such as many uses in the workplace. In the United States, the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued in October 2023 directs "the most sweeping actions ever taken to protect Americans from the potential risks of AI systems", including, for example, developing principles and best practices to mitigate the harms and maximise the benefits of AI for workers. The Bletchley Declaration by countries that attended the AI Safety Summit at Bletchley Park (United Kingdom) in November 2023 focused on identifying AI safety risks and building risk-based policies. In many cases, AI does not operate in a regulatory vacuum: there are already laws that regulate its use and impact. However, there are gaps in the existing regulatory and policy frameworks, and urgent policy action is needed.

As policy makers implement these measures, there is a need for specific guidance on risks and measures linked to the use of AI in the workplace. This note uses the OECD Principles on trustworthy AI and draws on the substantial body of work done by the OECD in this field (OECD, 2023[2]) to identify key risks posed by the use of AI in the workplace, to identify the main policy gaps and to offer possible policy avenues specific to labour markets. The note presents the risks and the associated policy responses individually, but these risks interact with each other, and measures to address one risk will often contribute to addressing others as well.

Risks, policy gaps and policy avenues

Automation and job displacement

Risks: AI is an automating technology that differs from previous technologies in at least three important aspects. First, AI extends the types of tasks that can be automated to many non-routine cognitive tasks, and therefore exposes workers who were previously relatively protected from automation (e.g. the high-skilled) to the risks of displacement. Second, all occupations and sectors are likely to be affected by AI (as opposed to, for example, robots, which primarily impacted the manufacturing sector). Third, the speed of AI development and adoption in the labour market leaves little time for adjustment and could raise frictional unemployment. So far, there is little evidence of a net negative impact of AI on the number of jobs, but the risk of automation remains substantial: the OECD estimates that occupations at the highest risk of automation account for about 27% of total employment. It will be important to help workers move from declining sectors and occupations into new and growing ones.


Figure 1. Percentage of employment in highly automatable jobs, 2019

(Chart not reproduced; vertical axis shows the share of employment, 0-40%.)

Source: OECD (2023[2]), OECD Employment Outlook 2023, https://doi.org/10.1787/08785bba-en.

Policy gaps: Most countries recognise the importance of skills and training to adapt to AI-related automation, but few have proposed concrete action plans, and few are prepared for the quantum leap in training that will be required. Existing programmes tend to focus on digital or AI skills, but few recognise the importance of complementary skills (e.g. communication, creativity, or working with others), and only a minority have developed an integrated approach for AI skills development. Social dialogue will also be important in managing these transitions, but faces its own challenges (see the section on social dialogue below).

Possible policy directions that countries may consider:

• Monitoring the impact of AI on the labour market to identify jobs most at risk of automation.
• Anticipating future skill needs related to AI adoption in the workplace.
• Skills development programmes at all levels of education, to develop the skills needed to work with and develop AI.
• Training for workers and managers to support the adoption and use of trustworthy AI.
• Employment support measures, including targeted training programmes and career guidance, for workers at direct risk of automation by AI.
• Adequate social protection for workers displaced by AI.
• Supporting social dialogue (see below).

Rising inequality

Risks: Workers face different risks of automation, depending for example on their skills, occupation and firm size. They also have different exposure to risks of bias and discrimination, privacy breaches, and health and safety. On the other hand, workers who do not have access to AI in the workplace cannot benefit from the opportunities it offers, for example to be more productive, to overcome obstacles linked to disability, or to access new jobs created by AI. Emerging evidence shows that AI can also increase the productivity of low-skilled workers in certain occupations, reducing productivity gaps with higher-skilled workers. There is therefore a concrete risk that the adoption of AI in the workplace leads to increased inequality in the labour market.


Figure 2. Percentage of employers who think AI helps/harms groups of workers, finance and manufacturing

(Chart not reproduced. Groups shown: older workers, low-skilled workers, female workers, workers with disabilities; bars indicate the share of employers answering "help them" versus "harm them", horizontal axis 0-60%.)

Source: OECD (2023[2]), OECD Employment Outlook 2023, https://doi.org/10.1787/08785bba-en.

Policy gaps: While some countries already have policies in place, such as training or subsidies for AI adoption, these may be poorly targeted, and there is a need to better understand which groups face the highest risk so that public resources are used efficiently. Where AI offers opportunities for reducing inequalities, governments can do more to foster its development and adoption, especially among smaller firms, which have fewer means to access good-quality AI tools. For example, even though many AI solutions exist to help people with disabilities overcome labour market barriers, there are challenges with funding, certification and quality standards for such tools, as well as a lack of accessibility training among developers. Policies to address the other risks discussed in the rest of this brief will also help address inequalities.

Possible policy directions that countries may consider:

• Identifying the groups most exposed to AI-related risks in the labour market.
• Training and support targeted to disadvantaged workers prior to and during AI adoption.
• Targeted grants or subsidies for SMEs to facilitate their adoption of trustworthy AI.
• Tackling risks in AI systems related to bias and discrimination, and to autonomy (see below).
• Involving vulnerable and underrepresented groups in the development and adoption of AI systems for the workplace.

Risks to occupational health and safety

Risks: AI systems can be used to improve workers' health and safety at work, for example by automating dangerous tasks, detecting hazards, or monitoring worker fatigue. The OECD AI surveys show, for example, that the adoption of AI at work increased enjoyment at work for 3 in 5 workers (Lane, Williams and Broecke, 2023[1]). At the same time, the use of AI creates new risks from an occupational safety and health (OSH) perspective. For instance, some AI-powered monitoring systems may increase time and performance pressure to the extent that they cause stress and/or create incentives for workers to ignore safety standards. Stress may also result from decisions that are unfair, lack transparency and explainability, and where there is no easy opportunity for redress. The disappearance of routine tasks through AI may deprive the worker of the respite provided by these tasks, leading to more mentally taxing shifts and possibly increasing the risk of physical injury. Increased use of AI in the workplace may also decrease human contact, to the detriment of mental health.

Figure 3. Number of incidents causing physical or psychological harm to workers, 2023

(Chart not reproduced; monthly incident counts, January to December 2023, vertical axis 0-40.)

Source: OECD AI Incidents Monitor (AIM), https://oecd.ai/en/incidents.

Gaps: Most countries have regulations that set out employers' obligations towards employees concerning their occupational safety and health. While the details vary from country to country, employers usually have to assess risks, eliminate or reduce them with preventative and protective measures, and inform workers about the risks and train them. While in theory such regulations should also cover AI, there may be gaps, particularly with regard to mental health. Also, while most countries have product liability regulations, these will likely need to be adapted to the use of AI systems. Finally, labour inspectorates may lack the knowledge and/or capacity to address the new risks posed by AI.

Possible policy directions that countries may consider:

• Reviewing and, if necessary, updating labour laws and OSH regulations to address the use of AI in the workplace.
• Health and safety risk assessments, audits and certifications for AI systems to ensure workers' health and safety from the design stage.
• Strengthening labour inspectorates' capacities to inspect and enforce compliance with the law.
• Involving managers, workers, and their representatives in the design and adoption of AI systems in the workplace.
• Informing employers, workers and their representatives about the possible OSH risks of AI systems used in the workplace.

Privacy breaches

Risks: The increased use of AI in the workplace will likely result in the greater collection and analysis of data on workers and job candidates to train and use these systems. Data may or may not be personal, and could include information such as worker movements, biometric data like heart rates and blood pressure, as well as digital activities. Workers may feel that this is an invasion of their privacy, in particular if they gave no consent to the collection and use of the data. Workers might also worry that the data are used for purposes other than those for which they were intended. Moreover, data collection may result in increased monitoring and surveillance, which could lead to stress.

Gaps: The protection of workers against privacy risks varies considerably across OECD countries but, even in those with the strongest protections, gaps remain. For example, in EU countries, the General Data Protection Regulation (GDPR) strengthens individuals' control and rights over their personal information, but there are significant enforcement gaps. The GDPR also leaves data protection in the employment context to be addressed at the Member State level, so these rules are still far from being harmonised across countries, consistent and comprehensive. Protections are even weaker in other OECD countries. For example, in most US states, there are very limited protections when it comes to the collection and use of data on workers by employers.

Figure 4. Percentage of workers who are worried about their privacy, manufacturing and finance employers who use AI

(Chart not reproduced; responses on a five-point scale from "strongly agree" to "strongly disagree".)

Note: Workers who report that their employers' use of AI involved the collection of data on workers or their work were asked: "To what extent do you agree or disagree with the following statements? I worry about my privacy when my data is collected".

Source: Lane, M., M. Williams and S. Broecke (2023[1]), "The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers", https://doi.org/10.1787/ea0a0fe1-en.

Possible policy directions that countries may consider:

• Impact assessments and quality labels to evaluate the privacy and security of personal information in AI systems.
• Restricting the collection, use, inference, and disclosure of workers' personal information.
• Requirements to safeguard workers' personal information and ensure the appropriate handling of data.
• Providing information to workers about the data collected by employers and the purpose of its use (see also Transparency).
• Rights for workers to correct, delete, opt out of, or limit the use of sensitive personal information, including through workers' representatives.
• Quality labels and certifications for AI systems with good data protection.


Bias and discrimination

Risks: Trustworthy AI can help identify and reduce human discrimination and bias in the workplace by supporting decisions with quantitative evidence. However, if not well designed and/or trained on biased or non-representative data, AI systems can replicate and systematise human biases that have historically existed in the labour market, leading to bias and discrimination in who can see job postings, who is shortlisted for job openings, who is assigned which tasks at work, who receives training, and how performance is assessed, among others.

Gaps: In theory, existing anti-discrimination legislation is applicable to AI use in the workplace. There may, however, be gaps and loopholes in this legislation. Relevant case law is still limited and will show where legislation may need to be reviewed. The lack of transparency and explainability of AI systems (see Transparency and Explainability) poses further challenges in countries that rely heavily on individual action for seeking redress, making it difficult to contest AI(-based) workplace decisions using only existing anti-discrimination laws.

Figure 5. Percentage of AI-using organisations that do not take steps to reduce unintended bias in the system

(Chart not reproduced; vertical axis 0-80%. Categories shown: not reducing unintended bias; not making sure they can explain AI-powered decisions; not developing ethical AI policies; not guarding against adversarial threats and potential incursions to keep systems healthy; not safeguarding data privacy through the entire lifecycle.)

Source: IBM Watson (2022[3]), IBM Global AI Adoption Index 2022.

Possible policy directions that countries may consider:

• Reviewing and, where necessary, adapting existing anti-discrimination legislation to the use of AI in the workplace.
• Impact assessments to assess risks of bias prior to implementation, and regular audits after implementation (a simple illustration of one audit metric follows this list).
• Quality labels and certifications against bias.
• Involving social partners and representatives of vulnerable and underrepresented workers in the design and deployment of AI systems in the workplace.
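To make the idea of a post-implementation audit more concrete, the sketch below computes one commonly used audit metric: the ratio of selection rates between demographic groups. This is a minimal illustration only; the group labels, data and the 0.8 threshold (the "four-fifths" rule of thumb used in US employment-selection guidance) are assumptions for the example and are not prescribed by this note.

```python
# Minimal sketch of one metric a bias audit might compute for an AI shortlisting
# tool: selection-rate ratios across groups. Data and group labels are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected being True/False."""
    counts = defaultdict(lambda: [0, 0])  # group -> [number selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact_ratios(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical screening outcomes produced by an AI shortlisting tool.
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 25 + [("group_b", False)] * 75)

for group, ratio in disparate_impact_ratios(outcomes, "group_a").items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection-rate ratio {ratio:.2f} ({flag})")
```

Such a check is only one component of an audit; a fuller assessment would also examine error rates across groups, the representativeness of training data and the business justification for any observed disparity.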


Lack of autonomy, agency, and dignity

Risks: Firms frequently introduce AI systems to streamline production processes, boost efficiency and increase productivity. These systems can give workers real-time and continuous feedback on their performance, direct work and provide behavioural nudges. This "algorithmic management" can unduly limit workers' autonomy, reduce human contact and the ability of workers to discuss their work with managers, or to contest decisions that seem unsafe, unfair, or discriminatory. These practices could undermine workers' sense of professional identity and meaningfulness, and present risks for physical and mental health and safety at work.

Gaps: Some countries have introduced regulation on workplace monitoring (e.g. the Electronic Communications Privacy Act in the United States, the GDPR in the European Union and the United Kingdom, or the Personal Information Protection and Electronic Documents Act in Canada) and automated decision making (the Algorithmic Accountability Act in the United States and the GDPR). A comprehensive approach to regulating algorithmic management is still lacking in most jurisdictions, however. The EU Platform Work Directive is one of the first pieces of legislation to do so, but it only applies to a very small sub-section of the workforce (platform workers).

Figure 6. Percentage of workers whose sense of autonomy decreased, manufacturing and finance employers who use AI

(Chart not reproduced; bars for workers "managed by AI" and "other AI users", horizontal axis 0-16%.)

Source: OECD (2023[2]), OECD Employment Outlook 2023, https://doi.org/10.1787/08785bba-en.

Possible policy directions that countries may consider:

• Defining clear boundaries for the use of AI systems, e.g. on the permissible extent of monitoring and automated decision making.
• Requiring human oversight of decisions that affect workers' safety, rights, and opportunities.
• Consultations and involvement of workers and/or their representatives in the adoption of AI systems (see Challenges to social dialogue).


Lack of transparency

Risks: The ability of workers to exercise specific rights (e.g. the right not to be subject to automated decision making), detect risks, and/or effectively question outcomes hinges on their awareness of their interactions with AI systems and of how those systems reach their outcomes (see also Insufficient explainability). However, AI use can be difficult to detect without explicit disclosure. For instance, Harris, B. et al. (2023[4]) find that only 17% of adults in the United Kingdom can often or always tell when they are using AI. Even if individuals are aware of their interactions with AI, gaining insight into its decision-making process can be difficult, for instance due to developers' reluctance to disclose information, or to the complexity of the system.

Gaps: Most AI principles underscore the importance of transparency of AI and its use, but translating these concepts into practice may be complex. For instance, several states in the United States have introduced laws requiring employers to notify applicants and/or employees about their interactions with AI, but these regulations often do not encompass all conceivable AI applications and focus on the use of AI for recruitment or electronic monitoring. In the EU, the Platform Work Directive provides individuals with some rights to information on the logic of algorithms where automated decision making is used; however, it only applies to platform workers. In addition, there may be barriers to transparency due to intellectual property rights (trade secrets) and privacy laws, both of which limit how much information can be disclosed.

Possible policy directions that countries may consider:

• Requirements to disclose the use of AI systems in the workplace and in hiring processes, for both employers and workers.
• Reviewing and, if necessary, updating privacy and intellectual property laws to address potential ambiguities and balance the rights they protect against the need for transparent AI (use).

Insufficient explainability

Risks: AI systems, particularly those using complex technologies like deep neural networks, yield outcomes that can be difficult or even impossible to explain. A lack of explainability can undermine the trust and confidence that people place in AI systems and the decisions that are informed by them. It also makes it difficult for individuals to provide informed consent to the use of such systems, or to identify and seek redress for adverse effects caused by AI systems in the workplace. A lack of trust and confidence, in turn, can cause worker resistance and hence hinder the adoption of AI systems in the workplace.

Gaps: Policy makers in various countries have touted explainability as a desirable property of AI systems; however, there is still no broad agreement on what explainability would entail. The GDPR, for example, requires data subjects to be provided with "meaningful information about the logic involved" in automated decision-making processes, which often starts by providing information about what the AI system has been "optimised" to do. Explanatory tools can also help, such as a simple algorithm that approximates the behaviour of the AI system and thus provides an approximate explanation (see the sketch below). For some AI systems (and depending upon the definition used), explainability may be difficult if not impossible to achieve, or it may be in conflict with other desirable objectives such as accuracy or privacy. Neither the EU AI Act nor the US Presidential Executive Order mentions explainability.
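As a concrete illustration of such an explanatory tool, the sketch below fits a shallow decision tree as a surrogate for a more complex "black-box" model and reports how faithfully it mimics it. This is a minimal sketch only: it assumes scikit-learn is available, and the synthetic data, feature names and model choices are hypothetical, not drawn from this note.

```python
# Minimal sketch of a surrogate-model explanatory tool (assumes scikit-learn is
# installed; the data and feature names below are hypothetical, for illustration).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical screening data: three applicant features and a past hiring decision.
feature_names = ["years_experience", "test_score", "num_certifications"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# "Black-box" model whose individual decisions are hard to trace.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow decision tree trained to mimic the black-box predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree's rules give an approximate, human-readable account of the behaviour.
print(export_text(surrogate, feature_names=feature_names))

# Fidelity: how often the surrogate agrees with the black box on the same data.
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```

The surrogate only approximates the underlying system, which is precisely the trade-off the paragraph above describes: the explanation is simpler and more legible, but it is not a complete account of how the original model behaves.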

Possible policy directions that countries may consider:

• Requiring developers to provide documentation, instructions of use, and explanatory tools to accompany AI systems used in the workplace.
• Requiring employers and workers to disclose the use of AI systems in the workplace and in hiring processes, and to provide the results of explanatory tools upon the request of workers or their representatives.


Lack of accountability

Risks: Establishing clear lines of accountability is fundamental for a trustworthy use of AI and the enforcement of regulations. It is not always clear, however, which ac…
