AI-Enabled Influence Operations: Safeguarding Future Elections
Sam Stockwell, Megan Hughes, Phil Swatton, Albert Zhang, Jonathan Hall and Kieran
November 2024
Contents
About CETaS
Acknowledgements
Executive Summary
CETaS UK election security recommendations
Introduction
Research methodology
Report structure
1. Public Vulnerability and Resilience against Deceptive Content
1.1 Risk factors associated with vulnerability
1.2 The effects of AI on risk factors
1.3 Protective factors associated with resilience
2. AI-Enabled US Election Threat Analysis
2.1 Qualitative analysis of AI-enabled US election threats
2.2 Network analysis of US election deepfakes
3. Evaluating Influence Operations in the Age of AI
3.1 Challenges in evaluating influence operations
3.2 Measuring hostile influence operations
4. Policy Responses to AI-Enabled Election Threats
4.1 Legal and regulatory measures
4.2 Policy measures
5. Technical Solutions to AI-Enabled Election Threats
5.1 Prevention methods
5.2 Content detection methods
5.3 Social bot detection methods
5.4 Content provenance
Conclusion
About the Authors
About CETaS
The Centre for Emerging Technology and Security (CETaS) is a research centre based at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The Centre’s mission is to inform UK security policy through evidence-based, interdisciplinary research on emerging technology issues. Connect with CETaS at cetas.turing.ac.uk.
This research was supported by The Alan Turing Institute’s Defence and National Security Grand Challenge. All views expressed in this report are those of the authors, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.
Acknowledgements
Theauthorsaregratefultoallthosewhotookpartinaworkshopforthisproject,withoutwhomtheresearchwouldnothavebeenpossible.Theauthorsarealsogratefulto:TonyAattheUK’sNationalCyberSecurityCentre;Anne-LouiseBrownattheAustralianCyber
SecurityCooperativeResearchCentre;DrJonathanBrightattheTuring’sPublicPolicy
Programme;researchersattheAustralianStrategicPolicyInstitute;andDanielJordan,
KevinXu,AliceCrillyandSamAbbottattheDepartmentforScience,Innovationand
Technologyforreviewinganearlierversionofthereport.ThefiguresinthisBriefingPaperweredesignedbyEmmaRowlandsandChrisRaggett.
ThisworkislicensedunderthetermsoftheCreativeCommonsAttributionLicence4.0,
whichpermitsunrestricteduseprovidedtheoriginalauthorsandsourcearecredited.Thelicenceisavailableat:
/licenses/by-nc-sa/4.0/legalcode.
Citethisworkas:SamStockwell,MeganHughes,PhilSwatton,AlbertZhang,JonathanHallandKieran,“AI-EnabledInfluenceOperations:SafeguardingFutureElections,”CETaS
ResearchReports(November2024).
Executive Summary
This CETaS Research Report examines hostile influence operations enabled or enhanced by artificial intelligence (AI), and methods to evaluate and counteract such activities during election cycles and beyond. It also includes evidence-based analysis of AI-enabled threats that emerged in the November 2024 US presidential election.
As 2024 draws to a close, more than 2 billion people in at least 50 countries will have voted in the biggest election year in history. At the start of the year, there were significant concerns over the proliferation of new generative AI models, which allow users to create increasingly realistic synthetic content. There has been persistent speculation about how these tools could disrupt key elections this year, many of which will have major consequences for international security.
There was a risk that a lack of empirical work on the impact of the threat would amplify public anxiety about it – which, in turn, could have undermined trust in electoral processes. Therefore, CETaS closely monitored key elections throughout the year, to understand if and how AI misuse affected these processes. As reflected in two Briefing Papers published in May and September 2024, CETaS consistently found no evidence that AI-enabled disinformation had measurably altered an election result in jurisdictions ranging from the UK and the European Union to Taiwan and India.
This final Research Report extends this global analysis to the US election and provides recommendations for protecting the integrity of future democratic processes from AI-enabled threats, with a focus on how UK institutions can counter such activities.
Key findings from the US election specifically are as follows:
• There is a lack of evidence that AI-enabled disinformation has had a measurable impact on the 2024 US presidential election results. However, this is primarily due to insufficient data on the impact of such disinformation on real-world voter behaviour. While social media metrics can provide insights into how users engage with this content, more empirical research is needed to understand how it influences large-scale voting intentions.
• Despite this, deceptive AI-generated content did shape US election discourse by amplifying other forms of disinformation and inflaming political debates. From fabricated celebrity endorsements to allegations against immigrants, viral AI-enabled content was even referenced by some political candidates and received widespread media coverage. Nevertheless, non-AI falsehoods continued to have a significant impact and could not be ignored. They included: misleading claims by political candidates; conspiracy theories promoted by fringe online groups; and other tools of content manipulation, such as traditional video- and image-editing software.
• AI-enabled disinformation in the US election was primarily endorsed or amplified by those with pre-existing beliefs aligned with its messages. Given the extreme political polarisation of US society, the content predominantly helped reinforce prior ideological affiliations among the electorate. This echoes previous CETaS findings that alignment between disinformation and an individual’s established political opinions is crucial in their decision to share the content.
Key findings for counteracting AI-enabled influence operations are as follows:
• Digital literacy, a strong public broadcasting ecosystem and low levels of political polarisation are all factors that can increase public resistance to engagement with disinformation. Such factors point to the importance of initiatives to foster a healthy information space at both the individual and societal levels.
• There is no one-size-fits-all framework to evaluate hostile influence operations targeting future election cycles or wider society. Instead, researchers should weigh the trade-offs between the different tools that are available and use the one most suited to the operation in question. In some cases, combining different frameworks will provide additional insights into these activities.
• Given the signs that AI-enabled threats began to damage the health of democratic systems globally this year, complacency must not creep into government decision-making. Ahead of upcoming local, regional and national elections – from Australia and Canada in 2025 to Scotland in 2026 – there is now a valuable opportunity to reflect on the evidence base and identify measures to protect voters.
• Therefore, this report recommends the following actions to protect elections and wider society from AI-enabled influence operations and other disinformation activities. These solutions have been informed by an extensive literature review and workshops with 47 cross-sector experts. They centre on the following four strategic objectives designed to help UK institutions target different aspects of the online disinformation process:
o Curtail generation – measures that increase barriers to, or deter actors from, creating online disinformation in the first place.
o Constrain dissemination – measures that reduce the effectiveness and virality of disinformation circulating on digital platforms.
o Counteract engagement – measures that target the ways that users consume disinformation on digital platforms, to reduce malicious influence.
o Empower society – measures that strengthen societal capabilities for exposing and undermining online disinformation.
CETaS UK election security recommendations
Curtail generation
1) Digital provenance strategy for UK organisations: The UK Department for Science, Innovation and Technology (DSIT) should establish an implementation strategy for automatically embedding provenance records in digital content produced by the UK Government and other sectors at its origin. This would strengthen the authenticity of credible information sources, and could draw on the US Office of Management and Budget’s requirement to issue similar guidance by June 2025.
2) Authenticity-by-design: The Internet Engineering Task Force’s Security Area should develop and implement authenticity-by-design principles across the internet ecosystem to protect information integrity, using structures such as the Starling Lab framework. The scheme should aim to embed tools into different parts of the internet infrastructure that automatically capture, store and verify digital provenance records securely (an illustrative sketch of such a provenance record appears at the end of this subsection).
3) Clarifying existing laws: The UK Ministry of Justice should conduct a review to understand weaknesses in existing legislation that may be exploited with malicious AI-generated content targeting political candidates or designed to undermine election integrity (including those related to defamation, privacy and electoral laws). This will help the Ministry understand whether existing laws are adequate to deter such activities or whether legislative reforms are required.
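To illustrate the kind of digital provenance record that recommendations 1 and 2 envisage, the sketch below builds and verifies a minimal signed manifest for a piece of content. It is a simplified, assumption-laden example: the manifest fields, the signing key and the use of an HMAC in place of the asymmetric signatures that real provenance schemes rely on are illustrative choices, not a description of any existing standard or of the Starling Lab framework.

```python
# Illustrative sketch only: a minimal provenance record for a piece of digital
# content. Real provenance schemes use asymmetric signatures and standardised
# manifests; here an HMAC over a JSON manifest stands in for the signing step
# so the example runs with the Python standard library alone.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-real-key-material"  # hypothetical key material


def build_provenance_record(content: bytes, producer: str) -> dict:
    """Attach a signed manifest describing where the content came from."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "producer": producer,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "edits": [],  # downstream tools would append edit entries here
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance_record(content: bytes, manifest: dict) -> bool:
    """Check the content matches the manifest and the manifest is untampered."""
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and manifest.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    record = build_provenance_record(b"official statement text", "uk-government-dept")
    print(verify_provenance_record(b"official statement text", record))  # True
    print(verify_provenance_record(b"altered statement text", record))   # False
```

In a deployed scheme, such a record would be generated automatically at the point of capture or publication and carried with the content as it moves across platforms.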
Constrain dissemination
4) Deepfake detection benchmarking and guidance: The UK AI Safety Institute and the Home Office should coordinate to develop standardised benchmarks and guidance for deepfake detection tools, providing minimum quality assurances for those using them. The benchmarks should be continuously updated against new deepfake examples to maintain relevance, while the guidance should encourage developers to publish a list of key details before release, including: the purpose and scope of the tool; how it should be used and interpreted; the explainability of its outputs; and its limitations (an illustrative benchmark-scoring sketch appears at the end of this subsection).
5) Code of conduct on disinformation: As part of its Phase Three roadmap for the Online Safety Act 2023, Ofcom should create a new Code of Conduct aimed at systematically targeting online disinformation. Drawing inspiration from the EU’s Code of Practice on Disinformation, the new code should set out self-regulatory standards for different sectors on demonetising disinformation content creators; define unpermitted manipulative behaviours associated with bot accounts; provide tools for empowering users against disinformation; and require transparent incident reporting.
6) Political party conduct: The Electoral Commission should expand existing guidance for UK political parties on both the appropriate use of AI tools and clear red lines on misuse. In turn, political parties should update their internal codes of conduct with this guidance to create accountability for candidates and campaigners.
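As a companion to recommendation 4, the sketch below shows one way a standardised benchmark might score a deepfake detection tool against a labelled evaluation set. The detector interface, the decision threshold and the toy data are assumptions made for illustration; a real benchmark would use large, regularly refreshed media sets and report a wider range of metrics.

```python
# Illustrative sketch only: scoring a deepfake-detection tool against a
# labelled benchmark set. The detector interface and example data are
# hypothetical; real benchmarks would be far larger and continuously refreshed
# with new deepfake examples, as the recommendation above suggests.
from typing import Callable, Sequence


def benchmark_detector(
    detect: Callable[[str], float],   # returns probability the item is a deepfake
    items: Sequence[str],             # paths or IDs of benchmark media
    labels: Sequence[int],            # 1 = deepfake, 0 = authentic
    threshold: float = 0.5,
) -> dict:
    tp = fp = tn = fn = 0
    for item, label in zip(items, labels):
        flagged = detect(item) >= threshold
        if flagged and label == 1:
            tp += 1
        elif flagged and label == 0:
            fp += 1
        elif not flagged and label == 1:
            fn += 1
        else:
            tn += 1
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }


if __name__ == "__main__":
    # Stand-in detector that "scores" items by filename, for demonstration only.
    fake_detector = lambda path: 0.9 if "synthetic" in path else 0.2
    items = ["synthetic_01.mp4", "synthetic_02.mp4", "real_01.mp4", "real_02.mp4"]
    labels = [1, 1, 0, 0]
    print(benchmark_detector(fake_detector, items, labels))
```

Publishing scores of this kind alongside the tool details listed in recommendation 4 would give users a clearer sense of when a detector’s verdict can be relied upon.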
Counteract engagement
7) Media validation app tools: Ofcom should convene major UK communications app providers and the International Fact-Checking Network to design accessible and transparent fact-checking apps for UK users. These could replicate other initiatives, such as Taiwan’s LINE app, which helps users verify content by providing trusted alternative news sources for cross-referencing.
8) Election Incident Protocol: The Cabinet Office should establish a UK Critical Election Incident Public Protocol based on the Canadian model. Involving a range of senior government experts, the protocol would inform the public of threats considered severe enough to undermine the integrity of elections. Any announcements made through the protocol should be based on a consensus and restricted to informing the public about the incident and how they can protect themselves.
9) Election advert imprints: The UK Government should table an amendment to Section 54 of the Elections Act 2022, which deals with imprints on digital campaign material during elections. This should introduce a new transparency provision legally requiring advert content that has been digitally edited to be embedded with secure provenance records detailing how it was edited and by whom.
10) Decentralised fact-checking: Social media platforms should invest greater resources in support of decentralised fact-checking initiatives, to help address the volume of disinformation circulated online. These initiatives should incorporate reputation and voting systems to provide quality control of, and a democratic consensus on, user-made notices (an illustrative sketch of such a scoring system appears at the end of this subsection).
11) Media reporting guidance: The Independent Press Standards Organisation should revise its existing guidance on ‘reporting major incidents’ to include key considerations for coverage of known hostile influence operations and viral disinformation content – drawing on insights from journalists and fact-checkers. Such information could include advice to refrain from linking to the original source content in online articles – thereby discouraging users from sharing it with others – and to frame the impact of the content in a way that does not exaggerate the threat of these activities to the wider public.
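To illustrate the reputation and voting systems mentioned in recommendation 10, the sketch below aggregates ratings on a user-written fact-checking note, weighting each vote by the rater’s reputation. The weighting scheme, publication threshold and minimum vote count are arbitrary illustrative assumptions rather than a description of any platform’s actual system.

```python
# Illustrative sketch only: aggregating votes on a user-written fact-check
# ("note") with reputation weighting. The weights and thresholds are arbitrary
# choices for illustration, not a description of any platform's real system.
from dataclasses import dataclass


@dataclass
class Vote:
    rater_reputation: float  # 0.0-1.0, earned from past ratings judged helpful
    helpful: bool            # did this rater find the note helpful?


def note_score(votes: list[Vote]) -> float:
    """Reputation-weighted share of 'helpful' ratings for a note."""
    total_weight = sum(v.rater_reputation for v in votes)
    if total_weight == 0:
        return 0.0
    helpful_weight = sum(v.rater_reputation for v in votes if v.helpful)
    return helpful_weight / total_weight


def should_publish(votes: list[Vote], threshold: float = 0.7, min_votes: int = 5) -> bool:
    """Publish a note only once enough raters, weighted by reputation, agree."""
    return len(votes) >= min_votes and note_score(votes) >= threshold


if __name__ == "__main__":
    votes = [Vote(0.9, True), Vote(0.8, True), Vote(0.4, False),
             Vote(0.7, True), Vote(0.6, True)]
    print(round(note_score(votes), 2), should_publish(votes))
```

Weighting by reputation is one way to provide the quality control the recommendation calls for, since it limits the influence of newly created or consistently unhelpful accounts on which notes are shown.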
Empower society
12) Regulator review: DSIT’s AI Central Risk Function should coordinate with both the Electoral Commission and Ofcom to analyse potential gaps in their respective regulatory powers and remits. The review should focus on the effectiveness of both regulators in tackling all forms of online disinformation during elections, in accordance with the Online Safety Act 2023, the Elections Act 2022 and the Representation of the People Act 1983.
13) Trusted researcher access: The UK Government should ensure that the Digital Information and Smart Data Bill and other relevant future legislation include a provision for establishing a trusted research group on disinformation. This would require social media platforms to provide trusted members of the UK academic, research and civil society communities with access to data on identified hostile influence operations – akin to X’s former data access model. To maintain impartiality, organisations and individuals should be selected by UK Research and Innovation’s trusted research and innovation programme.
14) Convening experts: Ofcom should prioritise establishing the Advisory Committee on Disinformation and Misinformation, as set out by section 152 of the Online Safety Act 2023, to maintain a long-term focus on tackling disinformation. The committee should have a clear mandate for informing Ofcom’s counter-disinformation activities, an independent chair not affiliated with any political party or tech platform, and diverse sectoral representation.
15) Digital literacy programmes: The Department for Education and DSIT should coordinate on establishing nationwide digital-literacy and critical-thinking programmes. Any schemes of this kind should be made mandatory in primary and secondary schools, while also being promoted to adults. Such initiatives would seek to improve societal resilience against disinformation and could include topics on: AI and algorithmic bias; deepfakes; evaluating information sources; understanding social media manipulation; and building a culture of content verification.
Introduction
Since CETaS published its Briefing Paper on the UK, EU and French elections in September 2024, most voting processes have concluded without being fundamentally reshaped or disrupted by AI. However, at the time of writing, the pivotal 2024 US presidential election had not taken place. Given the long timeframe of the campaign, its narrow poll margins and the differences between the two main candidates on Russia and China policy, many observers believed the election would be the ultimate test of AI-generated disinformation.[1]
Yet as previous CETaS research concluded, there is a need to inform such judgements with evidence-based research and find a balance between assessing the severity of the threat and avoiding fearmongering.[2] AI threat reporting in the contest has focused on unpicking individual viral cases instead of systematic analysis of strategic themes and trends across the election cycle. Only well-grounded research can accurately inform the public and avoid unnecessary speculation.
The ‘super year of elections’ may be drawing to a close but AI misuse could still emerge in federal elections in Australia and Canada in 2025, as well as in regional elections such as those in Scotland in 2026. There is a risk that efforts to tackle these threats will be deprioritised on the incorrect assumption that, with many national elections now finished, malicious actors will have little incentive for further political interference. But maintaining a healthy information environment is also crucial outside election periods, as evidenced in the UK context by the recent use of disinformation to intensify far-right riots and political extremism.[3]
Therefore, policy responses and other protective measures should not be narrowly focused on securing election cycles only as official campaigning takes place.[4] Instead, they should identify long-term interventions that embed resilience, draw on the capabilities of different sectors and empower citizens against disinformation. All such steps will help protect future elections – and wider society – against these threats.

[1] William Turton, “The US Election Threats Are Clear. What to Do About Them Is Anything But,” WIRED, 15 May 2024, /story/election-threats-senate-hearing-ai-disinformation-deepfakes/.
[2] Sam Stockwell et al., “AI-Enabled Influence Operations: The Threat to the UK General Election,” CETaS Briefing Papers (May 2024), 39, https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-threat-uk-general-election; Sam Stockwell, “AI-Enabled Influence Operations: Threat Analysis of UK and European Elections,” CETaS Briefing Papers (September 2024), 6.
[3] Institute for Strategic Dialogue, “From rumours to riots: How online misinformation fuelled violence in the aftermath of the Southport attack,” 31 July 2024, /digital_dispatches/from-rumours-to-riots-how-online-misinformation-fuelled-violence-in-the-aftermath-of-the-southport-attack/.
[4] Helen Margetts, “The AI election that wasn’t – yet,” UK Election Analysis, https://www.electionanalysis.uk/uk-election-analysis-2024/section-6-the-digital-campaign/the-al-election-that-wasnt-yet/.
Research methodology
This project seeks to answer the following research questions:
• RQ1: What factors make individual citizens and society either more vulnerable or resilient to engagement with disinformation, including AI-enabled content?
• RQ2: How has AI been maliciously deployed in the lead-up to the 2024 US presidential election?
• RQ3: Which existing evaluation frameworks gauge the impact of influence operations, and what are the barriers to effective measurement?
• RQ4: What initiatives can the UK implement to enhance election security and broader societal resilience against influence operations that incorporate novel AI tools?
Data collection for this study was conducted between June and November 2024, involving three core research activities:
1. Literature review covering journal articles, public reports and news articles on: AI misuse in the 2024 US election; public engagement with AI-enabled disinformation; challenges in evaluating influence operations; and countermeasures for improving election resilience.
2. Social media analysis of three different US deepfakes, to understand nodes of influence amplifying disinformation (see Section 2.2 for more details on the methodology used; an illustrative sketch of this kind of network analysis follows this list).
3. Two workshops designed to prioritise policy and technical recommendations identified by the project team. These sessions invited attendees to determine which solutions were most impactful and feasible in enhancing election resilience against AI threats; they involved 47 experts:
• 20 from industry.
• 12 from government and regulatory bodies.
• 10 from civil society.
• 5 from academia.
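For readers unfamiliar with this kind of analysis, the sketch below shows, in simplified form, how a reshare network around a deepfake post can be examined to surface candidate ‘nodes of influence’. The edge list is invented for illustration and the sketch assumes the open-source networkx package; it is not the pipeline used for the analysis in Section 2.2.

```python
# Illustrative sketch only: identifying influential amplifiers in a reshare
# network around a deepfake post. The edge list below is invented; the actual
# analysis in Section 2.2 was built from observed social media data.
import networkx as nx

# Directed edges: (amplifier, original_poster) means the amplifier reshared
# content posted by the original poster.
reshares = [
    ("user_a", "seed_account"), ("user_b", "seed_account"),
    ("user_c", "user_a"), ("user_d", "user_a"), ("user_e", "user_b"),
]

graph = nx.DiGraph()
graph.add_edges_from(reshares)

# Accounts whose content is reshared most widely (high in-degree centrality)
# are candidate "nodes of influence" amplifying the deepfake.
centrality = nx.in_degree_centrality(graph)
for account, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{account}: {score:.2f}")
```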
Report structure
The remainder of this report is structured as follows. Section 1 describes the factors that affect individual and societal engagement with disinformation content. Section 2 provides analysis of specific AI threats in the 2024 US presidential election cycle, as well as social media analysis of a selection of high-profile US deepfakes. Section 3 explores challenges and ways forward in evaluating the impact of these activities. Section 4 describes policy solutions that can help increase democracies’ resilience against malicious AI-enabled influence operations. Finally, Section 5 details corresponding technical solutions.
1. Public Vulnerability and Resilience against Deceptive Content
The ubiquity of social media has shifted responsibility for detecting falsehoods from professional journalists to everyday internet users. It is, therefore, important to understand how individuals interact with misinformation and disinformation to reduce public susceptibility to it – both during elections and beyond. CETaS defines misinformation as unintentionally misleading claims. In contrast, disinformation refers to deliberate falsehoods, including those shared as part of online influence operations that are intended to shape public opinion or behaviour. The analysis in this section focuses on the risk factors and protective factors that affect public vulnerability and resilience against both misinformation and disinformation.
1.1 Risk factors associated with vulnerability
To understand the impact of misinformation and disinformation on individuals, it is helpful to break down the different stages of the content lifecycle:
Figure 1. Online misinformation and disinformation lifecycle
Source: Authors’ analysis.
There is only sparse data on the known motivations of people who generate misinformation and disinformation.[5] However, some factors have been suggested based on historical cases. These include various foreign and domestic actors’ desire to influence election results, sow political division or undermine media integrity in a country, as well as hyper-partisan media outlets’ aim to distort facts to suit organisational agendas.[6]
An increasing body of evidence helps explain the reasons why individuals disseminate, engage with and react positively to this content. For example, individuals who consume misinformation and disinformation are more likely to have conspiratorial outlooks, distrust public institutions, experience stress and frustration, or lack critical-thinking and information-verification habits.[7] Individuals who exaggerate their knowledge of topics and score lower on tests of analytical thinking are also more likely to believe fake news stories.[8] When it comes to demographics, some studies show that older people are more likely to share misinformation or disinformation online when they view it, but younger people – particularly those under the age of eighteen – are more likely to believe misleading narratives.[9] Other studies have found that men are more likely than women to disseminate political disinformation.
Users who rely on social media (rather than traditional media) for news and political engagement will also be more likely to encounter misinformation and disinformation – and will, therefore, be more at risk of consuming it. Disinformation may be spread by groups with ideological agendas, such as climate-change deniers, or by those seeking to benefit
[5] Sophie Lecheler and Jana Laura Egelhofer, “Disinformation, Misinformation, and Fake News: Understanding the Supply Side” in Knowledge Resistance in High-Choice Information Environments, ed. Jesper Strömbäck et al. (Routledge: 2022), 73-80, /bitstream/handle/20.500.12657/54482/1/9781000599121.pdf.
[6] Ibid.
[7] Valentin Stoian-Iordache and Irena Chiru, “2. Aggravating Factors for the Dissemination of Disinformation: 2.1. Individual and group factors” in Handbook on Identifying and Countering Disinformation, ed. Christina Ivan et al. (DOMINOES Project: 2023), https://dominoes.ciberimaginario.es/21-individual-factors.html; Jonáš Syrovátka, Nikola Hořejš and Sarah Komasová, “Towards a model that measures the impact of disinformation on elections,” European View 22, no. 1 (2023), /10.1177/17816858231162677.
[8] Gordon Pennycook and David G. Rand, “Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking,” Journal of Personality 88, no. 2 (March 2019): 185-200, /10.1111/jopy.12476.
[9] Andrew Guess, Jonathan Nagler and Joshua Tucker, “Less than you think: Prevalence and predictors of fake news dissemination on Facebook,” Science Advances 5, no. 1 (January 2019), /doi/10.1126/sciadv.aau4586; Center for Countering Digital Hate, “Belief in conspiracy theories higher among teenagers than adults, as majority of Americans support social media reform, new polling finds,” 16 August 2023, /blog/belief-in-conspiracy-theories-higher-among-teenagers-than-adults-as-majority-of-americans-support-social-medi