Assessment List for Trustworthy AI (ALTAI)

Table of Contents

Introduction
How to use this Assessment List for Trustworthy AI (ALTAI)
REQUIREMENT #1 Human Agency and Oversight
    Human Agency and Autonomy
    Human Oversight
REQUIREMENT #2 Technical Robustness and Safety
    Resilience to Attack and Security
    General Safety
    Accuracy
    Reliability, Fall-back plans and Reproducibility
REQUIREMENT #3 Privacy and Data Governance
    Privacy
    Data Governance
REQUIREMENT #4 Transparency
    Traceability
    Explainability
    Communication
REQUIREMENT #5 Diversity, Non-discrimination and Fairness
    Avoidance of Unfair Bias
    Accessibility and Universal Design
    Stakeholder Participation
REQUIREMENT #6 Societal and Environmental Well-being
    Environmental Well-being
    Impact on Work and Skills
    Impact on Society at large or Democracy
REQUIREMENT #7 Accountability
    Auditability
    Risk Management
Glossary
Additional useful material

This document was written by the High-Level Expert Group on AI (AI HLEG). It is the third deliverable of the AI HLEG and follows the publication of the group's deliverable, the Ethics Guidelines for Trustworthy AI, published on the 8th of April 2019. The members of the AI HLEG named in this document have contributed to the formulation of the content throughout the running of their mandate. The work was informed by
the piloting phase of the original assessment list contained in the Ethics Guidelines for Trustworthy AI, conducted by the European Commission from the 26th of June 2019 to the 1st of December 2019. They support the broad direction of the Assessment List for Trustworthy AI put forward in this document, although they do not necessarily agree with every single statement therein.

The High-Level Expert Group on AI is an independent expert group that was set up by the European Commission in June 2018.

Disclaimer

This Assessment List (ALTAI) is a self-assessment tool. The individual or collective members of the High-Level Expert Group on AI do not offer any guarantee as to the compliance of an AI system assessed by using ALTAI with the 7 requirements for Trustworthy AI. Under no circumstances are the individual or collective members of the High-Level Expert Group on AI liable for any direct, indirect, incidental, special or consequential damages or lost profits that result directly or indirectly from the use of or reliance on (the results of using) ALTAI.

Contact

Charlotte Stix - AI HLEG Coordinator
E-mail: CNECT-HLG-AI@ec.europa.eu
European Commission, B-1049 Brussels

Document made public on the 16th of July 2020.

Book: ISBN 978-92-76-20009-3, doi:10.2759/791819, KK-02-20-479-EN-C
PDF: ISBN 978-92-76-20008-6, doi:10.2759/002360, KK-02-20-479-EN-N

Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which
might be made of the following information. The contents of this publication are the sole responsibility of the High-Level Expert Group on Artificial Intelligence (AI HLEG). Although Commission staff facilitated the preparation thereof, the views expressed in this document reflect the opinion of the AI HLEG only and may not in any circumstances be regarded as reflecting an official position of the European Commission.

More information on the High-Level Expert Group on Artificial Intelligence is available online.1

© European Union, 2020

The reuse policy of European Commission documents is regulated by Decision 2011/833/EU (OJ L 330, 14.12.2011, p. 39). For any use or reproduction of photos or other material that is not under the EU copyright, permission must be sought directly from the copyright holders.

1 https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

Introduction

In 2019 the High-Level Expert Group on Artificial Intelligence (AI HLEG),2 set up by the European Commission, published the Ethics Guidelines for Trustworthy Artificial Intelligence.3 The third chapter of those Guidelines contained an Assessment List to help assess whether the AI system that is being developed, deployed, procured or used adheres to the seven requirements of Trustworthy Artificial Intelligence (AI), as specified in our Ethics Guidelines for Trustworthy AI:

- Human Agency and Oversight;
- Technical Robustness and Safety;
- Privacy
and Data Governance;
- Transparency;
- Diversity, Non-discrimination and Fairness;
- Societal and Environmental Well-being;
- Accountability.

This document contains the final Assessment List for Trustworthy AI (ALTAI) presented by the AI HLEG. This Assessment List for Trustworthy AI (ALTAI) is intended for self-evaluation purposes. It provides an initial approach for the evaluation of Trustworthy AI. It builds on the one outlined in the Ethics Guidelines for Trustworthy AI and was developed over a period of two years, from June 2018 to June 2020. In that period this Assessment List for Trustworthy AI (ALTAI) also benefited from a piloting phase (second half of 2019).4 Through that piloting phase, the AI HLEG received valuable feedback through fifty in-depth interviews with selected companies; input through an open work stream on the AI Alliance5 to provide best practices; and via two publicly accessible questionnaires for technical and non-technical stakeholders.6

This Assessment List (ALTAI) is firmly grounded in the protection of people's fundamental rights, which is the term used in the European Union to refer to human rights enshrined in the EU Treaties,7 the Charter of Fundamental Rights (the Charter),8 and international human rights law.9 Please consult the text box below on fundamental rights to familiarise yourself with the concept and with the content of a Fundamental Rights Impact Assessment.

2 https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
3 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
4 https://ec.europa.eu/futurium/en/ethics-guidelines-trustworthy-ai/register-piloting-process-0
5 https://ec.europa.eu/futurium/en/eu-ai-alliance/BestPractices
6 https://ec.europa.eu/futurium/register-piloting-process
7 https://europa.eu/european-union/law/treaties_en
8 https://ec.europa.eu/info/aid-development-cooperation-fundamental-rights/your-rights-eu/eu-charter-fundamental-rights_en
9 /en/sections/universal-declaration/foundation-international-human-rights-law/index.html

This Assessment List for Trustworthy AI (ALTAI) is intended for flexible use: organisations can draw on elements relevant to the particular AI system from this Assessment List for Trustworthy AI (ALTAI) or add elements to it as they see fit, taking into consideration the sector they operate in. It helps organisations understand what Trustworthy AI is, in particular what risks an AI system might generate, and how to minimise those risks while maximising the benefit of AI. It is intended to help organisations identify how proposed AI systems might generate risks, and to identify whether and what kind of active measures may need to be taken to avoid and minimise those risks. Organisations will derive the most value from this Assessment List (ALTAI) by active engagement with the questions it raises, which are aimed at encouraging thoughtful reflection to provoke appropriate action and nurture an organisational culture committed to developing and maintaining Trustworthy AI systems. It raises awareness of the potential impact of AI on society, the environment, consumers, workers and citizens (in particular children
and people belonging to marginalised groups). It encourages the involvement of all relevant stakeholders. It helps to gain insight on whether meaningful and appropriate solutions or processes to accomplish adherence to the seven requirements (as outlined above) are already in place or need to be put in place. This could be achieved through internal guidelines, governance processes, etc.

A trustworthy approach is key to enabling responsible competitiveness, by providing the foundation upon which all those using or affected by AI systems can trust that their design, development and use are lawful, ethical and robust.10 This Assessment List for Trustworthy AI (ALTAI) helps foster responsible and sustainable AI innovation in Europe. It seeks to make ethics a core pillar for developing a unique approach to AI, one that aims to benefit, empower and protect both individual human flourishing and the common good of society. We believe that this will enable Europe and European organisations to position themselves as global leaders in cutting-edge AI worthy of our individual and collective trust.

This document is the offline version of this Assessment List for Trustworthy AI (ALTAI). An online
interactive version of this Assessment List for Trustworthy AI (ALTAI) is available.11

How to use this Assessment List for Trustworthy AI (ALTAI)

This Assessment List for Trustworthy AI (ALTAI) is best completed involving a multidisciplinary team of people. These could be from within and/or outside your organisation with specific competences or expertise on each of the 7 requirements and related questions. Among the stakeholders you may find, for example, the following:

- AI designers and AI developers of the AI system;
- data scientists;
- procurement officers or specialists;
- front-end staff that will use or work with the AI system;
- legal/compliance officers;
- management.

If you do not know how to address a question and find no useful help on the AI Alliance page,12 it is advised to seek outside counsel or assistance. For each requirement, this Assessment List for Trustworthy AI (ALTAI) provides introductory guidance and relevant definitions in the Glossary. The online version of this Assessment List for Trustworthy AI (ALTAI) contains additional explanatory notes for many of the questions.13

10 The three components of Trustworthy AI, as defined in the Ethics Guidelines for Trustworthy AI.
11 https://futurium.ec.europa.eu/en/content/altai-assessment-list-trustworthy-artificial-intelligence
12 https://futurium.ec.europa.eu/en/content/altai-assessment-list-trustworthy-artificial-intelligence

Fundamental Rights

Fundamental rights encompass rights such as human
dignity and non-discrimination, as well as rights in relation to data protection and privacy, to name just some examples. Prior to self-assessing an AI system with this Assessment List, a fundamental rights impact assessment (FRIA) should be performed.

A FRIA could include questions such as the following, drawing on specific articles in the Charter and the European Convention on Human Rights (ECHR),14 its protocols, and the European Social Charter.15

- Does the AI system potentially negatively discriminate against people on the basis of any of the following grounds (non-exhaustively): sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation?
  - Have you put in place processes to test and monitor for potential negative discrimination (bias) during the development, deployment and use phases of the AI system?
  - Have you put in place processes to address and rectify for potential negative discrimination (bias) in the AI system?
- Does the AI system respect the rights of the child, for example with respect to child protection and taking the child's best interests into account?
  - Have you put in place processes to address and rectify for potential harm to children by the AI system?
  - Have you put in place processes to test and monitor for potential harm to children during the development, deployment and use phases of the AI system?

13 https://futurium.ec.europa.eu/en/content/altai-assessment-list-trustworthy-artificial-intelligence
14 /Documents/Convention_ENG.pdf
15 /en/web/european-social-charter

- Does the AI system protect personal data relating to individuals in line with GDPR?16
  - Have you put in place processes to assess in detail the need for a data protection impact assessment, including an assessment of the necessity and proportionality of the processing operations in relation to their purpose, with respect to the development, deployment and use phases of the AI system?
  - Have you put in place measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system?

See the section on Privacy and Data Governance in this Assessment List, and available guidance from the European Data Protection Supervisor.17

- Does the AI system respect the freedom of expression and information and/or freedom of assembly and association?
  - Have you put in place processes to test and monitor for potential infringement on
freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system?
  - Have you put in place processes to address and rectify for potential infringement on freedom of expression and information, and/or freedom of assembly and association, in the AI system?

16 https://gdpr.eu
17 https://edps.europa.eu/data-protection/notre-r%C3%B4le-en-tant-que-contr%C3%B4leur/data-protection-impact-assessment-dpia_en, and https://edps.europa.eu/sites/edp/files/publication/19-07-17_accountability_on_the_ground_part_ii_en.pdf

REQUIREMENT #1 Human Agency and Oversight

AI systems should support human agency and human decision-making, as prescribed by the principle of respect for human autonomy. This requires that AI systems should both: act as enablers for a democratic, flourishing and equitable society by supporting the user's agency; and uphold fundamental rights, which should be underpinned by human oversight. In this section AI systems are assessed in terms of their respect for human agency and autonomy as well as human oversight.

Glossary: AI System; Autonomous AI System; End User; Human-in-Command; Human-in-the-Loop; Human-on-the-Loop; Self-learning AI System; Subject; User.

Human Agency and Autonomy

This subsection deals with the effect AI systems can have on human behaviour in the broadest sense. It deals with the effect of AI systems that are aimed at guiding, influencing or supporting humans in decision-making processes, for example, algorithmic decision support systems, risk analysis/prediction systems (recommender systems, predictive policing, financial risk analysis, etc.). It also deals with the effect on human perception and expectation when confronted with AI systems that act like humans. Finally, it deals with the effect of AI systems on human affection, trust and (in)dependence.

- Is the AI system designed to interact, guide or take decisions by human end-users that affect humans18 or society?
- Could the AI system generate confusion for some or all end-users or subjects on whether a decision, content, advice or outcome is the result of an algorithmic decision?
  - Are end-users or other subjects adequately made aware that a decision, content, advice or outcome is the result of an algorithmic decision?
- Could the AI system generate confusion for some or all end-users or subjects on whether they are interacting with a human or AI system?
  - Are end-users or subjects informed that they are interacting with an AI system?
- Could the AI system affect human autonomy by generating over-reliance by end-users?
  - Did you put in place procedures to avoid that end-users over-rely on the AI system?
- Could the AI system affect human autonomy by interfering with the end-user's decision-making process in any other unintended and undesirable way?
  - Did you put in place any procedure to avoid that the AI system inadvertently affects human autonomy?
- Does the AI system simulate social interaction with or between end-users or subjects?
- Does the AI system risk creating human attachment, stimulating addictive behaviour, or manipulating user behaviour? Depending on which risks are possible or likely, please answer the questions below:
  - Did you take measures to deal with possible negative consequences for end-users or subjects in case they develop a disproportionate attachment to the AI system?
  - Did you take measures to minimise the risk of addiction?
  - Did you take measures to mitigate the risk of manipulation?

18 Henceforward referred to as 'subjects'. The definition of 'subjects' is available in the glossary.

Human Oversight

This subsection helps to self-assess necessary oversight measures through governance
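The questions above have a recurring shape: a top-level risk question that, when answered in the affirmative, unlocks mitigation follow-ups (the online interactive ALTAI tool behaves this way). Teams building an internal self-assessment workflow could model that branching structure as data. The following is a minimal illustrative sketch, not part of the ALTAI document itself; all class and function names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One self-assessment question; follow_ups apply only when the
    gating question is answered 'yes'."""
    text: str
    follow_ups: list["Question"] = field(default_factory=list)

# Two of the Human Agency and Autonomy questions above, encoded as data.
HUMAN_AGENCY = [
    Question(
        "Could the AI system generate confusion for some or all end-users "
        "or subjects on whether they are interacting with a human or AI system?",
        follow_ups=[Question(
            "Are end-users or subjects informed that they are "
            "interacting with an AI system?")],
    ),
    Question(
        "Could the AI system affect human autonomy by generating "
        "over-reliance by end-users?",
        follow_ups=[Question(
            "Did you put in place procedures to avoid that end-users "
            "over-rely on the AI system?")],
    ),
]

def applicable_questions(questions, answer) -> list[str]:
    """Flatten the checklist: every top-level question is asked, and
    follow-ups are included only when answer(question_text) is True."""
    asked = []
    for q in questions:
        asked.append(q.text)
        if answer(q.text):
            asked.extend(f.text for f in q.follow_ups)
    return asked

# Example: a respondent who flags only the over-reliance risk.
flagged = lambda text: "over-reliance" in text
print(len(applicable_questions(HUMAN_AGENCY, flagged)))  # prints 3
```

Keeping the questionnaire as data rather than hard-coded branches lets an organisation extend ALTAI with sector-specific questions, as the Introduction above invites, without touching the traversal logic.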

溫馨提示

  • 1. 本站所有資源如無(wú)特殊說(shuō)明,都需要本地電腦安裝OFFICE2007和PDF閱讀器。圖紙軟件為CAD,CAXA,PROE,UG,SolidWorks等.壓縮文件請(qǐng)下載最新的WinRAR軟件解壓。
  • 2. 本站的文檔不包含任何第三方提供的附件圖紙等,如果需要附件,請(qǐng)聯(lián)系上傳者。文件的所有權(quán)益歸上傳用戶所有。
  • 3. 本站RAR壓縮包中若帶圖紙,網(wǎng)頁(yè)內(nèi)容里面會(huì)有圖紙預(yù)覽,若沒(méi)有圖紙預(yù)覽就沒(méi)有圖紙。
  • 4. 未經(jīng)權(quán)益所有人同意不得將文件中的內(nèi)容挪作商業(yè)或盈利用途。
  • 5. 人人文庫(kù)網(wǎng)僅提供信息存儲(chǔ)空間,僅對(duì)用戶上傳內(nèi)容的表現(xiàn)方式做保護(hù)處理,對(duì)用戶上傳分享的文檔內(nèi)容本身不做任何修改或編輯,并不能對(duì)任何下載內(nèi)容負(fù)責(zé)。
  • 6. 下載文件中如有侵權(quán)或不適當(dāng)內(nèi)容,請(qǐng)與我們聯(lián)系,我們立即糾正。
  • 7. 本站不保證下載資源的準(zhǔn)確性、安全性和完整性, 同時(shí)也不承擔(dān)用戶因使用這些下載資源對(duì)自己和他人造成任何形式的傷害或損失。

評(píng)論

0/150

提交評(píng)論