Title page
Contents
Foreword 2
Acknowledgements 3
Abstract 6
Résumé 7
Background and objectives 8
Executive summary 9
Synthèse 11
1. Introduction 13
1.1. The need for trustworthy AI 13
1.2. What is trustworthy AI? 13
1.3. What is accountability in AI? 14
2. DEFINE: Scope, context, actors, and criteria 18
2.1. Scope 18
2.2. Context 18
2.3. Actors 19
2.4. Criteria 22
3. ASSESS: Identify and measure AI risks 23
3.1. Benefiting people and the planet 23
3.2. Human-centred values and fairness 24
3.3. Transparency and explainability 29
3.4. Robustness, security, and safety 30
3.5. Interactions and trade-offs between the values-based Principles 31
4. TREAT: Prevent, mitigate, or cease AI risks 33
4.1. Risks to people and the planet 33
4.2. Risks to human-centred values and fairness 34
4.3. Risks to transparency and explainability 37
4.4. Risks to robustness, security, and safety 38
4.5. Anticipating unknown risks and contingency plans 40
5. GOVERN: Monitor, document, communicate, consult and embed 41
5.1. Monitor, document, communicate and consult 41
5.2. Embed a culture of risk management 49
6. Next steps and discussion 50
Annex A. Presentations relevant to accountability in AI from the OECD.AI network of experts 51
Annex B. Participation in the OECD.AI Expert Group on Classification and Risk 53
Annex C. Participation in the OECD.AI Expert Group on Tools and Accountability 55
References 58
Figures
Figure 1.1. High-level AI risk-management interoperability framework 16
Figure 1.2. Sample uses of the high-level AI risk management interoperability framework 17
Figure 2.1. Actors in an AI accountability ecosystem 20
Figure 3.1. UK Information Commissioner's Office (ICO) qualitative rating for data protection 27
Figure 3.2. Mapping of algorithms by explainability and performance 32
Figure 5.1. Trade-off between information concealed and auditing detail by access level 45
Boxes
Box 1.1. What is AI? 13
Box 1.2. Trustworthy AI per the OECD AI Principles 14
Box 2.1. Mapping the lifecycle phases to the dimensions of an AI system 18
Box 3.1. Errors, biases, and noise: a technical note 25
Box 3.2. Human rights and AI 28
Box 3.3. Explainability vs interpretability 29
Annex Tables
Table A.1. OECD.AI expert presentations 51
Table B.1. Participation in the OECD.AI Expert Group on Classification & Risk (as of December 2022) 53
Table C.1. Participation in the OECD.AI Expert Group on Tools & Accountability (as of December 2022) 55