Table of Contents

Title page 1

Contents 3

About this toolkit 4

Introduction 5

What are hidden risks? 8

Limitations of existing approaches to AI safety 11

A new approach: surfacing 'hidden' risks 13

The Mitigating 'Hidden' AI Risks Framework 15

1. Quality Assurance 18

2. Task-tool mismatch 20

3. Perception, Emotions and Signalling 22

4. Workflow and Organisational Challenges 25

5. Ethics 28

6. Human Connection and Technological Overreliance 30

Step-by-step: How to use the Mitigating Hidden AI Risks Framework 32

Step 1. Set up a multidisciplinary and diverse working group 33

Step 2. Surface potential hidden risks for your tool 35

Step 3. Review and prioritise risks 37

Step 4. Monitor and develop mitigation strategies for your risks 38

Step 5. Implement ongoing monitoring and review mechanisms 42

Tips for Teams 44

Scope and limits of this guide 46

Acknowledgements 48

References 50

Figures 12

Figure 1. Examples of challenges which show why 'human-in-the-loop' is not a fix-all solution 12

Figure 2. To prevent unintended consequences, we have to understand the mechanisms which create 'hidden' risks and can lead to negative outcomes 14

Figure 3. Six categories of 'hidden' risks arising from organisational AI roll-outs 16

Figure 4. Three principles to ensure that humans have the right conditions to be "in the loop" 39