
Table of Contents

Title page
Contents
Executive Summary
Introduction
Key Components of AI Incidents
Synthesizing Key Components of AI Incidents
Types of Events
AI Incidents and Near Misses
Harm Dimensions
Type of Harm
Mechanism of Harm
Severity Factor
Technical Data
Context, Circumstances, and Stakeholders
Post-incident Data
Policy Recommendations
Publish AI Incident Reporting Formats
Establish an Independent Investigation Agency
Conclusion
Appendix
Decomposing AI Incidents
AI Harm Events as a Spectrum
Documenting AI Harm: The Many Impacts of AI
Significant but Limited Information
Incident Components Reported in Other Sectors
Shared Incident Components
Additional Key Components
Measuring Severity: A First Glimpse
Authors
Acknowledgments
Endnotes

Table 1. Key Components of AI Incidents
Table 2. Key Component: Type of Event
Table 3. Key Components: Harm Dimensions
Table 4. Key Component: Technical Information
Table 5. Key Components: Context and Circumstances
Table 6. Key Components: Post-Incident
Table A1. List of Examined AI Initiatives
Table A2. List of Reporting Systems from Other Sectors

Abstract

In our past publication, “An Argument for Hybrid AI Incident Reporting,” we proposed implementing a federated and comprehensive artificial intelligence incident reporting framework to systematically record, analyze, and respond to AI incidents. The hybrid reporting framework calls for mandatory, voluntary, and citizen reporting mechanisms. This document describes the critical content that should be included in a mandatory AI incident reporting regime; the same content should also inform voluntary and citizen reporting efforts.

In this publication, we define a set of standardized key components of AI incidents that can be used as a reporting template to collect vital AI incident data. These components include, but are not limited to, information about the type of AI incident, the nature and severity of harm, technical data, affected entities and individuals, and the context and circumstances within which the incident unfolded. While intentionally high level, our proposed set of components distills information from existing AI initiatives that track real-world events, harms, and risks related to AI, and incorporates lessons learned from incident reporting systems and practices in the transportation, healthcare, and cybersecurity sectors.
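To make the proposed reporting template concrete, the key components named above could be represented as a structured record. The following is a minimal illustrative sketch, not the report's official schema: all class and field names (e.g., `AIIncidentReport`, `HarmDimensions`, the severity scale) are assumptions chosen for this example.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class EventType(Enum):
    # The report distinguishes full incidents from near misses.
    INCIDENT = "incident"
    NEAR_MISS = "near_miss"

@dataclass
class HarmDimensions:
    # Nature and severity of harm; field names are hypothetical.
    harm_type: str   # e.g., "physical", "financial", "reputational"
    mechanism: str   # how the AI system caused or contributed to the harm
    severity: int    # illustrative ordinal scale, e.g., 1 (minor) to 5 (critical)

@dataclass
class AIIncidentReport:
    """Illustrative reporting record covering the key components
    listed in the abstract: type of event, harm dimensions,
    technical data, affected entities, context, and post-incident data."""
    event_type: EventType
    harm: HarmDimensions
    technical_data: dict            # e.g., model name, version, deployment details
    affected_entities: list[str]    # individuals or organizations impacted
    context: str                    # circumstances in which the incident unfolded
    post_incident_actions: Optional[str] = None  # filled in after investigation

# Example record for a hypothetical near miss in an automated screening system.
report = AIIncidentReport(
    event_type=EventType.NEAR_MISS,
    harm=HarmDimensions(harm_type="financial", mechanism="model error", severity=2),
    technical_data={"model": "credit-scoring-v3"},
    affected_entities=["loan applicants"],
    context="automated loan application screening",
)
```

A standardized record like this would support the consistent collection, tracking, and cross-incident analysis that the publication argues for.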

If adopted and used widely and consistently by governments, regulators, professional organizations, developers, and researchers, these reporting components can help enhance AI safety and security measures by:

- Facilitating consistent data collection of AI incidents

- Promoting tracking, monitoring, research, and information sharing of AI incidents

- Enhancing knowledge around AI-related harms and risks

- Ensuring that essential AI incident data is collected to prevent reporting gaps

- Building a foundational framework for agile incident reporting that adapts to AI advancements

To fully realize the benefits of this list of components, we recommend publishing mandatory AI incident reporting formats based on them and establishing an independent investigative agency to uncover incident data that may not be immediately discernible at the time of reporting. The list can also serve as a template for disclosure guidelines for incident data in voluntary and citizen AI incident reporting systems.