Table of Contents

Preface
Acknowledgements
List of Most-Used Abbreviations

1. The Groundwork for an Ethics of Artificial Intelligence in Defence
   1. Introduction
   2. Artificial Intelligence and the Predictability Problem
      2.1. Human-Machine Teaming
      2.2. Machine Learning
      2.3. Data Curation
      2.4. Technical Debt
   3. The Methodology of Levels of Abstraction
   4. Ethical Problems of Using AI for Defence Purposes
      4.1. Sustainment and Support Uses of AI
      4.2. Adversarial and Non-kinetic Uses of AI
      4.3. Adversarial and Kinetic Uses of AI
   5. Conclusion

2. Ethical Principles for AI in Defence
   1. Introduction
   2. Ethical Principles for the Use of AI
      2.1. Responsible Uses of AI
      2.2. Equitable Uses of AI
      2.3. Traceability
      2.4. Reliable and Governable
   3. From Defence Principles to Practice
   4. Five Ethical Principles for AI in Defence
      4.1. Justified and Overridable Uses
      4.2. Just and Transparent Systems and Processes
      4.3. Human Moral Responsibility
      4.4. Meaningful Human Control
      4.5. Reliable AI Systems
   5. A Three-Step Methodology to Extract Guidelines from AI Ethics Principles in Defence
      5.1. Independent, Multistakeholder Ethics Board
      5.2. Abstraction
      5.3. Interpretation and Requirements Elicitation
      5.4. Balancing the Principles
   6. Conclusion

3. Sustainment and Support Uses of AI in Defence: The Case of AI-Augmented Intelligence Analysis
   1. Introduction
   2. Mapping Augmented Intelligence Analysis in Defence
   3. Ethical Challenges of Augmented Intelligence Analysis
      3.1. Intrusion
      3.2. Explainability and Accountability
      3.3. Bias
      3.4. Authoritarianism and Political Security
   4. Conclusion

4. Adversarial and Non-kinetic Uses of AI: Conceptual and Ethical Challenges
   1. Introduction
   2. The Weaponisation of AI in Cyberspace
      2.1. Recommendations
   3. AI for Adversarial and Non-kinetic Purposes: The Conceptual Shift
   4. Information Ethics
   5. Just Non-kinetic Cyberwarfare
   6. Conclusion

5. Adversarial and Non-kinetic Uses: The Case of Artificial Intelligence for Cyber Deterrence
   1. Introduction
   2. Deterrence Theory
   3. Attribution
   4. Deterrence Strategies: Defence and Retaliation
      4.1. Defence in Cyberspace
      4.2. Retaliation in Cyberspace
         4.2.1. Control and Risks of Cyber Deterrence by Retaliation
   5. Credible Signalling
   6. AI for Cyber Deterrence: A New Model
   7. Conclusion

6. Adversarial and Kinetic Uses of AI: The Definition of Autonomous Weapon Systems
   1. Introduction
   2. Definitions of Autonomous Weapon Systems
      2.1. Autonomy, Intervention, and Control
      2.2. Learning Capabilities
      2.3. Purpose of Deployment
   3. A Definition of AWS
      3.1. Autonomous, Self-Learning Weapons Systems
      3.2. Human Control
   4. Conclusion

7. Taking a Moral Gambit: Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems
   1. Introduction
   2. Moral Responsibility for AI Systems
   3. Collective and Faultless Distributed Moral Responsibility
   4. Moral Responsibility for AWS: The Collective Moral Responsibility Approach
      4.1. Moral Responsibility for AWS: Distributing Moral Responsibility along the Chain of Command
      4.2. Moral Responsibility for AWS: The Distributed Faultless Moral Responsibility Approach
   5. Meaningful Moral Responsibility and the Moral Gambit
   6. Discharging Meaningful Moral Responsibility for the Actions of Non-lethal AWS
   7. Conclusion

8. Just War Theory and the Permissibility of Autonomous Weapons Systems
   1. Introduction
   2. Jus ad bellum and AWS
   3. The Principle of Necessity
      3.1. The Principle of Necessity and AWS
   4. Distinction, Double Effect, and Due Care
      4.1. AWS, Distinction, and Due Care
   5. Conclusion

Epilogue
References
Index
Library Holdings

Registration No.: 0003194252
Call Number: 172.42 -A25-3
Reading Room: Seoul Main Branch, Humanities & Natural Sciences Room (Room 314)
Availability: Available
Publisher's Description
Mariarosaria Taddeo provides a systematic analysis of the ethical challenges that arise from the use of AI for national defence. Her work builds a framework for the identification, evaluation, and resolution of these challenges, with the goal of advancing relevant academic debate and informing the ethical governance of AI in defence.