
Table of Contents


Preface
Acknowledgment
1 Core AI: Problem Solving and Automated Reasoning
1.1 Early Milestones
1.1.1 Limits of Number Crunching
1.1.2 AI Is Born
1.1.3 Early Strategy: Search Algorithms
1.1.4 Early Wisdom: Computers Need Knowledge
1.1.5 Programming Languages
1.1.6 Textbooks: Many Different Topics
1.1.7 Twenty-First Century Perspective
1.2 Problem Solving
1.2.1 Typical Problems
1.2.2 Classical Approaches to Search
1.2.3 Planning
1.2.4 Genetic Algorithm
1.2.5 Swarm Intelligence
1.2.6 Emergent Properties and Artificial Life
1.3 Automated Reasoning
1.3.1 Zebra
1.3.2 Can a Computer Solve Zebras?
1.3.3 Family Relations
1.3.4 Knowledge Representation
1.3.5 Automated Reasoning
1.3.6 Less Clear-Cut Concepts
1.3.7 Imperfect Knowledge
1.3.8 Uncertainty Processing
1.3.9 Expert Systems
1.4 Structure and Method
2 Blind Search
2.1 Motivation and Terminology
2.1.1 Simple Puzzle
2.1.2 Search Tree
2.1.3 Search Operators
2.1.4 Blind Search in Artificial Intelligence
2.1.5 Sliding Tiles
2.1.6 Missionaries and Cannibals
2.1.7 Programmer's Perspective
2.2 Depth-First and Breadth-First Search
2.2.1 Example Search Tree
2.2.2 Depth-First Search: Principle
2.2.3 Depth-First Search Algorithm
2.2.4 Numeric Example
2.2.5 Breadth-First Search: Principle
2.2.6 Breadth-First Search Algorithm
2.2.7 Numeric Example
2.3 Practical Considerations
2.3.1 Generic Model of Search
2.3.2 Exact Form of the Final State May Not Be Known
2.3.3 Unknown Form of the Final State: Examples
2.3.4 Testing a State for Being Final Can Be Expensive
2.3.5 Goal 1: What Does the Solution Look Like?
2.3.6 Goal 2: What Path Leads to the Solution?
2.3.7 Stopping Criteria
2.3.8 Examining Lseen Can Be Expensive
2.3.9 Searching Through an Ordered List
2.3.10 Hash Functions
2.4 Aspects of Search Performance
2.4.1 Measuring the Costs
2.4.2 Branching Factor
2.4.3 Depth of Search
2.4.4 Memory Costs of BFS
2.4.5 Memory Costs of DFS
2.4.6 Computational Costs of the Two Algorithms
2.4.7 Which of the Two Is Cheaper?
2.4.8 Searching for Bill's House
2.4.9 Domains with Multiple Final States
2.5 Iterative Deepening (and Broadening)
2.5.1 ID Algorithm
2.5.2 Why the Technique Works?
2.5.3 Numeric Example
2.5.4 What Contributes to Search Costs?
2.5.5 Is Iterative Deepening Wasteful?
2.5.6 Comparing ID with the Basic Approaches
2.5.7 Word of Caution
2.5.8 Alternative: Iterative Broadening
2.6 Practice Makes Perfect
2.7 Concluding Remarks
3 Heuristic Search and Annealing
3.1 Hill Climbing and Best-First Search
3.1.1 Evaluation Function
3.1.2 Numeric Example: Sliding Tiles
3.1.3 Sophisticated Evaluation Functions
3.1.4 Maximize or Minimize?
3.1.5 Hill Climbing
3.1.6 Best-First Search
3.1.7 Two Ways to Implement Best-First Search
3.1.8 Comparing the Two Techniques
3.1.9 Human Approach to Search
3.2 Practical Aspects of Evaluation Functions
3.2.1 Temporary Worsening of State Value
3.2.2 Many States Can Have the Same Value
3.2.3 Look-Ahead Evaluation Strategy
3.2.4 Beam Search
3.2.5 Role of N in Beam Search
3.2.6 Numeric Example
3.2.7 Expensive Evaluations
3.3 A-Star and IDA-Star
3.3.1 Motivation
3.3.2 Cost Function
3.3.3 A* Algorithm
3.3.4 Numeric Example
3.3.5 Two Versions of A*
3.3.6 More Sophisticated Cost Functions
3.3.7 "Leaps and Bounds"
3.3.8 IDA*
3.4 Simulated Annealing
3.4.1 Growing Defect-Free Crystals
3.4.2 Formal View
3.4.3 AI Perspective
3.4.4 Simple View of Simulated Annealing
3.4.5 Impact of State Values
3.4.6 Impact of Temperature
3.4.7 Cooling
3.4.8 Initial Temperature
3.5 Role of Background Knowledge
3.5.1 Magic Square Addressed by AI Search
3.5.2 Magic Square Addressed by a Mathematician
3.5.3 Lesson: The Benefits of Background Knowledge
3.5.4 Branching Factor in Sudoku
3.5.5 Zebra
3.6 Continuous Domains
3.6.1 Example of a Continuous Domain
3.6.2 Discretization
3.6.3 Gradient Ascent and Neural Networks
3.6.4 Swarm Intelligence
3.7 Practice Makes Perfect
3.8 Concluding Remarks
4 Adversary Search
4.1 Typical Problems
4.1.1 Example of a Simple Game
4.1.2 Other Games
4.1.3 More General View
4.1.4 Differences from Classical Search
4.2 Baseline Mini-Max
4.2.1 Maximizer and Minimizer
4.2.2 Game Tree
4.2.3 Parents Inherit from Children
4.2.4 Principle of the Mini-Max Approach
4.2.5 Numeric Example
4.2.6 Backed-up Value
4.3 Heuristic Mini-Max
4.3.1 Prohibitive Size of the Game Tree
4.3.2 Depth Has to Be Limited
4.3.3 Evaluation Function in Adversary Search
4.3.4 Where Do Evaluation Functions Come From?
4.3.5 Heuristic Mini-Max
4.3.6 What Affects Playing Strength
4.3.7 Flexible Depth of Evaluation
4.3.8 Evaluations Can Be Expensive
4.3.9 Success Stories
4.4 Alpha-Beta Pruning
4.4.1 Trivial Case
4.4.2 Superfluous Evaluation
4.4.3 Another Example
4.4.4 Towards the Pruning Algorithm
4.4.5 Alpha-Beta Pruning
4.4.6 Opposite Approach
4.5 Additional Game-Programming Techniques
4.5.1 Heuristics to Control Search Depth
4.5.2 Peek Beyond the Horizon
4.5.3 Opening Book
4.5.4 Look-up Tables of Endgames
4.5.5 Human Pattern-Recognition Skills
4.5.6 Human Way of "Pruning"
4.5.7 Pattern Recognition in Game Playing
4.6 Practice Makes Perfect
4.7 Concluding Remarks
5 Planning
5.1 Toy Blocks
5.1.1 Moving Blocks Around
5.1.2 Descriptors
5.1.3 Examples of State Descriptions
5.1.4 Comments
5.2 Available Actions
5.2.1 Actions in the Toy Domain
5.2.2 List of Preconditions
5.2.3 Add List
5.2.4 Delete List
5.2.5 Defining move(x, y, z)
5.2.6 Instantiation of the Generic Action
5.2.7 How Many Instantiations Exist?
5.2.8 Executing an Action
5.2.9 Example
5.3 Planning with STRIPS
5.3.1 Set of Goals
5.3.2 General Philosophy
5.3.3 Concrete Example
5.3.4 How to Identify the Action?
5.3.5 What Does the Penultimate State Look Like?
5.3.6 Pseudo-Code of STRIPS
5.4 Numeric Example
5.4.1 Which Actions Should Be Considered?
5.4.2 Checking the Lists
5.4.3 Word of Caution
5.4.4 Describe the Previous State
5.4.5 Iterative Procedure
5.5 Advanced Applications of AI Planning
5.5.1 Traveling Salesman Problem
5.5.2 Package Delivery and Packet Routing
5.5.3 Ambulance Routing
5.5.4 Knapsack Problem
5.5.5 Job-Shop Scheduling
5.5.6 Word of Caution
5.5.7 Important Comment
5.6 Practice Makes Perfect
5.7 Concluding Remarks
6 Genetic Algorithm
6.1 General Schema
6.1.1 Imperfect Copies, Survival of the Fittest
6.1.2 Individuals in GA Applications
6.1.3 Basic Loop
6.1.4 Population
6.1.5 Survival of the Fittest
6.1.6 How Many Generations?
6.1.7 Stopping Criteria
6.2 Imperfect Copies and Survival
6.2.1 Mating
6.2.2 Recombination
6.2.3 Mutation
6.2.4 Implementing the Survival Game
6.2.5 Exploiting the Survival Mechanism for Mating
6.2.6 Commenting on the Survival Game
6.2.7 Children Falling to Both Sides of Their Parents
6.2.8 Simple Tasks for GA
6.2.9 Exploration of the Parents' Neighborhood
6.2.10 Recombination versus Mutation
6.2.11 Why the Algorithm Works
6.3 Alternative GA Operators
6.3.1 Two-Point Crossover
6.3.2 Random Bit Exchange
6.3.3 Inversion
6.3.4 Programmer's Ways of Controlling the Process
6.4 Potential Problems
6.4.1 Degenerated Population
6.4.2 Harmless Degeneration versus Premature Degeneration
6.4.3 Recognizing a Degenerated State
6.4.4 Getting Out of the Degenerated State
6.4.5 Poorly Designed Fitness Functions
6.4.6 Fitness Functions Not Reflecting the GA's Goal
6.5 Advanced Variations
6.5.1 Numeric Chromosomes
6.5.2 Chromosomes in the Form of Tree Structures
6.5.3 Multiple Populations and Multiple Goals
6.5.4 Lamarckian Approach
6.6 GA and the Knapsack Problem
6.6.1 Knapsack's Rules (Revision)
6.6.2 Encoding the Problem by Binary Strings
6.6.3 Running the Program
6.6.4 Does the GA Find the Best Solution?
6.6.5 Observation: Implicit Parallelism
6.6.6 Encoding Knapsack Contents with a Numeric String
6.6.7 Mutation and Recombination in Numeric Strings
6.6.8 Summary
6.7 GA and the Prisoner's Dilemma
6.7.1 To Be Tough or to Squeal?
6.7.2 Practical Observations
6.7.3 Strategies for Repeated Events
6.7.4 Encoding the Strategy in a Chromosome
6.7.5 Early Rounds
6.7.6 Tournament
6.7.7 Performance
6.7.8 Summary
6.8 Practice Makes Perfect
6.9 Concluding Remarks
7 Artificial Life
7.1 Emergent Properties
7.1.1 From Atoms to Proteins
7.1.2 From Molecules to Society
7.1.3 From Letters to Poetry
7.1.4 A Road to Artificial Life
7.2 L-Systems
7.2.1 Original L-System's Rules
7.2.2 Another Example: Cantor Set
7.2.3 Lesson
7.3 Cellular Automata
7.3.1 Simple Example
7.3.2 Variations
7.3.3 Adding Another Dimension
7.4 Conway's Game of Life
7.4.1 Board and Its Cells
7.4.2 Rules
7.4.3 More Interesting Example
7.4.4 Typical Behaviors
7.4.5 Summary
7.5 Practice Makes Perfect
7.6 Concluding Remarks
8 Emergent Properties and Swarm Intelligence
8.1 Ant-Colony Optimization
8.1.1 Trivial Formulation
8.1.2 Ant's Choice
8.1.3 Pheromone Trail
8.1.4 Choosing the Path
8.1.5 Evaporation versus Additions
8.1.6 Programmer's Perspective
8.1.7 Probability of Choosing a Concrete Path
8.1.8 Path-Selecting Mechanism
8.1.9 Adding Pheromone
8.1.10 Pheromone Evaporation
8.1.11 Non-stationary Tasks
8.2 ACO Addressing the Traveling Salesman
8.2.1 Ants and Agents
8.2.2 ACO's View of the TSP
8.2.3 Initialization
8.2.4 Establishing the Probabilistic Decisions
8.2.5 Numeric Example
8.2.6 How Much Pheromone Is Deposited by a Single Ant?
8.2.7 Number of Ants Along Each Route
8.2.8 Pheromone Added to Each Edge
8.2.9 Updating the Values
8.2.10 Full-Fledged Probabilistic Formula
8.2.11 Outline of ACO's Handling of the TSP
8.2.12 Closing Comments
8.2.13 Main Limitation
8.3 Particle-Swarm Optimization
8.3.1 Particles or Birds?
8.3.2 Find the Maximum of a Multivariate Function
8.3.3 Terminology
8.3.4 Three Assumptions
8.3.5 Agent's Goal
8.3.6 Updating Velocity and Position: Simple Formula
8.3.7 Full-Scale Version of Velocity Updates
8.3.8 What Values for C1 and C2?
8.3.9 PSO's Overall Algorithm
8.3.10 Possible Complications
8.3.11 Dangers of Local Extremes
8.3.12 Multiple Flocks
8.4 Artificial-Bees Colony, ABC
8.4.1 Original Inspiration
8.4.2 What the Metaphor Offers to AI
8.4.3 Task
8.4.4 First Step
8.4.5 How to Select Promising Targets
8.4.6 Following Bees
8.4.7 Updating the Best Locations
8.4.8 Supporting Bees
8.4.9 Parameters
8.4.10 Algorithm
8.5 Practice Makes Perfect
8.6 Concluding Remarks
9 Elements of Automated Reasoning
9.1 Facts and Queries
9.1.1 List of Facts
9.1.2 Answering Users' Queries
9.1.3 Queries with Variables
9.1.4 More Than One Variable
9.1.5 Compound Queries
9.1.6 Exercise
9.1.7 Binding Variables to Concrete Values
9.1.8 How to Process Compound Queries
9.1.9 Ordering the Predicates
9.1.10 Query Answering and Search
9.1.11 Nested Arguments
9.2 Rules and Knowledge-Based Systems
9.2.1 Simple Rules
9.2.2 Longer Rules
9.2.3 Formal View of Rules
9.2.4 Closed-World Assumption
9.2.5 Knowledge-Based Systems
9.3 Simple Reasoning with Rules
9.3.1 Answering a Query
9.3.2 Beyond the Basics
9.3.3 Concepts Defined by More Than One Rule
9.3.4 Disjunctive Normal Form
9.3.5 Recursive Concept Definitions
9.3.6 Evaluating Recursive Concepts
9.3.7 Comments on Recursion
9.3.8 Summary
9.4 Practice Makes Perfect
9.5 Concluding Remarks
10 Logic and Reasoning, Simplified
10.1 Entailment, Inference, Theorem Proving
10.1.1 Entailment
10.1.2 Inference Procedure
10.1.3 Modus Ponens in Its Simplest Form
10.1.4 Example
10.1.5 Other Inference Mechanisms
10.1.6 Soundness of an Inference Procedure
10.1.7 Completeness of an Inference Procedure
10.1.8 Theorem Proving
10.1.9 Semidecidability
10.2 Reasoning with Modus Ponens
10.2.1 General Form of Modus Ponens
10.2.2 Horn Clauses
10.2.3 Truth and Falsity of Facts
10.2.4 Concrete Example
10.2.5 Practical Considerations
10.2.6 Inference in Horn-Clause Knowledge Bases
10.3 Reasoning Using the Resolution Principle
10.3.1 Normal Form
10.3.2 Resolution Principle
10.3.3 Theoretical Advantage
10.3.4 Concrete Example
10.3.5 Practical Considerations
10.3.6 Computational Costs
10.3.7 Backward Chaining
10.3.8 Concrete Example
10.3.9 Resolution as Search
10.4 Expressing Knowledge in Normal Form
10.4.1 Normal Form (Revision)
10.4.2 Conversion to Normal Form
10.4.3 Concrete Example
10.5 Practice Makes Perfect
10.6 Concluding Remarks
11 Logic and Reasoning Using Variables
11.1 Rules and Quantifiers
11.1.1 Objects and Functions
11.1.2 Relations
11.1.3 Constants and Variables
11.1.4 Order of Arguments
11.1.5 Atoms and Expressions
11.1.6 Logical Expressions in Automated Reasoning
11.1.7 Universal Quantifier
11.1.8 Existential Quantifier
11.1.9 Order of Quantifiers
11.1.10 Additional Examples
11.2 Removing Quantifiers
11.2.1 Removing Some Existential Quantifiers
11.2.2 Existentially Quantified Vectors
11.2.3 Frequently Overlooked Case
11.2.4 Skolemization
11.2.5 Removing the Remaining Existential Quantifiers
11.2.6 Consequence of the Disappeared E's
11.3 Binding, Unification, and Reasoning
11.3.1 Binding Variables
11.3.2 Binding List
11.3.3 Bindings of Nested Relations
11.3.4 Unification
11.3.5 Modus Ponens and Resolution Using Variables
11.4 Practical Inference Procedures
11.4.1 Concrete Example
11.4.2 Multiple Solutions
11.4.3 Number of Bindings
11.4.4 Starting from the Left
11.4.5 Accelerating the Reasoning Process
11.4.6 Look-Ahead Strategy
11.4.7 Back-Jumping
11.5 Practice Makes Perfect
11.6 Concluding Remarks
12 Alternative Ways of Representing Knowledge
12.1 Frames and Semantic Networks
12.1.1 Concrete Example of Frames
12.1.2 Inherited Values
12.1.3 Exceptions to Rules
12.1.4 Semantic Networks
12.2 Reasoning with Frame-Based Knowledge
12.2.1 Finding the Class of an Instance
12.2.2 Finding the Value of a Variable
12.2.3 Reasoning in Semantic Networks
12.2.4 Computational Costs of Reasoning in Frames
12.3 N-ary Relations in Frames and SNs
12.3.1 Binary Relations and Frames
12.3.2 Frame-Based Reasoning with Binary Relations
12.3.3 Translating Binary Relations to Rules
12.3.4 Rules to Facilitate Reasoning with Binary Relations
12.3.5 Difficulties Posed by N-ary Relations
12.4 Practice Makes Perfect
12.5 Concluding Remarks
13 Hurdles on the Road to Automated Reasoning
13.1 Tacit Assumptions
13.1.1 The Frame Problem
13.1.2 Tacit Assumptions
13.2 Non-Monotonicity
13.2.1 Monotonicity of Reasoning
13.2.2 Do Hens Fly?
13.2.3 Do They Not Fly?
13.2.4 Normal Circumstances
13.2.5 Abnormal Circumstances
13.2.6 Which Version to Prefer?
13.2.7 Theories, Assumptions, and Extensions
13.2.8 Multiple Extensions
13.2.9 Multi-Valued Logic
13.2.10 Frames and Semantic Networks
13.3 Mycin's Uncertainty Factors
13.3.1 Uncertainty Processing
13.3.2 Mycin's Certainty Factors
13.3.3 Truth of a Set of Facts and Rules
13.3.4 Certainty of a Negation
13.3.5 Numeric Example
13.3.6 Certainty Factors and Modus Ponens
13.3.7 Numeric Example
13.3.8 Combining Evidence
13.3.9 Intuitive Explanation
13.3.10 Numeric Example
13.3.11 Numeric Example
13.3.12 More Than Two Alternatives
13.3.13 Theoretical Foundations?
13.4 Practice Makes Perfect
13.5 Concluding Remarks
14 Probabilistic Reasoning
14.1 Theory of Probability (Revision)
14.1.1 Sources of Probabilistic Information
14.1.2 Unit Interval
14.1.3 Joint Probability
14.1.4 Numeric Example
14.1.5 Conditional Probability
14.1.6 More General Formula
14.1.7 Rare Events: m-estimate
14.1.8 Quantifying Confidence by m
14.1.9 Numeric Example
14.2 Probability and Reasoning
14.2.1 Examples from the Family-Relations Domain
14.2.2 Rules and Conditional Probabilities
14.2.3 Dependent and Independent Events
14.2.4 Bayes Formula
14.2.5 Bayes Formula and Probabilistic Reasoning
14.2.6 Choosing the Most Likely Hypothesis
14.3 Belief Networks
14.3.1 Belief Network
14.3.2 Numeric Example
14.3.3 Probability of a Concrete Situation
14.3.4 Probability of a Conclusion
14.3.5 Is B true?
14.4 Dealing with More Realistic Domains
14.4.1 Larger Belief Networks
14.4.2 Invisible Causes and Leak Nodes
14.4.3 Too Many Probabilities Are Needed
14.4.4 Naive Bayes
14.4.5 Is the Naive Bayes Assumption Harmful?
14.4.6 Probability of Negation (Reminder)
14.4.7 What Is the Probability of P(X | A1 ∨ A2 ∨ ... ∨ An)?
14.4.8 Probability of a Concrete Event
14.4.9 Numeric Example
14.4.10 Where Do the Probabilities Come From?
14.5 Dempster-Shafer Approach: Masses Instead of Probabilities
14.5.1 Motivation
14.5.2 Mass Instead of Probability
14.5.3 Frame of Discernment
14.5.4 Singletons and Composites
14.6 From Masses to Belief and Plausibility
14.6.1 Basic Belief Assignment
14.6.2 Elementary Properties of Any BBA
14.6.3 Belief in a Proposition
14.6.4 Plausibility of a Proposition
14.6.5 Uncertainty Is Quantified by the Two Values
14.6.6 Numeric Example
14.7 DST Rule of Evidence Combination
14.7.1 Multiple Sources of Mass Assignments
14.7.2 Level of Conflict
14.7.3 Rule of Combination
14.7.4 Numeric Example
14.7.5 More Than Two Sources
14.7.6 What the BBAs Typically Look Like
14.8 Practice Makes Perfect
14.9 Concluding Remarks
15 Fuzzy Sets
15.1 Fuzziness of Real-World Concepts
15.1.1 Crisp Concepts and Fuzzy Concepts
15.1.2 Paradox of Heap
15.1.3 Visual Example
15.1.4 Yet Another Example
15.2 Fuzzy Set Membership
15.2.1 Degree of Membership
15.2.2 Black Squares
15.2.3 Talented Student
15.2.4 Tall Person
15.2.5 Warm Room
15.2.6 Other Popular Shapes of the μA(x) Function
15.2.7 Sources of the Values of μA(x)
15.3 Fuzziness versus Other Paradigms
15.3.1 Probability of a Crisp Event
15.3.2 Extent of a Feature
15.3.3 Probability of a Fuzzy Value
15.3.4 Fuzzy Probabilities
15.4 Fuzzy Set Operations
15.4.1 Fuzzy Logic
15.4.2 Conjunction
15.4.3 Disjunction
15.4.4 Negation
15.4.5 Graphical Illustration
15.4.6 Numeric Examples
15.4.7 Complex Expressions
15.5 Counting Linguistic Variables
15.5.1 Examples of Linguistic Variables
15.5.2 Subjectivity of Linguistic Variables
15.5.3 Context Dependence
15.5.4 Counting Fuzzy Objects
15.5.5 Numeric Example
15.5.6 More Advanced Example
15.6 Fuzzy Reasoning
15.6.1 Fuzzy Rules
15.6.2 More Realistic Rules
15.6.3 Reasoning with Fuzzy Rules
15.6.4 Propagating Degrees of Membership
15.6.5 Fuzzy Control
15.7 Practice Makes Perfect
15.8 Concluding Remarks
16 Highs and Lows of Expert Systems
16.1 Early Pioneer: Mycin
16.1.1 Implementation
16.1.2 Intended Field of Application
16.1.3 Early Concerns
16.1.4 Early Hopes
16.2 Later Developments
16.2.1 Another Medical System
16.2.2 Prospector
16.2.3 Hundreds of Expert Systems
16.2.4 Dangers of Premature Excitement
16.2.5 Skepticism
16.2.6 Today's Situation
16.3 Some Experience
16.3.1 The 5-Minutes-to-5-Hours Rule
16.3.2 Bottleneck: The Knowledge Base
16.3.3 Communication Module
16.3.4 Graceful Degradation
16.4 Practice Makes Perfect
16.5 Concluding Remarks
17 Beyond Core AI
17.1 Computer Vision
17.1.1 Image and Its Pixels
17.1.2 Noise Removal
17.1.3 Edge Detection
17.1.4 Connecting the Edges
17.1.5 Texture
17.1.6 Color
17.1.7 Segmentation
17.1.8 Scene Interpretation
17.1.9 Modern Approach
17.2 Natural Language Processing
17.2.1 Signal Processing
17.2.2 Syntactic Analysis (Parsing)
17.2.3 Semantic Analysis
17.2.4 Disambiguation
17.2.5 Language Generation
17.2.6 Modern Approach: Machine Learning
17.3 Machine Learning
17.3.1 Knowledge Acquisition: The Bottleneck of AI
17.3.2 Learning from Examples
17.3.3 Rules and Decision Trees
17.3.4 Other Approaches
17.3.5 Prevailing Philosophy of Old Machine Learning
17.3.6 Machine Learning Today
17.4 Agent Technology
17.4.1 Why Agents?
17.4.2 Architecture
17.5 Concluding Remarks
18 Philosophical Musings
18.1 Turing Test
18.1.1 Turing's Basic Scenario
18.1.2 Additional Complications
18.1.3 Beating the Turing Test
18.2 Chinese Room and Other Reservations
18.2.1 Searle's Basic Scenario
18.2.2 Does the Person Understand Chinese?
18.2.3 Philosopher's View
18.2.4 What Chess-Playing Programs Have Taught Us?
18.2.5 Turing's Response to Theological Reservations
18.2.6 Weak AI versus Strong AI
18.3 Engineer's Perspective
18.3.1 Practicality
18.3.2 Should People Worry?
18.3.3 Augmenting Human Intelligence
18.3.4 Limitations of Existing AI
18.4 Concluding Remarks
Bibliography
Index

Availability

Availability table for Fundamentals of artificial intelligence : problem solving and automated reasoning, listing registration number, call number, volume information, reading room, and availability.
Registration No.  Call No.  Volume Info  Reading Room  Availability
0003024163  006.3 -A23-32  —  Seoul Main Library stacks (request item, then collect at 1st-floor circulation desk)  Available

Publisher's Book Description

Provided by Aladin

A hands-on introduction to the principles and practices of modern artificial intelligence

This comprehensive textbook focuses on the core techniques and processes underlying today's artificial intelligence, including algorithms, data structures, logic, automated reasoning, and problem solving. It also covers planning and expert systems.

Fundamentals of Artificial Intelligence: Problem Solving and Automated Reasoning is written in a concise format with a view to optimizing learning. Each chapter contains a brief historical overview, control questions to reinforce important concepts, plus computer assignments and ideas for independent thought. The book includes many visuals to illustrate the essential ideas and many examples to show how to use these ideas in practical implementations.

  • Presented in a concise format to optimize learning
  • Includes historical overviews, summaries, exercises, thought experiments, and computer assignments
  • Written by a recognized artificial intelligence expert and experienced author