Title Page
Abstract
Korean Abstract (국문요약)
Contents
1. Introduction 16
1.1. Gait Disorder and Assessment Tools 16
1.1.1. Gait and Gait Disorder 16
1.2. Gait Assessment Tools 17
1.2.1. Functional Ambulation Category Classification (FAC) 17
1.2.2. Berg Balance Scale (BBS) 17
1.2.3. 10M Walk Test (10MWT) 18
1.2.4. 6-Minute Walk Test (6MWT) 19
1.2.5. Timed Up-and-Go Test (TUG test) 20
2. Background 21
2.1. Traditional Timed Up-and-Go (TUG) test 21
2.2. Introduction of the Automatic TUG test 24
2.2.1. Motion Tracking Sensor-Based 24
2.2.2. Ambient Sensor-Based 26
2.2.3. Vision Sensor-Based 28
2.3. Background Knowledge 34
2.3.1. Recurrent Neural Network (RNN) 34
2.3.2. Long Short-Term Memory (LSTM) 35
2.3.3. Bi-directional LSTM (Bi-LSTM) 36
2.3.4. Temporal Convolutional Network (TCN) 36
2.3.5. Dynamic Time Warping (DTW) 37
2.4. Motivation and Problem statement 39
2.4.1. Motivation 39
2.4.2. Problem statement 40
2.5. Summary of Contributions 41
3. Proposed Automatic TUG test Method 42
3.1. Data Acquisition Systems and Participants 42
3.2. Proposed Method 46
3.2.1. Labeling Process 48
3.2.2. Preprocessing 50
3.2.3. Deep-learning-based TUG Subtask Segmentation Algorithm 52
3.2.4. Postprocessing 54
4. Experiments and Results 56
4.1. Metrics 56
4.2. Input Comparison Study 58
4.3. Optimization of Deep Learning Model 62
4.3.1. Model Optimization 62
4.3.2. Comparison with Bi-LSTM 64
4.3.3. Performance with Optimized Model 66
4.4. Comparison with Rule-based Method 70
4.4.1. TUG Event Detection 70
4.4.2. TUG Subtask Segmentation 73
4.5. Comparison with ANN-based Method 75
4.6. Cross Generalization Capability 76
5. Discussion and Conclusion 79
5.1. Discussion 79
5.2. Conclusion 81
6. Future work 82
References 83
Appendices 92
Appendix A. Programs 92
Appendix B. Related Work Table 96
List of Tables
Table 3.1. Dataset acquired in this study 44
Table 3.2. Labeling guidelines for six events. 49
Table 3.3. Summary of preprocessing parameters 51
Table 4.1. Comparison of TUG subtask segmentation accuracy for joints closest to the COG 60
Table 4.2. Optimal values of kernel and window sizes 62
Table 4.3. Summary for kernel and window size experiment 62
Table 4.4. Summary for hyperparameter tuning 63
Table 4.5. Summary for model parameter tuning 64
Table 4.6. Accuracy from ablation study of dilated TCN 64
Table 4.7. Subtask segmentation performance comparison between the proposed method (TCN) and Bi-LSTM. 65
Table 4.8. Hyperparameter summary table 66
Table 4.9. Performance for each group (ACC; MAE) 67
Table 4.10. Performance for TUG subtask segmentation 69
Table 4.11. Comparison of TUG subtask segmentation for older adults 73
Table 4.12. Result of the cross-dataset generalization test 78
List of Figures
Figure 1.1. Example of FAC scoring and score interpretation. 17
Figure 1.2. Example of Berg Balance Scale score sheet. 18
Figure 1.3. Representative diagram of the 10-meter walk test. 19
Figure 1.4. Schematic diagram of the 6-minute walk test. 19
Figure 1.5. Process of the traditional TUG test. A physical therapist measures the total time using a stopwatch. 20
Figure 2.1. Example of the optical motion capture system. Optical motion capture suit with infrared markers (left); 3D human pose inferred from each marker (right). 25
Figure 2.2. Example of an IMU-based system. Diagram of the 17 IMUs and their locations on the suit. 25
Figure 2.3. Example of the system configuration for ambient sensor-based TUG automation. 27
Figure 2.4. Walking data analysis from RGB video. 28
Figure 2.5. Procedure and video recording protocol of the TUG test with an RGB camera system. 29
Figure 2.6. Pose estimation example for an RGB camera using OpenPose [42]. 30
Figure 2.7. Azure Kinect. 31
Figure 2.8. Representative illustration of the rule-based method. 32
Figure 2.9. Architecture illustration of the recurrent neural network (RNN). 34
Figure 2.10. Architecture illustration of the long short-term memory (LSTM). 35
Figure 2.11. Illustration of the Bi-directional LSTM (Bi-LSTM). 36
Figure 2.12. Illustration of the temporal convolutional network (TCN). 37
Figure 3.1. System configuration for the study. The participants performed a 3 m TUG test. A cone was placed at 3 m straight from a standard chair, and Azure Kinect was installed perpendicular to the walking direction at the... 42
Figure 3.2. Examples of the various environments in which this study was conducted: (a) lecture room, (b) laboratory, (c) fitness room, and (d) rehabilitation center. 44
Figure 3.3. Overall flowchart of the proposed method. 47
Figure 3.4. Example of the labeling procedure. 48
Figure 3.5. Self-developed labeling tool. 49
Figure 3.6. Characteristics of the Butterworth LPF. 50
Figure 3.7. Preprocessing of a pelvis trajectory by low-pass filtering and normalization. 52
Figure 3.8. Architecture of dilated temporal convolutional network (TCN). 53
Figure 3.9. Sample postprocessing for correcting frame-level misclassification. 55
Figure 4.1. Skeleton joints tracked by the Azure Kinect and inputs used for comparison (red box). 59
Figure 4.2. Loss and accuracy plots for the model with pelvis input. 63
Figure 4.3. TUG event detection performance for each subject group 67
Figure 4.4. TUG subtask segmentation performance for each subject group 68
Figure 4.5. TUG events in Skeleton TUG and this study 71
Figure 4.6. MAE and STD in seconds between Skeleton TUG (gray bars) and the proposed method (blue bars) for each TUG event. Error bars show ±1 STD. 72
Figure 4.7. Comparison of MAE and STD for total TUG time and subtask segmentation 74
Figure 4.8. Comparison results with the ANN-based method. 75