Title Page
Abstract
Contents
1. Introduction 13
1.1. Definitions of a Place 13
1.2. Place Recognition and Definition 15
1.2.1. Localization-based place recognition 15
1.2.2. Appearance-based place recognition 16
1.3. Visual Descriptor and Its Taxonomy 18
1.3.1. Single-modal visual descriptor 19
1.3.2. Multi-modal visual descriptor 20
1.4. Three-dimensional Space and Its Measurement 20
1.5. Semantic Place Datasets 22
1.5.1. Two-dimensional scene dataset 22
1.5.2. Three-dimensional indoor dataset 22
1.6. Contributions of Dissertation 24
1.7. Contents of Dissertation 25
2. Three-dimensional Place Datasets 27
2.1. Kyushu University Kinect Place Recognition Database 28
2.1.1. Acquisition procedure 30
2.1.2. Place categories in dataset 30
2.2. Kyushu University Indoor Semantic Place Dataset 34
2.2.1. Acquisition procedure 34
2.2.2. Place categories in dataset 36
2.3. SICK Fukuoka Outdoor Semantic Place Dataset 37
2.3.1. Acquisition procedure 37
2.3.2. Place categories in dataset 41
2.4. Fukuoka Outdoor Semantic Place Dataset 41
2.4.1. Acquisition procedure 42
2.4.2. Place categories in dataset 43
3. Co-occurrence of Local Binary Patterns 56
3.1. Local Pattern Descriptors 59
3.1.1. Local binary patterns 59
3.1.2. Local binary patterns referring to neighborhood pixels 60
3.2. Histogram of Local Descriptors 65
3.2.1. Histogram of local patterns 65
3.2.2. Uniform histogram of local binary patterns 66
3.3. Co-occurrence of Multi-modal Images 66
3.4. Classification using SVMs 68
3.5. Experimental Results 68
3.5.1. Evaluation process 69
3.5.2. Categorization using concatenation of LBP histograms 70
3.5.3. Categorization using Co-LBP 72
3.6. Discussions 76
4. Local N-ary Patterns: a Local Multi-modal Descriptor 78
4.1. Local N-ary Patterns Descriptor 79
4.1.1. Local n-ary patterns 79
4.2. Classification 88
4.2.1. Histogram of local descriptor 88
4.2.2. Classifier 89
4.3. Experimental Results 90
4.3.1. Performance analysis using Kyushu University Indoor Semantic Place Dataset 90
4.3.2. Performance analysis using SICK Fukuoka Outdoor Semantic Place Dataset 95
4.4. Discussions 96
5. Conclusions 98
Bibliography 102
Table 2.1. Names of the three-dimensional place datasets 28
Table 2.2. Kyushu University Kinect Place Recognition Database: the number of color and... 29
Table 2.3. Kyushu University Indoor Semantic Place Dataset: summary of indoor places with... 44
Table 2.4. SICK Fukuoka Outdoor Semantic Place Dataset: the number of depth, reflectance... 46
Table 2.5. Fukuoka Outdoor Semantic Place Dataset: the number of depth, reflectance and... 51
Table 3.1. Notations of local patterns 58
Table 3.2. Comparison results of correct classification ratio (CCR) for place catego-... 70
Table 3.3. Comparison results of correct classification ratio (CCR) for single and mul-... 71
Table 3.4. Comparison of correct classification ratio (CCR) for Co-LBP and NI-LBPu4 (SVD) 72
Table 3.5. Correct classification ratio (CCR) for various Co-LBPs 75
Table 3.6. Confusion matrix for the categorization of Co-NI-LBPu4 76
Table 3.7. Correct classification ratio (CCR) by combining NI-LBPu4 and Co-NI-LBPu4 76
Table 4.1. Notations of local n-ary patterns (LNP) 80
Table 4.2. Comparison of correct classification ratio (CCR) results between intuitive... 91
Table 4.3. Categorization for the original and reduced SVD dimension of local de-... 93
Table 4.4. Confusion matrix for the categorization of LBP images 94
Table 4.5. Confusion matrix for the categorization of LTP images 95
Table 4.6. Confusion matrix for the categorization of LQP images 95
Table 4.7. Correct classification ratio (CCR) of reflectance and depth images 96
Table 4.8. Comparison of correct classification ratio (CCR) for LTP 96
Figure 1.1. An example of (a) location and (b) place for Kyushu University, Ito Campus. 14
Figure 1.2. Localization-based place recognition (Topological localization): (left) Oc-... 16
Figure 1.3. Appearance-based place recognition (Scene segmentation): Examples of seman-... 17
Figure 1.4. Appearance-based place recognition (Place categorization using photometric... 17
Figure 1.5. Appearance-based place recognition (Place categorization using geometric ap-... 18
Figure 1.6. Two-dimensional scene dataset: SUN database contains 397 scene cate-... 23
Figure 1.7. Three-dimensional indoor place dataset: NYU dataset: Samples of the RGB... 23
Figure 1.8. Three-dimensional indoor place dataset: SUN database: Sample of densely... 24
Figure 2.1. The Microsoft Kinect for Xbox 360 sensor (RGB-D sensor) used in Kyushu University Kinect... 31
Figure 2.2. Examples of "Corridor" in Kyushu University Kinect Place Recognition... 32
Figure 2.3. Examples of "Kitchen" in Kyushu University Kinect Place Recognition Database:... 32
Figure 2.4. Examples of "Laboratory" in Kyushu University Kinect Place Recognition... 32
Figure 2.5. Examples of "Office" in Kyushu University Kinect Place Recognition Database:... 33
Figure 2.6. Examples of "Study room" in Kyushu University Kinect Place Recognition... 33
Figure 2.7. Examples of partial and panoramic images for range and reflectance scans: the... 34
Figure 2.8. An experimental setup of a SICK LMS511 laser scanner 35
Figure 2.9. Kyushu University Indoor Semantic Place Dataset: examples of panoramic depth... 38
Figure 2.10. Kyushu University Indoor Semantic Place Dataset: examples of panoramic... 39
Figure 2.11. Partial scans in a panoramic image with the gray area indicating the overlapping... 40
Figure 2.12. An experimental setup of a SICK LMS151 laser scanner 40
Figure 2.13. The principle of a time-of-flight laser scanner: distance value and reflectance value... 45
Figure 2.14. The principle of a depth image from the range (distance) value of a time-of-flight laser scanner 47
Figure 2.15. SICK Fukuoka Outdoor Semantic Place Dataset: examples of panoramic depth... 48
Figure 2.16. SICK Fukuoka Outdoor Semantic Place Dataset: examples of panoramic re-... 49
Figure 2.17. A configuration of the FARO Focus3D laser scanner 50
Figure 2.18. An experimental setup of a FARO laser scanner 50
Figure 2.19. Fukuoka Outdoor Semantic Place Dataset: examples of high-resolution depth... 52
Figure 2.20. Fukuoka Outdoor Semantic Place Dataset: examples of high-resolution re-... 53
Figure 2.21. Fukuoka Outdoor Semantic Place Dataset: examples of high-resolution color... 54
Figure 2.22. A map of Fukuoka Outdoor Semantic Place Dataset: 650 locations with six... 55
Figure 3.1. A process for obtaining the Co-LBP descriptor for the grayscale and depth images... 57
Figure 3.2. An example of local binary transformation of the center pixel using 8-... 60
Figure 3.3. Experimental results of laboratory image: original color, original depth, grayscale... 61
Figure 3.4. An example of pixels from local binary patterns referring to neighborhood pix-... 63
Figure 3.5. An example of the thresholding uniformity value for local binary patterns: the left... 67
Figure 3.6. Cumulative contribution ratio of SVD applied to Co-NI-LBPu4. 73
Figure 3.7. Comparison of correct classification ratio (CCR) for Co-NI-LBPu4 and... 74
Figure 3.8. Correct classification ratio (CCR) of various LBPs 75
Figure 4.1. An example of (a) sample pixels of local n-ary patterns (b) plane of local quater-... 82
Figure 4.2. Plane of Local Ternary Patterns: (a) intuitive LTP, (b) modal characteristic LTP 85
Figure 4.3. An example of intuitive local ternary patterns 86
Figure 4.4. Example images of raw scans, transformed LBPs and proposed multi-modal... 92
Figure 4.5. Correct classification ratio (CCR) for reduced SVD dimensionality of local... 94
Place information can provide crucial clues that enable a high level of communication between a human and a robot about the surrounding environment. A place, in this study, represents the semantic meaning of a space, or its position relative to another space, as distinct from a location. As an example, there are two different types of answers to the question "Where is Kyushu University Ito campus?": (1) specifying a location (e.g., 33.59°N latitude, 130.21°E longitude) and (2) naming the type of place (e.g., a university campus). The first type of answer contains no semantic information, while the second, naming the type of place, conveys additional semantic meaning about the location. In much previous research, the question "Where am I?" has been the most significant concern, answered by specifying an ever more precise and accurate location for navigation and localization. However, a location by itself provides neither the semantic meaning of a place nor the type of the current environment, and a navigation system equipped only with location data cannot predict what kind of event will occur there. On the other hand, a place is hard to recognize because no precise definition or definite boundary exists for each place; the semantic meaning of a spatial area is created by the accumulated experiences of a human. In recent computer vision and robotics research, many researchers have directed their attention to pattern recognition on visual information, i.e., to understanding the semantic meanings of places. The visual information of places can be classified into corresponding categories, so that the clustered visual patterns can be used to predict the category of a new place for which no information is available. For more efficient categorization of places, pattern recognition has proceeded stepwise from camera images, e.g., web images, to real-world three-dimensional (3D) information about the environment and the semantic meanings of places. In this dissertation, a novel multi-modal descriptor for place categorization is proposed, together with four 3D place datasets.
The dissertation consists of five chapters. Chapter 1 is the introduction, which describes the background and contributions of the research; the background covers the definition of a place, place recognition, the taxonomy of visual descriptors, and three-dimensional space and its measurement. Chapter 2, Three-dimensional Place Datasets, presents my 3D place datasets, which can be used to evaluate the performance of indoor/outdoor place classification: the Kyushu University Kinect Place Recognition Database, the Kyushu University Indoor Semantic Place Dataset, the SICK Fukuoka Outdoor Semantic Place Dataset, and the Fukuoka Outdoor Semantic Place Dataset. Chapter 3 introduces the Co-occurrence of Local Binary Patterns (Co-LBP) descriptor, which captures the co-occurrence of multi-modal LBP images. Chapter 4 proposes Local N-ary Patterns (LNP), a local multi-modal descriptor that generalizes LBP for place categorization. Chapter 5 presents the conclusions.
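To make the co-occurrence idea concrete, the following Python/NumPy sketch computes an 8-neighbor LBP code image per modality (grayscale and depth) and bins the paired per-pixel codes into a joint co-occurrence histogram. This is a minimal illustration, not the dissertation's exact formulation: the function names (lbp_image, co_lbp_histogram), the neighbor ordering, and the normalization are assumptions of this sketch, and the dissertation's Co-LBP additionally involves uniform-pattern variants and SVD dimensionality reduction.

import numpy as np

def lbp_image(img):
    # 8-neighbor LBP code (0..255) for each interior pixel of a 2-D array:
    # each neighbor contributes one bit, set if it is >= the center pixel.
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbor view
        code |= (n >= c).astype(np.uint8) << np.uint8(bit)
    return code

def co_lbp_histogram(gray, depth):
    # Joint (co-occurrence) histogram of the LBP codes observed at the same
    # pixel in the two modalities, flattened to a 256*256 feature vector.
    a = lbp_image(gray)
    b = lbp_image(depth)
    hist = np.zeros((256, 256), dtype=np.float64)
    np.add.at(hist, (a.ravel(), b.ravel()), 1.0)
    return (hist / hist.sum()).ravel()  # normalize across image sizes

# Usage with synthetic data; in practice the (possibly SVD-reduced)
# feature vectors would be fed to an SVM classifier, as in Chapter 3.
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(64, 64))
depth = rng.integers(0, 256, size=(64, 64))
feature = co_lbp_histogram(gray, depth)  # length 65536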