
Abstract

Optical remote sensing image detection has wide-ranging applications in both military and civilian sectors. To address the false positives and missed detections caused by object size variations in optical remote sensing image analysis, a lightweight detection method based on an improved YOLOv5n is proposed. The method enables rapid analysis of remote sensing images, real-time detection, and target localization even on systems with limited computational resources. First, an adaptive spatial feature fusion mechanism is incorporated into the YOLOv5n feature fusion network to strengthen the fusion of features from objects at different scales. Second, the original YOLOv5n localization loss is replaced with the SIoU loss, which redefines the penalty term using the vector angle between the box regressions; this speeds up the convergence of model training and improves detection performance. To validate the proposed method, comparative experiments were conducted on optical remote sensing image datasets. The results show that the mean average precision of the improved network model increases from 81.6% to 84.9%, while its average detection speed and network complexity remain significantly better than those of the three other existing object detection algorithms compared.
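The SIoU loss mentioned in the abstract augments the IoU term with an angle-aware distance penalty and a shape penalty, following Gevorgyan (2022). Below is a minimal single-box sketch in Python; the function name, the `(cx, cy, w, h)` box format, and the `theta` default are illustrative assumptions, not the authors' implementation.

```python
import math

def siou_loss(pred, target, theta=4.0, eps=1e-7):
    """SIoU loss sketch for one predicted/ground-truth box pair,
    each given as (cx, cy, w, h)."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = target

    # IoU of the two boxes
    p_x1, p_y1, p_x2, p_y2 = px - pw / 2, py - ph / 2, px + pw / 2, py + ph / 2
    g_x1, g_y1, g_x2, g_y2 = gx - gw / 2, gy - gh / 2, gx + gw / 2, gy + gh / 2
    iw = max(0.0, min(p_x2, g_x2) - max(p_x1, g_x1))
    ih = max(0.0, min(p_y2, g_y2) - max(p_y1, g_y1))
    inter = iw * ih
    union = pw * ph + gw * gh - inter + eps
    iou = inter / union

    # Smallest enclosing box, used to normalise the distance cost
    cw = max(p_x2, g_x2) - min(p_x1, g_x1) + eps
    ch = max(p_y2, g_y2) - min(p_y1, g_y1) + eps

    # Angle cost: penalises centre offsets that stray from the axes
    s_cw, s_ch = gx - px, gy - py
    sigma = math.hypot(s_cw, s_ch) + eps
    sin_a = abs(s_ch) / sigma
    if sin_a > math.sqrt(2) / 2:       # use the complementary angle past 45 deg
        sin_a = abs(s_cw) / sigma
    angle = math.cos(2 * math.asin(min(sin_a, 1.0)) - math.pi / 2)

    # Distance cost, rescaled by the angle cost (gamma = 2 - angle)
    gamma = 2 - angle
    rho_x = (s_cw / cw) ** 2
    rho_y = (s_ch / ch) ** 2
    dist = (1 - math.exp(-gamma * rho_x)) + (1 - math.exp(-gamma * rho_y))

    # Shape cost: width/height mismatch, sharpened by theta
    omega_w = abs(pw - gw) / max(pw, gw)
    omega_h = abs(ph - gh) / max(ph, gh)
    shape = (1 - math.exp(-omega_w)) ** theta + (1 - math.exp(-omega_h)) ** theta

    return 1 - iou + (dist + shape) / 2
```

For identical boxes the loss is near zero, and it grows as the centres drift apart, which is the property the paper exploits to speed up box-regression convergence.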


Articles in this issue

Title / Authors / Pages
Decision-making for multi-view single object detection with graph convolutional networks Ren Wang, Tae Sung Kim, Tae-Ho Lee, Jin-Sung Kim, Hyuk-Jae Lee p. 207-213
A method for detecting lightweight optical remote sensing images using improved Yolov5n ChangMan Zou, Wang-Su Jeon, Sang-Yong Rhee, MingXing Cai p. 215-225
Inpainting GAN-based image blending with adaptive binary line mask Thanh Hien Truong, Tae-Ho Lee, Viduranga Munasinghe, Tae Sung Kim, Jin-Sung Kim, Hyuk-Jae Lee p. 227-236
Secure and lightweight authentication protocol in Internet of things Yanlong Yang, Mengzhu Lu, Xiaohan Niu p. 237-247
A robust online Korean teaching support technology based on TCNN Shunji Cui p. 249-257
Sustainable development of diversified teaching mode from ecological perspective : a case study on metaverse-based landscape oil painting course Zhi Li, Xiao Chen p. 259-270
Failure analysis of vital sign monitoring system in digital healthcare with FTA Faiza Sabir, Sarfaraz Ahmed, Gihwon Kwon p. 271-278
Ideological and political evaluation of English courses in heterogeneous campuses based on UAV network Mengmeng Liu p. 279-289
