
Abstract

This paper compares assessments of Korean-speaking children’s English pronunciation by the AI chatbot Pengtalk and by native English speakers. Sixty sixth-graders participated as assessees and four native English speakers as assessors. The students repeated 10 English sentences after Pengtalk, and their speech was recorded. Pengtalk assessed the students’ pronunciation on a three-point scale for overall accuracy and on a five-point scale for the accuracy of four phonological elements. The native English-speaking assessors listened to the students’ recordings and rated them using the rubric provided in Pengtalk. The assessment results were analyzed using t-tests and one-way ANOVA. The findings are as follows. First, Pengtalk tended to rate the students’ overall pronunciation accuracy and their accuracy in the four elements significantly lower than the native assessors did, across all proficiency levels. Second, Pengtalk and the native assessors differed in the accuracy ranking of the four elements. Third, variation in the accuracy ranking across students’ proficiency levels was greater in Pengtalk’s ratings than in the native assessors’. Fourth, Pengtalk’s automatic speech recognition sometimes failed to distinguish English fricatives from stops. Fifth, Pengtalk’s overall pronunciation assessment of a sentence was sometimes inconsistent with its assessment of the four elements. Based on these results, suggestions are made for developing and using AI chatbot programs from the perspective of English pronunciation teaching.