In this paper, I describe a methodology for verifying whether an LLM can accurately reflect the moral values of individual countries and produce appropriate results. Although this is a relatively difficult research topic, given the diversity of moral items and the subjectivity and cultural specificity of value judgments, I explain how to pre-train and optimize individual models on two basic datasets (PEW and the World Values Survey). The PEW dataset has the disadvantage of being somewhat dated, as it was collected in 2014; the World Values Survey mainly preserves interview data from South Korea collected in 2017-2018, but, as discussed in the main text, it raises a complexity problem in that it covers 19 moral items.
I provide a code script based on the existing study by Aida Ramezani and Yang Xu (2023) and indirectly demonstrate its effectiveness. However, as Abid et al. (2021) revealed, English pre-trained language models (EPLMs) built on English data have the disadvantage of exhibiting negative bias against certain ethnic groups or countries, such as Muslims. To eliminate such cultural and moral bias, I found that the dataset should be built by more actively reflecting data from the relevant country (such as its Wikipedia) during pre-training, so that moral values can be judged more objectively; such a method is proposed in the latter part of the main text.
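The probing step behind such a script can be sketched roughly as follows: a model's moral score for a topic in a country is derived from its relative preference for an "acceptable" versus "unacceptable" completion of a prompt. The prompt template, the stub log-probability table, and the function names are illustrative assumptions, not the released code of Ramezani and Xu (2023); a real probe would query a pre-trained language model instead of the stub.

```python
import math

# Stubbed log-probabilities, for illustration only; a real probe would
# score the completed prompt with a pre-trained LM (e.g. via the
# transformers library) rather than look values up in a table.
STUB_LOG_PROBS = {
    ("South Korea", "divorce", "acceptable"): -1.2,
    ("South Korea", "divorce", "unacceptable"): -2.5,
}

def log_prob(country, topic, judgment):
    """Stub: log-probability the model assigns to the prompt
    'In {country}, {topic} is morally {judgment}.'"""
    return STUB_LOG_PROBS[(country, topic, judgment)]

def moral_score(country, topic):
    """Score in (-1, 1): positive means the model leans toward
    'acceptable' for this topic, negative toward 'unacceptable'."""
    p_acc = math.exp(log_prob(country, topic, "acceptable"))
    p_unacc = math.exp(log_prob(country, topic, "unacceptable"))
    return (p_acc - p_unacc) / (p_acc + p_unacc)

score = moral_score("South Korea", "divorce")
```

Scores produced this way can then be compared country by country against survey ground truth such as PEW or World Values Survey ratings.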
In conclusion, there is a contradiction in having LLMs composed mainly of English data evaluate the cultural norms of various countries. To address this problem, this paper shows that a fine-tuned model extracts the cultural norms and moral values of a specific country better than a pre-trained English model does. On the other hand, multilingual pre-trained language models (mPLMs) built on multilingual data can be another alternative for extracting the cultural and moral norms of multiple countries (Arora et al., 2022). Notably, that study found that mPLMs pre-trained on 13 languages diverged from the answers of human evaluators, which is further evidence that some optimization is absolutely necessary.
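The pre-trained versus fine-tuned comparison described above amounts to correlating each model's moral scores with human survey ratings over a shared set of moral items. A minimal sketch, with all numbers invented for illustration rather than taken from the paper's results:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical human survey ratings for five moral topics,
# rescaled to [-1, 1] (e.g. from World Values Survey items).
survey = [0.8, -0.6, 0.1, -0.9, 0.4]

# Invented model scores for the same five topics.
pretrained = [0.2, 0.1, -0.3, -0.2, 0.0]   # English pre-trained model
fine_tuned = [0.7, -0.5, 0.2, -0.8, 0.3]   # model fine-tuned on Korean data

r_pre = pearson(survey, pretrained)
r_ft = pearson(survey, fine_tuned)
```

In this toy setup the fine-tuned scores track the survey closely while the pre-trained scores do not, which is the shape of the result the paper reports; the actual correlations depend on the real models and survey data.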
In addition, it is true that this series of studies has the limitation of not enabling comparison of the cultural diversity of moral norms across individual cultures. Currently, to secure the trustworthiness of LLMs, the academic community is moving toward evaluation along seven indicators: Reliability, Safety, Fairness, Resistance to misuse, Explainability and reasoning, Adherence to social norms, and Robustness, with 29 sub-indicators beneath them. The verification method for these seven indicators is left for further research.