과학기술정보통신부 (2021, 5, 13). [보도자료] 사람이 중심이 되는 인공지능을 위한 신뢰할 수 있는 인공지능 실현 전략(안). URL: https://www.msit.go.kr/bbs/view.do?sCode=user&mId=113&mPid=112&pageIndex=&bbsSeqNo=94&nttSeqNo=3180239&searchOpt=ALL&searchTxt=
김국배 (2022, 11, 3). ‘알고리즘 개편 고지했나’ 네이버-공정위 끝까지 날선공방…내년 1월 결판. <이데일리>. URL: https://www.edaily.co.kr/news/read?newsId=03434166632522112&mediaCodeNo=257&OutLnkChk=Y
김충령 (2015, 7, 4). 한글 ‘ㄱ’과 알파벳 ‘A’… 구글 검색 결과는 딴판. <조선일보>. URL: https://www.chosun.com/site/data/html_dir/2015/07/03/2015070304234.html |
오요한·홍성욱 (2018). 인공지능 알고리즘은 사람을 차별하는가? <과학기술연구>, 18권 3호, 153-215. |
유진상 (2022, 5, 24). 방통위 ‘포털 뉴스 신뢰성·투명성 제고를 위한 협의체’ 출범. <IT조선>. URL: https://it.chosun.com/site/data/html_dir/2022/05/24/2022052401575.html
이중원 (2019). 인공지능에게 책임을 부과할 수 있는가?: 책무성 중심의 인공지능 윤리 모색. <과학철학>, 22권 2호, 79-104.
이지은 (2020). “한국여성의 인권에 대해 알고 싶으면, 구글에서 ‘길거리’를 검색해 보라”: 알고리즘을 통해 ‘대중들’ 사이의 적대를 가시화하기. <미디어, 젠더 & 문화>, 35권 1호, 5-44. |
장슬기 (2022, 11, 22). 포털 검색 성적 대상화 이미지 결과 개선되긴 했지만... <미디어오늘>. URL: http://www.mediatoday.co.kr/news/articleView.html?idxno=307072
정소영 (2022). 유럽연합 인공지능법안의 거버넌스 분석: 유럽인공지능위원회와 회원국 감독기관의 역할과 기능을 중심으로. <연세법학>, 39권, 33-65. |
정철운 (2020, 10, 6). 네이버의 ‘검색알고리즘 조작’이 사실로 드러났다. <미디어오늘>. URL: http://www.mediatoday.co.kr/news/articleView.html?idxno=209643 |
정치하는엄마들 (2022, 6, 8). [보도자료] 정치하는엄마들 미디어감시팀 <포털사이트 검색 이미지를 바꾸자!> 캠페인 진행. URL: https://www.politicalmamas.kr/post/2342 |
정치하는엄마들 (2022, 9, 5). [보도자료] 정치하는엄마들 미디어감시팀 구글·네이버·다음 등 포털 사이트에 ‘문제 검색어 및 이미지 삭제 요청’. URL: https://www.politicalmamas.kr/post/2484 |
채새롬 (2021, 7, 21). “뉴스 추천 알고리즘 공정한가요”... 네이버 답변은. <연합뉴스>. URL: https://www.yna.co.kr/view/AKR20210721123700017 |
최창원 (2022, 9, 6). 택시 기사 반발에 ‘기밀’ 내놓은 카카오, ‘배차 알고리즘’ 문제 없었다. <블로터>. URL: https://www.bloter.net/newsView/blt202209060004
한예섭 (2022, 6, 9). ‘길거리’ 검색하면 ‘길거리 OO녀’, 성차별적 이미지 쏟아진다. <프레시안>. URL: http://www.pressian.com/pages/articles/2022060915584002743 |
홍남희 (2020). AI와 콘텐츠 규제: 자동화된 차단 기술의 문제들. 금희조·강혜원 (편), (295-325쪽). 커뮤니케이션북스. |
황용석·정재선·황현정·김형준 (2021). 알고리즘 추천 시스템의 공정성 확보를 위한 시론적 연구. <방송통신연구>, 116호, 169-206.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115. |
Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2021). Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 50(1), 3-44. |
Bernard, Z. (2017, December 20). The first bill to examine ‘algorithmic bias’ in government agencies has just passed in New York City. INSIDER. Retrieved from https://www.businessinsider.com/algorithmic-bias-accountability-bill-passes-in-new-york-city-2017-12
Bovens, M. (2007). Analysing and assessing accountability: A conceptual framework. European Law Journal, 13(4), 447-468.
Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially responsible AI algorithms: Issues, purposes, and challenges. Journal of Artificial Intelligence Research, 71, 1137-1181.
Coglianese, C., & Lehr, D. (2019). Transparency and algorithmic governance. Administrative Law Review, 71(1), 1-56. |
Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. arXiv. doi:10.48550/arXiv.1408.6491 |
Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398-415. |
Dieterich, W., Mendoza, C., & Brennan, T. (2016). COMPAS risk scales: Demonstrating accuracy equity and predictive parity. Northpointe. Retrieved from https://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf
Dougherty, C. (2015, July 1). Google Photos mistakenly labels black people ‘gorillas’. The New York Times. Retrieved from https://nyti.ms/2opE8CD
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. |
Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18-84. |
European Union. (2021). What is the EU AI Act? https://artificialintelligenceact.eu/ |
Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2020). Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333-3361. |
Forssbaeck, J., & Oxelheim, L. (2014). The multifaceted concept of transparency. In J. Forssbaeck & L. Oxelheim (Eds.), The Oxford handbook of economic and institutional transparency (pp. 3-30). Oxford University Press. |
Goldman, E. (2006). Search engine bias and the demise of search engine utopianism. Yale Journal of Law and Technology, 8, 188-200. |
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 1-15. |
Heikkilä, M. (2022, October 4). The White House just unveiled a new AI Bill of Rights. MIT Technology Review. Retrieved from https://www.technologyreview.com/2022/10/04/1060600/white-house-ai-bill-of-rights/ |
Internet Trend. (2022, December 15). Search Engine. Retrieved from http://www.internettrend.co.kr/trendForward.tsp |
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. |
Johnson, D. G. (2021). Algorithmic accountability. Social Philosophy and Policy, 38(2), 111-127. |
Kaminski, M. E. (2020). Understanding transparency in algorithmic accountability. In W. Barfield (Ed.), Cambridge handbook of the law of algorithms (pp. 121-138). Cambridge University Press. |
Khalid, A. (2022, February 3). Democratic lawmakers take another stab at AI bias legislation. Engadget. Retrieved from https://www.engadget.com/wyden-algorithmic-accountability-act-2022-205854772.html
King, G., Pan, J., & Roberts, M. E. (2013). How censorship in China allows government criticism but silences collective expression. American Political Science Review, 107(2), 326-343. |
Kroll, J., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633-705. |
Liu, H. W., Lin, C. F., & Chen, Y. J. (2019). Beyond State v Loomis: Artificial intelligence, government algorithmization and accountability. International Journal of Law and Information Technology, 27(2), 122-141. |
Loi, M., Ferrario, A., & Viganò, E. (2021). Transparency as design publicity: Explaining and justifying inscrutable algorithms. Ethics and Information Technology, 23(3), 253-263. |
Meijer, A. (2014). Transparency. In M. Bovens, R. Goodin, & T. Schillemans (Eds.), The Oxford handbook of public accountability (pp. 507-524). Oxford University Press. |
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501-507. |
Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21. |
Naver. (2021, November 29). NAVER-SAPI AI Report. Retrieved from https://www.navercorp.com/value/research/view/15
Pasquale, F. (2019). The second wave of algorithmic accountability. Retrieved from https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/ |
Popper, K. (2002). Conjectures and refutations: The growth of scientific knowledge. Routledge. |
Schot, J., & Rip, A. (1997). The past and future of constructive technology assessment. Technological Forecasting and Social Change, 54(2-3), 251-268. |
Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44-54. |
Tan, S., Caruana, R., Hooker, G., & Lou, Y. (2018, December). Distill-and-compare: Auditing black-box models using transparent model distillation. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 303-310). ACM.
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & Society, 37, 215-230.
Vedder, A., & Naudts, L. (2017). Accountability for the use of algorithms in a big data environment. International Review of Law, Computers & Technology, 31(2), 206-224. |
Wieringa, M. (2020, January). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 1-18). ACM.
Wyden, R. (2022). Algorithmic Accountability Act of 2022. https://www.wyden.senate.gov/imo/media/doc/2022-02-03%20Algorithmic%20Accountability%20Act%20of%202022%20One-pager.pdf