Naver has given South Korean industries an occasion to revisit the importance of AI (Artificial Intelligence) ethics by introducing its AI Ethics Principles. Many have suggested that major corporations as well as startups need to keep discussing AI ethics issues with AI users and work out solutions to them.
Naver, together with SAPI (Seoul National University AI Policy Initiative), held a joint webinar on Wednesday to talk about AI ethics and introduced the “Naver AI Ethics Principles”.
The Ethics Principles are made up of five provisions: development of AI for people, respect for diversity, a balance between reasonable explanation and convenience, design of services for safety, and protection of privacy and information security. The principles apply not only to Naver’s developers but also to every member who uses AI for ideation, development, and marketing.
Song Dae-seop, head of Naver’s Policy Research Institute, said that the announcement of the Ethics Principles is just a start and that the company will continue to flesh out and improve the principles so that they can actually be implemented.
Naver defined AI as an everyday tool for people and stated that it will prioritize people-centered values throughout the development and use of AI. The company said that it understands AI cannot be perfect even though it can make people’s lives more convenient, and that it will continue to examine and improve its AI so that it can serve as such a tool.
Under the principle of “respect for diversity”, Naver emphasized that it will not discriminate unfairly and expressed its determination to prevent unfair discrimination that lacks reasonable grounds. It stated that AI is not technology for technology’s sake and that it will make the necessary efforts so that anyone can use AI conveniently and easily understand its various AI services.

<Naver and SAPI (Seoul National University AI Policy Initiative) held a joint webinar on Wednesday and introduced Naver AI Ethics Principles. Professor Ko Hak-soo of SNU is giving a welcoming speech. Staff Reporter Park, Jiho | jihopress@etnews.com>

The company also placed emphasis on safety and the protection of privacy. It has decided to design AI services so that AI never threatens a person’s life or body. It also presented its plan to protect people’s privacy throughout the development and use of AI and to keep the ethics principles from ending up as a mere slogan. It plans to set up an internal communication channel so that its employees can raise questions and hold discussions with one another during project and service development.
Discussions about AI ethics rose to the surface due to a controversy early this year over an AI chatbot called “Iruda.” The incident heightened the need for exemplary cases in which AI ethics become concrete processes within companies rather than abstract principles. Failed cases, not just successful ones, also need to be shared and studied. Startups that lack time and resources can struggle to establish internal AI ethics principles or processes.
Naver had been discussing AI ethics principles even before the “Iruda” incident. It has been working with SNU on AI ethics principles since 2018. Experts agree that a joint effort is needed at the industry level.
“The announcement of the Naver AI Ethics Principles is a first step. It is more important to discuss how we can materialize and implement these principles,” said Professor Lim Yong of the SNU School of Law. “Once leaders create AI ethics principles through trial and error, many startups will follow their example, creating a virtuous cycle.”
Lee Kyung-jeon, a professor at Kyung Hee University, emphasized that the fundamental solution to AI-related controversies is for industries to decide on AI ethics principles themselves and to materialize and implement them, rather than for the government to tighten regulations.
Professor Lee added that consistent education on AI ethics is needed so that developers and other workers in the AI industry follow AI ethics of their own accord.
Kim Jae-hwan, policy director at the Korea Internet Corporations Association, also emphasized that industries and AI users have to keep discussing ethics and technology and prepare various ways to improve AI rather than viewing AI negatively as a whole.
Staff Reporter Kim, Jiseon | river@etnews.com & Staff Reporter Kim, Myunghee | noprint@etnews.com