The Artificial Intelligence Institute of Seoul National University (AIIS) opened its doors in 2019. With machine learning advancing to deep learning, machines can now rival humans on some learning tasks, but they still lack many of the abilities humans use to learn. The AIIS aims to develop artificial intelligence (AI) that reaches the level of humans, and it is approaching the task in a multidisciplinary manner by drawing on talented people and technologies from every field. It is looking for new ways of learning through the convergence of learning and inference, language and recognition, robotics and action, AI chips, vision and intelligence, data intelligence, neurocognition, and interaction between humans and AI.
“The goal of the AIIS is to teach artificial intelligence (AI) by imitating the cognitive development process of a child, so that AI can understand TV dramas and react like humans,” said Director Jang Byeong-tak of the AIIS.
Director Jang introduced the AIIS as a place where talented individuals in South Korea gather to study AI and conduct research on applying various fields of study to real life. To support these individuals, the institute is also looking to build more advanced AI platforms.
The institute’s main projects are the “Video Turing Test” and the “Baby Mind” project. Machines still cannot hear and understand long stretches of speech; AI speakers on the market commonly fail to follow utterances that last a minute or two and return wrong answers. This is because machines cannot understand the context of a story. The two projects were initiated to solve this problem.
◊AIIS looking to develop AI that understands dramas
The main purpose of the Video Turing Test is to develop an AI platform that can understand videos. The Turing test was first proposed by the English mathematician Alan Turing in 1950 and is also known as the “imitation game.” The Video Turing Test is an attempt to create an AI that combines computer vision to detect meaningful behavior across sequences of frames, language intelligence to use everyday language, auditory perception to separate meaningful sounds and dialogue, and cognitive ability to understand temporal flow and the relationships between events. The project is being carried out jointly with MIT, Stanford University, and the Technical University of Berlin.
The research is still at an early stage, but it is gradually producing meaningful results. For example, Professor Lee Kyeong-mu of SNU’s Department of Electrical and Computer Engineering has developed a technology that restores resolution in low-quality video, such as CCTV footage, by recognizing surrounding objects. The technology could help AI installed in autonomous vehicles recognize fast-moving objects or people instantaneously, and it is also likely to be used in the Video Turing Test.
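The general idea behind video super-resolution is to fuse several neighboring low-resolution frames so that surrounding context helps reconstruct a sharper image. The following is a minimal sketch of that idea only; the architecture and names are illustrative assumptions, not Professor Lee’s actual model.

```python
# Toy video super-resolution sketch (illustrative assumption, not the actual research model):
# several low-resolution frames are fused to upscale the clip by a factor of 2.
import torch
import torch.nn as nn

class TinyVideoSR(nn.Module):
    """Upscale a short low-resolution clip by 2x using information from all frames."""
    def __init__(self, num_frames: int = 5, channels: int = 3):
        super().__init__()
        # Fuse all frames stacked along the channel axis.
        self.fuse = nn.Conv2d(num_frames * channels, 64, kernel_size=3, padding=1)
        self.body = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # PixelShuffle converts extra channels into spatial resolution (2x upscale).
        self.upscale = nn.Sequential(
            nn.Conv2d(64, channels * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, num_frames, channels, height, width)
        b, t, c, h, w = clip.shape
        x = clip.reshape(b, t * c, h, w)  # stack frames along the channel axis
        return self.upscale(self.body(self.fuse(x)))

# Usage: a 5-frame, 64x64 low-resolution clip becomes a 128x128 estimate of the frame.
model = TinyVideoSR()
low_res_clip = torch.rand(1, 5, 3, 64, 64)
high_res_frame = model(low_res_clip)
print(high_res_frame.shape)  # torch.Size([1, 3, 128, 128])
```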
Professor Kim Geon-hee of SNU’s Department of Computer Science and Engineering has produced results in a field that combines computer vision with natural language understanding. In July he won a challenge in which his AI understood commands given in natural language and designed clothes. His goal was to combine vision and language so that AI can understand human language.
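One common way to combine vision and language is to map an image and a natural-language instruction into a shared embedding space and score how well they match. The sketch below illustrates that general pattern with a toy model; it is an assumption for explanation, not Professor Kim’s actual challenge entry.

```python
# Toy image-text matching sketch (illustrative assumption, not the prize-winning system).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageTextMatcher(nn.Module):
    def __init__(self, vocab_size: int = 1000, dim: int = 128):
        super().__init__()
        # Image branch: a small CNN followed by global pooling.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Text branch: token embeddings averaged into a single vector.
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        img_vec = self.cnn(image).flatten(1)      # (batch, dim)
        txt_vec = self.embed(tokens).mean(dim=1)  # (batch, dim)
        # Cosine similarity: higher means the image matches the instruction better.
        return F.cosine_similarity(img_vec, txt_vec)

matcher = ToyImageTextMatcher()
images = torch.rand(2, 3, 64, 64)         # two candidate garment images
tokens = torch.randint(0, 1000, (2, 6))   # a tokenized instruction such as "make the dress longer"
print(matcher(images, tokens))             # one similarity score per candidate image
```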
Such research will be used to develop AI that understands the flow of images and makes the necessary decisions, rather than merely recognizing the image of a single object.

<Photo: Director Jang Byeong-tak of the AIIS>

◊Breaking Moravec’s paradox
The Baby Mind project is based on applying the human learning process to AI learning. Babies between 0 and 24 months old learn through experience, using their eyes, ears, mouths, hands, feet, and developing brains. Machines cannot learn this way, as they rely only on sensors and calculations to interpret outside information. This observation is known as “Moravec’s paradox”: although AI can surpass the world’s best chess and Go players, it is extremely difficult to build a machine with the motor skills and sensory perception of a one-year-old. Director Jang notes that the sensory and motor abilities of humans and animals are the product of millions of years of evolution.
This is why the Baby Mind project was initiated. The project is led by Director Jang, who is also a professor at SNU’s Department of Computer Science and Engineering. He plans to use robots to expand their cognitive functions. He won the top prize at the international robotics competition RoboCup, where participants program their robots with cognitive functions to perform missions, and he also took first place at the World Robot Summit organized by the Japanese government. Director Jang believes that robots will learn cognitive functions much faster when learning is grounded in a body, just as a robot that can see and hear learns how things feel while picking up objects. For the summit, he developed a robot that quickly recognizes various objects and handles them, picking them up and placing them in a box. In other words, he developed a robot with a sense of touch.
Cho Kyu-jin, a professor at SNU’s Department of Mechanical Engineering, has developed an AI soft glove, “Exo-Glove Poly II,” that moves according to the wearer’s intention. When a person with a hand disability wears the glove together with camera-equipped glasses and looks at a cup, he or she can hold the cup without difficulty. When the AI system attached to the camera recognizes the cup in the image and decides that a grasping motion is needed, it commands the glove to grip the cup smoothly.
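The description above is essentially a perceive-decide-act loop: recognize the object, decide whether a grasp is needed, then command the glove. Here is a minimal sketch of such a loop under stated assumptions; all function and device names are hypothetical placeholders, not the Exo-Glove Poly II interface.

```python
# Sketch of a perceive-decide-act loop for a vision-driven assistive glove.
# detect_object, decide_grasp, and send_glove_command are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def detect_object(frame) -> Detection:
    """Placeholder for the camera-side vision model (assumption)."""
    return Detection(label="cup", confidence=0.93)

def decide_grasp(detection: Detection, threshold: float = 0.8) -> bool:
    """Decide whether a grasping motion is needed for the detected object."""
    graspable = {"cup", "bottle", "spoon"}
    return detection.label in graspable and detection.confidence >= threshold

def send_glove_command(command: str) -> None:
    """Placeholder for the actuator interface that tensions the soft glove."""
    print(f"glove <- {command}")

def control_step(frame) -> None:
    detection = detect_object(frame)
    if decide_grasp(detection):
        send_glove_command("close")  # grip the object smoothly
    else:
        send_glove_command("open")   # relax the glove

control_step(frame=None)  # with the placeholder detector, this prints "glove <- close"
```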
Director Jang explained that engineering alone cannot complete the Baby Mind project. Just as humans learn across many disciplines, the project can only be accomplished by combining fields such as engineering, medicine, biology, psychology, and music.
This is also why the AIIS is pursuing multidisciplinary research.
“The AIIS is approaching AI through a multidisciplinary approach rather than only through engineering,” said Director Jang. “We hope that we can contribute to the further development of AI and create new fields in other disciplines through AI.”
Staff Reporter Lee, Kyungmin | kmlee@etnews.com