• Title/Summary/Keyword: Edutainment Robot

Search Results: 7

An Edutainment Mon-E Robot for Young Children (유아용 에듀테인먼트 Mon-E로봇)

  • Kim, Jong-Cheol; Kim, Hyun-Ho
    • The Journal of Korea Robotics Society, v.6 no.2, pp.147-155, 2011
  • This paper presents an edutainment robot for young children. The edutainment robot, called 'Mon-E', was developed by the Central R&D Laboratory at KT. The main services of the Mon-E robot are an autonomous moving service, an object-card and story-book telling service, and a videophone service. RFID technology was adopted to give young children an easy interface: an RFID reader is mounted in the face of the Mon-E robot, and RFID tags are attached to story books and object cards. When a book or an object card is held up to the face of the Mon-E, the robot reads the identification code and starts the corresponding service. During autonomous moving, if the Mon-E robot meets an obstacle, it moves back and turns left, turns right, or makes a half rotation. In the videophone service, when a young child holds an RFID card up to the Mon-E, the robot places a call to the specific number stored in the card. The developed Mon-E robot was tested in a real-world environment and evaluated by young children and their parents. The evaluation showed high satisfaction with the main services of the Mon-E robot.
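
As a rough illustration of the RFID-driven service dispatch the abstract describes, the sketch below maps a tag type read at the robot's face to one of the Mon-E services. The tag format, service names, and helper functions are hypothetical, not KT's implementation.

```python
# Hypothetical sketch of RFID-triggered service dispatch (illustrative names only).

SERVICE_TABLE = {
    "BOOK":  "storybook_telling",    # tag pasted on a story book
    "CARD":  "object_card_telling",  # tag pasted on an object card
    "PHONE": "videophone_call",      # tag containing a phone number
}

def dial(number):
    print(f"placing videophone call to {number}")

def play_content(service, content_id):
    print(f"starting {service} for content {content_id}")

def on_tag_read(tag):
    """Dispatch a service when a tag is held up to the robot's face."""
    service = SERVICE_TABLE.get(tag["type"])
    if service == "videophone_call":
        dial(tag["payload"])              # payload holds the stored number
    elif service is not None:
        play_content(service, tag.get("payload"))

# Example: a child holds up a story-book tag, then a phone-call card.
on_tag_read({"type": "BOOK", "payload": "book_42"})
on_tag_read({"type": "PHONE", "payload": "parent_number"})
```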

Development of Videophone-based Application Services for KT Mon-e(KT Edutainment Robot for Young Children) (KT 몽이(유아용 에듀테인먼트 로봇)의 영상전화 기반 응용 서비스 개발)

  • Park, Kui-Hong; Kim, Jong-Cheol; Ahn, Hee-June
    • The Journal of Korea Robotics Society, v.5 no.2, pp.93-101, 2010
  • This paper presents the system design and implementation of 'Mon-e', KT's edutainment robot for young children. We paid special attention to computer-illiterate young children and to providing a physical, friendly human interface for the robot. Specifically, the paper focuses on the video telephony and home monitoring services using the Mon-e robot. RFID-card-based calling makes it possible for computer-illiterate children to make a phone call to their parents. SIP- and DTMF-based remote control of the robot enables searching for and tracking the children. This experimental development shows the potential and value of converged telecommunication and robotics services.
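
The SIP/DTMF remote-control idea can be pictured as a small keypad-to-command mapping: a digit pressed on the caller's phone during the video call is translated into a robot motion. The mapping and command names below are illustrative assumptions, not the published Mon-e protocol.

```python
# Illustrative sketch of DTMF-based remote control during a videophone call.

DTMF_COMMANDS = {
    "2": "move_forward",
    "8": "move_backward",
    "4": "turn_left",
    "6": "turn_right",
    "5": "stop",
}

class Robot:
    def execute(self, command):
        print(f"executing {command}")

def handle_dtmf(digit, robot):
    """Translate a DTMF digit received over the SIP call into a motion command."""
    command = DTMF_COMMANDS.get(digit)
    if command:
        robot.execute(command)

if __name__ == "__main__":
    robot = Robot()
    for digit in "2654":   # parent presses keys on their phone to steer the robot
        handle_dtmf(digit, robot)
```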

Research of intelligent rhythm service of edutainment humanoid robot (에듀테인먼트 휴머노이드 로봇의 지능적인 율동 서비스 연구)

  • Yoon, Taebok; Na, Eunsuk
    • Journal of Korea Game Society, v.18 no.4, pp.75-82, 2018
  • With the development of information and communication technology, various methods have been tried to provide learners with an engaging educational environment through fun and interest. Using technologies such as games and robots in education is a good example of edutainment and game-based learning. In this study, we propose an intelligent rhythm education system that collects and analyzes user data to generate rhythms for a humanoid robot. The user selects music and inputs rhythm information for the selected music; the user's robot-usage data is then collected and analyzed to extract patterns. Patterns are based on frequency, and an FFT-based similarity comparison is applied when past data is insufficient. The proposed method is validated through experiments with kindergarten children.
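
A minimal sketch of the FFT-based similarity comparison mentioned in the abstract, assuming rhythm inputs are represented as fixed-length onset sequences; the function and pattern names are illustrative, not the authors' code.

```python
import numpy as np

def fft_similarity(rhythm_a, rhythm_b):
    """Compare two rhythm sequences via cosine similarity of their FFT magnitudes."""
    spec_a = np.abs(np.fft.rfft(rhythm_a))
    spec_b = np.abs(np.fft.rfft(rhythm_b))
    return float(np.dot(spec_a, spec_b) /
                 (np.linalg.norm(spec_a) * np.linalg.norm(spec_b) + 1e-9))

# Example: a user's tapped rhythm compared against two stored patterns.
user_rhythm = np.array([1, 0, 1, 0, 1, 1, 0, 0], dtype=float)
stored = {"pattern_1": np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float),
          "pattern_2": np.array([0, 1, 1, 1, 0, 0, 1, 1], dtype=float)}
best = max(stored, key=lambda k: fft_similarity(user_rhythm, stored[k]))
print("closest stored pattern:", best)
```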

Sound-based Emotion Estimation and Growing HRI System for an Edutainment Robot (에듀테인먼트 로봇을 위한 소리기반 사용자 감성추정과 성장형 감성 HRI시스템)

  • Kim, Jong-Cheol; Park, Kui-Hong
    • The Journal of Korea Robotics Society, v.5 no.1, pp.7-13, 2010
  • This paper presents a sound-based emotion estimation method and a growing HRI (human-robot interaction) system for the Mon-E robot. The emotion estimation method uses musical elements based on the rules of harmony and counterpoint. Emotion is estimated from sound using information about musical elements, including chord, tempo, volume, harmonics, and compass. The estimated emotions cover a standard set of 12 emotions: Ekman's 6 emotions (anger, disgust, fear, happiness, sadness, surprise) and their 6 opposites (calmness, love, confidence, unhappiness, gladness, comfortableness). The growing HRI system analyzes sensing information, the estimated emotion, and the service log of the edutainment robot, and then commands the robot's behavior. It consists of an emotion client and an emotion server. The emotion client estimates the emotion from sound; it transmits the estimated emotion and sensing information to the emotion server and delivers the server's response to the main program of the robot. The emotion server updates the HRI rule table using the information transmitted from the emotion client and returns the HRI response to the emotion client. The proposed system was applied to a Mon-E robot and can supply a friendly HRI service to users.
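
One way to picture the emotion client's rule-based mapping is a small lookup from coarse musical features (chord quality, tempo, volume) to one of the twelve emotions listed above. The thresholds and rules below are illustrative assumptions, not the paper's actual rule table.

```python
# Illustrative rule-based mapping from coarse musical features to an emotion label.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise",
            "calmness", "love", "confidence", "unhappiness", "gladness",
            "comfortableness"]

def estimate_emotion(features):
    """Pick an emotion from chord quality, tempo (BPM), and volume (0..1)."""
    chord, tempo, volume = features["chord"], features["tempo"], features["volume"]
    if chord == "major" and tempo > 120:
        return "gladness" if volume < 0.7 else "happiness"
    if chord == "minor" and tempo < 80:
        return "sadness" if volume < 0.5 else "unhappiness"
    if volume > 0.9:
        return "surprise"
    return "calmness"

print(estimate_emotion({"chord": "major", "tempo": 140, "volume": 0.6}))
```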

An Educational Robot Game Framework for Programming Learning in K-12 (프로그래밍 학습을 위한 교육용 로봇 게임 프레임워크)

  • Kwon, Dai-Young; Shim, Jae-Kwoun; Hur, Kyoung; Lee, Won-Gyu
    • The Journal of Korean Institute for Practical Engineering Education, v.2 no.1, pp.89-94, 2010
  • This paper proposes an educational robot game framework through which novice K-12 students can learn programming concepts via engaging experiences. It is designed so that students can enjoy robot games without technical knowledge of robotics or programming. In the proposed framework, educational robots based on a line tracer are used, and programming APIs that can be used with various educational programming languages are offered. The framework also offers a game board for creating several games with easy operations. Experiments show that novice students are able to use this framework to create different games that admit several solutions written as different programs.
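
The kind of beginner-facing API the framework describes might look like the sketch below, where a student drives a line-tracer robot across a game board with a few calls. The class and method names are assumptions for illustration, not the framework's published API.

```python
# Hypothetical student-facing API for a line-tracer-based robot game.

class LineTracerRobot:
    """Minimal robot model a student could program without robotics knowledge."""
    def __init__(self):
        self.position = 0

    def follow_line(self, cells):
        """Follow the line for a number of board cells."""
        self.position += cells
        print(f"followed the line for {cells} cells (now at cell {self.position})")

    def turn(self, direction):
        print(f"turned {direction}")

# A student's "program" for one game board: reach the goal square.
robot = LineTracerRobot()
robot.follow_line(3)
robot.turn("left")
robot.follow_line(2)
```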


Different Look, Different Feel: Social Robot Design Evaluation Model Based on ABOT Attributes and Consumer Emotions (각인각색, 각봇각색: ABOT 속성과 소비자 감성 기반 소셜로봇 디자인평가 모형 개발)

  • Ha, Sangjip; Lee, Junsik; Yoo, In-Jin; Park, Do-Hyung
    • Journal of Intelligence and Information Systems, v.27 no.2, pp.55-78, 2021
  • To solve complex and diverse social problems and ensure the quality of life of individuals, social robots that can interact with humans are attracting attention. In the past, robots were recognized as beings that provide labor, put into industrial sites on behalf of humans. Today, with the advent of smart technology, considered an important driver in most industries, the concept of the robot has been extended to social robots that coexist with humans and enable social interaction. Specifically, there are service robots that respond to customers, robots built for edutainment, and emotional robots that can interact with humans intimately. However, robots have not yet become widespread despite the modern ICT service environment and the 4th industrial revolution. Considering social interaction with users, which is an important function of social robots, not only the technology of the robots but also other factors should be considered. The design elements of the robot are more important than other factors in making consumers actually purchase a social robot. In fact, existing studies on social robots are at the level of proposing "robot development methodologies" or testing, piecemeal, the effects social robots provide to users. On the other hand, the consumer emotions evoked by a robot's appearance have an important influence on how users form perceptions, reasoning, evaluations, and expectations, and can further affect attitudes toward robots, liking, and inferred performance. Therefore, this study aims to verify the effect of the appearance of a social robot and consumer emotions on consumers' attitudes toward the social robot. To this end, a social robot design evaluation model is constructed by combining heterogeneous data from different sources. Specifically, three quantitative indicators of social robot appearance from the ABOT Database are included in the model. The consumer emotions for social robot design were collected through (1) the existing design evaluation literature, (2) online buzz such as product reviews and blogs, and (3) qualitative interviews on social robot design. We then collected scores for consumer emotions and attitudes toward various social robots through a large-scale consumer survey. First, we derived six major dimensions of consumer emotions from 23 detailed emotions through a dimension-reduction methodology. Then, statistical analysis was performed to verify the effect of the derived consumer emotions on attitudes toward social robots. Finally, a moderated regression analysis was performed to verify the effect of the quantitatively collected indicators of social robot appearance on the relationship between consumer emotions and attitudes toward social robots. Interestingly, several significant moderation effects were identified; these effects are visualized as two-way interaction plots to interpret them from multidisciplinary perspectives. This study makes a theoretical contribution by empirically verifying all stages from technical properties to consumers' emotions and attitudes toward social robots through linked data from heterogeneous sources. The results also have practical significance in helping to develop consumer-emotion-based design guidelines at the design stage of social robot development.
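
The moderated regression step can be sketched as an ordinary least-squares fit with an interaction term between a consumer-emotion score and an ABOT appearance indicator. The synthetic data and variable names below are purely illustrative, not the study's data or variables.

```python
import numpy as np

# Synthetic example: attitude regressed on an emotion score, an appearance
# indicator, and their interaction (the moderation term).
rng = np.random.default_rng(0)
n = 200
emotion = rng.normal(size=n)        # consumer-emotion dimension score (illustrative)
appearance = rng.normal(size=n)     # ABOT appearance attribute (illustrative)
attitude = (0.5 * emotion + 0.2 * appearance
            + 0.3 * emotion * appearance + rng.normal(scale=0.5, size=n))

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), emotion, appearance, emotion * appearance])
coef, *_ = np.linalg.lstsq(X, attitude, rcond=None)
print(dict(zip(["intercept", "emotion", "appearance", "interaction"], coef.round(2))))
```

A significant interaction coefficient is what the abstract refers to as a moderation effect: the appearance indicator changes the strength of the emotion-attitude relationship.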

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su; Cho, Yong-Suk; Kim, Jae-Hong; Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.53-69, 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem undertaken previously was recognizing the acceleration signal patterns of 10 handwritten digits; most earlier studies dealt with sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To improve discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly among performers. To tackle this problem, online incremental learning is applied so that our system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that as the number of reference patterns grows, some reference patterns contribute more to false-positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; the algorithm runs periodically to remove reference patterns that have a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each letter was performed 5 times per participant using a Nintendo Wii remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate; major confusion pairs are D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed heavily to the false-positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior given the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices, including the iPhone. The participating children exhibited improved concentration and active reactions to the service with our gesture interface. To prove the effectiveness of the gesture interface, a test was given to the children after they experienced an English teaching service; those who used the gesture-interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for enriching real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
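
The periodic reference-pattern pruning described above can be sketched as follows: each stored pattern accumulates positive (correct-match) and negative (false-positive) contribution counts, and patterns that hurt more than they help are dropped. The thresholds and field names are illustrative assumptions, not the paper's algorithm parameters.

```python
# Illustrative sketch of pruning an instance-based learning (IBL) reference set
# by each pattern's positive and negative contribution.

def prune_reference_set(reference_patterns, min_positive=2, max_negative=5):
    """Keep only reference patterns that help classification more than they hurt."""
    kept = []
    for pattern in reference_patterns:
        if pattern["negative"] >= max_negative:
            continue  # caused too many false positives
        if pattern["positive"] < min_positive and pattern["age"] > 100:
            continue  # rarely useful despite many classification attempts
        kept.append(pattern)
    return kept

refs = [
    {"label": "N", "positive": 12, "negative": 1, "age": 300},
    {"label": "W", "positive": 10, "negative": 7, "age": 300},  # often confused with N
    {"label": "D", "positive": 1,  "negative": 0, "age": 250},
]
print([p["label"] for p in prune_reference_set(refs)])
```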