• Title/Summary/Keyword: Robot Eyes


3D Display Method for Moving Viewers (움직이는 관찰자용 3차원 디스플레이 방법)

  • Heo, Gyeong-Mu;Kim, Myeong-Sin
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.4 / pp.37-45 / 2000
  • In this paper we propose a method for detecting the eye positions of a moving viewer from images captured by a color CCD camera, together with a method for rendering a view-dependent 3D image that consists of depth estimation, image-based 3D object modeling, and stereoscopic display. In experiments with the proposed methods, the two eye positions were located with a success rate of 97.5% in a processing time of 0.39 seconds on a personal computer, and a view-dependent 3D image of an F16 flight model was displayed. A similarity measurement between the stereo image rendered in the z-buffer by Open Inventor and the image captured by a robot-mounted stereo camera showed that the view-dependent 3D picture produced by the proposed method is well suited to the viewer. (A minimal eye-detection sketch follows this entry.)

  • PDF
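
The abstract does not give the color-based detection algorithm itself, so the following is only a rough sketch of locating a viewer's eye pair in camera frames, using OpenCV's bundled Haar eye cascade as a generic stand-in for the paper's CCD-camera-based detector; the cascade file, camera index, and selection heuristic are assumptions.

```python
# Minimal sketch: locating a viewer's eye pair in webcam frames with OpenCV's
# bundled Haar eye cascade, as a generic stand-in for the paper's
# CCD-camera-based detector (the original algorithm is not in the abstract).
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_pair(frame_bgr):
    """Return the centers of the two largest detected eyes, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None
    # Keep the two largest detections and report their centers (x, y).
    eyes = sorted(eyes, key=lambda r: r[2] * r[3], reverse=True)[:2]
    return [(x + w // 2, y + h // 2) for (x, y, w, h) in eyes]

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(detect_eye_pair(frame))
cap.release()
```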

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.241-246 / 2016
  • Many people today are interested in facial expressions and human behavior, and human-robot interaction (HRI) researchers draw on digital image processing, pattern recognition, and machine learning in their studies. Facial feature point detection algorithms are important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting feature points from some images, because the images vary in size, color, brightness, and other conditions. We therefore propose an algorithm that augments the cascade facial feature point detector with a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. Outputs from the cascade facial feature point detector, in both color and grayscale, were used as input data for the network; the images were resized to 32×32, and the grayscale images were converted to the YUV format. The gray and color images form the basis for the convolutional neural network. We then classified about 1,200 test images of the subjects. The results show that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm refines the detector's results.
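
As a rough illustration of the kind of network the abstract describes, here is a minimal LeNet-5-style classifier sketch for 32×32 inputs in PyTorch; the channel counts, activations, and number of output classes are assumptions, since the abstract only states that the structure is based on LeCun's LeNet-5.

```python
# Minimal LeNet-5-style classifier sketch for 32x32 inputs (PyTorch).
# Channel counts, kernel sizes, and the number of classes are assumptions.
import torch
import torch.nn as nn

class LeNet5Like(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5),  # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                           # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),           # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                           # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: classify a batch of 32x32 grayscale eye/nose/mouth patches.
model = LeNet5Like(in_channels=1, num_classes=3)
patches = torch.randn(8, 1, 32, 32)
print(model(patches).shape)  # torch.Size([8, 3])
```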

A Study on Interactive Talking Companion Doll Robot System Using Big Data for the Elderly Living Alone (빅데이터를 이용한 독거노인 돌봄 AI 대화형 말동무 아가야(AGAYA) 로봇 시스템에 관한 연구)

  • Song, Moon-Sun
    • The Journal of the Korea Contents Association / v.22 no.5 / pp.305-318 / 2022
  • We focused on the care effectiveness of interactive AI robots and developed an AI toy robot called 'AGAYA' to provide more human-centered, personalized care. First, by applying P-TTS technology, users can maximize intimacy by choosing the voice they want to hear. Second, the robot supports healing in the user's own way through memory storage and memory-recall functions. Third, by giving the robot the five senses, in the roles of eyes, nose, mouth, ears, and hands, it seeks to offer better personalized services. Fourth, we attempted to develop features such as warm-temperature maintenance, aroma, sterilization and fine-dust removal, and a convenient charging method. These capabilities will expand the effective use of interactive robots by elderly people and contribute to building a positive image of the elderly, who can plan their remaining years productively and independently.

Development of Algorithm for Prediction of Bead Height on GMA Welding (GMA 용접의 최적 비드 높이 예측 알고리즘 개발)

  • 김인수;박창언;김일수;손준식;안영호;김동규;오영생
    • Journal of Welding and Joining / v.17 no.5 / pp.40-46 / 1999
  • The sensors employed in a robotic arc welding system must detect changes in weld characteristics and produce an output that is related to the change being detected. Such adaptive systems, which synchronize the robot arm and eyes using a primitive brain, will form the basis for robotic GMA (Gas Metal Arc) welding with increasingly higher levels of artificial intelligence. The objective of this paper is to learn the mapping characteristics of bead height; after learning, the neural estimator can predict the bead height from the learned mapping. The design parameters of the neural network estimator (the number of hidden layers and the number of nodes in a layer) were chosen from an estimation error analysis. A series of bead-on-plate GMA welding experiments was carried out to verify the performance of the neural network estimator. The experimental results show that the proposed estimator can predict bead height with reasonable accuracy and guarantee uniform weld quality. (A minimal estimator sketch follows this entry.)

  • PDF
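
As mentioned above, a minimal sketch of a feed-forward bead-height estimator is given below; the choice of input parameters (welding current, arc voltage, travel speed), the hidden-layer sizes, and the training values are illustrative assumptions, since the paper selects its network design from an estimation error analysis that the abstract does not detail.

```python
# Minimal sketch of a feed-forward bead-height estimator.  The input
# parameters, hidden-layer sizes, and training data are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: [welding current (A), arc voltage (V),
# travel speed (cm/min)] -> measured bead height (mm).
X = np.array([[200, 24, 30], [220, 26, 30], [240, 28, 35],
              [260, 30, 35], [280, 32, 40], [300, 34, 40]], dtype=float)
y = np.array([2.1, 2.4, 2.2, 2.6, 2.5, 2.8])

est = MLPRegressor(hidden_layer_sizes=(8, 8), activation="tanh",
                   max_iter=5000, random_state=0)
est.fit(X, y)

print(est.predict([[250, 29, 35]]))  # predicted bead height (mm)
```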

Interactive Virtual Studio & Immersive Viewer Environment (인터렉티브 가상 스튜디오와 몰입형 시청자 환경)

  • 김래현;박문호;고희동;변혜란
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06b / pp.87-93 / 1999
  • In this paper, we introduce a novel virtual studio environment in which a broadcaster in the virtual set interacts with tele-viewers as if they shared the same environment as participants. A tele-viewer participates physically in the virtual studio through a dummy head equipped with video "eyes" and microphone "ears" located in the studio. The dummy head, as a surrogate of the tele-viewer, follows the tele-viewer's head movements, and the tele-viewer sees and hears through it like a tele-operated robot. By introducing tele-presence technology into the virtual studio setting, the broadcaster can not only interact with the virtual set elements, as in a regular virtual studio, but also share the physical studio with the tele-viewers' surrogates as participants. The tele-viewer may see the real broadcaster in the virtual set and the other participants as avatars in place of their respective dummy heads. With an immersive display such as an HMD, the tele-viewer may look around the studio and interact with other avatars. The new interactive virtual studio with an immersive viewer environment may be applied to immersive tele-conferencing, tele-teaching, and interactive TV program production.

  • PDF

Analysis Torque Characteristics and Improved Efficiency of Permanent Magnet Multi-D.O.F. Spherical Motor (영구자석형 다자유도 구형전동기의 토크특성 분석과 효율 향상에 대한 연구)

  • Lee, Ho-Joon;Kim, Yong;Jang, Ik-Sang;Park, Hyun-Jong;Kang, Dong-Woo;Won, Sung-Hong;Lee, Ju
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.1 / pp.57-63 / 2012
  • A surface permanent magnet spherical motor can operate with three degrees of freedom and can be used in the joints of a robot's arms, legs, and eyes. Ongoing research on such new concepts is an essential part of the motor field and is expected to make a great contribution to the overall motor area in the future. The authors analyze the torque characteristics of the spherical motor in the rotating and positioning states. The future design direction is smaller motors with equivalent or higher output. Solutions for improving torque and efficiency include selecting a core made by special processing, such as powder metallurgy materials, whose high permeability and low eddy-current losses at high speed improve both torque and efficiency.

Gaze Matching Based on Multi-microphone for Remote Tele-conference (멀티 마이크로폰 기반 원격지 간 화상회의 시선 일치 기법)

  • Lee, Daeseong;Jo, Dongsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.429-431 / 2021
  • Recently, as an alternative to face-to-face meetings, video conferencing between remote locations has increased. However, video conferencing systems suffer from a mismatch of the remote users' eye gaze. A technology that matches the gaze information of participants at different remote locations is therefore needed to increase immersion in video conferences. In this paper, we propose a technique that realizes gaze-matched video conferencing by estimating the speaker's location with a multi-microphone setup. The method can be applied not only to video conferencing between remote locations but also to fields such as robot interaction and virtual human interfaces. (A minimal delay-estimation sketch follows this entry.)

  • PDF
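
The abstract does not specify the multi-microphone localization pipeline, so the following is only a minimal sketch of estimating the time difference of arrival (TDOA) between two microphone signals with plain cross-correlation, one simple way to approximate a speaker's direction; the sampling rate and the synthetic signals are assumptions.

```python
# Minimal sketch: TDOA between two microphone signals via cross-correlation,
# as an illustration of speaker-direction estimation (not the paper's method).
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Return the delay (seconds) of sig_b relative to sig_a (positive = b lags a)."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

# Synthetic example: the same burst arrives 2 ms (32 samples) later at mic B.
fs = 16000
burst = np.random.randn(1024)
sig_a = np.concatenate([burst, np.zeros(64)])
sig_b = np.concatenate([np.zeros(32), burst, np.zeros(32)])
print(estimate_tdoa(sig_a, sig_b, fs))  # ~ +0.002 s
```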

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in interfaces such as HRI (Human-Robot Interface) and HCI (Human-Computer Interaction). Facial expressions allow a system to produce reactions that correspond to the user's emotional state, and they let service agents such as intelligent robots infer which services to provide. In this article, we address expressive face modeling with an advanced Active Appearance Model (AAM) for facial emotion recognition, considering the six universal emotion categories defined by Ekman. In the human face, emotions are most strongly expressed by the eyes and mouth, so recognizing emotion from a facial image requires extracting feature points such as Ekman's Action Units (AUs). The AAM is one of the commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstruction parameters of a new grayscale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and compute the initial AAM parameters from the reconstructed facial model. The distance error between the model and the target contour is then reduced by adjusting the model parameters. Finally, after several iterations, the model matches the facial feature outline, and the matched features are used to recognize the facial emotion with a Bayesian Network.
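
As a simplified stand-in for the paper's Bayesian Network over Action Units, here is a minimal naive-Bayes sketch that infers one of Ekman's six basic emotions from binary AU activations; the AU list and all conditional probabilities are illustrative assumptions, not values from the paper.

```python
# Simplified sketch: inferring one of Ekman's six basic emotions from binary
# Action Unit (AU) activations with a naive-Bayes factorization.  All numbers
# below are made up for illustration only.
import numpy as np

emotions = ["happiness", "sadness", "surprise", "anger", "disgust", "fear"]
aus = ["AU6", "AU12", "AU1", "AU4", "AU5", "AU9"]  # cheek raiser, lip corner puller, ...

# P(AU active | emotion); rows = emotions, columns = AUs (illustrative values).
p_au_given_emotion = np.array([
    [0.9, 0.95, 0.1, 0.05, 0.1, 0.05],   # happiness
    [0.1, 0.05, 0.7, 0.6, 0.1, 0.1],     # sadness
    [0.1, 0.1, 0.9, 0.05, 0.9, 0.1],     # surprise
    [0.1, 0.05, 0.1, 0.9, 0.3, 0.2],     # anger
    [0.2, 0.1, 0.1, 0.5, 0.1, 0.9],      # disgust
    [0.1, 0.1, 0.8, 0.5, 0.8, 0.2],      # fear
])
prior = np.full(len(emotions), 1.0 / len(emotions))

def posterior(au_active):
    """au_active: binary vector of observed AU activations (same order as `aus`)."""
    au_active = np.asarray(au_active)
    lik = np.prod(np.where(au_active, p_au_given_emotion, 1 - p_au_given_emotion), axis=1)
    post = prior * lik
    return post / post.sum()

# Example: AU6 and AU12 observed (a smile) -> highest posterior for happiness.
print(dict(zip(emotions, np.round(posterior([1, 1, 0, 0, 0, 0]), 3))))
```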

Face Classification Using Cascade Facial Detection and Convolutional Neural Network (Cascade 안면 검출기와 컨볼루셔널 신경망을 이용한 얼굴 분류)

  • Yu, Je-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.1 / pp.70-75 / 2016
  • There is currently much research on recognizing human faces with machine vision, the technology of classification and analysis by machines that have sight like human eyes. In this paper, we propose an algorithm for classifying human faces using such a machine vision system. The algorithm consists of a convolutional neural network and a cascade face detector, and we used it to classify the faces of the subjects. For training the face classification algorithm, 2,000, 3,000, and 4,000 images of each subject were used, and the convolutional neural network was trained for 10 and 20 iterations. We then classified the images; about 6,000 images were classified to evaluate effectiveness. We also implemented a system that can classify the subjects' faces in real time using a USB camera.
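
A minimal sketch of the detect-then-classify loop from a USB camera is given below, using OpenCV's Haar frontal-face cascade to crop candidate faces, resize them to 32×32, and hand them to a classifier; `classify_face` is a placeholder for a trained CNN such as the LeNet-5-style sketch shown earlier, and the cascade file and display loop are assumptions.

```python
# Minimal detect-then-classify loop from a USB camera: a Haar cascade crops
# candidate faces, which are resized to 32x32 and handed to a classifier.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_face(patch_32x32):
    return "unknown"   # placeholder for a trained CNN's prediction

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
        label = classify_face(patch)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 255, 0), 1)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```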

Effective Nonlinear Filters with Visual Perception Characteristics for Extracting Sketch Features (인간시각 인식특성을 지닌 효율적 비선형 스케치 특징추출 필터)

  • Cho, Sung-Mok;Cho, Ok-Lae
    • Journal of the Korea Society of Computer and Information / v.11 no.1 s.39 / pp.139-145 / 2006
  • Feature extraction in digital images has many applications, such as robot vision, medical diagnostic systems, and motion video transmission. There are several methods for extracting features in digital images, for example nonlinear gradient, nonlinear Laplacian, and entropy convolutional filters. However, conventional convolutional filters are usually not efficient at extracting features, because feature formation in the human eye is more sensitive to dark regions than to bright regions. This paper describes several nonlinear filters that use the difference between the arithmetic mean and the harmonic mean in a window to extract sketch features. They offer some advantages, for example simple computation, dependence on local intensities, and low sensitivity to small intensity changes in very dark regions. Experimental results demonstrate more successful feature extraction than other conventional filters over a wide variety of intensity variations. (A minimal filter sketch follows this entry.)

  • PDF
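
As mentioned above, a minimal sketch of the arithmetic-minus-harmonic-mean response described in the abstract follows: within each local window the harmonic mean is pulled down by dark pixels, so the difference highlights edges adjacent to dark regions. The window size and the small epsilon are assumptions.

```python
# Minimal sketch of an arithmetic-minus-harmonic-mean sketch-feature filter.
# The response is zero in flat regions and large where a window mixes dark
# and bright pixels, with extra sensitivity near very dark regions.
import numpy as np
from scipy.ndimage import uniform_filter

def sketch_feature_response(img, window=3, eps=1e-6):
    img = img.astype(np.float64) + eps          # avoid division by zero
    arith_mean = uniform_filter(img, size=window)
    harm_mean = 1.0 / uniform_filter(1.0 / img, size=window)
    return arith_mean - harm_mean               # large near dark-side edges

# Example on a synthetic step edge between a dark and a bright region.
img = np.full((16, 16), 10.0)
img[:, 8:] = 200.0
resp = sketch_feature_response(img)
print(resp.max(), resp[8, 8])   # strong response along the edge column
```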