• Title/Summary/Keyword: Robot Interface


A Development of Multi-Emotional Signal Receiving Modules for Cellphone Using Robotic Interaction

  • Jung, Yong-Rae;Kong, Yong-Hae;Um, Tai-Joon;Kim, Seung-Woo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.2231-2236 / 2005
  • CP (Cellular Phone) is currently one of the most attractive technologies, and RT (Robot Technology) is considered one of the most promising next-generation technologies. We present a new technological concept named RCP (Robotic Cellular Phone), which combines RT and CP. RCP consists of three sub-modules: $RCP^{Mobility}$, $RCP^{Interaction}$, and $RCP^{Integration}$. $RCP^{Interaction}$, the main focus of this paper, is an interactive emotion system which provides the CP with multi-emotional signal-receiving functionality. $RCP^{Interaction}$ is linked with the communication functions of the CP in order to interface between the CP and the user through a variety of emotional models, and is divided into a tactile, an olfactory, and a visual mode. The tactile signal-receiving module is designed around vibration patterns and beat frequencies produced by mechanical-vibration conversion of musical melody, rhythm, and harmony. The olfactory signal-receiving module is designed around switching control of perfume-injection nozzles, which signal an incoming call to the called user through a scent associated with the calling user. The visual signal-receiving module is based on motion control of a DC-motored, wheel-based system that informs the called user of an incoming call through a motion associated with the calling user. In this paper, a prototype system is developed for the multi-emotional signal-receiving modes of the CP. We describe the overall structure of the system and provide experimental results for the functional modules.


Development of the 3D Imaging System and Automatic Registration Algorithm for the Intelligent Excavation System (IES) (지능형 굴삭 시스템을 위한 모바일 3D 이미징 시스템 및 자동 정합 알고리즘의 개발)

  • Chae, Myung-Jin;Lee, Gyu-Won;Kim, Jung-Ryul;Park, Jae-Woo;Yoo, Hyun-Seok;Cho, Moon-Young
    • Korean Journal of Construction Engineering and Management / v.10 no.1 / pp.136-145 / 2009
  • The objective of the Intelligent Excavation System (IES) is to recognize the work environment, produce a work plan, and automatically control the excavator by integrating sensor and robot technologies. This paper discusses one of the core technologies of the IES development project: 3D work environment modeling. A 3D laser scanner is used to build a 3-dimensional mathematical model that can be visualized in 3D in virtual space. This paper describes (1) how the most appropriate 3D imaging system was chosen; (2) the development of the user interface and customization of the software that controls the scanner for the IES project; (3) the development of the mobile station for the scanner; and (4) the algorithm for the automatic registration of laser scan segments. The developed system has been tested in the construction field, and lessons learned and future development requirements are presented.
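The abstract does not detail the registration algorithm itself. As a rough, hypothetical illustration of the core step in rigidly registering two laser scan segments, the sketch below recovers a rotation and translation from already-matched point pairs using the Kabsch algorithm; the correspondence search, which the paper's automatic method would also need, is omitted.

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch: find rotation R and translation t minimizing ||R @ p_i + t - q_i||."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# synthetic check: transform a scan segment by a known rigid motion and recover it
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))                # source segment points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 0.5])
Q = P @ R_true.T + t_true                    # target segment points
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true))  # True
```

In a full pipeline this alignment would be iterated with nearest-neighbor matching (ICP-style) since real scan segments do not come with known correspondences.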

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network (Deep Belief Network를 이용한 뇌파의 음성 상상 모음 분류)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.59-64 / 2015
  • In this paper, we demonstrate the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), especially in relation to imagined speech. In recent years, growing interest in the BCI field has led to the development of a number of useful applications, such as robot control, game interfaces, exoskeleton limbs, and so on. However, while imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, there are some problems in implementing such a system. In a previous paper, we already handled some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although it required complementation for multi-class classification problems. In view of this, this paper provides a suitable solution for vowel classification of imagined speech. We used the DBN algorithm, known as a deep learning algorithm, for multi-class vowel classification, and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, and /u/. For the experiment, we obtained 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. Eigenvalues of the covariance matrix of the EEG data were used as the feature vector of each vowel. In the analysis, we provide the classification results of a back-propagation artificial neural network (BP-ANN) for comparison with the DBN. The classification accuracy of the BP-ANN was 52.04%, while that of the DBN was 87.96%; that is, the DBN performed 35.92 percentage points better on multi-class imagined speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
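A minimal sketch of the feature-extraction step described above, turning one EEG trial into the eigenvalues of its channel covariance matrix. The exact preprocessing is not given in the abstract; the descending sort and sum-to-one normalization here are illustrative assumptions.

```python
import numpy as np

def covariance_eigen_features(eeg_trial):
    """eeg_trial: (channels, samples) array for one imagined-vowel trial.
    Returns the eigenvalues of the channel covariance matrix, sorted in
    descending order and normalized to sum to 1 (an assumed convention,
    so trials with different overall power stay comparable)."""
    cov = np.cov(eeg_trial)              # (channels, channels) covariance
    eigvals = np.linalg.eigvalsh(cov)    # real eigenvalues, ascending order
    eigvals = eigvals[::-1]              # descending
    return eigvals / eigvals.sum()

# toy trial: 32 channels, 512 samples of noise
rng = np.random.default_rng(1)
trial = rng.normal(size=(32, 512))
feat = covariance_eigen_features(trial)
print(feat.shape)  # (32,)
```

One such 32-dimensional vector per trial would then be fed to the DBN (or BP-ANN) classifier.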

Hand Gesture Recognition using Multivariate Fuzzy Decision Tree and User Adaptation (다변량 퍼지 의사결정트리와 사용자 적응을 이용한 손동작 인식)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.81-90 / 2008
  • With increasing demand for services for the disabled and the elderly, assistive technologies have developed rapidly. Natural human signals such as voice or gesture have been applied to systems for assisting the disabled and the elderly. As an example of this kind of human-robot interface, the Soft Remote Control System has been developed by HWRS-ERC at KAIST [1]. This system is a vision-based hand gesture recognition system for controlling home appliances such as televisions, lamps, and curtains. One of the most important technologies of the system is the hand gesture recognition algorithm. The problems that most frequently lower the recognition rate of hand gestures are inter-person variation and intra-person variation. Intra-person variation can be handled by introducing fuzzy concepts. In this paper, we propose a multivariate fuzzy decision tree (MFDT) learning and classification algorithm for hand motion recognition. To recognize the hand gestures of a new user, the most suitable recognition model among several well-trained models is selected using a model selection algorithm and incrementally adapted to the user's hand gestures. For the general performance of the MFDT as a classifier, we report classification rates on benchmark data from the UCI repository. For hand gesture recognition performance, we tested the algorithm using hand gesture data collected from 10 people over 15 days. The experimental results show that the classification and user adaptation performance of the proposed algorithm is better than that of a general fuzzy decision tree.
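The MFDT itself learns multivariate splits from data; the toy sketch below illustrates only the underlying fuzzy-decision idea it builds on, using a single hypothetical soft split. The membership function, split parameters, and leaf class distributions are all invented for illustration: a sample reaches both branches with a degree of membership, and the leaf distributions are blended accordingly instead of following one crisp path.

```python
import numpy as np

def sigmoid_membership(x, center, slope):
    """Soft split: degree to which x falls on the 'right' branch."""
    return 1.0 / (1.0 + np.exp(-slope * (x - center)))

def fuzzy_tree_classify(x):
    """Tiny two-leaf fuzzy tree: the sample reaches both leaves with a
    membership degree, and the leaf class distributions are blended."""
    right = sigmoid_membership(x, center=0.5, slope=10.0)
    left = 1.0 - right
    leaf_left = np.array([0.9, 0.1])   # P(class) stored at the left leaf
    leaf_right = np.array([0.2, 0.8])  # P(class) stored at the right leaf
    return left * leaf_left + right * leaf_right

print(fuzzy_tree_classify(0.5))  # exactly between leaves: [0.55 0.45]
```

This soft blending is what lets fuzzy trees absorb intra-person variation: a gesture feature near a split boundary degrades gracefully instead of flipping class.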


Development of a Hand Shape Editor for Sign Language Expression (수화 표현을 위한 손 모양 편집 프로그램의 개발)

  • Oh, Young-Joon;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea SC / v.44 no.4 s.316 / pp.48-54 / 2007
  • Hand shape is one of the important elements of Korean Sign Language (KSL), a communication method for the deaf. To express sign motion in an OpenGL-based virtual reality environment, we need an editor that can insert and modify sign motion data. However, it is very difficult for people who lack knowledge of sign language to edit and express hand shapes exactly using existing editors. We also need a program to efficiently construct and store hand shape data, because the amount of data in a sign word dictionary is very large. In this paper, we developed a KSL hand shape editor that makes it easy to construct and edit hand shapes through a graphical user interface (GUI) and to store them in a database. Hand shape codes are used in a sign word editor to synthesize sign motion, which decreases the total amount of KSL data.

A Development of Multi-Emotional Signal Receiving Modules for Ubiquitous RCP Interaction (유비쿼터스 RCP 상호작용을 위한 다감각 착신기능모듈의 개발)

  • Jang Kyung-Jun;Jung Yong-Rae;Kim Dong-Wook;Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.1 / pp.33-40 / 2006
  • We present a new technological concept named RCP (Robotic Cellular Phone), which combines RT (Robot Technology) and CP (Cellular Phone); that is, a ubiquitous robot. RCP consists of three sub-modules: RCP Mobility, RCP Interaction, and RCP Integration. RCP Interaction, the main focus of this paper, is an interactive emotion system which provides the CP with multi-emotional signal-receiving functionality. RCP Interaction is linked with the communication functions of the CP in order to interface between the CP and the user through a variety of emotional models, and is divided into a tactile, an olfactory, and a visual mode. The tactile signal-receiving module is designed around vibration patterns and beat frequencies produced by mechanical-vibration conversion of musical melody, rhythm, and harmony. The olfactory signal-receiving module is designed around switching control of perfume-injection nozzles, which signal an incoming call to the called user through a scent associated with the calling user. The visual signal-receiving module is based on motion control of a DC-motored, wheel-based system that informs the called user of an incoming call through a motion associated with the calling user. In this paper, a prototype system is developed for the multi-emotional signal-receiving modes of the CP. We describe the overall structure of the system and provide experimental results for the functional modules.

Hand Interface using Intelligent Recognition for Control of Mouse Pointer (마우스 포인터 제어를 위해 지능형 인식을 이용한 핸드 인터페이스)

  • Park, Il-Cheol;Kim, Kyung-Hun;Kwon, Goo-Rak
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.5 / pp.1060-1065 / 2011
  • In this paper, the proposed method recognizes the hand using color information from the camera input image and controls the mouse pointer using the recognized hand; specific commands are designed to be performed with the mouse pointer. Most users feel uncomfortable with existing interactive multimedia systems because they depend on particular external input devices such as pens and mice. The proposed method compensates for these shortcomings by using the bare hand, without external input devices. In the experiments, hand regions and the background are separated using color information from the camera image, and the coordinates of the mouse pointer are determined from the center coordinates of the segmented hand. When the mouse pointer is placed in a predefined area using these coordinates, the robot moves and executes the corresponding command. Experimental results show that the recognition of the proposed method is more accurate, but it is still sensitive to changes in the color of the light.
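A minimal sketch of the segmentation-and-centroid step described above, assuming an HSV input frame; the hue/saturation thresholds are hypothetical placeholders, since the paper's calibrated skin-color ranges are not given in the abstract.

```python
import numpy as np

def hand_centroid(frame_hsv, h_range=(0, 20), s_min=40):
    """Segment skin-colored pixels by hue/saturation thresholds (assumed
    ranges, not the paper's values) and return the centroid of the region,
    which serves as the mouse-pointer coordinate."""
    h, s = frame_hsv[..., 0], frame_hsv[..., 1]
    mask = (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                       # no hand found in this frame
    return float(xs.mean()), float(ys.mean())  # (x, y) pointer position

# toy 100x100 HSV frame with a "skin-toned" 10x10 patch at rows/cols 40..49
frame = np.zeros((100, 100, 3))
frame[40:50, 40:50, 0] = 10   # hue inside the assumed skin range
frame[40:50, 40:50, 1] = 80   # sufficiently saturated
print(hand_centroid(frame))   # (44.5, 44.5)
```

The abstract's noted sensitivity to lighting color is visible here too: fixed hue thresholds fail when illumination shifts the apparent skin hue.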

Development of EEG Signals Measurement and Analysis Method based on Timbre (음색 기반 뇌파측정 및 분석기법 개발)

  • Park, Seung-Min;Lee, Young-Hwan;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.3 / pp.388-393 / 2010
  • Culture Technology (CT) refers to all forms of intangible technology that increase the added value of cultural products as cultural content passes through the value chain of media and distribution, serving the development of the cultural industry and the commercialization of technology. In the field of CT, a variety of applications have been studied by analyzing the characteristics of music, and research that measures EEG and uses the results to detect responses to musical stimuli is attracting attention. In this paper, the EEG response to musical stimuli is amplified by an averaging method in an ERP (Event-Related Potential) experiment, timbre-related features are extracted, and the ICA algorithm is applied to remove noise; the results are then used to analyze the characteristics of the EEG.
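The ICA-based noise removal is not shown here; the sketch below illustrates only the ERP averaging step described above, with invented channel counts, sampling layout, and event times.

```python
import numpy as np

def erp_average(eeg, events, pre, post):
    """Average EEG epochs time-locked to stimulus onsets.
    eeg: (channels, samples); events: sample indices of musical stimuli.
    Single-trial responses are buried in background activity; averaging N
    epochs attenuates uncorrelated noise by roughly sqrt(N) while the
    stimulus-locked ERP component survives."""
    epochs = [eeg[:, e - pre:e + post] for e in events
              if e - pre >= 0 and e + post <= eeg.shape[1]]
    return np.mean(epochs, axis=0)        # (channels, pre + post)

# toy recording: 8 channels, 10000 samples, stimuli every 500 samples
rng = np.random.default_rng(2)
eeg = rng.normal(size=(8, 10000))
events = np.arange(500, 9500, 500)
erp = erp_average(eeg, events, pre=100, post=400)
print(erp.shape)  # (8, 500)
```

In the paper's pipeline, ICA would typically be applied to the continuous or epoched data to remove artifact components before or after this averaging.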

Robust Real-time Pose Estimation to Dynamic Environments for Modeling Mirror Neuron System (거울 신경 체계 모델링을 위한 동적 환경에 강인한 실시간 자세추정)

  • Jun-Ho Choi;Seung-Min Park
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.3 / pp.583-588 / 2024
  • With the emergence of brain-computer interface (BCI) technology, analyzing mirror neurons has become more feasible. However, evaluating the accuracy of BCI systems that rely on human thoughts poses challenges due to their qualitative nature. To harness the potential of BCI, we propose a new approach to measuring accuracy based on the characteristics of mirror neurons in the human brain, which are influenced by speech speed depending on the ultimate goal of a movement. In Chapter 2 of this paper, we introduce mirror neurons and explain human posture estimation for mirror neurons. In Chapter 3, we present a robust pose estimation method suitable for real-time dynamic environments, built on human posture estimation techniques. Furthermore, we propose a method to analyze the accuracy of BCI using this robust pose estimation environment.

Development of Rotation Invariant Real-Time Multiple Face-Detection Engine (회전변화에 무관한 실시간 다중 얼굴 검출 엔진 개발)

  • Han, Dong-Il;Choi, Jong-Ho;Yoo, Seong-Joon;Oh, Se-Chang;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.116-128 / 2011
  • In this paper, we propose the structure of a high-performance face-detection engine that responds well to facial rotation using a rotation transformation, and minimizes the required memory usage compared to previous face-detection engines. The validity of the proposed structure has been verified through FPGA implementation. For high-performance face detection, the MCT (Modified Census Transform) method, which is robust against lighting changes, was used. The AdaBoost learning algorithm was used to create optimized learning data, and the rotation transformation method was added to maintain effectiveness against face rotation. The proposed hardware structure is composed of a Color Space Converter, Noise Filter, Memory Controller Interface, Image Rotator, Image Scaler, MCT (Modified Census Transform), Candidate Detector / Confidence Mapper, Position Resizer, Data Grouper, and Overlay Processor / Color Overlay Processor. The face-detection engine was tested using a Virtex5 LX330 FPGA board, a QVGA-grade CMOS camera, and an LCD display. It was verified that the engine demonstrates excellent performance in diverse real-life environments and on a standard face-detection database. As a result, a high-performance real-time face-detection engine was developed that processes at least 60 frames per second, is effective against lighting changes and face rotation, and can detect 32 faces of various sizes simultaneously.
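As a small software sketch of the MCT idea mentioned above (not the paper's hardware pipeline): each pixel in a 3x3 neighborhood is compared against the local mean, giving a 9-bit index that is unchanged under uniform brightness shifts or positive gain, which is what makes the feature robust to lighting changes.

```python
import numpy as np

def mct_3x3(patch):
    """Modified Census Transform of one 3x3 patch: compare every pixel
    (including the center) against the neighborhood mean and pack the
    nine comparison bits into a single integer index."""
    bits = (patch.flatten() > patch.mean()).astype(int)
    return int("".join(map(str, bits)), 2)

def mct_image(img):
    """Apply the 3x3 MCT at every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = mct_3x3(img[y:y + 3, x:x + 3])
    return out

patch = np.array([[10, 10, 10],
                  [10, 50, 10],
                  [10, 10, 10]])
print(mct_3x3(patch))  # only the center exceeds the mean -> 0b000010000 = 16
```

A classifier cascade (here, one trained with AdaBoost) then looks up per-position weights for these 9-bit indices instead of raw pixel values.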