• Title/Summary/Keyword: human computer interface & interaction

Search Results: 157

Design and implementation of User Interface for Elementary School Students (초등학생을 위한 사용자 인터페이스의 설계 및 구현)

  • Lee, Yeong-Wha;Jun, Woo-Chun
    • Journal of The Korean Association of Information Education / v.11 no.1 / pp.119-130 / 2007
  • Although online education has many advantages over traditional education, such as providing vast amounts of information and overcoming limits of time and space, its effects may not be good unless a proper user interface is provided. Only a well-designed interface allows students to learn correct information quickly and to study creatively and actively. The purpose of this study is to design and implement a user interface suitable for elementary school students. To design such an interface, factors affecting human-computer interaction were analyzed and adopted. In addition, the design and structure of existing educational portal sites were analyzed so that the most common design elements could be adopted. Finally, an extensive survey was carried out to examine the preferences and navigation styles of elementary school students and teachers, and the survey results were reflected in the design principles of this work.


The Modified Block Matching Algorithm for a Hand Tracking of an HCI system (HCI 시스템의 손 추적을 위한 수정 블록 정합 알고리즘)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services / v.4 no.4 / pp.9-14 / 2003
  • A GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). GUI-based interaction has made computers simpler and easier to use. However, it does not easily support the full range of interaction needed to meet users' needs in a natural, intuitive, and adaptive way. In this paper, a modified BMA (block matching algorithm) is proposed to track a hand in an image sequence and to recognize it in each video frame, so that the hand can replace a mouse as a pointing device for virtual reality. The HCI system realized in this paper runs at 30 frames per second. For real-time processing, the modified BMA estimates the position of the hand and segments the hand region using the orientation of its motion and the color distribution of the hand region. The experimental results show that the modified BMA with the YCbCr (luminance Y, blue and red chroma components) color space guarantees real-time processing and a good recognition rate. Hand tracking by the modified BMA can be applied to virtual reality, games, or HCI systems for the disabled. (A rough code sketch of the segmentation and block-matching steps follows this entry.)

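A minimal sketch, in Python, of the two steps the abstract above combines: skin-color segmentation in the YCbCr color space and a sum-of-absolute-differences block search between consecutive frames. The chroma thresholds, block size, and search radius are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: skin segmentation in YCbCr plus a SAD block search.
# Thresholds, block size, and search radius are assumed, not the paper's values.
import cv2
import numpy as np

CB_RANGE = (77, 127)   # assumed skin chroma-blue bounds
CR_RANGE = (133, 173)  # assumed skin chroma-red bounds

def skin_mask(frame_bgr):
    """Binary mask of skin-colored pixels (OpenCV stores this space as YCrCb)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    return ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))

def match_block(prev_gray, cur_gray, top_left, block=16, search=8):
    """Displacement (dy, dx) of one block between frames, minimizing SAD."""
    y, x = top_left
    ref = prev_gray[y:y + block, x:x + block].astype(np.int32)
    best_sad, best = None, (0, 0)
    h, w = cur_gray.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                continue
            sad = np.abs(ref - cur_gray[yy:yy + block, xx:xx + block].astype(np.int32)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

In a real pipeline the block search would be restricted to blocks inside the skin mask to keep the per-frame cost low enough for 30 fps.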

Dynamic Gesture Recognition for the Remote Camera Robot Control (원격 카메라 로봇 제어를 위한 동적 제스처 인식)

  • Lee Ju-Won;Lee Byung-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.7 / pp.1480-1487 / 2004
  • This study proposes a novel gesture recognition method for remote camera robot control. To recognize dynamic gestures, the preprocessing step is image segmentation. Conventional methods for effective object segmentation need a great deal of color information about the object (hand) image, and in the recognition step they need many features for each object. To improve on these methods, this study proposes a method for recognizing dynamic hand gestures that consists of the MMS (Max-Min Search) method to segment the object image, the MSM (Mean Space Mapping) and COG (Center of Gravity) methods to extract image features, and an MLPNN (Multi-Layer Perceptron Neural Network) to recognize the dynamic gestures. In the experiments, the recognition rate of the proposed method was above 90%, which shows that it is usable as an HCI (Human Computer Interface) device for remote robot control.
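As an illustration of the COG (Center of Gravity) feature named above, a minimal NumPy sketch follows; the binary hand mask and the normalization to image coordinates are assumptions, since the abstract does not spell out the exact feature definition (the MMS and MSM steps are not reproduced).

```python
# Sketch of a center-of-gravity (COG) feature on a segmented binary hand mask.
# Normalizing by image size is an assumption; the per-frame COG sequence is
# the kind of trajectory a classifier such as an MLP could consume.
import numpy as np

def center_of_gravity(mask):
    """Normalized (x, y) centroid of a binary hand mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                      # nothing segmented in this frame
    h, w = mask.shape
    return xs.mean() / w, ys.mean() / h

def cog_trajectory(masks):
    """COG per frame; the sequence of centroids describes the dynamic gesture."""
    return [center_of_gravity(m) for m in masks]
```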

A Preliminary Study for Emotional Expression of Software Robot -Development of Hangul Processing Technique for Inference of Emotional Words- (소프트웨어 로봇의 감성 표현을 위한 기반연구 - 감성어 추론을 위한 한글 처리 기술 개발 -)

  • Song, Bok-Hee;Yun, Han-Kyung
    • Proceedings of the Korea Contents Association Conference / 2012.05a / pp.3-4 / 2012
  • User-centered man-machine interface technology has advanced considerably through the combination of user interface techniques and ergonomics, and it continues to progress. Information today is mostly delivered through sound, text, or images, but research on delivering information from an emotional perspective has not been very active. In particular, in the field of Human Computer Interaction, research on conveying emotion through voice or facial expression is still at an early stage; emoticons and flash-cons are used to convey emotion, but they remain unnatural and mechanical. This study is foundational work that enables a computer or application software to interact with users in a human-friendly way through its own virtual agent (Software Robot, Sobot): a technique for extracting, classifying, and processing emotional words in Hangul is developed so that artificial emotion can be injected into the information the computer delivers, improving users' emotional satisfaction. (A rough sketch of lexicon-based emotion-word extraction follows this entry.)

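Since the abstract above only outlines the goal of extracting and classifying emotional words from Hangul text, the sketch below shows nothing more than a toy lexicon lookup; the lexicon entries, categories, and the assumption that tokens arrive already lemmatized are all invented for the example.

```python
# Toy sketch of lexicon-based emotion-word extraction from Korean tokens.
# The lexicon and categories are invented; a real system would rely on a
# morphological analyzer and a curated emotion-word dictionary.
EMOTION_LEXICON = {
    "기쁘다": "joy",
    "슬프다": "sadness",
    "화나다": "anger",
    "무섭다": "fear",
}

def extract_emotions(tokens):
    """Map lemmatized tokens to emotion categories found in the lexicon."""
    return [(t, EMOTION_LEXICON[t]) for t in tokens if t in EMOTION_LEXICON]

# Example: tokens produced by an upstream morphological analysis step
print(extract_emotions(["오늘", "정말", "기쁘다"]))  # [('기쁘다', 'joy')]
```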

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for realizing education services based on robots. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over the complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those with raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture are very different among different performers. To tackle this problem, online incremental learning is applied so that our system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that as the number of reference patterns grows, some reference patterns contribute more to false positive classification. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each alphabet letter was performed 5 times per participant using a Nintendo Wii remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. Major confusion pairs are D (88%) and P (74%), I (81%) and U (75%), N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. By comparison with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior with regard to the number of pattern classes and the complexity of patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone. The participating children exhibited improved concentration and active reaction to the service with our gesture interface. To prove the effectiveness of the gesture interface, a test was taken by the children after experiencing an English teaching service. The test results showed that those who played with the gesture interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g. touch screen, vision, and voice.
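A minimal sketch of the instance-based learning and contribution-based pruning described above. The nearest-neighbour rule over fixed-length trajectory features, the Euclidean distance, and the pruning thresholds are assumptions; the paper's actual feature resampling and scoring are not given in the abstract.

```python
# Sketch of instance-based learning (IBL) with periodic pruning of reference
# patterns by their positive/negative contribution, as outlined above.
# Distance metric, feature layout, and thresholds are illustrative assumptions.
import numpy as np

class GestureIBL:
    def __init__(self, min_pos=1, max_neg=3):
        self.refs, self.labels = [], []
        self.pos, self.neg = [], []      # per-reference contribution counters
        self.min_pos, self.max_neg = min_pos, max_neg

    def add(self, feature, label):
        self.refs.append(np.asarray(feature, dtype=float))
        self.labels.append(label)
        self.pos.append(0)
        self.neg.append(0)

    def classify(self, feature):
        """Nearest-reference classification; returns (index, predicted label)."""
        feature = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(feature - r) for r in self.refs]
        idx = int(np.argmin(dists))
        return idx, self.labels[idx]

    def feedback(self, idx, was_correct):
        """Update the matched reference once the true label is known."""
        if was_correct:
            self.pos[idx] += 1
        else:
            self.neg[idx] += 1

    def prune(self):
        """Drop references with very low positive or high negative contribution.
        Intended to run periodically, after enough feedback has accumulated."""
        keep = [i for i in range(len(self.refs))
                if self.pos[i] >= self.min_pos and self.neg[i] <= self.max_neg]
        self.refs = [self.refs[i] for i in keep]
        self.labels = [self.labels[i] for i in keep]
        self.pos = [self.pos[i] for i in keep]
        self.neg = [self.neg[i] for i in keep]
```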

An alternative method for smartphone input using AR markers

  • Kang, Yuna;Han, Soonhung
    • Journal of Computational Design and Engineering / v.1 no.3 / pp.153-160 / 2014
  • As smartphones have recently come into wide use, they have become increasingly popular not only among young people but also among middle-aged people. Most smartphones adopt a capacitive full touch screen, so touch commands are made with fingers, unlike the PDAs of the past that used touch pens. In this case, a significant portion of the smartphone's screen is blocked by the finger, so it is impossible to see the area around the finger touching the screen; this makes precise input difficult. To solve this problem, this research proposes a method of using simple AR markers to improve the smartphone interface. A marker is placed in front of the smartphone camera. The camera image of the marker is then analyzed, and the position of the marker is used as the position of a mouse cursor. This method can support the click, double-click, and drag-and-drop used on PCs as well as the touch, slide, and long-touch input used on smartphones. In this way, smartphone input can be made more precise and simple, showing the possibility of a new concept of smartphone interface.
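The core of the method above is mapping the detected marker's image position to a cursor position on the screen. The sketch below assumes marker detection (e.g. with an ArUco-style detector) happens upstream and adds a simple dwell rule for a "click"; the resolutions, dwell time, and radius are illustrative assumptions, not the paper's values.

```python
# Sketch: map a detected AR-marker center (camera pixels) to a screen cursor,
# with a dwell-based click. Marker detection itself is assumed to run upstream;
# resolutions and thresholds are illustrative assumptions.
import time

CAM_W, CAM_H = 640, 480          # assumed camera resolution
SCREEN_W, SCREEN_H = 1080, 1920  # assumed phone screen resolution
DWELL_SEC = 0.8                  # hold still this long to register a "click"
DWELL_RADIUS = 20                # screen pixels counted as "still"

class MarkerCursor:
    def __init__(self):
        self.anchor = None       # where the current dwell started
        self.anchor_t = 0.0
        self.fired = False       # avoid repeating the same click every frame

    def to_screen(self, mx, my):
        # Mirror x so moving the marker right moves the cursor right.
        return ((CAM_W - mx) * SCREEN_W / CAM_W, my * SCREEN_H / CAM_H)

    def update(self, mx, my):
        """Feed one marker detection; returns (cursor_xy, clicked)."""
        x, y = self.to_screen(mx, my)
        now = time.monotonic()
        moved = (self.anchor is None or
                 (x - self.anchor[0]) ** 2 + (y - self.anchor[1]) ** 2 > DWELL_RADIUS ** 2)
        if moved:
            self.anchor, self.anchor_t, self.fired = (x, y), now, False
            return (x, y), False
        clicked = (not self.fired) and (now - self.anchor_t) >= DWELL_SEC
        if clicked:
            self.fired = True
        return (x, y), clicked
```

Gestures such as drag-and-drop or long-touch could be layered on the same dwell state, but those mappings are not specified in the abstract.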

A Study on the LED Button Guide to improve the IPTV's Usability (IPTV 사용성 향상을 위한 LED 버튼 가이드)

  • Kim, Sung-Hee;Kim, You-Min;Jung, Jae-Wook;Lee, Dong-Wook;Ryu, Won;Hahn, Min-Soo
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.933-937
    • /
    • 2009
  • The IPTV that has been commercialized and is currently being serviced to customers has a complicated GUI (Graphical User Interface) to provide two-way services, and a remote control containing more than 40 buttons, unlike the conventional TV. Accordingly, the remote control becomes one of the causes that worsen the usability of the IPTV. In this paper, we suggest an LED button guide system as a solution to improve the usability of the IPTV, and analyze the effects of the interface on user actions based on a user evaluation.


Hand Gesture Interface for Manipulating 3D Objects in Augmented Reality (증강현실에서 3D 객체 조작을 위한 손동작 인터페이스)

  • Park, Keon-Hee;Lee, Guee-Sang
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.20-28 / 2010
  • In this paper, we propose a hand gesture interface for manipulating augmented objects in 3D space using a camera. Generally, a marker is used to detect 3D movement in 2D images. However, a marker-based system has obvious drawbacks, since markers must always be included in the image, or additional equipment is needed for controlling the objects, which reduces immersion. To overcome this problem, we replace the marker with the planar hand shape by estimating the hand pose. A Kalman filter is used for robust tracking of the hand shape. The experimental results indicate the feasibility of the proposed algorithm for hand-based AR interfaces.
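The Kalman filter mentioned above can be sketched as a constant-velocity model over the 2D hand position; the state layout, frame rate, and noise covariances below are assumptions, not the paper's parameters.

```python
# Sketch of a constant-velocity Kalman filter for 2D hand tracking, as one
# plausible reading of the tracking step above. Noise covariances are assumed.
import numpy as np

dt = 1.0 / 30.0                      # assumed frame interval
F = np.array([[1, 0, dt, 0],         # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],          # only the hand position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                 # process noise (assumed)
R = np.eye(2) * 4.0                  # measurement noise (assumed)

x = np.zeros(4)                      # initial state
P = np.eye(4) * 100.0                # initial uncertainty

def kalman_step(z):
    """One predict/update cycle with measurement z = (x, y) of the hand."""
    global x, P
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    innov = np.asarray(z, dtype=float) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innov
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                      # filtered hand position
```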

Voice Driven Sound Sketch for Animation Authoring Tools (애니메이션 저작도구를 위한 음성 기반 음향 스케치)

  • Kwon, Soon-Il
    • The Journal of the Korea Contents Association / v.10 no.4 / pp.1-9 / 2010
  • Authoring tools for sketching the motion of characters to be animated have been studied, but natural interfaces for sound editing have not been studied sufficiently. In this paper, I present a novel method in which a sound sample is selected by speaking sound-imitation words (onomatopoeia). An experiment with a method based on statistical models, as generally used for pattern recognition, showed recognition accuracy of up to 97%. In addition, to address the difficulty of collecting data for newly enrolled sound samples, a GLR test based on only one sample of each sound-imitation word showed almost the same accuracy as the previous method.
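Assuming the GLR test mentioned above refers to a generalized likelihood ratio decision between an enrolled sound-imitation word model and a background (competing) model, a toy Gaussian version could look like the sketch below; the diagonal-Gaussian scoring and the zero threshold are invented for illustration.

```python
# Toy sketch of a likelihood-ratio decision for matching a spoken
# sound-imitation word against an enrolled model vs. a background model.
# Gaussian scoring and the threshold are assumptions for illustration.
import numpy as np

def log_gauss(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal Gaussian."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def accepts(feature, enrolled_mean, enrolled_var, bg_mean, bg_var, threshold=0.0):
    """Accept if the enrolled word explains the features better than background."""
    llr = log_gauss(feature, enrolled_mean, enrolled_var) - log_gauss(feature, bg_mean, bg_var)
    return llr > threshold
```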

3D Image Qube Password Interface Design and Implementation for Entrance/Exit of Sailors (선박승무원 출입관리를 위한 3차원 영상 큐브 암호 인터페이스 설계 및 구현)

  • Son, Nam-Rye;Jeong, Min-A;Lee, Seong-Ro
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.1A / pp.25-32 / 2010
  • Recently a passenger ship and liner has been spread throughout men of diversity, the space and informations is not open to general passenger. Therefore security systems are necessary for special sailors to admit them. Although security systems has a variety usage methods which are organism recognition(finger printer, iritis and vein etc) a few years ago, these usages has a defect reusing other objects because of leaving a trace. Therefore this paper designs and implements using 3D Qube image password interface which hand gestures are recognized after acquiring from 2D input image for protective marker of finger printer.