• Title/Summary/Keyword: Human-Computer Interaction Analysis


Robot Gesture Recognition System based on PCA algorithm (PCA 알고리즘 기반의 로봇 제스처 인식 시스템)

  • Youk, Yui-Su;Kim, Seung-Young;Kim, Sung-Ho
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2008.04a
    • /
    • pp.400-402
    • /
    • 2008
  • Human-computer interaction (HCI) technology, which plays an important role in the exchange of information between humans and computers, is a key field of information technology. Recently, studies in which robots and control devices are controlled by the movements of a person's body or hands, without conventional input devices such as a keyboard and mouse, have been pursued in diverse directions, and their importance has been steadily increasing. This study proposes a method for recognizing a user's gestures by applying measurements from an acceleration sensor to the PCA algorithm.
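The abstract gives no implementation details, but the pipeline it names (project accelerometer windows with PCA, then match gestures in the reduced space) can be sketched as follows; the nearest-template classifier and window shapes are assumptions, not the paper's method:

```python
import numpy as np

def pca_fit(X, n_components=3):
    """Fit PCA on training gesture windows X of shape (n_samples, n_features)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance of the windows
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order]          # keep top principal directions

def project(X, mean, components):
    """Map raw sensor windows into the low-dimensional PCA space."""
    return (X - mean) @ components

def classify(x, mean, components, templates, labels):
    """Nearest-template gesture classification in PCA space (illustrative)."""
    z = project(x, mean, components)
    dists = [np.linalg.norm(z - t) for t in templates]
    return labels[int(np.argmin(dists))]
```

In practice each gesture window would be a flattened sequence of 3-axis acceleration samples, and the templates would be the projected means of each training gesture class.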


Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.2
    • /
    • pp.271-279
    • /
    • 2012
  • Objective: This study classified three emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals; such recognition is important for applying emotion detection to human-computer interaction systems. Method: 122 college students participated in the experiment. Three emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds from the baseline and from the emotional state, and 27 features were extracted. Emotion classification was performed by discriminant function analysis (DFA, in SPSS 15.0) using difference values obtained by subtracting baseline values from emotional-state values. Results: Physiological responses during the emotional states differed significantly from baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. However, future work is needed to obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This work improves the chance of recognizing various human emotions from physiological signals and can be applied to human-computer interaction systems for emotion recognition. It can also be useful in developing emotion theory, in profiling emotion-specific physiological responses, and in establishing a basis for emotion recognition systems in human-computer interaction.
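The core of the method above is a linear discriminant on baseline-subtracted features. A minimal stand-in for SPSS's DFA is Fisher's linear discriminant, sketched here for the two-class case on synthetic difference features (the feature values and class setup are illustrative, not the study's data):

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Fit a two-class Fisher linear discriminant.

    X0, X1: (n_samples, n_features) arrays of difference features
    (emotional-state value minus baseline value) for each class.
    Returns the projection vector w and a midpoint decision threshold.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter (pooled covariance, up to scaling)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)   # Fisher direction: Sw^{-1}(m1 - m0)
    threshold = w @ (m0 + m1) / 2      # midpoint between projected class means
    return w, threshold

def predict(X, w, threshold):
    """Label 1 if the projection exceeds the threshold, else 0."""
    return (X @ w > threshold).astype(int)
```

The study's three-class 84.7% result would correspond to a multi-class DFA over 27 such difference features; the sketch shows only the discriminant mechanics.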

Survey: Gesture Recognition Techniques for Intelligent Robot (지능형 로봇 구동을 위한 제스처 인식 기술 동향)

  • Oh, Jae-Yong;Lee, Chil-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.9
    • /
    • pp.771-778
    • /
    • 2004
  • Recently, various applications of robot systems have become more popular, in accordance with the rapid development of computer hardware/software, artificial intelligence, and automatic control technology. Formerly, robots were used mainly in industrial fields; nowadays, however, the robot is expected to play an important role in home service applications. To make robots more useful, further research is required on natural communication methods between humans and robot systems and on autonomous behavior generation. Gesture recognition is one of the most convenient methods for natural human-robot interaction, so it must be solved for the implementation of intelligent robot systems. In this paper, we describe the state of the art in gesture recognition technologies for intelligent robots according to four approaches: sensor-based, feature-based, appearance-based, and 3D-model-based methods. We also discuss some open problems and real applications in this research field.

Survey: Tabletop Display Techniques for Multi-Touch Recognition (멀티터치를 위한 테이블-탑 디스플레이 기술 동향)

  • Kim, Song-Gook;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.2
    • /
    • pp.84-91
    • /
    • 2007
  • Recently, vision-based research on user attention and action awareness has been actively pursued for human-computer interaction. Among such systems, various applications of tabletop displays have been developed in line with advances in touch sensing and co-located collaborative work. Although earlier systems supported only one user, current systems support multiple users. Therefore, collaborative work and interaction among the four elements (human, computer, displayed objects, physical objects) that are the ultimate goal of tabletop displays have become realizable. In general, a tabletop display system is designed around four key aspects: 1) multi-touch interaction using bare hands; 2) support for collaborative work and simultaneous user interaction; 3) direct touch interaction; 4) use of physical objects as interaction tools. In this paper, we present a critical analysis of the state of the art in multi-touch sensing techniques for tabletop display systems according to four methods: vision-based methods, non-vision-based methods, top-down projection systems, and rear projection systems. We also discuss some open problems and practical applications in this research field.

Introduction to Visual Analytics Research (비주얼 애널리틱스 연구 소개)

  • Oh, Yousang;Lee, Chunggi;Oh, Juyoung;Yang, Jihyeon;Kwag, Heena;Moon, Seongwoo;Park, Sohwan;Ko, Sungahn
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.5
    • /
    • pp.27-36
    • /
    • 2016
  • As big data become more complex than ever, various techniques and approaches are needed to better analyze and explore them. The research discipline of visual analytics has been proposed to support users' visual data analysis and decision-making. Since 2006, when the first visual analytics symposium was held, visual analytics research has grown in popularity as advances in computer graphics, data mining, and human-computer interaction have been incorporated into it. In this work, we introduce visual analytics research by reviewing and surveying the papers published in IEEE VAST 2015 in terms of data and visualization techniques, to help domestic researchers' understanding of visual analytics.

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal
    • /
    • v.37 no.4
    • /
    • pp.766-771
    • /
    • 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion due to the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that can extract human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of that point. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
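The triangulation step that the exo-cameras perform can be illustrated with a standard two-view linear (DLT) triangulation; the camera matrices below are assumed to be known and calibrated, which the abstract implies but does not detail:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two pixel observations.

    P1, P2: 3x4 projection matrices of the two exo-cameras.
    x1, x2: (u, v) image coordinates of the tracked point in each view.
    Builds the standard DLT system (u * P[2] - P[0]) . X = 0 and solves
    it in a least-squares sense via SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]        # dehomogenize
```

With the point's position recovered this way, the ego-camera only needs to resolve orientation, which is how the paper removes the rotation/translation ambiguity.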

A Study on Effects of Agent Movement on User’s Impression

  • Yamazaki, Tatsuya
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1886-1888
    • /
    • 2002
  • Non-verbal information plays an important role not only in human-to-human communication but also in human-computer interaction. In this paper, we examine the effects of a human-like agent's primitive movements (of the eyes, mouth, and head) on the user's impression. The SD (Semantic Differential) method was used for evaluation, and two factors were extracted by factor analysis. We found that the first factor in particular influenced the user's impression.
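The factor extraction mentioned above can be sketched with a principal-component-style extraction on the correlation matrix of SD ratings; the two-factor cut matches the paper, but the data layout and extraction variant here are assumptions (papers often use rotation steps such as varimax that are omitted):

```python
import numpy as np

def extract_factors(ratings, n_factors=2):
    """Extract factor loadings from SD rating data.

    ratings: (n_subjects, n_scales) matrix of semantic-differential scores.
    Returns an (n_scales, n_factors) loading matrix: eigenvectors of the
    scale-by-scale correlation matrix scaled by the root eigenvalues.
    """
    R = np.corrcoef(ratings, rowvar=False)   # correlations between SD scales
    eigvals, eigvecs = np.linalg.eigh(R)     # ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, order] * np.sqrt(eigvals[order])
```

Each column of the result is one factor; inspecting which SD scales load heavily on the first column corresponds to interpreting the paper's dominant impression factor.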


Technology Requirements for Wearable User Interface

  • Cho, Il-Yeon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.5
    • /
    • pp.531-540
    • /
    • 2015
  • Objective: The objective of this research is to investigate the fundamentals of human-computer interaction for wearable computers and to derive technology requirements. Background: A wearable computer can be worn at any time, with support for unrestricted communications and a variety of services that maximize the capability of information use. Key challenges in developing such wearable computers are a level of comfort such that users do not feel what they are wearing, and an easy, intuitive user interface. The research presented in this paper examines user interfaces for wearable computers. Method: We classified wearable user interface technologies and analyzed their advantages and disadvantages from the user's point of view. Based on this analysis, we identified user interface technologies on which to conduct research and development for commercialization. Results: Technology requirements for commercializing wearable computers are derived. Conclusion: User interface technology for wearable systems must start from an understanding of the ergonomic aspects of the end user, because users wear the system on their body. Developers should not try to develop state-of-the-art technology without a requirements analysis of the end users; if people do not use a technology, it cannot survive in the market. Currently, there is no dominant wearable user interface in the world, so this area invites new challenges beyond the traditional interface paradigm through various approaches and attempts. Application: The findings in this study are expected to be used in designing user interfaces for wearable systems, such as digital clothes and fashion apparel.

Analysis on Psychological and Educational Effects in Children and Home Robot Interaction (아동과 홈 로봇의 심리적.교육적 상호작용 분석)

  • Kim, Byung-Jun;Han, Jeong-Hye
    • Journal of The Korean Association of Information Education
    • /
    • v.9 no.3
    • /
    • pp.501-510
    • /
    • 2005
  • To facilitate interaction between home robots and humans, in-depth research into Human-Robot Interaction (HRI) is urgently needed. The purpose of this study was to examine how children interacted with a newly developed home robot named 'iRobi', in order to identify how the home robot affected their psychology and the effectiveness of learning through the home robot. Concerning the psychological effects, the children became familiar with the robot, found it possible to interact with it, and lost their initial anxiety. As for its learning effect, the group that studied using the home robot outperformed the groups using other types of learning media (books, WBI) in attention, learning interest, and academic achievement. Accordingly, home robots could serve as successful vehicles for promoting the psychological and educational interaction of children.


User Identification Using Real Environmental Human Computer Interaction Behavior

  • Wu, Tong;Zheng, Kangfeng;Wu, Chunhua;Wang, Xiujuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.6
    • /
    • pp.3055-3073
    • /
    • 2019
  • In this paper, a new user identification method is presented that uses real environmental human-computer interaction (HCI) behavior data to improve usability. User behavior data are collected continuously, without fixing experimental conditions such as text length, number of actions, etc. To illustrate the characteristics of real environmental HCI data, the probability density distributions and performance of keyboard and mouse data are analyzed through random sampling and a Support Vector Machine (SVM) algorithm. Based on this analysis of HCI behavior data in a real environment, the Multiple Kernel Learning (MKL) method is used, for the first time, for user HCI behavior identification, owing to the heterogeneity of keyboard and mouse data. All possible kernel methods are compared to determine the MKL algorithm's parameters and to ensure the robustness of the algorithm. The analysis shows that keyboard data have a narrower probability density distribution than mouse data; keyboard data perform better with a 1-min time window, while mouse data perform best with a 10-min window. Finally, experiments using the MKL algorithm with three global polynomial kernels and ten local Gaussian kernels achieve a user identification accuracy of 83.03% on a real environmental HCI dataset, demonstrating that the proposed method achieves encouraging performance.
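The multiple-kernel idea behind the method above (combining polynomial and Gaussian kernels over heterogeneous keyboard/mouse features) can be sketched with fixed kernel weights and a kernel ridge classifier; the paper learns the weights via MKL and uses 3+10 kernels, so the weights, kernel counts, and classifier here are simplifications:

```python
import numpy as np

def poly_kernel(A, B, degree=2):
    """Polynomial kernel (x.y + 1)^d between row sets A and B."""
    return (A @ B.T + 1.0) ** degree

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel exp(-gamma * ||x - y||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(A, B, weights=(0.5, 0.5)):
    """Fixed-weight convex combination of the two base kernels."""
    return weights[0] * poly_kernel(A, B) + weights[1] * rbf_kernel(A, B)

def fit(X, y, lam=1e-2):
    """Kernel ridge regression on labels y in {-1, +1}."""
    K = combined_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new):
    """Sign of the combined-kernel decision function."""
    return np.sign(combined_kernel(X_new, X_train) @ alpha)
```

In a full MKL formulation the weights in `combined_kernel` would themselves be optimized jointly with the classifier, which is the step the paper tunes per keyboard/mouse kernel.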