• Title/Summary/Keyword: Computer interaction

Recognition of Hand gesture to Human-Computer Interaction (손동작 인식을 통한 Human-Computer Interaction 구현)

  • 이래경;김성신
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.1 / pp.28-32 / 2001
  • Human hand gestures have long served as a means of communication, playing the role of a language. As society becomes increasingly information-driven, faster and more accurate communication and information transfer are needed. Accordingly, much recent research has sought to use the free gestures expressed with the two hands for human-computer interaction and the expression of human intent, compensating for the shortcomings of existing input devices. This paper proposes a new recognition algorithm that uses hand features extracted from 2-D input images, without relying on dynamic hand motion, and implements gesture recognition with a Radial Basis Function Network and additional feature points for a higher recognition rate and real-time processing. Experiments applying the recognized gestures to robot control were also performed to evaluate the recognition rate and the semantic accuracy of the gesture expressions.
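The Radial Basis Function network used for classification above can be sketched minimally as follows. The Gaussian width, the toy 2-D feature vectors, and the choice of placing one center at each training sample are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    # Gaussian radial basis activations for each (sample, center) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, centers, sigma):
    # Solve the linear output layer by least squares
    Phi = rbf_features(X, centers, sigma)
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return W

def predict(X, centers, sigma, W):
    return rbf_features(X, centers, sigma) @ W

# Hypothetical 2-D hand-feature vectors for two gesture classes (0 and 1)
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0.0, 0.0, 1.0, 1.0])
W = train_rbf(X, y, centers=X, sigma=0.5)
pred = predict(X, X, 0.5, W)
```

With centers at the training samples, the Gaussian kernel matrix is positive definite, so the least-squares output layer interpolates the labels exactly.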

A Study on Comparative Analysis of Interaction of Class Based on ICT : In The Case of Social Studies of Elementary School (ICT기반 수업 상호작용 비교 분석 연구 : 초등학교 사회과목 대상으로)

  • Jo, Jaechoon;Lim, Heuiseok
    • The Journal of Korean Association of Computer Education / v.18 no.6 / pp.63-69 / 2015
  • Interaction is an important factor in the classroom. Existing interaction analysis methods have examined only language-centered interaction between teacher and students. In this paper, we developed an ICT-based interaction analysis system that analyzes ICT interaction alongside language-centered interaction, and analyzed classroom interaction through FIACS and ICT-FIACS. The system consists of ten classification items and analysis indexes. To compare ICT-FIACS with FIACS, we analyzed ICT interaction in a sixth-grade elementary school classroom. The analysis yielded an ICT utilization index of 63.62%, a teacher ICT utilization index of 57.71%, and a student ICT utilization index of 42.29%. With this system, ICT interaction can be analyzed as well as language-centered interaction in an ICT-based classroom.
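Utilization indexes of the kind reported above can be tallied from classified classroom events. The category names below are illustrative placeholders, not the actual ICT-FIACS classification items.

```python
# Hypothetical sequence of classified classroom events
events = ["teacher_ict", "student_talk", "student_ict", "teacher_talk",
          "teacher_ict", "student_ict", "teacher_ict"]

ict_events = [e for e in events if e.endswith("_ict")]

# Share of all events that involve ICT use
ict_utilization = 100.0 * len(ict_events) / len(events)

# Teacher's share of the ICT-using events
teacher_share = 100.0 * sum(e == "teacher_ict" for e in ict_events) / len(ict_events)
```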

Evaluating the Effectiveness of Nielsen's Usability Heuristics for Computer Engineers and Designers without Human Computer Interaction Background (비 HCI 전공자들을 대상으로 한 Nielsen의 Usability Heuristics에 대한 이해 정도 평가)

  • Jeong, YoungJoo;Sim, InSook;Jeong, GooCheol
    • The Journal of Korean Institute for Practical Engineering Education / v.2 no.2 / pp.165-171 / 2010
  • Usability heuristics ("heuristics") are general principles for usability evaluation during user interface design. Our ultimate goal is to extend the practice of usability evaluation methods to a wider audience (e.g., user interface designers and engineers) than Human Computer Interaction (HCI) professionals. To this end, we explored the degree to which Jakob Nielsen's ten usability heuristics are understood by professors and students in design and computer engineering. None of the subjects received formal training in HCI, though some may have had an awareness of some HCI principles. The study identified easy-to-understand heuristics, examined the reasons for the ambiguities in others, and discovered differences between the responses of professors and students to the heuristics. In the course of the study, the subjects showed an increased tendency to think in terms of user-centric design. Furthermore, the findings in this study offer suggestions for improving these heuristics to resolve ambiguities and to extend their practice for user interface designers and engineers.

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a region-based convolutional neural network and state-of-the-art object detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Compared to supervised learning methods, our method reduces data-labeling time on the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
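The active semi-supervised labeling loop described above can be sketched schematically: confident predictions are pseudo-labeled automatically, while the most uncertain samples are sent to a human annotator under a query budget. The thresholds, the budget, and the toy scoring function below are assumptions, not the EHL framework's actual values.

```python
def active_semi_supervised(labeled, unlabeled, model, hi=0.9, lo=0.6, budget=2):
    """One pass of pseudo-labeling plus active querying.
    `model(x)` stands in for the CNN's softmax confidence on sample x."""
    queried = 0
    for x in list(unlabeled):
        c = model(x)
        if c >= hi:
            labeled.append((x, "pseudo"))   # confident: pseudo-label automatically
            unlabeled.remove(x)
        elif c <= lo and queried < budget:
            labeled.append((x, "oracle"))   # uncertain: query a human annotator
            unlabeled.remove(x)
            queried += 1
    return labeled, unlabeled

# Toy run: the "confidence" of a sample is just its value
model = lambda x: x
labeled, unlabeled = active_semi_supervised([], [0.95, 0.5, 0.7, 0.3], model)
```

Samples left in the middle band stay unlabeled until a retrained model becomes more decisive about them, which is how labeling effort shrinks over successive rounds.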

An Exponential Smoothing Adaptive Failure Detector in the Dual Model of Heartbeat and Interaction

  • Yang, Zhiyong;Li, Chunlin;Liu, Yanpei;Liu, Yunchang;Xu, Lijun
    • Journal of Computing Science and Engineering / v.8 no.1 / pp.17-24 / 2014
  • In this paper, we propose a new implementation of a failure detector that uses a dual model of heartbeat and interaction. First, the heartbeat model is adopted to shorten the detection time; if the detecting process does not receive the heartbeat message within the expected time, the interaction model is then used to check the monitored process further. The expected time is calculated using the exponential smoothing method, which can estimate the next arrival time not only for random data but also for data with linear trends. We prove that the new detector can be an eventually perfect failure detector.
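The heartbeat side of this scheme can be sketched as follows: the inter-arrival time is exponentially smoothed, and a process is suspected only after the predicted next arrival (plus a safety margin) has passed, at which point the interaction model would take over. The smoothing factor and margin below are illustrative choices, not the paper's parameters.

```python
class ESFailureDetector:
    """Predicts the next heartbeat arrival via exponential smoothing."""

    def __init__(self, alpha=0.3, margin=0.1):
        self.alpha = alpha      # smoothing factor for inter-arrival times
        self.margin = margin    # safety margin added to the expected arrival
        self.last = None        # timestamp of the last heartbeat
        self.interval = None    # smoothed inter-arrival estimate

    def heartbeat(self, t):
        if self.last is not None:
            obs = t - self.last
            if self.interval is None:
                self.interval = obs
            else:
                # exponential smoothing of the observed inter-arrival time
                self.interval = self.alpha * obs + (1 - self.alpha) * self.interval
        self.last = t

    def deadline(self):
        # time after which the interaction model should double-check the process
        return self.last + self.interval + self.margin

    def suspects(self, now):
        return now > self.deadline()

d = ESFailureDetector()
for t in (0.0, 1.0, 2.0):   # heartbeats arriving once per second
    d.heartbeat(t)
```

With a steady one-second heartbeat, the smoothed interval stays at 1.0 and the deadline after the heartbeat at t=2.0 is 3.1.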

A Novel Interaction Method for Mobile Devices Using Low Complexity Global Motion Estimation

  • Nguyen, Toan Dinh;Kim, JeongHwan;Kim, SooHyung;Yang, HyungJeong;Lee, GueeSang;Chang, JuneYoung;Eum, NakWoong
    • ETRI Journal / v.34 no.5 / pp.734-742 / 2012
  • A novel interaction method for mobile phones using their built-in cameras is presented. By estimating the path connecting the center points of frames captured by the camera phone, objects of interest can be easily extracted and recognized. To estimate the movement of the mobile phone, corners and corresponding Speeded-Up Robust Features descriptors are used to calculate the spatial transformation parameters between the previous and current frames. These parameters are then used to map the locations of the center points from the previous frame into the current frame. Experimental results obtained from real image sequences show that the proposed system is efficient, flexible, and able to provide accurate and stable results.
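The center-path idea above can be sketched with a simplified motion model: a pure translation estimated by least squares from matched keypoints, accumulated frame to frame. The paper estimates a richer spatial transformation from SURF matches; the translation-only model and the toy point sets here are simplifying assumptions.

```python
import numpy as np

def estimate_translation(prev_pts, curr_pts):
    # Least-squares translation between matched keypoints
    # (a simplified stand-in for the full frame-to-frame transformation)
    return (curr_pts - prev_pts).mean(axis=0)

def track_center(matches, start=(0.0, 0.0)):
    # Re-project the frame center through successive frame-to-frame motions
    center = np.array(start, dtype=float)
    path = [center.copy()]
    for prev_pts, curr_pts in matches:
        center += estimate_translation(prev_pts, curr_pts)
        path.append(center.copy())
    return np.array(path)

# Toy keypoints shifted by (2, 0) between frames 1-2 and by (0, 3) between 2-3
prev = np.array([[0.0, 0.0], [1.0, 1.0]])
matches = [(prev, prev + np.array([2.0, 0.0])),
           (prev, prev + np.array([0.0, 3.0]))]
path = track_center(matches)
```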

PERSONAL SPACE-BASED MODELING OF RELATIONSHIPS BETWEEN PEOPLE FOR NEW HUMAN-COMPUTER INTERACTION

  • Amaoka, Toshitaka;Laga, Hamid;Saito, Suguru;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.746-750 / 2009
  • In this paper we focus on Personal Space (PS) as a nonverbal communication concept for building a new form of human-computer interaction. Analyzing people's positions with respect to their PS gives insight into the nature of their relationships. We propose to analyze and model the PS using Computer Vision (CV) and to visualize it using computer graphics. For this purpose, we define the PS based on four parameters: the distance between people, their face orientations, age, and gender. We automatically estimate the first two parameters from image sequences using CV technology, while the other two parameters are set manually. Finally, we calculate the two-dimensional relationships of multiple persons and visualize them as 3D contours in real time. Our method can sense and visualize invisible and unconscious PS distributions and convey the spatial relationship of users through an intuitive visual representation. The results of this paper can be applied to human-computer interaction in public spaces.
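One common way to model a direction-dependent personal space like the one described above is an ellipse elongated toward the face orientation, scaled by per-person factors. The front/side extents and the age/gender scale factors below are illustrative assumptions, not the parameterization used in the paper.

```python
import math

def ps_radius(theta, front=1.2, side=0.6, age_scale=1.0, gender_scale=1.0):
    # Personal-space radius in direction theta (angle from the facing direction),
    # using the polar form of an ellipse with semi-axes `front` and `side`.
    r = (front * side) / math.sqrt((side * math.cos(theta)) ** 2 +
                                   (front * math.sin(theta)) ** 2)
    return r * age_scale * gender_scale

def inside_ps(distance, theta, **kw):
    # True when another person at (distance, theta) intrudes into the PS
    return distance < ps_radius(theta, **kw)
```

At theta = 0 (directly in front) the radius equals the `front` extent, and at theta = pi/2 it shrinks to the `side` extent, matching the intuition that personal space reaches farther in the direction a person is facing.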

Efficient Emotional Relaxation Framework with Anisotropic Features Based Dijkstra Algorithm

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.79-86 / 2020
  • In this paper, we propose an efficient emotional relaxation framework using a Dijkstra algorithm based on anisotropic features. Emotional relaxation is as important as emotion analysis: it is a framework that can automatically alleviate a person's depression or loneliness, which is very important for HCI (Human-Computer Interaction). In this paper, 1) emotion values are calculated from facial expressions using Microsoft's Emotion API; 2) using differences in these emotion values, abnormal feelings such as depression or loneliness are recognized; and 3) an emotional-mesh matching process that considers the emotion histogram and anisotropic characteristics is proposed, which suggests emotional relaxation to the user. The proposed system can easily recognize changes of emotion from face images and learn a person's emotions through emotional relaxation.
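The Dijkstra component of such a framework finds the cheapest path through a weighted graph. Below is a standard Dijkstra implementation applied to a hypothetical emotion-state graph; the states and "emotional distance" weights are invented for illustration and are not the paper's anisotropic features.

```python
import heapq

def dijkstra(graph, start):
    # Standard Dijkstra shortest-path; graph maps node -> {neighbor: weight}
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical emotion-state graph with illustrative "emotional distances"
graph = {
    "depressed": {"neutral": 2.0, "lonely": 1.0},
    "lonely": {"neutral": 1.5},
    "neutral": {"content": 1.0},
    "content": {},
}
dist = dijkstra(graph, "depressed")
```

The cheapest route from "depressed" to "content" would then suggest which intermediate emotional targets to relax toward.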

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction (멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계)

  • Im, Mi-Jeong;Park, Beom
    • Journal of the Ergonomics Society of Korea / v.25 no.2 / pp.135-146 / 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces constitute the multimodal interaction processes that occur, consciously or unconsciously, while a human communicates with a computer, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design for multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when interacting with multimodal interfaces.
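A minimal sketch of timestamp-based input synchronization: two parallel modality streams are paired when their events fall within a synchronization window. The window size, the event names, and the greedy nearest-first pairing are simplifying assumptions; the paper's synchronization rules depend on the chosen granularity and are richer than this.

```python
def synchronize(speech_events, gesture_events, window=0.5):
    """Pair speech and gesture events whose timestamps differ by at most
    `window` seconds; each gesture event is consumed at most once."""
    pairs = []
    used = set()
    for ts, s in speech_events:
        for i, (tg, g) in enumerate(gesture_events):
            if i not in used and abs(ts - tg) <= window:
                pairs.append((s, g))
                used.add(i)
                break
    return pairs

# Hypothetical parallel inputs: "open" spoken while pointing at a window
speech = [(0.1, "open"), (2.0, "close")]
gesture = [(0.3, "point_window"), (5.0, "wave")]
pairs = synchronize(speech, gesture)
```

Here only the first speech event pairs with a gesture; "close" at t=2.0 has no gesture within the window and would be interpreted as a unimodal command.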

A Cyber-Physical Information System for Smart Buildings with Collaborative Information Fusion

  • Liu, Qing;Li, Lanlan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1516-1539 / 2022
  • This article presents a cyber-physical information-fusion IoT system that we designed for smart buildings. In essence, it is a computer system that couples physical quantities in buildings with quantitative analysis and control. On the Internet-of-Things side, the mechanism is governed by a monitoring system based on sensor networks and computer-based algorithms. Based on an agent design approach, we realize both human-machine interaction (HMI) and machine-machine interaction (MMI): HMI through direct user interaction, and MMI through embedded computing, sensors, controllers, actuators, and a wireless communication network. This article mainly focuses on the role of the wireless sensor network and MMI in environmental monitoring, a function fundamental to building security, environmental control, HVAC, and other smart-building control systems. The article not only discusses various network applications and their agent-based implementation but also demonstrates our collaborative information fusion strategy. When the sensors' physical measurements are unstable, this strategy keeps the system's inputs stable through collaborative information fusion, thereby preventing the jitter and unstable responses caused by uncertain disturbances and environmental factors. The article also gives system test results, which show that through the CPS interaction of HMI and MMI, the smart-building IoT system can achieve comprehensive monitoring, providing support and extensibility for advanced automation management.
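A simple form of collaborative information fusion over redundant sensors is inverse-variance weighting: noisier sensors contribute less, which damps the jitter a single unstable sensor would otherwise inject. The sensor values and variances below are invented for illustration and are not from the paper's system tests.

```python
def fuse(readings, variances):
    # Inverse-variance weighted fusion of redundant sensor readings
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total

# Three temperature sensors; the middle one is noisy (illustrative values)
fused = fuse([21.0, 25.0, 21.2], [1.0, 4.0, 1.0])
```

The noisy 25.0 reading is down-weighted by its larger variance, so the fused estimate stays close to the two agreeing sensors instead of jumping with the outlier.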