• Title/Summary/Keyword: human-computer interaction

Search results: 620

PERSONAL SPACE-BASED MODELING OF RELATIONSHIPS BETWEEN PEOPLE FOR NEW HUMAN-COMPUTER INTERACTION

  • Amaoka, Toshitaka;Laga, Hamid;Saito, Suguru;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.746-750 / 2009
  • In this paper we focus on Personal Space (PS), a nonverbal communication concept, to build a new form of Human-Computer Interaction. Analyzing people's positions with respect to their PS gives an idea of the nature of their relationship. We propose to analyze and model the PS using Computer Vision (CV) and to visualize it using Computer Graphics. For this purpose, we define the PS based on four parameters: the distance between people, their face orientations, age, and gender. We estimate the first two parameters automatically from image sequences using CV techniques, while the other two are set manually. Finally, we calculate the two-dimensional relationships of multiple persons and visualize them as 3D contours in real time. Our method can sense and visualize invisible and unconscious PS distributions and convey the spatial relationships of users through an intuitive visual representation. The results of this paper can be applied to Human-Computer Interaction in public spaces.
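The abstract does not give the paper's exact PS formulation; as a minimal sketch of the idea, here is a toy score of how strongly two people's personal spaces overlap, driven by the two automatically estimated parameters (inter-person distance and face orientation). The function name, the cosine-based reach model, and the `base_radius` parameter are illustrative assumptions, not the authors' model; in the paper, PS is additionally shaped by the manually set age and gender parameters.

```python
import math

def ps_overlap(distance, facing_angle_a, facing_angle_b, base_radius=1.2):
    """Toy personal-space overlap score in [0, 1].

    distance       : metres between the two people
    facing_angle_* : each person's face orientation in radians, relative
                     to the line joining them (0 = looking at the other)
    base_radius    : nominal PS radius (assumed value).
    """
    # Assume PS extends further in the facing direction: a person looking
    # at the other "projects" more of their space toward them.
    reach_a = base_radius * (1.0 + math.cos(facing_angle_a)) / 2.0
    reach_b = base_radius * (1.0 + math.cos(facing_angle_b)) / 2.0
    overlap = reach_a + reach_b - distance
    # Normalise: 0 when the spaces do not touch, approaching 1 at contact.
    return max(0.0, min(1.0, overlap / (reach_a + reach_b + 1e-9)))

# Two people 1 m apart, facing each other -> strong interaction
print(ps_overlap(1.0, 0.0, 0.0))
# Same distance, both looking away -> spaces do not overlap
print(ps_overlap(1.0, math.pi, math.pi))
```

A per-pair score like this could then be rendered as the 3D contours the abstract describes.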


Efficient Emotional Relaxation Framework with Anisotropic Features Based Dijkstra Algorithm

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.79-86 / 2020
  • In this paper, we propose an efficient emotional relaxation framework using a Dijkstra algorithm based on anisotropic features. Emotional relaxation is as important as emotion analysis: it is a framework that can automatically alleviate a person's depression or loneliness, which is very important for HCI (Human-Computer Interaction). In this paper, 1) the emotion value derived from facial expressions is calculated using Microsoft's Emotion API; 2) differences in these emotion values are used to recognize abnormal feelings such as depression or loneliness; and 3) an emotional-mesh matching process that considers the emotion histogram and anisotropic characteristics is proposed, which suggests emotional relaxation to the user. Overall, we propose a system that can easily recognize changes of emotion from face images and train on a person's emotions through emotion relaxation.
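The abstract names Dijkstra's algorithm as the core of the matching step but gives no implementation detail. For orientation, here is a standard Dijkstra shortest-path sketch over a weighted graph; in the paper's setting the edge weights would be the anisotropic emotional distances on the mesh, while here they are plain numbers chosen for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`.

    graph : dict mapping node -> list of (neighbour, weight) pairs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny example graph: c is reached more cheaply via b (1.0 + 2.0 = 3.0)
g = {"a": [("b", 1.0), ("c", 4.0)],
     "b": [("c", 2.0)],
     "c": []}
print(dijkstra(g, "a"))
```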

A Cyber-Physical Information System for Smart Buildings with Collaborative Information Fusion

  • Liu, Qing;Li, Lanlan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1516-1539 / 2022
  • This article presents a physical-information-fusion IoT system that we designed for smart buildings. In essence, it is a computer system that combines physical quantities in buildings with quantitative analysis and control. On the Internet of Things side, it is controlled by a monitoring system based on sensor networks and computer-based algorithms. Following an agent-based design, we have realized both human-machine interaction (HMI) and machine-machine interaction (MMI): HMI is realized through human-machine interfaces, while MMI is realized through embedded computing, sensors, controllers, actuation devices, and a wireless communication network. This article mainly focuses on the role of wireless sensor networks and MMI in environmental monitoring, a function that is fundamental to building security, environmental control, HVAC, and other smart-building control systems. The article not only discusses various network applications and their agent-based implementation but also presents our collaborative information fusion strategy. When the sensor system's physical measurements are unstable, this strategy provides a stable input to the system through collaborative information fusion, thereby preventing system jitter and unstable responses caused by uncertain disturbances and environmental factors. The article also reports the results of a system test, which show that through the CPS interaction of HMI and MMI, the smart-building IoT system can achieve comprehensive monitoring, providing support and extensibility for advanced automated management.

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.9-20 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and the user context to integrate the interpretation results from the inputs into a single one. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would be performed in the real world.
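The abstract does not show what "using the spatial ontology to resolve ambiguity" looks like concretely. As a deliberately tiny sketch (the object names, relation triples, and `resolve_referent` helper are all hypothetical, and the paper's ontology is far richer), a command such as "move the object on the table" could be disambiguated by filtering candidate objects against stored spatial relations:

```python
# Hypothetical spatial-relation triples: (object, relation, landmark).
spatial_ontology = {
    ("cup", "on", "table"): True,
    ("book", "on", "shelf"): True,
    ("lamp", "near", "sofa"): True,
}

def resolve_referent(candidates, relation, landmark):
    """Keep only the candidates that satisfy the spatial relation
    stated in the user's command (e.g. 'the object ON the table')."""
    return [obj for obj in candidates
            if spatial_ontology.get((obj, relation, landmark), False)]

# Several objects are plausible from gesture alone; the spoken
# relation "on the table" narrows them down to one referent.
print(resolve_referent(["cup", "book", "lamp"], "on", "table"))
```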


Human-Object Interaction Framework Using RGB-D Camera (RGB-D 카메라를 사용한 사용자-실사물 상호작용 프레임워크)

  • Baeka, Yong-Hwan;Lim, Changmin;Park, Jong-Il
    • Journal of Broadcast Engineering / v.21 no.1 / pp.11-23 / 2016
  • Today, the touch interface is the most widely used interface for communicating with digital devices. Because of its usability, touch technology is applied almost everywhere, from watches to advertising boards, and its use continues to grow. However, this technology has a critical weakness: a touch input device normally needs a contact surface with touch sensors embedded in it, so touch interaction through ordinary objects such as books or documents is still unavailable. In this paper, a human-object interaction framework based on an RGB-D camera is proposed to overcome this limitation. The proposed framework can deal with occluded situations, such as a hand hovering on top of an object, as well as objects being moved by hand. In such situations, object recognition and hand gesture recognition algorithms may fail; our framework, however, handles these complicated circumstances without performance loss. The framework determines the status of each region with a fast and robust object recognition algorithm that distinguishes objects from the human hand, and the hand gesture recognition algorithm then controls the context of each object by gestures, almost simultaneously.

Laser pointer detection using neural network for human computer interaction (인간-컴퓨터 상호작용을 위한 신경망 알고리즘기반 레이저포인터 검출)

  • Jung, Chan-Woong;Jeong, Sung-Moon;Lee, Min-Ho
    • Journal of Korea Society of Industrial Information Systems / v.16 no.1 / pp.21-30 / 2011
  • In this paper, we propose an effective method of detecting a laser pointer on a screen using a neural network algorithm, for implementing a human-computer interaction system. The neural network is trained on patches without a laser pointer taken from the input camera images; the trained network then generates output values for each input patch of a camera image. When a small variation appears in the input image, the system amplifies it and detects the laser pointer spot. The proposed system consists of a laser pointer, a low-cost web camera, and an image processing program, and it can detect the laser spot even when the background on the computer monitor has a color similar to that of the spot. The proposed technique therefore contributes to improving the performance of human-computer interaction systems.
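The abstract's core idea — learn the appearance of pointer-free patches, then amplify and threshold deviations from it — can be sketched without the neural network itself. The mean-brightness "background" model below is a stand-in for the paper's trained network, and the `gain` and `threshold` values are assumed for illustration:

```python
def train_background(frames):
    """Mean brightness per patch position over pointer-free frames.
    frames: list of frames, each a list of patch brightness values (0-1)."""
    n = len(frames)
    size = len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(size)]

def detect_spot(frame, background, gain=4.0, threshold=0.5):
    """Amplify each patch's deviation from the trained background and
    return the index of the strongest patch, or None if no spot."""
    scores = [gain * (p - b) for p, b in zip(frame, background)]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best if scores[best] > threshold else None

bg_frames = [[0.20, 0.21, 0.19], [0.22, 0.20, 0.21]]
background = train_background(bg_frames)
print(detect_spot([0.21, 0.20, 0.45], background))  # bright spot in patch 2
print(detect_spot([0.21, 0.20, 0.20], background))  # no pointer present
```

The amplification step is what lets a weak spot, e.g. one whose color is close to the monitor background, cross the detection threshold.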

Comparative Study on the Educational Use of Home Robots for Children

  • Han, Jeong-Hye;Jo, Mi-Heon;Jones, Vicki;Jo, Jun-H.
    • Journal of Information Processing Systems / v.4 no.4 / pp.159-168 / 2008
  • Human-Robot Interaction (HRI), built on the already well-researched field of Human-Computer Interaction (HCI), has been under vigorous scrutiny since recent developments in robot technology. Robots may be more successful than traditional media in establishing common ground in project-based education or foreign language learning for children. Backed by its strong IT environment and advances in robot technology, Korea has developed the world's first available e-Learning home robot. This has demonstrated the potential for robots to be used as a new educational medium: robot learning, referred to as 'r-Learning'. Robot technology is expected to become more interactive and user-friendly than computers, and robots can exhibit various forms of communication such as gestures, motions, and facial expressions. This study compared the effects of non-computer-based (NCB) media (a book with an audiotape) and Web-Based Instruction (WBI) with the effects of Home Robot-Assisted Learning (HRL) for children. The robot gestured and spoke in English, and children could touch its monitor if it did not recognize their voice commands. Compared with the other learning programs, HRL was superior in promoting and improving children's concentration, interest, and academic achievement. In addition, the children felt that the home robot was friendlier than the other types of instructional media. The HRL group had longer concentration spans than the other groups, and the p-value demonstrated a significant difference in concentration among the groups. In regard to interest in learning, the HRL group showed the highest level, followed by the NCB group and then the WBI group. Academic achievement was likewise highest in the HRL group, followed by the WBI group and the NCB group, and a significant difference was also found among the groups.
These results suggest that home robots are more effective than other types of instructional media (such as books with audiotape and WBI) as regards children's learning concentration, learning interest, and academic achievement in English as a foreign language.

A Study on E-Learning System of Korean Traditional Dance for Transmission and Dissemination (한국 전통춤의 전승 및 보급을 위한 이러닝 시스템에 관한 연구)

  • Lee, Jongwook;Lee, Ji-Hyun
    • Journal of the HCI Society of Korea / v.12 no.3 / pp.5-11 / 2017
  • Korean traditional dance has cultural value and is part of humanity's cultural heritage, but it is endangered by a lack of bearers and of public interest. E-Learning of traditional dance, using network technology and digital media, can be one solution to this extinction problem. The aim of this study is to propose E-Learning courses and systems for learning traditional dance. The E-Learning systems were evaluated in accordance with HCI (Human-Computer Interaction) user evaluation. This study contributes to overcoming distance constraints by offering a synchronous E-Learning system for traditional dance, as intangible cultural heritage, through a new media experience.

Real-time Interactive Particle-art with Human Motion Based on Computer Vision Techniques (컴퓨터 비전 기술을 활용한 관객의 움직임과 상호작용이 가능한 실시간 파티클 아트)

  • Jo, Ik Hyun;Park, Geo Tae;Jung, Soon Ki
    • Journal of Korea Multimedia Society / v.21 no.1 / pp.51-60 / 2018
  • We present a real-time interactive particle art system that responds to human motion using computer vision techniques. We use computer vision to reduce the amount of equipment required for media art appreciation, and we analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction is applied to detect the audience, and the audience image is converted into particles on a grid of cells. Optical flow is used to detect the audience's motion and create particle effects, and we also define a virtual button for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be easily configured with a single camera sensor.
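Of the modules the abstract lists, background subtraction is the simplest to illustrate. Below is a minimal per-pixel differencing sketch, assuming grayscale frames represented as nested lists; a real system would use a camera feed and a library routine (e.g. OpenCV) plus an adaptive background model, and the `threshold` value here is an assumption:

```python
def subtract_background(frame, background, threshold=30):
    """Binary foreground mask by per-pixel background differencing.
    frame, background: 2D lists of grayscale values (0-255).
    A 1 marks a pixel that differs enough from the stored background."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],   # audience pixel at (0, 1)
              [10, 10, 210]]   # and at (1, 2)
mask = subtract_background(frame, background)
print(mask)
```

The foreground cells of such a mask are what the system would turn into particles, with optical flow between consecutive frames then driving their motion.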

Survey: Tabletop Display Techniques for Multi-Touch Recognition (멀티터치를 위한 테이블-탑 디스플레이 기술 동향)

  • Kim, Song-Gook;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.84-91 / 2007
  • Recently, vision-based research on user attention and action awareness has been actively pursued for human-computer interaction. Among this work, various applications of tabletop display systems are being developed in line with advances in touch sensing techniques and co-located collaborative work. Although earlier systems supported only one user, current systems support multiple users, so the collaborative work and four-element interaction (human, computer, displayed objects, physical objects) that are the ultimate goal of tabletop displays are now realizable. Tabletop display systems are generally designed around four key aspects: 1) multi-touch interaction using bare hands; 2) support for collaborative work and simultaneous user interaction; 3) direct touch interaction; and 4) the use of physical objects as interaction tools. In this paper, we present a critical analysis of the state of the art in advanced multi-touch sensing techniques for tabletop display systems, organized into four approaches: vision-based methods, non-vision-based methods, top-down projection systems, and rear projection systems. We also discuss open problems and practical applications in this research field.