• Title/Summary/Keyword: Human-Computer Interface (휴먼 컴퓨터 인터페이스)

Digital Business Card System based on Augmented Reality (증강현실을 기반으로 한 디지털 명함 시스템)

  • Park, Man-Seub;Kim, Chang-Su;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.3
    • /
    • pp.562-568
    • /
    • 2014
  • With the development of computer technology, augmented reality (AR) is emerging as one of the main future directions of human interface technology. This paper presents the design and implementation of a digital business card system based on augmented reality, in which a smartphone recognizes a digital business card and retrieves the information it contains. Compared with existing hardware-based approaches, the vision-based digital business card system achieves higher precision. The paper also studies 3D registration, the computer-vision technique of augmented reality that fuses virtual content with real-world scenes. Future research is needed on the user-interface side, applying the system to 3D electronic map applications for smartphones.
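
The abstract does not detail the recognition method, so the following is only an illustrative sketch of vision-based card recognition: a fiducial marker printed on the card is detected and its pose estimated with OpenCV's ArUco module (assuming opencv-contrib-python 4.7 or later; the marker dictionary, camera parameters, and marker size are hypothetical).

```python
# Illustrative sketch only: detect a marker printed on a business card and
# estimate its pose so contact information could be overlaid in AR.
# Assumes opencv-contrib-python >= 4.7 and numpy; not the paper's implementation.
import cv2
import numpy as np

def detect_card_marker(frame_bgr, camera_matrix, dist_coeffs, marker_len_m=0.05):
    """Return (marker_id, rvec, tvec) for the first detected marker, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    # 3D registration step: solve the marker pose from its four image corners.
    half = marker_len_m / 2.0
    obj_pts = np.array([[-half, half, 0], [half, half, 0],
                        [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, dist_coeffs)
    return (int(ids[0][0]), rvec, tvec) if ok else None
```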

Design and Implementation of a Stereoscopic Image Control System based on User Hand Gesture Recognition (사용자 손 제스처 인식 기반 입체 영상 제어 시스템 설계 및 구현)

  • Song, Bok Deuk;Lee, Seung-Hwan;Choi, HongKyw;Kim, Sung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.3
    • /
    • pp.396-402
    • /
    • 2022
  • User interactions are being developed in various forms, and interactions based on human gestures in particular are being actively studied. Among them, hand gesture recognition based on a 3D hand model is used as a human interface in the field of realistic media. Interfaces based on hand gesture recognition help users access media more easily and conveniently. User interaction using hand gesture recognition should allow images to be viewed by applying fast and accurate hand gesture recognition without restrictions imposed by the computing environment. This paper develops a fast and accurate user hand gesture recognition algorithm using the open-source MediaPipe framework and the k-NN (k-Nearest Neighbor) machine-learning algorithm. In addition, to minimize restrictions from the computing environment, a stereoscopic image control system based on user hand gesture recognition was designed and implemented using a web service environment capable of Internet service and a Docker container as a virtualized environment.
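
As a rough illustration of the described pipeline (not the authors' code), the sketch below extracts 21 hand landmarks with the MediaPipe Hands solution and classifies the flattened landmark vector with scikit-learn's k-NN; the training files and gesture labels are placeholders.

```python
# Minimal sketch: MediaPipe hand landmarks + k-NN gesture classification.
# Assumes the mediapipe, opencv-python and scikit-learn packages;
# the .npy training files and gesture labels are hypothetical placeholders.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_vector(image_bgr):
    """Return a 63-dim vector (21 landmarks x (x, y, z)), or None if no hand."""
    result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

# Hypothetical training set: landmark vectors labelled with gestures such as
# "play", "pause" or "rotate" used to control the stereoscopic image.
X_train = np.load("gesture_features.npy")   # shape (n_samples, 63)
y_train = np.load("gesture_labels.npy")     # shape (n_samples,)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def classify(image_bgr):
    vec = landmark_vector(image_bgr)
    return None if vec is None else knn.predict(vec.reshape(1, -1))[0]
```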

A Novel EMG-based Human-Computer Interface for Electric-Powered Wheelchair Users with Motor Disabilities (거동장애를 가진 전동휠체어 사용자를 위한 근전도 기반의 휴먼-컴퓨터 인터페이스)

  • Lee Myung-Joon;Chu Jun-Uk;Ryu Je-Cheong;Mun Mu-Seong;Moon Inhyuk
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.1
    • /
    • pp.41-49
    • /
    • 2005
  • The electromyogram (EMG) signal generated by voluntary contraction of muscles is often used in rehabilitation devices because of its distinct output characteristics compared to other bio-signals. This paper proposes a novel EMG-based human-computer interface for electric-powered wheelchair users with motor disabilities due to C4 or C5 spinal cord injury. The user's commands to control the electric-powered wheelchair are represented by shoulder elevation motions, which are recognized by comparing EMG signals acquired from the levator scapulae muscles with a preset double threshold. The interface commands for controlling the electric-powered wheelchair consist of combinations of left-, right-, and both-shoulder elevation motions. To achieve a real-time interface, we implemented EMG processing hardware composed of analog amplifiers, filters, a mean absolute value circuit, and a high-speed microprocessor. Experimental results using the implemented real-time hardware and an electric-powered wheelchair show that the EMG-based human-computer interface is feasible for users with severe motor disabilities.
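
The paper's double-threshold rule is not spelled out in the abstract; the sketch below shows one plausible reading, in which the mean absolute value (MAV) of each shoulder's EMG channel is compared against onset/offset thresholds with hysteresis and the left/right/both elevation states are mapped to wheelchair commands (threshold values and command names are hypothetical).

```python
# Illustrative double-threshold EMG command detector (not the authors' firmware).
# MAV is computed per channel over a sliding window; the two thresholds add
# hysteresis so the detected state does not chatter around a single level.
import numpy as np

ON_THRESH, OFF_THRESH = 0.25, 0.10   # hypothetical normalized MAV thresholds

class ShoulderChannel:
    """Tracks whether one shoulder (levator scapulae channel) is elevated."""
    def __init__(self):
        self.active = False

    def update(self, emg_window):
        mav = float(np.mean(np.abs(emg_window)))     # mean absolute value
        if not self.active and mav > ON_THRESH:      # onset threshold crossed
            self.active = True
        elif self.active and mav < OFF_THRESH:       # offset threshold crossed
            self.active = False
        return self.active

def wheelchair_command(left_active, right_active):
    """Map shoulder-elevation states to hypothetical wheelchair commands."""
    if left_active and right_active:
        return "toggle forward/stop"
    if left_active:
        return "turn left"
    if right_active:
        return "turn right"
    return "no command"
```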

A Scientific Study on the Structure and Compositional Principles of Hangul Characters: The Scientific and Engineering Qualities of the Hangul Writing System (한글문자의 구조와 구성원리에 대한 과학적 고찰-한글표기시스팀의 과학성과 공학성)

  • Jeong, Hui-Seong
    • ETRI Journal
    • /
    • v.10 no.4
    • /
    • pp.99-117
    • /
    • 1988
  • This paper has two main purposes. One is a scientific theory of the Hangul writing system: a mathematical model that formulates the logical characteristics involved in the compositionality principle of Hun Min Jung Um [6], based on traditional scientific methodology and theory. The other is an engineering model of the Hangul writing system; the model is shown to be more theoretical and practical than the conventional models called Hangul automata. The paper also discusses the reasons why Koreans take great national pride in Hangul [3]. Consequently, we suggest that the Hangul writing system has the potential to serve as an ultimate human interface connecting humans, computers, and information.
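
To make the compositionality principle concrete, the sketch below composes Hangul syllable blocks from initial, medial, and final jamo using the standard Unicode formula; the arithmetic is the modern encoding convention, not the paper's 1988 model.

```python
# Sketch of Hangul syllable composition via the Unicode formula:
#   syllable = 0xAC00 + (initial_index * 21 + medial_index) * 28 + final_index
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"                     # 19 initial consonants
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"              # 21 medial vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # none + 27 finals

def compose(initial, medial, final=""):
    """Compose one Hangul syllable block from its jamo letters."""
    code = (0xAC00
            + (CHOSEONG.index(initial) * 21 + JUNGSEONG.index(medial)) * 28
            + JONGSEONG.index(final))
    return chr(code)

print(compose("ㅎ", "ㅏ", "ㄴ") + compose("ㄱ", "ㅡ", "ㄹ"))  # -> 한글
```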

Face Detection in Color images (컬러이미지에서의 얼굴검출)

  • 박동희;박호식;남기환;한준희;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.236-238
    • /
    • 2003
  • Human face detection is often the first step in applications such as video surveillance, human-computer interfaces, face recognition, and image database management. We have constructed a simple and fast system to detect frontal human faces in complex environments and under different illumination. This paper presents a fast segmentation method that combines neighboring pixels with similar hue. The algorithm constructs eye, mouth, and boundary maps to verify each face candidate. We test the system on images of complex environments containing confusing objects. The experiments show robust detection results with few falsely detected faces.
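
The hue-based segmentation step can be illustrated with a simple skin mask in OpenCV; the HSV bounds below are rough, commonly quoted heuristics rather than the authors' values.

```python
# Illustrative hue-based skin segmentation to generate face candidate regions.
# The HSV bounds are heuristic (not the paper's values) and are sensitive to
# illumination, which is why candidates are later verified with eye/mouth maps.
import cv2

def skin_candidates(image_bgr, min_area=500):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    # Merge neighboring skin-colored pixels into solid blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Every sufficiently large blob becomes a face candidate for verification.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```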

TMCS : Tangible Media Control System (감각형 미디어 제어 시스템)

  • 오세진;장세이;우운택
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.10
    • /
    • pp.1356-1363
    • /
    • 2004
  • We propose the Tangible Media Control System (TMCS), which allows users to manipulate media contents with physical objects in an intuitive way. Currently, most people access digital media contents through a GUI, which provides only limited manipulation of the media contents. Instead of a mouse and keyboard, the proposed system adopts two types of tangible objects, i.e., RFID-enabled objects and tracker-embedded objects. The TMCS enables users to easily access and control digital media contents with the tangible objects. In addition, it supports an interactive media controller with which users can synthesize media contents and generate new media contents according to their taste. It also offers personalized contents suited to users' preferences by exploiting context such as the user's profile and situational information. Therefore, the proposed system can be applied to various interactive applications such as multimedia education, entertainment, and multimedia editors.
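
As a toy illustration of how tangible objects could be bound to media operations (the tag IDs, actions, and player interface below are hypothetical, not the TMCS implementation), consider the following dispatch table.

```python
# Hypothetical sketch: dispatch media-control actions from RFID-enabled objects.
# Tag IDs, actions and the player interface are placeholders, not TMCS itself.
class MediaPlayer:
    """Placeholder media back end."""
    def play(self, clip):  print(f"playing {clip}")
    def pause(self):       print("paused")
    def next_track(self):  print("next track")

def build_dispatch(player):
    # Each physical object, identified by its RFID tag, maps to one media action.
    return {
        "04:A2:19:7F": lambda: player.play("lecture_intro.mp4"),
        "04:B7:33:1C": player.pause,
        "04:C1:08:55": player.next_track,
    }

def on_tag_read(tag_id, dispatch):
    action = dispatch.get(tag_id)
    if action is not None:
        action()

player = MediaPlayer()
on_tag_read("04:A2:19:7F", build_dispatch(player))   # -> playing lecture_intro.mp4
```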

Design of an Infrared Multi-touch Screen Controller using Stereo Vision (스테레오 비전을 이용한 저전력 적외선 멀티 터치스크린 컨트롤러의 설계)

  • Jung, Sung-Wan;Kwon, Oh-Jun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.47 no.2
    • /
    • pp.68-76
    • /
    • 2010
  • Touch-enabled technology is increasingly being accepted as a main communication interface between humans and computers. However, conventional touchscreen technologies, such as resistive overlay, capacitive overlay, and SAW (Surface Acoustic Wave), are not cost-effective for large screens. As an alternative to the conventional methods, we introduce a newly emerging method, the optical imaging touchscreen, which is much simpler and more cost-effective. Despite its attractive benefits, the optical imaging touchscreen has to overcome problems such as heavy computational complexity, intermittent ghost points, and over-sensitivity before it can be used commercially. Therefore, we designed a hardware controller for signal processing and multi-coordinate computation, and proposed infrared-blocked DA (Dark Area) manipulation as a solution. While the entire optical touch control takes 34 ms on a 32-bit microprocessor, the designed hardware controller can manage two valid coordinates at 200 fps and also reduces the energy consumption of the infrared diodes from 1.8 Wh to 0.0072 Wh.
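
The multi-coordinate computation can be pictured as plain triangulation: each corner sensor reports the angle at which the infrared shadow of a touch is seen, and the two rays are intersected. The sketch below shows the geometry only (screen width and sensor placement are assumptions, not the designed controller logic).

```python
# Geometry-only sketch of optical touch triangulation (not the paper's hardware).
# Two sensors sit at the top-left (0, 0) and top-right (W, 0) screen corners and
# each reports the angle, measured from the top edge, of the detected touch.
import math

def triangulate(theta_left, theta_right, screen_width):
    """Intersect the two rays; angles in radians, measured from the top edge."""
    t_left, t_right = math.tan(theta_left), math.tan(theta_right)
    x = screen_width * t_right / (t_left + t_right)
    y = x * t_left
    return x, y

# Example: a touch seen at 45 deg from the left corner and 30 deg from the
# right corner of a screen 1.0 m wide.
print(triangulate(math.radians(45), math.radians(30), 1.0))  # ~(0.366, 0.366)
```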

Development of Virtual Science Experience Space(VSES) using Haptic Device (역감 제시 장치를 이용한 가상 과학 체험 공간 개발)

  • 김호정;류제하
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.11
    • /
    • pp.1044-1053
    • /
    • 2003
  • A virtual science experience space (VSES) using virtual reality technology, including a haptic device, is proposed to overcome the limits of existing science education and to improve its effectiveness. Four example scientific worlds, namely Micro World, Friction World, Electromechanical World, and Macro World, are demonstrated by the developed VSES. Van der Waals forces in Micro World, stick-slip friction in Friction World, the principles of the induction motor and power generator in Electromechanical World, and the Coriolis acceleration caused by relative motion in a rotating coordinate frame in Macro World are modeled mathematically based on physical principles. Emulation methods for the haptic interface are suggested. The proposed VSES consists of a haptic device, an HMD or Crystal Eyes, and a digital computer with stereoscopic graphics and a GUI. The proposed system is believed to increase realism and immersion for users.
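
For the rotating-frame example, the rendered Coriolis force follows the standard formula F = -2 m (ω × v); the sketch below shows only this vector computation with assumed mass, angular velocity, and probe velocity, not the authors' haptic rendering code.

```python
# Coriolis force felt in a rotating frame: F = -2 * m * (omega x v).
# Values are illustrative; a haptic device would output this force every servo tick.
import numpy as np

def coriolis_force(mass_kg, omega_rad_s, velocity_m_s):
    """Force (N) on a mass moving with `velocity` in a frame rotating at `omega`."""
    return -2.0 * mass_kg * np.cross(omega_rad_s, velocity_m_s)

omega = np.array([0.0, 0.0, 1.0])      # rotation about the z-axis at 1 rad/s
v = np.array([0.5, 0.0, 0.0])          # probe moving along x at 0.5 m/s
print(coriolis_force(0.2, omega, v))   # -> [ 0.  -0.2  0. ], deflection along -y
```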

Face Emotion Recognition by Fusion Model based on Static and Dynamic Image (정지영상과 동영상의 융합모델에 의한 얼굴 감정인식)

  • Lee Dae-Jong;Lee Kyong-Ah;Go Hyoun-Joo;Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.5
    • /
    • pp.573-580
    • /
    • 2005
  • In this paper, we propose an emotion recognition method using static and dynamic facial images to effectively design a human interface. The proposed method is constructed from an HMM (Hidden Markov Model), PCA (Principal Component Analysis), and the wavelet transform. The facial database consists of six basic human emotions, namely happiness, sadness, anger, surprise, fear, and dislike, which are known to be common emotions regardless of nation and culture. Emotion recognition in static images is performed using the discrete wavelet transform, with feature vectors extracted by PCA. Emotion recognition in dynamic image sequences is performed using the wavelet transform and PCA, and the resulting features are modeled by the HMM. Finally, we obtained better performance by merging the recognition results for the static and dynamic images.
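
A rough sketch of the dynamic-image branch is given below: PCA features are extracted per frame, one HMM is trained per emotion, and a sequence is classified by the highest log-likelihood. It assumes scikit-learn and the hmmlearn package; the data layout is a placeholder, and the static/dynamic fusion step is omitted.

```python
# Illustrative PCA + HMM emotion classifier for facial image sequences
# (not the authors' code). Assumes scikit-learn and hmmlearn; data is a placeholder.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def fit_models(sequences_by_emotion, n_pca=20, n_states=4):
    """sequences_by_emotion: {emotion: [array of shape (n_frames, n_pixels), ...]}."""
    all_frames = np.vstack([s for seqs in sequences_by_emotion.values() for s in seqs])
    pca = PCA(n_components=n_pca).fit(all_frames)
    models = {}
    for emotion, seqs in sequences_by_emotion.items():
        X = np.vstack([pca.transform(s) for s in seqs])
        lengths = [len(s) for s in seqs]
        # One HMM per emotion models the temporal dynamics of its PCA features.
        models[emotion] = hmm.GaussianHMM(n_components=n_states).fit(X, lengths)
    return pca, models

def classify(sequence, pca, models):
    """Return the emotion whose HMM assigns the highest log-likelihood."""
    feats = pca.transform(sequence)
    return max(models, key=lambda e: models[e].score(feats))
```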

Real Time Gaze Discrimination for Human Computer Interaction (휴먼 컴퓨터 인터페이스를 위한 실시간 시선 식별)

  • Park Ho sik;Bae Cheol soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.3C
    • /
    • pp.125-132
    • /
    • 2005
  • This paper describes a computer vision system based on active IR illumination for real-time gaze discrimination. Unlike most existing gaze discrimination techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze discrimination system can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. To further improve the gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified, which leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments involving a gaze-contingent interactive graphic display.
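
A generalized regression neural network is essentially Nadaraya-Watson kernel regression; the sketch below maps pupil-parameter vectors to screen coordinates under that reading (the feature layout, kernel bandwidth, and calibration data are assumptions, not the authors' implementation).

```python
# Minimal GRNN (Nadaraya-Watson kernel regression) sketch that maps pupil
# parameters to screen coordinates; data, feature layout and sigma are hypothetical.
import numpy as np

class GRNN:
    def __init__(self, sigma=0.1):
        self.sigma = sigma                      # Gaussian kernel bandwidth

    def fit(self, X, Y):
        self.X = np.asarray(X, dtype=float)     # calibration pupil parameters
        self.Y = np.asarray(Y, dtype=float)     # corresponding screen coordinates
        return self

    def predict(self, x):
        d2 = np.sum((self.X - np.asarray(x, dtype=float)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return (w[:, None] * self.Y).sum(axis=0) / (w.sum() + 1e-12)

# Hypothetical calibration set: 6-dim pupil parameter vectors -> (x, y) screen points.
X_calib = np.random.rand(50, 6)
Y_calib = np.random.rand(50, 2)
gaze = GRNN(sigma=0.2).fit(X_calib, Y_calib)
print(gaze.predict(X_calib[0]))   # lands near Y_calib[0]
```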