• Title/Summary/Keyword: Gesture Control


A Study on Development of Enhancement Guidelines of Smart Application Accessibility for the Disabled (장애인을 위한 스마트 애플리케이션 접근성 향상 가이드라인 개발 연구)

  • Jun, Woochun
    • Journal of The Korean Association of Information Education / v.19 no.1 / pp.69-76 / 2015
  • In today's information society, the digital divide is a barrier that makes it difficult for people with disabilities to communicate with the world. With recent advances in smart technology, smart devices have become a necessity for the disabled, and improving the accessibility of smart applications has become a major concern. The purpose of this paper is to present guidelines for enhancing smart application accessibility for the disabled. The guidelines are developed from existing mobile accessibility guidelines and adapted to better suit disabled users. The proposed principles are as follows: minimizing blue light emission from smart devices, automatic focus on the input window, action trace, font color change, emergency notification by motion, gesture recognition, control location, scrolling avoidance, auditory service for visual warnings, and icon literation.

Infrared LED Pointer for Interactions in Collaborative Environments (협업 환경에서의 인터랙션을 위한 적외선 LED 포인터)

  • Jin, Yoon-Suk;Lee, Kyu-Hwa;Park, Jun
    • Journal of the HCI Society of Korea / v.2 no.1 / pp.57-63 / 2007
  • Our research was performed to implement a new pointing device for human-computer interaction in a collaborative environment based on a tiled-display system. We focused mainly on tracking the position of an infrared light source and on applying the system to various areas. Beyond the simple mouse functions of pointing and clicking, we developed a device that helps people communicate better with the computer. A strength of the system is that it can be deployed anywhere a camera can be installed. Because the system processes only infrared light, the computational overhead for LED recognition is very low. Furthermore, by analyzing the user's movements, various actions can be performed more conveniently. The system was tested for presentation and game control.
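The approach described in this abstract amounts to a bright-blob tracking loop: grab a frame from an IR-pass camera, threshold it, and take the centroid of the brightest region as the pointer position. The sketch below is a minimal illustration of that idea under assumed parameters (camera index, threshold, display resolution); it is not the authors' implementation.

    # Minimal IR-LED pointer tracking sketch; parameters are assumptions, not the paper's values.
    import cv2

    CAM_INDEX = 0                        # assumed: an IR-pass camera exposed as a normal capture device
    THRESHOLD = 220                      # assumed: the IR LED appears as a near-saturated blob
    SCREEN_W, SCREEN_H = 1920, 1080      # assumed display resolution

    cap = cv2.VideoCapture(CAM_INDEX)
    for _ in range(300):                 # process a bounded number of frames, then stop
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, THRESHOLD, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 0:                 # a bright blob was found
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # Map camera coordinates to display coordinates (simple linear scaling;
            # a real tiled display would need a proper calibration step).
            px = cx / gray.shape[1] * SCREEN_W
            py = cy / gray.shape[0] * SCREEN_H
            print(f"pointer at ({px:.0f}, {py:.0f})")
    cap.release()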


Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity (로봇 감정의 강도를 표현하기 위한 LED 의 색과 깜빡임 제어)

  • Kim, Min-Gyu;Lee, Hui-Sung;Park, Jeong-Woo;Jo, Su-Hun;Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2008.02a / pp.547-552 / 2008
  • Humans and robots will have a closer relationship in the future, and we can expect the interaction between them to become more intense. To take advantage of people's innate communication abilities, researchers have so far concentrated on facial expression. But for a robot to express emotional intensity, other modalities such as gesture, movement, sound, and color are also needed. This paper suggests that the intensity of an emotion can be expressed with color and blinking, so that the result can be applied to LEDs. Color and emotion are clearly related; however, previous results are difficult to implement because of the lack of quantitative data. In this paper, we determined the color and blinking period for expressing the six basic emotions (anger, sadness, disgust, surprise, happiness, fear). The mapping was implemented on an avatar, and the perceived intensities of the emotions were evaluated through a survey. We found that color and blinking helped to express the intensity of sadness, disgust, and anger. For fear, happiness, and surprise, color and blinking did not play an important role; however, they may be improved by adjusting the color or the blinking.
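Read as a control problem, the paper maps an emotion plus an intensity value to an LED color and a blinking period. The following sketch shows one plausible way to express such a mapping; the specific colors, periods, and scaling rule are illustrative assumptions, not the values determined in the paper.

    # Illustrative emotion -> (color, blink period) mapping; the values are assumptions,
    # not the ones measured in the paper.
    BASE = {
        "anger":     ((255, 0, 0),   0.3),   # (RGB color, blink period in seconds)
        "sadness":   ((0, 0, 255),   1.5),
        "disgust":   ((0, 128, 0),   1.0),
        "surprise":  ((255, 255, 0), 0.4),
        "happiness": ((255, 160, 0), 0.6),
        "fear":      ((128, 0, 128), 0.5),
    }

    def led_signal(emotion, intensity):
        """Scale brightness with intensity (0..1) and shorten the blink period
        as intensity grows, so stronger emotions blink faster and brighter."""
        (r, g, b), period = BASE[emotion]
        color = tuple(int(c * intensity) for c in (r, g, b))
        blink = period * (1.5 - 0.5 * intensity)    # assumed scaling rule
        return color, blink

    print(led_signal("sadness", 0.8))   # e.g. ((0, 0, 204), 1.65)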


Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition is commonly used to convey human gestures in HCI (Human-Computer Interaction). Recognition methods based on a 2D pose model can recognize only simple 2D poses in particular environments. A 3D pose model, which describes the skeletal structure of the human body, can recognize more complex poses than a 2D model because it can use the joint angles and shape information of body parts. In this paper, we describe the development of interactive game content using a pose recognition interface based on 3D body-joint information. The system is designed so that users can control the game content with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates, each consisting of the 3D information of 14 body joints. We implemented game content with our pose recognition system and confirmed the efficiency of the proposed system. In the future, we will improve the system so that poses can be recognized robustly in various environments.
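The core matching step described above, comparing the current 14-joint pose against predefined templates, can be sketched as a nearest-template search over joint positions. The snippet below is a simplified illustration under the assumption that poses are given as 14x3 arrays already normalized to a common scale; the distance measure and threshold are assumptions, not the authors' matching algorithm.

    # Nearest-template pose matching over 14 body joints (simplified sketch).
    import numpy as np

    N_JOINTS = 14

    def pose_distance(a, b):
        """Mean Euclidean distance between corresponding joints of two (14, 3) poses."""
        return float(np.linalg.norm(a - b, axis=1).mean())

    def recognize(current, templates, threshold=0.15):
        """Return the name of the closest template, or None if nothing is close enough.
        `templates` maps pose names to (14, 3) arrays; `threshold` is an assumed value."""
        best_name, best_dist = None, float("inf")
        for name, tpl in templates.items():
            d = pose_distance(current, tpl)
            if d < best_dist:
                best_name, best_dist = name, d
        return best_name if best_dist < threshold else None

    # Toy usage with random data standing in for real skeleton input.
    rng = np.random.default_rng(0)
    templates = {"raise_left_arm": rng.random((N_JOINTS, 3)),
                 "t_pose": rng.random((N_JOINTS, 3))}
    current = templates["t_pose"] + 0.01
    print(recognize(current, templates))   # -> "t_pose"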

The Effects of Breathing Control on Kinetic Parameters of Lower Limbs during Walking Motion in Korean Dance (한국무용 걸음체 동작 시 호흡의 사용유무가 하지의 운동역학적 변인에 미치는 영향)

  • Park, Yang-Sun;Jang, Ji-Young
    • Korean Journal of Applied Biomechanics / v.19 no.4 / pp.627-636 / 2009
  • This study aims to provide a scientific basis for the abstract beauty of dance by analyzing the effects of breath control during the walking motion of Korean dance. The objective is to determine the significance of breathing in Korean dance as it is externally expressed and technically segmented, beyond the internal beauty of the dance. The results show that the positions of the body center and the ASIS during the walking motion performed with breath control were lower than during the walking motion performed without it. In addition, at the knee and ankle joints, a narrower angle, reflecting greater flexion, appeared during the walking motion with breath control but not during the walking motion without it; this occurred during the bending phase. At the first peak, the vertical ground reaction force during the walking motion with breath control was higher than during the walking motion without it.

Comparison of the Character Movements from Key-frame and Motion Capture Animation (키 프레임과 모션캡처 애니메이션의 캐릭터 움직임 비교)

  • Yoo, Mi-Ohk;Park, Kyoung-Ju
    • The Journal of the Korea Contents Association / v.8 no.9 / pp.74-83 / 2008
  • In animated films, the movements of characters are exaggerated and comical. Traditional key-frame animation allows animators to control the exaggeration and comicality of characters at will. Recently introduced motion capture techniques, by contrast, are limited in representing comicality and exaggeration, although they conveniently capture subjects' natural appearance. This paper selects two animations, one key-frame and one motion capture, and examines the comicality and exaggeration of their characters by analyzing their movements and motion. The movements are classified into four fundamental motion elements, running, jumping, gesture, and walking, and are analyzed to compare how the two films represent them. By studying the similarities and differences between the movements in the two films, this paper discusses the advantages and disadvantages of key-frame and motion capture techniques in terms of exaggeration and comicality. The comparison shows consistent differences between the character movements produced by the two techniques.

Pre-service Teachers' Perceptions of the Importance and Performance of Effective Teaching Behaviors (효과적인 교수행동에 대한 예비교사들의 중요도와 실행도 인식)

  • Kang, Sook-Hi
    • The Journal of the Korea Contents Association / v.15 no.2 / pp.520-528 / 2015
  • This study investigates pre-service teachers' perceptions of effective teaching behaviors. Eighty-nine pre-service teachers observed each other's teaching demonstrations and rated both the importance of the teaching behaviors and their own performance level. Class Operations received the highest scores for both importance and performance, followed by Lesson Organization and Verbal Expressions. However, the participants considered Gesture and Clothing, as well as Facial Expressions and Gaze, less important. T-tests showed that the differences between the importance and performance levels were statistically significant for all five areas. The importance-performance analysis indicated that Verbal Expressions, Eye Contact, Time Control, and Emphasis on Important Contents are the factors that need to be improved for effective teaching.
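Importance-performance analysis, which the study uses to flag behaviors that need improvement, places each item in a quadrant by comparing its importance and performance scores against the respective means. The sketch below illustrates that classification; the item names echo the abstract, but the scores are made-up placeholders, not the study's data.

    # Importance-performance analysis (IPA) quadrant classification with illustrative scores.
    items = {
        # name: (importance, performance) -- scores are placeholders for illustration only
        "Verbal Expressions":   (4.6, 3.2),
        "Eye Contact":          (4.4, 3.1),
        "Time Control":         (4.3, 3.0),
        "Gesture and Clothing": (3.2, 3.6),
    }

    mean_imp = sum(i for i, _ in items.values()) / len(items)
    mean_perf = sum(p for _, p in items.values()) / len(items)

    for name, (imp, perf) in items.items():
        if imp >= mean_imp and perf < mean_perf:
            quadrant = "Concentrate here (high importance, low performance)"
        elif imp >= mean_imp:
            quadrant = "Keep up the good work"
        elif perf < mean_perf:
            quadrant = "Low priority"
        else:
            quadrant = "Possible overkill"
        print(f"{name}: {quadrant}")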

A clinical study of the power control of Nd : YAG laser for painless irradiation on intraoral soft tissues (구강내 연조직에 대한 무통적조사를 위한 Nd:YAG laser의 출력조절에 관한 임상적 연구)

  • Han, Sang-Hak;Kim, Hyun-Sub;Lim, Kee-Jung;Kim, Byung-Ock;Han, Kyung-Yoon
    • Journal of Periodontal and Implant Science / v.26 no.2 / pp.522-530 / 1996
  • Most dentists are interested in laser therapy for intraoral soft-tissue lesions because it offers analgesic and aseptic surgery with little or no bleeding. To determine the difference in pain threshold between gingival tissues with and without inflammation, 25 patients with inflammatory periodontal disease and 10 volunteers in good general and oral health were selected as the inflamed group and the normal group, respectively. The interdental papilla, marginal gingiva, attached gingiva, and alveolar mucosa were irradiated by contact delivery (300 μm fiber optic, for 5 seconds) of a pulsed Nd:YAG laser (EN.EL.EN06O, Italy). The laser power was gradually increased from 0.5 W in increments of 0.1 W, and the power at which a painful gesture was first observed was recorded as the first painful power. The differences in first painful power among gingival tissues with and without inflammation were analyzed with a paired t-test in the MICROSTAT program. The following results were obtained: 1. With respect to inflammation, the first painful power was significantly lower in the inflamed group than in the normal group for both the interdental papilla and the marginal gingiva (p<0.05). 2. With respect to tissue structure, the first painful power was significantly lower in the alveolar mucosa than in the attached gingiva (p<0.05). These results suggest that, for painless therapy with pulsed Nd:YAG laser irradiation, laser surgery above 2.0 W should be performed under local anesthesia, and that the need for local anesthesia should be considered according to the degree of inflammation, the tissue structure, and the purpose of the laser therapy.
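The statistical step named in this abstract is a paired t-test on first-painful-power values measured under two conditions. The sketch below only shows how such a test is typically run with SciPy; the power values are labeled placeholders, not measurements from the study.

    # Paired t-test on first-painful-power values (W); the numbers below are
    # placeholders for illustration, not data from the study.
    from scipy import stats

    attached_gingiva = [2.1, 2.3, 2.0, 2.4, 2.2, 2.5]   # assumed example values (W)
    alveolar_mucosa  = [1.6, 1.8, 1.5, 1.9, 1.7, 1.8]   # same subjects, different site

    t_stat, p_value = stats.ttest_rel(attached_gingiva, alveolar_mucosa)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("First painful power differs significantly between the two sites.")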


Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a real-time hand gesture interface method based on vision and depth information, using finger joint estimation. The left and right hand areas are segmented after mapping the visual image to the depth image, and labeling and boundary-noise removal are performed. The centroid and rotation angle of each hand area are then calculated. Afterwards, a circle is expanded outward from the centroid of the hand, and the midpoints of its crossings with the hand boundary are used to detect the joint points and end points of the fingers, from which the hand model is recognized. Experimental results show that the method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers, using both hands, accuracy exceeded 90% and performance exceeded 25 fps. The proposed method can be used as a contact-free input interface in HCI control, education, and game applications.
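The joint-detection step described above, expanding a circle from the hand centroid and examining its crossings with the hand boundary, can be illustrated with a short contour-sampling sketch on a binary hand mask. The code below only covers the centroid, orientation, and circle-crossing count; the radii, angular resolution, and crossing heuristic are assumptions, not the paper's exact procedure.

    # Centroid, orientation, and circle/boundary crossings on a binary hand mask
    # (a simplified sketch of the kind of processing described in the abstract).
    import numpy as np
    import cv2

    def hand_features(mask, radii=(40, 60, 80)):
        """mask: uint8 binary image with the hand as foreground (255)."""
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Orientation of the hand region from second-order central moments.
        angle = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])

        crossings = []
        for r in radii:                  # expand circles of assumed radii
            thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
            xs = (cx + r * np.cos(thetas)).astype(int)
            ys = (cy + r * np.sin(thetas)).astype(int)
            inside = (0 <= xs) & (xs < mask.shape[1]) & (0 <= ys) & (ys < mask.shape[0])
            on_hand = np.zeros_like(thetas, dtype=bool)
            on_hand[inside] = mask[ys[inside], xs[inside]] > 0
            # Points where the circle enters/leaves the hand approximate finger boundaries.
            changes = np.flatnonzero(on_hand != np.roll(on_hand, 1))
            crossings.append(len(changes))
        return (cx, cy), angle, crossings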

Hand Interface using Intelligent Recognition for Control of Mouse Pointer (마우스 포인터 제어를 위해 지능형 인식을 이용한 핸드 인터페이스)

  • Park, Il-Cheol;Kim, Kyung-Hun;Kwon, Goo-Rak
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.5 / pp.1060-1065 / 2011
  • In this paper, the proposed method recognizes the hand using color information from the camera's input image and controls the mouse pointer with the recognized hand. In addition, specific commands are designed to be executed with the mouse pointer. Most users find existing interactive multimedia systems uncomfortable because they depend on particular external input devices such as pens and mice; the proposed method compensates for these shortcomings by using the hand instead of an external input device. In the experimental method, the hand area and the background are separated using color information in the image obtained from the camera, and the coordinates of the mouse pointer are determined from the centroid of the separated hand region. When the mouse pointer is placed in a predefined area using these coordinates, the robot moves and executes the corresponding command. Experimental results show that the recognition of the proposed method is accurate but still sensitive to changes in the color of the lighting.
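The pipeline in this abstract, separating the hand from the background by color and driving the pointer from the hand centroid, can be sketched with a simple HSV skin mask. The ranges below are rough, commonly used skin-color bounds and an assumption rather than the thresholds used in the paper, and the pointer position is only printed rather than used to move a cursor or a robot.

    # Color-based hand segmentation and centroid-to-pointer mapping (rough sketch).
    import cv2
    import numpy as np

    SKIN_LOW  = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV skin range
    SKIN_HIGH = np.array([25, 255, 255], dtype=np.uint8)
    SCREEN_W, SCREEN_H = 1920, 1080                       # assumed screen size

    def pointer_from_frame(frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None                                   # no hand found
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Map the hand centroid from image coordinates to screen coordinates.
        return (cx / frame.shape[1] * SCREEN_W, cy / frame.shape[0] * SCREEN_H)

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print(pointer_from_frame(frame))
    cap.release()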