• Title/Summary/Keyword: Gesture Interface

231 search results

A Research on Context-aware Digital Signage using a Kinect (키넥트를 활용한 상황인지형 디지털 사이니지 연구)

  • Ro, Kwanghyun;Lee, Seokkee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.265-273
    • /
    • 2014
  • In this paper, context-aware technologies using a Kinect sensor for digital signage, which is increasingly growing as the fourth screen medium, are presented. Generalized context-aware functions for digital signage were studied and a context-aware digital signage platform was developed. It supports a natural user interface for controlling the signage and actively provides contents adapted to its context. As basic context, the user's gestures and voice, the number of users, and the sound direction were considered. In the future, advanced attributes such as age and gender will be studied. The implemented platform could serve as a good reference model when developing a general context-aware digital signage system.
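The abstract does not give implementation details; a minimal sketch of the kind of context-to-content dispatch such a platform performs might look like the following. The context fields mirror those listed in the abstract (gesture, voice, number of users, sound direction), but the rule set itself is hypothetical:

```python
# Hypothetical sketch of context-aware content selection for digital signage.
# The context keys follow the abstract; the dispatch rules are illustrative.

def select_content(context):
    """Pick signage content based on sensed context."""
    if context.get("num_users", 0) == 0:
        return "idle_loop"           # nobody watching: play the default loop
    if context.get("gesture") == "wave":
        return "interactive_menu"    # a wave opens the gesture-controlled menu
    if context.get("voice_command") == "help":
        return "help_screen"
    # Orient content toward where the sound came from (left/right panel).
    if context.get("sound_direction") == "left":
        return "left_panel_promo"
    return "default_promo"

print(select_content({"num_users": 2, "gesture": "wave"}))  # interactive_menu
```

A real platform would feed this from the Kinect SDK's skeleton, audio-beam, and speech streams rather than a plain dictionary.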

Development of a Hand-posture Recognition System Using 3D Hand Model (3차원 손 모델을 이용한 비전 기반 손 모양 인식기의 개발)

  • Jang, Hyo-Young;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.219-221
    • /
    • 2007
  • Recent changes toward ubiquitous computing require more natural human-computer interaction (HCI) interfaces that provide high information accessibility. Hand gesture, i.e., gestures performed by one or two hands, is emerging as a viable technology to complement or replace conventional HCI technology. This paper deals with hand-posture recognition, in which database construction is a key issue. The human hand is composed of 27 bones, and the movement of its joints is modeled by 23 degrees of freedom. Even for the same hand-posture, grabbed images may differ depending on the user's characteristics and the relative position between the hand and the cameras. To address the difficulty of defining hand-postures and to construct a database of effective size, we present a method using a 3D hand model. The database is constructed from the hand joint angles for each hand-posture and the corresponding silhouette images obtained by projecting the model onto image planes from many viewpoints. The proposed method does not require additional equations to define the movement constraints of each joint. It also makes it easy to obtain images of one hand-posture from many viewpoints and distances, so the database can be constructed more precisely and concretely. The validity of the method is evaluated by applying it to a hand-posture recognition system.
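The database-construction idea above (pose the 3D model, view it from many camera angles, store the projected silhouette per view) can be sketched in miniature. Everything here is a stand-in: a random point cloud replaces the articulated 27-bone hand model, and orthographic projection replaces a real camera model:

```python
# Toy sketch of building a silhouette database from a 3D model: rotate the
# model to each viewpoint, project orthographically, rasterize a binary image.
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the vertical (y) axis by `theta` radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def silhouette(points, theta, grid=16):
    """Orthographic silhouette of a point cloud seen from azimuth `theta`."""
    rotated = points @ rotation_y(theta).T
    xy = rotated[:, :2]                      # drop depth after rotation
    mn, mx = xy.min(axis=0), xy.max(axis=0)  # normalize into grid coordinates
    idx = ((xy - mn) / (mx - mn + 1e-9) * (grid - 1)).astype(int)
    img = np.zeros((grid, grid), dtype=bool)
    img[idx[:, 1], idx[:, 0]] = True
    return img

# Database: one silhouette per (posture, viewpoint) pair; here one "posture"
# (a stand-in point cloud) seen from eight azimuth angles.
hand = np.random.default_rng(0).normal(size=(200, 3))
database = {round(t, 2): silhouette(hand, t) for t in np.linspace(0, np.pi, 8)}
```

A real system would articulate the model's joint angles per posture and also vary camera distance, as the abstract describes.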


W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.20 no.1
    • /
    • pp.140-152
    • /
    • 2015
  • HCI (Human-Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface for various devices and service environments, advanced HCI methods using multiple modalities have been intensively studied. However, multimodal interfaces face the difficulty that modalities have different data formats and are hard to coordinate efficiently. To solve this problem, a multimodal communicator is introduced, based on the EMMA (Extensible MultiModal Annotation markup language) and MMI (Multimodal Interaction Framework) standards of the W3C (World Wide Web Consortium). This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability for other modalities. Experimental results show the multimodal communicator operating with the combined modalities of eye tracking and gesture recognition in a map-browsing scenario.
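The MMI-style structure the abstract describes (modality components delivering recognition events to an interaction manager, which drives a presentation component) can be outlined as follows. The class and event names here are illustrative, not the paper's actual API:

```python
# Rough sketch of a W3C MMI-style pipeline: modality components emit events
# (cf. EMMA interpretations), the interaction manager routes them, and the
# presentation component renders the resulting actions.

class Presentation:
    def __init__(self):
        self.log = []

    def render(self, action):
        self.log.append(action)   # a real component would update the display

class InteractionManager:
    def __init__(self, presentation):
        self.presentation = presentation
        self.handlers = {}

    def register(self, mode, handler):
        self.handlers[mode] = handler          # e.g. "gesture", "eye_tracking"

    def on_event(self, mode, data):
        """Route a modality event to its handler and render the result."""
        action = self.handlers[mode](data)
        self.presentation.render(action)

# Map-browsing scenario from the abstract: gaze focuses, gesture pans.
pres = Presentation()
im = InteractionManager(pres)
im.register("eye_tracking", lambda d: ("focus", d["x"], d["y"]))
im.register("gesture", lambda d: ("pan", d["direction"]))
im.on_event("eye_tracking", {"x": 10, "y": 20})
im.on_event("gesture", {"direction": "left"})
print(pres.log)  # [('focus', 10, 20), ('pan', 'left')]
```

The expansion capability the abstract claims corresponds to `register`: a new modality only needs a handler that maps its events into presentation actions.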

An Educational Platform for Digital Media Prototype Development: an analysis and a usability study (디지털 미디어 콘텐츠 개발을 위한 교육용 플랫폼의 활용성)

  • Kim, Na-Young
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.8
    • /
    • pp.77-87
    • /
    • 2011
  • The advent of new platforms each year, along with the advancement of technology, gives digital media designers new opportunities to develop creative and innovative contents. This phenomenon affects students majoring in digital media in the same way: using platforms based on new technology for content development gives students in the field a new and useful learning opportunity. As the main technologies of recent digital media attracting students' attention, we present virtual reality displays, motion recognition, physics engines, and gesture interfaces; we developed a consolidated platform based on these four technologies and designed it so that they can be implemented more easily and simply. To study the efficiency of the platform for developing digital media contents, we developed four different prototype contents and measured user preference, efficiency, and satisfaction. In the usability evaluation, functionality, effectiveness, efficiency, and satisfaction were all rated 'high'. These results show that the suggested 3D platform environment enables students to develop rapid prototypes quickly and easily, which may positively influence students majoring in digital media in conducting creative development research.

A Conversational Interactive Tactile Map for the Visually Impaired (시각장애인의 길 탐색을 위한 대화형 인터랙티브 촉각 지도 개발)

  • Lee, Yerin;Lee, Dongmyeong;Quero, Luis Cavazos;Bartolome, Jorge Iranzo;Cho, Jundong;Lee, Sangwon
    • Science of Emotion and Sensibility
    • /
    • v.23 no.1
    • /
    • pp.29-40
    • /
    • 2020
  • Visually impaired people use tactile maps to get spatial information about their surrounding environment, find their way, and improve their independent mobility. However, classical tactile maps that make use of braille to describe the location within the map have several limitations, such as the lack of information due to constraints on space and limited feedback possibilities. This study describes the development of a new multi-modal interactive tactile map interface that addresses the challenges of tactile maps to improve the usability and independence of visually impaired people when using tactile maps. This interface adds touch gesture recognition to the surface of tactile maps and enables the users to verbally interact with a voice agent to receive feedback and information about navigation routes and points of interest. A low-cost prototype was developed to conduct usability tests that evaluated the interface through a survey and interview given to blind participants after using the prototype. The test results show that this interactive tactile map prototype provides improved usability for people over traditional tactile maps that use braille only. Participants reported that it was easier to find the starting point and points of interest they wished to navigate to with the prototype. Also, it improved self-reported independence and confidence compared with traditional tactile maps. Future work includes further development of the mobility solution based on the feedback received and an extensive quantitative study.

Experience Design Guideline for Smart Car Interface (스마트카의 인터페이스를 위한 경험 디자인 가이드라인)

  • Yoo, Hoon Sik;Ju, Da Young
    • Design Convergence Study
    • /
    • v.15 no.1
    • /
    • pp.135-150
    • /
    • 2016
  • Due to the development of communication technology and the expansion of Intelligent Transport Systems (ITS), the car is changing from a simple mechanical device into a second living space with comprehensive convenience functions, evolving into a platform whose interface supports this role. As the interface area for providing various information to passengers expands, research on smart-car user experience is growing in importance. This study aims to propose guidelines for smart-car user experience elements. To conduct the study, smart-car user experience elements were defined as function, interaction, and surface; through discussions with UX/UI experts, 8 representative techniques, 14 representative techniques, and 8 glass-window locations were specified for the three elements, respectively. Next, the priorities users place on these experience elements were analyzed through a questionnaire survey of 100 drivers. The analysis showed that users' priorities in applying the main techniques were, in order, safety, distance, and sensibility. The priorities among interaction methods were, in order, voice recognition, touch, gesture, physical button, and eye tracking. Regarding the glass-window locations, users prioritized the front of the driver's seat over the back. A demographic analysis by gender found no significant differences except for two functions, showing that the guidelines can be applied to men and women alike. Through this analysis of user requirements for the individual elements, the study provides prioritized guidance on the requirements to be applied to commercial products.

An EPG Configuration Constructing Method and Structure for Dynamically Implementing Viewer Chosen EPG Configurations (시청자 선택 기반의 EPG 형상의 동적 구현을 위한 EPG형상 제작 방법과 구조)

  • Ko, Kwang-Il
    • Convergence Security Journal
    • /
    • v.11 no.4
    • /
    • pp.51-58
    • /
    • 2011
  • Thanks to digital technology, the TV broadcasting platform is evolving into digital TV, which supports data broadcasting services. Although data broadcasting services (e.g., games, weather information, stock trading) provide rich entertainment to viewers, they make the operation of a digital TV so complex that some viewers have difficulty using their TV sets. Several studies have addressed this problem by improving EPG functions such as searching and reserving programs, applying gesture and voice recognition technologies to EPG operation, guiding the design of the EPG's user interface, and developing agents that help the EPG behave intelligently. However, no research has yet addressed the problem that viewers differ in their familiarity with IT services. This paper tackles that problem by letting a viewer choose an EPG configuration (among several provided by a broadcasting network) and by designing an EPG that implements the chosen configuration dynamically.

Motor Imagery based Brain-Computer Interface for Cerebellar Ataxia (소뇌 운동실조 이상 환자를 위한 운동상상 기반의 뇌-컴퓨터 인터페이스)

  • Choi, Young-Seok;Shin, Hyun-Chool;Ying, Sarah H.;Newman, Geoffrey I.;Thakor, Nitish
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.6
    • /
    • pp.609-614
    • /
    • 2014
  • Cerebellar ataxia is a steadily progressive neurodegenerative disease associated with loss of motor control, leaving patients unable to walk, talk, or perform activities of daily living. Direct motor instruction in cerebellar ataxia patients has limited effectiveness, presumably because an inappropriate closed-loop cerebellar response to the inevitable observed error confounds motor learning mechanisms. Recent studies have validated the age-old technique of employing motor imagery training (mental rehearsal of a movement) to boost motor performance in athletes, much as a champion downhill skier visualizes the course prior to embarking on a run. Could the use of an EEG-based BCI provide advanced biofeedback to improve motor imagery and provide a "backdoor" to improving motor performance in ataxia patients? To determine the feasibility of using EEG-based BCI control in this population, we compare the ability to modulate mu-band power (8-12 Hz) by performing a cued motor imagery task in an ataxia patient and a healthy control.
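The feasibility test above rests on estimating mu-band (8-12 Hz) power from EEG. A minimal, self-contained sketch of that computation follows; synthetic signals stand in for EEG, and a real pipeline would use epoched recordings and, e.g., Welch's method rather than a single FFT:

```python
# Estimate band power in the mu band (8-12 Hz) from a one-second signal
# segment via the FFT power spectrum.
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean power-spectrum magnitude within [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

fs = 250                                    # typical EEG sampling rate (Hz)
t = np.arange(fs) / fs                      # one second of samples
rest = np.sin(2 * np.pi * 10 * t)           # strong 10 Hz mu rhythm at rest
imagery = 0.3 * np.sin(2 * np.pi * 10 * t)  # mu suppression during imagery
print(band_power(rest, fs) > band_power(imagery, fs))  # True
```

Mu suppression (event-related desynchronization) during imagery is exactly the modulation a BCI of this kind detects, so the contrast between the two calls mirrors the cued task in the study.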

A Study on Development of Wearable Technology Based Biker Suits Part.1 (이륜차운전자를 위한 웨어러블 테크놀로지 의류 개발에 관한 연구 제1보)

  • Lee, Hyun-Seung;Lee, Jae-Jung
    • Journal of the Korean Society of Costume
    • /
    • v.61 no.8
    • /
    • pp.57-72
    • /
    • 2011
  • The purpose of this research is to develop safe and convenient wearable technology clothing for bikers. For this, we studied the current usage of two-wheeled vehicles and researched the rate and causes of accidents. We then used these findings, along with previous studies on visual perception, as factors in deciding the crucial elements of riders' apparel. Case studies and breakdowns of existing prototypes for bikers were conducted as well. Based on this process, a survey was conducted to identify bikers' needs in both apparel and technology, and we then proceeded to produce appropriate design and device modules. In the apparel sector, the survey indicated that it was important that digital devices not be visible, so as to sustain a natural look, and that the materials be durable and made for safety and easy movement. In the digital function sector, a motion input interface embedded into the garment was needed so riders could avoid dangerous situations, ensuring the safety of not only the rider but surrounding riders as well. Lastly, protecting the rider's skin from harmful elements was regarded as necessary. Based on these requirements, a new prototype was created; it will be tested to confirm that the requirements stated above are met and evaluated according to the effectiveness of its functions.

Hand Interface using Intelligent Recognition for Control of Mouse Pointer (마우스 포인터 제어를 위해 지능형 인식을 이용한 핸드 인터페이스)

  • Park, Il-Cheol;Kim, Kyung-Hun;Kwon, Goo-Rak
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.5
    • /
    • pp.1060-1065
    • /
    • 2011
  • In this paper, the proposed method recognizes the hands using color information from the camera's input image and controls the mouse pointer with the recognized hands. In addition, specific commands are designed to be performed with the mouse pointer. Most users felt uncomfortable because existing interactive multimedia systems depend on particular external input devices such as pens and mice; the proposed method compensates for these shortcomings by using the hand alone, without external input devices. In the experiments, hand regions and the background are separated using color information from the camera image, and the coordinates of the mouse pointer are determined from the center coordinates of the separated hand. The mouse pointer is placed in a predefined area using these coordinates, and the robot then moves and executes the corresponding command. Experimental results show that the recognition of the proposed method is accurate but still sensitive to changes in the color of the lighting.
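The pipeline described above (separate the hand from the background by color, then take the centroid of the hand region as the pointer coordinate) can be sketched as follows. A synthetic RGB image and a crude skin-tone threshold stand in for real camera frames; an actual system would typically threshold in HSV space, which also explains the lighting sensitivity the abstract reports:

```python
# Sketch of color-based hand segmentation plus centroid tracking.
import numpy as np

def hand_mask(img):
    """Very rough skin-color test: red clearly dominant over green and blue."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 120) & (r - g > 30) & (r - b > 30)

def pointer_position(img):
    """Centroid (row, col) of the detected hand region, or None if absent."""
    ys, xs = np.nonzero(hand_mask(img))
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

frame = np.zeros((100, 100, 3), dtype=np.uint8)   # dark background
frame[40:60, 70:90] = (200, 120, 90)              # skin-toned "hand" patch
print(pointer_position(frame))                    # (49.5, 79.5)
```

Because the mask is a fixed RGB threshold, a shift in illumination color moves pixels across the decision boundary, reproducing in miniature the sensitivity to lighting noted in the results.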