• Title/Summary/Keyword: Interactive Display


Glasses-free Interactive 3D Display: The Effects of Viewing Distance, Orientation and Manual Interaction on Visual Fatigue (무안경식 Interactive 3D Display: 시청거리, 시청방위, 협응동작이 시각피로에 미치는 영향)

  • Kim, Duk-Joong;Li, Hyung-Chul O.;Kim, Shin-Woo
    • Journal of Broadcast Engineering
    • /
    • v.17 no.4
    • /
    • pp.572-583
    • /
    • 2012
  • In this study, we investigated visual fatigue in an i3D system and the basic factors that contribute to it. i3D is a type of glasses-free display that supports elementary manual interaction between the user and the display. In Experiment 1, we performed an open-ended survey of visual fatigue and collected responses from observers, which were then used as survey questions for measuring visual fatigue. The questions were validated by factor analysis, from which we derived a fatigue measurement scale. In Experiment 2, we measured visual fatigue under various conditions using the survey questions obtained in Experiment 1. Using manual interaction (present/absent), viewing distance (1/2/4 m), and viewing orientation (0/28/56°) as three factors in a within-subject design, we measured visual fatigue in each condition. The results indicated that visual fatigue decreases at farther viewing distances, but that viewing orientation and manual interaction do not influence visual fatigue. Although fatigue unexpectedly decreased in an extreme viewing condition (e.g., distance 1 m, orientation 56°), this result was attributable to technical limitations of the glasses-free 3D display. The general discussion addresses the limitations of the current study and offers suggestions for future research.

Interactive electronic book utilizing tabletop display and digi-pet (테이블탑 디스플레이와 디지팻을 활용한 상호작용 전자책)

  • Song, dae-hyeon;Park, jae-wan;Lee, yong-chul;Kim, dong-min;Moon, joo-pil;Lee, chil-woo
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.175-178
    • /
    • 2009
  • In this paper, we describe an interactive electronic book that utilizes a tabletop display interface and a digi-pet. Because the system can define many gestures from finger touch points, it helps users experience the electronic book more effectively than existing input devices do. In addition, whereas existing systems support one user per input device, this system supports multiple users and can therefore produce a wider variety of effects. The digi-pet (digital pet) is a physical tool that conveys emotion between the tabletop display platform and the user. Acting as an assistant, it takes the leading role in an imagined story, participates directly in the dialogue, and changes the course of the story. In this paper, we described a future-oriented electronic book system that combines a tabletop display and a digi-pet. We expect the usefulness of this system to grow rapidly alongside future technological development.


Interactive Visualization for Patient-to-Patient Comparison

  • Nguyen, Quang Vinh;Nelmes, Guy;Huang, Mao Lin;Simoff, Simeon;Catchpoole, Daniel
    • Genomics & Informatics
    • /
    • v.12 no.1
    • /
    • pp.21-34
    • /
    • 2014
  • A visual analysis approach and the developed supporting technology provide a comprehensive solution for analyzing large and complex integrated genomic and biomedical data. This paper presents a methodology that is implemented as an interactive visual analysis technology for extracting knowledge from complex genetic and clinical data and then visualizing it in a meaningful and interpretable way. By synergizing the domain knowledge into development and analysis processes, we have developed a comprehensive tool that supports a seamless patient-to-patient analysis, from an overview of the patient population in the similarity space to detailed views of genes. The system consists of multiple components enabling the complete analysis process, including data mining, interactive visualization, analytical views, and gene comparison. We demonstrate our approach in a case study with medical scientists, who used the tool on childhood cancer patient data to confirm existing hypotheses and to discover new scientific insights.

Eye-Catcher : Real-time 2D/3D Mixed Contents Display System

  • Chang, Jin-Wook;Lee, Kyoung-Il;Park, Tae-Soo
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2008.10a
    • /
    • pp.51-54
    • /
    • 2008
  • In this paper, we propose a practical method for displaying 2D/True3D mixed content in real time. Many companies have released 3D displays recently, but the cost of producing True3D content is still very high. Since a large amount of 2D content already exists, and it is more effective to mix True3D objects into 2D content than to produce True3D content directly, people have become interested in mixing 2D/True3D content. Moreover, real-time 2D/True3D mixing is helpful for 3D displays because the scenario of the content can easily be changed at playback time by interactively adjusting the 3D effects and the motion of the True3D object. In our system, True3D objects are rendered into multiple view-point images, which are composited with the 2D content using depth information and then multiplexed with pre-generated view masks. All the processes are performed on a graphics processor. We were able to play 2D/True3D mixed content at Full HD resolution in real time using an ordinary graphics processor.
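The final multiplexing step described above, in which per-view pixels are selected by pre-generated view masks, can be sketched as follows. This is a minimal illustration of mask-based multiplexing, not the authors' GPU implementation; the function name and the assumption of binary masks that partition the pixels are mine.

```python
import numpy as np

def multiplex_views(views, masks):
    """Combine N view-point images into one display image using
    pre-generated per-view masks (binary masks assumed to partition
    the pixel grid, one view visible per subpixel region)."""
    out = np.zeros_like(views[0])
    for img, mask in zip(views, masks):
        out += img * mask  # keep each view only where its mask is 1
    return out
```

In the paper this selection runs on the GPU per subpixel; the sketch shows only the data flow.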


Real Time Eye and Gaze Tracking

  • Park Ho Sik;Nam Kee Hwan;Cho Hyeon Seob;Ra Sang Dong;Bae Cheol Soo
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.857-861
    • /
    • 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using the Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.
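A GRNN of the kind used above for the pupil-to-screen mapping is essentially Nadaraya-Watson kernel regression: the prediction is a distance-weighted average of the training targets, so no analytical form of the mapping is needed. The following is a minimal sketch under that reading; the function name, the single bandwidth parameter, and the toy feature layout are my assumptions, not the paper's exact formulation.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """GRNN prediction: Gaussian-weighted average of training targets.
    X_train: (N, d) pupil-parameter vectors; Y_train: (N, 2) screen
    coordinates; x: (d,) query vector."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
    return w @ Y_train / np.sum(w)               # normalized weighted average
```

A query halfway between two training samples returns the midpoint of their targets, which illustrates why the mapping generalizes smoothly without a fitted analytical function.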


Implementation of Gesture Interface for Projected Surfaces

  • Park, Yong-Suk;Park, Se-Ho;Kim, Tae-Gon;Chung, Jong-Moon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.1
    • /
    • pp.378-390
    • /
    • 2015
  • Image projectors can turn any surface into a display. Integrating a surface projection with a user interface transforms it into an interactive display with many possible applications. Hand gesture interfaces are often used with projector-camera systems. Hand detection through color image processing is affected by the surrounding environment. The lack of illumination and color details greatly influences the detection process and drops the recognition success rate. In addition, there can be interference from the projection system itself due to image projection. In order to overcome these problems, a gesture interface based on depth images is proposed for projected surfaces. In this paper, a depth camera is used for hand recognition and for effectively extracting the area of the hand from the scene. A hand detection and finger tracking method based on depth images is proposed. Based on the proposed method, a touch interface for the projected surface is implemented and evaluated.
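The depth-based touch detection idea above can be sketched as a simple height threshold against the known surface depth: pixels slightly above the projected surface are candidate fingertip contacts. This is an illustrative assumption, not the paper's method; the function name, the millimeter thresholds, and the flat-surface model are all mine.

```python
import numpy as np

def touch_mask(depth, surface_depth, touch_mm=(5.0, 25.0)):
    """Return a boolean mask of candidate touch pixels.
    depth: depth image in mm from the camera; surface_depth: depth of
    the (assumed flat) projected surface. Pixels whose height above
    the surface falls inside touch_mm are treated as touching."""
    height = surface_depth - depth               # distance above the surface
    return (height > touch_mm[0]) & (height < touch_mm[1])
```

Because the decision uses only depth, it is unaffected by the projector's own light and by poor color contrast, which is the motivation stated in the abstract.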

An Efficient Feature Point Detection for Interactive Pen-Input Display Applications (인터액티브 펜-입력 디스플레이 애플리케이션을 위한 효과적인 특징점 추출법)

  • Kim Dae-Hyun;Kim Myoung-Jun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.11_12
    • /
    • pp.705-716
    • /
    • 2005
  • Many feature point detection algorithms have been developed in pattern recognition research. However, interactive applications for pen-input displays such as Tablet PCs and LCD tablets have different goals: reliable segmentation across different drawing styles and real-time, on-the-fly feature point detection. This paper presents a curvature estimation method crucial for segmenting freehand pen input. It considers only local shape descriptors, thus performing novel curvature estimation on the fly while the user draws on a pen-input display. The method has been used for pen marking recognition to build a 3D sketch-based modeling application.
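A purely local curvature estimate of the kind the abstract calls for can be illustrated with the turning angle at each stroke point, computed from just its two neighbors so it can run on the fly as points arrive. This is a generic sketch of a local shape descriptor, not the paper's specific estimator.

```python
import numpy as np

def turning_angle(p_prev, p, p_next):
    """Turning angle (degrees) at stroke point p, computed from the
    incoming and outgoing segments only. Large angles flag candidate
    feature points (corners) in a freehand stroke."""
    v1 = np.asarray(p, float) - np.asarray(p_prev, float)
    v2 = np.asarray(p_next, float) - np.asarray(p, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

Collinear points give an angle near zero, while a sharp corner gives an angle near 90°, so a simple threshold on this value can segment a stroke as it is drawn.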

A Study on the Reduction in VR Cybersickness using an Interactive Wind System (Interactive Wind System을 이용한 VR 사이버 멀미 개선 연구)

  • Lim, Dojeon;Lee, Yewon;Cho, Yesol;Ryoo, Taedong;Han, Daseong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.27 no.3
    • /
    • pp.43-53
    • /
    • 2021
  • This paper presents an interactive wind system that generates artificial winds in a virtual reality (VR) environment according to online user inputs from a steering wheel and an acceleration pedal. Our system is composed of a head-mounted display (HMD) and three electric fans to make the user sense touch from the winds blowing from three different directions in a racing car VR application. To evaluate the effectiveness of the winds for reducing VR cybersickness, we employ the simulator sickness questionnaire (SSQ), which is one of the most common measures for cybersickness. We conducted experiments on 13 subjects for the racing car contents first with the winds and then without them or vice versa. Our results showed that the VR contents with the artificial winds clearly reduce cybersickness while providing a positive user experience.

Multimodal Interaction on Automultiscopic Content with Mobile Surface Haptics

  • Kim, Jin Ryong;Shin, Seunghyup;Choi, Seungho;Yoo, Yeonwoo
    • ETRI Journal
    • /
    • v.38 no.6
    • /
    • pp.1085-1094
    • /
    • 2016
  • In this work, we present interactive automultiscopic content with mobile surface haptics for multimodal interaction. Our system consists of a 40-view automultiscopic display and a tablet supporting surface haptics in an immersive room. Animated graphics are projected onto the walls of the room. The 40-view automultiscopic display is placed at the center of the front wall. The haptic tablet is installed at the mobile station to enable the user to interact with the tablet. The 40-view real-time rendering and multiplexing technology is applied by establishing virtual cameras in the convergence layout. Surface haptics rendering is synchronized with three-dimensional (3D) objects on the display for real-time haptic interaction. We conduct an experiment to evaluate user experiences of the proposed system. The results demonstrate that the system's multimodal interaction provides positive user experiences of immersion, control, user interface intuitiveness, and 3D effects.

A Study on Gender Classification Based on Diagonal Local Binary Patterns (대각선형 지역적 이진패턴을 이용한 성별 분류 방법에 대한 연구)

  • Choi, Young-Kyu;Lee, Young-Moo
    • Journal of the Semiconductor & Display Technology
    • /
    • v.8 no.3
    • /
    • pp.39-44
    • /
    • 2009
  • Local Binary Pattern (LBP) is becoming a popular tool for various machine vision applications such as face recognition, classification, and background subtraction. In this paper, we propose a new extension of LBP, called the Diagonal LBP (DLBP), to handle the image-based gender classification problem that arises in interactive display systems. Instead of comparing neighbor pixels with the center pixel, DLBP generates codes by comparing each neighbor pixel with the diagonally opposite pixel (the neighbor on the opposite side). This halves the code length of LBP and consequently reduces the computational complexity. A Support Vector Machine is used as the gender classifier, and a texture profile based on DLBP is adopted as the feature vector. Experimental results revealed that our DLBP-based approach is very efficient and can be utilized in various real-time pattern classification applications.
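The DLBP code described above can be sketched for a single 3x3 patch: each of the four neighbor pairs that face each other across the center is compared once, yielding a 4-bit code instead of LBP's 8-bit code. The function name, neighbor ordering, and the use of >= as the comparison are my assumptions; the pairing of diagonally opposite neighbors is the idea from the abstract.

```python
import numpy as np

def dlbp_code(patch):
    """DLBP code for the center of a 3x3 patch: compare each neighbor
    with the pixel on the opposite side of the center, giving a 4-bit
    code (half the length of the 8-bit LBP code)."""
    # 8-neighborhood in clockwise order; coords[i] and coords[i + 4]
    # are diagonally opposite across the center pixel
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit in range(4):                 # only 4 comparisons needed
        r1, c1 = coords[bit]
        r2, c2 = coords[bit + 4]
        if patch[r1, c1] >= patch[r2, c2]:
            code |= 1 << bit
    return code                          # value in 0..15
```

With 4-bit codes the texture histogram has 16 bins rather than 256, which is where the claimed reduction in computational complexity comes from.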
