• Title/Summary/Keyword: Human visual system


Human Sensibility Measurement for Visual Picture Stimulus using Heart Rate Variability Analysis (심박변화 분석을 이용한 장면시자극에 대한 감성측정에 관한 연구)

  • 권의철;김동윤;김동선;임영훈;손진훈
    • Science of Emotion and Sensibility
    • /
    • v.1 no.1
    • /
    • pp.93-103
    • /
    • 1998
  • In this paper, we present changes in human sensibility when 26 healthy female subjects were exposed to visual picture stimuli. We used the International Affective Picture System as the visual stimulus. The analysis methods are the autoregressive (AR) spectrum, a linear method, and the Return Map, a nonlinear method, applied to heart rate variability (HRV). The LF/HF ratio of HRV and the variation of the Return Map were analyzed from the ECG signals of the female subjects; the Return Map of RR intervals was analyzed by computing its variation. When the subjects were stimulated by pleasant pictures, the LF/HF ratio and the variation decreased compared with unpleasant stimuli. These may serve as good parameters for measuring changes in human sensibility in response to visual picture stimuli.

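The LF/HF analysis described above can be sketched in a few lines: resample the RR tachogram onto a uniform grid, take a plain DFT periodogram, and sum power in the LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands. This is a generic illustration, not the paper's AR-spectrum implementation; the band limits and the 4 Hz resampling rate are common conventions assumed here.

```python
import cmath

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from a list of RR intervals (milliseconds)."""
    # Cumulative beat times in seconds.
    times, t = [], 0.0
    for rr in rr_ms:
        t += rr / 1000.0
        times.append(t)
    # Resample the RR series onto a uniform grid by linear interpolation.
    duration = times[-1] - times[0]
    n = int(duration * fs)
    x, j = [], 0
    for k in range(n):
        g = times[0] + k / fs
        while j < len(times) - 1 and times[j + 1] <= g:
            j += 1
        w = (g - times[j]) / (times[j + 1] - times[j])
        x.append(rr_ms[j] * (1 - w) + rr_ms[j + 1] * w)
    mean = sum(x) / len(x)
    x = [v - mean for v in x]
    # Plain DFT periodogram; accumulate power per frequency band.
    lf = hf = 0.0
    for m in range(1, n // 2):
        f = m * fs / n
        c = sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
        p = abs(c) ** 2 / n
        if 0.04 <= f < 0.15:
            lf += p
        elif 0.15 <= f < 0.40:
            hf += p
    return lf / hf if hf > 0 else float("inf")
```

A tachogram modulated at 0.1 Hz yields a large ratio (LF-dominant), while modulation at 0.3 Hz drives it below one, matching the direction of the effect the abstract reports for pleasant versus unpleasant stimuli.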

Constructing a Noise-Robust Speech Recognition System using Acoustic and Visual Information (청각 및 시각 정보를 이용한 강인한 음성 인식 시스템의 구현)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.719-725
    • /
    • 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike usual speech recognition systems, our system utilizes the visual signal containing the speaker's lip movements along with the acoustic signal to obtain speech recognition performance that is robust against environmental noise. The procedures of acoustic speech processing, visual speech processing, and audio-visual integration are described in detail. Experimental results demonstrate that, by exploiting the complementary nature of the two signals, the constructed system significantly enhances recognition performance in noisy circumstances compared to acoustic-only recognition.
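Decision-level ("late") fusion is one common way to integrate the two streams the abstract describes. A minimal sketch, in which a hypothetical SNR-driven weight shifts trust between the acoustic and visual classifiers (the sigmoid weight mapping is illustrative, not the paper's integration rule):

```python
import math

def fuse_av(acoustic_ll, visual_ll, snr_db):
    """Pick the word maximizing a weighted sum of per-stream log-likelihoods.

    acoustic_ll / visual_ll: dicts mapping word -> log-likelihood.
    The acoustic weight lam rises with estimated SNR: trust the audio
    when it is clean, lean on the lip stream when it is noisy.
    """
    lam = 1.0 / (1.0 + math.exp(-snr_db / 5.0))  # assumed SNR-to-weight map
    return max(acoustic_ll,
               key=lambda w: lam * acoustic_ll[w] + (1 - lam) * visual_ll[w])
```

With clean audio the acoustic hypothesis wins; under heavy noise the decision flips to the visually preferred word, which is the complementary behavior the abstract reports.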

Human Visual System-Aware Optimal Power-Saving Color Transformation for Mobile OLED Devices (모바일 OLED 디스플레이를 위한 인간 시각 만족의 최적 전력 절감 색 변환)

  • Lee, Jae-Hyeok;Kim, Eun-Sil;Kim, Young-Jin
    • Journal of KIISE
    • /
    • v.43 no.1
    • /
    • pp.126-134
    • /
    • 2016
  • Due to the merits of OLED displays, such as fast response, wide viewing angle, and power efficiency, their use has increased. However, despite this power efficiency, the display's share of total power consumption is still high, since user interaction-based applications such as instant messaging, video playback, and games are frequently used. OLED power consumption varies significantly with the displayed content, so color transformation is one of the low-power techniques used for OLED displays. Prior low-power color transformation techniques have not been rigorously studied in terms of human visual satisfaction, and have not jointly optimized visual satisfaction and power consumption. In this paper, we propose a novel low-power color transformation technique that explicitly accounts for human visual system awareness and optimizes both visual satisfaction and power consumption in a balanced way. Experimental results show that the proposed technique achieves better human visual satisfaction and, on average, 13.4% and 22.4% improvements over a prior technique in terms of power saving.
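The content-dependent trade-off behind such techniques can be illustrated with an assumed per-pixel OLED power model; the per-channel coefficients and gamma below are illustrative placeholders, not measured values from the paper, and the uniform scaling is the simplest possible color transformation, not the proposed optimization.

```python
def oled_power(pixels, coeff=(0.6, 0.8, 1.0), gamma=2.2):
    """Assumed OLED power model: each channel contributes
    w * (level/255)^gamma, with blue typically costing the most."""
    total = 0.0
    for r, g, b in pixels:
        for c, w in zip((r, g, b), coeff):
            total += w * (c / 255.0) ** gamma
    return total

def dim_transform(pixels, scale=0.9):
    """Toy color transformation: uniform channel scaling, one crude
    point on the visual-quality vs. power trade-off curve."""
    return [tuple(int(c * scale) for c in p) for p in pixels]
```

Under this model a black pixel draws no panel power and any dimming transform reduces consumption; the paper's contribution is choosing the transform so the visual penalty stays acceptable.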

Corresponding Color Reproduction on CRT between Illuminated Environment viewing Conditions (관찰환경에 따른 소프트카피의 대응적 색재현)

  • 곽한봉;안성아;서봉우;이영호;안석출
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.241-244
    • /
    • 2001
  • Various color devices have become commonplace, so the demand for accurate color reproduction has increased. A device-independent color reproduction system acquires and reproduces the color of an object regardless of the characteristics of the input/output devices. The human visual system is partially adapted to the CRT monitor's white point and to the ambient light. Visual experiments were performed on the effect of ambient lighting under mixed chromatic adaptation. In this paper, we found that the human visual system is 40% to 60% adapted to the CRT monitor's white point, and the rest to the ambient light.

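The reported 40-60% adaptation degree can be expressed as a weighted effective white point. A minimal sketch in XYZ (linear mixing is a simplification of full mixed-chromatic-adaptation models, used here only to make the degree parameter concrete):

```python
def mixed_adaptation_white(monitor_xyz, ambient_xyz, degree=0.5):
    """Effective adaptation white under mixed chromatic adaptation:
    a weighted combination of the monitor white point and the ambient
    illuminant. The paper reports adaptation degrees of 0.4-0.6."""
    return tuple(degree * m + (1 - degree) * a
                 for m, a in zip(monitor_xyz, ambient_xyz))
```

With degree 1.0 the observer is fully adapted to the monitor; 0.0 means full adaptation to the room light; the experimentally observed state sits between.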

An Analysis of Collaborative Visualization Processing of Text Information for Developing e-Learning Contents

  • SUNG, Eunmo
    • Educational Technology International
    • /
    • v.10 no.1
    • /
    • pp.25-40
    • /
    • 2009
  • The purpose of this study was to explore the procedures and modalities of collaborative visualization processing of text information for developing e-Learning contents. Two research questions were explored: 1) what are the procedures of collaborative visualization processing of text information, and 2) what patterns and modalities can be found in each procedure. This study employed a qualitative research approach based on grounded theory. As a result, collaborative visualization processing of text information emerged in six steps: identifying text, analyzing text, exploring visual clues, creating visuals, discussing visuals, and elaborating visuals. The process showed a systemic and systematic character, like spiral sequencing. Another result of this study is that the modalities of collaborative visualization processing of text information divide into two dimensions: individual processing by internal representation and social processing by external representation. This case study suggests that a collaborative visualization strategy has great potential as a method for sharing cognitive or thinking systems by using human visual intelligence.

Modeling Time Pressure Effect on Visual Search Strategy (시간 압박이 시각 탐색 전략에 미치는 영향 모델링)

  • Choi, Yoonhyung;Myung, Rohae
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.42 no.6
    • /
    • pp.377-385
    • /
    • 2016
  • The previous Adaptive Control of Thought-Rational (ACT-R) cognitive architecture model has a limitation in that it cannot accurately predict human visual search strategy, because the time effect, one of the important human cognitive features, is not considered. Thus, the present study proposes ACT-R cognitive modeling that incorporates the impact of time through a revised utility system in the ACT-R model. The model is validated by comparing its results with eye-tracking experimental data and with the SEEV-T (SEEV-Time; the SEEV model extended with a time effect) model in the "Where's Wally" game. The results demonstrate that the model data fit the eye-tracking data ($R^2=0.91$) and the SEEV-T model ($R^2=0.93$) fairly well. Therefore, the modeling method that considers the time effect through a revised utility system should be used when predicting human visual search under limited available time.
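The idea of a time-revised utility can be illustrated with a toy strategy-selection rule. The $U = PG - C$ form follows classic ACT-R utility, but the specific time-pressure weighting below is a hypothetical stand-in for the paper's revised utility system, not its actual formula:

```python
def choose_strategy(strategies, time_left, total_time):
    """Pick the search strategy with the highest time-revised utility.

    Each strategy is a dict with success probability "p", goal value
    "g", and time cost "cost". As the deadline approaches, the cost
    term is weighted up, so slow-but-accurate strategies lose out.
    """
    pressure = 1.0 - time_left / total_time  # 0 = no pressure, 1 = deadline
    def utility(s):
        return s["p"] * s["g"] - s["cost"] * (1.0 + pressure)
    return max(strategies, key=utility)
```

With ample time the careful strategy wins; at the deadline the quick one does, which is the qualitative shift in search strategy the model is built to capture.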

Visual Search Model based on Saliency and Scene-Context in Real-World Images (실제 이미지에서 현저성과 맥락 정보의 영향을 고려한 시각 탐색 모델)

  • Choi, Yoonhyung;Oh, Hyungseok;Myung, Rohae
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.41 no.4
    • /
    • pp.389-395
    • /
    • 2015
  • According to much research in cognitive science, the impact of scene-context on human visual search in real-world images can be as important as saliency. Therefore, this study proposes a method of Adaptive Control of Thought-Rational (ACT-R) modeling of visual search in real-world images based on saliency and scene-context. The modeling method uses the utility system of ACT-R to describe the influences of saliency and scene-context in real-world images. The model was then validated by comparing its data with eye-tracking data from experiments in a simple task in which subjects searched for targets in indoor bedroom images. The results show that the model data fit the eye-tracking data quite well. In conclusion, the method of modeling human visual search proposed in this study should be used to provide an accurate model of human performance in visual search tasks in real-world images.
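The saliency/scene-context combination can be sketched as a weighted score over candidate regions; the linear combination and the weights below are illustrative, whereas the paper expresses both influences through ACT-R's utility system:

```python
def next_fixation(regions, w_sal=0.5, w_ctx=0.5):
    """Choose the next fixation target among candidate regions.

    Each region carries a bottom-up "saliency" and a scene-context
    plausibility "context", both normalized to [0, 1]; the weights
    set the relative influence of the two cues.
    """
    return max(regions,
               key=lambda r: w_sal * r["saliency"] + w_ctx * r["context"])
```

Shifting weight toward context moves gaze from a merely conspicuous object to a contextually plausible target location, the effect the study measures against eye-tracking data.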

Perception of Ship's Movement in Docking Maneuvering using Ship-Handling Simulator

  • Arai, Yasuo;Minamiya, Taro;Okuda, Shigeyuki
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2006.10a
    • /
    • pp.3-10
    • /
    • 2006
  • Recently, the visual systems of ship-handling simulators have achieved high realism owing to developments in 3D computer graphics. Yet even with high realism, the visual information presented to seafarers through a screen or display may not be equivalent to the real world. In docking maneuvering, visual targets or obstructions are sighted close to the ship's operator, within a few hundred meters, so factors such as the difference between binocular and monocular sight may affect the visual information. Because the very slow movement of very large vessels cannot be perceived visually, Doppler docking SONAR and/or docking speed and distance measurement equipment were developed and applied for safe docking maneuvering. Simulator training includes ship-maneuvering training in docking, but both in the ship-handling simulator and onboard there are limitations in perceiving the ship's movement from visual information. In this paper, we examine the perception of ship's movement with the visual system of a ship-handling simulator and compare the performance of two visual systems: a conventional screen type with a fixed eye-point and a mission simulator. We draw conclusions not only on the effectiveness of the visual systems but also on human behavior in docking maneuvers.


An Image Watermarking Scheme by Image Fusion in the Frequency Domain (주파수 영역에서 영상융합에 의한 영상 워터마킹 기법)

  • Ahn Chi-Hyun;Shin Phil-Sun;Hwang Jae-Ho;Hong Choong-Seon;Lee Dae-Young;Kim Dong-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.10
    • /
    • pp.1411-1420
    • /
    • 2005
  • This paper presents a robust watermarking approach in which the frequency coefficients of a binary logo image are inserted into the DC and other frequency areas of the host image for copyright protection of image data. We use the one-level discrete wavelet transform (DWT) coefficients of a $64\times64$ binary logo image as the watermark, because the presentation of a recognizable mark is much more convincing than numerical values and exploits the human visual system's ability to recognize a pattern. In the proposed method, the DWT coefficients of the logo image are inserted, guided by the human visual system (HVS) and a region of interest (ROI), into the frequency domain of the host image; the detected logo image then confirms copyright. Because a small watermark is inserted using HVS and ROI, the results confirm the superiority of the proposed method in invisibility and robustness.

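The core operations, a one-level DWT and additive coefficient embedding, can be sketched with the Haar wavelet. This is a generic illustration: the paper additionally modulates the embedding strength per region via HVS and ROI analysis, whereas the fixed alpha below is a simplification, and matching coefficient sizes are assumed.

```python
def haar2d(img):
    """One-level 2-D Haar DWT of an even-sized grayscale image,
    returning a same-sized array laid out as LL|HL over LH|HH."""
    h, w = len(img), len(img[0])
    # Transform rows: averages to the left half, differences to the right.
    tmp = []
    for row in img:
        avg = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(w // 2)]
        dif = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(w // 2)]
        tmp.append(avg + dif)
    # Transform columns the same way.
    out = [[0.0] * w for _ in range(h)]
    for j in range(w):
        for i in range(h // 2):
            out[i][j] = (tmp[2 * i][j] + tmp[2 * i + 1][j]) / 2
            out[i + h // 2][j] = (tmp[2 * i][j] - tmp[2 * i + 1][j]) / 2
    return out

def embed(host_coeffs, logo_coeffs, alpha=0.05):
    """Additively embed the logo's DWT coefficients into the host's;
    alpha trades invisibility against robustness."""
    return [[hc + alpha * lc for hc, lc in zip(hr, lr)]
            for hr, lr in zip(host_coeffs, logo_coeffs)]
```

Detection inverts the step: subtracting the original host coefficients and dividing by alpha recovers the logo coefficients, so the extracted logo can be judged by eye.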

Digital Image Watermarking using Human Visual System and Discrete Cosine Transform (인지 시각시스템 및 이산코사인변환을 이용한 디지털 이미지 워터마킹)

  • 변성철;김종남;안병하
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.1
    • /
    • pp.17-23
    • /
    • 2003
  • In this paper, we propose a digital watermarking scheme for digital images based on a perceptual model: the frequency masking, texture masking, and luminance masking properties of the human visual system (HVS), which were developed in the context of image compression. We embed two types of watermark: pseudo-random (PN) sequences and a logo image. To embed the watermarks, the original images are decomposed into $8\times8$ blocks, and the discrete cosine transform (DCT) is carried out for each block. Watermarks are cast in the low-frequency components of the DCT coefficients. The perceptual model adaptively adjusts the scaling factors for embedding watermarks according to local image properties. Experimental results show that the proposed scheme achieves better image quality than non-perceptual watermarking methods without loss of robustness.
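The block pipeline can be sketched as an $8\times8$ DCT followed by casting bits into low-frequency AC coefficients. The coefficient positions and the fixed alpha below are illustrative; in the paper, alpha is adapted per block by the HVS masking model.

```python
import math

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block of pixel values."""
    def c(k):
        return math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

def embed_block(coeffs, wm_bits, alpha):
    """Cast watermark bits into low-frequency AC coefficients at the
    assumed positions (0,1), (1,0), (1,1); alpha plays the role of
    the perceptual scaling factor."""
    for (u, v), bit in zip([(0, 1), (1, 0), (1, 1)], wm_bits):
        coeffs[u][v] += alpha * (1 if bit else -1)
    return coeffs
```

For a flat block, only the DC coefficient is nonzero before embedding, so the watermark shifts are directly visible in the AC terms; texture and luminance masking would let alpha grow in busy or bright blocks without visible artifacts.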