• Title/Summary/Keyword: visual estimation method

Estimation of Visual Evoked Potentials Using Time-Frequency Analysis (시-주파수 분석법을 이용한 시각자극 유발전위에 관한 연구)

  • 홍석균;성홍모;윤영로;윤형로
    • Journal of Biomedical Engineering Research
    • /
    • v.22 no.3
    • /
    • pp.259-267
    • /
    • 2001
  • Visual evoked potentials (VEPs) are used to assist in the diagnosis of disorders involving the sensory visual pathways. The P100 latency is an important parameter in the diagnosis of optic nerve disorders; abnormal subjects show latency delay, waveform distortion, and amplitude reduction, and diagnosis is difficult when no clear peak is produced at the P100 latency. In this paper, the difference in pattern between normal and abnormal VEPs is studied using the Choi-Williams distribution, and the relationship between time and spectrum is examined. The results show that normal VEPs had their maximum spectral value at 20 Hz~26.7 Hz, whereas abnormal VEPs had theirs at 16.7 Hz~20 Hz; the spectrum of normal VEPs was also higher than that of abnormal VEPs.

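A rough sense of the spectral-peak comparison reported above can be given with a small time-frequency sketch. The code below is a minimal illustration, not the paper's method: it uses an STFT spectrogram as a stand-in for the Choi-Williams distribution and a synthetic VEP-like signal, then reports the frequency bin with maximum spectral energy (the quantity compared between normal and abnormal VEPs).

```python
# Minimal sketch: locate the frequency of maximum spectral energy in a
# VEP-like signal. The paper uses the Choi-Williams distribution; an STFT
# spectrogram is used here only as a simple stand-in, and the signal is synthetic.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                  # assumed sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)              # 500 ms epoch

# Synthetic "VEP": a burst around 100 ms oscillating near 23 Hz, plus noise.
vep = np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2)) * np.sin(2 * np.pi * 23 * t)
vep += 0.05 * np.random.randn(t.size)

f, tt, Sxx = spectrogram(vep, fs=fs, nperseg=128, noverlap=120)

# Frequency and time bin of the global spectral maximum.
i_f, i_t = np.unravel_index(np.argmax(Sxx), Sxx.shape)
print(f"max spectral energy at {f[i_f]:.1f} Hz, t = {tt[i_t] * 1000:.0f} ms")
```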

A Study on the Visual and Aural Information Effect as the Amenity Evaluation Index (쾌적성 평가지표로서 시각 및 청각정보의 영향에 관한 연구)

  • Shin, Hoon;Song, Min-Jeong;Kim, Sun-Woo;Jang, Gil-Soo
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2007.05a
    • /
    • pp.511-514
    • /
    • 2007
  • This study examines how visual and aural information affects the perception of road traffic noise through a laboratory experiment. ME (magnitude estimation) and SD (semantic differential) evaluations of the visual and aural effects were carried out with 43 university students. As a result, a psychological reduction effect of up to 10% was observed below 65 dB(A). In terms of noise level, vision accounted for a reduction of about 7 dB(A) and sound for about 5 dB(A). When the two were presented simultaneously, sound contributed most to reducing the annoyance of the noise, with vision next. Under urban central conditions this effect (2 dB(A) at 65 dB(A) noise) was smaller than in the field test.

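The ME ratings used in this and the related entries below are usually reduced by taking geometric means per stimulus level and fitting a Stevens-type power law. The sketch below illustrates that reduction with made-up ratings; the data, levels, and resulting exponent are purely hypothetical and not taken from the paper.

```python
# Minimal sketch of magnitude-estimation (ME) data reduction with made-up data:
# geometric mean per noise level, then a Stevens power-law fit. With levels in
# dB(A), Stevens' law R = k * I**n makes log10(R) linear in the level L,
# with slope n / 10.
import numpy as np

# Hypothetical ratings: rows = subjects, columns = noise levels in dB(A).
levels = np.array([55.0, 60.0, 65.0, 70.0])
ratings = np.array([
    [10, 14, 22, 35],
    [ 8, 13, 20, 33],
    [12, 16, 25, 42],
], dtype=float)

# Geometric means are the usual summary for ratio-scale ME responses.
geo_mean = np.exp(np.log(ratings).mean(axis=0))

# Fit log10(R) = (n / 10) * L + log10(k).
slope, intercept = np.polyfit(levels, np.log10(geo_mean), 1)
print("geometric means:", np.round(geo_mean, 1))
print(f"Stevens exponent n = {10 * slope:.2f}, k = {10 ** intercept:.3g}")
```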

Auditory and Visual Information Effect on the Loudness of Noise (시각 및 청각 정보가 소음의 인지도에 미치는 영향)

  • Shin, Hoon;Park, Sa-Gun;Song, Min-Jeong;Jang, Gil-Soo
    • KIEAE Journal
    • /
    • v.6 no.4
    • /
    • pp.69-76
    • /
    • 2006
  • The effects of additional visual and auditory stimuli on the loudness evaluation of road traffic noise were investigated by the method of magnitude estimation. The results show that an additional visual stimulus of a noise barrier can influence the perceived loudness of road traffic noise. Additional auditory stimuli, such as green music or the sound of flowing water, also influence the perceived loudness, yielding ratings approximately 5~10% lower than in the absence of such stimuli. However, this effect disappeared above 65 dB(A).

A Study on the Effects of Visual and Aural Information on Environmental Sound Amenity Evaluation (시각 및 청각 정보가 환경음의 쾌적성 평가에 미치는 영향에 관한 연구)

  • Shin, Hoon;Baek, Kun-Jong;Song, Min-Jeong;Jang, Gil-Soo
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.17 no.9
    • /
    • pp.813-818
    • /
    • 2007
  • This study investigates, in a laboratory experiment, how the perception of road traffic noise changes when visual and aural information is added. ME (magnitude estimation) and SD (semantic differential) evaluations of the visual and aural effects were carried out with 43 university students. As a result, a psychological reduction effect of up to 10 % was observed below 65 dB(A). In terms of noise level, vision accounted for a reduction of about 7 dB(A) and sound for about 5 dB(A). When the two were presented simultaneously, sound contributed most to reducing the annoyance of the noise, with vision next. Under urban central conditions this effect (2 dB(A) at 65 dB(A) noise) was smaller than in the field test.

The Auditory and Visual Information Effects on the Loudness of Noises Perception (친환경적 시각 및 청각정보가 소음의 인지도에 미치는 영향)

  • Shin, Hoon;Song, Min-Jeong;Kook, Chan;Jang, Gil-Soo;Kim, Sun-Woo
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2006.05a
    • /
    • pp.970-973
    • /
    • 2006
  • The effects of additional visual and auditory stimuli on the loudness evaluation of road traffic noise were investigated by the method of magnitude estimation. The results show that an additional visual stimulus of a noise barrier can influence the perceived loudness of road traffic noise. Additional auditory stimuli, such as green music or the sound of flowing water, also influence the perceived loudness, yielding ratings approximately 5~10% lower than in the absence of such stimuli. However, this effect disappeared above 65 dB(A).

Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan;Kim, Gon-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.353-358
    • /
    • 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM can lack stability and accuracy. Multiple-camera setups and wide-field-of-view cameras are commonly used to address this issue, but a multiple-camera system increases the computational complexity of the algorithm. Therefore, for multiple-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, a dataset was collected in an outdoor environment and extensive experiments were performed. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
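
Accuracy in the entry above is reported as the absolute trajectory error (ATE). A minimal sketch of that metric follows, assuming the estimated and ground-truth trajectories are already time-synchronized and expressed in the same frame; full evaluations typically also align the two trajectories with a rigid-body (Umeyama) fit first.

```python
# Minimal sketch: RMS absolute trajectory error (ATE) between ground-truth and
# estimated positions, assuming the poses are already time-aligned and in the
# same frame. Trajectory alignment (e.g. Umeyama) is omitted for brevity.
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """Root-mean-square translational error over corresponding (N, 3) positions."""
    diff = gt_xyz - est_xyz
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Toy example: an estimate that drifts laterally from a straight-line path.
n = 101
gt = np.stack([np.linspace(0.0, 10.0, n), np.zeros(n), np.zeros(n)], axis=1)
drift = np.stack([np.zeros(n), 0.01 * np.arange(n), np.zeros(n)], axis=1)
print(f"ATE (RMSE): {ate_rmse(gt, gt + drift):.3f} m")
```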

A Study on Weldability Estimation of Laser Welded Specimens by Vision Sensor (비전 센서를 이용한 레이져 용접물의 용접성 평가에 관한 연구)

  • 엄기원;이세헌;이정익
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1101-1104
    • /
    • 1995
  • In welding fabrication, users can feel dissatisfied with the surface quality and performance of a part because of welding defects; generally speaking, these are called weld defects. To check for these defects effectively and without loss of time, setting up a weldability estimation system that assesses the quality of the whole specimen is urgent. In this study, raw data on welded specimen profiles are captured with a laser vision camera and treated with vision processing, and qualitative defects are first estimated from this information. At the same time, to detect quantitative defects, weldability estimation of the whole specimen is pursued by multi-feature pattern recognition, a kind of fuzzy pattern recognition. For user-friendliness, the weldability estimation results are presented per profile, as final reports, and as visual graphics, so that the user can easily determine weldability. Applying this system to welding fabrication contributes to on-line weldability estimation.

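The multi-feature fuzzy pattern recognition mentioned above is not spelled out in the abstract, so the sketch below is only a hypothetical illustration of the general idea: each measured profile feature is given a membership in an "acceptable" fuzzy set, and the memberships are aggregated into a single weldability score. The feature names, limits, and min-aggregation rule are assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch of multi-feature fuzzy scoring for weldability: each
# profile feature is mapped to a membership in "acceptable" via a trapezoidal
# function, and the minimum membership is taken as the overall score.
# Feature names and limits are illustrative only.

def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Example measurements from one welded-profile cross section (made up).
memberships = {
    "bead_width_mm":    trapezoid(2.8, 2.0, 2.5, 3.5, 4.0),
    "undercut_mm":      trapezoid(0.1, -1.0, -0.5, 0.2, 0.4),
    "reinforcement_mm": trapezoid(0.9, 0.2, 0.5, 1.2, 1.6),
}

score = min(memberships.values())      # conservative min-aggregation
verdict = "acceptable" if score > 0.5 else "defect suspected"
print({k: round(v, 2) for k, v in memberships.items()})
print(f"weldability score: {score:.2f} ({verdict})")
```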

A New Refinement Method for Structure from Stereo Motion (스테레오 연속 영상을 이용한 구조 복원의 정제)

  • 박성기;권인소
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.11
    • /
    • pp.935-940
    • /
    • 2002
  • For robot navigation and visual reconstruction, structure from motion (SFM) is an active issue in the computer vision community, and its properties are becoming well understood. In this paper, using a stereo image sequence and a direct method as the tool for SFM, we present a new method for overcoming the bas-relief ambiguity. We first show that direct methods based on the optical flow constraint equation are intrinsically exposed to this ambiguity, even though they employ robust techniques. Therefore, regarding the motion and depth estimates obtained by the robust direct method as approximations, we suggest a method that refines both the stereo displacement and the motion displacement with sub-pixel accuracy, which is the central process for reducing the ambiguity. Experiments with real image sequences were carried out, and we show that the proposed algorithm improves the estimation accuracy.
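
The direct methods referred to above are built on the optical flow (brightness constancy) constraint, Ix*u + Iy*v + It = 0. As a rough illustration of how that constraint is used, the sketch below solves for a single constant flow vector over a small synthetic patch in a least-squares sense (the Lucas-Kanade idea); the paper's method additionally couples stereo and motion displacements and refines both to sub-pixel accuracy, which is not reproduced here.

```python
# Minimal sketch: estimate one constant flow vector (u, v) for an image patch
# from the brightness-constancy constraint Ix*u + Iy*v + It = 0, solved by
# least squares. Synthetic images; not the paper's refinement scheme.
import numpy as np

def patch_flow(img0: np.ndarray, img1: np.ndarray) -> np.ndarray:
    Iy, Ix = np.gradient(img0)              # spatial gradients (rows, cols)
    It = img1 - img0                        # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return flow                             # (u, v) in pixels

# Synthetic pair: a smooth pattern shifted by (1.0, 0.5) pixels.
yy, xx = np.mgrid[0:32, 0:32].astype(float)
img0 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
img1 = np.sin(0.3 * (xx - 1.0)) + np.cos(0.2 * (yy - 0.5))
print("estimated flow (u, v):", np.round(patch_flow(img0, img1), 2))
```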

Object Dimension Estimation for Remote Visual Inspection in Borescope Systems

  • Kim, Hyun-Sik;Park, Yong-Suk
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.4160-4173
    • /
    • 2019
  • Borescopes facilitate the inspection of areas inside machines and systems that are not directly accessible for visual inspection. They offer real-time, up-close access to confined and hard-to-reach spaces without having to dismantle or destructure the object under inspection, making them ideal instruments for routine maintenance, quality inspection, and monitoring of systems and structures. Since the main application is fault or defect detection, it is useful to have a measuring capability to quantify object dimensions in the target area. High-end borescopes use multi-optic solutions to provide measurement information about viewed objects; these can provide accurate measurements, but at the expense of increased structural complexity and cost. Measuring functionality is often unavailable in low-end, single-camera borescopes. In this paper, a single-camera measurement solution that enables the size estimation of viewed objects is proposed. The proposed solution computes and overlays a scaled grid of known spacing over the screen view, enabling the human inspector to estimate the size of the objects in view. The proposed method provides a simple means of measurement that is applicable to low-end borescopes with no built-in measurement capability.
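
The scaled grid in the entry above ties an on-screen spacing to a real-world length through the camera geometry. A minimal sketch of that relationship follows, assuming an ideal pinhole model and a known working distance; the focal length and distances used are illustrative values, not taken from the paper.

```python
# Minimal sketch of the pinhole relation behind a scaled overlay grid:
# an object of physical size S at working distance Z spans S * f_px / Z pixels,
# where f_px is the focal length in pixels. All numbers are illustrative.

def grid_spacing_px(spacing_mm: float, distance_mm: float, focal_px: float) -> float:
    """On-screen spacing (pixels) of grid lines `spacing_mm` apart at `distance_mm`."""
    return spacing_mm * focal_px / distance_mm

def object_size_mm(size_px: float, distance_mm: float, focal_px: float) -> float:
    """Invert the projection: physical size of an object spanning `size_px` pixels."""
    return size_px * distance_mm / focal_px

focal_px = 800.0       # assumed focal length in pixels
distance_mm = 50.0     # assumed working distance of the borescope tip

print(f"1 mm grid spacing -> {grid_spacing_px(1.0, distance_mm, focal_px):.1f} px")
print(f"a 40 px defect    -> {object_size_mm(40.0, distance_mm, focal_px):.2f} mm")
```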

Vision-Based Obstacle Collision Risk Estimation of an Unmanned Surface Vehicle (무인선의 비전기반 장애물 충돌 위험도 평가)

  • Woo, Joohyun;Kim, Nakwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.12
    • /
    • pp.1089-1099
    • /
    • 2015
  • This paper proposes a vision-based collision risk estimation method for an unmanned surface vehicle. A robust image-processing algorithm is suggested to detect target obstacles from the vision sensor, and vision-based target motion analysis (TMA), adopting a camera model and optical flow, is performed to transform the visual information into target motion information. The collision risk is then calculated with a fuzzy estimator that takes target motion information and vision information as input variables. To validate the suggested collision risk estimation method, an unmanned surface vehicle experiment was performed.
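
The fuzzy estimator in the last entry maps target motion and vision information to a collision risk, but the abstract does not give its membership functions or rules. The sketch below is therefore a hypothetical stand-in that shows only the general structure: fuzzify two inputs (here, range and closing speed), apply a couple of rules, and aggregate into a single risk value in [0, 1].

```python
# Hypothetical sketch of a fuzzy collision-risk estimator. The inputs (range to
# the obstacle, closing speed), membership functions, and rules are illustrative;
# the paper's estimator uses its own target-motion and vision inputs.

def ramp(x: float, x0: float, x1: float) -> float:
    """Membership rising linearly from 0 at x0 to 1 at x1 (clipped outside)."""
    t = (x - x0) / (x1 - x0)
    return min(1.0, max(0.0, t))

def collision_risk(range_m: float, closing_mps: float) -> float:
    near = ramp(range_m, 100.0, 0.0)      # 1 when very close, 0 beyond 100 m
    fast = ramp(closing_mps, 0.0, 5.0)    # 1 at closing speeds of 5 m/s or more
    # Rule 1: near AND closing fast -> high risk
    # Rule 2: near (any speed)      -> moderate risk
    return max(min(near, fast), 0.5 * near)

print(f"risk at 30 m, 4 m/s closing: {collision_risk(30.0, 4.0):.2f}")
print(f"risk at 80 m, 1 m/s closing: {collision_risk(80.0, 1.0):.2f}")
```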