• Title/Summary/Keyword: 눈탐지 (eye/snow detection)

Search Results: 67

Eye Tracking Kiosk System using Eyetracker (Eyetracker를 활용한 아이트래킹 키오스크 시스템)

  • Noh, Hyun-Soo; Kim, Jung-Jae; Won, Jong-Un; Lim, Hee-Ho; Li, Ji-Yoon; Jung, Soon-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.1184-1187 / 2021
  • The system recognizes the user from images captured by the kiosk's camera and detects the user's eye movements. By letting the user select the on-screen position they are currently looking at, it removes the need to touch the kiosk, addressing hygiene concerns and providing an intuitive service.
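
A minimal Python sketch of the non-contact selection idea in the entry above: map a normalized gaze estimate onto screen coordinates and treat a sustained fixation inside a button region as a selection. The gaze source, button layout, and dwell threshold are illustrative assumptions, not the authors' implementation.

```python
import time

# Hypothetical kiosk screen and button layout (illustrative only).
SCREEN_W, SCREEN_H = 1080, 1920
BUTTONS = {
    "menu":    (100, 300, 980, 600),     # (x1, y1, x2, y2)
    "order":   (100, 700, 980, 1000),
    "payment": (100, 1100, 980, 1400),
}
DWELL_SECONDS = 1.0  # how long gaze must rest on a button to "click" it


def gaze_to_screen(nx: float, ny: float) -> tuple[int, int]:
    """Map a normalized gaze estimate (0..1 per axis) to screen pixels."""
    return int(nx * SCREEN_W), int(ny * SCREEN_H)


def hit_button(x: int, y: int):
    """Return the button the gaze point falls inside, if any."""
    for name, (x1, y1, x2, y2) in BUTTONS.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return None


def dwell_select(gaze_stream):
    """Yield a button name whenever the gaze dwells on it long enough.

    `gaze_stream` is assumed to yield (nx, ny) normalized gaze samples,
    e.g. from an eye-tracker SDK; that producer is not shown here.
    """
    current, since = None, None
    for nx, ny in gaze_stream:
        target = hit_button(*gaze_to_screen(nx, ny))
        if target != current:
            current, since = target, time.monotonic()
        elif target is not None and time.monotonic() - since >= DWELL_SECONDS:
            yield target            # contact-free "click"
            current, since = None, None
```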

Development of Adaptive Eye Tracking System Using Auto-Focusing Technology of Camera (눈동자 자동 추적 카메라 시스템 설계와 구현)

  • Wei, Zukuan; Liu, Xiaolong; Oh, Young-Hwan; Yook, Ju-Hye
    • Journal of Digital Contents Society / v.13 no.2 / pp.159-167 / 2012
  • Eye tracking technology tracks human eye movements to understand the user's intention. The technology has been steadily improving and can now be used in a variety of situations; for example, it enables persons with disabilities to operate a computer with their eyes. After introducing the design principles and implementation details of an eye tracking system, this article presents a typical implementation of such a system for persons with disabilities. The self-adapting regulation algorithm is discussed in detail: it uses a feedback signal to control lens movement, realizing automatic focusing and producing a clear eye image. This CCD camera auto-focusing method adapts to changes in external light intensity, avoids the trouble of manual adjustment, and improves adjustment accuracy.
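
The self-adapting focus loop described above is, at its core, feedback control: measure image sharpness, nudge the lens, and keep the direction that increases sharpness. Below is a hedged hill-climbing sketch in Python; the frame-capture and lens-control calls are placeholders for whatever camera interface is actually used, and the sharpness measure is one common choice, not the paper's.

```python
import numpy as np


def sharpness(gray: np.ndarray) -> float:
    """Simple focus measure: variance of the image gradient (higher = sharper)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return float(np.var(gx) + np.var(gy))


def autofocus(capture_frame, move_lens, steps: int = 50, step_size: int = 4):
    """Hill-climbing autofocus driven by a feedback signal.

    `capture_frame()` should return a grayscale frame as a NumPy array and
    `move_lens(offset)` should nudge the lens position; both are assumed
    camera-specific callables, not part of any particular SDK.
    """
    direction = 1
    best = sharpness(capture_frame())
    for _ in range(steps):
        move_lens(direction * step_size)
        score = sharpness(capture_frame())
        if score < best:
            direction = -direction              # overshot: reverse direction
            step_size = max(1, step_size // 2)  # and refine the step
        best = max(best, score)
    return best
```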

Development of a Deep-Learning Model with Maritime Environment Simulation for Detection of Distress Ships from Drone Images (드론 영상 기반 조난 선박 탐지를 위한 해양 환경 시뮬레이션을 활용한 딥러닝 모델 개발)

  • Jeonghyo Oh; Juhee Lee; Euiik Jeon; Impyeong Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1451-1466 / 2023
  • In the context of maritime emergencies, the utilization of drones has rapidly increased, with a particular focus on their application in search and rescue operations. Deep learning models utilizing drone images for the rapid detection of distressed vessels and other maritime drift objects are gaining attention. However, effective training of such models necessitates a substantial amount of diverse training data that considers various weather conditions and vessel states. The lack of such data can lead to a degradation in the performance of trained models. This study aims to enhance the performance of deep learning models for distress ship detection by developing a maritime environment simulator to augment the dataset. The simulator allows for the configuration of various weather conditions, vessel states such as sinking or capsizing, and specifications and characteristics of drones and sensors. Training the deep learning model with the dataset generated through simulation resulted in improved detection performance, including accuracy and recall, when compared to models trained solely on actual drone image datasets. In particular, the accuracy of distress ship detection in adverse weather conditions, such as rain or fog, increased by approximately 2-5%, with a significant reduction in the rate of undetected instances. These results demonstrate the practical and effective contribution of the developed simulator in simulating diverse scenarios for model training. Furthermore, the distress ship detection deep learning model based on this approach is expected to be efficiently applied in maritime search and rescue operations.
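
One simple way to realize the data mix described in the entry above is to concatenate a real drone-image dataset with a simulator-generated one before training. A hedged PyTorch-style sketch follows; the directory names and ImageFolder layout are placeholders, and an actual distress-ship detector would use annotation files rather than folder labels, so this only illustrates the dataset-mixing idea.

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

# Placeholder directories: real drone captures and simulator renders,
# each laid out in torchvision's ImageFolder structure (class subfolders).
transform = transforms.Compose([transforms.Resize((512, 512)),
                                transforms.ToTensor()])
real_ds = datasets.ImageFolder("data/real_drone", transform=transform)
sim_ds = datasets.ImageFolder("data/simulated", transform=transform)

# Training on the union of both sources is the augmentation idea in a nutshell.
train_ds = ConcatDataset([real_ds, sim_ds])
loader = DataLoader(train_ds, batch_size=16, shuffle=True, num_workers=4)
```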

Real Time Discrimination of 3 Dimensional Face Pose (실시간 3차원 얼굴 방향 식별)

  • Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.1 / pp.47-52 / 2010
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust real-time detection and tracking of the pupils. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified in this eigen eye feature space. In the experiments, the discrimination rate for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
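
One common way to exploit the bright-pupil effect mentioned above is to subtract an off-axis IR frame (pupils dark) from an on-axis frame (pupils retro-reflect and appear bright) and threshold the result. A hedged OpenCV sketch under that assumption; acquiring the synchronized frame pair is camera-specific and not shown.

```python
import cv2
import numpy as np


def detect_pupils(bright_frame: np.ndarray, dark_frame: np.ndarray):
    """Locate pupil candidates from an on-axis/off-axis IR frame pair (grayscale)."""
    diff = cv2.subtract(bright_frame, dark_frame)   # pupils stand out in the difference
    blur = cv2.GaussianBlur(diff, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pupils = []
    for c in contours:
        (x, y), r = cv2.minEnclosingCircle(c)
        if 2 < r < 30:                              # keep plausible pupil sizes only
            pupils.append((int(x), int(y), int(r)))
    return pupils
```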

Real Time 3D Face Pose Discrimination Based On Active IR Illumination (능동적 적외선 조명을 이용한 실시간 3차원 얼굴 방향 식별)

  • 박호식; 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.727-732 / 2004
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust real-time detection and tracking of the pupils. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified in this eigen eye feature space. In the experiments, the discrimination rate for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
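
The "eigen eye feature space" classification step shared by this and the previous entry can be read as PCA over pupil-geometry features followed by nearest-neighbor classification of the projected query. A small scikit-learn sketch under that assumption; the feature dimensionality, class count, and stand-in random data are not the papers' actual data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in training data: each row holds pupil-geometry features
# (inter-pupil distance, ellipse ratios, sizes, intensities, ...) and
# each label is a discrete face orientation class.
rng = np.random.default_rng(0)
X_train = rng.random((200, 6))
y_train = rng.integers(0, 5, size=200)

# PCA builds the eigen feature space; k-NN classifies a query projection.
pose_model = make_pipeline(PCA(n_components=3),
                           KNeighborsClassifier(n_neighbors=3))
pose_model.fit(X_train, y_train)

query = rng.random((1, 6))                 # features from one input image
print("predicted face pose class:", pose_model.predict(query)[0])
```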

The Effect of Invisible Cue on Change Detection Performance: using Continuous Flash Suppression (시각적으로 자각되지 않는 단서자극이 변화 탐지 수행에 미치는 효과: 연속 플래시 억제를 사용하여)

  • Park, Hyeonggyu; Byoun, Shinchul; Kwak, Ho-Wan
    • Korean Journal of Cognitive Science / v.27 no.1 / pp.1-25 / 2016
  • The present study investigated the effect sizes of attention and consciousness on change detection. We estimated the effect of consciousness by comparing a condition combining attention and consciousness with a condition of attention without consciousness, and the effect of attention by comparing the attention-without-consciousness condition with a control condition that excluded both attention and consciousness. For this purpose, a change detection task and continuous flash suppression (CFS) were used. CFS renders a highly visible image invisible: one eye is presented with a static stimulus while the other eye is presented with a series of rapidly changing stimuli, such as Mondrian patterns, so that the static stimulus is suppressed from conscious awareness by the stimuli shown to the other eye. Instead of a stereoscope, we used a customized device built from a smartphone and Google Cardboard to induce CFS. In Experiment 1-1, we replicated a previous study to validate the experimental setup; it produced stimulus-suppression durations similar to those reported in the preceding research. In Experiment 1-2, we replicated a study of attention without consciousness using the customized device, and the results showed that attention without consciousness worked more strongly as a cue. We therefore consider it reasonable to use a smartphone-and-Google-Cardboard CFS setup in follow-up studies. In Experiment 2, we measured the effect sizes of consciousness and attention during the change detection task by manipulating the consciousness level of the cue, keeping everything except the variable of interest fixed so that the contribution of this independent variable could be isolated. Correct-response rates on the change detection task differed significantly across consciousness levels of the cue. In sum, we showed not only that the roles of attention and consciousness differ, but also that their effect sizes can be estimated.
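
The CFS setup above hinges on flashing Mondrian-style masks to one eye (typically around 10 Hz) while the other eye views a static target. A minimal NumPy sketch of generating such a mask sequence; the frame size, patch count, and flash rate are illustrative values, not the study's parameters.

```python
import numpy as np


def mondrian_frame(height=400, width=400, n_patches=60, rng=None):
    """Generate one Mondrian-style mask: random colored rectangles on gray."""
    rng = rng or np.random.default_rng()
    frame = np.full((height, width, 3), 128, dtype=np.uint8)
    for _ in range(n_patches):
        x, y = rng.integers(0, width - 40), rng.integers(0, height - 40)
        w, h = rng.integers(20, 120), rng.integers(20, 120)
        color = rng.integers(0, 256, size=3, dtype=np.uint8)
        frame[y:y + h, x:x + w] = color      # NumPy clips slices at the border
    return frame


# A 10 Hz flash train for the suppressing eye, e.g. 5 seconds of masks.
flash_rate_hz, duration_s = 10, 5
masks = [mondrian_frame() for _ in range(flash_rate_hz * duration_s)]
```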

A Study on Extraction of Skin Region and Lip Using Skin Color of Eye Zone (눈 주위의 피부색을 이용한 피부영역검출과 입술검출에 관한 연구)

  • Park, Young-Jae; Jang, Seok-Woo; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.19-30 / 2009
  • In this paper, we propose a method that detects the face and its components in an input image, using an eye map and a mouth map to locate the eyes and mouth. First, the eye zone is found; second, the color distribution of the skin region is estimated from the color around the eye zone. Skin has a characteristic distribution in the YCbCr color space, which we use to separate the skin region from the background. From the color distribution of the extracted skin region, the surrounding skin area is extracted, and the mouth is then detected within that region using the mouth map. The proposed method outperforms the traditional approach because it yields a more accurate mouth region.
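
A hedged OpenCV sketch of the core idea above: sample the skin color just below a detected eye region, model its YCbCr statistics, and threshold the whole image against that model. The eye-box input, the below-the-eye sampling rule, and the tolerance are assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np


def skin_mask_from_eye_zone(bgr: np.ndarray, eye_box, tol: float = 2.5):
    """Segment skin using YCbCr color statistics sampled around a detected eye.

    `eye_box` is an (x, y, w, h) rectangle for one detected eye; the patch
    directly below it is taken as the skin sample.
    """
    x, y, w, h = eye_box
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    sample = ycrcb[y + h:y + 2 * h, x:x + w].reshape(-1, 3).astype(np.float64)

    mean = sample.mean(axis=0)
    std = sample.std(axis=0) + 1e-6
    lower = np.clip(mean - tol * std, 0, 255).astype(np.uint8)
    upper = np.clip(mean + tol * std, 0, 255).astype(np.uint8)

    # Pixels whose Y/Cr/Cb values fall inside the sampled range count as skin.
    return cv2.inRange(ycrcb, lower, upper)
```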

Performance Evaluation of Snow Detection Using Himawari-8 AHI Data (Himawari-8 AHI 적설 탐지의 성능 평가)

  • Jin, Donghyun; Lee, Kyeong-sang; Seo, Minji; Choi, Sungwon; Seong, Noh-hun; Lee, Eunkyung; Han, Hyeon-gyeong; Han, Kyung-soo
    • Korean Journal of Remote Sensing / v.34 no.6_1 / pp.1025-1032 / 2018
  • Snow cover, precipitation that remains as snow on the surface, is the single largest component of the cryosphere; it plays an important role in maintaining the energy balance between the Earth's surface and the atmosphere and helps regulate surface temperature. Because snow cover is mainly distributed in areas that are difficult for humans to access, satellite-based snow detection is actively pursued, and detecting snow in forest areas is as important a step as distinguishing snow from cloud. In this study, we applied the Normalized Difference Snow Index (NDSI) and the Normalized Difference Vegetation Index (NDVI), used for forest-area snow detection with existing polar-orbiting satellites, to a geostationary satellite. Outside the forest areas, snow detection was performed using an $R_{1.61\mu m}$ (1.61 µm reflectance) anomaly technique together with NDSI. Indirect validation against Visible Infrared Imaging Radiometer Suite (VIIRS) snow cover data yielded a probability of detection (POD) of 99.95% and a false alarm ratio (FAR) of 16.63%. We also performed qualitative validation using Himawari-8 Advanced Himawari Imager (AHI) RGB imagery, which showed that areas corresponding to pixels missed by the VIIRS snow cover product were mixed with areas corresponding to this study's false-alarm pixels.
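
For reference, the two indices used above are simple band ratios, NDSI = (Green − SWIR₁.₆₁µm) / (Green + SWIR₁.₆₁µm) and NDVI = (NIR − Red) / (NIR + Red). A NumPy sketch follows; the 0.4/0.3 thresholds are common illustrative values, not the forest-area thresholds derived in the study.

```python
import numpy as np


def ndsi(green: np.ndarray, swir16: np.ndarray) -> np.ndarray:
    """Normalized Difference Snow Index from green and 1.61 um reflectance."""
    return (green - swir16) / (green + swir16 + 1e-9)


def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)


def snow_mask(green, swir16, nir, red, ndsi_thresh=0.4, ndvi_max=0.3):
    """Flag likely snow pixels: high NDSI and a low vegetation signal."""
    return (ndsi(green, swir16) > ndsi_thresh) & (ndvi(nir, red) < ndvi_max)
```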

Data augmentation in voice spoofing problem (데이터 증강기법을 이용한 음성 위조 공격 탐지모형의 성능 향상에 대한 연구)

  • Choi, Hyo-Jung; Kwak, Il-Youp
    • The Korean Journal of Applied Statistics / v.34 no.3 / pp.449-460 / 2021
  • ASVspoof 2017 deals with the detection of replay attacks, aiming to distinguish genuine human voices from spoofed ones. A spoofed voice here is a recording of the original voice replayed through different types of microphones and loudspeakers. Data augmentation has been studied extensively for image data and attempted in several audio studies, but few attempts address voice replay attacks, so this paper explores how audio modification through data augmentation affects replay-attack detection. Seven augmentation techniques were applied; among them, dynamic value change (DVC) and pitch shifting improved performance, each reducing the base model's EER by about 8%. DVC in particular showed a noticeable accuracy gain in some of the 57 replay configurations: the largest increase was for RC53, where DVC improved the base model's accuracy by approximately 45%, and high-end recording and playback devices that had previously been difficult to detect were identified well. Based on these results, the DVC and pitch augmentation techniques help improve performance on the voice spoofing detection problem.
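
The two augmentations that helped, pitch shifting and dynamic value change, can be sketched with librosa and NumPy. The DVC form below, a slow time-varying gain envelope, is an assumption about what "dynamic value change" means, not the paper's exact definition, and the file name is a placeholder.

```python
import numpy as np
import librosa


def pitch_augment(y: np.ndarray, sr: int, n_steps: float = 2.0) -> np.ndarray:
    """Shift pitch by a number of semitones without changing duration."""
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)


def dvc_augment(y: np.ndarray, sr: int = 16000, depth: float = 0.3,
                period_s: float = 1.0) -> np.ndarray:
    """Dynamic value change sketched as a sinusoidal amplitude envelope."""
    t = np.arange(len(y)) / sr
    gain = 1.0 + depth * np.sin(2 * np.pi * t / period_s)
    return (y * gain).astype(y.dtype)


# Usage: load one utterance and produce the two augmented copies.
y, sr = librosa.load("utterance.wav", sr=16000)   # placeholder file name
augmented = [pitch_augment(y, sr), dvc_augment(y, sr=sr)]
```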

Window Production Method based on Low-Frequency Detection for Automatic Object Extraction of GrabCut (GrabCut의 자동 객체 추출을 위한 저주파 영역 탐지 기반의 윈도우 생성 기법)

  • Yoo, Tae-Hoon; Lee, Gang-Seong; Lee, Sang-Hun
    • Journal of Digital Convergence / v.10 no.8 / pp.211-217 / 2012
  • The conventional GrabCut algorithm is semi-automatic: the user must set a rectangular window surrounding the object. This paper studies automatic object extraction that removes this requirement by detecting salient regions based on the human visual system. A saliency map is computed in the Lab color space, which reflects the color-opponent theory of 'red-green' and 'blue-yellow'. Saliency points are then computed from the boundaries of the low-frequency regions extracted from the saliency map. Finally, rectangular windows are obtained from the coordinates of the saliency points, and these windows are used by the GrabCut algorithm to extract objects. Various experiments confirm that the proposed algorithm computes rectangular windows around salient regions and extracts the objects successfully.
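
A hedged sketch of the pipeline in the entry above: an Achanta-style Lab saliency map, a threshold to isolate the salient region, a bounding rectangle from it, and that rectangle passed to OpenCV's GrabCut in place of a user-drawn box. The blur size, Otsu threshold, and iteration count are illustrative choices, not the paper's exact parameters.

```python
import cv2
import numpy as np


def salient_window(bgr: np.ndarray):
    """Bounding rectangle of the salient region from a Lab-space saliency map."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    # Frequency-tuned style saliency: distance of each blurred pixel from the mean Lab vector.
    saliency = np.linalg.norm(blurred - lab.reshape(-1, 3).mean(axis=0), axis=2)
    saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.boundingRect(mask)              # (x, y, w, h)


def extract_object(bgr: np.ndarray) -> np.ndarray:
    """Run GrabCut with the automatically computed rectangle instead of a user box."""
    rect = salient_window(bgr)
    mask = np.zeros(bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return bgr * fg[:, :, None]
```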