• Title/Summary/Keyword: Eye Image

3-D OCT Image Reconstruction for Precision Analysis of Rat Eye and Human Molar (쥐 눈과 인간 치아의 정밀한 단층정보 분석을 위한 OCT 3-D 영상 재구성)

  • Jeon, Ji-Hye;Na, Ji-Hoon;Yang, Yoon-Gi;Lee, Byeong-Ha;Lee, Chang-Su
    • The KIPS Transactions:PartB
    • /
    • v.14B no.6
    • /
    • pp.423-430
    • /
    • 2007
  • Optical coherence tomography (OCT) is a high-resolution imaging technique that can image the cross section of microscopic structures in living tissue at a resolution of about 1 μm. In this paper, we implement an OCT system and acquire 2-D images of rat eye and human molar samples, targeting the fields of ophthalmology and dentistry. From the 2-D images, we reconstruct 3-D OCT images, which provide additional internal structural information about the target objects. OpenGL reduces the 3-D processing time to roughly one tenth of that required by MATLAB.
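
The reconstruction step described above is, at its core, a matter of stacking the acquired 2-D B-scans into a 3-D volume that can then be resliced to expose internal structure. Below is a minimal sketch of that idea in Python with NumPy, using synthetic arrays in place of the acquired OCT images; it is not the authors' OpenGL implementation.

```python
import numpy as np

def stack_bscans(bscans):
    """Stack 2-D OCT B-scans (depth x width) into a 3-D volume
    indexed as (slice, depth, width)."""
    return np.stack(bscans, axis=0)

def enface_slice(volume, depth_index):
    """Extract an en-face (C-scan) plane at a fixed depth, i.e. one kind
    of internal structural view recovered from the stacked 2-D scans."""
    return volume[:, depth_index, :]

# Synthetic stand-ins for 64 acquired B-scans of 512 x 256 pixels.
bscans = [np.random.rand(512, 256) for _ in range(64)]
volume = stack_bscans(bscans)
print(volume.shape, enface_slice(volume, 100).shape)  # (64, 512, 256) (64, 256)
```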

Robust Eye Localization using Multi-Scale Gabor Feature Vectors (다중 해상도 가버 특징 벡터를 이용한 강인한 눈 검출)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.25-36
    • /
    • 2008
  • Eye localization means locating the centers of the pupils and is necessary for face recognition and related applications. Most eye localization methods reported so far still need improvement in robustness as well as precision for successful applications. In this paper, we propose a robust eye localization method using multi-scale Gabor feature vectors without a large computational burden. Eye localization using Gabor feature vectors is already employed in methods such as EBGM, but the approach used in EBGM is known not to be robust with respect to initial values, illumination, and pose, and may need an extensive search range to achieve the required performance, which can cause a large computational burden. The proposed method uses a multi-scale approach. It first localizes the eyes in the lowest-resolution face image by computing the Gabor jet similarity between the Gabor feature vector at an estimated initial eye position and the Gabor feature vectors in the eye model of the corresponding scale. It then localizes the eyes in the next-scale face image in the same way, but with the initial eye points estimated from the eye coordinates found at the lower resolution. Repeating this process recursively, the proposed method finally localizes the eyes in the original-resolution face image. The method also provides an effective illumination normalization to make the multi-scale approach more robust to illumination; applying this normalization in the preprocessing stage further improves the eye detection success rate. Experimental results verify that the proposed eye localization method improves the precision rate without a large computational overhead compared with eye localization methods reported in previous research, and that it is robust to variations in pose and illumination.
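
As a rough illustration of the coarse-to-fine search described above, the sketch below builds a simple Gabor jet at a pixel and refines an eye estimate level by level, carrying each level's result (scaled up) to the next. The kernel parameters, search window, and eye-model jets are assumptions made for illustration, not the paper's actual configuration.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta):
    """Real-valued Gabor kernel (a stand-in for the paper's Gabor wavelets)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    sigma = 0.56 * wavelength
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_jet(image, point, wavelengths=(4, 8, 16), orientations=8, size=17):
    """Gabor jet: filter responses sampled at one pixel (the point must lie
    well inside the image so the patch is complete)."""
    r, c = point
    half = size // 2
    patch = image[r - half:r + half + 1, c - half:c + half + 1]
    return np.array([np.sum(patch * gabor_kernel(size, lam, k * np.pi / orientations))
                     for lam in wavelengths for k in range(orientations)])

def jet_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def localize_eye(pyramid, model_jets, init):
    """Coarse-to-fine refinement: search a small window at each pyramid
    level (coarse to fine) and pass the scaled-up result to the next level."""
    point = init
    for level, (image, model_jet) in enumerate(zip(pyramid, model_jets)):
        candidates = [(point[0] + dr, point[1] + dc)
                      for dr in range(-3, 4) for dc in range(-3, 4)]
        point = max(candidates,
                    key=lambda p: jet_similarity(gabor_jet(image, p), model_jet))
        if level + 1 < len(pyramid):
            point = (point[0] * 2, point[1] * 2)  # map to the next, finer scale
    return point
```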

A Study on the Improvement of the Facial Image Recognition by Extraction of Tilted Angle (기울기 검출에 의한 얼굴영상의 인식의 개선에 관한 연구)

  • 이지범;이호준;고형화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.7
    • /
    • pp.935-943
    • /
    • 1993
  • In this paper, a robust recognition system for tilted facial images was developed. First, a standard facial image and a tilted facial image are captured by a CCTV camera and then transformed into binary images. Each binary image is processed with a Laplacian edge operator to obtain a contour image. We trace and delete the outermost edge line and use the inner contour lines. We label four inner contour lines in order, extract the left and right eyes from the known distance relationship, and calculate the slope from the two eye coordinates. Finally, we rotate the tilted image according to the slope and compute the ten distance features between facial elements. To make the system invariant to image scale, we normalize these features by the distance between the left and right eyes. Experimental results show an 88% recognition rate for twenty-five face images when the tilt angle is taken into account and a 60% recognition rate when it is not.
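
The tilt-correction step above reduces to computing the angle of the line through the two detected eye centres and scaling the distance features by the inter-eye distance. A minimal sketch with hypothetical eye coordinates follows; it is not the authors' full pipeline.

```python
import math

def tilt_angle(left_eye, right_eye):
    """Angle (degrees) of the line through the two eye centres,
    with image coordinates (x, y) where y grows downwards."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def normalize_features(distances, left_eye, right_eye):
    """Divide the distance features by the inter-eye distance so that
    they become invariant to image scale."""
    eye_dist = math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    return [d / eye_dist for d in distances]

# Hypothetical eye coordinates from the contour-labelling step.
left_eye, right_eye = (120, 210), (200, 230)
print(round(tilt_angle(left_eye, right_eye), 1))  # rotate the image by the negative of this angle
```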

A Study on the Changes in the Make-up Color and Texture by the Type of Make-up Image Shown in the Beauty Trends (뷰티 트랜드에 따른 화장 이미지 유형별 화장색채와 질감 변화 분석)

  • Kim, Myoung-Lee;Lee, Kyung-Hee
    • Fashion & Textile Research Journal
    • /
    • v.13 no.3
    • /
    • pp.409-417
    • /
    • 2011
  • This study investigates the makeup images shown in beauty trends and analyzes the characteristics of makeup colors by type of facial makeup. The collected data comprise a total of 365 makeup colors presented in beauty trends over the last three years. By analyzing the pictures and vocabulary in these data, we classified six types of makeup images. In addition, makeup colors were divided into two subcategories, eye makeup and lip makeup, which have the greatest impact on the makeup image. The makeup image types identified in the beauty trends were classified as natural, gorgeous, elegant, sophisticated, and romantic images. Looking at yearly changes, active, romantic, and elegant images were common in 2008; the natural image showed a strong tendency amid pro-environmental trends in 2009; and gorgeous images became prominent in 2010, while the natural image remained strong. Regarding the color characteristics of the makeup images, YR colors in eye makeup and R colors in lip makeup were generally dominant, and many changes appeared in color tone, which helps in grasping the fashion colors and color tones of each year's makeup. Based on these results, this study examines the makeup colors used to express each makeup image and suggests that the findings can be utilized in makeup color planning.

Characteristics of Visual Attention for the Different Type of Material Finishing in Cafe Space Using by Eye-tracking (시선추적을 이용한 카페 공간 마감재 차이의 시각주의력 특성)

  • Choi, Jin-Kyung;Kim, Ju-Yeon
    • Korean Institute of Interior Design Journal
    • /
    • v.27 no.2
    • /
    • pp.3-11
    • /
    • 2018
  • This study aims to investigate whether eye gaze on images of a cafe space changes with the floor finishing material. In Yarbus' experiment, he argued that changing the information an observer is asked to obtain from an image changes the pattern of eye movements. Based on this scan-path evidence, the research asks: (1) whether visual attention differs between the finishing-material stimuli, (2) how visual attention develops during the initial activity time and what movement paths occur across the AOIs, and (3) how the floor area relates visually to the other AOIs. Eye movements were recorded with the SMI REDn Scientific, which sampled eye position at 30 Hz for 2 minutes (120 s). Although viewing was binocular, only the right eye was tracked. Sixty-six observers (mean age 22 years, standard deviation ±1.82) participated in the experiment, which began with four-point calibration and validation procedures. Fixation counts and durations were analyzed for four AOIs in each stimulus (AOI I-Floor, AOI II-Wall, AOI III-Ceiling, and AOI IV-Counter). The results are as follows. First, the average number of fixations on the AOIs differed significantly between the spatial image with wood-tile flooring and the one with polished tile: the wood-tile stimulus produced more fixations on AOI-II, AOI-III, and AOI-IV, whereas AOI-I received more attention in the polished-tile stimulus. Second, observers examined AOI-II intensively in both stimuli; attention then went to AOI-IV and AOI-I in the wood-tile stimulus and to AOI-I and AOI-IV in the polished-tile stimulus. Third, the visual-attention data for each AOI were divided into 5-second time ranges for both images: in the wood-tile stimulus the gaze followed a horizontal path through AOI-II, AOI-IV, and AOI-II, while in the polished-tile stimulus it moved vertically through AOI-II, AOI-I, and AOI-II. Through these eye-tracking experiments, the study identified the characteristics of visual attention and the gaze paths that appear under different viewing intentions.
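
The AOI analysis described above amounts to aggregating fixation counts and total fixation durations per AOI. A minimal sketch with hypothetical fixation records (the actual export format of the SMI software is not assumed here):

```python
from collections import defaultdict

# Hypothetical (AOI label, fixation duration in ms) records; AOI names follow the paper.
fixations = [
    ("AOI I-Floor", 180), ("AOI II-Wall", 240), ("AOI II-Wall", 310),
    ("AOI IV-Counter", 150), ("AOI III-Ceiling", 90), ("AOI I-Floor", 200),
]

counts, durations = defaultdict(int), defaultdict(int)
for aoi, dur in fixations:
    counts[aoi] += 1
    durations[aoi] += dur

for aoi in sorted(counts):
    print(f"{aoi}: {counts[aoi]} fixations, {durations[aoi]} ms total")
```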

Real-time Face Detection Method using SVM Classifier (SVM 분류기를 이용한 실시간 얼굴 검출 방법)

  • 지형근;이경희;반성범
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.529-532
    • /
    • 2003
  • In this paper, we describe a new method for detecting faces in real time. We use color, edge, and binary information to detect candidate eye regions in the input image, and then extract the face region using the detected eye pair. We verify both the eye candidate regions and the face region using Support Vector Machines (SVM). Fast and reliable face detection is possible because these verification steps prevent false detections. The experimental results confirm that the proposed algorithm shows excellent face detection performance.
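
As a sketch of the verification idea, the code below trains an SVM on randomly generated stand-ins for eye and non-eye patches and keeps only the candidate regions the classifier accepts. scikit-learn's SVC is used as a generic SVM here; the paper does not specify this library or these features.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in training data: flattened 20 x 20 grey-level patches labelled
# eye (1) or non-eye (0). Real training data would come from labelled images.
rng = np.random.default_rng(0)
X = np.vstack([rng.random((100, 20 * 20)), rng.random((100, 20 * 20))])
y = np.array([1] * 100 + [0] * 100)

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

def verify_candidates(candidate_patches):
    """Keep only the candidate regions the SVM classifies as eyes."""
    return [p for p in candidate_patches if clf.predict(p.reshape(1, -1))[0] == 1]

accepted = verify_candidates([rng.random(20 * 20) for _ in range(5)])
print(len(accepted), "candidates accepted")
```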

Exploring the Analysis of Male and Female Shopper's Visual Attention to Online Shopping Information Contents: Emphasis on Human Brand Image (온라인 쇼핑정보에 대한 남성과 여성 간 시각 주의도 탐색 연구: 휴먼 브랜드 이미지를 중심으로)

  • Hwang, Yoon Min;Lee, Kun Chang
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.2
    • /
    • pp.328-339
    • /
    • 2019
  • The shopping information contents shown on online shopping sites represent online retailers' intention to draw potential consumers' visual attention. Unfortunately, previous studies show that most shopping information contents are designed to appeal to consumers' visual attention without systematic analysis of how consumers' visual reactions may differ by gender. To fill this research gap, this study proposes an eye-tracking approach to investigate how gender affects consumers' visual attention to human brand image contents on online shopping sites. For the eye-tracking experiments, we adopted two types of products: a notebook computer as a utilitarian product and perfume as a hedonic product. The results revealed that female consumers pay more visual attention to human brand image contents than male consumers. Moreover, the gender difference is larger when the human brand image contents accompany a hedonic product such as perfume than a utilitarian product such as a notebook computer. From these eye-tracking results, the study discusses the theoretical background of gender differences in responses to online shopping information contents and the related human brand image contents.
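
A gender comparison of this kind typically comes down to comparing an attention measure, such as total dwell time on the human brand image AOI, between the two groups. The sketch below runs a Welch t-test on hypothetical dwell times; the paper's actual measures and statistical tests are not specified here.

```python
import numpy as np
from scipy import stats

# Hypothetical total dwell times (ms) on the human brand image AOI.
male_dwell = np.array([410, 380, 450, 390, 420, 400])
female_dwell = np.array([520, 560, 490, 540, 510, 530])

t_stat, p_value = stats.ttest_ind(female_dwell, male_dwell, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```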

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook;Sim, Woodam;Kim, Kyoungmin;Lim, Joongbin;Lee, Jung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1407-1422
    • /
    • 2022
  • This study classified tree species and assessed the classification accuracy using SE-Inception, a classification-based deep learning model. The input images of the dataset were taken from Worldview-3 and GeoEye-1 imagery, and the input image size was set to 10 × 10 m, 30 × 30 m, and 50 × 50 m to compare and evaluate species-classification accuracy. The label data were divided into five tree species (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the divided images, and labeling was then performed manually. The dataset comprised a total of 2,429 images, of which about 85% were used as training data and about 15% as validation data. Classification with the deep learning model achieved an overall accuracy of up to 78% with the Worldview-3 images and up to 84% with the GeoEye-1 images, showing high classification performance. In particular, Quercus achieved an F1 score above 85% regardless of input image size, but species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, produced many errors. There may therefore be limits to extracting features from the spectral information of satellite images alone, and classification accuracy may be improved by using images containing additional pattern information such as vegetation indices and the Gray-Level Co-occurrence Matrix (GLCM).
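
SE-Inception augments Inception modules with Squeeze-and-Excitation (SE) channel re-weighting. The block below is a generic SE block in PyTorch, shown only to illustrate that mechanism; it is not the authors' full network, dataset pipeline, or training setup.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pooling followed by a small
    bottleneck MLP whose sigmoid output re-weights the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

# Feature maps for a small batch of image patches (shapes are illustrative).
x = torch.randn(4, 64, 8, 8)
print(SEBlock(64)(x).shape)  # torch.Size([4, 64, 8, 8])
```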

Development of Videooculograph for Vestibular Function Test (전정 기능 평가를 위한 영상 안구 운동 측정 시스템의 개발)

  • 김수찬;남기창;이원선;김덕원
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.40 no.3
    • /
    • pp.189-198
    • /
    • 2003
  • Videooculography (VOG) is one of the eye-movement measurement methods used for objective evaluation of the vestibulo-ocular reflex. A key requirement of VOG is to estimate the center of the pupil and ocular torsion accurately while being minimally affected by upper-eyelid droop, eyelashes, corneal reflection, and eye blinks. In particular, it is important to find the exact pupil center in 3-D VOG, because an inaccurate pupil center causes significant errors in measuring torsional eye movement. A new algorithm was proposed to find a pupil center that is little influenced by the factors mentioned above. In this study, a real-time three-dimensional VOG system that can measure horizontal, vertical, and torsional eye movements as well as the pupil diameter was implemented using the proposed method.
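
A common baseline for pupil-centre estimation is to threshold the dark pupil region and take the centroid of the resulting pixels; the paper's algorithm goes further in order to handle eyelid droop, eyelashes, corneal reflections, and blinks. A minimal sketch of the baseline on a synthetic eye image:

```python
import numpy as np

def pupil_center(gray, threshold=40):
    """Rough pupil-centre estimate: the pupil is the darkest blob, so
    threshold the greyscale image and take the centroid of the dark pixels."""
    ys, xs = np.nonzero(gray < threshold)
    if xs.size == 0:
        return None  # e.g. during an eye blink
    return float(xs.mean()), float(ys.mean())

# Synthetic eye image: bright background with a dark circular "pupil".
img = np.full((240, 320), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:240, 0:320]
img[(yy - 120) ** 2 + (xx - 160) ** 2 < 30 ** 2] = 10
print(pupil_center(img))  # roughly (160.0, 120.0)
```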

A Study on the Differences in Cognition of Design Associated with Changes in Fashion Model Type - Exploratory Analysis Using Eye Tracking - (패션 모델 유형 변화에 따른 디자인 인지 차이에 관한 연구 - 시선추적을 활용한 탐색적 분석 -)

  • Lee, Shin-Young
    • Fashion & Textile Research Journal
    • /
    • v.20 no.2
    • /
    • pp.167-176
    • /
    • 2018
  • In this study, an eye-tracking program that can confirm the design cognition process was developed in order to present strategic methods for creating fashion images, and the program was used to identify the effects of fashion models' external characteristics on the cognition of design. The data were collected through an eye-movement tracking experiment and a survey, focusing on the research question of whether differences in models' external uniformity lead to differences in the eye movements used to perceive models and designs as well as in image sensibility. The results of the analysis are as follows. First, it was confirmed that the uniformity of model types and the simplicity or complexity of the design led to differences in the eye movements directed at the design and the models and in the gaze ratio. Consequently, models should be selected in consideration of the characteristics of the design and the planning intention when creating fashion images. Second, in terms of the cognition of design, the external conditions of models were found to affect design sensibility: a change in models led to a subtle difference in sensibility cognition even when the design did not change. Thus, not only the design but also the model attributes are factors that should be considered important in fashion planning.