• Title/Summary/Keyword: Eye Image


An Object Tracking Method using Stereo Images (스테레오 영상을 이용한 물체 추적 방법)

  • Lee, Hak-Chan;Park, Chang-Han;Namkung, Yun;Namkyung, Jae-Chan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.522-534
    • /
    • 2002
  • In this paper, we propose a new object tracking system using stereo images to improve the performance of automatic object tracking. Existing object tracking systems have optimal characteristics but require a large amount of computation. With a single-eye (monocular) image, it is difficult for the system to estimate and track the various transformations of an object. Since translation and rotation are difficult to estimate from a stereo image formed by both eyes, this paper deals with a tracking method that can track the translation of an object in real time, using a block matching algorithm to reduce the computation. The experimental results demonstrate the usefulness of the proposed system, with recognition rates of 88% for rotation, 89% for translation, and 88% for various images, for a mean rate of 88.3%.
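Block matching of the kind this abstract describes can be sketched in software as follows (a minimal illustration assuming grayscale frames as NumPy arrays; the function name and parameters are hypothetical, not taken from the paper):

```python
import numpy as np

def block_match(prev, curr, top, left, size=16, search=8):
    """Find the displacement of a block from `prev` in `curr` by
    minimizing the sum of absolute differences (SAD) over a search window."""
    block = prev[top:top + size, left:left + size].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidate windows that fall outside the frame.
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            cand = curr[y:y + size, x:x + size].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best  # (dy, dx) displacement of the tracked block
```

Exhaustive SAD search like this is simple but costly; the appeal noted in the abstract is that it is far cheaper than estimating full geometric transformations of the object.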

A Study on the Application Technique and 3D Geospatial Information Generation for Optimum Route Decision (최적노선결정을 위한 3차원 지형공간정보생성 및 적용기법연구)

  • Yeon Sangho
    • Proceedings of the KSR Conference
    • /
    • 2003.05a
    • /
    • pp.321-325
    • /
    • 2003
  • The technology for multi-dimensional terrain perspective views can be used as an important factor in planning and design for various construction projects. In this study, a stereo-image perspective view was generated for multi-dimensional analysis by combining a digital map with remotely sensed satellite images. In the course of experimenting with the multi-dimensional topography generated by combining the front-projected image from precise GCPs with a DEM derived from the contour lines, a technique was developed to offer multi-dimensional access to potential construction sites from the nearby main roads. This stereo-image bird's-eye view makes it possible to perform multi-dimensional analysis of the terrain, provides real-time virtual access to the designated construction sites, and will be a versatile application for development planning and construction projects.


A Study on the Generation of Perspective Image View for Stereo Terrain Analysis for the Route Decision of Highway (고속도로 노선선정에서의 입체지형분석을 위한 영상조감도 생성에 관한 연구)

  • Yeon, Sang-Ho;Hong, Ill-Hwa
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.5 no.3
    • /
    • pp.1-8
    • /
    • 2002
  • The technology for three-dimensional terrain perspective views can be used as an important factor in planning and design for various construction projects. In this study, a stereo-image perspective view was generated for multi-dimensional analysis by combining a digital map with remotely sensed satellite images. In the course of experimenting with the three-dimensional topography generated by combining the ortho-image from precise GCPs with a DEM derived from the contour lines, a technique was developed to offer multi-dimensional access to potential construction sites from the nearby main roads. This stereo-image bird's-eye view makes it possible to perform multi-dimensional analysis of the terrain, provides real-time virtual access to the designated construction sites, and will be a versatile application for development planning and construction projects.


Facial Feature Extraction in Reduced Image using Generalized Symmetry Transform (일반화 대칭 변환을 이용한 축소 영상에서의 얼굴특징추출)

  • Paeng, Young-Hye;Jung, Sung-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.569-576
    • /
    • 2000
  • The GST can extract the positions of facial features without prior information about an image. However, this method requires a great deal of processing time, because the mask used for the GST must be larger than the objects in the image, such as the eyes, mouth, and nose. In addition, computing the middle line used to decide the facial features is complex. In this paper, we propose two methods to overcome these disadvantages of the conventional method. First, we use a reduced image that retains enough information, instead of the original image, to decrease the processing time. Second, we use the extracted peak positions, instead of complex statistical processing, to obtain the middle lines. To analyze the performance of the proposed method, we tested 200 images, including frontal, rotated, spectacled, and mustached facial images. As a result, the proposed method shows 85% feature-extraction performance and reduces the processing time by more than a factor of 53 compared with the existing method.


Design of Image Extraction Hardware for Hand Gesture Vision Recognition

  • Lee, Chang-Yong;Kwon, So-Young;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.1
    • /
    • pp.71-83
    • /
    • 2020
  • In this paper, we propose a system that can detect the shape of a hand at high speed using an FPGA. Because real-time processing is important, the hand-shape detection system is designed in Verilog HDL, a hardware description language that processes in parallel, rather than in C++, which runs sequentially. There are several methods for hand-gesture recognition; here, an image processing method is used. Since the human eye is sensitive to brightness, the YCbCr color model was selected among the various color representations to obtain a result that is less affected by lighting. For the Cb and Cr components, only the values corresponding to skin color are filtered from the input image using constraint conditions. To increase the speed of object recognition, a median filter removes noise present in the input image; this filter is designed to compare values and extract the median at the same time, reducing the amount of computation. For parallel processing, the design locates the center line of the hand while scanning and sorting the stored data. The line with the highest count is selected as the center line of the hand, the size of the hand is determined from the count, and the hand and arm parts are separated. The designed hardware circuit satisfied the target operating frequency and gate count.
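The CbCr skin-color filtering step described above can be sketched in software (the paper implements it in Verilog HDL on an FPGA; the thresholds below are commonly used illustrative values, not the paper's exact constraint conditions):

```python
import numpy as np

# Illustrative CbCr skin-color range (assumed, not from the paper).
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def rgb_to_ycbcr(rgb):
    """Full-range RGB -> YCbCr conversion using BT.601 coefficients."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =       0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(rgb):
    """Binary mask of pixels whose Cb/Cr values fall inside the skin range."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return ((CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]) &
            (CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1]))
```

Thresholding on Cb/Cr only, as in the paper, leaves the luma channel out of the decision, which is what makes the filter relatively insensitive to lighting.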

A Study on Radiation Dose and Image Quality according to CT Table Height in Brain CT (두부 CT 검사 시 테이블 높이에 따른 선량과 화질에 관한 연구)

  • Ki-Won Kim;Joo-Young Oh;Jung-Whan Min;Sang-Sun Lee;Young-Bong Lee;Kyung-Hwan Lim;Yun Yi
    • Journal of radiological science and technology
    • /
    • v.46 no.2
    • /
    • pp.99-106
    • /
    • 2023
  • The height of the table should be considered important during computed tomography (CT) examinations; however, according to previous studies, not all radiologic technologists set the table at the patient's center during the examination, which affects the exposure dose received by the patient and the image quality. Therefore, this study examines image quality and exposure dose according to table height, in order to realize the optimal image quality and dose during brain CT scans. Head phantom images were acquired using a Philips Brilliance iCT 256. The table height was set to 815, 865, 915, 965, 1015, and 1030 mm, and each scan was performed 3 times at each height. For exposure dose measurement, optically stimulated luminescence dosimeters (OSLDs) were attached to the front, side, eyes, and thyroid gland of the head phantom. In the signal-to-noise ratio (SNR) measurements, the SNR values at every other table height were lower than at 915 mm. As for exposure dose, the dose to each area increased as the table height decreased. The height of the table is closely related to the patient's radiation exposure dose in CT scans.
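The ROI-based SNR measurement used in image-quality evaluations like the one above can be sketched as follows (a minimal illustration; the ROI placement and function name are assumptions, not from the paper):

```python
import numpy as np

def roi_snr(image, top, left, size):
    """SNR of a square ROI: mean pixel value divided by its
    standard deviation (higher means less relative noise)."""
    roi = image[top:top + size, left:left + size].astype(float)
    return roi.mean() / roi.std()
```

Comparing this ratio for ROIs acquired at different table heights is one common way to quantify how positioning affects noise.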

Evaluation of Unexposed Images after Erasure of Image Plate from CR System (CR 시스템에서 IP 잠상의 소거 후 Unexposed Image의 평가)

  • Lim, Bo-Yeon;Park, Hye-Suk;Kim, Ju-Hye;Park, Kwang-Hyun;Kim, Hee-Joung
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.199-207
    • /
    • 2009
  • It is important to initialize the Image Plate (IP) completely, removing the residual latent image with a sodium lamp, for the reliability and repeatability of a computed radiography (CR) system. The purpose of this study was to evaluate latent images in CR with respect to the delay time after erasure of the preceding latent image and its effect, as well as the erasure level. Erasure thoroughness for the CR acceptance test from American Association of Physicists in Medicine (AAPM) Report 93 (2006) was also evaluated. Measurements were made on a CR system (Agfa CR 25; Agfa, Belgium). Chest postero-anterior (PA), hand PA, and L-spine lateral radiographs were chosen for evaluation. A chest phantom (3D-torso; CIRS, USA) was used for chest PA and L-spine lateral radiography; hand PA projections were done without a phantom. Except in the hand PA radiographs, noise increased with delay time, and a ghost image appeared in the overexposed area. The effect of the delay after erasure on the latent image was not visible to the naked eye, but the standard deviation (SD) of the pixel values in the overexposed area was relatively higher than that of other areas. In the hand PA and chest PA radiographs, no noise occurred when the erasure level was adjusted. In the L-spine lateral images, at erasure levels lower than the standard level, noise including ghost images occurred because of the high tube current. The erasure thoroughness of the CR system in our department was demonstrated by these evaluations. The results of this study could be used as a baseline for IP initialization and the reliability of CR images.


Automatic Extraction of Eye and Mouth Fields from Face Images using MultiLayer Perceptrons and Eigenfeatures (고유특징과 다층 신경망을 이용한 얼굴 영상에서의 눈과 입 영역 자동 추출)

  • Ryu, Yeon-Sik;O, Se-Yeong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.2
    • /
    • pp.31-43
    • /
    • 2000
  • This paper presents a novel algorithm for extraction of the eye and mouth fields (facial features) from 2D gray-level face images. First of all, it has been found that eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set constructed from the eye and mouth fields, are very good features for locating these fields. The eigenfeatures, extracted from positive and negative training samples of the facial features, are used to train a multilayer perceptron (MLP) whose output indicates the degree to which a particular image window contains the eye or the mouth. Second, to ensure robustness, an ensemble network consisting of multiple MLPs is used instead of a single MLP. The output of the ensemble network is the average of the multiple field locations found by the constituent MLPs. Finally, in order to reduce the computation time, we extract a coarse search region for the eyes and mouth using prior information about face images. The advantages of the proposed approach include that only a small number of frontal faces are sufficient to train the networks, and that they generalize well to non-frontal poses and even to other people's faces. It was also experimentally verified that the proposed algorithm is robust against slight variations of facial size and pose, owing to the generalization characteristics of neural networks.


Analysis of Eye-safe LIDAR Signal under Various Measurement Environments and Reflection Conditions (다양한 측정 환경 및 반사 조건에 대한 시각안전 LIDAR 신호 분석)

  • Han, Mun Hyun;Choi, Gyu Dong;Seo, Hong Seok;Mheen, Bong Ki
    • Korean Journal of Optics and Photonics
    • /
    • v.29 no.5
    • /
    • pp.204-214
    • /
    • 2018
  • Since LIDAR is advantageous for accurate information acquisition and for realizing high-resolution 3D images, based on characteristics that can be measured precisely, it is essential to autonomous navigation systems that must acquire and judge accurate peripheral information without user intervention. Recently, as autonomous navigation systems applying LIDAR have come into use in human living spaces, it is necessary to solve the eye-safety problem and to make reliable judgments through accurate obstacle recognition in various environments. In this paper, we construct a single-shot LIDAR system (SSLS) using a 1550-nm eye-safe light source, and report the analysis method and results for LIDAR signals under various measurement environments, reflective materials, and material angles. We analyze the signals of materials with different reflectance in each measurement environment, using a 5% Al reflector and a building wall located at a distance of 25 m, under indoor, daytime, and nighttime conditions. In addition, signal analysis of changes in the material angle is carried out, considering actual obstacles at various angles. This signal analysis has the merit of confirming the correlation between the measurement environment, the reflection conditions, and the LIDAR signal, using the SNR to determine the reliability of the received information and the timing jitter, which is an index of the accuracy of the distance information.

A Study on Land Cover Map of UAV Imagery using an Object-based Classification Method (객체기반 분류기법을 이용한 UAV 영상의 토지피복도 제작 연구)

  • Shin, Ji Sun;Lee, Tae Ho;Jung, Pil Mo;Kwon, Hyuk Soo
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.4
    • /
    • pp.25-33
    • /
    • 2015
  • Ecosystem assessment (ES) studies are based on land cover information and are primarily performed at the global scale. However, as data for decision making, these results are limited in range and scale for solving regional issues. Although the Ministry of Environment provides land cover data at the regional scale, its use is also restricted by the intrinsic limitations of the on-screen digitizing method and by temporal and spatial differences. The objective of this study is to generate a UAV land cover map. In order to classify the imagery, we resampled the UAV imagery to 5 m resolution. The results of object-based image segmentation showed that scale 20 and merge 34 were the optimal weight values for the UAV imagery. For the RapidEye imagery, we found that weight values of scale 30 and merge 30 were the most appropriate at the sub-category level of the land cover classes. We generated land cover imagery using an example-based classification method and analyzed its accuracy using stratified random sampling. The results show that the overall accuracies of the RapidEye and UAV classification imagery are 90% and 91%, respectively.
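The overall accuracy reported in accuracy assessments like the one above reduces to a simple confusion-matrix calculation (a minimal sketch; the example matrix in the test is illustrative, not the paper's data):

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy from a confusion matrix: the diagonal holds
    correctly classified samples, so accuracy = trace / total."""
    confusion = np.asarray(confusion, dtype=float)
    return np.trace(confusion) / confusion.sum()
```

With stratified random sampling, each class contributes a fixed number of reference points to the matrix, which keeps rare classes from being under-sampled in the assessment.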