• Title/Summary/Keyword: Detailed Eye Model


Development of Detailed Korean Adult Eye Model for Lens Dose Calculation

  • Han, Haegin; Zhang, Xujia; Yeom, Yeon Soo; Choi, Chansoo; Nguyen, Thang Tat; Shin, Bangho; Ha, Sangseok; Moon, Sungho; Kim, Chan Hyeong
    • Journal of Radiation Protection and Research / v.45 no.1 / pp.45-52 / 2020
  • Background: Recently, the International Commission on Radiological Protection (ICRP) lowered the dose limit for the eye lens from 150 mSv to 20 mSv, highlighting the importance of accurate lens dose estimation. The ICRP reference computational phantoms used for lens dose calculation are mostly based on data for the Caucasian population and thus might be inappropriate for the Korean population. Materials and Methods: In the present study, a detailed Korean eye model was constructed by determining nine ocular dimensions using data from Korean subjects. The developed eye model was then incorporated into the adult male and female mesh-type reference Korean phantoms (MRKPs), which were then used to calculate lens doses for photons and electrons in idealized irradiation geometries. The calculated lens doses were finally compared with those calculated with the ICRP mesh-type reference computational phantoms (MRCPs) to observe the effect of ethnic difference on lens dose. Results and Discussion: The lens doses calculated with the MRKPs and the MRCPs did not differ much for photons over the entire energy range considered in the present study. For electrons, the differences were generally small, but exceptionally large differences were found in a specific energy range (0.5-1 MeV), the maximum difference being about 10 times at 0.6 MeV in the anteroposterior geometry; the differences are mainly due to the difference in lens depth between the MRCPs and the MRKPs. Conclusion: The MRCPs are generally considered acceptable for lens dose calculations for the Korean population, except for electrons in the energy range of 0.5-1 MeV, for which it is suggested to use the MRKPs incorporating the Korean eye model developed in the present study.
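As a rough illustration of the kind of phantom-to-phantom comparison this abstract describes, the sketch below computes the ratio of lens doses between two phantoms over an electron energy grid and reports the energy with the largest discrepancy. All numbers are hypothetical placeholders, not the published dose coefficients, and the variable names are invented for the example.

```python
# Illustrative sketch (not from the paper): compare lens dose values between
# two phantoms and locate the energy with the largest discrepancy.
# All numerical values below are hypothetical placeholders.
import numpy as np

electron_energy_mev = np.array([0.3, 0.5, 0.6, 0.8, 1.0, 2.0])   # hypothetical grid
dose_mrcp = np.array([1.0, 1.2, 1.5, 2.0, 2.4, 3.0])             # hypothetical values
dose_mrkp = np.array([1.0, 0.9, 0.15, 1.6, 2.3, 3.0])            # hypothetical values

ratio = dose_mrcp / dose_mrkp
worst = np.argmax(np.maximum(ratio, 1.0 / ratio))   # largest discrepancy in either direction
print(f"Largest MRCP/MRKP discrepancy: {ratio[worst]:.1f}x at "
      f"{electron_energy_mev[worst]} MeV")
```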

Generation of Topographic Map Using GeoEye-1 Satellite Imagery for Construction of the Jangbogo Antarctic Station

  • Kim, Eui-Myoung; Hong, Chang-Hee
    • Journal of Korean Society for Geospatial Information Science / v.19 no.4 / pp.101-108 / 2011
  • Construction of the Jangbogo Antarctic station was planned, which requires detailed information on the topography of the area around the station. The purpose of this research is to generate a topographic map for construction of the Jangbogo Antarctic station using satellite imagery. To do this, surveying and pre-testing of equipment were conducted. In addition, for sensor modeling of the GeoEye-1 satellite image, RPC bias correction was performed, and it was shown that at least two control points are required. In generating the map, a 1:2,500 scale was deemed suitable in consideration of the image resolution and the fact that supplementary topographic surveying would be impossible. In order to provide detailed information on the topography around the Jangbogo station, a digital elevation model based on image matching was created; compared with GPS-RTK data, it exhibited a vertical accuracy of about 0.6 m.
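The abstract mentions RPC bias correction with ground control points. The sketch below is a generic, assumption-level illustration of image-space bias compensation (a constant line/sample offset estimated from GCP residuals), not the authors' formulation; the function name and the sample coordinates are invented for the example, and higher-order (drift or affine) bias models would need more control points.

```python
# Generic sketch of image-space RPC bias compensation: project GCP ground
# coordinates through the vendor RPCs elsewhere, then estimate a constant
# line/sample offset from the residuals at the GCPs.
import numpy as np

def rpc_bias_offset(projected_img_xy, measured_img_xy):
    """projected_img_xy, measured_img_xy: (N, 2) arrays of (sample, line).
    Returns the mean offset to add to RPC-projected coordinates."""
    residuals = np.asarray(measured_img_xy) - np.asarray(projected_img_xy)
    return residuals.mean(axis=0)

# Hypothetical GCP residuals (pixels): RPC projection vs. surveyed image positions.
proj = np.array([[1201.4, 983.2], [4510.7, 2211.9]])
meas = np.array([[1203.1, 981.0], [4512.5, 2209.7]])
offset = rpc_bias_offset(proj, meas)
print("Estimated bias (sample, line):", offset)   # added to every RPC projection
```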

A Study on Visual Behavior for Presenting Consumer-Oriented Information on an Online Fashion Store

  • Kim, Dahyun; Lee, Seunghee
    • Journal of the Korean Society of Clothing and Textiles / v.44 no.5 / pp.789-809 / 2020
  • Growth in online channels has created fierce competition; consequently, retailers have to invest increasing effort into attracting consumers. In this study, eye-tracking technology was used to examine consumers' visual behavior in order to understand information searching behavior when exploring product information for fashion products. Product attribute information was classified into two image-based elements (model image information and detail image information) and two text-based elements (basic text information and detail text information), after which consumers' visual behavior for each information element was analyzed. Furthermore, whether involvement affects consumers' information search behavior was investigated. The results demonstrated that model image information attracted visual attention the quickest, while detail text information and model image information received the most visual attention. Additionally, high-involvement consumers tended to pay more attention to detailed information, while low-involvement consumers tended to pay more attention to image-based and basic information. This study is expected to help broaden the understanding of consumer behavior and provide implications for establishing strategies on how to efficiently organize product information for online fashion stores.
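For readers unfamiliar with the eye-tracking measures implied here (how quickly and how long each information element is fixated), the following minimal sketch computes time to first fixation and total dwell time per area of interest (AOI) from an assumed fixation log. The data layout and AOI labels are hypothetical, not the study's actual pipeline.

```python
# Minimal sketch: time to first fixation and total fixation duration per AOI.
from collections import defaultdict

# Hypothetical, time-ordered fixation log: (onset_ms, duration_ms, AOI label)
fixations = [
    (120, 180, "model_image"),
    (310, 240, "detail_text"),
    (560, 200, "model_image"),
    (770, 150, "basic_text"),
]

first_fixation = {}
dwell = defaultdict(int)
for onset, dur, aoi in fixations:
    first_fixation.setdefault(aoi, onset)   # earliest onset per AOI (log is time-ordered)
    dwell[aoi] += dur                       # summed fixation duration per AOI

for aoi in dwell:
    print(f"{aoi}: first fixation at {first_fixation[aoi]} ms, dwell {dwell[aoi]} ms")
```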

Face Detection and Recognition with Multiple Appearance Models for Mobile Robot Application

  • Lee, Taigun; Park, Sung-Kee; Kim, Munsang
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2002.10a / pp.100.4-100 / 2002
  • For visual navigation, a mobile robot can use a stereo camera with a large field of view. In this paper, we propose an algorithm to detect and recognize human faces on the basis of such a camera system. A new coarse-to-fine detection algorithm is proposed. For coarse detection, roughly face-like areas are found in the entire image using dual ellipse templates. Then, detailed alignment of the facial outline and features is performed on the basis of a view-based multiple appearance model. Because it is hard to finely align facial features in this case, the most closely resembling face image area is selected from multiple face appearances using the most distinctive facial features (two eye...

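The coarse-to-fine idea described in the abstract above can be sketched generically: correlate an elliptical edge template with the image's edge map to get coarse face candidates, then run finer feature matching inside each candidate. The OpenCV-based sketch below is an illustration under those assumptions (function names, template sizes, and thresholds are invented), not the authors' implementation.

```python
# Rough coarse-detection sketch: score an elliptical edge template against the
# image's edge map and return the best-scoring candidate windows.
import cv2
import numpy as np

def coarse_face_candidates(gray, axes=(40, 55), top_k=3):
    """Return (x, y, w, h) windows where an elliptical edge template fits best."""
    edges = cv2.Canny(gray, 80, 160)
    tmpl = np.zeros((2 * axes[1] + 9, 2 * axes[0] + 9), np.uint8)
    center = (tmpl.shape[1] // 2, tmpl.shape[0] // 2)
    cv2.ellipse(tmpl, center, axes, 0, 0, 360, 255, 3)                        # outer ellipse
    cv2.ellipse(tmpl, center, (axes[0] - 6, axes[1] - 6), 0, 0, 360, 255, 3)  # inner ellipse
    score = cv2.matchTemplate(edges, tmpl, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.unravel_index(np.argsort(score, axis=None)[::-1][:top_k], score.shape)
    return [(int(x), int(y), tmpl.shape[1], tmpl.shape[0]) for x, y in zip(xs, ys)]

# Usage (assumes a grayscale image on disk):
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# for x, y, w, h in coarse_face_candidates(gray):
#     ...run the fine alignment / appearance-model stage inside (x, y, w, h)...
```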

Frontal Face Generation Algorithm from Multi-view Images Based on Generative Adversarial Network

  • Heo, Young-Jin; Kim, Byung-Gyu; Roy, Partha Pratim
    • Journal of Multimedia Information System / v.8 no.2 / pp.85-92 / 2021
  • A face contains much information about a person's identity. Because of this property, various tasks such as expression recognition, identity recognition, and deepfake generation have been actively studied. Most of them use the exact frontal view of the given face. However, in real situations the face may be observed from various directions rather than exactly frontally. The profile (side view) lacks information compared with the frontal-view image. Therefore, if we can generate the frontal face from other directions, we can obtain more information about the given face. In this paper, we propose a combined style model based on the conditional generative adversarial network (cGAN) for generating the frontal face from multi-view images, capturing not only the style around the face (hair and beard) but also detailed areas (eyes, nose, and mouth).
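The cGAN setup referred to in this abstract can be illustrated with a compact PyTorch sketch: a generator that maps a side-view face image to a frontal view, and a discriminator that scores (condition, candidate) pairs. The toy architecture below, fixed at 64x64 images, is an assumption for illustration only, not the paper's combined style model.

```python
# Compact conditional-GAN sketch: generator maps a side view to a frontal view;
# discriminator judges (condition, candidate-frontal) pairs concatenated on channels.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(                                 # 64x64 in, 64x64 out
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.ReLU(True),         # -> 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(True),           # -> 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),  # -> 32x32
            nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh(),   # -> 64x64
        )
    def forward(self, side_view):                                 # condition = side view
        return self.net(side_view)

class Discriminator(nn.Module):
    def __init__(self, in_ch=6):                                  # condition + candidate frontal
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 1, 16), nn.Sigmoid(),                  # 16x16 patch -> one score
        )
    def forward(self, condition, frontal):
        return self.net(torch.cat([condition, frontal], dim=1)).view(-1)

# Shape check with random tensors (batch of 2, 3x64x64 images).
g, d = Generator(), Discriminator()
side = torch.randn(2, 3, 64, 64)
fake_frontal = g(side)
print(fake_frontal.shape, d(side, fake_frontal).shape)   # (2, 3, 64, 64) and (2,)
```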