• Title/Summary/Keyword: Optical camera

Application Analysis of Digital Photogrammetry and Optical Scanning Technique for Cultural Heritages Restoration (문화재 원형복원을 위한 수치사진측량과 광학스캐닝기법의 응용분석)

  • Han, Seung Hee;Bae, Yeon Soung;Bae, Sang Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.5D / pp.869-876 / 2006
  • In the case of earthenware cultural heritage found in the form of fragments, the major task is quick and precise restoration. The existing method, which relies on trial and error, is not only greatly time consuming but also lacks precision. If this job could be done by three-dimensional scanning, matching up the pieces could be done with remarkable efficiency. In this study, the original earthenware was modeled through three-dimensional pattern scanning and photogrammetry, and each of the fragments was scanned and modeled. To obtain images for the photogrammetry, we calibrated and used a Canon EOS 1DS camera. We analyzed the relationships among the sections of the resulting models, compounded them efficiently, and analyzed the errors through residuals and a color error map. We also built a user-centered, web-based three-dimensional simulation environment for a virtual museum.
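The error analysis described above (residuals between registered scan models and a reference, visualized as a color error map) can be illustrated with a small sketch. This is not the authors' pipeline; it assumes the reference model and a registered fragment are available as simple (N, 3) point clouds and uses nearest-neighbour distances as the residuals.

```python
import numpy as np
from scipy.spatial import cKDTree

def residuals(reference_pts, fragment_pts):
    """Nearest-neighbour distances from each fragment point to the reference model."""
    tree = cKDTree(reference_pts)
    dist, _ = tree.query(fragment_pts)        # one distance per fragment point
    return dist

def color_error_map(dist, max_err=None):
    """Map residuals to RGB colors (blue = small error, red = large error)."""
    max_err = max_err if max_err is not None else dist.max()
    t = np.clip(dist / max_err, 0.0, 1.0)
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)

# Toy usage: random points stand in for the reference scan and a registered fragment.
ref = np.random.default_rng(0).random((10_000, 3))
frag = ref[:2_000] + np.random.default_rng(1).normal(scale=0.002, size=(2_000, 3))
d = residuals(ref, frag)
print(f"RMS residual {np.sqrt((d ** 2).mean()):.4f}, max residual {d.max():.4f}")
colors = color_error_map(d)                    # per-point colors for visualization
```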

Monte Carlo Simulations of Detection Efficiency and Position Resolution of NaI(Tl)-PMT Detector used in Small Gamma Camera (소형 감마카메라 제작에 사용되는 NaI(Tl)-광전자증배관 검출기의 민감도와 위치 분해능 특성 연구를 위한 몬테카를로 시뮬레이션)

  • Kim, Jong-Ho;Choi, Yong;Kim, Jun-Young;Im, Ki-Chun;Kim, Sang-Eun;Choi, Yeon-Sung;Joo, Kwan-Sik;Kim, Young-Jin;Kim, Byung-Tae
    • Progress in Medical Physics / v.8 no.2 / pp.67-76 / 1997
  • We studied the optical behavior of scintillation light generated in a NaI(Tl) crystal using the Monte Carlo simulation method. The simulation was performed for a model NaI(Tl) scintillator (size: 60 mm × 60 mm × 6 mm) using an optical tracking code. The sensitivity as a function of the surface treatment (Ground, Polished, Metal-0.95RC, Polished-0.98RC, Painted-0.98RC) of the entrance surface of the scintillator was compared. The effects of the NaI(Tl) scintillator thickness and of the refractive index of the light guide optically coupling the NaI(Tl) scintillator to the photomultiplier tube (PMT) were also simulated. We evaluated the intrinsic position resolution of the system by calculating the spread of the generated scintillation light. The sensitivities of the system with the surface treatments Ground, Polished, Metal-0.95RC, Polished-0.98RC and Painted-0.98RC were 70.9%, 73.9%, 78.6%, 80.1% and 85.2%, respectively; the Painted-0.98RC treatment gave the highest sensitivity. As the thickness of the scintillation crystal and light guide increased, the sensitivity of the system decreased. As the refractive index of the light guide increased, the sensitivity increased. The intrinsic position resolution of the system was estimated to be 1.2 mm in the horizontal and vertical directions. In this study, the performance of the NaI(Tl)-PMT detector system was evaluated using Monte Carlo simulation. Based on the results, we conclude that the NaI(Tl)-PMT detector array is a favorable configuration for a small gamma camera imaging breast tumors with Tc-99m-labeled radiopharmaceuticals.
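The paper used a dedicated optical photon tracking code; the toy Monte Carlo below only illustrates the general idea of estimating light-collection sensitivity as a function of the reflectivity of the treated entrance surface. The two-face geometry, the 0.8 exit probability at the PMT window, and the bounce limit are illustrative assumptions, not the authors' simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def collection_efficiency(reflectivity, n_photons=50_000, exit_prob=0.8, max_bounces=50):
    """Toy estimate of the fraction of scintillation photons reaching the PMT.

    Each photon starts toward either the PMT face or the treated entrance face
    (50/50). At the entrance face it survives with probability `reflectivity`;
    at the PMT face it is collected with probability `exit_prob` (a stand-in for
    critical-angle and coupling losses), otherwise it is reflected back.
    Purely illustrative, not the paper's model.
    """
    collected = 0
    for _ in range(n_photons):
        towards_pmt = rng.random() < 0.5
        for _ in range(max_bounces):
            if towards_pmt:
                if rng.random() < exit_prob:
                    collected += 1
                    break
                towards_pmt = False            # reflected back into the crystal
            else:
                if rng.random() > reflectivity:
                    break                      # absorbed or lost at the entrance face
                towards_pmt = True             # reflected toward the PMT
    return collected / n_photons

for rc in (0.90, 0.95, 0.98):
    print(f"surface reflectivity {rc:.2f} -> collection efficiency {collection_efficiency(rc):.3f}")
```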

Characteristics of the Electro-Optical Camera(EOC) (다목적실용위성탑재 전자광학카메라(EOC)의 성능 특성)

  • Seunghoon Lee;Hyung-Sik Shim;Hong-Yul Paik
    • Korean Journal of Remote Sensing / v.14 no.3 / pp.213-222 / 1998
  • The Electro-Optical Camera (EOC) is the main payload of the KOrea Multi-Purpose SATellite (KOMPSAT), with a cartography mission to build a digital map of Korean territory including a Digital Terrain Elevation Map (DTEM). The instrument, which comprises the EOC Sensor Assembly and the EOC Electronics Assembly, produces panchromatic images with a 6.6 m GSD and a swath wider than 17 km by push-broom scanning and spacecraft body pointing, in the visible wavelength range of 510-730 nm. The high-resolution panchromatic imagery is collected for 2 minutes of each 98-minute orbit, covering about 800 km along the ground track, over a mission lifetime of 3 years, with programmable gain/offset and on-board image data storage. The 8-bit digitized image, collected by an unobscured, fully reflective F8.3 triplet, is transmitted to the ground station at a rate below 25 Mbps. EOC was built to meet or surpass its design-phase performance requirements. The spectral response, the modulation transfer function, and the uniformity of all 2592 CCD pixels of EOC are presented, as measured, for the convenience of end users. The spectral response was measured for each gain setup of EOC, which is expected to allow users of EOC data to generate more accurate panchromatic images. The modulation transfer function of EOC was measured to be greater than 16% at the Nyquist frequency over the entire field of view, exceeding the requirement of 10%. The uniformity, which shows the relative response of each pixel, was measured at every pixel of the EOC Focal Plane Array and is presented for use in data processing.
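As a small illustration of how an MTF figure at the Nyquist frequency can be obtained from a measured line spread function (the abstract reports greater than 16% for EOC), the sketch below Fourier-transforms a synthetic LSF sampled at the pixel pitch and reads off the modulus at 0.5 cycles per pixel. The Gaussian LSF and its width are assumed values for demonstration, not EOC measurement data.

```python
import numpy as np

# Synthetic line spread function sampled at the detector pixel pitch.
# The Gaussian shape and width are assumptions for demonstration only;
# a real LSF would come from an edge or slit measurement.
x = np.arange(-16, 16)                         # pixel positions (32 samples)
sigma = 0.6                                    # assumed LSF width in pixels
lsf = np.exp(-0.5 * (x / sigma) ** 2)
lsf /= lsf.sum()                               # normalize so that MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))                 # modulus of the transform = MTF
freqs = np.fft.rfftfreq(lsf.size, d=1.0)       # cycles per pixel; Nyquist = 0.5

nyq = np.argmin(np.abs(freqs - 0.5))
print(f"MTF at the Nyquist frequency: {mtf[nyq]:.1%}")
```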

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, there are many types of characters in an image, and optical character recognition extracts all of them; however, some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date and specification, are not valuable to the application. Thus, the application has to analyze only the region of interest and the specific character types to extract the valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition that analyzes only the region of interest. We built three neural networks for the application system: the first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network which converts this sequential information into character strings through a time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount. The device ID consists of 12 Arabic numerals and the gas usage amount consists of 4 to 5 Arabic numerals. All system components are implemented on the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request into an input queue with a FIFO (First In First Out) structure. The slave process consists of the three deep neural networks which conduct the character recognition and runs on the NVIDIA GPU. The slave process continuously polls the input queue for recognition requests. When a request arrives, the slave process converts the image into the device ID string, the gas usage amount string and the position information of the strings, returns this information to the output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation and testing of the three deep neural networks. 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale and slant): normal means clean images, noise means images with noise, reflex means images with light reflection in the gasometer region, scale means images with small object size due to long-distance capture, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data were 0.960 and 0.864, respectively.
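The master-slave, FIFO-queue structure described above can be mimicked structurally in a few lines. The sketch below uses Python's standard queue and threading modules purely to illustrate the request flow; recognise() is a placeholder standing in for the detection CNN, the feature-extraction CNN and the bidirectional LSTM, and none of the names correspond to the paper's actual implementation.

```python
import queue
import threading

input_queue = queue.Queue()    # FIFO: the master pushes reading requests here
output_queue = queue.Queue()   # the slaves push recognized strings here

def recognise(image):
    """Placeholder for the detection CNN + feature CNN + bidirectional LSTM stack."""
    return {"device_id": "000000000000", "usage": "00000"}

def slave_worker():
    """Slave: poll the input queue, run recognition, post the result."""
    while True:
        request = input_queue.get()            # blocks until a request arrives
        if request is None:                    # sentinel: shut this worker down
            break
        request_id, image = request
        output_queue.put((request_id, recognise(image)))
        input_queue.task_done()

workers = [threading.Thread(target=slave_worker, daemon=True) for _ in range(2)]
for w in workers:
    w.start()

# Master: enqueue a few fake "gasometer images" and collect the answers.
for i in range(3):
    input_queue.put((i, f"image-{i}.jpg"))
input_queue.join()
while not output_queue.empty():
    print(output_queue.get())
for _ in workers:
    input_queue.put(None)                      # stop the workers
```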

IGRINS Design and Performance Report

  • Park, Chan;Jaffe, Daniel T.;Yuk, In-Soo;Chun, Moo-Young;Pak, Soojong;Kim, Kang-Min;Pavel, Michael;Lee, Hanshin;Oh, Heeyoung;Jeong, Ueejeong;Sim, Chae Kyung;Lee, Hye-In;Le, Huynh Anh Nguyen;Strubhar, Joseph;Gully-Santiago, Michael;Oh, Jae Sok;Cha, Sang-Mok;Moon, Bongkon;Park, Kwijong;Brooks, Cynthia;Ko, Kyeongyeon;Han, Jeong-Yeol;Nah, Jakyuong;Hill, Peter C.;Lee, Sungho;Barnes, Stuart;Yu, Young Sam;Kaplan, Kyle;Mace, Gregory;Kim, Hwihyun;Lee, Jae-Joon;Hwang, Narae;Kang, Wonseok;Park, Byeong-Gon
    • The Bulletin of The Korean Astronomical Society / v.39 no.2 / pp.90-90 / 2014
  • The Immersion Grating Infrared Spectrometer (IGRINS) is the first astronomical spectrograph that uses a silicon immersion grating as its dispersive element. IGRINS fully covers the H and K band atmospheric transmission windows in a single exposure. It is a compact, high-resolution cross-dispersed spectrometer with a resolving power R of 40,000. An individual volume phase holographic grating serves as the secondary dispersing element for each of the H and K spectrograph arms. On the 2.7 m Harlan J. Smith telescope at McDonald Observatory, the slit size is 1″ × 15″. IGRINS has a plate scale of 0.27″ per pixel on a 2048 × 2048 pixel Teledyne Scientific & Imaging HAWAII-2RG detector with a SIDECAR ASIC cryogenic controller. The instrument includes four subsystems: a calibration unit, an input relay optics module, a slit-viewing camera, and nearly identical H and K spectrograph modules. The use of a silicon immersion grating and a compact white-pupil design keeps the spectrograph collimated beam size at 25 mm, which permits the entire cryogenic system to be contained in a moderately sized (0.96 m × 0.6 m × 0.38 m) rectangular Dewar. The fabrication and assembly of the optical and mechanical components were completed in 2013. From January to July of this year, we completed the system optical alignment and carried out commissioning observations on three runs to improve the efficiency of the instrument software and hardware. We describe the major design characteristics of the instrument, including the system requirements and the technical strategy to meet them. We also present the instrument performance test results derived from the commissioning runs at McDonald Observatory.
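Two quick numbers implied by the quoted specifications (simple arithmetic on figures given in the abstract, not results from the paper):

```python
# Simple arithmetic on the specifications quoted in the abstract above.
R = 40_000                       # resolving power
c_km_s = 299_792.458             # speed of light, km/s
print(f"velocity resolution c/R ≈ {c_km_s / R:.1f} km/s")

plate_scale = 0.27               # arcsec per pixel (value quoted above)
slit_length = 15.0               # arcsec
print(f"slit length on the detector ≈ {slit_length / plate_scale:.0f} pixels")
```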

A Study of Experimental Image Direction for Short Animation Movies - Focusing on the Short Films <Tango> and <Fast Film> (단편애니메이션의 실험적 영상연출 연구 -<탱고>와 <페스트 필름>을 중심으로)

  • Choi, Don-Ill
    • Cartoon and Animation Studies / s.36 / pp.375-391 / 2014
  • An animation movie is a non-photorealistic animated art form composed of a formative language that shapes each frame on the basis of a story, and of cuts that string those frames together. Therefore, in expressing an image, artistic expression methods and devices for the formative space have to be provided within a frame, while the cuts must faithfully carry the images between frames. Short animation movies are produced through various image experiments with unique image expressions, rather than through narration, in order to express the subjective discourse of the author. Therefore, an image style that forms unique images, together with varied image direction, are important factors. This study compared the experimental image direction of <Tango> and <Fast Film>, both of which adopted a production method based on film manipulation. First, while <Tango> uses pixilation, producing its images from live-action footage through painting and repeated optical exposures on cel mattes, <Fast Film> was made with diverse collage techniques such as tearing, cutting, pasting, and folding hundreds of scenes from action movies. Second, <Tango> expresses the non-causal relationships of its characters through their repetitive behaviors and a circular image structure seen from a fixed camera angle, resisting typical scene transition, whereas <Fast Film> has an advancing structure that develops the antagonistic relationship of its characters through diverse camera angles and scene transitions of unique images. Third, in terms of editing, <Tango> uses a long-take technique in which the whole film consists of a single continuous shot, although the appearance of many characters makes it seem like many scenes, while <Fast Film> maximizes visual fun and engagement through image reconstruction with hundreds of varied short cuts. That is, both works share the character of experimental works that expand animated image expression through film manipulation, unlike conventional animation production. On top of that, <Tango> conveys the routine life of diverse human beings, without clear narration, through images of conceptualized spaces, while <Fast Film> expresses it in a new image space through image reconstruction using collage techniques and speedy progression, built on a binary opposition structure.

Validation of GOCI-II Products in an Inner Bay through Synchronous Usage of UAV and Ship-based Measurements (드론과 선박을 동시 활용한 내만에서의 GOCI-II 산출물 검증)

  • Baek, Seungil;Koh, Sooyoon;Lim, Taehong;Jeon, Gi-Seong;Do, Youngju;Jeong, Yujin;Park, Sohyeon;Lee, Yongtak;Kim, Wonkook
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.609-625 / 2022
  • Validation of satellite data products is critical for any subsequent analysis based on the data. In particular, the performance of ocean color products in turbid, shallow waters near land has long been questioned because of the complex optical environment with varying distributions of water constituents. Furthermore, validation with ship-based or station-based measurements has a clear limitation in spatial scale, which is not compatible with that of the satellite data. This study first performed validation of major GOCI-II products such as remote sensing reflectance, chlorophyll-a concentration, suspended particulate matter, and colored dissolved organic matter, using in-situ measurements collected from a ship-based field campaign. Second, this study also presents a preliminary analysis of the use of drone images for product validation. Multispectral images were acquired from a MicaSense RedEdge camera onboard a UAV to compensate for the significant scale difference between the ship-based measurements and the satellite data. The variation of water radiance with camera altitude was analyzed for future application of drone images to validation. Validation conducted with a limited number of samples showed that the GOCI-II remote sensing reflectance at 555 nm is overestimated by more than 30%, and the chlorophyll-a and colored dissolved organic matter products exhibited little correlation with the in-situ measurements. Suspended particulate matter showed moderate correlation with the in-situ measurements (R² ≈ 0.6), with approximately 20% uncertainty.
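The match-up statistics quoted above (a coefficient of determination and a percent-level bias between satellite and in-situ values) can be computed along the following lines. The numbers in the example are placeholders, not the study's measurements, and the paper may use a different definition of R² (for instance, the squared Pearson correlation).

```python
import numpy as np

def matchup_stats(satellite, in_situ):
    """R^2 (one common definition) and mean relative difference for co-located samples."""
    satellite = np.asarray(satellite, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    ss_res = np.sum((in_situ - satellite) ** 2)
    ss_tot = np.sum((in_situ - in_situ.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mean_rel_diff = 100.0 * np.mean((satellite - in_situ) / in_situ)
    return r2, mean_rel_diff

# Placeholder match-up values (e.g. SPM in g/m^3); not the study's measurements.
goci2 = [2.1, 3.4, 5.0, 6.2, 8.1]
ship = [1.8, 3.0, 4.6, 5.5, 7.0]
r2, mrd = matchup_stats(goci2, ship)
print(f"R^2 = {r2:.2f}, mean relative difference = {mrd:+.1f}%")
```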

Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images (달 지형 영상에서 특징점 검출 및 정합 기법의 성능 비교 분석)

  • Hong, Sungchul;Shin, Hyu-Soung
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.4 / pp.437-444 / 2020
  • A lunar rover's optical camera is used to provide navigation and terrain information in an exploration zone. However, because of the near-absence of an atmosphere, the Moon has homogeneous terrain with dark soil. Also, in this extreme environment, the rover has limited data storage and low computation capability. Thus, for successful exploration, feature detection and matching methods that are robust to the lunar terrain and environmental characteristics need to be examined. In this research, SIFT, SURF, BRISK, ORB, and AKAZE are comparatively analyzed with lunar terrain images from a lunar rover. Experimental results show that SIFT and AKAZE are the most robust to lunar terrain characteristics. AKAZE detects fewer feature points than SIFT, but its feature points are detected and matched with high precision and the lowest computational cost, making it adequate for fast and accurate navigation information. Although SIFT has the highest computational cost, it stably detects and matches the largest number of feature points. The rover periodically sends terrain images to Earth, so SIFT is suitable for global 3D terrain map construction, in that a large number of terrain images can be processed on Earth. The study results are expected to provide a guideline for utilizing feature detection and matching methods in future lunar exploration rovers.
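A minimal OpenCV sketch of the kind of comparison reported above: detecting, describing, and brute-force matching features with SIFT, ORB, and AKAZE on an image pair, and timing each method. The image file names are placeholders, and the paper's exact evaluation protocol (including SURF, BRISK, and its precision metrics) is not reproduced here.

```python
import time
import cv2

def detect_and_match(detector, norm, img1, img2):
    """Detect keypoints, compute descriptors, brute-force match with a Lowe ratio test."""
    t0 = time.perf_counter()
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(norm)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(kp1), len(good), time.perf_counter() - t0

# Placeholder file names; two overlapping grayscale terrain images are assumed to exist.
img1 = cv2.imread("terrain_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("terrain_b.png", cv2.IMREAD_GRAYSCALE)

methods = {
    "SIFT":  (cv2.SIFT_create(),  cv2.NORM_L2),       # float descriptors -> L2 norm
    "ORB":   (cv2.ORB_create(),   cv2.NORM_HAMMING),  # binary descriptors -> Hamming
    "AKAZE": (cv2.AKAZE_create(), cv2.NORM_HAMMING),
}
for name, (det, norm) in methods.items():
    n_kp, n_good, dt = detect_and_match(det, norm, img1, img2)
    print(f"{name:5s}: {n_kp} keypoints, {n_good} good matches, {dt:.2f} s")
```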

PERIOD CHANGE OF THE CONTACT BINARY AH Tauri (접촉쌍성 AH Tauri의 공전주기 변화)

  • Lee, Dong-Joo;Lee, Chung-Uk;Lee, Jae-Woo;Kim, Seung-Lee;Oh, Kyu-Dong;Kim, Chun-Hwey
    • Journal of Astronomy and Space Sciences / v.21 no.4 / pp.283-294 / 2004
  • New BVRI photometric observations of the contact binary AH Tau were made with the 61 cm reflector and a 2K CCD camera at the Sobaeksan Optical Astronomy Observatory over seven nights from September to December 2001. A total of 144 times of minima observed to date, including three obtained from our observations, were analyzed. It is found that the orbital period of AH Tau has varied in a cyclic way superposed on a secular period decrease. The rate of the secular period decrease is calculated to be 1.04 s per century, implying that mass of about 3.8 × 10⁻⁸ M⊙/yr flows from the more massive primary to the secondary if conservative mass transfer is assumed. Assuming that the sinusoidal period variation is produced by the light-time effect of an unseen third body, the period, semi-amplitude, and eccentricity of the deduced light-time orbit are 35.4 years, 0.014 day, and 0.52, respectively. The mass of the third body is calculated to be about 0.24 M⊙ when it is assumed to be coplanar with the AH Tau system.
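A sketch of the usual eclipse-timing (O-C) decomposition used in such analyses: a quadratic term for the secular period change plus a sinusoid for the light-time effect. The synthetic timings below are illustrative only, not the AH Tau data, and a real analysis would fit both terms simultaneously rather than in the two-step fashion used here for simplicity.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic O-C residuals (days) of times of minimum light vs. cycle number.
# These values are illustrative only, not the AH Tau measurements.
rng = np.random.default_rng(1)
epochs = np.linspace(0.0, 40_000.0, 144)
oc = (-3e-11 * epochs**2 + 1e-7 * epochs + 0.002
      + 0.010 * np.sin(2 * np.pi * epochs / 30_000.0 + 0.5)
      + rng.normal(scale=0.002, size=epochs.size))

# 1) Secular term: quadratic fit; the epoch^2 coefficient q gives dP/dE = 2q.
q, lin, const = np.polyfit(epochs, oc, 2)
print(f"secular period change dP/dE = {2 * q:.2e} d/cycle")

# 2) Cyclic term: fit a light-time sinusoid to the residuals of the quadratic fit.
def light_time(e, amp, p_cyc, phase):
    return amp * np.sin(2 * np.pi * e / p_cyc + phase)

resid = oc - np.polyval([q, lin, const], epochs)
(amp, p_cyc, phase), _ = curve_fit(light_time, epochs, resid, p0=[0.01, 30_000.0, 0.0])
print(f"light-time semi-amplitude {abs(amp) * 86400:.0f} s, modulation period {p_cyc:.0f} cycles")
```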

Development of the IRIS Collimator for the Portable Radiation Detector and Its Performance Evaluation Using the MCNP Code (IRIS형 방사선검출기 콜리메이터 제작 및 MCNP 코드를 이용한 성능평가)

  • Ji, Young-Yong;Chung, Kun Ho;Lee, Wanno;Choi, Sang-Do;Kim, Change-Jong;Kang, Mun Ja;Park, Sang Tae
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT) / v.13 no.1 / pp.55-61 / 2015
  • When a radiation detector is applied to the measurement of the radioactivity of high-level radioactive materials, or to rapid response to a nuclear accident, several collimators with different inner radii have to be prepared according to the dose rate. This makes in-situ measurement impractical because of the heavy weight of the collimators. In this study, an IRIS collimator was developed that can control its inner radius, in the same way as the iris of an optical camera, to vary the radiation attenuation ratio. The shutter was made with double tungsten layers at different phase angles to prevent radiation from penetrating through the mechanical tolerances. A performance evaluation with the MCNP code was conducted by calculating the attenuation ratio as a function of the inner radius of the collimator, and the attenuation ratio was marked on the outer scale ring of the collimator. It is expected that, when a radiation detector with the IRIS collimator is used for in-situ measurement, the attenuation ratio of the photons incident on the detector can be changed without replacing the collimator.
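A rough closed-form stand-in for the attenuation-ratio calculation described above: photons inside the open iris pass unattenuated, while the rest are attenuated exponentially in the tungsten shutter. The geometry, attenuation coefficient, and thickness below are placeholder assumptions; the paper's evaluation used the MCNP code, not this approximation.

```python
import numpy as np

def transmitted_fraction(r_inner, r_detector, mu, thickness):
    """Rough analytic estimate of the photon fraction reaching the detector.

    Photons inside the open iris (radius r_inner) pass unattenuated; the rest
    traverse the tungsten shutter of the given thickness with linear attenuation
    coefficient mu. A parallel, uniform beam over the detector face is assumed;
    the paper's evaluation used MCNP, not this closed-form approximation.
    """
    open_frac = min(r_inner / r_detector, 1.0) ** 2
    return open_frac + (1.0 - open_frac) * np.exp(-mu * thickness)

# Placeholder values: shutter thickness and attenuation coefficient depend on the
# photon energy and the actual shutter design; these are assumptions for demonstration.
for r in (0.0, 0.5, 1.0, 2.0):
    f = transmitted_fraction(r, r_detector=3.0, mu=1.9, thickness=2.0)
    print(f"iris radius {r:.1f} cm -> transmitted fraction {f:.3f} (attenuation ratio {1 / f:.1f}x)")
```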