• Title/Summary/Keyword: infrared imaging camera

142 search results

STSAT-3 Operations Concept (과학기술위성 3호 운영개념)

  • Lee, Seung-Hun;Park, Jong-Oh;Rhee, Seung-Wu;Jung, Tae-Jin;Lee, Dae-Hee;Lee, Joon-Ho
    • Aerospace Engineering and Technology
    • /
    • v.10 no.2
    • /
    • pp.29-36
    • /
    • 2011
  • The Science and Technology Satellite-3 (STSAT-3) builds on KITSAT-1, 2, and 3 and STSAT-1 and 2, the Korean micro-satellites developed for space- and Earth-science missions. The objectives of STSAT-3 are to support Earth and space science in parallel with the demonstration of spacecraft technology. STSAT-3 carries an infrared (IR) camera for space and Earth observation and an imaging spectrometer for Earth observation. The primary IR payload, the Multi-purpose Infrared Imaging System (MIRIS), will observe the Galactic plane and the north and south ecliptic poles to study the origin of the universe. The secondary payload, the Compact Imaging Spectrometer (COMIS), images the Earth's surface. The data acquired by COMIS are expected to be used in application fields such as disaster monitoring, water-quality studies, and farmland assessment. In this paper we present the operations concept of STSAT-3, which will be launched into a sun-synchronous orbit at a nominal altitude of 600 km in late 2012.
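
As a back-of-the-envelope illustration of the orbit quoted above (sun-synchronous, nominal altitude 600 km), the Python sketch below computes the circular-orbit period and revolutions per day from standard two-body constants. The numbers are illustrative only and are not taken from the STSAT-3 mission design.

```python
import math

# Standard two-body constants (not mission-specific values)
MU_EARTH = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6378.137       # Earth's equatorial radius, km

def circular_orbit_period(altitude_km: float) -> float:
    """Return the period (minutes) of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_km                          # semi-major axis, km
    period_s = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)
    return period_s / 60.0

period_min = circular_orbit_period(600.0)
print(f"Period at 600 km: {period_min:.1f} min")           # ~96.7 min
print(f"Revolutions per day: {24 * 60 / period_min:.1f}")  # ~14.9
```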

RESEARCH FOR ROBUSTNESS OF THE MIRIS OPTICAL COMPONENTS IN THE SHOCK ENVIRONMENT TEST (MIRIS 충격시험에서의 광학계 안정성 확보를 위한 연구)

  • Moon, B.K.;Kanai, Yoshikazu;Park, S.J.;Park, K.J.;Lee, D.H.;Jeong, W.S.;Park, Y.S.;Pyo, J.H.;Nam, U.W.;Lee, D.H.;Ree, S.W.;Matsumoto, Toshio;Han, W.
    • Publications of The Korean Astronomical Society
    • /
    • v.27 no.3
    • /
    • pp.39-47
    • /
    • 2012
  • MIRIS, the Multi-purpose Infra-Red Imaging System, is the main payload of STSAT-3 (Korea Science & Technology Satellite 3), which will be launched at the end of 2012 (exact date to be determined) by a Russian Dnepr rocket. MIRIS consists of two camera systems, the SOC (Space Observation Camera) and the EOC (Earth Observation Camera). During a shock test to verify flight-model stability in the launch environment, some lenses of the SOC EQM (Engineering Qualification Model) were broken. To resolve the lens failure, cause analyses were performed together with visual inspections of the lenses and opto-mechanical parts. After modifications to the SOC opto-mechanical parts, the shock test was performed again and passed. In this paper, we introduce the solution for lens safety and report the test results.

Coating defect classification method for steel structures with vision-thermography imaging and zero-shot learning

  • Jun Lee;Kiyoung Kim;Hyeonjin Kim;Hoon Sohn
    • Smart Structures and Systems
    • /
    • v.33 no.1
    • /
    • pp.55-64
    • /
    • 2024
  • This paper proposes a fusion-imaging-based coating-defect classification method for steel structures that uses zero-shot learning. In the proposed method, a halogen lamp generates heat energy on the coating surface of a steel structure, the resulting heat responses are measured by an infrared (IR) camera, and photos of the coating surface are captured by a charge-coupled device (CCD) camera. The measured heat responses and visual images are then analyzed using zero-shot learning to classify the coating defects, and the estimated coating defects are visualized over the inspection surface of the steel structure. In contrast to older approaches to coating-defect classification, which relied on visual inspection and were limited to surface defects, and to older artificial neural network (ANN)-based methods, which required large amounts of data for training and validation, the proposed method accurately classifies both internal and external defects and can classify coating defects of unobserved classes that are not included in the training. Additionally, the proposed model easily learns additional classification conditions, making it simple to add classes for problems of interest and field application. Based on validation via field testing, fusing visual and thermal imaging improves defect-type classification accuracy by 22.7% compared to using only a visual dataset. Furthermore, the classification accuracy of the proposed method on a test dataset containing only trained classes is validated to be 100%. With word-embedding vectors for the labels of untrained classes, the classification accuracy of the proposed method is 86.4%.
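
The zero-shot step described in this abstract — classifying defect types never seen in training by comparing a projected feature against word-embedding vectors of the class labels — can be sketched as a nearest-embedding lookup. This is a minimal sketch under assumed names: the class list, embedding size, and `projector` are placeholders, not the authors' model.

```python
import numpy as np

# Hypothetical label-embedding table: one word-embedding vector per defect class,
# including classes that never appear in the training images.
LABEL_EMBEDDINGS = {
    "blister":      np.random.rand(300),
    "crack":        np.random.rand(300),
    "delamination": np.random.rand(300),   # e.g., an untrained ("unseen") class
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(fused_feature: np.ndarray, projector) -> str:
    """Map a fused visual+thermal feature into the word-embedding space with a
    learned projector, then pick the label whose embedding is closest."""
    projected = projector(fused_feature)
    return max(LABEL_EMBEDDINGS, key=lambda lbl: cosine(projected, LABEL_EMBEDDINGS[lbl]))

# Toy usage: an identity "projector" and a random fused feature of matching size.
feature = np.random.rand(300)
print(zero_shot_classify(feature, projector=lambda x: x))
```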

Infrared Thermal Imaging for Quantification of HIFU-induced Tissue Coagulation (적외선 이미징 기반 HIFU 응용 조직 응고 정량화 연구)

  • Pyo, Hanjae;Park, Suhyun;Kang, Hyun Wook
    • Korean Journal of Optics and Photonics
    • /
    • v.28 no.5
    • /
    • pp.236-240
    • /
    • 2017
  • In this paper, we investigate the thermal response of skin tissue to high-intensity focused ultrasound (HIFU) by means of infrared (IR) thermal imaging. For skin tightening, a 7-MHz ultrasound transducer is used to induce irreversible tissue coagulation in porcine skin. An IR camera is employed to monitor spatiotemporal changes of the temperature in the tissue. The maximum temperature in the tissue increased linearly with applied energy, up to 90 °C. The extent of irreversible tissue coagulation (up to 3.2 mm in width) corresponds well to the spatial distribution of the temperature during HIFU sonication. Histological analysis confirms that the temperature beyond the coagulation threshold (~65 °C) delineates the margin of collagen denaturation in the tissue. IR thermal imaging can be a feasible method for quantifying the degree of thermal coagulation in HIFU-induced skin treatment.
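
The quantification described above relates the region where tissue temperature exceeds the ~65 °C coagulation threshold to the measured lesion width. A minimal sketch of that thresholding step on an IR temperature map is given below; the Gaussian temperature field and pixel pitch are made-up stand-ins for real camera data.

```python
import numpy as np

PIXEL_PITCH_MM = 0.05        # assumed IR-camera spatial resolution (illustrative)
COAG_THRESHOLD_C = 65.0      # approximate coagulation threshold quoted in the abstract

# Synthetic temperature map: a Gaussian hot spot peaking near 90 degC on a 37 degC background.
x = np.arange(-200, 201) * PIXEL_PITCH_MM
xx, yy = np.meshgrid(x, x)
temperature = 37.0 + 53.0 * np.exp(-(xx**2 + yy**2) / (2 * 1.2**2))

# Width of the coagulated zone: extent of pixels above threshold along the central row.
above = temperature[temperature.shape[0] // 2] >= COAG_THRESHOLD_C
coag_width_mm = above.sum() * PIXEL_PITCH_MM
print(f"Estimated coagulation width: {coag_width_mm:.2f} mm")
```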

Preliminary growth chamber experiments using thermal infrared image to detect crop disease (적외선 촬영 영상 기반의 작물 병해 모니터링 가능성 타진을 위한 실내 감염 실험)

  • Jeong, Hoejeong;Jeong, Rae-Dong;Ryu, Jae-Hyun;Oh, Dohyeok;Choi, Seonwoong;Cho, Jaeil
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.2
    • /
    • pp.111-116
    • /
    • 2019
  • The biotic stress of garlic and tobacco infected by bacteria and virus was evaluated using a thermal imaging camera in a growth chamber. The remote-sensing technique using the thermal camera detected an increase in garlic leaf temperature when the leaves were infected by bacterial soft rot. Furthermore, leaf temperature was relatively high for leaves with a large number of colony-forming units per mL. Similar temperature patterns were detected in thermal images of tobacco leaves infected by Cucumber mosaic virus. In addition, the crop water stress index (CWSI) calculated from leaf temperature also increased for leaves infected by the virus. The increase in CWSI caused by the viral infection occurred before visual disease symptoms appeared. Our results suggest that the thermal imaging camera would be useful for developing crop remote-sensing techniques that can be applied to smart farming.
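
The crop water stress index referenced above is commonly computed from leaf (canopy) temperature together with wet and dry reference temperatures, CWSI = (T_leaf - T_wet) / (T_dry - T_wet). The sketch below applies that standard empirical form with made-up reference values; it is not the authors' calibration.

```python
def crop_water_stress_index(t_leaf_c: float, t_wet_c: float, t_dry_c: float) -> float:
    """Empirical CWSI in [0, 1]: 0 = unstressed (wet reference), 1 = fully stressed (dry reference)."""
    if t_dry_c <= t_wet_c:
        raise ValueError("Dry reference must be warmer than wet reference")
    cwsi = (t_leaf_c - t_wet_c) / (t_dry_c - t_wet_c)
    return min(max(cwsi, 0.0), 1.0)

# Illustrative values only: an infected leaf running warmer than a healthy one.
print(crop_water_stress_index(t_leaf_c=29.5, t_wet_c=26.0, t_dry_c=32.0))  # ~0.58
print(crop_water_stress_index(t_leaf_c=27.0, t_wet_c=26.0, t_dry_c=32.0))  # ~0.17
```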

Design of an Optical System for a Space Target Detection Camera

  • Zhang, Liu;Zhang, Jiakun;Lei, Jingwen;Xu, Yutong;Lv, Xueying
    • Current Optics and Photonics
    • /
    • v.6 no.4
    • /
    • pp.420-429
    • /
    • 2022
  • In this paper, the details and design process of an optical system for space target detection cameras are introduced. The whole system is divided into three structures. The first structure is a short-focus visible-light system for rough detection over a large field of view: the field of view is 2°, the effective focal length is 1,125 mm, and the F-number is 3.83. The second structure is a telephoto visible-light system for precise detection over a small field of view: the field of view is 1°, the effective focal length is 2,300 mm, and the F-number is 7.67. The third structure is an infrared detection system: the field of view is 2°, the effective focal length is 390 mm, and the F-number is 1.3. The visible long-focus narrow-field-of-view and short-focus wide-field-of-view channels are switched via a turning mirror. Design results show that the modulation transfer functions of the three structures are close to the diffraction limit. Furthermore, the short-focus wide-field-of-view distortion is controlled within 0.1%, the long-focus narrow-field-of-view distortion within 0.5%, and the infrared-subsystem distortion within 0.2%. The imaging performance is good, and the design objectives are achieved.
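
The three configurations above are fully specified by focal length, F-number, and field of view. As a consistency check rather than part of the authors' design, the sketch below applies the textbook relations (aperture diameter D = f/N, image half-size ≈ f·tan(FOV/2)) to the quoted values; notably, all three channels come out with an entrance pupil near 300 mm.

```python
import math

def entrance_pupil_diameter(focal_length_mm: float, f_number: float) -> float:
    """Aperture diameter D = f / N."""
    return focal_length_mm / f_number

def image_half_size(focal_length_mm: float, full_fov_deg: float) -> float:
    """Half-size of the image on the detector: f * tan(FOV / 2)."""
    return focal_length_mm * math.tan(math.radians(full_fov_deg / 2.0))

# Values quoted in the abstract: (focal length mm, F-number, full field of view deg).
configs = {
    "short-focus visible": (1125.0, 3.83, 2.0),
    "telephoto visible":   (2300.0, 7.67, 1.0),
    "infrared":            (390.0,  1.30, 2.0),
}
for name, (f, n, fov) in configs.items():
    print(f"{name}: aperture ~{entrance_pupil_diameter(f, n):.0f} mm, "
          f"image half-size ~{image_half_size(f, fov):.1f} mm")
```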

PROCESSING OF INTERSTELLAR MEDIUM AS DIVULGED BY AKARI

  • Onaka, Takashi;Mori, Tamami I.;Ohsawa, Ryou;Sakon, Itsuki;Bell, Aaron C.;Hammonds, Mark;Shimonishi, Takashi;Ishihara, Daisuke;Kaneda, Hidehiro;Okada, Yoko;Tanaka, Masahiro
    • Publications of The Korean Astronomical Society
    • /
    • v.32 no.1
    • /
    • pp.77-81
    • /
    • 2017
  • The wide spectral coverage of AKARI from the near-infrared (NIR) to the far-infrared (FIR), for both imaging and spectroscopy, enables us to efficiently study the emission from gas and dust in the interstellar medium (ISM). In particular, the Infrared Camera (IRC) onboard AKARI offers a unique opportunity to carry out sensitive spectroscopy in the NIR (2-5 μm) for the first time from a spaceborne telescope. This spectral range contains a number of important dust bands and gas lines, such as the aromatic and aliphatic emission bands at 3.3 and 3.4-3.5 μm, H₂O and CO₂ ices at 3.0 and 4.3 μm, and CO, H₂, and H I gas emission lines. In this paper we concentrate on the aromatic and aliphatic emission and ice absorption features. The balance between dust supply and destruction suggests that significant dust processing, as well as dust formation, takes place in the ISM. Detailed analysis of the aromatic and aliphatic bands in AKARI observations of a number of H II regions and H II region-like objects suggests processing of carbonaceous dust in the ISM. The ice formation process can also be studied efficiently with IRC NIR spectroscopy. In this review, dust processing in the ISM divulged by recent analysis of AKARI data is discussed.

An Improved ViBe Algorithm of Moving Target Extraction for Night Infrared Surveillance Video

  • Feng, Zhiqiang;Wang, Xiaogang;Yang, Zhongfan;Guo, Shaojie;Xiong, Xingzhong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.12
    • /
    • pp.4292-4307
    • /
    • 2021
  • In the field of night infrared surveillance video, target imaging is easily affected by lighting due to the characteristics of the active infrared camera, and the classical ViBe algorithm suffers from background misjudgment, noise interference, ghost shadows, and other problems in moving-target extraction. Therefore, an improved ViBe algorithm (I-ViBe) for moving-target extraction in night infrared surveillance video is proposed in this paper. First, the video frames are sampled and judged by the degree of light influence, and each frame is assigned to one of three situations: no light change, small light change, or severe light change. When there is no light change, the moving target is extracted with the classical ViBe algorithm. When the light change is small, the segmentation factor of the ViBe algorithm is adaptively changed to reduce the impact of the light. When the illumination changes drastically, the moving target is extracted from the difference image between the current frame and the background model using a region-growing algorithm improved with image entropy. Simulation results show that the proposed I-ViBe algorithm is more robust to the influence of illumination. When extracting moving targets at night, the I-ViBe algorithm makes target extraction more accurate and provides more effective data for further night behavior recognition and target tracking.
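
The branching logic of I-ViBe summarized above — judge each frame's degree of illumination change, then use plain ViBe, ViBe with an adapted segmentation factor, or entropy-guided region growing on the difference image — can be sketched as a dispatcher. This is a hedged sketch: the thresholds, factor values, and the `vibe_segment` / `region_grow_by_entropy` callables are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

LIGHT_CHANGE_SMALL = 5.0    # illustrative thresholds on mean intensity change
LIGHT_CHANGE_SEVERE = 20.0

def light_change_level(frame: np.ndarray, background: np.ndarray) -> str:
    """Judge the degree of illumination change between a frame and the background model."""
    delta = abs(float(frame.mean()) - float(background.mean()))
    if delta < LIGHT_CHANGE_SMALL:
        return "none"
    return "small" if delta < LIGHT_CHANGE_SEVERE else "severe"

def extract_moving_target(frame, background, vibe_segment, region_grow_by_entropy):
    """Dispatch to the extraction branch matching the abstract's three cases."""
    level = light_change_level(frame, background)
    if level == "none":
        return vibe_segment(frame, factor=1.0)            # classical ViBe
    if level == "small":
        return vibe_segment(frame, factor=1.5)            # adapted segmentation factor
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16)).astype(np.uint8)
    return region_grow_by_entropy(diff)                   # entropy-guided region growing

# Toy usage with stub segmenters just to exercise the dispatch.
frame = np.full((4, 4), 130, dtype=np.uint8)
bg = np.full((4, 4), 100, dtype=np.uint8)
mask = extract_moving_target(frame, bg,
                             vibe_segment=lambda f, factor: (f > 128).astype(np.uint8),
                             region_grow_by_entropy=lambda d: (d > 10).astype(np.uint8))
print(mask)
```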

Design of Face with Mask Detection System in Thermal Images Using Deep Learning (딥러닝을 이용한 열영상 기반 마스크 검출 시스템 설계)

  • Yong Joong Kim;Byung Sang Choi;Ki Seop Lee;Kyung Kwon Jung
    • Convergence Security Journal
    • /
    • v.22 no.2
    • /
    • pp.21-26
    • /
    • 2022
  • Wearing face masks is an effective measure to prevent COVID-19 infection. Temperature-measurement and identity-recognition systems based on infrared thermal imaging have been widely used in many large enterprises and universities in China, so research on face-mask detection in thermal infrared images is clearly necessary. The recently introduced MTCNN (Multi-task Cascaded Convolutional Networks) presents a conceptually simple, flexible, and general framework for instance segmentation of objects. In this paper, we propose an algorithm that efficiently searches for objects in an image while segmenting the heat-generating part of each instance, i.e., a heating element in a thermal image acquired from an infrared camera. This method, called mask MTCNN, extends MTCNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding-box recognition, and the R-CNN approach is easy to generalize to other tasks. We propose an infrared-image detection algorithm based on R-CNN and detect heating elements that cannot be distinguished in RGB images.
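
A hedged sketch of the kind of decision the system above has to make — given a detected face region in a thermal frame, judge from apparent temperature whether the lower face is covered — is shown below. The face detector is omitted, and the temperature heuristic (a mask insulates the nose/mouth area so it reads cooler than exposed skin) is an assumption for illustration, not the authors' mask-MTCNN / R-CNN model.

```python
import numpy as np

def mask_worn(face_temps_c: np.ndarray, exposed_skin_c: float = 34.0, margin_c: float = 2.0) -> bool:
    """Crude heuristic: a mask insulates the lower face, so the nose/mouth region of a
    thermal image appears cooler than exposed skin. Threshold values are illustrative."""
    h = face_temps_c.shape[0]
    lower_face = face_temps_c[h // 2 :, :]            # bottom half of the detected face box
    return float(np.median(lower_face)) < exposed_skin_c - margin_c

# Toy example: upper face near skin temperature, lower face cooled by a mask.
face = np.vstack([np.full((8, 16), 34.5), np.full((8, 16), 30.0)])
print(mask_worn(face))   # True
```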

The Development of a Real-Time Hand Gestures Recognition System Using Infrared Images (적외선 영상을 이용한 실시간 손동작 인식 장치 개발)

  • Ji, Seong Cheol;Kang, Sun Woo;Kim, Joon Seek;Joo, Hyonam
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.12
    • /
    • pp.1100-1108
    • /
    • 2015
  • A camera-based real-time hand posture and gesture recognition system is proposed for controlling various devices inside automobiles. It uses an imaging system composed of a camera with a proper filter and an infrared lighting device to acquire images of hand-motion sequences. Several pre-processing algorithms are applied, followed by a background-normalization process, before the hand is segmented from the background. The hand posture is determined by first separating the fingers from the main body of the hand and then finding the relative positions of the fingers with respect to the center of the hand. The beginning and end of the hand motion in the sequence of acquired images are detected using pre-defined motion rules to start the hand-gesture recognition. A set of carefully designed features is computed and extracted from the raw sequence and fed into a decision-tree-like decision rule to determine the hand gesture. Many experiments were performed to verify the system. In this paper, we show performance results from tests on 550 sequences of hand-motion images collected from five individuals, to cover the variation among users of the system in a real-time environment. Among them, 539 sequences are correctly recognized, a recognition rate of 98%.
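
The final stage described above feeds hand-posture and motion features into a decision-tree-like rule. The sketch below shows what such a rule can look like; the feature set, thresholds, and gesture names are illustrative assumptions, not the authors' trained rules.

```python
from dataclasses import dataclass

@dataclass
class HandFeatures:
    finger_count: int          # fingers separated from the palm blob
    motion_dx: float           # net horizontal motion over the gesture, in pixels
    motion_dy: float           # net vertical motion over the gesture, in pixels

def classify_gesture(f: HandFeatures) -> str:
    """Decision-tree-like rule: posture first (finger count), then motion direction."""
    if f.finger_count == 0:
        return "fist"
    if abs(f.motion_dx) > abs(f.motion_dy):
        return "swipe_right" if f.motion_dx > 0 else "swipe_left"
    if abs(f.motion_dy) > 20:
        return "swipe_up" if f.motion_dy < 0 else "swipe_down"
    return "open_palm_hold"

print(classify_gesture(HandFeatures(finger_count=5, motion_dx=120.0, motion_dy=5.0)))  # swipe_right
```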