• Title/Summary/Keyword: synthetic infrared image (합성 적외선 영상)


Effect of the East Asian Reference Atmosphere on a Synthetic Infrared Image (동아시아 표준 대기가 합성 적외선 영상에 미치는 효과)

  • Shin, Jong-Jin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.9 no.4
    • /
    • pp.97-103
    • /
    • 2006
  • A synthetic infrared image can be effectively utilized in various fields, such as the recognition and tracking of targets, as long as its quality is good enough to reflect real situations. One way to improve its quality is to use a reference atmosphere that best describes the atmospheric properties of a regional area. The East Asian reference atmosphere has been developed to represent the atmospheric properties of East Asia, including the Korean Peninsula. However, little research has been conducted to examine the effects of this reference atmosphere on modeling and simulation. In this regard, this paper analyzes the effects of the East Asian reference atmosphere on a synthetic infrared image. The research compares the atmospheric transmittance, the surface temperature, and the radiance obtained by using the East Asian reference atmosphere with those of the midlatitude reference atmosphere, which has been widely applied in East Asia. The results show that the differences in atmospheric transmittance, surface temperature, and radiance between the two reference atmospheres are significant, especially during the daytime. Therefore, it is recommended to apply the East Asian reference atmosphere when generating a synthetic infrared image with targets in East Asia.
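As a rough illustration of why transmittance and surface temperature translate into radiance differences, the at-sensor band radiance can be sketched as surface blackbody emission attenuated by the atmosphere. The sketch below is a minimal monochromatic approximation (Planck's law at 10 µm, path radiance neglected); the function names and the two transmittance values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
K = 1.380649e-23     # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody [W / (m^2 sr m)]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * K * temp_k))
    return a / b

def at_sensor_radiance(surface_temp_k, transmittance, wavelength_m=10e-6):
    """Surface-emitted radiance attenuated by the atmosphere.

    Path radiance is neglected for simplicity; a full synthetic-image
    model would add emitted and scattered path terms.
    """
    return transmittance * planck_radiance(wavelength_m, surface_temp_k)

# Hypothetical example: the same 300 K surface seen through two
# atmospheres with different (assumed) transmittance values.
mid_lat = at_sensor_radiance(300.0, 0.80)
east_asia = at_sensor_radiance(300.0, 0.72)
```

Even with identical surface temperature, a transmittance difference alone changes the at-sensor radiance proportionally, which is why the choice of reference atmosphere matters for image quality.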

Flame Detection Using Haar Wavelet and Moving Average in Infrared Video (적외선 비디오에서 Haar 웨이블릿과 이동평균을 이용한 화염검출)

  • Kim, Dong-Keun
    • The KIPS Transactions:PartB
    • /
    • v.16B no.5
    • /
    • pp.367-376
    • /
    • 2009
  • In this paper, we propose a flame detection method using Haar wavelets and moving averages in outdoor infrared video sequences. The proposed method is composed of three steps: Haar wavelet decomposition, flame-candidate detection, and tracking with flame classification. In Haar wavelet decomposition, each frame is decomposed into four sub-images (LL, LH, HL, HH), and high-frequency energy components are computed from LH, HL, and HH. In flame-candidate detection, we compute a binary image by thresholding the LL sub-image and apply morphological operations to the binary image to remove noise. After finding initial boundaries, final candidate regions are extracted by expanding the initial boundary regions into their neighborhoods. In tracking and flame classification, region size and high-frequency energy features are calculated from the candidate regions and tracked using queues, and we classify whether the tracked regions are flames from the temporal changes of their moving averages.
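The first step, a one-level 2D Haar decomposition into LL, LH, HL, and HH sub-images with a high-frequency energy measure, plus the moving average used in classification, can be sketched as follows (a minimal illustration with an assumed averaging normalization; not the authors' exact code):

```python
import numpy as np
from collections import deque

def haar_decompose(frame):
    """One-level 2D Haar decomposition into LL, LH, HL, HH sub-images."""
    f = frame[: frame.shape[0] // 2 * 2, : frame.shape[1] // 2 * 2].astype(float)
    a, b = f[0::2, :], f[1::2, :]                      # vertical pass
    lo_r, hi_r = (a + b) / 2.0, (a - b) / 2.0
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0         # horizontal pass
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def high_freq_energy(lh, hl, hh):
    """Mean high-frequency energy over the three detail sub-images."""
    return float(np.mean(lh**2 + hl**2 + hh**2))

class MovingAverage:
    """Windowed moving average over a queue of recent feature values."""
    def __init__(self, size=10):
        self.q = deque(maxlen=size)
    def update(self, x):
        self.q.append(x)
        return sum(self.q) / len(self.q)
```

A flickering flame region keeps its high-frequency energy elevated over time, while static bright objects do not, which is what the temporal moving averages distinguish.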

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.6
    • /
    • pp.635-644
    • /
    • 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) have been used in various applications, and mainly optical images are used as training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets, as well as optical images, for training the Detectron2 model, one of the improved R-CNN (Region-based Convolutional Neural Network) architectures. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features, which represent statistical texture information derived from the LiDAR data, were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a mixture of early fusion and late fusion, resulted in a 32.65% improvement in the building detection rate compared to training with optical images only. The experiments demonstrated the complementary effect of training on multimodal data with unique characteristics and of the fusion strategy.
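The distinction between early and late fusion that the hybrid scheme mixes can be shown schematically; the two helper functions below are hypothetical and only illustrate the general idea of channel stacking versus score combination, not the paper's Detectron2 pipeline:

```python
import numpy as np

def early_fusion(optical, extra_channels):
    """Early fusion: stack extra modalities (e.g. IR, LiDAR-derived
    features) as additional input channels before the network."""
    return np.concatenate([optical, extra_channels], axis=-1)

def late_fusion(scores_a, scores_b, w=0.5):
    """Late fusion: combine per-modality prediction scores after
    separate forward passes (here a simple weighted average)."""
    return w * np.asarray(scores_a) + (1.0 - w) * np.asarray(scores_b)
```

A hybrid scheme applies early fusion to some modalities and late fusion to others, which is where the complementary effect reported above comes from.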

A Perspective on the Electromagnetic Imaging of Aircrafts (비행체의 전자파 영상화 기술동향)

  • 윤용수;이재천
    • Korean Journal of Remote Sensing
    • /
    • v.15 no.3
    • /
    • pp.167-175
    • /
    • 1999
  • So far, remote sensing technology has been widely used in a variety of application areas such as the military, medical imaging, environment, and geology. Microwave remote sensing uses wavelengths ranging from around one centimeter up to a few tens of centimeters and is known to be very effective regardless of weather conditions and time of day, compared with reflective InfraRed (IR) or thermal IR remote sensing. There are three generic modes of synthetic aperture radar imaging systems depending on the application: stripmap mode, spotlight mode, and inverse mode. In this article we focus on the imaging of flying aircraft in the inverse mode, with a ground-based, fixed radar and moving objects. The imaging of flying aircraft is considered an important step for automatic target recognition systems, and therefore a great deal of effort has recently been devoted to the subject. Here we review three representative methods: Fourier transform processing, time-frequency processing, and reconstruction from projections. Some relative merits and drawbacks are also discussed.
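The Fourier-transform method reviewed first can be sketched in a few lines: under the small-angle, far-field approximation, the phase history of a rotating target is approximately the 2D Fourier transform of its reflectivity, so an image is formed with a 2D FFT. The snippet below is an illustrative toy (a single point scatterer), not any of the reviewed implementations:

```python
import numpy as np

def isar_image(phase_history):
    """Classical Fourier-transform ISAR processing: a 2D FFT of the
    (range x cross-range) phase-history matrix, magnitude displayed."""
    return np.abs(np.fft.fftshift(np.fft.fft2(phase_history)))

# A single point scatterer appears as a 2D complex sinusoid in the
# phase history and focuses to one bright pixel in the image.
n = np.arange(32)
ph = np.exp(2j * np.pi * (5 * n[:, None] + 9 * n[None, :]) / 32)
img = isar_image(ph)
```

The time-frequency and projection-based methods exist precisely because this Fourier picture breaks down when the target's rotation is fast or non-uniform during the imaging interval.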

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user in 3D video rendering, its accuracy is very important, since it determines the quality of the generated virtual view. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution color cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using the 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we apply the joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system with multi-threading. Experimental results show that the proposed system captures two viewpoints' color and depth videos in real time and generates 10 additional views at 7 fps.
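A minimal version of the joint bilateral filtering step, which smooths warped depth with range weights taken from the color (guide) image so that depth edges stay aligned with color edges, might look like the following (an unoptimized sketch with assumed parameter values, not the authors' implementation):

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth a depth map using spatial weights plus range weights
    computed from a guide (color/intensity) image."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_g = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            center = pad_g[y + radius, x + radius]
            # Range weight: pixels whose guide value differs from the
            # center contribute little, preserving guide-aligned edges.
            rng = np.exp(-((gwin - center)**2) / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * dwin) / np.sum(wgt)
    return out
```

Because the range weights come from the guide rather than from the depth itself, depth discontinuities snap to the color edges, reducing the boundary mismatches described above.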

Enhancement of the Nighttime Image Exposure with IR LED Camera for surveillance camera (감시 시스템에서의 야간 영상 보정 알고리즘을 이용한 IR LED Camera의 적정 노출 영상 획득)

  • Woo, Seung-Won;Sohn, Jong-In;Kim, Seung-Ryong;Kim, Jun-Hyung;Kim, Young-Jung;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2013.06a
    • /
    • pp.286-288
    • /
    • 2013
  • In surveillance cameras, nighttime image quality is one of the most important factors. This paper analyzes the problems of exposure control based on circuit-level control of infrared LEDs in an IR LED camera, and proposes adaptive background modeling and an object detection method specialized for IR cameras to solve them. We propose an improved nighttime image acquisition method that composites the adaptively modeled background, excluding the exposure-controlled background, with the detected objects. Experimental results show that, compared with the conventional circuit-based exposure control, the proposed method reduces cost by simplifying the process and improves nighttime image quality.
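The adaptive background modeling and object detection idea can be sketched with a standard running-average model; the functions below are an assumed, generic formulation, since the paper's exact update rule is not given in the abstract:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Adaptive background via a running average: bg <- (1-a)*bg + a*frame."""
    return (1.0 - alpha) * bg + alpha * frame.astype(float)

def foreground_mask(bg, frame, thresh=30.0):
    """Pixels deviating from the background model are object candidates."""
    return np.abs(frame.astype(float) - bg) > thresh
```

Compositing the detected foreground onto the separately modeled background is then a per-pixel selection between the two images using this mask.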


Efficient Preprocessing Method for Binary Centroid Tracker in Cluttered Image Sequences (복잡한 배경영상에서 효과적인 전처리 방법을 이용한 표적 중심 추적기)

  • Cho, Jae-Soo
    • Journal of Advanced Navigation Technology
    • /
    • v.10 no.1
    • /
    • pp.48-56
    • /
    • 2006
  • This paper proposes an efficient preprocessing technique for a binary centroid tracker in cluttered image sequences. It is known that the following factors determine the performance of a binary centroid target tracker: (1) an efficient real-time preprocessing technique, (2) exact target segmentation from cluttered background images, and (3) intelligent tracking window sizing. The proposed centroid tracker consists of an adaptive segmentation method based on novel distance features and an efficient real-time preprocessing technique that enhances the distinction between the objects of interest and their local background. Various tracking experiments using synthetic images as well as real Forward-Looking InfraRed (FLIR) images are performed to show the usefulness of the proposed methods.
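The core computation of a binary centroid tracker, thresholding followed by averaging the coordinates of segmented target pixels, can be sketched as follows (a generic illustration with hypothetical helper names, not the paper's preprocessing or distance features):

```python
import numpy as np

def binary_centroid(frame, thresh):
    """Centroid of the binary target mask after thresholding.
    Returns (row, col), or None when no pixel exceeds the threshold."""
    ys, xs = np.nonzero(frame > thresh)
    if xs.size == 0:
        return None
    return ys.mean(), xs.mean()

def track_window(center, size):
    """Square tracking window (top, left, bottom, right) around the
    centroid; intelligent sizing would adapt `size` to the target."""
    r, c = center
    half = size // 2
    return int(r) - half, int(c) - half, int(r) + half, int(c) + half
```

The preprocessing the paper proposes matters because everything after the threshold is a plain average: any clutter pixel that survives segmentation pulls the centroid directly.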


Target Recognition with Intensity-Boundary Features (밝기- 윤곽선 정보 기반의 목표물 인식 기법)

  • 신호철;최해철;이진성;조주현;김성대
    • Proceedings of the IEEK Conference
    • /
    • 2001.09a
    • /
    • pp.411-414
    • /
    • 2001
  • Representative features used in target recognition include intensity information and shape information such as boundaries. However, intensity and boundary information extracted directly from images generally contain many error sources caused by environmental changes, so high recognition performance is hard to achieve when either feature is used alone. A technique that uses intensity and shape information together is therefore required. This paper proposes a three-stage technique that combines intensity information and boundary-based shape information for recognition. In the proposed technique, PCA (Principal Component Analysis) is used to extract intensity information, while a segmentation technique based on a PDM (Point Distribution Model) and algebraic curve fitting are used to extract boundary information. The extracted intensity and boundary features are integrated by FLD (Fisher Linear Discriminant) and used for recognition. Applying the proposed technique to the recognition of infrared vehicle images confirmed improved recognition performance over existing methods.
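The FLD integration step can be sketched for the two-class case: the discriminant direction is w proportional to Sw^-1 (m1 - m2), the within-class-scatter-whitened difference of class means. The function below is a minimal generic version with a small regularizer added for numerical stability (an assumption, not from the paper):

```python
import numpy as np

def fisher_direction(class_a, class_b):
    """Two-class FLD: w = Sw^-1 (m_a - m_b), normalized. Maximizes
    between-class separation relative to within-class scatter."""
    m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)
    sw = ((class_a - m_a).T @ (class_a - m_a)
          + (class_b - m_b).T @ (class_b - m_b))
    # Small ridge term keeps Sw invertible for degenerate samples.
    w = np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]), m_a - m_b)
    return w / np.linalg.norm(w)
```

In the paper's setting, the concatenated PCA (intensity) and curve-fit (boundary) features would form the input vectors, so the learned projection weighs both cues jointly.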


Research for Calibration and Correction of Multi-Spectral Aerial Photographing System(PKNU 3) (다중분광 항공촬영 시스템(PKNU 3) 검정 및 보정에 관한 연구)

  • Lee, Eun Kyung;Choi, Chul Uong
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.7 no.4
    • /
    • pp.143-154
    • /
    • 2004
  • Researchers seeking geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions and expensive equipment can prevent researchers from collecting data anywhere and at any time. To allow for better flexibility, we developed a compact, automatic multi-spectral aerial photographic system (PKNU 2). This system's multi-spectral camera captures visible (RGB) and near-infrared (NIR) band images (3032 × 2008 pixels). Visible and infrared band images were obtained from separate cameras and combined into color-infrared composite images for environmental monitoring, but the resulting data quality was unsatisfactory. Moreover, although the PKNU 2 system could capture high-capacity photographs, the 12 s storage time per image prevented the stereoscopic overlap from reaching 60%. Therefore, we have been developing an advanced version of PKNU 2 (PKNU 3) that consists of a color-infrared spectral camera that photographs the visible and near-infrared bands with a single sensor, a thermal infrared camera, two 40 GB computers to store images, and an MPEG board to compress and transfer data to the computers in real time; the system can be attached to and detached from a helicopter. Verification and calibration of each sensor (REDLAKE MS 4000, Raytheon IRPro) were conducted before aerial photography to obtain more valuable data, and corrections for the spectral characteristics and radial lens distortion of the sensors were carried out.
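Radial lens distortion correction of the kind mentioned for these sensors is commonly modeled with Brown's two-term radial polynomial and inverted by fixed-point iteration; the sketch below is a generic formulation (the paper's actual calibration model and coefficients are not given in the abstract):

```python
def distort(xu, yu, k1, k2):
    """Brown radial model (first two terms): maps an undistorted
    normalized point to its distorted position."""
    r2 = xu * xu + yu * yu
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return xu * s, yu * s

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted point by the scale at the current estimate."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / s, yd / s
    return xu, yu
```

The iteration converges quickly for the mild distortion typical of mapping cameras, since the correction scale is close to 1.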


Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.5
    • /
    • pp.409-419
    • /
    • 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, the Kinect camera generates holes and noise around object boundaries in the obtained images because it uses an infrared pattern, and boundary flickering also occurs. We therefore propose a real-time virtual-view synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around the boundaries are filled by using the joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are found by analyzing the variation of the intensity and depth images. Boundary flickering is then reduced by replacing the values of flickering pixels with the maximum pixel value of the previous depth image, and virtual views are generated by applying the 3D warping technique. Holes in regions outside the occlusion areas are also filled with the center pixel value of the most reliable block, after the final block reliability is calculated using a block-based gradient searching algorithm. The experimental results show that the proposed algorithm generates the virtual view image in real time.
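For a rectified camera pair, the 3D warping step reduces to a horizontal shift by per-pixel disparity d = baseline × focal / depth, with unwritten pixels left as disocclusion holes; the sketch below is a simplified generic version (hypothetical parameter value, simple z-buffer), not the paper's implementation:

```python
import numpy as np

def warp_virtual_view(color, depth, baseline_focal=1000.0):
    """Forward 3D warping for rectified cameras: shift each pixel by
    disparity d = baseline*focal/depth; unwritten pixels are holes."""
    h, w = depth.shape
    out = np.zeros_like(color)
    hole = np.ones((h, w), dtype=bool)
    disp = np.where(depth > 0, baseline_focal / depth, 0.0)
    # Visit pixels far-to-near so nearer pixels overwrite farther ones
    # at the same target location (a simple z-buffer).
    order = np.argsort(-depth, axis=None)
    for idx in order:
        y, x = divmod(idx, w)
        xv = x + int(round(disp[y, x]))
        if 0 <= xv < w:
            out[y, xv] = color[y, x]
            hole[y, xv] = False
    return out, hole
```

The `hole` mask marks exactly the regions that the paper's joint-bilateral and block-reliability filling steps must repair.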