• Title/Summary/Keyword: Multi-Sensor Image

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.4 s.304
    • /
    • pp.1-12
    • /
    • 2005
  • Image registration is a process to establish the spatial correspondence between images of the same scene that are acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors with different modalities, here EO (electro-optic) and IR (infrared). Two approaches are generally possible for image registration: feature-based and intensity-based. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based registration methods assume that the statistical correlation between the two images holds globally. Unfortunately, EO and IR images often do not satisfy this assumption, so registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on the analysis of statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
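
As a concrete illustration of the normalized mutual information measure the abstract relies on, here is a minimal sketch in Python; the bin count and function names are illustrative choices, not taken from the paper.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """Compute NMI = (H(A) + H(B)) / H(A, B) from a joint intensity histogram."""
    # Joint histogram of co-located intensities
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    joint_p = joint_hist / joint_hist.sum()

    # Marginal distributions
    p_a = joint_p.sum(axis=1)
    p_b = joint_p.sum(axis=0)

    def entropy(p):
        p = p[p > 0]          # treat 0 * log 0 as 0
        return -np.sum(p * np.log2(p))

    h_a, h_b = entropy(p_a), entropy(p_b)
    h_ab = entropy(joint_p.ravel())
    return (h_a + h_b) / h_ab

# Registration then amounts to searching over candidate transforms and keeping
# the one that maximizes NMI between the warped IR image and the EO image.
```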

Study of Radiation Mapping System for Water Contamination in Water System (방사능 수치 오염 지도 작성을 위한 방사선 계측 시스템 연구)

  • Na, Teresa W.;Kim, Han Soo;Yeon, Jei Won;Lee, Rena;Ha, Jang Ho
    • Journal of Radiation Industry
    • /
    • v.5 no.2
    • /
    • pp.185-189
    • /
    • 2011
  • As the nuclear industry has developed, various types of radiological contamination have occurred. After the 9/11 terror attacks in the U.S., there has been concern that terrorists may expand their activities to the use of nuclear or radioactive substances. Recently, a powerful earthquake struck Japan and triggered a massive tsunami, after which the Fukushima nuclear power plant suffered one of the most serious reactor accidents in history. The Fukushima accident raised anxiety about radiation leaks, and about 170,000 people were evacuated from the area near the plant. For these reasons, social chaos could arise if radiological contamination reached the drinking water supply system, so establishing a radiation monitoring system for the main municipal water system is essential for national security. In this study, a feasibility test of a radiation monitoring system consisting of unified hybrid-type radiation detectors was performed as a multi-detector system using gamma-ray imaging. The hybrid-type radiation sensors were fabricated with CsI(Tl) scintillators and photodiodes. A preamplifier and an amplifier were also fabricated and assembled with the sensor in a shielding case. For the preliminary test of detecting radiological contamination in a river, multiple CsI(Tl)-PIN photodiode radiation detectors and a $^{137}Cs$ gamma-ray source were used. Data acquisition was carried out with a Linux-based ROOT program and an NI DAQ system with a LabVIEW program. The simulated contamination was assumed to occur in the Gapcheon river in Daejeon. The CsI(Tl)-PIN photodiode detectors were positioned along the Gapcheon riverside, the $^{137}Cs$ source was moved to mimic contamination flowing down the river, and the contamination region was then reconstructed.
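
The abstract does not detail how the contamination region was reconstructed from the detector readings; purely as an illustration of the kind of estimate a line of detectors permits, the sketch below locates a point source along a bank from count rates under an assumed inverse-square model. All positions, counts, and the standoff distance are made-up values.

```python
import numpy as np

# Hypothetical detector positions along the river bank (metres) and measured
# net count rates; an inverse-square model is assumed for illustration only.
detector_x = np.array([0.0, 20.0, 40.0, 60.0])
count_rate = np.array([120.0, 310.0, 290.0, 95.0])

def predicted_counts(source_x, strength, standoff=5.0):
    """Counts expected at each detector for a point source on the river line."""
    r2 = (detector_x - source_x) ** 2 + standoff ** 2
    return strength / r2

# Grid search over candidate source positions with a least-squares fit of strength
candidates = np.linspace(detector_x.min(), detector_x.max(), 601)
best_x, best_err = None, np.inf
for x in candidates:
    shape = predicted_counts(x, 1.0)
    strength = np.dot(shape, count_rate) / np.dot(shape, shape)  # LS amplitude
    err = np.sum((strength * shape - count_rate) ** 2)
    if err < best_err:
        best_x, best_err = x, err

print(f"Estimated source position: {best_x:.1f} m along the bank")
```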

Multiple SL-AVS(Small size & Low power Around View System) Synchronization Maintenance Method (다중 SL-AVS 동기화 유지기법)

  • Park, Hyun-Moon;Park, Soo-Huyn;Seo, Hae-Moon;Park, Woo-Chool
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.3
    • /
    • pp.73-82
    • /
    • 2009
  • Due to its many advantages, including low price, low power consumption, and small size, the CMOS camera has been used in many applications, including mobile phones, the automotive industry, medical imaging and sensing, robotic control, and security research. In particular, 360-degree omni-directional cameras built from multiple cameras have shown software-side issues with interface and communication management, delays, and complicated image display control, as well as hardware-side issues with energy management and miniaturization of the multi-camera assembly. Traditional CMOS camera systems are built as embedded systems in which a high-performance MCU lets each camera send and receive images, that is, a multi-layer system in which each camera effectively has its own high-performance Micro Controller Unit. We propose SL-AVS (Small Size/Low power Around-View System), which controls the cameras while collecting image data using a high-speed synchronization technique on top of a single-layer, low-performance MCU. It is an initial model of an omni-directional camera that builds a 360-degree view from several CMOS cameras, each with a 110-degree field of view. We connected a single MCU with four low-power CMOS cameras and implemented synchronization, control, and transmit/receive functions for the individual cameras, in contrast to the traditional system. The synchronization of the respective cameras was controlled and recorded by handling each camera's interrupt through the MCU. We improved the efficiency of data transmission by minimizing re-synchronization among the target, the CMOS cameras, and the MCU. Further, depending on the user's choice, individual images or groups of images divided into four domains were provided to the target. Finally, we analyzed and compared the performance of the developed camera system, including synchronization, data transfer time, and image data loss.
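
As a rough illustration of the synchronization-maintenance idea described above (not the authors' MCU firmware), the following toy simulation lets four camera clocks drift and triggers a resynchronization only when the skew exceeds a tolerance; all timing constants are assumptions.

```python
import random

SYNC_TOLERANCE_US = 200               # hypothetical allowed skew between cameras
FRAME_PERIOD_US = 33_333              # roughly 30 fps
clock_drift_ppm = [40, -25, 10, -55]  # per-camera oscillator drift (illustrative)

next_irq_us = [0.0] * 4               # time of each camera's next frame interrupt

def resync(reference_us):
    """Realign all cameras to a common frame boundary (models an MCU broadcast)."""
    for cam in range(4):
        next_irq_us[cam] = reference_us

for frame in range(1000):
    # Each camera raises its frame interrupt with its own drifted period
    for cam in range(4):
        drift = 1.0 + clock_drift_ppm[cam] * 1e-6
        next_irq_us[cam] += FRAME_PERIOD_US * drift + random.uniform(-5, 5)

    skew = max(next_irq_us) - min(next_irq_us)
    if skew > SYNC_TOLERANCE_US:
        # Re-synchronize only when the skew grows too large, which keeps the bus
        # free of unnecessary resync traffic between the MCU and the cameras.
        resync(max(next_irq_us))
```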

Development of Mobile Active Transponder for KOMPSAT-5 SAR Image Calibration and Validation (다목적실용위성 5호의 SAR 영상 검·보정을 위한 이동형 능동 트랜스폰더 개발)

  • Park, Durk-Jong;Yeom, Kyung-Whan
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.24 no.12
    • /
    • pp.1128-1139
    • /
    • 2013
  • KOMPSAT-5 (KOrea Multi-Purpose SATellite-5) carries a SAR (Synthetic Aperture Radar) payload, so unlike the optical sensor of the KOMPSAT-2 satellite it can conduct its mission continuously in all weather conditions and even at night. During the IOT (In-Orbit Test) period, SAR image calibration should be conducted using ground targets whose location and RCS are predetermined. Unlike a conventional corner reflector, an active transponder can change its internal transfer gain and delay, which allows it to appear in a SAR image pixel with very high radiance and at a virtual location. In this paper, the development of the active transponder is presented from design to I&T (Integration and Test).
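
To see why an adjustable internal delay lets the transponder appear at a virtual location, recall that an extra two-way delay shifts the apparent slant range by c·delay/2; the snippet below works this out for an assumed delay value (not a KOMPSAT-5 parameter).

```python
# A transponder that retransmits the radar pulse after an extra internal delay
# appears farther away by delta_R = c * delay / 2 (two-way propagation).
C = 299_792_458.0   # speed of light, m/s

def apparent_range_offset(extra_delay_s):
    """Slant-range shift produced by an added internal delay."""
    return C * extra_delay_s / 2.0

print(apparent_range_offset(1e-6))   # 1 microsecond of delay -> ~150 m shift
```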

Development of High-Sensitivity Detection Sensor and Module for Spatial Distribution Measurement of Multi Gamma Sources (감마선원의 공간분포 가시화 및 3D모델링을 위한 운용환경 개발)

  • Song, Keun-Young;Lim, Ji-Seok;Choi, Jung-Huk;Yuk, Young-Ho;Hwang, Young-Gwan;Lee, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.702-704
    • /
    • 2017
  • In the case of dismantling a nuclear power generation facility or responding to a radiation accident, accurate information about the gamma-ray sources is essential for rapid decontamination. To represent the position of the gamma-ray sources to be removed more efficiently, we create a spatial domain based on the real image, and expressing the distribution of the radiation sources in it allows decontamination to be performed more quickly. The developed gamma-ray imaging device overlays the detected gamma rays on the visible image but provides only a two-dimensional image; it does not show the distance to the source. In this paper, we develop an operating environment that uses a 3D visualization model to support effective decontamination operations.
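
As a hedged sketch of the 2D overlay step mentioned above (blending a gamma-ray count map onto the visible image), the following uses simple alpha blending; the resolution handling and colormap are illustrative choices, not the device's actual pipeline.

```python
import numpy as np
import cv2

def overlay_gamma(visible_bgr, gamma_counts, alpha=0.4):
    """Blend a coarse gamma-ray count map onto the visible image (illustrative)."""
    # Upsample the coarse count map to the camera resolution
    h, w = visible_bgr.shape[:2]
    counts = cv2.resize(gamma_counts.astype(np.float32), (w, h),
                        interpolation=cv2.INTER_CUBIC)
    counts = np.clip(counts / max(float(counts.max()), 1e-6), 0.0, 1.0)

    # Colour-code the counts and alpha-blend over the visible image
    heat = cv2.applyColorMap((counts * 255).astype(np.uint8), cv2.COLORMAP_JET)
    return cv2.addWeighted(visible_bgr, 1.0 - alpha, heat, alpha, 0.0)
```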

A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.5
    • /
    • pp.637-644
    • /
    • 2021
  • In the fields of computer vision, robotics, and augmented reality, 3D space and 3D object detection and recognition technology have become increasingly important. In particular, since RGB images and depth images can be acquired in real time through image sensors based on the Microsoft Kinect method, object detection, tracking, and recognition studies have changed considerably. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing the images acquired through a depth-based (RGB-Depth) camera in a multi-view camera system. Specifically, we propose a method that removes noise outside the object by applying a mask obtained from the color image, together with a combined filtering operation that uses the difference in depth information between pixels inside the object. The experimental results confirmed that the proposed method can effectively remove noise and improve the quality of the 3D reconstructed image.
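
A minimal sketch of the two steps described, a color-derived mask to suppress noise outside the object followed by edge-preserving smoothing of the depth inside it, is given below; the Otsu threshold, contour selection, and bilateral-filter parameters are illustrative, not the paper's exact pipeline.

```python
import numpy as np
import cv2

def clean_depth(color_bgr, depth_mm):
    """Suppress depth noise outside the object and smooth it inside (illustrative)."""
    # 1) Object mask from the colour image: Otsu threshold + largest contour
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    object_mask = np.zeros_like(mask)
    if contours:
        cv2.drawContours(object_mask, [max(contours, key=cv2.contourArea)], -1, 255, -1)

    # 2) Zero out depth outside the mask, then edge-preserving smoothing inside
    depth = np.where(object_mask > 0, depth_mm, 0).astype(np.float32)
    depth = cv2.bilateralFilter(depth, d=5, sigmaColor=30, sigmaSpace=5)
    return np.where(object_mask > 0, depth, 0)
```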

A Time Synchronization Scheme for Vision/IMU/OBD by GPS (GPS를 활용한 Vision/IMU/OBD 시각동기화 기법)

  • Lim, JoonHoo;Choi, Kwang Ho;Yoo, Won Jae;Kim, La Woo;Lee, Yu Dam;Lee, Hyung Keun
    • Journal of Advanced Navigation Technology
    • /
    • v.21 no.3
    • /
    • pp.251-257
    • /
    • 2017
  • Recently, hybrid positioning systems combining GPS, vision sensors, and inertial sensors have drawn much attention for estimating accurate vehicle positions. Since accurate multi-sensor fusion requires efficient time synchronization, this paper proposes an efficient method to obtain time-synchronized measurements of a vision sensor, an inertial sensor, and an OBD device based on GPS time information. In the proposed method, the time and position information is obtained by the GPS receiver, the attitude information is obtained by the inertial sensor, and the speed information is obtained by the OBD device. The obtained time, position, speed, and attitude information is converted into color information, which is then inserted into several corner pixels of the corresponding image frame. An experiment was performed with real measurements to evaluate the feasibility of the proposed method.
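
The following sketch illustrates the core idea of packing the GPS time, position, speed, and attitude values into color values written to a few corner pixels of each frame; the byte layout, pixel placement, and field set are assumptions for illustration only.

```python
import struct
import numpy as np

def embed_measurements(frame_rgb, gps_time, lat, lon, speed, yaw):
    """Write measurement bytes into corner pixels so each frame is self-timestamped."""
    payload = struct.pack("<dddff", gps_time, lat, lon, speed, yaw)   # 32 bytes
    values = np.frombuffer(payload, dtype=np.uint8)
    pixels = np.pad(values, (0, -len(values) % 3)).reshape(-1, 3)     # group into RGB

    frame = frame_rgb.copy()
    frame[0, :len(pixels)] = pixels   # top-left corner row (layout is illustrative)
    return frame

def extract_measurements(frame_rgb, n_bytes=32):
    """Recover the packed measurements from the corner pixels."""
    raw = frame_rgb[0, : (n_bytes + 2) // 3].reshape(-1)[:n_bytes].tobytes()
    return struct.unpack("<dddff", raw)
```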

Verification of Multi-point Displacement Response Measurement Algorithm Using Image Processing Technique (영상처리기법을 이용한 다중 변위응답 측정 알고리즘의 검증)

  • Kim, Sung-Wan;Kim, Nam-Sik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.3A
    • /
    • pp.297-307
    • /
    • 2010
  • Recently, maintenance engineering and technology for civil and building structures have begun to draw significant attention, and the number of structures whose structural safety must be evaluated due to deterioration and performance degradation is rapidly increasing. When stiffness decreases because of deterioration or member cracks, the dynamic characteristics of a structure change, so it is important to correctly evaluate the damaged areas and the extent of damage by analyzing dynamic characteristics obtained from the actual behavior of the structure. In general, the typical measurement instruments used for structure monitoring are dynamic instruments. Existing dynamic instruments have difficulty obtaining reliable data when the cable connecting the sensors to the device is long, and the one-to-one connection between each sensor and the instrument is uneconomical. Therefore, a method of measuring vibration at long range without attaching sensors is required. Representative non-contact methods for measuring structural vibration are the laser Doppler effect, GPS-based methods, and image processing techniques. The laser Doppler method shows relatively high accuracy but is uneconomical, while the GPS method requires expensive equipment and suffers from its own signal errors and a limited sampling rate. In contrast, the image-based method is simple and economical, and is well suited to obtaining the vibration and dynamic characteristics of inaccessible structures. Camera image signals, instead of sensors, have recently been used by many researchers. However, the existing approach, which records a single target point attached to a structure and then measures vibration using image processing, is relatively limited in what it can measure. Therefore, this study conducted a shaking table test and a field load test to verify the validity of a method that can measure multi-point displacement responses of structures using an image processing technique.
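
As an illustration of how multi-point displacement can be extracted from image sequences (one common approach, not necessarily the paper's exact algorithm), the sketch below tracks several target templates with normalized cross-correlation and converts pixel motion to displacement using an assumed scale factor.

```python
import numpy as np
import cv2

def track_targets(frames, templates, mm_per_pixel):
    """Track each target template across frames; return displacement time histories."""
    histories = [[] for _ in templates]
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for i, tpl in enumerate(templates):
            # Normalised cross-correlation; the peak location is the target position (pixels)
            result = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(result)
            histories[i].append(np.array(max_loc, dtype=float))

    # Displacement of each target relative to its position in the first frame
    return [(np.vstack(h) - h[0]) * mm_per_pixel for h in histories]
```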

S-FDS : a Smart Fire Detection System based on the Integration of Fuzzy Logic and Deep Learning (S-FDS : 퍼지로직과 딥러닝 통합 기반의 스마트 화재감지 시스템)

  • Jang, Jun-Yeong;Lee, Kang-Woon;Kim, Young-Jin;Kim, Won-Tae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.4
    • /
    • pp.50-58
    • /
    • 2017
  • Recently, several methods of converging heterogeneous fire sensor data have been proposed for effective fire detection, but the rule-based methods have low adaptability and accuracy, and the fuzzy inference methods suffer in detection speed and accuracy because they do not consider images. In addition, a few image-based deep learning methods have been researched, but in practical situations it is difficult to rapidly recognize a fire event when cameras are absent or the fire is outside a camera's field of view. In this paper, we propose a novel fire detection system combining a CNN-based deep learning algorithm and a fuzzy inference engine based on heterogeneous fire sensor data, including temperature, humidity, gas, and smoke density. We show that the proposed system can rapidly detect fire by utilizing images and can make reliable fire decisions by utilizing multi-sensor data. We also apply a distributed computing architecture to the fire detection algorithm to avoid concentrating computing power on a single server and thereby enhance scalability. Finally, we demonstrate the performance of the system through two experiments using NIST's Fire Dynamics Simulator, for both an explosively spreading fire and a gradually growing fire.
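
A minimal sketch of fusing a CNN image score with a fuzzy inference over the listed sensor inputs is shown below; the membership functions, rule set, and fusion weights are illustrative assumptions, not those of S-FDS.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_fire_degree(temp_c, smoke_density, gas_ppm):
    """Rule evaluation collapsed to a single fire-likelihood value."""
    hot   = tri(temp_c, 40, 70, 120)
    smoky = tri(smoke_density, 0.1, 0.4, 1.0)
    gassy = tri(gas_ppm, 50, 200, 600)
    # Rule: fire IF (hot AND smoky) OR (smoky AND gassy); min for AND, max for OR
    return max(min(hot, smoky), min(smoky, gassy))

def decide_fire(cnn_score, temp_c, smoke_density, gas_ppm,
                w_image=0.6, threshold=0.5):
    """Weighted fusion of the CNN image score and the fuzzy sensor score."""
    sensor_score = fuzzy_fire_degree(temp_c, smoke_density, gas_ppm)
    fused = w_image * cnn_score + (1.0 - w_image) * sensor_score
    return fused >= threshold, fused

# Example: a moderately confident image detection backed up by hot, smoky sensors
print(decide_fire(cnn_score=0.55, temp_c=80, smoke_density=0.5, gas_ppm=250))
```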

Analysis on Mapping Accuracy of a Drone Composite Sensor: Focusing on Pre-calibration According to the Circumstances of Data Acquisition Area (드론 탑재 복합센서의 매핑 정확도 분석: 데이터 취득 환경에 따른 사전 캘리브레이션 여부를 중심으로)

  • Jeon, Ilseo;Ham, Sangwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.577-589
    • /
    • 2021
  • Drone mapping systems can be applied to many fields such as disaster damage investigation, environmental monitoring, and construction process monitoring. To integrate the individual sensors attached to a drone, complicated procedures such as time synchronization used to be essential. Recently, a variety of composite sensors consisting of visual sensors and GPS/INS have been released. Composite sensors integrate multi-sensor data internally and provide geotagged image files to users. Therefore, to use composite sensors in drone mapping systems, the mapping accuracies obtainable from them should be examined. In this study, we analyzed the mapping accuracy of a composite sensor, focusing on the data acquisition area and the effect of pre-calibration. In the first experiment, we analyzed how mapping accuracy varies with the number of ground control points. When two GCPs were used for mapping, the total RMSE was reduced by about 40 cm, from more than 1 m to about 60 cm. In the second experiment, we assessed mapping accuracy based on whether pre-calibration was conducted or not. When a few ground control points were used, pre-calibration did not affect mapping accuracy; when the image sequences form weak geometry, however, pre-calibration can be essential to reduce possible mapping errors. In the absence of ground control points, pre-calibration can also reduce mapping errors. Based on this study, we expect that future drone mapping systems using composite sensors will help streamline survey and calibration processes depending on the data acquisition circumstances.
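
For reference, a "total RMSE" over check points is typically computed as below; the coordinates in the example are made up and are not the study's data.

```python
import numpy as np

def total_rmse(estimated_xyz, surveyed_xyz):
    """Root-mean-square of the 3D check-point errors (per-axis and combined)."""
    diff = np.asarray(estimated_xyz, float) - np.asarray(surveyed_xyz, float)
    rmse_xyz = np.sqrt(np.mean(diff ** 2, axis=0))          # RMSE per axis (X, Y, Z)
    return rmse_xyz, float(np.sqrt(np.sum(rmse_xyz ** 2)))  # combined 3D RMSE

# Example: mapped vs. surveyed coordinates of a few check points (metres, made up)
mapped   = [[10.02, 20.05, 5.31], [35.48, 12.11, 4.90], [60.92, 44.03, 6.12]]
surveyed = [[10.00, 20.00, 5.00], [35.50, 12.00, 5.10], [61.00, 44.00, 6.00]]
print(total_rmse(mapped, surveyed))
```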