• Title/Summary/Keyword: IR image processing


The effective noise reduction method in infrared image using bilateral filter based on median value

  • Park, Chan-Geun;Choi, Byung-In
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.12
    • /
    • pp.27-33
    • /
    • 2016
  • In this paper, we propose a bilateral filter based on the median value that can reduce random noise and impulse noise with minimal loss of contour information. In general, EO/IR cameras generate random or impulse noise for a number of reasons, and this noise degrades the detection and tracking performance of subsequent signal processing. To reduce noise, the proposed bilateral filter sorts the values of the target pixel and its neighboring pixels and derives a Gaussian-type range coefficient from the median value; a Gaussian spatial coefficient is then computed from the distance between the center pixel and each surrounding pixel. Using these filter coefficients, the proposed method removes various types of noise effectively while minimizing the loss of contour information. To validate the proposed method, we present experimental results for several IR images.
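
A minimal sketch of the filtering idea described in this abstract, assuming the range (intensity) kernel is evaluated against the window median rather than the centre pixel; the window radius and sigma values are illustrative placeholders, not the paper's:

```python
import numpy as np

def median_based_bilateral(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Bilateral filter whose range weights are referenced to the window
    median, which suppresses impulse noise (sketch of the assumed method)."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    # spatial (distance-based) Gaussian kernel, fixed for every pixel
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            med = np.median(win)                        # robust reference value
            rng = np.exp(-(win - med)**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out
```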

Automatic Registration between Multiple IR Images Using Simple Pre-processing Method and Modified Local Features Extraction Algorithm (단순 전처리 방법과 수정된 지역적 피쳐 추출기법을 이용한 다중 적외선영상 자동 기하보정)

  • Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.485-494
    • /
    • 2017
  • This study focuses on automatic registration between multiple IR images using a simple preprocessing method and a modified local feature extraction algorithm. The input images are preprocessed with the median and absolute value after histogram equalization, and the brightness difference between images is reduced effectively by measuring the similarity of the extracted features as an angle rather than a distance. The results were evaluated visually and with the inverse RMSE method. Compared with the existing local feature extraction technique, the proposed approach showed higher image-matching reliability and was easier to apply. It is expected that this method can serve as one of the automatic registration methods between multi-sensor images under specific conditions.
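
The angle-based similarity measure mentioned above can be sketched as follows; the descriptor arrays and the angular threshold are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def match_by_angle(desc_a, desc_b, max_angle_deg=15.0):
    """Match local feature descriptors by the angle between descriptor
    vectors (cosine similarity) instead of Euclidean distance.
    desc_a, desc_b: (N, D) and (M, D) arrays of descriptors."""
    a = desc_a / (np.linalg.norm(desc_a, axis=1, keepdims=True) + 1e-12)
    b = desc_b / (np.linalg.norm(desc_b, axis=1, keepdims=True) + 1e-12)
    cos = np.clip(a @ b.T, -1.0, 1.0)          # pairwise cosine similarities
    angles = np.degrees(np.arccos(cos))        # pairwise angles in degrees
    nearest = np.argmin(angles, axis=1)        # best candidate in desc_b
    best = angles[np.arange(len(a)), nearest]
    return [(i, int(j)) for i, j in enumerate(nearest) if best[i] < max_angle_deg]
```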

Relative Navigation for Autonomous Aerial Refueling Using Infra-red based Vision Systems (자동 공중급유를 위한 적외선 영상기반 상대 항법)

  • Yoon, Hyungchul;Yang, Youyoung;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.46 no.7
    • /
    • pp.557-566
    • /
    • 2018
  • In this paper, a vision-based relative navigation system for autonomous aerial refueling is addressed. In air-to-air refueling, it is assumed that the tanker carries the drogue and the receiver carries the probe. To obtain the relative information from the drogue, a vision-based imaging technique using an infrared camera is applied. In this process, the relative information is estimated using Gaussian Least Squares Differential Correction (GLSDC) and Levenberg-Marquardt (LM), based on the drogue geometric information computed through image processing. The two approaches proposed in this paper are analyzed through numerical simulations.
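
For reference, a generic Levenberg-Marquardt loop of the kind named in the abstract is sketched below; the drogue reprojection residual itself is not reproduced, so `residual(x)` stands for a hypothetical user-supplied function:

```python
import numpy as np

def levenberg_marquardt(residual, x0, n_iter=50, lam=1e-3, eps=1e-6):
    """Damped Gauss-Newton (LM) loop with a forward-difference Jacobian.
    `residual(x)` returns an (m,) array of residuals to be minimised."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for k in range(x.size):                 # numerical Jacobian
            dx = np.zeros_like(x)
            dx[k] = eps
            J[:, k] = (residual(x + dx) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5        # accept the step, relax damping
        else:
            lam *= 2.0                          # reject the step, add damping
    return x
```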

Design and Implementation of Smart Pen based User Interface System for U-learning (U-Learning 을 위한 스마트펜 인터페이스 시스템 디자인 및 개발)

  • Shim, Jae-Youen;Kim, Seong-Whan
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.11a
    • /
    • pp.1388-1391
    • /
    • 2010
  • In this paper, we present the design and implementation of a U-learning system using a pen-based augmented reality approach. Each student is given a smart pen and a smart study book that is similar to the printed material already in service; however, we print the study book with CMY inks and embed perceptually invisible dot patterns using K ink. The smart pen includes (1) an IR LED for illumination, (2) an IR-pass filter for extracting the dot patterns, and (3) a camera for image capture. From the image sequences, we perform a topology analysis that determines the topological distance between dot pixels, followed by error-correction decoding using four position symbols and five CRC symbols. When a student touches the smart study book with the smart pen, multimedia (visual/audio) information related to the selected region is shown. Our scheme can embed 16 bits of information, more than 200% larger than previous schemes, which support 7 or 8 bits of information.
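
The CRC-based error-detection step can be illustrated with plain GF(2) polynomial division; the generator polynomial and the layout of the four position symbols are not specified in the abstract, so the values below are placeholders and the five check symbols are modelled simply as five bits:

```python
def gf2_remainder(bits, poly):
    """Remainder of the GF(2) polynomial `bits` divided by `poly`
    (both lists of 0/1 values, most significant bit first)."""
    buf = list(bits)
    for i in range(len(buf) - len(poly) + 1):
        if buf[i]:
            for j, p in enumerate(poly):
                buf[i + j] ^= p
    return buf[-(len(poly) - 1):]

def encode_16bit(payload, poly=(1, 0, 0, 1, 0, 1)):
    """Append a 5-bit CRC to a 16-bit payload (placeholder polynomial)."""
    shifted = list(payload) + [0] * (len(poly) - 1)   # payload * x^5
    return list(payload) + gf2_remainder(shifted, poly)

def codeword_ok(codeword, poly=(1, 0, 0, 1, 0, 1)):
    """A received codeword is consistent if its CRC remainder is all zero."""
    return not any(gf2_remainder(codeword, poly))
```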

High-Speed Satellite Detection in High-Resolution Image Using Image Processing (영상 처리를 이용한 고해상도 영상 내 위성의 고속 검출)

  • Shin, Seunghyeok;Lee, Jongmin;Lee, Sangwook;Yang, Taeseok;Kim, Whoi-Yul
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.46 no.5
    • /
    • pp.427-435
    • /
    • 2018
  • Many countries are trying to deploy satellite surveillance systems for national defense, and some of these systems rely on optical observation of the satellites above their territories. An optical satellite surveillance system requires the coordinates of the satellites in an acquired image and expects those coordinates to be delivered to the tracking system. The proposed method detects satellite sources in a high-resolution image with fast image processing for the optical surveillance system. To achieve faster detection, the method reduces the size of the original image and approximates the trajectory of a satellite, so that image processing is applied only to the area near the approximated trajectory in the original image. The proposed method achieves detection performance similar to the previous method at a higher speed.
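
A rough sketch of the coarse-to-fine strategy described above (downscale, detect, fit a trajectory, then threshold only a band around it at full resolution); the thresholds, scale factor, and band width are assumptions for illustration:

```python
import numpy as np

def detect_near_trajectory(img, scale=8, k_sigma=5.0, band=20):
    """Coarse-to-fine detection: find bright sources on a subsampled copy,
    fit a straight trajectory through them, then threshold only a band
    around that line in the full-resolution image."""
    img = img.astype(float)
    small = img[::scale, ::scale]                        # crude downscale
    ys, xs = np.nonzero(small > small.mean() + k_sigma * small.std())
    if len(xs) < 2:
        return np.zeros(img.shape, dtype=bool)           # nothing detected
    a, b = np.polyfit(xs * scale, ys * scale, 1)         # y = a*x + b (full-res)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    near = np.abs(yy - (a * xx + b)) < band              # band around trajectory
    thr = img[near].mean() + k_sigma * img[near].std()
    return near & (img > thr)
```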

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2013.10a
    • /
    • pp.763-764
    • /
    • 2013
  • A three-dimensional image capturing device and its signal processing algorithm and apparatus are presented. Three-dimensional information is one of the emerging differentiators that provide consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It carries the depth information of a scene together with the conventional color image, so that the full information of real life that human eyes experience can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system applies the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The suggested optical resonator enables capture of a full-HD depth image with millimeter-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full-HD depth images simultaneously (Figures 2 and 3). The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, fabrication, 3D camera system prototype, and signal processing algorithms.
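
The depth recovery itself follows the standard continuous-wave Time-of-Flight relation; a textbook four-phase formulation is sketched below for a 20 MHz modulation frequency (the optical-resonator shutter in the paper performs the demodulation optically, which is not modelled here):

```python
import numpy as np

C = 299_792_458.0                               # speed of light [m/s]

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    """Four-phase continuous-wave ToF depth recovery: a0..a3 are intensity
    images demodulated at 0, 90, 180 and 270 degrees of the modulation.
    The unambiguous range is C / (2 * f_mod), about 7.5 m at 20 MHz."""
    phase = np.arctan2(a3 - a1, a0 - a2)        # phase delay in [-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)          # wrap to [0, 2*pi)
    return C * phase / (4.0 * np.pi * f_mod)    # depth in metres
```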


DSP Embedded Early Fire Detection Method Using IR Thermal Video

  • Kim, Won-Ho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.10
    • /
    • pp.3475-3489
    • /
    • 2014
  • Here we present a simple flame detection method for an infrared (IR) thermal camera-based real-time fire surveillance digital signal processor (DSP) system. Infrared thermal cameras are especially advantageous for unattended fire surveillance: all-weather monitoring is possible regardless of illumination and climate conditions, and the data quantity to be processed is one-third that of color video. Conventional IR camera-based fire detection methods mainly used pixel-based temporal correlation functions, in which temporal changes in pixel intensity generated by the irregular motion and spreading of the flame pixels are measured using correlation functions. The correlation values of non-flame regions are uniform, whereas flame regions have irregular temporal correlation values. To satisfy the requirement of early detection, any fire detection technique must be applicable within a very short period of time, but the conventional pixel-based correlation function is computationally intensive. In this paper, we propose a simple IR camera-based flame detection algorithm optimized for a compact embedded DSP system to achieve early detection. To reduce the computational load, block-based calculations are used to select the candidate flame region and to measure the temporal motion of flames, and these functions are combined to obtain the early flame detection algorithm. The proposed algorithm was tested to verify the required function and performance in real time using IR test videos and a real-time DSP system. The findings indicated that the system detected flames within 5 to 20 seconds and had a correct flame detection ratio of 100% with an acceptable false detection ratio at the video-sequence level.
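
A block-level version of the candidate selection and temporal-correlation test described above can be sketched as follows; the block size, temperature threshold, and correlation threshold are placeholders rather than the paper's values:

```python
import numpy as np

def block_means(frame, block=16):
    """Average each non-overlapping block x block region, reducing the
    per-pixel computation the abstract mentions."""
    h, w = frame.shape
    h2, w2 = h - h % block, w - w % block
    v = frame[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return v.mean(axis=(1, 3))

def flame_candidates(frames, temp_thr=200.0, corr_thr=0.9):
    """Flag hot blocks whose temporal profile has a low lag-1 correlation,
    i.e. irregular flicker, over a sequence of IR frames."""
    blocks = np.stack([block_means(f) for f in frames])   # (T, BH, BW)
    hot = blocks.mean(axis=0) > temp_thr                  # candidate blocks
    x, y = blocks[:-1], blocks[1:]                        # lag-1 pairs
    num = ((x - x.mean(0)) * (y - y.mean(0))).mean(0)
    den = x.std(0) * y.std(0) + 1e-9
    corr = num / den                                      # lag-1 autocorrelation
    return hot & (corr < corr_thr)
```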

Development of Automatic Optical Fiber Alignment System and Optimal Aligning Algorithm (자동 광 정렬시스템 및 최적 광 정렬알고리즘의 개발)

  • Um, Chul;Kim, Byung-Hee;Choi, Young-Seok
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.4
    • /
    • pp.194-201
    • /
    • 2004
  • Optical fibers are indispensable for optical communication systems that transmit large volumes of data at high speed, but alignment technology with sub-micron accuracy is required for precise axis adjustment and connection. For precise alignment of the optical arrays, in this research we developed a 12-axis (8 automated and 4 manual axes) automatic optical fiber alignment system that includes an image processing-based searching system, an automatic loading system using a robot and a suction tool, and an automatic UV bonding system. To obtain sub-micron alignment accuracy, two 4-axis PC-based motion controllers and two 50 nm resolution 6-axis micro-stages actuated by micro stepping motors are adopted. The fiber aligning procedure consists of two steps. First, the optical waveguide and an input optical array are aligned by the 6-axis input micro-stage with the IR camera; the image processing technique reduces the initial manual aligning time, achieving a 50% decrease in aligning time. Second, the IR camera is replaced by the output micro-stage, and the waveguide and two optical arrays are aligned simultaneously until the laser power delivered to the optical powermeter reaches the threshold value. When the aligning procedure is finished, the waveguide and arrays are UV bonded. An automatic loading/unloading system is also introduced, and the entire waveguide handling time is reduced significantly compared with the former commercial aligning system.
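
The fine-alignment stage that maximizes the power reaching the optical powermeter can be illustrated with a generic coordinate-wise hill-climbing search; `read_power()` and `move_axis()` are hypothetical interfaces to the powermeter and micro-stage, and this sketch is not necessarily the paper's optimal aligning algorithm:

```python
def hill_climb_align(read_power, move_axis, axes=("x", "y"),
                     step=1.0, min_step=0.05, shrink=0.5):
    """Coordinate-wise hill climbing: step each axis while the powermeter
    reading improves, then shrink the step size until it reaches min_step."""
    best = read_power()
    while step >= min_step:
        improved = False
        for axis in axes:
            for sign in (+1, -1):
                move_axis(axis, sign * step)       # trial move
                p = read_power()
                if p > best:
                    best, improved = p, True       # keep the move
                else:
                    move_axis(axis, -sign * step)  # undo the move
        if not improved:
            step *= shrink                         # refine the search
    return best
```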

Detection of Precise Crop Locations under Vinyl Mulch using Non-integral Moving Average Applied to Thermal Distribution

  • Cho, Yongjin;Yun, Yeji;Lee, Kyou-Seung;Lee, Dong-Hoon
    • Journal of Biosystems Engineering
    • /
    • v.42 no.2
    • /
    • pp.117-125
    • /
    • 2017
  • Purpose: Damage to pulse crops by wild birds is a serious problem; the rate of damage between the seeding and cotyledon stages reaches 54.6% on average. In this study, a crop-position detection method was developed in which infrared (IR) sensors were used to determine the cotyledon position under a vinyl mulch. Methods: IR sensors that measure temperature were used to locate the cotyledons below the vinyl mulch. A single IR sensor module was installed at three locations of the crops (peanut, red lettuce, and crown daisy) in the cotyledon stage. The representative thermal response of a 16 × 4 pixel area was detected with this sensor at a distance of 25 cm from the target. A spatial image was formed from the two-dimensional temperature distribution using a non-integral moving-average method. The collected data were first processed by taking the moving average via interpolation to determine the frame with the lowest variance for a resolution unit of 1.02 cm. Results: The temperature distribution was plotted for a distance of 10 cm between the crops, and a clear leaf pattern of the crop was confirmed visually; however, the temperature distribution after normalization was unclear. The image-conversion and frequency-conversion graphs were obtained from the moving average by averaging the points corresponding to a frequency of 40 Hz over 8 pixels. The most optimized resolutions at locations 1, 2, and 3 were found at 3.4, 4.1, and 5.6 pixels, respectively. Conclusions: In this study, to address the damage caused by birds to crops in the cotyledon stage after seeding, the vinyl mulch is punched after seeding. The crops in the cotyledon stage could be located accurately using the proposed method. By conducting experiments with the single IR sensor and a sliding mechanical device together with a non-integral interpolation method, the crops in the cotyledon stage could be located precisely.
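
A one-dimensional sketch of a non-integral (fractional-window) moving average via interpolation, as described in the Methods; the upsampling factor and the use of linear interpolation are assumptions, not details from the paper:

```python
import numpy as np

def fractional_moving_average(signal, window, up=10):
    """Moving average with a non-integer window length: the 1-D signal is
    linearly interpolated onto a grid `up` times finer, averaged over
    round(window * up) fine samples, then sampled back."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    fine_x = np.arange((n - 1) * up + 1) / up            # 0, 0.1, ..., n-1
    fine = np.interp(fine_x, np.arange(n), signal)       # linear interpolation
    w = max(1, int(round(window * up)))
    smoothed = np.convolve(fine, np.ones(w) / w, mode='same')
    return smoothed[::up]                                # original sampling
```

For example, `fractional_moving_average(row, 3.4)` averages over a 3.4-sample window, matching the non-integer pixel resolutions reported in the Results.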

Segmentation and 3D Visualization of Medical Image : An Overview

  • Kang, Jiwoo;Kim, Doyoung;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery
    • /
    • v.1 no.1
    • /
    • pp.27-31
    • /
    • 2014
  • In this paper, an overview of segmentation and 3D visualization methods is presented. Commonly, two kinds of methods are used to visualize organs and vessels in 3D from medical images such as CT(A) and MRI: Direct Volume Rendering (DVR) and Iso-surface Rendering (IR). DVR can be applied directly to a volume: it traverses the volume and determines which voxels are visualized based on a transfer function. In contrast, IR requires a series of processes such as segmentation, polygonization, and visualization. To extract a region of interest (ROI) from the medical volume image via segmentation, seed regions of the object and the background are required, which are typically provided by the user. To visualize the extracted regions, the boundary points of the regions must be polygonized. In other words, a boundary surface composed of polygons such as triangles and rectangles is required to visualize the regions in 3D, because illumination effects, which make the object appear shaded and three-dimensional, cannot be applied directly to the points.
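
The transfer-function-driven compositing that distinguishes DVR from iso-surface rendering can be sketched for a single ray as follows; the piecewise-linear transfer function is only a toy example:

```python
import numpy as np

def toy_transfer(v, iso=0.5):
    """Map a normalised voxel value to a grey colour and an opacity."""
    alpha = float(np.clip((v - iso) * 4.0, 0.0, 1.0))
    return (v, v, v), alpha

def composite_ray(samples, transfer=toy_transfer):
    """Front-to-back compositing of the voxel values sampled along one ray,
    as done per pixel in Direct Volume Rendering."""
    color = np.zeros(3)
    alpha = 0.0
    for v in samples:
        rgb, a = transfer(v)
        color += (1.0 - alpha) * a * np.asarray(rgb, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                        # early ray termination
            break
    return color, alpha
```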