• Title/Abstract/Keyword: Optical Image

2,683 search results, processing time 0.032 seconds

A Study on Image Distortion Correction for Gobo Lighting Optical System

  • 김규하;이지환;이창훈;정미숙
    • 한국광학회지
    • /
    • Vol. 34 No. 2
    • /
    • pp.61-65
    • /
    • 2023
  • This paper studies a method of applying pre-distortion to the image mask of a gobo lighting optical system so that the projected image is sharp and corrected. Because a gobo luminaire is usually aimed at a tilt, severe vertical distortion occurs in the projected image. To solve this problem, corrected image coordinates were derived using a proportional relation and applied to the image mask to compensate the distortion. As a result, distortion was reduced by 64.5% compared with the conventional image mask.
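
The proportional pre-distortion idea can be sketched as follows; the linear row-stretch model and the parameter names are illustrative assumptions, not the paper's actual equations:

```python
# Minimal sketch of proportional keystone pre-distortion for a gobo mask.
# Assumption: a tilted projection stretches image rows linearly from bottom
# to top, so each mask row is pre-compressed by the inverse of its stretch.
def predistort_row(y, height, stretch_top):
    """Map mask row y (0 = bottom, height = top) to its pre-distorted row.

    stretch_top: how much the top row is stretched by the tilt relative
    to the bottom row (e.g. 1.5 means the top throw is 50% longer).
    """
    s = 1.0 + (stretch_top - 1.0) * (y / height)  # per-row stretch factor
    return y / s  # compress so the projection restores the intended row
```

With this model the bottom row is unchanged while the top row is compressed by `1/stretch_top`, so the tilted projection stretches it back to its intended position.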

Wavelet-based Fusion of Optical and Radar Image using Gradient and Variance

  • 예철수
    • 대한원격탐사학회지
    • /
    • Vol. 26 No. 5
    • /
    • pp.581-591
    • /
    • 2010
  • This study proposes a wavelet-based image fusion algorithm, which has the advantage of signal analysis in both the frequency and spatial domains. The developed algorithm compares the relative magnitudes of the radar and optical image signals: when the radar signal is relatively large, it is assigned to the fused image; when it is small, the fused signal is determined as a weighted sum of the radar and optical signals. The fusion rule simultaneously considers the ratio of the two signals, the image gradient, and the local variance. In experiments with Ikonos and TerraSAR-X satellite images, the proposed method produced better fusion results in terms of entropy, image clarity, spatial frequency, and speckle index than the conventional method, which assigns only the relatively strong radar signal to the fused image.
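
The per-coefficient fusion rule summarized above can be illustrated with a simplified sketch; the threshold and weight below are placeholder assumptions, and the actual rule also folds the gradient and local variance into the weight:

```python
def fuse_coefficient(radar, optical, ratio_threshold=2.0, w=0.7):
    """Fuse one pair of wavelet coefficients (simplified illustration).

    If the radar signal clearly dominates, assign it directly; otherwise
    take a weighted sum of the radar and optical signals.
    """
    if abs(radar) >= ratio_threshold * abs(optical):
        return radar
    return w * radar + (1.0 - w) * optical
```

Applying such a rule band by band across the wavelet decomposition, and inverting the transform, yields the fused image.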

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol;Suga, Yasuo
    • International Journal of Control, Automation, and Systems
    • /
    • Vol. 5 No. 6
    • /
    • pp.663-673
    • /
    • 2007
  • This paper presents a new moving-obstacle detection method using optical flow for a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical flow pattern in an omnidirectional image differs from that in a perspective camera: the geometry of the omnidirectional camera influences the flow. When a mobile robot with an omnidirectional camera moves, the optical flow is not only derived theoretically for the omnidirectional image but also investigated in omnidirectional and panoramic images. In this paper, the panoramic image is generated from the omnidirectional image using the camera's geometry. In particular, focus-of-expansion (FOE) and focus-of-contraction (FOC) vectors are defined from the optical flow estimated in omnidirectional and panoramic images, and are used as reference vectors for the relative evaluation of optical flow. Moving obstacles are detected through this relative evaluation. The proposed algorithm is tested on four motions of a mobile robot: straight forward, left turn, right turn, and rotation. The experimental results show the effectiveness of the proposed method.
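
The relative evaluation of flow vectors against the FOE can be illustrated with a toy check; the angular threshold and the pure-forward-motion assumption are illustrative, not the paper's actual criterion:

```python
import math

def deviates_from_foe(point, flow, foe, angle_tol_deg=20.0):
    """Flag a flow vector whose direction deviates from the radial
    direction out of the focus of expansion (pure forward motion)."""
    rx, ry = point[0] - foe[0], point[1] - foe[1]   # radial direction
    fx, fy = flow
    norm = math.hypot(rx, ry) * math.hypot(fx, fy)
    if norm == 0.0:
        return False  # degenerate: no displacement, or point at the FOE
    cos_a = max(-1.0, min(1.0, (rx * fx + ry * fy) / norm))
    return math.degrees(math.acos(cos_a)) > angle_tol_deg
```

Under forward motion a static background point flows radially away from the FOE, so a large angular deviation marks a candidate moving obstacle.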

Spatial Frequency Coverage and Image Reconstruction for Photonic Integrated Interferometric Imaging System

  • Zhang, Wang;Ma, Hongliu;Huang, Kang
    • Current Optics and Photonics
    • /
    • Vol. 5 No. 6
    • /
    • pp.606-616
    • /
    • 2021
  • A photonic integrated interferometric imaging system offers small scale, low weight, low power consumption, and good image quality, making it a potential replacement for conventional large space telescopes. In this paper, the principle of photonic integrated interferometric imaging is investigated. A novel lenslet array arrangement and lenslet pairing approach are proposed that help improve spatial frequency coverage: two short interference arms are evenly distributed between two adjacent long interference arms, and each lenslet in the array is paired twice through the novel pairing approach. Moreover, an image reconstruction model for optical interferometric imaging based on compressed sensing was established. Simulation results show that the peak signal-to-noise ratio (PSNR) of the image reconstructed with compressed sensing is about 10 dB higher than that of the directly restored image, that the normalized mean square error (NMSE) of the directly restored image is approximately 0.38 higher than that of the reconstructed image, and that the structural similarity index measure (SSIM) of the reconstructed image is about 0.33 higher than that of the directly restored image. The increased spatial frequency coverage and the image reconstruction approach jointly contribute to the better image quality of the photonic integrated interferometric imaging system.
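
The PSNR and NMSE figures quoted above follow standard definitions, which can be computed as in this small sketch (the inputs are treated as flat pixel sequences for simplicity):

```python
import math

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return 10.0 * math.log10(peak * peak / mse)

def nmse(orig, recon):
    """Normalized mean square error: error energy over signal energy."""
    num = sum((a - b) ** 2 for a, b in zip(orig, recon))
    den = sum(a ** 2 for a in orig)
    return num / den
```

Higher PSNR and lower NMSE both indicate a reconstruction closer to the reference image, which is how the compressed-sensing result is compared against direct restoration above.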

QUICK-LOOK TEST OF KOMPSAT-2 FOR IMAGE CHAIN VERIFICATION

  • Lee Eung-Shik;Jung Dae-Jun;Lee Seung-Hoon
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2005년도 Proceedings of ISRS 2005
    • /
    • pp.509-511
    • /
    • 2005
  • KOMPSAT-2, equipped with an optical telescope (MSC), will be launched this year. It can take images of the Earth with push-broom scanning at an altitude of 685 km. Its resolution is 1 m in the panchromatic channel with a swath width of 15 km. After the MSC is tested and its performance is measured at the instrument level, it is installed on the satellite. The image passes through the electro-optical system, the compression and storage unit, and finally the downlink subsystems. This integration procedure necessitates a functional test of all subsystems participating in the image chain. The objective of the functional test at satellite level (Quick-Look test) is to check the functionality of the image chain with a real target image. A collimated moving image is input to the EOS to simulate the operational environment as if KOMPSAT-2 were operating in orbit. The image chain from the EOS to the data downlink subsystem is verified through the Quick-Look test. This paper explains the Quick-Look test of KOMPSAT-2 and compares the taken images with the collimated input ones.


Digital Image Stabilization Based on Edge Detection and Lucas-Kanade Optical Flow

  • 이혜정;최윤원;강태훈;이석규
    • 로봇학회논문지
    • /
    • Vol. 5 No. 2
    • /
    • pp.85-92
    • /
    • 2010
  • In this paper, we propose a digital image stabilization technique using edge detection and Lucas-Kanade optical flow to minimize the motion of a shaken image. The accuracy of motion estimation based on block matching depends on the size of the search window, which leads to long calculation times and makes it unsuitable for real-time systems. In addition, since the size of the motion vector depends on the block size, it is difficult to estimate motion larger than the block. The proposed method extracts a trust region using edge detection and estimates the motion of critical points in that region with the Lucas-Kanade optical flow algorithm. The experimental results show that the proposed method effectively stabilizes shaken video in real time.
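
Once per-feature flow vectors are available (e.g. from Lucas-Kanade tracking), one common way to turn them into a global stabilization shift is a per-axis median, which is robust to a few outlier vectors from moving objects; this is an illustrative simplification, not the paper's exact estimator:

```python
def estimate_global_motion(flows):
    """Per-axis median of feature-point flow vectors (dx, dy).

    The median resists outlier vectors caused by independently moving
    objects, so the result approximates the camera shake to compensate.
    """
    xs = sorted(dx for dx, _ in flows)
    ys = sorted(dy for _, dy in flows)
    mid = len(flows) // 2
    return xs[mid], ys[mid]
```

Shifting each frame by the negative of this estimate cancels the dominant (camera) motion while leaving genuine object motion in place.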

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • 한국해양공학회지
    • /
    • Vol. 36 No. 1
    • /
    • pp.32-40
    • /
    • 2022
  • Underwater optical images suffer from limitations that degrade image quality compared with optical images taken in the atmosphere: wavelength-dependent attenuation of light and reflection by very small floating particles cause low contrast, blur, and color degradation. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image-processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59. CycleGAN and UGAN improved image quality qualitatively and quantitatively in various underwater environments, while the performance of FUnIE-GAN varied with the environment; Image Fusion performed well in color correction and sharpness enhancement. By improving the visibility of underwater scenes, these methods are expected to be useful for monitoring underwater work and for the autonomous operation of unmanned vehicles.

Restoring Turbulent Images Based on an Adaptive Feature-fusion Multi-input-Multi-output Dense U-shaped Network

  • Haiqiang Qian;Leihong Zhang;Dawei Zhang;Kaimin Wang
    • Current Optics and Photonics
    • /
    • Vol. 8 No. 3
    • /
    • pp.215-224
    • /
    • 2024
  • In medium- and long-range optical imaging systems, atmospheric turbulence blurs and distorts images, causing loss of image information. An image-restoration method based on an adaptive feature-fusion multi-input multi-output (MIMO) dense U-shaped network (Unet) is proposed to restore a single image degraded by atmospheric turbulence. The network model is based on the MIMO-Unet framework and incorporates patch-embedding shallow-convolution modules, which extract shallow image features and feed the multi-input dense encoding modules that follow; this combination improves the model's ability to extract features effectively. An asymmetric feature-fusion module combines encoded features at different scales, supporting the feature reconstruction of the subsequent multi-output decoding modules that restore turbulence-degraded images. Experimental results show that the adaptive feature-fusion MIMO dense U-shaped network outperforms traditional restoration methods, the CMFNet model, and the standard MIMO-Unet model in restored image quality, effectively minimizing geometric deformation and blurring.

Stellar Source Selections for Image Validation of Earth Observation Satellite

  • Yu, Ji-Woong;Park, Sang-Young;Lim, Dong-Wook;Lee, Dong-Han;Sohn, Young-Jong
    • Journal of Astronomy and Space Sciences
    • /
    • Vol. 28 No. 4
    • /
    • pp.273-284
    • /
    • 2011
  • A method of selecting stellar sources for validating image quality is investigated for a low Earth orbit optical remote sensing satellite. The image performance of the optical payload must be validated after launch into orbit, and stars are ideal point sources for validating the quality of optical images. For image validation, the stellar sources should be as bright as possible within the dynamic range of the charge-coupled device. The time delay and integration (TDI) technique used to observe the ground is also applied to observe the selected stars. Relations between the incident radiance at the aperture and the V magnitude of a star are established using Gunn & Stryker's catalogue of stellar spectra. Applying this result, an appropriate image performance index is determined, and suitable stars and regions of the sky are selected for observation by the optical payload. The result of this research can be used to validate the quality of a satellite's optical payload in orbit.
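
The link between a star's V magnitude and the flux reaching the aperture rests on the standard logarithmic magnitude scale (Pogson's relation); the paper's detailed radiance model additionally uses the Gunn & Stryker spectra, which this sketch does not attempt:

```python
import math

def flux_ratio(m1, m2):
    """Flux of a star of magnitude m1 relative to one of magnitude m2.

    Pogson's relation: a difference of 5 magnitudes is exactly a
    factor of 100 in flux, i.e. F1/F2 = 10 ** (-0.4 * (m1 - m2)).
    """
    return 10.0 ** (-0.4 * (m1 - m2))
```

Given a calibration star of known magnitude and measured signal, such a ratio lets the expected signal of any candidate star be checked against the detector's dynamic range.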

Resolution in Optical Scanning Holography

  • 도규봉
    • 한국항행학회논문지
    • /
    • Vol. 2 No. 2
    • /
    • pp.126-131
    • /
    • 1998
  • In optical scanning holography, three-dimensional holographic information about an object is generated by two-dimensional optical scanning, and the scanning beam is a time-dependent Gaussian-apodized Fresnel zone plate. Because the holographic information is produced directly as an electrical signal, image reconstruction is possible with an electron-beam-addressed spatial light modulator. This technique can be applied as a three-dimensional remote optical sensor, in particular for the identification of flying objects. In this paper, we first give a brief description of optical scanning holography and derive the resolution of the scanning beam in this system; we then present mathematical expressions for the real and virtual images needed for holographic image reconstruction, using the Gaussian principle.
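
For reference, the classical Fresnel zone-plate geometry underlying the scanning beam has n-th zone radius r_n = sqrt(n * wavelength * z); the sketch below illustrates only this textbook relation, not the time-dependent Gaussian apodization used in optical scanning holography:

```python
import math

def zone_radius(n, wavelength, z):
    """Radius of the n-th Fresnel zone at distance z for a given wavelength.

    Classical relation r_n = sqrt(n * wavelength * z): successive zone
    radii grow as sqrt(n), so the fringes get finer toward the edge.
    """
    return math.sqrt(n * wavelength * z)
```

The finest fringe spacing of the zone plate sets the lateral resolution of the scanning beam, which is why the zone geometry matters for the resolution analysis.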
