• Title/Summary/Keyword: optical image

Search results: 2,687

Optical Encryption based on Visual Cryptography and Interferometry (시각 암호와 간섭계를 이용한 광 암호화)

  • 이상수;서동환;김종윤;박세준;신창목;김수중;박상국
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2000.08a
    • /
    • pp.126-127
    • /
    • 2000
  • In this paper, we proposed an optical encryption method based on the concepts of visual cryptography and interferometry. In our method, a secret binary image was divided into two sub-images, which were encrypted by an XOR operation with a random key mask. Finally, each encrypted image was converted into a phase mask. By interference of these two phase masks, the original image was obtained. Compared with the general visual encryption method, this optical method had a better signal-to-noise ratio because there is no need to generate sub-pixels as in visual encryption.
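The XOR share-splitting step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the digital part only (the interferometric phase-mask encoding and optical recovery are omitted); the toy 4x4 image and variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret binary image (toy 4x4 example)
secret = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 1, 0, 0],
                   [0, 0, 1, 1]], dtype=np.uint8)

# Random key mask; one share is the key itself,
# the other is the secret XOR'ed with the key
key = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)
share1 = key
share2 = secret ^ key

# XOR of the two shares recovers the secret exactly, with no
# sub-pixel expansion (hence the better SNR than classic VC)
recovered = share1 ^ share2
```

Either share alone is statistically independent of the secret, which is what makes the scheme a valid two-share visual-cryptography split.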


A Study on Image Distortion Correction for Gobo Lighting Optical System (고보 조명 광학계의 이미지 왜곡 보정에 관한 연구)

  • Gyu-Ha Kim;Ji-Hwan Lee;Chang-Hun Lee;Mee-Suk Jung
    • Korean Journal of Optics and Photonics
    • /
    • v.34 no.2
    • /
    • pp.61-65
    • /
    • 2023
  • This paper studies a method of applying pre-distortion to the image mask of a gobo lighting optical system, so that the projected image is corrected and a clear image is irradiated. Because a gobo optical system is generally installed at a tilt, severe vertical distortion occurs in the projected image. To solve this problem, correction coordinates for the image were derived using a proportional equation and applied to the image mask. As a result, it was confirmed that the distortion was reduced by 64.5% compared with the existing image mask.
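The pre-distortion idea can be illustrated with a toy row-wise rescaling: rows that the tilted projection will magnify are pre-shrunk in the mask so they land at the intended width. The `tilt_gain` factor and the linear scaling law here are illustrative assumptions, not the paper's actual proportional equation.

```python
import numpy as np

def predistort_rows(mask, tilt_gain=0.2):
    """Toy keystone pre-distortion: horizontally pre-shrink each row
    of the image mask in proportion to its vertical position, so the
    tilted projection stretches it back to the intended shape."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    for y in range(h):
        # rows farther down are magnified more by the tilt,
        # so pre-shrink them by a proportional factor
        scale = 1.0 / (1.0 + tilt_gain * y / (h - 1))
        xs = np.arange(w)
        src = ((xs - w / 2) / scale + w / 2).astype(int)
        valid = (src >= 0) & (src < w)
        out[y, xs[valid]] = mask[y, src[valid]]
    return out

mask = np.ones((8, 8), dtype=np.uint8)
pre = predistort_rows(mask)
```

The top row (no tilt magnification) is left unchanged, while lower rows are resampled; a real implementation would interpolate rather than use nearest-integer indexing.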

Wavelet-based Fusion of Optical and Radar Image using Gradient and Variance (그레디언트 및 분산을 이용한 웨이블릿 기반의 광학 및 레이더 영상 융합)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.5
    • /
    • pp.581-591
    • /
    • 2010
  • In this paper, we proposed a new wavelet-based image fusion algorithm, which has advantages in both the frequency and spatial domains for signal analysis. The developed algorithm compares the ratio of the SAR image signal to the optical image signal and assigns the SAR signal to the fused image if the ratio is larger than a predefined threshold; if the ratio is smaller, the fused signal is determined by a weighted sum of the optical and SAR signals. The fusion rules consider the SAR-to-optical signal ratio, the image gradient, and the local variance of each image signal. We evaluated the proposed algorithm using Ikonos and TerraSAR-X satellite images. In terms of entropy, image clarity, spatial frequency, and speckle index, the proposed method showed better performance than conventional methods, which keep only relatively strong SAR signals in the fused image.
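The core ratio-threshold fusion rule from the abstract can be sketched as below. For brevity it is applied directly to signal values rather than wavelet coefficients, and the gradient and local-variance terms are omitted; the threshold and weight values are placeholders.

```python
import numpy as np

def fuse(optical, sar, threshold=2.0, weight=0.7):
    """Ratio-based fusion rule: where the SAR signal strongly
    dominates the optical signal, keep the SAR value; elsewhere
    blend the two with a weighted sum."""
    ratio = sar / np.maximum(optical, 1e-6)
    return np.where(ratio > threshold,
                    sar,
                    weight * optical + (1 - weight) * sar)

optical = np.array([10.0, 10.0, 10.0])
sar = np.array([5.0, 30.0, 12.0])
out = fuse(optical, sar)  # only the middle sample exceeds the ratio threshold
```

In the paper this rule runs per wavelet subband, so strong SAR structure is injected where it is informative while the weighted sum preserves optical content elsewhere.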

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol;Suga, Yasuo
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.6
    • /
    • pp.663-673
    • /
    • 2007
  • This paper presents a new moving-obstacle detection method using optical flow in a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical flow pattern in an omnidirectional image differs from that in a perspective camera: the geometry of the omnidirectional camera influences the optical flow. When a mobile robot with an omnidirectional camera moves, the optical flow is not only calculated theoretically in the omnidirectional image but also investigated in omnidirectional and panoramic images. In this paper, the panoramic image is generated from the omnidirectional image using the camera geometry. In particular, focus of expansion (FOE) and focus of contraction (FOC) vectors are defined from the optical flow estimated in the omnidirectional and panoramic images, and are used as reference vectors for the relative evaluation of the optical flow. The moving obstacle is detected through this relative evaluation. The proposed algorithm is tested for four motions of a mobile robot: straight forward, left turn, right turn, and rotation. The effectiveness of the proposed method is shown by the experimental results.

Spatial Frequency Coverage and Image Reconstruction for Photonic Integrated Interferometric Imaging System

  • Zhang, Wang;Ma, Hongliu;Huang, Kang
    • Current Optics and Photonics
    • /
    • v.5 no.6
    • /
    • pp.606-616
    • /
    • 2021
  • A photonic integrated interferometric imaging system has the advantages of small scale, low weight, low power consumption, and good image quality, and is a potential replacement for conventional large space telescopes. In this paper, the principle of photonic integrated interferometric imaging is investigated. A novel lenslet-array arrangement and lenslet-pairing approach are proposed, which help improve spatial frequency coverage. In the novel arrangement, two short interference arms are evenly distributed between two adjacent long interference arms, and each lenslet in the array is paired twice through the novel pairing approach. Moreover, an image-reconstruction model for optical interferometric imaging based on compressed sensing was established. Simulation results show that the peak signal-to-noise ratio (PSNR) of the image reconstructed with compressed sensing is about 10 dB higher than that of the directly restored image, that the normalized mean square error (NMSE) of the directly restored image is approximately 0.38 higher than that of the reconstructed image, and that the structural similarity index measure (SSIM) of the reconstructed image is about 0.33 higher than that of the directly restored image. The increased spatial frequency coverage and the image-reconstruction approach jointly contribute to the better image quality of the photonic integrated interferometric imaging system.
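The PSNR and NMSE figures quoted in the abstract follow standard definitions, sketched below (SSIM is omitted for brevity). Note that NMSE conventions vary slightly between papers; the reference-energy normalization used here is one common form, assumed rather than taken from the paper.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def nmse(ref, img):
    """Squared error normalized by the reference image energy."""
    ref = ref.astype(float)
    img = img.astype(float)
    return np.sum((ref - img) ** 2) / np.sum(ref ** 2)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0  # uniform offset of 10 gray levels
```

Higher PSNR and lower NMSE both indicate that the compressed-sensing reconstruction is closer to the reference than the direct restoration.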

QUICK-LOOK TEST OF KOMPSAT-2 FOR IMAGE CHAIN VERIFICATION

  • Lee Eung-Shik;Jung Dae-Jun;Lee Seung-Hoon
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.509-511
    • /
    • 2005
  • KOMPSAT-2, equipped with an optical telescope (the MSC), will be launched this year. It can take images of the Earth by push-broom scanning at an altitude of 685 km. Its resolution is 1 m in the panchromatic channel, with a swath width of 15 km. After the MSC is tested and its performance is measured at the instrument level, it is installed on the satellite. The image passes through the electro-optical system, the compression and storage unit, and finally the downlink subsystems. This integration procedure necessitates a functional test of all subsystems participating in the image chain. The objective of the functional test at satellite level (the Quick Look test) is to check the functionality of the image chain with a real target image. A collimated moving image is input to the EOS to simulate the operational environment, as if KOMPSAT-2 were being operated in orbit. The image chain from the EOS to the data downlink subsystem is verified through the Quick Look test. This paper explains the Quick Look test of KOMPSAT-2 and compares the taken images with the collimated input ones.


Digital Image Stabilization Based on Edge Detection and Lucas-Kanade Optical Flow (Edge Detection과 Lucas-Kanade Optical Flow 방식에 기반한 디지털 영상 안정화 기법)

  • Lee, Hye-Jung;Choi, Yun-Won;Kang, Tae-Hun;Lee, Suk-Gyu
    • The Journal of Korea Robotics Society
    • /
    • v.5 no.2
    • /
    • pp.85-92
    • /
    • 2010
  • In this paper, we propose a digital image stabilization technique using edge detection and Lucas-Kanade optical flow to minimize the motion of a shaken image. The accuracy of motion estimation based on block matching depends on the size of the search window, which leads to long computation times, so it is not applicable to real-time systems. In addition, since the size of the motion vector depends on the block size, it is difficult to estimate motion larger than the block. The proposed method extracts a trust region using edge detection and estimates the motion of critical points in the trust region with the Lucas-Kanade optical flow algorithm. The experimental results show that the proposed method effectively stabilizes shaking motion images in real time.
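The Lucas-Kanade step the abstract relies on can be sketched as a single-point least-squares solve in plain NumPy. This is a minimal illustration on a synthetic ramp image; the paper's edge-detected trust region, point selection, and the stabilization (compensating warp) are omitted, and the window size is an assumption.

```python
import numpy as np

def lucas_kanade_point(I0, I1, y, x, win=2):
    """Estimate the flow (dy, dx) at one point by solving the
    Lucas-Kanade least-squares system over a (2*win+1)^2 window."""
    Iy, Ix = np.gradient(I0.astype(float))      # spatial gradients
    It = I1.astype(float) - I0.astype(float)    # temporal gradient
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Iy[sl].ravel(), Ix[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (dy, dx)

# Synthetic test: a horizontal intensity ramp shifted one pixel right
I0 = np.tile(np.arange(32, dtype=float), (32, 1))
I1 = np.roll(I0, 1, axis=1)
dy, dx = lucas_kanade_point(I0, I1, 16, 16)
```

Averaging such per-point flow vectors over the trust region yields the global motion estimate that the stabilizer then cancels out.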

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • Journal of Ocean Engineering and Technology
    • /
    • v.36 no.1
    • /
    • pp.32-40
    • /
    • 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation depending on the wavelength of light and reflection by very small floating particles cause low contrast, blurring, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent generative adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image-processing technique of image fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of image fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality both qualitatively and quantitatively in various underwater environments, while FUnIE-GAN's performance varied with the underwater environment. Image fusion showed good performance in color correction and sharpness enhancement. These methods are expected to support the monitoring of underwater work and the autonomous operation of unmanned vehicles by improving the visibility of underwater scenes.

Restoring Turbulent Images Based on an Adaptive Feature-fusion Multi-input-Multi-output Dense U-shaped Network

  • Haiqiang Qian;Leihong Zhang;Dawei Zhang;Kaimin Wang
    • Current Optics and Photonics
    • /
    • v.8 no.3
    • /
    • pp.215-224
    • /
    • 2024
  • In medium- and long-range optical imaging systems, atmospheric turbulence blurs and distorts images, resulting in a loss of image information. An image-restoration method based on an adaptive feature-fusion multi-input-multi-output (MIMO) dense U-shaped network (Unet) is proposed to restore a single image degraded by atmospheric turbulence. The network is based on the MIMO-Unet framework and incorporates patch-embedding shallow-convolution modules, which help extract shallow image features and facilitate the multi-input dense encoding modules that follow; this combination improves the model's ability to analyze and extract features effectively. An asymmetric feature-fusion module combines encoded features at varying scales, facilitating feature reconstruction in the subsequent multi-output decoding modules for the restoration of turbulence-degraded images. Experimental results show that the adaptive feature-fusion MIMO dense U-shaped network outperforms traditional restoration methods, the CMFNet model, and the standard MIMO-Unet model in restored image quality, effectively minimizing geometric deformation and blurring.

Stellar Source Selections for Image Validation of Earth Observation Satellite

  • Yu, Ji-Woong;Park, Sang-Young;Lim, Dong-Wook;Lee, Dong-Han;Sohn, Young-Jong
    • Journal of Astronomy and Space Sciences
    • /
    • v.28 no.4
    • /
    • pp.273-284
    • /
    • 2011
  • A method of stellar source selection for validating image quality is investigated for a low-Earth-orbit optical remote sensing satellite. The image performance of the optical payload needs to be validated after launch into orbit. Stellar sources are ideal point sources that can be used to validate the quality of optical images. For image validation, stellar sources should be as bright as possible within the charge-coupled device's dynamic range. The time delay and integration (TDI) technique, which is used to observe the ground, is also applied to observe the selected stars. The relations between the incident radiance at the aperture and the V magnitude of a star are established using Gunn & Stryker's spectral star catalogue. Applying this result, an appropriate image performance index is determined, and suitable stars and areas of the sky scene are selected for the optical payload of a remote sensing satellite to observe. The results of this research can be used to validate the quality of the optical payload of a satellite in orbit.
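The radiance-to-magnitude relation in the paper is catalogue-specific, but it rests on the standard Pogson relation between magnitude difference and flux ratio, sketched here. The function name and reference magnitude are illustrative assumptions.

```python
def v_mag_to_flux_ratio(v_mag, v_ref=0.0):
    """Relative flux of a star of magnitude v_mag versus a reference
    magnitude, from the Pogson relation F/F_ref = 10**(-0.4*(m - m_ref)).
    Five magnitudes correspond to exactly a factor of 100 in flux."""
    return 10.0 ** (-0.4 * (v_mag - v_ref))

# A V = 5 star delivers 1/100 the flux of a V = 0 star
ratio = v_mag_to_flux_ratio(5.0)
```

Combined with the catalogue spectra, such a relation lets one predict the detector signal for a candidate star and check that it fills, without saturating, the CCD dynamic range.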