Title/Summary/Keyword: Underwater optical image

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi; Seo, Jung-Min; Kim, Soo Mee
    • Journal of Ocean Engineering and Technology / v.36 no.1 / pp.32-40 / 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation that varies with the wavelength of light and scattering by very small floating particles cause low contrast, blurring, and color degradation in underwater images. We constructed an image dataset of the Korean sea and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent generative adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates the enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality both qualitatively and quantitatively in various underwater environments, whereas the performance of FUnIE-GAN varied with the underwater environment. Image Fusion performed well in terms of color correction and sharpness enhancement. These methods are expected to be useful for monitoring underwater work and for the autonomous operation of unmanned vehicles by providing a clearer view of underwater situations.
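
For orientation, UCIQE-style scores such as those quoted above are weighted sums of simple image statistics. The following is a minimal, illustrative Python sketch of such a score; the weighting coefficients are the commonly published ones, and the helper is an assumption for illustration, not the evaluation code used in the paper.

```python
import cv2
import numpy as np

def uciqe_score(bgr, c1=0.4680, c2=0.2745, c3=0.2576):
    """Illustrative UCIQE-style score: weighted sum of chroma standard deviation,
    luminance contrast, and mean saturation (coefficients assumed)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    L, a, b = lab[..., 0], lab[..., 1] - 128, lab[..., 2] - 128
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                      # colorfulness term
    L_sorted = np.sort(L.ravel()) / 255.0
    n = max(1, int(0.01 * L_sorted.size))       # top/bottom 1% of luminance
    con_l = L_sorted[-n:].mean() - L_sorted[:n].mean()
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mu_s = hsv[..., 1].mean() / 255.0           # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s

# Example (hypothetical file name): compare an enhanced frame against its raw counterpart
# print(uciqe_score(cv2.imread("underwater_frame.png")))
```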

Underwater Optical Image Data Transmission in the Presence of Turbulence and Attenuation

  • Ramavath Prasad Naik; Maaz Salman; Wan-Young Chung
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.1-14 / 2023
  • Underwater images carry information that is useful in the fields of aquaculture, underwater military security, navigation, transportation, and so on. In this research, we transmitted an underwater image through various underwater media in the presence of underwater turbulence and beam attenuation effects using a high-speed visible-light optical carrier signal. The optical beam undergoes scintillation because of the turbulence and attenuation effects; therefore, distorted images were observed at the receiver end. To understand the behavior of the communication media, we obtained the bit error rate (BER) performance of the system with respect to the average signal-to-noise ratio (SNR). The structural similarity index (SSI) and peak SNR (PSNR) metrics of the received images were also evaluated. Based on the received images, we employed suitable nonlinear filters to recover the distorted images and enhance them further. The BER, SSI, and PSNR metrics of the specific nonlinear filters were also evaluated and compared with the unfiltered metrics. These metrics were evaluated using on-off keying and binary phase-shift keying modulation for 50-m and 100-m links, with beam attenuation corresponding to pure seawater, clear ocean water, and coastal ocean water.
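
For reference, the standard closed-form AWGN bit-error-rate expressions for the two modulation formats mentioned, together with a PSNR helper, can be sketched as below. These textbook formulas (with SNR interpreted as Eb/N0) do not model the turbulence-induced scintillation studied in the paper, so they serve only as a baseline.

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_bpsk(snr_db):
    """Theoretical AWGN bit error rate for BPSK: Q(sqrt(2*SNR))."""
    snr = 10 ** (snr_db / 10.0)
    return qfunc(np.sqrt(2.0 * snr))

def ber_ook(snr_db):
    """Theoretical AWGN bit error rate for on-off keying: Q(sqrt(SNR))."""
    snr = 10 ** (snr_db / 10.0)
    return qfunc(np.sqrt(snr))

def psnr(reference, received, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - received.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

for snr_db in (0, 5, 10, 15):
    print(f"{snr_db} dB: OOK={ber_ook(snr_db):.3e}, BPSK={ber_bpsk(snr_db):.3e}")
```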

Study of Marker Detection Performance on Deep Learning via Distortion and Rotation Augmentation of Training Data on Underwater Sonar Image (수중 소나 영상 학습 데이터의 왜곡 및 회전 Augmentation을 통한 딥러닝 기반의 마커 검출 성능에 관한 연구)

  • Lee, Eon-Ho; Lee, Yeongjun; Choi, Jinwoo; Lee, Sejin
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.14-21 / 2019
  • In ground environments, mobile robot research uses sensors such as GPS and optical cameras to localize surrounding landmarks and estimate the position of the robot. However, the underwater environment restricts the use of sensors such as optical cameras and GPS. Also, unlike the ground environment, it is difficult to observe landmarks continuously for position estimation. Therefore, in underwater research, artificial markers are installed to provide strong and lasting landmarks. When artificial markers are acquired with an underwater sonar sensor, various types of noise arise in the sonar image. This noise is one of the factors that reduces object detection performance. This paper aims to improve object detection performance through distortion and rotation augmentation of the training data. Objects are detected using a Faster R-CNN.
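
A minimal sketch of the kind of rotation and distortion augmentation described above, written with OpenCV; the rotation angle and corner-jitter ranges are assumptions for illustration, not the settings used in the paper.

```python
import cv2
import numpy as np

def augment_sonar_image(img, max_angle=15.0, max_shift=0.05, rng=None):
    """Return a rotated and mildly perspective-distorted copy of a sonar image.
    Angle and shift ranges are illustrative, not the paper's values."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]

    # Random rotation about the image center
    angle = rng.uniform(-max_angle, max_angle)
    M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, M_rot, (w, h), borderMode=cv2.BORDER_REFLECT)

    # Random perspective distortion: jitter the four image corners slightly
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * [w, h]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(src + jitter)
    M_persp = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(rotated, M_persp, (w, h),
                               borderMode=cv2.BORDER_REFLECT)
```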

Single Image-based Enhancement Techniques for Underwater Optical Imaging

  • Kim, Do Gyun; Kim, Soo Mee
    • Journal of Ocean Engineering and Technology / v.34 no.6 / pp.442-453 / 2020
  • Underwater color images suffer from low visibility and color cast caused by light attenuation in water and by floating particles. This study applied single-image enhancement techniques to improve the quality of underwater images and compared their performance on real underwater images taken in Korean waters. Dark channel prior (DCP), gradient transform, image fusion, and generative adversarial networks (GANs), such as CycleGAN and underwater GAN (UGAN), were considered for single-image enhancement. Their performance was evaluated in terms of the underwater image quality measure, underwater color image quality evaluation, gray-world assumption, and blur metric. The DCP saturated the underwater images toward a specific greenish or bluish color tone and reduced the brightness of the background signal. The gradient transform method with two transmission maps was sensitive to the light source and highlighted the regions exposed to light. Although image fusion enabled reasonable color correction, object details were lost in the final fusion step. CycleGAN corrected the overall color tone relatively well but generated artifacts in the background. UGAN showed good visual quality and obtained the highest scores on all figures of merit (FOMs) by compensating for color and visibility better than the other single-image enhancement methods.
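
For reference, a bare-bones dark channel prior restoration (the first technique listed) looks roughly like the sketch below; the patch size, omega, and background-light estimate are illustrative choices rather than the paper's configuration.

```python
import cv2
import numpy as np

def dark_channel_restore(bgr, patch=15, omega=0.95, t_min=0.1):
    """Rough dehazing-style restoration via the dark channel prior."""
    img = bgr.astype(np.float64) / 255.0

    # Dark channel: per-pixel channel minimum eroded over a local patch
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)

    # Background (atmospheric) light: mean color of the brightest 0.1% dark-channel pixels
    n = max(1, int(0.001 * dark.size))
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)

    # Transmission estimate and scene radiance recovery
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (img - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```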

Underwater Docking of a Visual Servoing Autonomous Underwater Vehicle Using a Single Camera (단일 카메라를 이용한 비쥬얼 서보 자율무인잠수정의 수중 도킹)

  • 이판묵; 전봉환; 홍영화; 오준호; 김시문; 이계홍
    • Proceedings of the Korean Society of Precision Engineering Conference / 2003.06a / pp.316-320 / 2003
  • This paper introduces an autonomous underwater vehicle (AUV) model, ASUM, equipped with a visual servo control system, a camera, and motion sensors for docking into an underwater station. To realize a visual servoing AUV, this paper implemented a visual servo control system designed with an augmented state equation composed of the optical flow model of a camera and the equation of the AUV's motion. The system design and the hardware configuration of ASUM are presented in this paper. ASUM recognizes the target position by processing the captured image of the lights installed around the end of the cone-type entrance of the duct. Experiments had not yet been conducted at the time of writing; the authors will present the results of the AUV docking test in future work.
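
A minimal sketch of the kind of light-based target detection described (thresholding the brightest pixels and taking blob centroids); the threshold and area values are assumptions for illustration, not ASUM's actual image-processing pipeline.

```python
import cv2

def detect_dock_lights(gray, thresh=220, min_area=10):
    """Return (x, y) centroids of bright blobs, e.g. dock-entrance lights.
    Threshold and minimum blob area are illustrative values."""
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```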

Study on Distortion Compensation of Underwater Archaeological Images Acquired through a Fisheye Lens and Practical Suggestions for Underwater Photography - A Case of Taean Mado Shipwreck No. 1 and No. 2 -

  • Jung, Young-Hwa; Kim, Gyuho; Yoo, Woo Sik
    • Journal of Conservation Science / v.37 no.4 / pp.312-321 / 2021
  • Underwater archaeology relies heavily on photography and video recording during surveys and excavations, as do ordinary archaeological studies on land. All underwater images suffer from poor image quality and distortion due to poor visibility, low contrast, and blur, caused by the difference in refractive index between water and air, the properties of the selected lenses, and the shapes of the viewports. In the Yellow Sea (between mainland China and the Korean peninsula), underwater visibility is far less than 1 m, typically in the range of 30 cm to 50 cm even on a clear day, due to very high turbidity. For photographing 1 m x 1 m grids underwater, a very wide-angle (180°) fisheye lens with an 8 mm focal length is intentionally used despite severe unwanted barrel-shaped image distortion, even with a dome-port camera housing. It is very difficult to map wide underwater archaeological excavation sites by combining severely distorted images, so practical compensation methods for distorted underwater images acquired through the fisheye lens are strongly desired. In this study, the source of image distortion in underwater photography is investigated. We identify the source of the distortion as a mismatch, in optical axis and focal points, between the dome-port housing and the fisheye lens. A practical image distortion compensation method, using customized image processing software, was explored and verified on archived underwater excavation images for effectiveness in underwater archaeological applications. To minimize the area left unusable by severe distortion after compensation, practical underwater photography guidelines are suggested.
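
OpenCV's fisheye model offers one practical route to the barrel-distortion compensation discussed; a minimal sketch follows. The camera matrix K and distortion coefficients D must come from calibrating the actual lens and dome-port combination, so the values shown here are placeholders only.

```python
import cv2
import numpy as np

def undistort_fisheye(img, K, D, balance=0.0):
    """Remap a fisheye image to a rectilinear view using calibrated intrinsics."""
    h, w = img.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

# Placeholder intrinsics; real values come from underwater checkerboard calibration
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # k1..k4 of the fisheye model
```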

CNN-based Opti-Acoustic Transformation for Underwater Feature Matching (수중에서의 특징점 매칭을 위한 CNN기반 Opti-Acoustic변환)

  • Jang, Hyesu; Lee, Yeongjun; Kim, Giseop; Kim, Ayoung
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.1-7 / 2020
  • In this paper, we introduce a methodology that uses a deep learning-based front-end to enhance underwater feature matching. Both optical cameras and sonar are widely used sensors in underwater research; however, each has its own weaknesses, such as lighting conditions and turbidity for the optical camera and noise for sonar. To overcome these problems, we propose an opti-acoustic transformation method. Since feature detection in sonar images is challenging, we convert the sonar image to an optical-style image. While maintaining the main content of the sonar image, a CNN-based style transfer method changes the style of the image in a way that facilitates feature detection. Finally, we verify our results using cosine similarity comparison and feature matching against the original optical image.
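
The cosine-similarity check mentioned at the end can be sketched as below; the random vectors stand in for whatever feature descriptors are compared (the paper compares features of the transformed sonar image and the optical image), so this is illustrative only.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two flattened feature vectors."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# Example with two hypothetical feature vectors (e.g., CNN activations of the
# opti-acoustic transformed image and the corresponding optical image)
f_optical = np.random.rand(256)
f_transformed = np.random.rand(256)
print(cosine_similarity(f_optical, f_transformed))
```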

Study on Underwater Optical Communication System for Video Transmission (영상통신용 수중광통신 시스템 연구)

  • Son, Hyun-Joong; Kang, Jin-Il; Nhat, Thieu Quang Minh; Kim, Seo Kang; Choi, Hyeung-Sik
    • Journal of Ocean Engineering and Technology / v.32 no.2 / pp.143-150 / 2018
  • In this study, we designed and developed an underwater LED communication system composed of an LED and a photosensor. In addition, we experimented with video data transmission in a water tank. Two communication modules were installed in a 3 m water tank, and the image data transmission test was successfully performed at a rate of 20 frames per second (FPS), an image resolution of 480 × 272, and a data communication speed of 4 Mbps.
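
The reported figures imply that the raw video stream must be compressed to fit the 4 Mbps link. A quick back-of-the-envelope check (assuming, for illustration, 8-bit grayscale and 24-bit RGB pixels, neither of which the abstract specifies):

```python
# Raw bandwidth needed for 480 x 272 video at 20 FPS versus the 4 Mbps link
width, height, fps = 480, 272, 20
link_mbps = 4.0

for name, bits_per_pixel in (("8-bit grayscale", 8), ("24-bit RGB", 24)):
    raw_mbps = width * height * bits_per_pixel * fps / 1e6   # raw bit rate in Mbps
    ratio = raw_mbps / link_mbps
    print(f"{name}: {raw_mbps:.1f} Mbps raw -> ~{ratio:.0f}x compression needed")
```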

A Visual Servo Algorithm for Underwater Docking of an Autonomous Underwater Vehicle (AUV) (자율무인잠수정의 수중 도킹을 위한 비쥬얼 서보 제어 알고리즘)

  • 이판묵; 전봉환; 이종무
    • Journal of Ocean Engineering and Technology / v.17 no.1 / pp.1-7 / 2003
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels used to investigate sea environments in oceanographic studies. Docking systems are required to increase the capability of AUVs, to recharge their batteries, and to transmit data in real time for specific underwater work, such as repeated jobs at the seabed. This paper presents a visual servo control system used to dock an AUV into an underwater station. A camera mounted at the nose center of the AUV is used to guide the AUV into the dock. To create the visual servo control system, this paper derives an optical flow model of the camera, in which the projected motions on the image plane are described in terms of the rotational and translational velocities of the AUV. This paper combines the optical flow equation of the camera with the AUV's equation of motion and derives a state equation for the visual servo AUV. Further, this paper proposes a discrete-time MIMO controller that minimizes a cost function. The control inputs of the AUV are generated automatically from the projected target position on the CCD plane of the camera and from the AUV's motion. To demonstrate the effectiveness of the modeling and the control law, simulations of docking the AUV to a target station are performed with the 6-DOF nonlinear equations of the REMUS AUV and a CCD camera.
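
For orientation, the classical point-feature optical flow relation (interaction matrix) used in image-based visual servoing can be sketched as below; the simple proportional law is a generic stand-in for illustration, not the discrete-time MIMO cost-minimizing controller derived in the paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix L such that [x_dot, y_dot] = L @ [v; w],
    with (x, y) normalized image coordinates and Z the feature depth."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(features, goals, depths, gain=0.5):
    """Generic IBVS law: stack per-feature interaction matrices and command
    a camera/vehicle velocity v = -gain * pinv(L) @ error (illustrative only)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```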

Off-Site Distortion and Color Compensation of Underwater Archaeological Images Photographed in the Very Turbid Yellow Sea

  • Jung, Young-Hwa; Kim, Gyuho; Yoo, Woo Sik
    • Journal of Conservation Science / v.38 no.1 / pp.14-32 / 2022
  • Underwater photographing and image recording are essential for pre-excavation surveys and during excavation in underwater archaeology. Unlike photographs taken on land, all underwater images suffer various quality degradations such as shape distortion, color shift, blur, low contrast, and high noise levels. The outcome is very often heavily dependent on the photographic equipment and the photographer. Excavation schedules, weather conditions, and water conditions can put additional burdens on divers, and the usable images are very limited relative to the effort. In underwater archaeological studies in very turbid water, such as the Yellow Sea (between mainland China and the Korean peninsula), underwater photographing is very challenging. In this study, off-site image distortion and color compensation techniques using image processing/analysis software are investigated as an alternative image quality enhancement method. As sample images, photographs taken during the excavation of the 800-year-old Taean Mado Shipwrecks in the Yellow Sea in 2008-2010 were mainly used. Significant improvement in distortion and color compensation of archived images was obtained by simple post-processing with image processing/analysis software (PicMan) customized for the given viewports, lenses, and cameras, with and without optical axis offsets. Post-processing is found to be very effective for distortion and color compensation of both recent and archived images from various photographing equipment models and configurations. The merits and demerits of in-situ distortion- and color-compensated photographing with sophisticated equipment versus conventional photographing equipment, which requires post-processing, are compared.
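
A minimal sketch of the kind of off-site color compensation discussed, using a gray-world white balance; this is a generic technique and not the actual algorithm of the PicMan software.

```python
import cv2
import numpy as np

def gray_world_balance(bgr):
    """Rescale each color channel so its mean matches the global mean
    (gray-world assumption)."""
    img = bgr.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Example (hypothetical file name):
# corrected = gray_world_balance(cv2.imread("mado_shipwreck_frame.jpg"))
```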