• Title/Summary/Keyword: Underwater color image

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast because light propagating in the underwater environment is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. The network also introduces a channel attention mechanism so that it pays more attention to channels containing important information. Detail information is protected by superimposing it on the feature information in real time. Experimental results demonstrate that the method produces results with correct colors and complete details, and outperforms existing methods on quantitative metrics.
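
As a rough illustration of the channel attention idea described above, the following is a minimal squeeze-and-excitation style block in PyTorch; the layer sizes and reduction ratio are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a squeeze-and-excitation style channel attention block.
# The reduction ratio and layer layout are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                  # (B, C) channel statistics
        w = self.fc(w).view(b, c, 1, 1)              # channel weights in [0, 1]
        return x * w                                 # reweight feature channels
```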

Visibility Enhancement of Underwater Stereo Images Using Depth Image (깊이 영상을 이용한 수중 스테레오 영상의 가시성 개선)

  • Shin, Hyoung-Chul;Kim, Sang-Hoon;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.17 no.4 / pp.684-694 / 2012
  • In the underwater environment, light is absorbed and scattered by water and floating particles, which makes underwater images suffer from color degradation and limited visibility. Physically, the amount of scattered light transmitted to the image is proportional to the distance between the camera and the object. In this paper, the proposed visibility enhancement method utilizes depth images to estimate the light transmission and the degradation factor due to the scattered light. To recover scatter-free images without unnatural artifacts, the proposed method normalizes the degradation factor based on the value of each pixel of the image. Finally, scatter-free images are obtained by removing the scattered components from the image according to the estimated transmission. The proposed method also considers the color discrepancies of underwater stereo images so that the stereo images have the same color appearance after visibility enhancement. The experimental results show that the proposed method improves color contrast by 5% to 14% depending on the experimental images.
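
The scattering removal described above can be sketched with the common underwater image formation model I = J·t + A·(1 − t), where t = exp(−β·d) is the transmission derived from the depth image; the attenuation coefficient and background-light estimate below are illustrative assumptions, not the paper's calibrated normalization.

```python
# A minimal sketch of depth-based scattering removal using the common
# formation model I = J*t + A*(1 - t), t = exp(-beta * d).
# beta and the airlight estimate are illustrative assumptions.
import numpy as np

def remove_scatter(image: np.ndarray, depth: np.ndarray,
                   beta: float = 0.1, t_min: float = 0.1) -> np.ndarray:
    """image: HxWx3 float in [0, 1]; depth: HxW distances."""
    t = np.exp(-beta * depth)                        # transmission from depth
    t = np.clip(t, t_min, 1.0)[..., None]            # avoid division blow-up
    airlight = image.reshape(-1, 3).mean(axis=0)     # crude background-light estimate
    restored = (image - airlight * (1.0 - t)) / t    # invert the formation model
    return np.clip(restored, 0.0, 1.0)
```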

Edge Enhancement for Vessel Bottom Image Considering the Color Characteristics of Underwater Images (수중영상의 색상특성을 고려한 선박하부 영상의 윤곽선 강조 기법)

  • Choi, Hyun-Jun;Yang, Won-Jae;Kim, Bu-Ki
    • Journal of the Korean Society of Marine Environment & Safety / v.23 no.7 / pp.926-932 / 2017
  • Image distortion can occur when photographing deep-sea targets with an optical camera. This problem arises because sunlight is not sufficiently transmitted through seawater and various suspended particles. In particular, color distortion occurs, causing the green and blue channels to be over-emphasized with increasing water depth, while boundary distortion also occurs due to light refraction by seawater and suspended particles. These distortions degrade the overall quality of underwater images. In this paper, we analyze underwater images of the bottoms of vessels and, based on the results, propose a technique for color correction and edge enhancement. Experimental results show that the proposed method increases edge clarity by 3.39% compared to the effective edges of the original underwater image. In addition, a quantitative evaluation and a subjective image quality evaluation were performed concurrently, confirming that object boundaries become clear with color correction. The color correction and edge enhancement method proposed in this paper can be applied in various fields requiring underwater imaging in the future.
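
A generic pipeline in the spirit of the color correction and edge enhancement described above might combine gray-world white balance with unsharp masking in OpenCV; this is a hedged sketch, not the paper's exact method, and the file name is a placeholder.

```python
# Sketch: gray-world white balance followed by unsharp masking.
import cv2
import numpy as np

def gray_world_balance(bgr: np.ndarray) -> np.ndarray:
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means (B, G, R)
    img *= means.mean() / means                      # scale channels toward gray
    return np.clip(img, 0, 255).astype(np.uint8)

def unsharp_mask(bgr: np.ndarray, sigma: float = 2.0, amount: float = 1.0) -> np.ndarray:
    blurred = cv2.GaussianBlur(bgr, (0, 0), sigma)   # low-pass copy
    return cv2.addWeighted(bgr, 1.0 + amount, blurred, -amount, 0)

hull = cv2.imread("vessel_bottom.jpg")               # placeholder input image
enhanced = unsharp_mask(gray_world_balance(hull))
```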

Analysis of Color Error and Distortion Pattern in Underwater images (수중 영상의 색상 오차 및 왜곡 패턴 분석)

  • Jeong Yeop Kim
    • Journal of Platform Technology / v.12 no.3 / pp.16-26 / 2024
  • Videos shot underwater are known to have significant color distortion. Typical causes are backscattering by floating objects and attenuation of red colors in proportion to the water depth. In this paper, we aim to analyze color correction performance and color distortion patterns for images taken underwater. Backscattering and attenuation caused by suspended matter will be discussed in a future study. In this study, based on the DeepSeeColor model proposed by Jamieson et al., we verify color correction performance and analyze the pattern of color distortion according to changes in water depth. The input images were taken in the US Virgin Islands by Jamieson et al., and out of 1,190 images, 330 images including color charts were used. Color correction performance was expressed as the angular error between the input image and the image corrected by the DeepSeeColor model. Jamieson et al. calculated the angular error using only the black and white patches of the color chart, so they could not provide an accurate analysis of overall color distortion. In this paper, the color correction error was calculated over all of the color chart patches, so the overall degree of color distortion can be characterized. Since the input images of the DeepSeeColor model cover depths from 1 to 8, color distortion patterns according to depth changes can be analyzed. In general, the deeper the depth, the greater the attenuation of red colors. Color distortion due to depth changes was modeled as a scale and an offset in order to predict distortion as depth changes: as the depth increases, the scale for color correction increases and the offset decreases. The color correction performance using the proposed method was improved by 41.5% compared to the conventional method.
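
The angular-error metric over the full color chart can be sketched as the angle between corrected and reference RGB vectors, averaged over all patches; the patch arrays below are placeholders, not the study's data.

```python
# Sketch of the per-patch angular error between corrected and reference colors.
import numpy as np

def angular_error_deg(corrected: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """corrected, reference: Nx3 RGB vectors for N chart patches."""
    c = corrected / np.linalg.norm(corrected, axis=1, keepdims=True)
    r = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    cos = np.clip(np.sum(c * r, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))                # per-patch angular error

# Mean error over all chart patches rather than black/white patches only:
# errors = angular_error_deg(patches_after_correction, chart_reference_rgb)
# print(errors.mean())
```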

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among the available sensors, a vision sensor is very useful for performing short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique to be applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
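
The template-matching step can be illustrated with OpenCV's normalized correlation-coefficient matching; the per-pixel weighting the paper proposes is not reproduced here, and the file names and threshold are illustrative.

```python
# Sketch: normalized correlation-coefficient template matching for a landmark.
import cv2

scene = cv2.imread("underwater_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("landmark_template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)       # best-matching location
if max_val > 0.7:                                    # illustrative acceptance threshold
    x, y = max_loc
    h, w = template.shape
    cv2.rectangle(scene, (x, y), (x + w, y + h), 255, 2)  # mark the detected landmark
```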

Position Detection and Gathering Swimming Control of Fish Robot Using Color Detection Algorithm (색상 검출 알고리즘을 활용한 물고기로봇의 위치인식과 군집 유영제어)

  • Akbar, Muhammad;Shin, Kyoo Jae
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.510-513 / 2016
  • Detecting an object in image processing is important, but it depends on the object itself and the environment. An object can be detected either by its shape or by its color. Color is essential for pattern recognition and computer vision. It is an attractive feature because of its simplicity, its robustness to scale changes, and its usefulness for detecting the position of an object. Generally, the perceived color of an object depends on the characteristics of the perceiving eye and brain; physically, objects can be said to have color because of the light leaving their surfaces. Here, we conducted experiments in an aquarium fish tank, where fish robots of different colors mimic the natural swimming of fish. Unfortunately, in the underwater medium, colors are modified by attenuation, making it difficult to identify the color of moving objects. We treat the fish robot as a moving object and find its coordinates at every instant in the aquarium using OpenCV color detection. In this paper, we propose to identify the position of each fish robot by its color and to use the position data to make the robots gather at one point in the fish tank through serial communication using an RF module. The approach was verified by a performance test of fish robot position detection.
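
A minimal sketch of OpenCV color-based position detection, assuming a red-colored robot and an illustrative HSV range: threshold the frame around the robot's color and take the blob centroid.

```python
# Sketch: HSV color thresholding and centroid extraction with OpenCV.
import cv2
import numpy as np

def detect_position(frame_bgr: np.ndarray,
                    lower=(0, 120, 70), upper=(10, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, dtype=np.uint8),
                       np.array(upper, dtype=np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:                                # no pixels matched the color
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid of the blob
```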

A study of Detecting Fish Robot Position Using The Define Average Color Weight Algorithm (평상 색상 구분 알고리즘을 이용한 물고기 로봇 위치 검출 연구)

  • Angani, Amaranth Varma;Lee, Ju Hyun;Shin, Kyoo Jae
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1354-1357 / 2015
  • In this paper, a fish robot is designed and developed as an aquarium underwater robot. The paper studies how to find the location of the fish robot from outside, without any dedicated sensors or internal devices on the robot itself. The fish model is designed to detect the position of the robotic fish from optical flow in Simulink through MATLAB. The paper recognizes the scene in the tank via a video device, such as a camera or camcorder, and uses an image processing technique to identify the location of the robotic fish. Here, we apply an image comparison algorithm based on the average color weight method. A position coordinate system is used to find the coordinates of the fish and thereby identify the position of the robotic fish. The approach was verified by a performance test of the designed robot.
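
In the spirit of an average-color-weight search, a minimal sketch might split each frame into blocks, compare each block's mean color to a reference robot color, and report the best block's center; the block size and reference color are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: block-wise average-color comparison to locate a colored robot.
import numpy as np

def find_robot_block(frame_rgb: np.ndarray, ref_rgb, block: int = 32):
    h, w, _ = frame_rgb.shape
    best, best_dist = None, np.inf
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            mean = frame_rgb[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            dist = np.linalg.norm(mean - np.asarray(ref_rgb, dtype=float))
            if dist < best_dist:
                best, best_dist = (x + block // 2, y + block // 2), dist
    return best                                      # (x, y) center of the best block
```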

A Study on the ultrasonic signals analysis for scan fish schools and seabed targets (어군 및 해저 목표물 탐지를 위한 초음파 신호분석에 관한 연구)

  • Kim Jae-Gab;Kim Won-Jung;Yang Hwa-Sup;Jeong Chan-Ju
    • Management & Information Systems Review / v.2 / pp.95-106 / 1998
  • A color echo-sounder displays signals reflected from underwater objects in eight colors according to signal strength. When the sea bottom is hard, for example due to rocks, the trailing edge of the reflection is a strong signal; when it is soft, for example due to mud, the trailing edge of the reflection is a weak signal. Strong signals are displayed in the range reddish brown, orange, and yellow, in descending order of intensity. Weak signals are displayed in the range blue, light blue, cyan, and green, in ascending order of intensity. Images of fish schools at or near the sea bottom vary according to the beam-angle setting. When the angle is wide, even fish not near the bottom may be recorded as being on the seabed. A narrow angle should, therefore, be selected when an accurate recording of fish at or near the sea bottom is needed. The condition of the sea bottom can be determined more easily when the beam angle is wide and the pulse width is long. Although objects could be distinguished by the type of reflected signal after conversion into digital signals classified as strong, middle, and weak, the experiments are continuing under various conditions. Furthermore, the method of measuring temperature inside the sea ought to be examined.
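
The color display scale can be illustrated by mapping normalized echo strength onto the colors named above; the sketch uses only the seven colors listed in the abstract (the full display uses eight), and the thresholds are illustrative assumptions.

```python
# Sketch: map normalized echo strength to the display colors named above.
COLORS = ["reddish brown", "orange", "yellow",       # strong signals, descending
          "green", "cyan", "light blue", "blue"]     # weak signals, descending

def echo_color(strength: float, max_strength: float = 1.0) -> str:
    """strength: normalized echo amplitude in [0, max_strength]."""
    level = min(int((1.0 - strength / max_strength) * len(COLORS)), len(COLORS) - 1)
    return COLORS[level]

print(echo_color(0.95))   # near-maximum echo -> "reddish brown"
print(echo_color(0.10))   # weak echo -> "blue"
```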

Evaluation of Robustness of Deep Learning-Based Object Detection Models for Invertebrate Grazers Detection and Monitoring (조식동물 탐지 및 모니터링을 위한 딥러닝 기반 객체 탐지 모델의 강인성 평가)

  • Suho Bak;Heung-Min Kim;Tak-Young Kim;Jae-Young Lim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.39 no.3 / pp.297-309 / 2023
  • The degradation of coastal ecosystems and fishery environments is accelerating due to the recent proliferation of invertebrate grazers. To effectively monitor and implement preventive measures against this phenomenon, the adoption of remote sensing-based monitoring technology for extensive maritime areas is imperative. In this study, we compared and analyzed the robustness of deep learning-based object detection models for detecting and monitoring invertebrate grazers from underwater videos. We constructed an image dataset targeting seven representative species of invertebrate grazers in the coastal waters of South Korea and trained deep learning-based object detection models, You Only Look Once (YOLO)v7 and YOLOv8, using this dataset. We evaluated the detection performance and speed of a total of six YOLO models (YOLOv7, YOLOv7x, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) and conducted robustness evaluations considering various image distortions that may occur during underwater filming. The evaluation results showed that the YOLOv8 models demonstrated higher detection speed (approximately 71 to 141 frames per second) relative to their number of parameters. In terms of detection performance, the YOLOv8 models (mean average precision [mAP] 0.848 to 0.882) exhibited better performance than the YOLOv7 models (mAP 0.847 to 0.850). Regarding model robustness, the YOLOv7 models were more robust to shape distortions, while the YOLOv8 models were relatively more robust to color distortions. Therefore, considering that shape distortions occur less frequently in underwater video recordings while color distortions are more frequent in coastal areas, utilizing YOLOv8 models is a valid choice for invertebrate grazer detection and monitoring in coastal waters.
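
A minimal sketch of the kind of robustness check described above, assuming the ultralytics YOLOv8 API: run a model on an original frame and on a color-distorted copy and compare detection counts. The weights, file names, and distortion are placeholders; the study's own dataset and mAP evaluation are not reproduced.

```python
# Sketch: compare YOLOv8 detections on an original and a color-distorted frame.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                           # placeholder weights

frame = cv2.imread("underwater_frame.jpg")           # placeholder input frame
distorted = frame.astype(np.float32)
distorted[..., 2] *= 0.6                             # attenuate the red channel (BGR order)
distorted = np.clip(distorted, 0, 255).astype(np.uint8)

for name, img in [("original", frame), ("color-distorted", distorted)]:
    result = model(img)[0]                           # single-image inference
    print(name, "detections:", len(result.boxes))
```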