• Title/Abstract/Keyword: Underwater object detection

Search results: 29 items

수중 로봇을 위한 다중 템플릿 및 가중치 상관 계수 기반의 물체 인식 및 추종 (Multiple Templates and Weighted Correlation Coefficient-based Object Detection and Tracking for Underwater Robots)

  • 김동훈;이동화;명현;최현택
    • 로봇학회논문지 / Vol. 7, No. 2 / pp.142-149 / 2012
  • Cameras suffer from poor visibility in underwater environments due to limited light sources and noise in the medium. However, their usefulness at close range has been demonstrated in many studies, especially for navigation. In this paper, vision-based object detection and tracking techniques using artificial objects are studied for underwater robots. We employ template matching and mean-shift algorithms for object detection and tracking. We also propose a weighted correlation coefficient with adaptive-threshold-based and color-region-aided approaches to enhance detection performance under various illumination conditions. Color information is incorporated into the template-matched area, and the template's features are used to compute correlation coefficients robustly. Objects are then recognized with a multi-template matching approach. Finally, water-basin experiments were conducted with the underwater robot platform yShark, built by KORDI, to demonstrate the performance of the proposed techniques.
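  • As a rough illustration of the weighted correlation idea described above, the following Python sketch computes a correlation coefficient in which each pixel contributes according to a weight mask (here assumed to come from a color-based segmentation of the template region); it is a minimal example under those assumptions, not the authors' implementation.
```python
# Minimal sketch: a weighted correlation coefficient between a grayscale
# template and an image patch of the same size.
import numpy as np

def weighted_correlation(patch, template, weights):
    """Pearson-style correlation where each pixel contributes by its weight."""
    w = weights / weights.sum()
    mu_p = (w * patch).sum()
    mu_t = (w * template).sum()
    dp, dt = patch - mu_p, template - mu_t
    cov = (w * dp * dt).sum()
    var_p = (w * dp * dp).sum()
    var_t = (w * dt * dt).sum()
    return cov / (np.sqrt(var_p * var_t) + 1e-12)

# toy usage: identical patches give a coefficient close to 1
tpl = np.random.rand(32, 32)
print(weighted_correlation(tpl, tpl, np.ones_like(tpl)))
```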

수중 소나 영상 학습 데이터의 왜곡 및 회전 Augmentation을 통한 딥러닝 기반의 마커 검출 성능에 관한 연구 (Study of Marker Detection Performance on Deep Learning via Distortion and Rotation Augmentation of Training Data on Underwater Sonar Image)

  • 이언호;이영준;최진우;이세진
    • 로봇학회논문지 / Vol. 14, No. 1 / pp.14-21 / 2019
  • In ground environments, mobile robot research uses sensors such as GPS and optical cameras to localize surrounding landmarks and to estimate the robot's position. An underwater environment, however, restricts the use of sensors such as optical cameras and GPS, and unlike on the ground, it is difficult to observe landmarks continuously for position estimation. In underwater research, artificial markers are therefore installed to provide strong, lasting landmarks. When artificial markers are imaged with an underwater sonar sensor, several types of noise arise in the sonar image, and this noise is one of the factors that degrades object detection performance. This paper aims to improve detection performance through distortion and rotation augmentation of the training data; the markers are detected with a Faster R-CNN.
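  • A minimal sketch of the kind of rotation and distortion augmentation described above, using OpenCV; the angle range and corner jitter are illustrative assumptions, not the paper's settings.
```python
# Minimal sketch: rotation plus perspective-distortion augmentation of a
# sonar training image.
import cv2
import numpy as np

def augment(img, max_angle=30, max_shift=0.05):
    h, w = img.shape[:2]
    # random rotation about the image center
    angle = np.random.uniform(-max_angle, max_angle)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    # random perspective distortion by jittering the four corners
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = np.random.uniform(-max_shift, max_shift, (4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    P = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(rotated, P, (w, h), borderMode=cv2.BORDER_REFLECT)

sonar = np.random.randint(0, 255, (256, 256), dtype=np.uint8)  # stand-in image
aug = augment(sonar)
```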

Sonar-based yaw estimation of target object using shape prediction on viewing angle variation with neural network

  • Sung, Minsung;Yu, Son-Cheol
    • Ocean Systems Engineering / Vol. 10, No. 4 / pp.435-449 / 2020
  • This paper proposes a method to estimate the yaw angle of an underwater target object using a sonar image. A simulator that models the imaging mechanism of a sonar sensor, together with a generative adversarial network for style transfer, generates realistic template images of the target object by predicting its shape at each viewing angle. The target object's yaw angle can then be estimated by comparing the template images with the shape observed in real sonar images. We verified the proposed method in water tank experiments and also applied it to an AUV in field experiments. The method, which provides bearing information between an underwater object and the sonar sensor, can be applied to algorithms such as underwater localization or multi-view-based underwater object recognition.
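  • The matching step can be pictured as scoring the observed sonar shape against one predicted template per candidate yaw and taking the best-scoring angle; the sketch below uses plain normalized cross-correlation as a simplifying assumption and is not the authors' code.
```python
# Minimal sketch: pick the yaw angle whose predicted template best matches
# the observed shape.
import numpy as np

def estimate_yaw(observed, templates_by_angle):
    """templates_by_angle: dict {yaw_deg: 2D array of same shape as observed}."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return (a * b).mean()
    scores = {ang: ncc(observed, tpl) for ang, tpl in templates_by_angle.items()}
    return max(scores, key=scores.get)

# toy usage with random stand-in templates every 10 degrees
templates = {ang: np.random.rand(64, 64) for ang in range(0, 360, 10)}
print(estimate_yaw(templates[40], templates))  # -> 40
```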

강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool (Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection)

  • 전명환;이영준;신영식;장혜수;여태경;김아영
    • 로봇학회논문지 / Vol. 14, No. 2 / pp.139-149 / 2019
  • In this paper, we present an automated annotation tool and a synthetic dataset generated from 3D CAD models for deep learning-based object detection. To serve as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of each object are needed, so we propose automated annotation together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the dataset, we use Mask R-CNN, a state-of-the-art deep learning-based object detection model. For the experiments, we built an environment reflecting an actual underwater setting. We show that a detection model trained on our dataset achieves accurate results and is robust in the underwater environment, and we conclude that the synthetic dataset is suitable for training deep learning models for underwater use.
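  • As an illustration of how bounding-box and contour annotations can be derived automatically from a rendered object mask, the following OpenCV sketch is provided; it is not the paper's tool, and the mask and class name are stand-ins.
```python
# Minimal sketch: deriving bounding-box and contour annotations from a
# rendered binary object mask.
import cv2
import numpy as np

def annotate_from_mask(mask, class_name):
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    annotations = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        annotations.append({
            "class": class_name,
            "bbox": [int(x), int(y), int(w), int(h)],
            "contour": c.squeeze(1).tolist(),   # polygon vertices
        })
    return annotations

mask = np.zeros((128, 128), np.uint8)
cv2.circle(mask, (64, 64), 20, 255, -1)        # stand-in rendered object
print(annotate_from_mask(mask, "dummy_object")[0]["bbox"])
```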

소나영상을 이용한 수중 물체의 식별 (Identification of Underwater Objects using Sonar Image)

  • 강현철
    • 전자공학회논문지 / Vol. 53, No. 3 / pp.91-98 / 2016
  • Detection and classification of underwater objects in sonar images is a challenging task. This paper proposes a system that identifies objects on the seabed using sonar images and image processing techniques. The identification process consists of two stages: detection of candidate object regions and object identification. Candidate regions are detected using an image registration technique: common feature points between a previously acquired reference background image and the currently scanned image are detected and matched, and the difference image between the two images is then computed to locate candidates. Detected objects are classified into the most similar pattern in a database, using eigenvectors and eigenvalues as features. The proposed underwater object identification system is expected to be useful in applications such as securing the shortest safe passage (Q route).
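  • One plausible realization of the registration-and-difference step described above is sketched below with OpenCV ORB features and a RANSAC homography on grayscale images; the actual feature detector, matcher, and thresholds used in the paper may differ.
```python
# Minimal sketch: register the current scan to a reference background image
# and take their difference to obtain candidate object regions.
import cv2
import numpy as np

def candidate_regions(reference, current, ratio=0.75):
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(reference, None)
    k2, d2 = orb.detectAndCompute(current, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(d2, d1, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(current, H, reference.shape[::-1])
    diff = cv2.absdiff(reference, warped)
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary  # non-zero pixels mark candidate object regions
```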

사이드 스캔 소나 영상에서 수중물체 자동 탐지를 위한 컨볼루션 신경망 기법 적용 (The application of convolutional neural networks for automatic detection of underwater object in side scan sonar images)

  • 김정문;최지웅;권혁종;오래근;손수욱
    • 한국음향학회지 / Vol. 37, No. 2 / pp.118-128 / 2018
  • This paper addresses a method for detecting underwater objects by training a convolutional neural network on side scan sonar images. Supplementing the conventional practice of analyzing side scan sonar images manually with a convolutional neural network algorithm can increase the efficiency of the analysis. The side scan sonar image data used in this study were released by the U.S. Naval Surface Warfare Center and consist of four types of synthetic underwater objects. The convolutional neural network is based on Faster R-CNN (Region-based Convolutional Neural Networks), which learns from regions of interest, and the details of the network were configured to suit the available data. The results were compared using precision-recall curves, and the applicability of convolutional neural networks to underwater object detection was examined by analyzing how changes to the regions of interest specified in the sonar image data affect detection performance.
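  • For reference, a minimal torchvision setup of a Faster R-CNN whose box head is replaced for four target classes plus background is sketched below; it is an assumed configuration for illustration, not the network described in the paper.
```python
# Minimal sketch: a torchvision Faster R-CNN adapted to 4 target classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 1 + 4  # background + four synthetic target types
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model.eval()  # ready for fine-tuning on side scan sonar chips or for inference
```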

구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식 (Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment)

  • 김동훈;이동화;명현;최현택
    • 제어로봇시스템학회논문지 / Vol. 19, No. 8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks despite harsh underwater conditions, including low visibility, noise, and large areas of featureless topography. To overcome these problems and utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique and apply it to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve the conventional approach. In the localization step, dead-reckoning information and landmark detection results are used for the prediction and update phases of MCL and EKF-SLAM, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
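  • A generic MCL update of the kind described above can be sketched as follows: particles are propagated by dead-reckoning and re-weighted by the likelihood of the range to a detected landmark. The motion and measurement models here are simplified assumptions, not the paper's.
```python
# Minimal sketch: one prediction/update/resample step of a particle filter.
import numpy as np

def mcl_step(particles, weights, odom, landmark_xy, meas_range, sigma=0.3):
    # prediction: apply dead-reckoning (dx, dy, dtheta) with motion noise
    noise = np.random.normal(0, 0.05, particles.shape)
    particles = particles + np.asarray(odom) + noise
    # update: likelihood of the measured range to the detected landmark
    expected = np.hypot(landmark_xy[0] - particles[:, 0],
                        landmark_xy[1] - particles[:, 1])
    weights = weights * np.exp(-0.5 * ((meas_range - expected) / sigma) ** 2)
    weights = weights / weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / (weights ** 2).sum() < 0.5 * len(weights):
        idx = np.random.choice(len(weights), len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```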

이동하는 수중 물체 탐지를 위한 축소모형실험 시스템 개선 (Enhancement of Physical Modeling System for Underwater Moving Object Detection)

  • 김예솔;이효선;조성호;정현기
    • 지구물리와물리탐사 / Vol. 22, No. 2 / pp.72-79 / 2019
  • Recently, a method for detecting underwater objects using the precision measurement technology of electrical resistivity surveys was proposed, and the need for research on advanced data processing that can cope with changing marine environments has been raised. This study presents an improved physical (scale) modeling system for efficient experiments and verification, along with results of its operation. The system has the following features: 1) all processes, including the simultaneous acquisition and analysis of real-time experiment video and measurement data, run at a rate of 5 Hz; 2) real-time acquisition and processing of data from two detection lines makes it possible to determine the direction in which an underwater object is moving; 3) experiments can be repeated using stored data, allowing the acquired data to be re-analyzed from multiple perspectives; 4) the movement of the underwater object and the data from the two detection lines can be viewed simultaneously and intuitively on a monitoring screen. Experiments with the improved system confirmed that all subsystems operate normally and that efficient experimentation is possible.

실시간 순환 신경망 기반의 멀티빔 소나 이미지를 이용한 수중 물체의 추적에 관한 연구 (Study on Underwater Object Tracking Based on Real-Time Recurrent Regression Networks Using Multi-beam Sonar Images)

  • 이언호;이영준;최진우;이세진
    • 로봇학회논문지 / Vol. 15, No. 1 / pp.8-15 / 2020
  • This research is a case study of underwater object tracking based on Real-time Recurrent Regression Networks (Re3). Re3 performs generic object tracking, which makes it well suited to unclear underwater sonar images. Because it is a tracking model rather than a detection model, it also avoids the computational load that can become a bottleneck when object detection models are used for tracking. The model is highly intuitive as well, so tracking continues reliably even if the tracked object temporarily becomes partially occluded or faded. Four datasets of multi-beam sonar images were used: (a) a dummy object floating at the testbed; (b) a dummy object settled on the seabed; (c) a tire settled at the bottom of the testbed; and (d) multiple objects settled at the bottom of the testbed. For this study, experiments were conducted to obtain underwater sonar images from the sea and an underwater testbed, and the method's ability to track objects robustly in noisy underwater sonar images was validated.
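  • The track-by-initialization workflow described above can be sketched as below, using an off-the-shelf OpenCV tracker as a stand-in for Re3 (the Re3 network itself is not reproduced here): the tracker is given one starting box and then updated frame by frame, with no per-frame detector.
```python
# Minimal sketch: a generic tracking loop over a sonar image sequence.
import cv2

def track_sonar_sequence(frames, init_bbox):
    tracker = cv2.TrackerCSRT_create()   # stand-in generic tracker, not Re3
    tracker.init(frames[0], init_bbox)   # init_bbox = (x, y, w, h)
    boxes = [init_bbox]
    for frame in frames[1:]:
        ok, bbox = tracker.update(frame)
        boxes.append(bbox if ok else None)  # None marks a lost track
    return boxes
```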

수중영상을 이용한 저서성 해양무척추동물의 실시간 객체 탐지: YOLO 모델과 Transformer 모델의 비교평가 (Realtime Detection of Benthic Marine Invertebrates from Underwater Images: A Comparison between YOLO and Transformer Models)

  • 박강현;박수호;장선웅;공신우;곽지우;이양원
    • 대한원격탐사학회지 / Vol. 39, No. 5_3 / pp.909-919 / 2023
  • Benthic marine invertebrates, the invertebrates living on the ocean floor, are an essential component of the marine ecosystem, but excessive reproduction of invertebrate grazers or harmful ("pirate") organisms can damage the coastal fishery ecosystem. In this study, we compared and evaluated You Only Look Once version 7 (YOLOv7), the most widely used deep learning model for real-time object detection, and the detection transformer (DETR), a transformer-based model, using underwater images of benthic marine invertebrates from the coasts of South Korea. YOLOv7 showed a mean average precision at an IoU threshold of 0.5 (mAP@0.5) of 0.899, and DETR showed an mAP@0.5 of 0.862, which implies that YOLOv7 is more appropriate for detecting objects of various sizes; it generates bounding boxes at multiple scales, which helps detect small objects. Both models ran at more than 30 frames per second (FPS), so real-time object detection from images provided by divers and underwater drones is expected to be possible. The proposed method can be used to prevent and restore damage to coastal fishery ecosystems, for example through the removal of invertebrate grazers and the creation of sea forests to counter ocean desertification.
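  • As a simple illustration of the real-time (30+ FPS) criterion mentioned above, the following sketch times a detector over a batch of frames; dummy_detector is a placeholder, not an actual YOLOv7 or DETR model.
```python
# Minimal sketch: measure sustained inference throughput in frames per second.
import time
import numpy as np

def measure_fps(detector, frames, warmup=5):
    for f in frames[:warmup]:          # warm-up runs are excluded from timing
        detector(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        detector(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

frames = [np.random.rand(640, 640, 3) for _ in range(55)]
dummy_detector = lambda img: []        # placeholder for a real model call
print(f"{measure_fps(dummy_detector, frames):.1f} FPS")
```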