• Title/Summary/Keyword: Underwater vision

Search results: 34

Pose Estimation of Underwater Robot using Vision System (비젼시스템을 이용한 수중로봇의 위치추정)

  • Kim, Jin-Seok;Kim, Heung-Soo;Cho, Byung-Hak;Kim, Joon-Hong;Shin, Chang-Hoon;Kim, Seok-Gon
    • Proceedings of the KIEE Conference
    • /
    • 2001.11c
    • /
    • pp.292-296
    • /
    • 2001
  • Nuclear regulations require periodic visual testing of the reactor's internal structures to guarantee the safe operation of a nuclear power plant. However, the existing visual test, which is performed manually, requires a great deal of time and labor. Moreover, test workers are exposed to a radioactive environment during the test. An underwater robot system has been studied for a more efficient and safer test. Position and pose estimation are important issues for the movement control of the robot. This paper presents an algorithm that estimates the location and pose of the underwater robot using a vision system.

Localization of AUV Using Visual Shape Information of Underwater Structures (수중 구조물 형상의 영상 정보를 이용한 수중로봇 위치인식 기법)

  • Jung, Jongdae;Choi, Suyoung;Choi, Hyun-Taek;Myung, Hyun
    • Journal of Ocean Engineering and Technology
    • /
    • v.29 no.5
    • /
    • pp.392-397
    • /
    • 2015
  • An autonomous underwater vehicle (AUV) can perform flexible operations even in complex underwater environments because of its autonomy. Localization is one of the key components of this autonomous navigation. Because the inertial navigation system of an AUV suffers from drift, observing fixed objects in an inertial reference system can enhance the localization performance. In this paper, we propose a method of AUV localization using visual measurements of underwater structures. A camera measurement model that emulates the camera’s observations of underwater structures is designed in a particle filtering framework. Then, the particle weight is updated based on the extracted visual information of the underwater structures. The proposed method is validated based on the results of experiments performed in a structured basin environment.
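
The particle-filter update described in the abstract can be sketched in miniature. The bearing-only camera model, noise level, and landmark position below are illustrative assumptions, not the paper's actual measurement model of underwater structures:

```python
import math

def update_particles(particles, landmark, measured_bearing, sigma=0.05):
    """Reweight pose particles (x, y, weight) by how well each one explains
    the camera's bearing observation of a fixed underwater structure, then
    normalize the weights (the resampling step is omitted for brevity)."""
    weighted = []
    for x, y, w in particles:
        predicted = math.atan2(landmark[1] - y, landmark[0] - x)
        likelihood = math.exp(-0.5 * ((predicted - measured_bearing) / sigma) ** 2)
        weighted.append((x, y, w * likelihood))
    total = sum(w for _, _, w in weighted) or 1.0
    return [(x, y, w / total) for x, y, w in weighted]

# Two pose hypotheses; a structure at (10, 0) is observed dead ahead (bearing 0).
particles = [(0.0, 0.0, 0.5), (0.0, 5.0, 0.5)]
updated = update_particles(particles, landmark=(10.0, 0.0), measured_bearing=0.0)
# The hypothesis consistent with the observation absorbs nearly all the weight.
```

Because the fixed structure is in an inertial reference frame, each such update counteracts the drift of the inertial navigation system.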

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min;Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.349-355
    • /
    • 2015
  • Underwater robots generally perform certain tasks better than humans under underwater constraints such as high pressure and limited light. To properly diagnose structures in an underwater environment using a remotely operated vehicle, it is important that the vehicle autonomously maintains its own position and orientation in order to avoid additional control effort. In this paper, we propose an efficient method to assist the operation of a remotely operated vehicle for the diagnosis of underwater structures under various disturbances. A conventional AHRS-based bearing estimation system does not work well because of incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines the camera and the AHRS to estimate the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. Thus, we suggest an efficient method for fusing the vision sensor and the AHRS using the amount of blur in the image as the criterion. To evaluate the amount of blur, we adopt two methods: quantifying the high-frequency components using power spectral density analysis of the 2D discrete Fourier transform of the image, and identifying the blur parameter based on cepstrum analysis. We evaluate the robustness of the visual odometry and blur estimation methods under changes in lighting and distance. The experiments verify that the blur estimation method based on cepstrum analysis shows better performance.
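
The blur-gated fusion idea can be sketched as follows. As a simplification, this toy uses gradient energy as the blur proxy rather than the paper's PSD or cepstrum measures, and the threshold and heading values are made up for illustration:

```python
def blur_score(image):
    """High-frequency energy proxy: mean squared difference between
    horizontally adjacent pixels. Sharp frames score high, blurred frames low."""
    total, count = 0.0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += (a - b) ** 2
            count += 1
    return total / count

def fuse_heading(vision_heading, ahrs_heading, image, threshold=0.01):
    """Trust the visual odometry heading only when the frame is sharp enough;
    otherwise fall back on the AHRS estimate."""
    return vision_heading if blur_score(image) >= threshold else ahrs_heading

sharp = [[0.0, 1.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]    # strong high-frequency content
blurred = [[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5, 0.5]]  # flat frame, no detail
```

The design point is that turbidity degrades only the vision channel, so the blur criterion decides per frame which sensor to believe.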

Implementation of an Underwater ROV for Detecting Foreign Objects in Water

  • Lho, Tae-Jung
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.1
    • /
    • pp.61-66
    • /
    • 2021
  • An underwater remotely operated vehicle (ROV) has been implemented that can inspect foreign substances through a CCD camera while running in water. The maximum thrust of the ROV's running thruster is 139.3 N, allowing the ROV to move forward and backward at a speed of 1.03 m/s underwater. The structural strength of the guard frame was analyzed for a collision with a wall while traveling at 1.03 m/s underwater, and the frame was found to be safe. The maximum running speed of the ROV is 1.08 m/s and the working speed is 0.2 m/s in a 5.8-m-deep wave pool, which satisfies the target performance. While the ROV traveled underwater at 0.2 m/s, the inspection camera was able to read characters 3 mm in width at a depth of 1.5 m, meaning it could sufficiently identify foreign objects in the water.

Visual SLAM using Local Bundle Optimization in Unstructured Seafloor Environment (국소 집단 최적화 기법을 적용한 비정형 해저면 환경에서의 비주얼 SLAM)

  • Hong, Seonghun;Kim, Jinwhan
    • The Journal of Korea Robotics Society
    • /
    • v.9 no.4
    • /
    • pp.197-205
    • /
    • 2014
  • As computer vision algorithms continue to advance, visual information from vision sensors has been widely used for simultaneous localization and mapping (SLAM), called visual SLAM, which utilizes relative motion information between images. This research addresses a visual SLAM framework for online localization and mapping in an unstructured seabed environment that can be applied to a low-cost unmanned underwater vehicle equipped with a single monocular camera as its primary measurement sensor. Typically, an image motion model with a predefined dimensionality can be corrupted by errors due to violation of the model assumptions, which may degrade the visual SLAM estimation. To deal with the erroneous image motion model, this study employs a local bundle optimization (LBO) scheme when a closed loop is detected. Results comparing visual SLAM estimation with and without LBO are presented to validate the effectiveness of the proposed methodology.
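
A toy 1D analogue of the loop-closure correction: real local bundle optimization jointly refines camera poses and landmarks by minimizing reprojection error, but the core idea of revising accumulated odometric drift once a loop is detected can be shown in a few lines (all values illustrative):

```python
def distribute_loop_error(poses, loop_constraint=0.0):
    """On loop detection, spread the accumulated drift evenly back along the
    trajectory instead of trusting raw odometry (drift grows with distance
    traveled, so later poses receive a larger share of the correction)."""
    drift = poses[-1] - (poses[0] + loop_constraint)
    n = len(poses) - 1
    return [p - drift * i / n for i, p in enumerate(poses)]

# The robot drove out and back; odometry says it returned to 1.2 instead of 0.
odometry = [0.0, 1.0, 2.0, 1.2]
corrected = distribute_loop_error(odometry, loop_constraint=0.0)
```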

The Development of Underwater Robotic System and Its application to Visual Inspection of Nuclear Reactor Internals (수중로봇 시스템의 개발과 원자로 압력용기 육안검사에의 적용)

  • 조병학;변승현;신창훈;양장범
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.1327-1330
    • /
    • 2004
  • An underwater robotic system has been developed and applied to the visual inspection of reactor vessel internals. The Korea Electric Power Robot for Visual Test (KeproVT) consists of an underwater robot, a vision processor-based measuring unit, a master control station, and a servo control station. Guided by the control station with the measuring unit, the robot can be controlled to execute any motion at any position in the reactor vessel with ±1 cm positioning and ±2° heading accuracy, precise enough to inspect reactor internals. The developed system emphasizes a simple and fast installation process. The robotic system was successfully deployed at the Younggwang Nuclear Unit 1 for the visual inspection of reactor internals.

Docking System for Unmanned Underwater Vehicle using Reduced Signal Strength Indicator (전자기파의 감쇄신호를 이용한 무인 잠수정의 도킹시스템 개발)

  • Lee, Gi-Hyeon;Kim, Jin-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.9
    • /
    • pp.830-836
    • /
    • 2012
  • As the importance of underwater environments increases, the need for unmanned underwater vehicles (UUVs) is growing. This paper presents the mechanism and algorithm of a UUV docking system that uses the 21-inch torpedo tubes of military submarines as a docking station. To improve the reliability of docking, the torpedo tube launches a wired ROV, and the ROV, after combining with the UUV, is then retrieved. To estimate the relative position between the ROV and the UUV, this paper proposes combining RF sensors and a vision system: the RSSI method of the RF sensors is used to estimate distance, and the optical image provides directional information.
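
The RSSI ranging step can be illustrated with the standard log-distance path-loss model; the reference power and path-loss exponent below are illustrative assumptions, not values from the paper:

```python
def distance_from_rssi(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=4.0):
    """Invert the log-distance path-loss model
        RSSI(d) = RSSI(1 m) - 10 * n * log10(d)
    to recover range from received signal strength. Electromagnetic waves
    attenuate far more steeply underwater than in air, hence a large n."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

d = distance_from_rssi(-80.0)  # a reading 40 dB below the 1 m reference
```

The steep underwater attenuation is actually helpful here: small changes in distance produce large, easily measurable changes in RSSI at docking range.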

An adaptive method of multi-scale edge detection for underwater image

  • Bo, Liu
    • Ocean Systems Engineering
    • /
    • v.6 no.3
    • /
    • pp.217-231
    • /
    • 2016
  • This paper presents a new approach for underwater image analysis using the bi-dimensional empirical mode decomposition (BEMD) technique and phase congruency information. The BEMD algorithm is fully unsupervised and is mainly applied to texture extraction and image filtering, which are widely recognized as difficult and challenging machine vision problems. Phase information is a highly stable feature of an image, and analysis methods based on phase congruency have recently received considerable attention from image researchers. The proposed method, called the EP model, inherits the advantages of both algorithms, making it suitable for processing underwater images. Moreover, a receiver operating characteristic (ROC) curve is used to address the problem that the detection threshold is strongly affected by personal experience when edge detection is performed on underwater images using the EP model. EP images are computed using combinations of the Canny detector parameters, and the corresponding binarized images are generated. The ideal EP edge feature maps are estimated using the corresponding threshold, which is optimized by ROC analysis. The experimental results show that the proposed algorithm avoids the operational error caused by manually setting the detection threshold and instead sets the image feature detection threshold adaptively. Underwater image processing examples demonstrate that the proposed method is accurate and effective.
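
ROC-based threshold selection can be sketched as picking the operating point that maximizes Youden's J = TPR − FPR, one common way to read a threshold off an ROC curve; the scores and labels below are illustrative, not the paper's data:

```python
def best_threshold(scores, labels):
    """Scan candidate detection thresholds and return the one maximizing
    Youden's J = TPR - FPR over the labeled edge/non-edge samples."""
    best_t, best_j = None, float("-inf")
    for t in sorted(set(scores)):
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and not l)
        fn = sum(1 for s, l in zip(scores, labels) if s < t and l)
        tn = sum(1 for s, l in zip(scores, labels) if s < t and not l)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        if tpr - fpr > best_j:
            best_t, best_j = t, tpr - fpr
    return best_t

# Edge-strength scores: true edge pixels score high, background scores low.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
t = best_threshold(scores, labels)
```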

Bundle Adjustment and 3D Reconstruction Method for Underwater Sonar Image (수중 영상 소나의 번들 조정과 3차원 복원을 위한 운동 추정의 모호성에 관한 연구)

  • Shin, Young-Sik;Lee, Yeong-jun;Cho, Hyun-Taek;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.2
    • /
    • pp.51-59
    • /
    • 2016
  • In this paper we present (1) an analysis of imaging sonar measurements for two-view relative pose estimation of an autonomous vehicle and (2) a bundle adjustment and 3D reconstruction method using imaging sonar. Sonar has been a popular sensor for underwater applications due to its robustness to water turbidity and limited visibility in the water medium. While vision-based motion estimation has been applied to many ground vehicles for motion estimation and 3D reconstruction, imaging sonar poses challenges for relative sensor-frame motion estimation. We focus on the fact that sonar measurements are inherently ambiguous. This paper illustrates the source of the ambiguity in sonar measurements and summarizes assumptions for sonar-based robot navigation. For validation, we synthetically generated underwater seafloors of varying complexity to analyze the error in the motion estimation.
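
The measurement ambiguity the paper analyzes stems from the imaging-sonar projection: the device records range and azimuth but discards elevation. A minimal geometric sketch (no noise model, coordinates illustrative):

```python
import math

def sonar_measurement(x, y, z):
    """Project a 3D point into an imaging-sonar measurement: range and
    azimuth survive, elevation is lost, so every point on an elevation
    arc collapses onto the same measurement."""
    rng = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    return (round(rng, 6), round(azimuth, 6))

# Two points at different heights are indistinguishable to the sonar.
p1 = sonar_measurement(3.0, 4.0, 0.0)
p2 = sonar_measurement(1.8, 2.4, 4.0)
```

This lost degree of freedom is what makes two-view relative pose estimation from sonar ill-posed without extra assumptions (e.g., a locally flat seafloor).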

Localization and Autonomous Control of PETASUS System II for Manipulation in Structured Environment (구조화된 수중 환경에서 작업을 위한 PETASUS 시스템 II의 위치 인식 및 자율 제어)

  • Han, Jonghui;Ok, Jinsung;Chung, Wan Kyun
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.1
    • /
    • pp.37-42
    • /
    • 2013
  • In this paper, a localization algorithm and an autonomous controller are proposed for PETASUS system II, an underwater vehicle-manipulator system. To estimate its position and to identify manipulation targets in a structured environment, a multi-rate extended Kalman filter is developed that uses map information together with data from inertial, sonar, and vision sensors. In addition, a three-layered control structure is proposed as the controller for autonomy. With this controller, PETASUS system II is able to generate waypoints and make decisions about its own behaviors. Experimental results are provided to verify the proposed algorithms.
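
The multi-rate aspect of such a filter can be illustrated with a simple measurement schedule; the sensor rates below are illustrative assumptions, and a real EKF would run its prediction step every tick and apply a Kalman correction for each sensor listed:

```python
def measurement_schedule(steps, periods):
    """For a multi-rate filter, list which sensors deliver a measurement
    (and hence trigger a correction step) at each time step."""
    return [[name for name, p in periods.items() if t % p == 0]
            for t in range(1, steps + 1)]

# Illustrative rates: IMU every tick, vision every 5 ticks, sonar every 10.
schedule = measurement_schedule(10, {"imu": 1, "vision": 5, "sonar": 10})
```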