• Title/Abstract/Keyword: Image based localization

258 search results (processing time 0.024 s)

Reliable Autonomous Reconnaissance System for a Tracked Robot in Multi-floor Indoor Environments with Stairs (다층 실내 환경에서 계단 극복이 가능한 궤도형 로봇의 신뢰성 있는 자율 주행 정찰 시스템)

  • Juhyeong Roh;Boseong Kim;Dokyeong Kim;Jihyeok Kim;D. Hyunchul Shim
    • The Journal of Korea Robotics Society
    • /
    • Vol. 19, No. 2
    • /
    • pp.149-158
    • /
    • 2024
  • This paper presents a robust autonomous navigation and reconnaissance system for tracked robots, designed to handle complex multi-floor indoor environments with stairs. We introduce a localization algorithm that adjusts scan-matching parameters to robustly estimate position and build maps in feature-scarce environments such as narrow rooms and staircases. The system also features a path-planning algorithm that computes distance costs from surrounding obstacles, integrated with a specialized PID controller tuned to the robot's differential kinematics for collision-free navigation in confined spaces. The perception module leverages multi-image fusion and camera-LiDAR fusion to accurately detect and map the 3D positions of objects around the robot in real time. Practical tests in real-world settings verified that the system performs reliably, and based on this reliability we expect the proposed autonomous reconnaissance system to be of practical use in actual disaster situations and in environments that are difficult for humans to access.
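The differential-kinematics PID control mentioned above can be sketched in a few lines. The gains, time step, and single-axis heading model below are illustrative assumptions, not the paper's tuning.

```python
# Minimal PID sketch for heading control of a differential-drive robot.
# Gains, dt, and the kinematic model are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Heading error (rad) drives the speed difference between the two tracks.
# The integral gain is left at zero here: the heading plant is itself an
# integrator, so PD alone already removes steady-state error in this toy.
pid = PID(kp=2.0, ki=0.0, kd=0.5)
heading, target, dt = 0.0, 1.0, 0.05
for _ in range(200):
    w = pid.step(target - heading, dt)   # angular-velocity command
    heading += w * dt                    # simple kinematic update
```

In a real tracked robot the commanded angular velocity would be mapped to left/right track speeds rather than integrated directly.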

Experimental result of Real-time Sonar-based SLAM for underwater robot (소나 기반 수중 로봇의 실시간 위치 추정 및 지도 작성에 대한 실험적 검증)

  • Lee, Yeongjun;Choi, Jinwoo;Ko, Nak Yong;Kim, Taejin;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • Vol. 54, No. 3
    • /
    • pp.108-118
    • /
    • 2017
  • This paper presents experimental results of real-time sonar-based SLAM (simultaneous localization and mapping) using probability-based landmark recognition. The sonar-based SLAM is used for navigation of an underwater robot. Inertial sensor data from an IMU (Inertial Measurement Unit) and a DVL (Doppler Velocity Log) are fused with external information from sonar image processing by an Extended Kalman Filter (EKF) to obtain the navigation solution. The vehicle location is estimated from the inertial sensor data and corrected by sonar data, which provide the relative position between the vehicle and landmarks on the bottom of the basin. The proposed method was verified through experiments in a basin environment using the underwater robot yShark.
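The EKF fusion scheme summarized above — inertial prediction corrected by a sonar-derived landmark observation — can be sketched as follows. The position-only state, noise levels, and direct relative-position measurement model are simplifying assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Minimal EKF sketch: dead-reckoning prediction from DVL/IMU velocity,
# corrected by the position of the vehicle relative to a known landmark.
# State, matrices, and noise values are illustrative assumptions.

def ekf_predict(x, P, v, dt, Q):
    """Propagate a 2D position state with a velocity input."""
    F = np.eye(2)                # position-only state, identity dynamics
    x = x + v * dt               # x_k = x_{k-1} + v*dt
    P = F @ P @ F.T + Q          # covariance grows with process noise
    return x, P

def ekf_update(x, P, z, landmark, R):
    """Correct with the measured vehicle position relative to a landmark."""
    H = np.eye(2)                # measurement observes position directly
    z_pred = x - landmark        # predicted relative position
    y = z - z_pred               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 0.1
Q, R = np.eye(2) * 0.01, np.eye(2) * 0.05
landmark = np.array([5.0, 2.0])
for _ in range(10):              # one second of dead reckoning at 10 Hz
    x, P = ekf_predict(x, P, v=np.array([0.5, 0.0]), dt=0.1, Q=Q)
# sonar observes the vehicle 4.5 m behind and 2 m beside the landmark
x, P = ekf_update(x, P, z=np.array([-4.5, -2.0]), landmark=landmark, R=R)
```

The update step shrinks the position covariance, which is the mechanism by which the sonar landmark bounds the inertial drift.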

Visualization and Localization of Fusion Image Using VRML for Three-dimensional Modeling of Epileptic Seizure Focus (VRML을 이용한 융합 영상에서 간질환자 발작 진원지의 3차원적 가시화와 위치 측정 구현)

  • 이상호;김동현;유선국;정해조;윤미진;손혜경;강원석;이종두;김희중
    • Progress in Medical Physics
    • /
    • Vol. 14, No. 1
    • /
    • pp.34-42
    • /
    • 2003
  • In medical imaging, three-dimensional (3D) display using the Virtual Reality Modeling Language (VRML), a portable file format, can deliver intuitive information more efficiently on the World Wide Web (WWW). Web-based 3D visualization of functional images combined with anatomical images has not been studied much in systematic ways. The goal of this study was to achieve simultaneous observation of 3D anatomical and functional models together with planar images on the WWW, providing their locational information in 3D space through a measuring implement built with VRML. MRI and ictal-interictal SPECT images were obtained from one epileptic patient. Subtraction ictal SPECT co-registered to MRI (SISCOM) was performed to improve identification of the seizure focus. SISCOM image volumes were thresholded at one standard deviation (1-SD) and two standard deviations (2-SD). SISCOM foci and the boundaries of gray matter, white matter, and cerebrospinal fluid (CSF) in the MRI volume were segmented and rendered to VRML polygonal surfaces by the marching cubes algorithm. Line profiles along the x- and y-axes representing real lengths on an image were acquired; their maximum lengths were both 211.67 mm, and the ratio of real size to rendered VRML surface size was approximately 1 to 605.9. A VRML measuring tool was built and merged with the VRML surfaces. User-interface tools with embedded JavaScript routines were added to display MRI planar images as cross sections of the 3D surface models and to set the transparency of the 3D surface models. When the transparencies were properly controlled, a fused display of the brain geometry with the 3D distributions of focal activated regions intuitively revealed the spatial correlations among the three 3D surface models. The epileptic seizure focus was in the right temporal lobe of the brain. The real position of the seizure focus could be verified with the VRML measuring tool, and the anatomy corresponding to the focus could be confirmed from the MRI planar images crossing the 3D surface models. The VRML application developed in this study has several advantages. First, 3D fused display and control of anatomical and functional images were achieved on the Web. Second, vector analysis of a 3D surface model was enabled by the VRML measuring tool based on real size. Finally, the anatomy corresponding to the seizure focus was intuitively identified through correlation with the MRI images. This web-based visualization of 3D fusion images and their localization should aid online research and education in diagnostic radiology, radiation therapy, and surgical applications.

Tumor Motion Tracking during Radiation Treatment using Image Registration and Tumor Matching between Planning 4D MDCT and Treatment 4D CBCT (치료계획용 4D MDCT와 치료 시 획득한 4D CBCT간 영상정합 및 종양 매칭을 이용한 방사선 치료 시 종양 움직임 추적)

  • Jung, Julip;Hong, Helen
    • Journal of KIISE
    • /
    • Vol. 43, No. 3
    • /
    • pp.353-361
    • /
    • 2016
  • During image-guided radiation treatment of lung cancer patients, it is necessary to track tumor motion because the tumor position can change during treatment as a consequence of respiratory and cardiac motion. In this paper, we propose a method for tracking lung tumor motion based on three-dimensional image information from planning 4D MDCT and treatment 4D CBCT images. First, to track the tumor motion effectively during treatment, the global motion of the tumor is estimated with a tumor-specific motion model obtained from the planning 4D MDCT images. Second, to increase tracking accuracy, the local motion of the tumor is estimated from the structural information of the tumor in the 4D CBCT images. To evaluate the proposed method, we assessed its tracking results on a digital phantom. The results show that the tumor localization error of the local motion estimation is 45% lower than that of the global motion estimation.
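The global-then-local idea can be illustrated with a 1D toy: a motion model predicts an approximate shift, and local matching around that prediction refines it. The mean-squared-difference search below is a generic stand-in, not the paper's 3D matching.

```python
import numpy as np

# Coarse-to-fine sketch: refine a model-predicted shift by searching a
# small window around it. Signals and offsets are illustrative.

def refine_offset(reference, frame, predicted, search=3):
    """Search around the predicted shift for the best local match."""
    n = len(reference)
    best, best_err = predicted, np.inf
    for d in range(predicted - search, predicted + search + 1):
        a = reference[max(0, -d): n - max(0, d)]
        b = frame[max(0, d): n - max(0, -d)]
        err = float(((a - b) ** 2).mean())   # mean squared difference
        if err < best_err:
            best, best_err = d, err
    return best

profile = np.sin(np.linspace(0.0, 6.28, 50))   # 1D intensity profile
shifted = np.roll(profile, 5)                  # true shift of 5 samples
offset = refine_offset(profile, shifted, predicted=4)  # model said 4
```

The local search corrects the coarse prediction from 4 to the true shift, mirroring how local structural matching reduces the global model's residual error.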

A Study on the Image Based Auto-focus Method Considering Jittering of Airborne EO/IR (항공탑재 EO/IR의 영상떨림을 고려한 영상기반 자동 초점조절 기법 연구)

  • Kang, Myung-Ho;Kim, Sung-Jae;Koh, Yeong Jun
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • Vol. 50, No. 1
    • /
    • pp.39-45
    • /
    • 2022
  • In this paper, we propose methods to improve image-based auto-focus that compensate for the drawbacks of traditional auto-focus control. When adjusting the focus, the focus window cannot be set to the same position if the camera's line of sight (LOS) is not directed at the same location or if the image flows or shakes. To address this issue, we apply image-tracking techniques to improve the accuracy of optimal focus localization. In addition, although the same focus value should be obtained at the same focus step, different values can be computed because of fine camera shaking or image disturbance due to atmospheric scattering. To tackle this problem, Stable Adjacency Frame Selection (SAFS) is proposed. Experimental results show that the proposed methodology finds the best focus position more accurately than traditional methods.
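A contrast-based focus value of the kind such auto-focus loops maximize can be sketched as follows. The gradient-energy metric and the synthetic image are illustrative stand-ins for the paper's focus measure, not its actual definition.

```python
import numpy as np

# Sketch of contrast-based focus evaluation: a sharper image has larger
# gradient energy, so the best focus step maximizes this measure.
# Metric and synthetic data are illustrative assumptions.

def focus_value(img):
    """Sum of squared finite-difference gradients over the focus window."""
    gx = np.diff(img, axis=1)[:-1, :]      # horizontal gradients
    gy = np.diff(img, axis=0)[:, :-1]      # vertical gradients
    return float((gx ** 2 + gy ** 2).sum())

def image_at_blur(blur):
    """Synthetic scene whose contrast drops as defocus blur grows."""
    x = np.linspace(0.0, 4 * np.pi, 64)
    img = np.outer(np.sin(x), np.cos(x))
    return img / (1.0 + blur)              # heavier blur -> weaker gradients

# Sweep five focus steps; the middle step (zero blur) should win.
values = [focus_value(image_at_blur(b)) for b in (2.0, 1.0, 0.0, 1.0, 2.0)]
best = int(np.argmax(values))
```

Frame-to-frame noise in such a curve is exactly what motivates averaging or selecting stable adjacent frames before comparing focus values.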

Localization of Unmanned Ground Vehicle using 3D Registration of DSM and Multiview Range Images: Application in Virtual Environment (DSM과 다시점 거리영상의 3차원 등록을 이용한 무인이동차량의 위치 추정: 가상환경에서의 적용)

  • Park, Soon-Yong;Choi, Sung-In;Jang, Jae-Seok;Jung, Soon-Ki;Kim, Jun;Chae, Jeong-Sook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 15, No. 7
    • /
    • pp.700-710
    • /
    • 2009
  • A computer vision technique for estimating the location of an unmanned ground vehicle is proposed. Identifying the location of the unmanned vehicle is a very important task for its automatic navigation. Conventional positioning sensors may fail to work properly in some real situations due to internal and external interference. Given a DSM (Digital Surface Map), the location of the vehicle can be estimated by registering the DSM with multiview range images obtained at the vehicle. Registration of the DSM and range images yields the 3D transformation from the coordinate system of the range sensor to the reference coordinate system of the DSM. To estimate the vehicle position, we first register a range image to the DSM coarsely and then refine the result. For coarse registration, we employ a fast random-sample matching method. After the initial position is estimated and refined, all subsequent range images are registered by applying a pair-wise registration technique between range images. To reduce the accumulated error of pair-wise registration, we periodically refine the registration between the range images and the DSM. A virtual environment was established to perform several experiments using a virtual vehicle; range images were created from the DSM by modeling a real 3D sensor. The vehicle moved along three different paths while acquiring range images. Experimental results show that the registration error is below about 1.3 m on average.
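The refinement step of such a registration pipeline is typically a least-squares rigid alignment over matched points. A minimal 2D sketch (assuming correspondences are already available, e.g. from the coarse random-sample matching; the data here are synthetic):

```python
import numpy as np

# Least-squares rigid alignment (Kabsch/Procrustes) of matched 2D
# point sets: the standard refine step in registration pipelines.

def rigid_align(src, dst):
    """Return (R, t) minimizing ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 30-degree rotation plus translation from matched points.
rng = np.random.default_rng(0)
src = rng.random((20, 2))
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
dst = src @ R_true.T + np.array([1.0, 2.0])
R, t = rigid_align(src, dst)
```

In 3D the same closed form applies with 3×3 matrices; iterating it with re-estimated correspondences gives the familiar ICP refinement loop.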

Development of Patrol Robot using DGPS and Curb Detection (DGPS와 연석추출을 이용한 순찰용 로봇의 개발)

  • Kim, Seung-Hun;Kim, Moon-June;Kang, Sung-Chul;Hong, Suk-Kyo;Roh, Chi-Won
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 13, No. 2
    • /
    • pp.140-146
    • /
    • 2007
  • This paper demonstrates the development of a mobile robot for patrol. We fuse differential GPS, angle-sensor, and odometry data within an extended Kalman filter framework to localize the mobile robot in outdoor environments. An important feature of road environments is the existence of curbs, so we also propose an algorithm that extracts the positions of curbs from laser range finder data using the Hough transform. The mobile robot builds a map of the road curbs, which is used for tracking and localization. The patrol robot system consists of a mobile robot and a control station: the robot sends image data from its camera to the control station, which receives and displays them. The system can be operated in two modes, teleoperated or autonomous. In teleoperated mode, the operator commands the mobile robot based on the image data; in autonomous mode, the robot must autonomously track predefined waypoints, for which we designed a path-tracking controller. Experiments on real roads confirmed that the proposed algorithms perform properly in outdoor environments.
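The Hough-transform line extraction used for curbs can be sketched over 2D range points as follows. The accumulator resolutions and the synthetic scan are illustrative choices, not the paper's parameters.

```python
import numpy as np

# Hough transform over 2D laser-range points: each point votes for all
# lines (theta, rho) passing through it; the strongest bin is the
# dominant line (a curb candidate). Resolutions are illustrative.

def hough_line(points, n_theta=180, rho_res=0.05, rho_max=10.0):
    """Vote in (theta, rho) space; return the strongest line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_rho = int(2 * rho_max / rho_res)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # distance per angle
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r * rho_res - rho_max            # best (theta, rho)

# Synthetic scan: points along the vertical line x = 2 m (a curb edge).
pts = [(2.0, y) for y in np.linspace(-1.0, 1.0, 21)]
theta, rho = hough_line(pts)
```

The recovered normal angle and distance (theta ≈ 0, rho ≈ 2 m) parameterize the curb line that the robot can then track along the road.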

Precise Localization for Mobile Robot Based on Cell-coded Landmarks on the Ceiling (천정 부착 셀코드 랜드마크에 기반한 이동 로봇의 정밀 위치 계산)

  • Chen, Hongxin;Wang, Shi;Yang, Chang-Ju;Lee, Jun-Ho;Kim, Hyong-Suk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • Vol. 46, No. 2
    • /
    • pp.75-83
    • /
    • 2009
  • This paper presents a new mobile robot localization method for indoor robot navigation. The method uses color-coded landmarks on the ceiling, observed by a camera installed on the robot facing upward. The proposed "cell-coded map", which uses only nine different kinds of color-coded landmarks distributed in a particular pattern, helps reduce the complexity of the landmark structure, and the technique is applicable to indoor spaces of unlimited size. The structure of the landmarks and the recognition method are introduced, and two rigid rules are applied to ensure the correctness of the recognition. Experimental results prove that the method is useful.
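Once a ceiling landmark is recognized, recovering the robot position reduces to scaling the landmark's pixel offset from the image center by the ceiling height. The pinhole scaling and the numbers below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Sketch: robot (x, y) from one recognized ceiling landmark seen by an
# upward-facing camera. Pinhole model and map values are illustrative.

def robot_pose(px_offset, landmark_xy, ceiling_height, focal_px):
    """Pixel offset of the landmark from the image center -> robot (x, y)."""
    scale = ceiling_height / focal_px      # metres per pixel at the ceiling
    return np.asarray(landmark_xy) - np.asarray(px_offset) * scale

# Landmark known (from the cell-coded map) to sit at (3, 4) m; it appears
# 100 px right and 50 px up of the image center.
xy = robot_pose((100, 50), (3.0, 4.0), ceiling_height=2.5, focal_px=500)
```

A full implementation would also recover heading from the landmark's orientation in the image; the cell code identifies which map landmark was seen.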

Nonlinear model for estimating depth map of haze removal (안개제거의 깊이 맵 추정을 위한 비선형 모델)

  • Lee, Seungmin;Ngo, Dat;Kang, Bongsoon
    • Journal of IKEEE
    • /
    • Vol. 24, No. 2
    • /
    • pp.492-496
    • /
    • 2020
  • Visibility deteriorates in hazy weather, making it difficult to accurately recognize information captured by a camera. Research is therefore being actively conducted on haze removal so that camera-based applications such as object localization/detection and lane recognition can operate normally even in hazy weather. In this paper, we propose a nonlinear model for depth-map estimation, based on an extensive analysis showing that the difference between brightness and saturation in a hazy image increases nonlinearly with scene depth. Quantitative evaluation (MSE, SSIM, TMQI) shows that the proposed haze removal method based on the nonlinear model is superior to other state-of-the-art methods.
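The brightness-saturation cue can be sketched as follows. The exponential mapping and its coefficient are illustrative assumptions standing in for the nonlinear model fitted in the paper.

```python
import numpy as np

# Sketch of depth-map estimation from the brightness-saturation gap:
# haze washes colors out, so the gap grows with scene depth. The
# nonlinear mapping and its coefficient are illustrative assumptions.

def depth_map(rgb):
    """Estimate relative scene depth from an RGB image in [0, 1]."""
    v = rgb.max(axis=-1)                               # HSV value
    s = np.where(v > 0,
                 (v - rgb.min(axis=-1)) / np.maximum(v, 1e-6),
                 0.0)                                  # HSV saturation
    gap = np.clip(v - s, 0.0, 1.0)                     # haze-thickness cue
    return 1.0 - np.exp(-2.0 * gap)                    # nonlinear mapping

# A hazy (washed-out) pixel should map to larger depth than a vivid one.
hazy  = np.array([[[0.80, 0.80, 0.78]]])
vivid = np.array([[[0.90, 0.20, 0.10]]])
```

With an estimated depth map, the transmission and haze-free radiance follow from the standard atmospheric scattering model.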

Side Scan Sonar based Pose-graph SLAM (사이드 스캔 소나 기반 Pose-graph SLAM)

  • Gwon, Dae-Hyeon;Kim, Joowan;Kim, Moon Hwan;Park, Ho Gyu;Kim, Tae Yeong;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • Vol. 12, No. 4
    • /
    • pp.385-394
    • /
    • 2017
  • Side scan sonar (SSS) provides valuable information for robot navigation; however, the use of side scan sonar images in navigation has not been fully studied. In this paper, we use range data and side scan sonar images from the UnderWater Simulator (UWSim) and propose measurement models within a feature-based simultaneous localization and mapping (SLAM) framework. The range data are obtained from an echosounder, and the side scan sonar images from the side scan sonar module of UWSim. We use A-KAZE features for SSS image matching and adjust the relative robot pose by SSS bundle adjustment (BA) with the Ceres solver. BA provides the loop-closure constraints of the pose-graph SLAM, and the graph is optimized with incremental smoothing and mapping (iSAM). The optimized trajectory is compared against dead reckoning (DR).
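The pose-graph formulation — odometry edges chaining the poses plus a loop-closure edge tying the trajectory back together, solved by least squares — can be illustrated in 1D. The weights and measurements below are illustrative; an incremental solver such as iSAM optimizes the same kind of system as new edges arrive.

```python
import numpy as np

# Minimal 1D pose-graph sketch: odometry edges x_{i+1} - x_i = odom[i]
# and one loop-closure edge x_n - x_0 = loop, solved as weighted linear
# least squares. Weights and measurements are illustrative.

def optimize_chain(odom, loop, w_odom=1.0, w_loop=10.0):
    """Poses x_1..x_n with x_0 fixed at 0; returns the optimized poses."""
    n = len(odom)
    A, b = [], []
    for i, d in enumerate(odom):           # odometry constraints
        row = np.zeros(n)
        row[i] = 1.0
        if i > 0:
            row[i - 1] = -1.0
        A.append(w_odom * row)
        b.append(w_odom * d)
    row = np.zeros(n)                      # loop closure: x_n = loop
    row[-1] = 1.0
    A.append(w_loop * row)
    b.append(w_loop * loop)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x

# Drifting odometry claims 1.1 m per step; the loop closure says the
# robot is actually 4.0 m from the start after 4 steps.
x = optimize_chain([1.1, 1.1, 1.1, 1.1], loop=4.0)
```

The optimizer distributes the 0.4 m of accumulated drift across the chain, which is exactly what the SSS loop closures do for the full trajectory.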