• Title/Summary/Keyword: vision based navigation

Search Result 192, Processing Time 0.027 seconds

Performance of AMI-CORBA for Field Robot Application

  • Syahroni Nanang;Choi Jae-Weon
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.10a
    • /
    • pp.384-389
    • /
    • 2005
  • The objective of this project is to develop a cooperative Field Robot (FR) using a customized Open Control Platform (OCP) in the design and development process. An OCP is a CORBA-based solution for networked control systems, which facilitates the transitioning of control designs to embedded targets. To achieve the cooperative surveillance system, two FRs distribute navigation messages (GPS and sensor data) using CORBA event-channel communication, while graphical information from an IR night-vision camera is distributed using CORBA Asynchronous Method Invocation (AMI). The QoS features of AMI, which provide an additional delivery method for distributing IR camera images over the network, are evaluated in this experiment. This paper also presents an empirical performance evaluation in which variable chunk sizes were compared against the number of clients and the message latency; some of the measurement data are summarized in the following paragraph. In the AMI buffer-size measurement, when the chunk size was changed, the message latency changed significantly with the frame size. A smaller frame size between 256 bytes and 512 bytes is more efficient for message sizes below 2 Mbytes, but on average, for large message sizes a bigger frame size is more efficient. For several destinations, the same experiment using 512-byte to 2-Mbyte frames with 2 to 5 destinations is presented. For message sizes larger than 2 Mbytes, AMI is still able to meet the requirement for more than 5 clients simultaneously.

  • PDF
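The chunked image delivery that the abstract measures can be illustrated with a minimal sketch; the function name and API are illustrative assumptions, not the paper's code:

```python
def chunk_message(payload: bytes, frame_size: int):
    """Split a message into fixed-size frames before sending, as in the
    chunked AMI image delivery evaluated above (e.g. 256-512 byte frames)."""
    if frame_size <= 0:
        raise ValueError("frame_size must be positive")
    # slice the payload into consecutive frames; the last may be shorter
    return [payload[i:i + frame_size]
            for i in range(0, len(payload), frame_size)]
```

Smaller frames mean more per-frame overhead for large messages, which matches the trade-off the measurements describe.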

Deep Learning-based Action Recognition using Skeleton Joints Mapping (스켈레톤 조인트 매핑을 이용한 딥 러닝 기반 행동 인식)

  • Tasnim, Nusrat;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.24 no.2
    • /
    • pp.155-162
    • /
    • 2020
  • Recently, with the development of computer vision and deep learning technology, research on human action recognition has been actively conducted for video analysis, video surveillance, interactive multimedia, and human-machine interaction applications. Diverse techniques have been introduced by many researchers for human action understanding and classification using RGB images, depth images, skeletons, and inertial data. However, skeleton-based action discrimination is still a challenging research topic for human-machine interaction. In this paper, we propose an end-to-end skeleton joint mapping of actions for generating a spatio-temporal image, the so-called dynamic image. Then, an efficient deep convolutional neural network is devised to perform the classification among the action classes. We use the publicly accessible UTD-MHAD skeleton dataset to evaluate the performance of the proposed method. As a result of the experiments, the proposed system shows better performance than existing methods, with a high accuracy of 97.45%.
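The abstract does not detail the paper's specific joint-mapping scheme; the following sketch only illustrates the general idea of encoding a skeleton sequence as a 2D "dynamic image" by normalizing joint coordinates over time (the layout and normalization are assumptions):

```python
import numpy as np

def skeleton_to_image(sequence):
    """Encode a skeleton sequence (frames x joints x 3) as an 8-bit
    image: rows = frames, columns = flattened joint coordinates,
    min-max normalized to the [0, 255] range."""
    seq = np.asarray(sequence, dtype=float)
    flat = seq.reshape(seq.shape[0], -1)       # (T, J*3)
    lo, hi = flat.min(), flat.max()
    scaled = (flat - lo) / (hi - lo + 1e-8)    # guard against a flat sequence
    return (scaled * 255).astype(np.uint8)
```

An image like this can then be fed to an ordinary 2D convolutional classifier.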

Digital Image based Real-time Sea Fog Removal Technique using GPU (GPU를 이용한 영상기반 고속 해무제거 기술)

  • Choi, Woon-sik;Lee, Yoon-hyuk;Seo, Young-ho;Choi, Hyun-jun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.12
    • /
    • pp.2355-2362
    • /
    • 2016
  • Sea fog removal is an important issue in both computer vision and image processing. Sea fog or haze removal is widely used in many fields, such as automatic control systems, CCTV, and image recognition. Color image dehazing techniques have been extensively studied, and especially the dark channel prior (DCP) technique has been widely used. This paper proposes a fast and efficient image prior, the dark channel prior, to remove sea fog from a single digital image on the GPU. We implement a basic parallel program and then optimize it to obtain a performance acceleration of more than 250 times. While parallelizing and optimizing the algorithm, we improve some parts of the original serial program or basic parallel program according to the characteristics of several steps. The proposed GPU programming algorithm and implementation results may be used with advantage as pre-processing in many systems, such as safe ship navigation, topographical survey, intelligent vehicles, etc.
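The dark channel prior the paper builds on can be sketched in a few lines; this is a minimal serial reference (He et al.'s formulation), not the paper's GPU implementation:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior: per-pixel minimum over the RGB channels,
    followed by a local minimum filter over a patch neighborhood."""
    min_rgb = image.min(axis=2)            # per-pixel channel minimum
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for y in range(h):                     # local minimum filter
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark

def transmission(image, atmosphere, omega=0.95, patch=15):
    """Haze transmission estimate: t(x) = 1 - omega * dark(I / A)."""
    return 1.0 - omega * dark_channel(image / atmosphere, patch)
```

The two nested minimum filters are exactly the per-pixel-independent work that parallelizes well on a GPU.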

An Efficient Algorithm for 3-D Range Measurement using Disparity of Stereoscopic Camera (스테레오 카메라의 양안 시차를 이용한 거리 계측의 고속 연산 알고리즘)

  • 김재한
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.5 no.6
    • /
    • pp.1163-1168
    • /
    • 2001
  • Ranging systems measure range data in three-dimensional coordinates from a target surface. These non-contact remote ranging systems are widely used in various automation applications, including military equipment, construction, navigation, inspection, assembly, and robot vision. Active ranging systems using the time-of-flight technique or the light-pattern illumination technique are complex and expensive, while passive systems based on stereo or focusing principles are time-consuming. The proposed algorithm, which is based on the cross-correlation of projection profiles of vertical edges, provides the advantages of fast and simple operation in range acquisition. The results of the experiment show the effectiveness of the proposed algorithm.

  • PDF
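Once the cross-correlation step yields a disparity, range follows from standard stereo triangulation; a minimal sketch of that relationship (parameter names are illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate range from binocular disparity: Z = f * B / d.

    disparity_px: disparity between the two views, in pixels (> 0)
    focal_px:     focal length, in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relation: range resolution degrades quadratically with distance, since a fixed disparity error maps to a larger depth error far away.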

Bundle Adjustment and 3D Reconstruction Method for Underwater Sonar Image (수중 영상 소나의 번들 조정과 3차원 복원을 위한 운동 추정의 모호성에 관한 연구)

  • Shin, Young-Sik;Lee, Yeong-jun;Cho, Hyun-Taek;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.2
    • /
    • pp.51-59
    • /
    • 2016
  • In this paper we present (1) an analysis of imaging sonar measurements for two-view relative pose estimation of an autonomous vehicle and (2) a bundle adjustment and 3D reconstruction method using imaging sonar. Sonar has been a popular sensor for underwater applications due to its robustness to water turbidity and limited visibility in the water medium. While vision-based motion estimation has been applied to many ground vehicles for motion estimation and 3D reconstruction, imaging sonar poses challenges in relative sensor-frame motion estimation. We focus on the fact that the sonar measurement inherently poses ambiguity. This paper illustrates the source of the ambiguity in sonar measurements and summarizes assumptions for sonar-based robot navigation. For validation, we synthetically generated an underwater seafloor with varying complexity to analyze the error in the motion estimation.
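The measurement ambiguity the paper analyzes can be shown with a small idealized sonar model (not the paper's code): an imaging sonar observes range and bearing but not elevation, so points that differ only in elevation angle are indistinguishable.

```python
import numpy as np

def sonar_measurement(point):
    """Idealized imaging-sonar model: a 3D point maps to (range, bearing).
    The elevation angle is not observed, which is the source of ambiguity."""
    x, y, z = point
    rng = np.sqrt(x * x + y * y + z * z)
    bearing = np.arctan2(y, x)
    return rng, bearing
```

Any two points on the same elevation arc (same range and bearing, different elevation) produce identical measurements, which is why extra assumptions are needed for sonar-based navigation.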

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.3
    • /
    • pp.107-111
    • /
    • 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map using a single sensor has limitations due to the price of expensive sensors. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we utilized a commercial visual-inertial odometry sensor to estimate the current position and attitude states. Based on these state values, the 2D LiDAR measurements describe the surrounding environment to create a point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, it was confirmed that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper.
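The core fusion step, projecting each 2D LiDAR scan into the world frame using the pose from the visual-inertial odometry sensor, can be sketched as follows (a minimal model under the assumption that scan points lie in the sensor's x-y plane; names are illustrative):

```python
import numpy as np

def lidar_scan_to_world(ranges, angles, rotation, translation):
    """Project a 2D LiDAR scan into the world frame using the pose
    (rotation matrix R, translation t) reported by the odometry sensor."""
    # polar -> Cartesian in the sensor frame (z = 0 scan plane)
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)
    # world point = R @ p + t, applied to every scan point
    return pts @ rotation.T + translation
```

Accumulating these transformed scans over the trajectory yields the 3D point cloud map.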

Localization of Unmanned Ground Vehicle using 3D Registration of DSM and Multiview Range Images: Application in Virtual Environment (DSM과 다시점 거리영상의 3차원 등록을 이용한 무인이동차량의 위치 추정: 가상환경에서의 적용)

  • Park, Soon-Yong;Choi, Sung-In;Jang, Jae-Seok;Jung, Soon-Ki;Kim, Jun;Chae, Jeong-Sook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.7
    • /
    • pp.700-710
    • /
    • 2009
  • A computer vision technique for estimating the location of an unmanned ground vehicle is proposed. Identifying the location of the unmanned vehicle is a very important task for automatic navigation of the vehicle. Conventional positioning sensors may fail to work properly in some real situations due to internal and external interferences. Given a DSM (Digital Surface Map), the location of the vehicle can be estimated by registering the DSM with multiview range images obtained at the vehicle. Registration of the DSM and range images yields the 3D transformation from the coordinates of the range sensor to the reference coordinates of the DSM. To estimate the vehicle position, we first register a range image to the DSM coarsely and then refine the result. For coarse registration, we employ a fast random-sample matching method. After the initial position is estimated and refined, all subsequent range images are registered by applying a pair-wise registration technique between range images. To reduce the accumulated error of pair-wise registration, we periodically refine the registration between the range images and the DSM. A virtual environment is established to perform several experiments using a virtual vehicle. Range images are created based on the DSM by modeling a real 3D sensor. The vehicle moves along three different paths while acquiring range images. Experimental results show that the registration error is under about 1.3 m on average.
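At the heart of pair-wise registration is solving for a rigid transform between two point sets; with known correspondences the least-squares solution is the classic Kabsch/SVD method. This sketch shows that underlying math, not the paper's random-sample matcher:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) between corresponding
    3D point sets: finds R, t minimizing ||R @ src_i + t - dst_i||."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

ICP-style refinement alternates this solve with re-estimating correspondences, which is where the pair-wise drift the paper periodically corrects comes from.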

The Vessels Traffic Measurement and Real-time Track Assessment using Computer Vision (컴퓨터 비젼을 이용한 선박 교통량 측정 및 항적 평가)

  • Joo, Ki-Se;Jeong, Jung-Sik;Kim, Chol-Seong;Jeong, Jae-Yong
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.17 no.2
    • /
    • pp.131-136
    • /
    • 2011
  • The track calculation and traffic measurement of sailing ships using computer vision are useful methods for preventing maritime accidents by predicting the possibility of an accident in advance. In this paper, sailing ships are recognized using image erosion, a differential operator, and minimax values; the results can be verified directly because the calculated coordinates are displayed on an electronic navigation chart. The algorithm developed in this paper, based on area information, has an advantage over conventional radar systems, which focus on point information.
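Image erosion, the first segmentation step the abstract mentions, can be sketched as a minimum filter on a binary mask (a minimal reference implementation, not the paper's code):

```python
import numpy as np

def erode(binary, k=3):
    """Binary erosion with a k x k structuring element: a pixel stays 1
    only if its entire k x k neighborhood is 1, shrinking blob borders
    and removing speckle noise."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")  # zero border
    h, w = binary.shape
    out = np.zeros_like(binary)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out
```

Eroding before measuring blob area suppresses small wave and noise responses so that only ship-sized regions remain.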

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is a burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that determine how the coordinates of the 3D real world are projected onto the 2D plane, where the intrinsic parameters are internal factors of the camera, and the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation methods relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming due to the requirement of installing a target object for estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with certain specifications, extracts the 2D coordinates of the corresponding image plane through keypoint detection, and derives the camera's coordinates through the perspective-n-point (PnP) method, which derives the extrinsic parameters by matching 3D and 2D coordinate pairs. This framework presents a substantial advancement, as it streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV technology application and data collection at construction sites. This approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
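The 3D-2D matching step that PnP performs can be illustrated with the closely related Direct Linear Transform (DLT), which estimates the full 3x4 projection matrix from six or more correspondences; this is a sketch of the underlying geometry, not the paper's pipeline:

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Direct Linear Transform: estimate the 3x4 projection matrix P
    (intrinsics times extrinsics, up to scale) from >= 6 non-coplanar
    3D-2D correspondences by solving A vec(P) = 0 with an SVD."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    # solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def project(P, pt3d):
    """Project a 3D point through P and dehomogenize to pixel coords."""
    x = P @ np.append(pt3d, 1.0)
    return x[:2] / x[2]
```

In practice one would use a dedicated PnP solver with known intrinsics (e.g. OpenCV's `solvePnP`); DLT is shown here because it is self-contained.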

Position Estimation of Wheeled Mobile Robot in a Corridor Using Neural Network (신경망을 이용한 복도에서의 구륜이동로봇의 위치추정)

  • Choi, Kyung-Jin;Lee, Young-Hyun;Park, Chong-Kug
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.5
    • /
    • pp.577-582
    • /
    • 2004
  • This paper describes a position estimation algorithm using a neural network for the navigation of a vision-based Wheeled Mobile Robot (WMR) in a corridor, taking ceiling lamps as landmarks. In images of a corridor, the lamp line on the ceiling has a specific slope relative to the lateral position of the WMR. The vanishing point produced by the lamp line also has a specific position relative to the orientation of the WMR. The ceiling lamps have a limited size and a circle-like shape in the image. Simple image-processing algorithms are used to extract the lamps from the corridor image. Then the lamp line and the vanishing point's position are defined and calculated at known positions of the WMR in a corridor. To estimate the lateral position and orientation of the WMR from an image, the relationship between the position of the WMR and the features of the ceiling lamps has to be defined. A data set relating positions of the WMR to features of the lamps is constructed. A neural network is composed and trained with this data set, using the back-propagation algorithm (BPN) for learning, and the method is applied to navigation of the WMR in a corridor.
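The feature-to-pose regression the paper trains can be sketched with a tiny back-propagation network; the architecture, feature names (lamp-line slope, vanishing-point position), and targets (lateral offset, orientation) are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def train_mlp(X, Y, hidden=8, lr=0.1, epochs=1000, seed=0):
    """One-hidden-layer MLP trained by batch back-propagation, mapping
    image features to pose values; returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1]))
    b2 = np.zeros(Y.shape[1])
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        err = (H @ W2 + b2) - Y           # linear output layer error
        # gradients via the chain rule (back-propagation)
        gW2 = H.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)  # tanh derivative
        gW1 = X.T @ dH / len(X)
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2
```

Trained on (feature, pose) pairs collected at known WMR positions, such a network interpolates the pose for new corridor images.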