• Title/Summary/Keyword: Vision Based Navigation

Flexible camera series network for deformation measurement of large scale structures

  • Yu, Qifeng; Guan, Banglei; Shang, Yang; Liu, Xiaolin; Li, Zhang
    • Smart Structures and Systems / v.24 no.5 / pp.587-595 / 2019
  • Deformation measurement of large-scale structures, such as the ground beds of high-rise buildings, tunnels, bridges, and railways, is important for ensuring service quality and safety. The pose-relay videometrics method and the displacement-relay videometrics method have already been presented to measure the pose of non-intervisible objects and the vertical subsidence of unstable areas, respectively. Both methods combine cameras and cooperative markers to form camera series networks. Building on these two networks, we propose two novel videometrics methods with a closed-loop camera series network for deformation measurement of large-scale structures. The closed-loop camera series network offers "closed-loop constraints" for the camera series network: the deformation of the reference points observed by different measurement stations must be identical. These constraints improve the measurement accuracy of the camera series network. Furthermore, multiple closed loops and flexible combinations of camera series networks are introduced to facilitate more complex deformation measurement tasks. Simulated results show that the closed-loop constraints effectively enhance the measurement accuracy of the camera series network.
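
A minimal sketch of the closed-loop idea described in this abstract, under assumed data (Python/NumPy; the station geometry, noise levels, and estimator are illustrative, not the authors' implementation): two stations observe the same reference point, and the closed-loop constraint forces them to share a single deformation estimate.

```python
# Illustrative sketch only: the "closed-loop constraint" says the deformation of a
# reference point observed by different measurement stations must be identical, so
# the pooled observations are adjusted to a single common value.
import numpy as np

rng = np.random.default_rng(0)
true_deformation = 2.0                                  # mm, assumed ground-truth subsidence

# Each station makes noisy observations of the shared reference point.
obs_station_a = true_deformation + rng.normal(0.0, 0.3, size=5)
obs_station_b = true_deformation + rng.normal(0.0, 0.3, size=5)

# Without the constraint, each station keeps its own (disagreeing) estimate.
est_a, est_b = obs_station_a.mean(), obs_station_b.mean()

# With the closed-loop constraint, one least-squares problem covers both stations.
A = np.ones((obs_station_a.size + obs_station_b.size, 1))
y = np.concatenate([obs_station_a, obs_station_b])
est_closed_loop, *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"station A alone: {est_a:.3f}, station B alone: {est_b:.3f}")
print(f"closed-loop estimate: {est_closed_loop[0]:.3f} (truth {true_deformation})")
```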

ARVisualizer : A Markerless Augmented Reality Approach for Indoor Building Information Visualization System

  • Kim, Albert Hee-Kwan; Cho, Hyeon-Dal
    • Spatial Information Research / v.16 no.4 / pp.455-465 / 2008
  • Augmented reality (AR) has tremendous potential for visualizing geospatial information, especially on actual physical scenes. However, to utilize augmented reality in mobile systems, much research has relied on GPS or ubiquitous marker-based approaches. Although several papers address vision-based markerless tracking, previous approaches provide fairly good results only in largely controlled environments. Localization and tracking of the current position become a more complex problem in indoor environments. Many have proposed radio frequency (RF) based tracking and localization, but this causes deployment problems for large numbers of RF-based sensors and readers. In this paper, we present a novel markerless AR approach for an indoor (and possibly outdoor) navigation system using only the monoSLAM (Monocular Simultaneous Localization and Map building) algorithm, as part of our broader effort to develop a mobile seamless indoor/outdoor u-GIS system. The paper briefly explains the basic SLAM algorithm and then the implementation of our system.
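
The abstract relies on feature-based monocular tracking. The sketch below (Python with OpenCV; the calibration matrix K and the input frames are assumed, and this is only frame-to-frame relative pose, not the full monoSLAM filter described in the paper) illustrates the kind of markerless pose estimation such a system builds on.

```python
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate camera rotation R and unit-scale translation t between two frames."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None, None

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC rejects outlier matches before pose recovery.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```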

Morphological Hand-Gesture Recognition Algorithm (형태론적 손짓 인식 알고리즘)

  • Choi, Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.8 / pp.1725-1731 / 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction. This has motivated a very active research area concerned with computer-vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are the simplification of the algorithm and the reduction of processing time. Mathematical morphology, based on geometrical set theory, is well suited to this processing. The key idea of the algorithm proposed in this paper is to apply morphological shape decomposition. The primitive elements extracted from a hand gesture contain very important information on the directivity of the gesture. Based on this characteristic, we propose a morphological gesture recognition algorithm using feature vectors calculated from the lines connecting the center points of the main primitive element and the sub-primitive elements. Through experiments, we demonstrate the efficiency of the proposed algorithm. Coupling natural interactions such as hand gestures with an appropriately designed interface is a valuable and powerful component in building systems for TV navigation and video content browsing.
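
A minimal sketch of the directivity idea in this abstract (Python with OpenCV; the structuring-element size and the blob-labelling step are assumptions, not the paper's exact shape decomposition): open a binary hand mask into primitive blobs, then describe the gesture by the directions of the lines joining the main primitive's centre to the sub-primitives' centres.

```python
import cv2
import numpy as np

def gesture_direction_features(hand_mask):
    """hand_mask: uint8 binary image (255 = hand, 0 = background)."""
    # Morphological opening breaks the silhouette into primitive blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    primitives = cv2.morphologyEx(hand_mask, cv2.MORPH_OPEN, kernel)

    n, labels, stats, centroids = cv2.connectedComponentsWithStats(primitives)
    if n < 3:                      # need one main blob and at least one sub-blob
        return np.zeros(0)

    # Label 0 is background; the largest remaining blob is the main primitive.
    areas = stats[1:, cv2.CC_STAT_AREA]
    main = 1 + int(np.argmax(areas))
    main_centre = centroids[main]

    # Feature vector: angles of the lines from the main centre to each sub centre.
    angles = []
    for lbl in range(1, n):
        if lbl == main:
            continue
        dx, dy = centroids[lbl] - main_centre
        angles.append(np.arctan2(dy, dx))
    return np.sort(np.array(angles))
```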

Research on Deep Learning-Based Methods for Determining Negligence through Traffic Accident Video Analysis (교통사고 영상 분석을 통한 과실 판단을 위한 딥러닝 기반 방법 연구)

  • Seo-Young Lee; Yeon-Hwi You; Hyo-Gyeong Park; Byeong-Ju Park; Il-Young Moon
    • Journal of Advanced Navigation Technology / v.28 no.4 / pp.559-565 / 2024
  • Research on autonomous vehicles is being actively conducted. As autonomous vehicles emerge, there will be a transitional period in which traditional and autonomous vehicles coexist, potentially leading to a higher accident rate. Currently, when a traffic accident occurs, the fault ratio is determined according to the criteria set by the General Insurance Association of Korea. However, the time required to investigate the type of accident is substantial, and fault-ratio disputes are on the rise, with requests for reconsideration even after the fault ratio has been determined. To reduce these temporal and material costs, we propose a deep learning model that automatically determines fault ratios. In this study, we aimed to determine fault ratios from accident videos using an image classification model based on ResNet-18 and video action recognition using TSN. If this model were commercialized, it could significantly reduce the time required to determine fault ratios. Moreover, it provides an objective metric for fault ratios that can be offered to the parties involved, potentially alleviating fault-ratio disputes.
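
The classification part of this abstract maps directly onto a standard fine-tuning setup. The sketch below (PyTorch/torchvision; the number of fault-ratio classes, learning rate, and input size are assumptions, and the TSN branch is omitted) shows how a ResNet-18 backbone can be repurposed for fault-ratio classes.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FAULT_CLASSES = 10          # assumption: discretised fault-ratio bins, not from the paper

# Reuse a pretrained ResNet-18 and replace its final layer with a fault-ratio head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_FAULT_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    """frames: (B, 3, 224, 224) accident-video frames; labels: (B,) fault-ratio class ids."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```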

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min; Kim, Ig-Jae; Ahn, Sang-Chul; Ko, Han-Seok; Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, with a focus on enabling effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6-degrees-of-freedom motion information from a 3D input device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or to aiding a navigation device that controls the viewpoint of a user in a virtual reality setting. Since the stereo-vision-based 3D input device is wireless, it provides users with a more natural and efficient interface, effectively realizing a feeling of immersion.
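
A minimal sketch of the epipolar-geometry step mentioned in this abstract (Python with OpenCV; generic fundamental-matrix estimation, not the paper's full Motion Adaptive Weighted Unmatched Pixel Count pipeline): once the stereo geometry is known, correspondence searches such as unmatched-pixel counting can be restricted to epipolar lines.

```python
import cv2
import numpy as np

def epipolar_lines_for(points_left, points_right):
    """points_*: (N, 2) float32 arrays of corresponding pixel coordinates."""
    # Robustly estimate the fundamental matrix from the matched points.
    F, inlier_mask = cv2.findFundamentalMat(points_left, points_right,
                                            cv2.FM_RANSAC, 1.0, 0.99)
    # Each left-image point constrains its right-image match to the line l' = F x.
    lines_in_right = cv2.computeCorrespondEpilines(points_left.reshape(-1, 1, 2), 1, F)
    return F, lines_in_right.reshape(-1, 3), inlier_mask
```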

Design, Development and Testing of the Modular Unmanned Surface Vehicle Platform for Marine Waste Detection

  • Vasilj, Josip; Stancic, Ivo; Grujic, Tamara; Music, Josip
    • Journal of Multimedia Information System / v.4 no.4 / pp.195-204 / 2017
  • Mobile robots have been used for years as a valuable research and educational tool in the form of open-platform designs and do-it-yourself kits. The rapid development and cost reduction of unmanned aerial vehicles (UAVs) and ground-based mobile robots in recent years have allowed researchers to use them as affordable research platforms. Despite recent developments in ground and airborne robotics, only a few examples of unmanned surface vehicle (USV) platforms targeted at research purposes can be found. The aim of this paper is to present the development of an open-design USV drone with an integrated multi-level control hardware architecture. The proposed catamaran-type water surface drone enables direct control over a wireless radio link and separate development of algorithms for optimal propulsion control, navigation, and communication with the ground-based control station. The whole design is highly modular, so each component can be replaced or modified according to the desired task, payload, or environmental conditions. The developed USV is planned to be used as part of a system for detecting and identifying marine and lake waste: cameras mounted on the USV would record sea or lake surfaces, and the recorded video sequences and images would be processed by state-of-the-art computer vision and machine learning algorithms to identify and classify the waste.
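
A small, purely illustrative sketch related to the propulsion-control part of this abstract (Python; the controller is hypothetical and not from the paper): a catamaran-type USV with two fixed thrusters typically mixes a desired forward speed and turn rate into left/right thruster commands.

```python
def mix_differential_thrust(forward, turn, max_cmd=1.0):
    """forward, turn in [-1, 1]; returns (left, right) thruster commands."""
    left = forward + turn
    right = forward - turn
    # Scale down proportionally if either command exceeds the actuator limit.
    scale = max(1.0, abs(left) / max_cmd, abs(right) / max_cmd)
    return left / scale, right / scale

# Example: cruise ahead while turning gently to starboard.
print(mix_differential_thrust(0.8, 0.3))
```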

Development of A Haptic Interactive Virtual Exhibition Space (햅틱 상호작용을 제공하는 가상 전시공간 개발)

  • You, Yong-Hee; Cho, Yun-Hye; Choi, Geon-Suk; Sung, Mee-Young
    • Journal of KIISE: Computing Practices and Letters / v.13 no.6 / pp.412-416 / 2007
  • In this paper, we present a haptic virtual exhibition space that allows users to interact with 3D graphic objects not only through the sense of sight but also through the sense of touch. The haptic virtual exhibition space offers users in different places efficient ways to experience the exhibitions of a virtual musical museum using basic human senses of perception such as vision, audition, and touch. We apply different properties to different 3D graphic objects to make them feel realistic. We also provide haptic-device-based navigation, which frees users from switching between interfaces such as the keyboard and mouse. The haptic virtual museum is based on a client-server architecture, and clients are represented in the 3D space in the form of avatars. In this paper, we mainly discuss the design of the haptic virtual exhibition space in detail and, at the end, provide a performance analysis in comparison with similar applications such as QTVR and VRML.

Place Modeling and Recognition using Distribution of Scale Invariant Features (스케일 불변 특징들의 분포를 이용한 장소의 모델링 및 인식)

  • Hu, Yi; Shin, Bum-Joo; Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.13 no.4 / pp.51-58 / 2008
  • In this paper, we propose a place model based on the distribution of scale-invariant features, and a place recognition method that recognizes places by comparing the place models in a database with the features extracted from input data. The proposed method is based on the assumption that every place can be represented by a unique feature distribution that is distinguishable from others. It uses global information about each place, where one place is represented by one distribution model. The main contribution of the proposed method is therefore that the time cost grows linearly, rather than exponentially, with the number of places. For the performance evaluation, different numbers of frames and different numbers of features are used. Empirical results show that our approach achieves better space and time cost than other approaches. We expect the proposed method to be applicable to many ubiquitous systems such as robot navigation, vision systems for blind people, wearable computing, and so on.
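
A minimal sketch of the distribution-based place model described above (Python with OpenCV and NumPy; the bag-of-features quantisation and the L1 comparison are assumptions, since the paper's exact distribution model is not given here): each place is summarised by one descriptor histogram, so recognition cost grows only with the number of places.

```python
import cv2
import numpy as np

def place_histogram(image_gray, vocabulary):
    """vocabulary: (K, 128) cluster centres of SIFT descriptors learned offline (assumed)."""
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(image_gray, None)
    if descriptors is None:
        return np.zeros(len(vocabulary))
    # Assign each descriptor to its nearest visual word and normalise the counts.
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def recognize_place(query_hist, place_models):
    """place_models: dict of place name -> stored histogram; returns the closest place."""
    return min(place_models, key=lambda name: np.abs(place_models[name] - query_hist).sum())
```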

Research Trends and Case Study on Keypoint Recognition and Tracking for Augmented Reality in Mobile Devices (모바일 증강현실을 위한 특징점 인식, 추적 기술 및 사례 연구)

  • Choi, Heeseung; Ahn, Sang Chul; Kim, Ig-Jae
    • Journal of the HCI Society of Korea / v.10 no.2 / pp.45-55 / 2015
  • In recent years, keypoint recognition and tracking technologies have been considered crucial in many practical systems for markerless augmented reality. These technologies are widely studied in many research areas, including computer vision, robot navigation, and human-computer interaction. Moreover, due to the rapid growth of the mobile market for augmented reality applications, several effective keypoint-based matching and tracking methods have been introduced with mobile embedded systems in mind. In this paper, we therefore analyze recent research trends in keypoint-based recognition and tracking in terms of several core components: keypoint detection, description, matching, and tracking. We then present one of our own projects related to mobile augmented reality, a mobile tour guide system, based on real-time recognition and tracking of tour maps on mobile devices.
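
The detect-describe-match-estimate loop surveyed above can be illustrated with a planar target such as a tour map. The sketch below (Python with OpenCV; ORB is one of many detector/descriptor choices covered by the survey, and the thresholds are assumptions) locates a reference map image in a camera frame via a homography.

```python
import cv2
import numpy as np

def locate_planar_target(reference_gray, frame_gray, min_matches=15):
    orb = cv2.ORB_create(800)
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_frm, des_frm = orb.detectAndCompute(frame_gray, None)
    if des_ref is None or des_frm is None:
        return None

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_ref, des_frm)
    if len(matches) < min_matches:
        return None                              # target not visible or too little evidence

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H                                     # maps reference-image points into the frame
```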

Integration of Condensation and Mean-shift algorithms for real-time object tracking (실시간 객체 추적을 위한 Condensation 알고리즘과 Mean-shift 알고리즘의 결합)

  • Cho, Sang-Hyun; Kang, Hang-Bong
    • The KIPS Transactions: Part B / v.12B no.3 s.99 / pp.273-282 / 2005
  • Real-time object tracking is an important field in developing vision applications such as surveillance systems and vision-based navigation. The mean-shift algorithm and the Condensation algorithm are widely used in robust object tracking systems. Since the mean-shift algorithm is easy to implement and computationally efficient, it is widely used, especially in real-time tracking systems. One of its drawbacks is that it always converges to a local maximum, which may not be the global maximum; therefore, in a cluttered environment, the mean-shift algorithm does not perform well. On the other hand, because it uses multiple hypotheses, the Condensation algorithm is useful for tracking against a cluttered background, but it requires a complex object model and many hypotheses and therefore has high computational complexity, which makes it difficult to apply in real-time systems. In this paper, by combining the merits of the Condensation algorithm and the mean-shift algorithm, we propose a new model suitable for real-time tracking. Although it uses only a few hypotheses, the proposed method obtains high-likelihood hypotheses using the mean-shift algorithm. As a result, we obtain better results than either the Condensation algorithm or the mean-shift algorithm alone.
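
A minimal sketch of the combination described in this abstract (Python with OpenCV; the particle model, noise level, and likelihood score are simplifications, not the paper's exact formulation): a few Condensation-style hypotheses are propagated, each is locally refined by a mean-shift step on a colour back-projection, and the refined hypotheses are re-weighted and resampled.

```python
import cv2
import numpy as np

def track_step(back_projection, particles, box_size, rng, motion_std=8.0):
    """back_projection: likelihood image (e.g. hue-histogram back-projection).
    particles: (N, 2) array of hypothesised top-left box corners."""
    w, h = box_size
    img_h, img_w = back_projection.shape[:2]
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    refined, weights = [], []
    for x, y in particles + rng.normal(0.0, motion_std, particles.shape):
        x = int(np.clip(x, 0, img_w - w))
        y = int(np.clip(y, 0, img_h - h))
        # Local mean-shift refinement of this hypothesis.
        _, (wx, wy, _, _) = cv2.meanShift(back_projection, (x, y, w, h), term)
        refined.append((wx, wy))
        weights.append(float(back_projection[wy:wy + h, wx:wx + w].sum()))

    refined = np.asarray(refined, dtype=float)
    weights = np.asarray(weights) + 1e-9
    weights /= weights.sum()

    # Resample hypotheses for the next frame; report the most likely box now.
    idx = rng.choice(len(refined), size=len(refined), p=weights)
    best = refined[int(np.argmax(weights))]
    return refined[idx], (int(best[0]), int(best[1]), w, h)
```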