• Title/Summary/Keyword: Monocular Camera

One Idea on a Three Dimensional Measuring System Using Light Intensity Modulation

  • Fujimoto Ikumatsu;Cho In-Ho;Pak Jeong-Hyeon;Pyoun Young-Sik
    • International Journal of Control, Automation, and Systems, v.3 no.1, pp.130-136, 2005
  • A new optical digitizing system for determining the position of a cursor in three dimensions (3D) and an experimental device for its measurement are presented. A semi-passive system using light intensity modulation, a technique well known in radar ranging, is employed to overcome the precision limitations imposed by background light. The system consists of a charge-coupled device camera placed before a rotating mirror and a light-emitting diode whose intensity is modulated. Using a Fresnel pattern for light modulation, it is verified that a substantial improvement in the signal-to-noise ratio against background noise is realized and that sub-pixel resolution can be achieved, opening the way to high-precision 3D digitized measurement. We further show, through a numerical experiment, that 3D position measurement with a monocular optical system can be realized if a linear-period modulated waveform is adopted as the modulating signal.
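The ranging principle the abstract describes, modulating the light source and correlating against a reference so that uncorrelated background light is rejected, can be sketched generically. This is not the paper's implementation; the chirp parameters, sample rate, delay, and noise level below are all assumed for illustration:

```python
import numpy as np

# Toy sketch: correlate a linear-frequency-modulated (chirp) reference
# against a noisy, delayed echo.  The sharp correlation peak of the chirp
# is what suppresses background noise and locates the delay precisely.
rng = np.random.default_rng(0)
fs = 1e6                                   # sample rate [Hz] (assumed)
t = np.arange(0, 1e-3, 1 / fs)             # 1 ms burst, 1000 samples
chirp = np.sin(2 * np.pi * (1e4 + 5e7 * t) * t)   # assumed LFM sweep

true_delay = 120                           # samples (assumed)
echo = np.roll(chirp, true_delay) + 0.5 * rng.standard_normal(t.size)

corr = np.correlate(echo, chirp, mode="full")
est_delay = corr.argmax() - (t.size - 1)   # lag of the correlation peak
```

Despite the noise standard deviation being half the signal amplitude, the correlation peak recovers the delay to within a sample or two, which is the background-rejection effect the abstract reports.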

Image-based Visual Servoing Through Range and Feature Point Uncertainty Estimation of a Target for a Manipulator (목표물의 거리 및 특징점 불확실성 추정을 통한 매니퓰레이터의 영상기반 비주얼 서보잉)

  • Lee, Sanghyob;Jeong, Seongchan;Hong, Young-Dae;Chwa, Dongkyoung
    • Journal of Institute of Control, Robotics and Systems, v.22 no.6, pp.403-410, 2016
  • This paper proposes a robust image-based visual servoing scheme using a nonlinear observer for a monocular eye-in-hand manipulator. The proposed control method is divided into a range estimation phase and a target-tracking phase. In the range estimation phase, the range from the camera to the target is estimated under a non-moving-target condition to resolve the uncertainty in the interaction matrix. Then, in the target-tracking phase, the feature point uncertainty caused by the unknown motion of the target is estimated, and the feature point errors converge sufficiently close to zero through compensation of this uncertainty.
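The interaction matrix whose uncertainty the paper addresses has a standard textbook form for point features: its entries depend on the unknown range Z, which is exactly why a monocular system needs a range estimation phase. A minimal sketch of the classical IBVS control law (not the paper's observer-based scheme; the gain and feature values are assumed):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Textbook interaction matrix of a normalized image point (x, y)
    at depth Z, relating feature velocity to the 6-DOF camera twist."""
    return np.array([
        [-1 / Z,    0.0, x / Z,      x * y, -(1 + x**2),  y],
        [   0.0, -1 / Z, y / Z, 1 + y**2,       -x * y, -x],
    ])

def ibvs_twist(features, targets, Z, gain=0.5):
    """Classical IBVS law v = -gain * pinv(L) @ e (assumed gain)."""
    e = (features - targets).reshape(-1)   # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in features])
    return -gain * np.linalg.pinv(L) @ e   # 6-DOF camera velocity command

feats = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
goal = feats * 0.5
v = ibvs_twist(feats, goal, Z=1.0)         # Z assumed known here
```

Every row of `L` contains 1/Z, so a wrong range estimate distorts the commanded camera motion, which motivates estimating Z before tracking begins.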

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society, v.14 no.2, pp.87-93, 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is therefore used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
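The fusion idea, treating the CNN's noisy per-frame pose regressions as measurements of a particle filter whose motion model smooths the trajectory, can be sketched in one dimension (the paper works in full 6-DOF; the motion model, noise levels, and particle count here are all assumed):

```python
import math
import random

random.seed(0)

def particle_filter(measurements, n=500, motion_std=0.2, meas_std=1.0):
    """Smooth a sequence of noisy 1-D position 'relocalizations'."""
    particles = [measurements[0] + random.gauss(0, meas_std) for _ in range(n)]
    estimates = []
    for z in measurements:
        # predict: random-walk motion model (assumed)
        particles = [p + random.gauss(0, motion_std) for p in particles]
        # weight each particle by the Gaussian likelihood of the CNN output
        w = [math.exp(-0.5 * ((p - z) / meas_std) ** 2) for p in particles]
        s = sum(w)
        w = [wi / s for wi in w]
        # multinomial resampling
        particles = random.choices(particles, weights=w, k=n)
        estimates.append(sum(particles) / n)
    return estimates

true_path = [0.1 * k for k in range(50)]
cnn_out = [x + random.gauss(0, 1.0) for x in true_path]  # noisy per-frame poses
smoothed = particle_filter(cnn_out)
```

Because successive measurements are correlated through the motion model, the filtered trajectory has a smaller error than the raw per-frame regressions, which is the improvement the abstract describes.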

Sidewalk Gaseous Pollutants Estimation Through UAV Video-based Model

  • Omar, Wael;Lee, Impyeong
    • Korean Journal of Remote Sensing, v.38 no.1, pp.1-20, 2022
  • As unmanned aerial vehicle (UAV) technology has grown in popularity over the years, it has been introduced for air quality monitoring. It can easily be used to estimate sidewalk emission concentrations by calculating road traffic emission factors for different vehicle types. These calculations require a simulation of the spread of pollutants from one or more given sources. For this purpose, a Gaussian plume dispersion model was developed based on the US EPA Motor Vehicle Emissions Simulator (MOVES), which provides an accurate estimate of fuel consumption and pollutant emissions from vehicles under a wide range of user-defined conditions. This paper describes a methodology for estimating the emission concentration on the sidewalk produced by different types of vehicles. The line source model accounts for vehicle parameters, wind speed and direction, and pollutant concentration, measured using a UAV equipped with a monocular camera; all quantities were sampled over an hourly interval. A YOLOv5 deep learning model is developed for vehicle detection, Deep SORT (Simple Online and Realtime Tracking) is used for vehicle tracking, a homography transformation matrix localizes each vehicle and yields its speed and acceleration, and finally a Gaussian plume dispersion model estimates the CO and NOx concentrations at a sidewalk point. The results demonstrate that the estimated pollutant values give a fast and reasonable indication for any near-road receptor point using an inexpensive UAV, without installing air monitoring stations along the road.
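The Gaussian plume model named in the abstract has a standard textbook form. The sketch below is generic, not the paper's MOVES-calibrated model: the source strength, wind speed, receptor geometry, and the linear dispersion coefficients are all assumed for illustration:

```python
import math

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Steady-state Gaussian plume concentration [g/m^3] at (x, y, z)
    downwind of a point source of strength Q [g/s] at height H [m] in
    wind speed u [m/s].  sigma_y = a*x and sigma_z = b*x are a crude
    dispersion assumption (real models use stability-class curves)."""
    sy, sz = a * x, b * x
    lateral = math.exp(-y**2 / (2 * sy**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sz**2))
                + math.exp(-(z + H)**2 / (2 * sz**2)))  # ground reflection term
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# e.g. a receptor 20 m downwind of a tailpipe-height source,
# 3 m off the plume axis, 1.5 m above ground (all values assumed)
c = plume_concentration(Q=0.01, u=2.0, x=20.0, y=3.0, z=1.5, H=0.5)
```

In the line-source setting of the paper, each tracked vehicle contributes such a term, with its speed and acceleration (from the homography-based localization) feeding the emission rate Q.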

Augmented Reality System using Planar Natural Feature Detection and Its Tracking (동일 평면상의 자연 특징점 검출 및 추적을 이용한 증강현실 시스템)

  • Lee, A-Hyun;Lee, Jae-Young;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.4, pp.49-58, 2011
  • Typically, vision-based AR systems operate on the basis of prior knowledge of the environment, such as a square marker. A traditional marker-based AR system has the limitation that the marker must remain within the sensing range. Therefore, there have been considerable research efforts on techniques known as real-time camera tracking, in which the system attempts to add unknown 3D features to its feature map; these then provide registration even when the reference map is out of the sensing range. In this paper, we describe a real-time camera tracking framework specifically designed to track a monocular camera in a desktop workspace. The basic idea of the proposed scheme is that real-time camera tracking is achieved on the basis of a plane tracking algorithm. We also suggest a method for re-detecting features to maintain registration of virtual objects, which copes with the problem of features that can no longer be tracked once they leave the sensing range. The main advantages of the proposed system are its low computational cost and its convenience, and it is applicable to augmented reality systems in mobile computing environments.
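Plane tracking of the kind the abstract relies on is usually built around a homography between views of coplanar feature points. As a generic illustration (not the paper's algorithm; the point set and ground-truth homography are made up), the homography can be recovered from four correspondences with the direct linear transform:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 coplanar
    point correspondences) via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the DLT system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = flattened homography
    return H / H[2, 2]              # fix the arbitrary scale

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [0.001, 0.0, 1.0]])
pts = np.hstack([src, np.ones((4, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]       # project to inhomogeneous coordinates
H = homography_dlt(src, dst)        # recovers H_true up to numerical error
```

Tracking the plane frame-to-frame this way gives the camera registration without a printed marker, which is the point of the natural-feature approach.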

ARVisualizer: A Markerless Augmented Reality Approach for Indoor Building Information Visualization System

  • Kim, Albert Hee-Kwan;Cho, Hyeon-Dal
    • Spatial Information Research, v.16 no.4, pp.455-465, 2008
  • Augmented reality (AR) has tremendous potential for visualizing geospatial information, especially on actual physical scenes. However, to utilize augmented reality in a mobile system, much research has relied on GPS- or ubiquitous-marker-based approaches. Although several papers describe vision-based markerless tracking, previous approaches provide fairly good results only in largely controlled environments. Localization and tracking of the current position becomes a more complex problem in indoor environments. Many have proposed radio frequency (RF) based tracking and localization, but this raises the problem of deploying large numbers of RF sensors and readers. In this paper, we present a novel markerless AR approach for an indoor (and possibly outdoor) navigation system using only the monoSLAM (Monocular Simultaneous Localization and Map building) algorithm, as part of our larger effort to develop a mobile seamless indoor/outdoor u-GIS system. The paper briefly explains the basic SLAM algorithm and then the implementation of our system.

Dynamic Human Pose Tracking using Motion-based Search (모션 기반의 검색을 사용한 동적인 사람 자세 추적)

  • Jung, Do-Joon;Yoon, Jeong-Oh
    • Journal of the Korea Academia-Industrial cooperation Society, v.11 no.7, pp.2579-2585, 2010
  • This paper proposes a dynamic human pose tracking method that uses a motion-based search strategy on an image sequence obtained from a monocular camera. The proposed method compares image features between 3D human model projections and real input images, repeats the process until predefined criteria are met, and then reports the 3D human pose that generates the best match. When searching for the configuration that best matches the input image, the search region is determined from the estimated 2D image motion, and the body configuration is then sampled randomly within that region. Because the 2D image motion is highly constrained, this significantly reduces the dimensionality of the feasible space. The strategy has two advantages: the motion estimation leads to an efficient allocation of the search space, and the pose estimation method is adaptive to various kinds of motion.
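The search strategy reduces to a simple loop: predict the pose from the estimated image motion, then sample candidate configurations at random inside a window around that prediction, keeping the best match. A hedged toy sketch (the cost function, pose parameterization, window radius, and sample count are all stand-ins, not the paper's):

```python
import random

random.seed(1)

def match_cost(pose, observed):
    """Stand-in for comparing a 3D model projection against the image."""
    return sum((p - o) ** 2 for p, o in zip(pose, observed))

def motion_based_search(prev_pose, motion, observed, radius=0.5, samples=2000):
    """Random search for the best pose, restricted to a window centered
    on the motion-predicted configuration."""
    center = [p + m for p, m in zip(prev_pose, motion)]   # motion prediction
    best, best_cost = center, match_cost(center, observed)
    for _ in range(samples):
        cand = [c + random.uniform(-radius, radius) for c in center]
        cost = match_cost(cand, observed)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

prev = [0.0, 0.0, 0.0]        # toy 3-parameter "pose"
motion = [0.3, -0.1, 0.0]     # estimated 2D image motion (assumed given)
truth = [0.35, -0.05, 0.1]
est = motion_based_search(prev, motion, truth)
```

Centering the window on the motion prediction is what shrinks the feasible space: the random samples only have to cover the residual around the prediction, not the whole configuration space.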

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing, v.39 no.1, pp.1-21, 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single image that carries information along three axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving. Much work has been done on computing depth maps. We reviewed the status of depth map estimation across different techniques from several papers, study areas, and models applied over the last 20 years, surveying depth-mapping techniques based on both traditional methods and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth-mapping techniques and recent deep-learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics, and also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised approaches. We also elaborate on the challenges of the different methods and conclude with ideas for future research in depth map estimation.
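The relation between a depth map and the xyz coordinates the abstract mentions is the pinhole back-projection: the stored value is z, and x, y follow from the pixel position and the camera intrinsics. A minimal sketch with assumed intrinsics (focal length f in pixels, principal point (cx, cy)):

```python
def backproject(depth, f, cx, cy):
    """Turn a 2-D grid of metric depths into a list of (x, y, z) points
    using the pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            x = (u - cx) * z / f    # similar triangles in the pinhole model
            y = (v - cy) * z / f
            points.append((x, y, z))
    return points

depth = [[2.0, 2.0], [2.0, 2.0]]    # toy 2x2 depth map, 2 m everywhere
pts = backproject(depth, f=500.0, cx=0.5, cy=0.5)
```

This is why a depth map is "worth" a 3D point cloud: given the intrinsics, the two representations are interchangeable.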

Take-off and landing assistance system for efficient operation of compact drone CCTV in remote locations (원격지의 초소형 드론 CCTV의 효율적인 운영을 위한 이착륙 보조 시스템)

  • Byoung-Kug Kim
    • Journal of Advanced Navigation Technology, v.27 no.3, pp.287-292, 2023
  • With a fixed CCTV camera, shadow areas occur even when the visible range is maximized using the pan-tilt and zoom functions. A representative solution is to deploy multiple fixed CCTV cameras, but this requires a large amount of additional equipment (e.g., wires, facilities, monitors) in proportion to the number of cameras. Another solution is to use camera-equipped drones. However, a drone's operation time is quite short; to extend coverage, multiple drones can be flown one at a time, in which case drones that need to recharge their batteries return to a ready state at the drone port for the next operation. In this paper, we propose a system for precise positioning and stable landing on the drone port using a small drone equipped with a fixed, forward-facing monocular camera. We implement and operate the proposed system and verify its feasibility.
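The visual alignment step in such landing assistance typically maps the detected port's pixel offset from the image center to a proportional steering correction. A hedged sketch of that idea only (the detection itself, the field-of-view values, and the gain are all assumed, not taken from the paper):

```python
def landing_correction(port_px, image_size, fov_deg=(60.0, 45.0), gain=0.5):
    """Turn the detected port center (u, v) in pixels into proportional
    horizontal/vertical correction commands in degrees."""
    w, h = image_size
    du = (port_px[0] - w / 2) / (w / 2)   # normalized offset in [-1, 1]
    dv = (port_px[1] - h / 2) / (h / 2)
    # proportional command toward the port center
    return (gain * du * fov_deg[0] / 2, gain * dv * fov_deg[1] / 2)

# port detected 80 px right of center in a 640x480 image
cmd = landing_correction(port_px=(400, 240), image_size=(640, 480))
```

Driving this correction to zero centers the port in the view, after which the drone can descend onto it.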

The Obstacle Size Prediction Method Based on YOLO and IR Sensor for Avoiding Obstacle Collision of Small UAVs (소형 UAV의 장애물 충돌 회피를 위한 YOLO 및 IR 센서 기반 장애물 크기 예측 방법)

  • Uicheon Lee;Jongwon Lee;Euijin Choi;Seonah Lee
    • Journal of Aerospace System Engineering, v.17 no.6, pp.16-26, 2023
  • With the growing demand for unmanned aerial vehicles (UAVs), various collision avoidance methods have been proposed, mainly using LiDAR and stereo cameras. However, it is difficult to apply these sensors to small UAVs due to their weight or a lack of space. Recently proposed methods use a combination of object recognition models and distance sensors, but they lack information on obstacle size. This makes distance determination and obstacle coordinate estimation complicated in early-stage collision avoidance. We propose a method for estimating obstacle size using a monocular camera with YOLO and an infrared sensor. Our experimental results confirmed an accuracy of 86.39% within a distance of 40 cm. In addition, the proposed method was applied to a small UAV to confirm that obstacle collisions could be avoided.
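The size-estimation geometry behind combining a bounding box with a range reading is the similar-triangles relation of the pinhole camera. A generic sketch (the focal length and measurement values are assumed, not the paper's calibration):

```python
def obstacle_width(bbox_px, distance_cm, focal_px):
    """Metric width [cm] of an obstacle from its YOLO bounding-box width
    [px] and the IR-measured distance [cm], via similar triangles:
    width / distance = bbox_px / focal_px."""
    return bbox_px * distance_cm / focal_px

# e.g. a 200 px wide detection at the paper's 40 cm range,
# with an assumed focal length of 800 px
w = obstacle_width(bbox_px=200, distance_cm=40.0, focal_px=800.0)
print(w)   # 10.0 (cm)
```

Neither sensor alone suffices: the camera gives angular size only, and the IR sensor gives range only; the product of the two recovers the metric size needed to plan an avoidance maneuver.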