• Title/Summary/Keyword: 3D image sensor


Gimbal System Control for Drone for 3D Image (입체영상 촬영을 위한 드론용 짐벌시스템 제어)

  • Kim, Min;Byun, Gi-Sig;Kim, Gwan-Hyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.11 / pp.2107-2112 / 2016
  • This paper develops a gimbal control stabilizer for drones to obtain clean 3D images in the shaking, vibrating environment of a drone system. The stabilizer consists of a mount that holds the camera modules and an IMU (Inertial Measurement Unit) sensor module that tracks exact angles, so that external vibrations can be blocked from the camera modules. It is difficult for the camera modules to capture clean images because of the irregular movements and various vibrations produced by a flying drone. Moreover, a general PID controller used to control rolling, pitching, and yawing often needs its PID parameters readjusted to cope with vibrations at various frequencies. Therefore, this paper designs the gimbal control stabilizer and applies an intelligent PID controller to obtain clean images and to mitigate the irregular movement and vibration problems described above.
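The intelligent PID controller above builds on a conventional PID loop per axis. A minimal discrete PID sketch for one gimbal axis, with illustrative gains and a toy first-order plant that are not taken from the paper, might look like:

```python
class PID:
    """Minimal discrete PID controller for one gimbal axis (roll, pitch, or yaw).

    The gains kp, ki, kd and time step dt are illustrative placeholders,
    not values from the paper.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        # Standard PID law: u = kp*e + ki*integral(e) + kd*de/dt
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive a simulated gimbal axis (simple integrator plant) from 10 deg toward 0 deg.
angle = 10.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
for _ in range(1000):
    angle += pid.update(0.0, angle) * 0.01
print(angle)  # settles near the 0-degree setpoint
```

An "intelligent" PID in the paper's sense would additionally retune kp, ki, kd online as the vibration spectrum changes, rather than keeping them fixed as here.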

Structural Damage Localization for Visual Inspection Using Unmanned Aerial Vehicle with Building Information Modeling Information (UAV와 BIM 정보를 활용한 시설물 외관 손상의 위치 측정 방법)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM / v.13 no.4 / pp.64-73 / 2023
  • This study introduces a method of estimating the 3D coordinates of structural damage from visual-inspection detection results given in 2D image coordinates, using UAV sensing data and the 3D shape information of a BIM model. Because the estimation takes place in a virtual space built on the BIM model, the structural member to which an estimated location belongs can be identified immediately. Unlike conventional structural damage localization methods that require 3D scanning or additional sensor attachment, the proposed method can be applied locally and rapidly. Measurement accuracy was evaluated as the distance between positions measured by a TLS (Terrestrial Laser Scanner) and positions estimated by the proposed method, which indicates the applicability of this study and directions for future research.
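Mapping a 2D detection into 3D against a BIM model amounts to casting a camera ray through the detected pixel and intersecting it with the member's surface. A sketch under simplifying assumptions (a planar member, a calibrated pinhole camera; all names and values are illustrative, not the paper's data):

```python
import numpy as np

def pixel_to_world(uv, K, R, t, plane_point, plane_normal):
    """Cast a ray through pixel `uv` of a calibrated camera (intrinsics K,
    world-frame pose R, t) and intersect it with a planar surface standing
    in for a BIM member. Illustrative sketch, not the paper's algorithm."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    ray_world = R @ ray_cam                      # ray direction in world frame
    n = np.asarray(plane_normal, float)
    p0 = np.asarray(plane_point, float)
    s = n @ (p0 - t) / (n @ ray_world)           # ray-plane intersection scale
    return t + s * ray_world                     # 3D point on the member

# Example: camera at the origin looking along +z, a wall (member) at z = 10 m.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
point = pixel_to_world((320, 240), K, np.eye(3), np.zeros(3), [0, 0, 10], [0, 0, 1])
```

In practice the UAV's pose (R, t) comes from its sensing data, and the plane from the BIM member's geometry, which is what lets the estimate be attributed to a specific member.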

Reconstruction of the Lost Hair Depth for 3D Human Actor Modeling (3차원 배우 모델링을 위한 깊이 영상의 손실된 머리카락 영역 복원)

  • Cho, Ji-Ho;Chang, In-Yeop;Lee, Kwan-H.
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.1-9 / 2007
  • In this paper, we propose a technique for reconstructing the lost hair region for 3D human actor modeling. An active depth sensor system can capture both color and geometry information of objects simultaneously in real time. However, it cannot acquire regions whose surfaces are shiny or dark. Therefore, to obtain a natural 3D human model, the lost regions in the depth image, especially the hair region, should be recovered. The recovery uses both the color and depth images. We first identify the hair region in the color image. After the boundary of the hair region is estimated, its interior is filled using an interpolation technique and a closing operation. A 3D mesh model is then generated through a series of operations including adaptive sampling, triangulation, mesh smoothing, and texture mapping. The proposed method generates the recovered 3D mesh stream automatically. The final 3D human model allows view interaction or haptic interaction in a realistic broadcasting system.
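The interpolation step above can be approximated by iteratively averaging valid neighbours into the lost pixels inside the hair mask. This is a simple stand-in for the paper's interpolation-and-closing pipeline, not the authors' implementation:

```python
import numpy as np

def fill_lost_depth(depth, mask, iters=50):
    """Fill zero (lost) depth values inside a hair mask by iteratively
    averaging valid 4-neighbours. Hedged sketch: the paper combines
    interpolation with a morphological closing; this shows only the
    hole-filling idea on a depth array."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        lost = (d == 0) & mask
        if not lost.any():
            break
        padded = np.pad(d, 1)                      # zero padding at the border
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neigh > 0                          # only average known depths
        sums = (neigh * valid).sum(axis=0)
        counts = valid.sum(axis=0)
        fill = lost & (counts > 0)
        d[fill] = sums[fill] / counts[fill]
    return d

# A 3x3 patch with one lost pixel surrounded by depth 5 is filled with 5.
depth = np.full((3, 3), 5.0)
depth[1, 1] = 0.0
filled = fill_lost_depth(depth, np.ones((3, 3), bool))
```

The mask here would come from the hair region detected in the color image.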

The Fusion of LiDAR Data and High Resolution Image for Precise Monitoring in Urban Areas (도심의 정밀 모니터링을 위한 LiDAR 자료와 고해상영상의 융합)

  • 강준묵;강영미;이형석
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.04a / pp.383-388 / 2004
  • Fusion of different kinds of sensors combines data obtained independently by their respective technologies and is an important technology for constructing 3D spatial information. In particular, diverse information can be produced by fusing LiDAR with mobile scanning systems and digital maps, or LiDAR data with high resolution imagery. This study generates a combined DEM and a digital ortho image by fusing LiDAR data with high resolution imagery, and uses them to precisely monitor topography, buildings, trees, and other features in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.

Position Recognition and Indoor Autonomous Flight of a Small Quadcopter Using Distributed Image Matching (분산영상 매칭을 이용한 소형 쿼드콥터의 실내 비행 위치인식과 자율비행)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.23 no.2_2 / pp.255-261 / 2020
  • We consider the problem of autonomously flying a quadcopter in indoor environments. Indoor navigation poses two major issues: first, real-time recognition of the markers captured by the camera; second, combining the distributed images to determine the position and orientation of the quadcopter. We autonomously fly a miniature RC quadcopter in small known environments using an on-board camera as the only sensor. We use an algorithm that combines data-driven image classification with image-combination techniques on the captured images to achieve real-time 3D localization and navigation.

Face Detection Using Adaboost and Template Matching of Depth Map based Block Rank Patterns (Adaboost와 깊이 맵 기반의 블록 순위 패턴의 템플릿 매칭을 이용한 얼굴검출)

  • Kim, Young-Gon;Park, Rae-Hong;Mun, Seong-Su
    • Journal of Broadcast Engineering / v.17 no.3 / pp.437-446 / 2012
  • Face detection algorithms using two-dimensional (2D) intensity or color images have been studied for decades. Recently, with the development of low-cost range sensors, three-dimensional (3D) information (i.e., a depth image representing the distance between the camera and objects) can easily be used to reliably extract facial features, and most people share a similar 3D facial structure. This paper proposes a face detection method using intensity and depth images. First, the Adaboost algorithm applied to the intensity image classifies face and nonface candidate regions. Each candidate region is divided into 5×5 blocks, and the depth values are averaged within each block. A 5×5 block rank pattern is then constructed by sorting the block averages of the depth values. Finally, candidate regions are classified as face or nonface by matching the constructed depth-map-based block rank pattern against a template pattern generated from a training data set; the 5×5 template block rank pattern is constructed in advance by averaging block ranks over the training set. The proposed algorithm is tested on real images obtained with a Kinect range sensor. Experimental results show that it effectively eliminates most false positives while preserving true positives.
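The block rank pattern described above is straightforward to compute. A sketch of the descriptor (not the authors' code; block indexing and rank convention are assumptions):

```python
import numpy as np

def block_rank_pattern(depth, grid=5):
    """Divide a face-candidate depth map into grid x grid blocks, average
    the depth in each block, and return the rank of each block average
    (0 = smallest mean depth). Sketch of the descriptor in the paper."""
    h, w = depth.shape
    means = np.array([[depth[i * h // grid:(i + 1) * h // grid,
                             j * w // grid:(j + 1) * w // grid].mean()
                       for j in range(grid)] for i in range(grid)])
    # argsort twice converts values to their ranks
    ranks = means.ravel().argsort().argsort()
    return ranks.reshape(grid, grid)

# A depth map whose 2x2 blocks increase monotonically yields ranks 0..24 in order.
depth = np.arange(25.0).reshape(5, 5).repeat(2, axis=0).repeat(2, axis=1)
pattern = block_rank_pattern(depth)
```

Matching then reduces to comparing a candidate's rank pattern against the template pattern, for example by counting agreeing blocks, which makes the descriptor invariant to absolute depth scale.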

Motion Capture System using Integrated Pose Sensors (융합센서 기반의 모션캡처 시스템)

  • Kim, Byung-Yul;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.65-74 / 2010
  • To solve problems of traditional optical motion capture systems, such as interference among multiple patches and the complexity of sensor and patch placement, this paper proposes a new motion capture system composed of a single camera and multiple motion sensors. Each motion sensor consists of an acceleration sensor, which detects the motion of a patched body part, and a gyro sensor, which measures the orientation (roll, pitch, and yaw) of that motion. Although the image provides only the 2D positions of the patches, the orientation information acquired by the motion sensors can recover the 3D pose of the patches using simple equations. Since the proposed system uses the minimum number of sensors needed to detect the relative pose of a patch, it is easy to install on a moving body and can be used economically for various applications. The performance and advantages of the proposed system have been demonstrated experimentally.
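Turning the gyro's roll, pitch, and yaw into a 3D patch orientation is a standard Euler-angle composition. A sketch assuming a Z-Y-X (yaw-pitch-roll) convention, which is an assumption rather than the paper's stated convention:

```python
import math

def rotation_matrix(roll, pitch, yaw):
    """Compose a Z-Y-X (yaw, then pitch, then roll) rotation matrix from
    gyro-derived Euler angles, one simple way to express a patch's 3D
    orientation. The angle convention is an assumption, not the paper's."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# A pure 90-degree yaw maps the x-axis onto the y-axis.
R = rotation_matrix(0.0, 0.0, math.pi / 2)
```

Combined with the patch's 2D image position, such a matrix is what lets the system recover the full 3D pose with "simple equations", as the abstract puts it.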

Real Time Distributed Parallel Processing to Visualize Noise Map with Big Sensor Data and GIS Data for Smart Cities (스마트시티의 빅 센서 데이터와 빅 GIS 데이터를 융합하여 실시간 온라인 소음지도로 시각화하기 위한 분산병렬처리 방법론)

  • Park, Jong-Won;Sim, Ye-Chan;Jung, Hae-Sun;Lee, Yong-Woo
    • Journal of Internet Computing and Services / v.19 no.4 / pp.1-6 / 2018
  • In smart cities, data from various kinds of sensors are collected and processed to provide smart services to citizens. Noise information services built on noise maps, using sensor data collected from ubiquitous sensor networks, are one example. This paper presents a method for generating three-dimensional (3D) noise maps for smart cities in real time. Building a noise map requires fusing heterogeneous data, including big geographic image data and massive sensor data. Producing such a 3D noise map in real time requires processing the stream data from the ubiquitous sensor networks and performing the fusion in real time, both of which are challenging. We developed our own real-time distributed and parallel processing methodology for this task and present it here, together with a real-time 3D noise map generation system built on it using open-source software; the system introduced in this paper uses Apache Storm. We evaluated the system using cloud computing and confirmed that it works properly, performs well, and can produce the 3D noise maps in real time. The performance evaluation results are also given in this paper.

Indoor Positioning System Based on Camera Sensor Network for Mobile Robot Localization in Indoor Environments (실내 환경에서의 이동로봇의 위치추정을 위한 카메라 센서 네트워크 기반의 실내 위치 확인 시스템)

  • Ji, Yonghoon;Yamashita, Atsushi;Asama, Hajime
    • Journal of Institute of Control, Robotics and Systems / v.22 no.11 / pp.952-959 / 2016
  • This paper proposes a novel indoor positioning system (IPS) that uses a calibrated camera sensor network and dense 3D map information. The IPS information is obtained by generating a bird's-eye image from multiple camera images; thus, the proposed IPS can provide accurate position information when objects (e.g., a mobile robot or pedestrians) are detected in multiple camera views. We evaluate the proposed IPS in a real environment with moving objects in a wireless camera sensor network. The results demonstrate that the proposed IPS provides accurate position information for moving objects, which can improve localization performance for mobile robot operation.
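Generating a bird's-eye view from a calibrated camera is commonly done by warping image points to the ground plane with a homography. A minimal sketch of that projection step (the homography here is an illustrative matrix, not calibration data from the paper):

```python
import numpy as np

def to_birds_eye(pt, H):
    """Map an image point into bird's-eye (ground-plane) coordinates with a
    ground-plane homography H. One common way to realize the bird's-eye
    fusion of calibrated camera views; a sketch, not the paper's code."""
    v = H @ np.array([pt[0], pt[1], 1.0])   # homogeneous coordinates
    return v[:2] / v[2]                     # back to 2D ground coordinates

# With a pure scaling homography, pixel (3, 4) lands at ground point (6, 8).
ground_pt = to_birds_eye((3, 4), np.diag([2.0, 2.0, 1.0]))
```

Detections mapped this way from several cameras land in one common ground frame, which is what allows the positions to be fused and compared across views.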

Depth Extraction of Integral Imaging Using Correlation (상관관계를 활용한 집적 영상의 깊이 추출 방법)

  • Kim, Youngjun;Cho, Ki-Ok;Kim, Cheolsu;Cho, Myungjin
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.7 / pp.1369-1375 / 2016
  • In this paper, we present a depth extraction method for integral imaging that uses the correlation between elemental images with a phase-only filter. Integral imaging is a passive three-dimensional (3D) imaging technique that records ray information of 3D objects through a lenslet array onto a 2D image sensor and displays 3D images through a similar lenslet array. The 2D images formed by the lenslet array, called elemental images, have different perspectives. Since correlation can be computed between elemental images, the depth information of 3D objects can be extracted from it. To obtain high correlation between elemental images effectively, we use a phase-only filter. Using this high correlation, corresponding pixels between elemental images can be found, so that depth information can be extracted by a computational reconstruction technique. To validate our method, we carry out optical experiments and compute the Peak Sidelobe Ratio (PSR) as a correlation metric.
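A phase-only filter correlation keeps only the phase of the reference spectrum, which sharpens the correlation peak that marks the shift (disparity) between elemental images. A generic FFT-based sketch of the technique, not the authors' implementation:

```python
import numpy as np

def pof_correlation(img, ref):
    """Correlate two elemental images with a phase-only filter (POF):
    the reference spectrum is normalized to unit magnitude so that only
    its phase contributes, yielding a sharp correlation peak at the
    relative shift between the images. Generic sketch of the technique."""
    F = np.fft.fft2(img)
    R = np.fft.fft2(ref)
    pof = np.conj(R) / (np.abs(R) + 1e-12)   # phase-only filter
    corr = np.fft.ifft2(F * pof)
    return np.abs(corr)

# The peak of the correlation surface gives the pixel shift between two
# elemental images; with the lenslet geometry, that disparity yields depth.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
img = np.roll(ref, 3, axis=1)                # simulate a 3-pixel disparity
corr = pof_correlation(img, ref)
```

The sharpness of this peak relative to its surroundings is what metrics such as the Peak Sidelobe Ratio quantify.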