• Title/Summary/Keyword: Sensor 3D data model


Virtual Target Overlay Technique by Matching 3D Satellite Image and Sensor Image (3차원 위성영상과 센서영상의 정합에 의한 가상표적 Overlay 기법)

  • Cha, Jeong-Hee;Jang, Hyo-Jong;Park, Yong-Woon;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions:PartD
    • /
    • v.11D no.6
    • /
    • pp.1259-1268
    • /
    • 2004
  • To organize training in a limited training area for actual combat, a realistic training simulation fed by various battle conditions is essential. In this paper, we propose a virtual target overlay technique which does not use a virtual image, but projects a virtual target onto a ground-based CCD image according to an appointed scenario for a realistic training simulation. In the proposed method, we create a realistic 3D model (for an instructor) by using a high-resolution Geographic Tag Image File Format (GeoTIFF) satellite image and Digital Terrain Elevation Data (DTED), and extract the road area from a given CCD image (for both an instructor and a trainee). Satellite images and ground-based sensor images differ greatly in observation position, resolution, and scale, which makes feature-based matching difficult. Hence, we propose a moving synchronization technique that projects the target onto the sensor image according to the moving path marked on the 3D satellite image, by applying the Thin-Plate Spline (TPS) interpolation function, an image warping function, to the two given sets of corresponding control points. For the experiments, we employed two Pentium 4 1.8GHz personal computer systems equipped with 512MB of RAM, and satellite and sensor images of the Daejeon area were utilized. The experimental results reveal the effectiveness of the proposed algorithm.
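The TPS warping mentioned in this abstract can be illustrated with a generic thin-plate spline fit between two sets of control points (kernel U(r) = r² log r). This is a minimal textbook sketch, not the authors' implementation; the function names are our own:

```python
import numpy as np

def tps_kernel(r):
    # Radial basis U(r) = r^2 log r, with U(0) defined as 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r == 0.0, 0.0, r * r * np.log(r))

def fit_tps(src, dst):
    """Solve for TPS weights w and affine part a mapping src -> dst (both (n, 2))."""
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), src])
    # Standard TPS linear system: [K P; P^T 0] [w; a] = [dst; 0]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)
    return params[:n], params[n:]

def tps_map(pts, src, w, a):
    """Warp arbitrary points (m, 2) with a fitted TPS."""
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return a[0] + pts @ a[1:] + tps_kernel(d) @ w
```

Because TPS interpolates, the warp reproduces the destination control points exactly, which is the property the paper relies on to pin the target path to matched points.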

Self-modeling of Cylindrical Objects based on a Generic Model for 3D Object Recognition (3 차원 물체 인식을 위한 보편적 지식기반 실린더형 물체 자가모델링 기법)

  • Baek, Kyeong-Keun;Park, Yeon-Chool;Park, Joon-Young;Lee, Suk-Han
    • Proceedings of the Korean HCI Society Conference (한국HCI학회 학술대회논문집)
    • /
    • 2008.02a
    • /
    • pp.210-214
    • /
    • 2008
  • It is practically impossible to model and store in advance, in a robot's database, all objects that exist in a real home environment. To resolve this problem, this paper proposes a new object modeling method for robot self-modeling, which can estimate a whole model's shape from partial surface data using a generic model. The procedure is applied to cylindrical objects such as cups, bottles, and cans, which are easily found in indoor environments. In detail, after separating a cylindrical object from a 3D image obtained from a 3D sensor, we first obtain the cylinder's initial principal axis using the point coordinates and normal vectors of the object's surface. Second, we iteratively compensate for errors in the principal axis. Finally, we model the whole cylindrical object using the cross-sectional principal axis and its radius. To show the feasibility of the algorithm, we implemented it and evaluated its accuracy.
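The initial axis estimate from surface normals can be sketched with a standard geometric fact the abstract implies: every side-surface normal of a cylinder is orthogonal to its axis, so the axis is the least-represented direction among the normals. A minimal sketch (not the paper's exact estimator):

```python
import numpy as np

def estimate_cylinder_axis(normals):
    """Estimate a cylinder's axis direction from surface normal vectors (n, 3).

    Side-surface normals are all perpendicular to the axis, so the axis is
    the eigenvector of N^T N with the smallest eigenvalue."""
    N = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cov = N.T @ N
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction with least normal energy
```

The sign of the returned direction is arbitrary, which is why the paper's subsequent iterative refinement of the axis is still needed on real, noisy sensor data.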


Geometric Regularization of Irregular Building Polygons: A Comparative Study

  • Sohn, Gun-Ho;Jwa, Yoon-Seok;Tao, Vincent;Cho, Woo-Sug
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.6_1
    • /
    • pp.545-555
    • /
    • 2007
  • 3D buildings are the most prominent features comprising an urban scene. A few megacities around the globe have been virtually reconstructed as photo-realistic 3D models, which are accessible to the public through state-of-the-art online mapping services. Many research efforts have been made to develop automatic reconstruction techniques for large-scale 3D building models from remotely sensed data. However, existing methods still produce irregular building polygons due to errors induced partly by uncalibrated sensor systems and scene complexity, and partly by sensor resolution inappropriate to the observed object scales. Thus, a geometric regularization technique is urgently required to rectify such irregular building polygons quickly captured from low-quality sensor data. This paper aims to develop a new method for regularizing noisy building outlines extracted from airborne LiDAR data, and to evaluate its performance in comparison with existing methods, including Douglas-Peucker polyline simplification, total least-squares adjustment, model hypothesis-verification, and rule-based rectification. Based on the Minimum Description Length (MDL) principle, a new objective function for regularizing geometric noise, Geometric Minimum Description Length (GMDL), is introduced to enhance the repetition of identical line directions and regular angle transitions, and to minimize the number of vertices used. After generating hypothetical regularized models, a global optimum of geometric regularity is achieved by verifying the entire solution space. A comparative evaluation of the proposed geometric regularizer is conducted using both simulated and real building vectors with various levels of noise. The results show that GMDL outperforms the selected existing algorithms at most noise levels.
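Of the baselines compared here, Douglas-Peucker simplification is simple enough to sketch directly: it keeps only vertices that deviate from the chord between the current endpoints by more than a tolerance eps. A compact recursive version (a standard formulation, not tied to this paper's code):

```python
def douglas_peucker(points, eps):
    """Simplify a polyline given as [(x, y), ...], keeping points farther
    than eps from the chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance of each interior point to the end chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__)
    if dists[i] <= eps:
        return [points[0], points[-1]]     # everything is close: drop interior
    i += 1                                 # convert to index into the full list
    left = douglas_peucker(points[: i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return left[:-1] + right               # merge, dropping the shared vertex
```

Note this criterion only prunes vertices; unlike the GMDL regularizer the paper proposes, it cannot enforce right angles or repeated line directions, which is exactly the gap the comparison highlights.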

Behavior Recognition Algorithm Using Skeleton Vector Information and RNN Learning (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.598-605
    • /
    • 2018
  • Behavior recognition is a technology that recognizes human behavior from data and can be used in applications such as detecting risky behavior in video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera imaging devices, multi-modal sensors, multi-view setups, or 3D equipment. When two-dimensional data were used, the recognition rate for behavior in three-dimensional space was low, and the other methods were impractical due to complicated equipment configurations and expensive additional hardware. In this paper, we propose a method for recognizing human behavior using only CCTV images, with RGB information alone and no additional depth equipment. First, a skeleton extraction algorithm is applied to extract the points of joints and body parts. We then transform these into vectors, including displacement vectors and relational vectors, and train an RNN model on the continuous vector data. Applying the learned model to various datasets and measuring behavior recognition accuracy, we verify that performance similar to that of existing algorithms using 3D information can be achieved with 2D information alone.
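The feature construction described (relational vectors plus displacement vectors per joint) can be sketched as follows. This is our reading of the abstract, with an assumed root joint at index 0; the exact vector definitions in the paper may differ:

```python
import numpy as np

def skeleton_features(frames):
    """Build per-frame feature vectors from 2D joint coordinates (T, J, 2):
    relational vectors (each joint relative to a root joint) concatenated
    with displacement vectors (joint motion between consecutive frames)."""
    frames = np.asarray(frames, dtype=float)            # (T, J, 2)
    root = frames[:, :1, :]                             # joint 0 as the root
    relational = frames - root                          # pose shape, (T, J, 2)
    displacement = np.diff(frames, axis=0, prepend=frames[:1])  # motion, (T, J, 2)
    feats = np.concatenate([relational, displacement], axis=2)  # (T, J, 4)
    return feats.reshape(frames.shape[0], -1)           # (T, 4*J) RNN input sequence
```

The resulting (T, 4·J) sequence is the kind of fixed-width, time-ordered input an RNN classifier expects; the RNN itself is omitted here.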

Development of 3D Petroglyph VR Contents based on Gesture Recognition (동작인식기반의 3D 암각화 VR 콘텐츠 구현)

  • Jung, Young-Kee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.1
    • /
    • pp.25-32
    • /
    • 2014
  • Petroglyphs are an essential part of the worldwide cultural heritage, since they play a key role in the comprehension of prehistoric communities that predate writing. Nowadays, 3D data are a critical component for permanently recording the form of important cultural heritage so that it may be passed down to future generations. Recent 3D scanning technologies allow the generation of very realistic 3D models that can be used in multimedia museum exhibitions to draw users into the 3D world. In this paper, we develop 3D petroglyph VR contents based on a novel gesture recognition method. The proposed method recognizes the movements of the user, captured with a 3D depth sensor, by comparing them with pre-defined movements. This paper also presents new approaches for recording 3D petroglyph data using 3D scanning technology as an accurate and non-destructive tool.
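Comparing a captured movement against pre-defined movement templates, as this abstract describes, is often done with dynamic time warping (DTW), which tolerates speed differences between the user and the template. The paper's exact matcher is not given; this is a generic 1D DTW sketch:

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1D movement sequences.

    Builds the classic (n+1) x (m+1) cumulative-cost table; cell (i, j)
    holds the cheapest alignment of the first i and j samples."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A gesture would then be recognized as the template with the smallest DTW distance to the captured sequence, typically with one sequence per tracked joint coordinate.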

Structural Damage Localization for Visual Inspection Using Unmanned Aerial Vehicle with Building Information Modeling Information (UAV와 BIM 정보를 활용한 시설물 외관 손상의 위치 측정 방법)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.13 no.4
    • /
    • pp.64-73
    • /
    • 2023
  • This study introduces a method of estimating the 3D coordinates of structural damage from visual inspection detection results provided in 2D image coordinates, using UAV sensing data and the 3D shape information of a BIM model. The estimation takes place in a virtual space and utilizes the BIM model, so it is possible to immediately identify which member of the structure the estimated location corresponds to. Unlike conventional structural damage localization methods that require 3D scanning or the attachment of additional sensors, this method can be applied locally and rapidly. Measurement accuracy was calculated from the distance between positions measured by a TLS (Terrestrial Laser Scanner) and positions estimated by the proposed method, which helps determine the applicability of this study and the direction of future research.
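Mapping a 2D damage detection to a 3D coordinate, as described here, amounts to casting a ray from the camera through the pixel and intersecting it with the BIM surface. A minimal sketch for a planar surface patch, assuming known camera intrinsics K and pose (R, t) from the UAV sensing data; this is the general geometry, not the paper's pipeline:

```python
import numpy as np

def pixel_to_world(pixel, K, R, t, plane_point, plane_normal):
    """Cast a ray from the camera through an image pixel and intersect it
    with a planar BIM surface, returning the 3D hit point (or None).

    K: 3x3 intrinsics; R: camera-to-world rotation; t: camera position."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction, camera frame
    ray_world = R @ ray_cam                             # direction, world frame
    denom = ray_world @ plane_normal
    if abs(denom) < 1e-9:
        return None                                     # ray parallel to surface
    s = ((plane_point - t) @ plane_normal) / denom      # ray parameter at the plane
    return t + s * ray_world
```

Because the hit point is computed against the BIM geometry itself, looking up which structural member contains it is a direct spatial query, which matches the immediacy claimed in the abstract.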

Steep Slope Management System integrated with Realtime Monitoring Information into 3D Web GIS (상시계측센서정보와 3차원 Web GIS를 융합한 급경사지관리시스템)

  • Chung, Dong Ki;Sung, Jae Ryeol;Lee, Dong Wook;Chang, Ki Tae;Lee, Jin Duk
    • Journal of Korean Society of Disaster and Security
    • /
    • v.6 no.3
    • /
    • pp.9-17
    • /
    • 2013
  • Geospatial data have recently come into use for building location-based services in various fields. These data used to be shown on 2D maps but can now be viewed on 3D maps thanks to the dramatic evolution of IT technology, improving efficiency and raising practicality by providing a more realistic visualization of the field. In addition, many previous GIS applications were provided in desktop environments, limiting access from remote sites and reducing approachability for less experienced users. The latest trend is to offer services in web-based environments, enabling efficient sharing of data among all users, both anonymous and specific internal users. In this study, therefore, real-time information from sensors installed on steep slopes is integrated with 3D geospatial information, and the system is developed in a web-based environment to improve usability and access. Three steps were taken to establish the system: first, a 3D GIS database and 3D terrain were built from high-resolution aerial photos and a DEM (Digital Elevation Model); second, a system architecture was proposed to integrate real-time sensor information with a 3D web-based GIS; third, the system was constructed for Gangwon Province as a test bed to verify its applicability.

Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.21-30
    • /
    • 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities built as infrastructure for autonomous cooperative driving. Features are extracted from point cloud data acquired by LiDAR and built into a sample dataset for recognizing the facilities. To build the dataset, eight sensor-only facilities with high-brightness reflector sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance from the acquired point cloud data, the DBSCAN method was applied to the points and a modified Otsu method to the reflected intensity, and a cylindrical projection was then applied to the extracted points. The 3D point coordinates, projected 2D coordinates, and reflection intensity were set as the features of each facility, and the dataset was built along with labels. To check the effectiveness of the facility dataset built from LiDAR data, a common CNN model was selected, trained, and tested, showing an accuracy of about 90% or more and confirming the possibility of facility recognition. Through continuous experiments, we will improve the feature extraction algorithm for building the proposed dataset, improve its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
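The cylindrical projection step can be sketched as unrolling a pole-like cluster onto an (angle, height) plane. This assumes the cluster's axis is aligned with z after segmentation, which the abstract does not state explicitly; the function name is ours:

```python
import numpy as np

def cylindrical_projection(points, axis_origin=None):
    """Unroll 3D points of a roughly cylindrical cluster (axis along z)
    onto a 2D (angle, height) plane, e.g. to flatten a reflector sheet
    wrapped around a pole-type facility. Returns the 2D coords and radii."""
    p = np.asarray(points, dtype=float)
    if axis_origin is not None:
        p = p - axis_origin
    theta = np.arctan2(p[:, 1], p[:, 0])   # azimuth around the z-axis
    z = p[:, 2]                            # height along the axis
    radius = np.hypot(p[:, 0], p[:, 1])    # distance from the axis
    return np.column_stack([theta, z]), radius
```

Pairing the unrolled (theta, z) image with the per-point reflected intensity yields the kind of flat 2D feature map a common CNN can consume, consistent with the training setup described above.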

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks provide unmanned ground vehicles (UGVs) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules, namely, multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects by using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and the objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the applied computer graphics and image processing algorithms in parallel.
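The ground segmentation stage can be illustrated with a common grid-based heuristic: points close to the lowest point of their (x, y) cell are labeled ground. The paper's actual segmentation method is not specified in the abstract, so this is only a stand-in sketch of the idea:

```python
import numpy as np

def segment_ground(points, cell=0.5, height_thresh=0.3):
    """Label ground points in a LiDAR cloud (n, 3) with a simple grid method:
    a point is ground if its height above the lowest point in its
    (x, y) grid cell is below height_thresh."""
    p = np.asarray(points, dtype=float)
    ij = np.floor(p[:, :2] / cell).astype(int)     # grid cell index per point
    keys = [tuple(k) for k in ij]
    min_z = {}                                     # lowest z seen in each cell
    for k, z in zip(keys, p[:, 2]):
        if k not in min_z or z < min_z[k]:
            min_z[k] = z
    return np.array([p[i, 2] - min_z[keys[i]] < height_thresh
                     for i in range(len(p))])
```

The non-ground points left over would then be grouped into individual obstacles, e.g. by the connected component labeling the abstract mentions; both stages parallelize naturally per cell, which suits the paper's GPU acceleration.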

Accurate Vehicle Positioning on a Numerical Map

  • Laneurit, Jean;Chapuis, Roland;Chausse, Frédéric
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.1
    • /
    • pp.15-31
    • /
    • 2005
  • Road safety is nowadays an important research field, and one of its principal topics is vehicle localization in the road network. This article presents a multi-sensor fusion approach able to locate a vehicle with decimeter precision. The information used in this method comes from the following sensors: a low-cost GPS, a digital camera, an odometer, and a steering angle sensor. Taking into account a complete model of the errors on GPS data (bias on position and non-white errors), together with the data provided by an original approach coupling a vision algorithm with a precise numerical map, allows us to achieve this precision.
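The core of such a fusion scheme is a predict/update cycle in which odometry propagates the pose and GPS corrects it. A deliberately simplified 1D Kalman cycle conveys the idea; the paper's filter is multi-dimensional and models GPS bias explicitly, which this sketch omits:

```python
def kalman_fuse(x, P, u, z, Q, R):
    """One predict/update cycle of a 1D Kalman filter.

    x, P: prior state estimate and variance
    u: odometry increment (prediction), with process noise variance Q
    z: GPS position reading, with measurement noise variance R"""
    # Predict with odometry
    x_pred = x + u
    P_pred = P + Q
    # Update with the GPS measurement
    K = P_pred / (P_pred + R)          # Kalman gain: trust ratio of GPS vs. prediction
    x_new = x_pred + K * (z - x_pred)  # correct toward the measurement
    P_new = (1 - K) * P_pred           # uncertainty shrinks after the update
    return x_new, P_new
```

The decimeter precision reported in the paper comes precisely from enriching this basic cycle: modeling the GPS bias and colored errors as extra states, and adding vision/map constraints as further measurements.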