• Title/Summary/Keyword: 3D Data Fusion


Silence Reporting for Cooperative Sensing in Cognitive Radio Networks

  • Kim, Do-Yun;Choi, Young-June;Choi, Jeung Won
    • International Journal of Internet, Broadcasting and Communication / v.10 no.3 / pp.59-64 / 2018
  • Cooperative spectrum sensing has been proposed to improve sensing performance in cognitive radio (CR) networks. However, cooperative sensing incurs additional overhead for reporting the local sensing results to the fusion center. In this paper, we propose a technique to reduce the data-transmission overhead of quantized data fusion for cooperative sensing in cognitive radio networks by omitting reports of the lowest quantized level in the local sensing results. If a CR node senses the lowest quantized level, it does not send its local sensing data in the corresponding sensing period. The fusion center can implicitly infer that a specific CR node sensed the lowest level when no report arrives from that node. The goal of the proposed sensing policy is to reduce the overhead of quantized data fusion schemes for cooperative sensing. Moreover, our scheme can be applied to any quantized data fusion scheme, because it deals only with the form of the quantized data report. The experimental results show that the proposed scheme improves performance in terms of reporting overhead.
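The silence-reporting policy above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the threshold values, the node energies, and the dictionary-based report format are all illustrative assumptions; only the core rule (a node observing the lowest level stays silent, and the fusion center treats a missing report as level 0) comes from the abstract.

```python
def quantize(energy, thresholds):
    """Map a sensed energy value to a quantization level 0..len(thresholds)."""
    level = 0
    for t in thresholds:
        if energy >= t:
            level += 1
    return level

def local_reports(energies, thresholds):
    """Each CR node reports its level only if it is above the lowest level."""
    reports = {}
    for node, e in enumerate(energies):
        lv = quantize(e, thresholds)
        if lv > 0:            # level 0 -> stay silent, saving reporting overhead
            reports[node] = lv
    return reports

def fusion_center(reports, num_nodes):
    """Reconstruct all levels, treating a missing report as the lowest level."""
    return [reports.get(n, 0) for n in range(num_nodes)]

energies = [0.1, 0.9, 0.4, 0.05]   # illustrative local sensing results
thresholds = [0.3, 0.6]            # 3 quantization levels: 0, 1, 2
reports = local_reports(energies, thresholds)
levels = fusion_center(reports, len(energies))
```

With these numbers only two of the four nodes transmit, yet the fusion center still recovers all four levels.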

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD depth map using an RGB stereo camera. Currently, we have a 64 x 32 resolution SPAD ToF sensor, even though there are higher-resolution depth sensors such as the Kinect V2 and Cube-Eye. This may be a weak point of our system; however, we turn this gap to our advantage. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. Then, the upsampled CNN depth data and the stereo camera depth data are fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
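The paper upsamples the low-resolution SPAD depth map with a trained CNN; as a stand-in for that learned model, the sketch below shows the simplest possible baseline, nearest-neighbor upsampling of a row-major depth map. The function name, the 2 x 2 example map, and the factor are illustrative assumptions, not the paper's network.

```python
def upsample_nearest(depth, factor):
    """Nearest-neighbor upsampling of a depth map stored as a list of rows."""
    out = []
    for row in depth:
        wide = []
        for d in row:
            wide.extend([d] * factor)   # repeat each value along the row
        for _ in range(factor):
            out.append(list(wide))      # repeat each widened row
    return out

low = [[1.0, 2.0],
       [3.0, 4.0]]
high = upsample_nearest(low, 2)         # 2x2 depth map -> 4x4
```

A learned upsampler replaces this block-replication with predictions supervised by the higher-resolution sensor's depth labels.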

3D Precision Building Modeling Based on Fusion of Terrestrial LiDAR and Digital Close-Range Photogrammetry (지상라이다와 디지털지상사진측량을 융합한 건축물의 3차원 정밀모델링)

  • 사석재;이임평;최윤수;오의종
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.11a / pp.529-534 / 2004
  • The increasing need for and use of 3D GIS, particularly in urban areas, has drawn growing attention to building reconstruction. Nowadays, the use of close-range data for building reconstruction is intensively emphasized, since such data can provide higher resolution and more complete coverage than airborne sensory data. We developed a fusion approach for building reconstruction from both points and images. The proposed approach was then applied to reconstructing a building model from real data sets acquired from a large existing building. Based on the experimental results, we confirmed that the proposed approach can achieve high resolution and accuracy in building reconstruction, and that it can effectively contribute to developing an operational system producing large urban models for 3D GIS.

Map Building Based on Sensor Fusion for Autonomous Vehicle (자율주행을 위한 센서 데이터 융합 기반의 맵 생성)

  • Kang, Minsung;Hur, Soojung;Park, Ikhyun;Park, Yongwan
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.6 / pp.14-22 / 2014
  • An autonomous vehicle requires the technology to generate maps by recognizing its surrounding environment. The vehicle's environment can be recognized using distance information from a 2D laser scanner and color information from a camera. Such sensor information is used to generate 2D or 3D maps. A 2D map is used mostly for generating routes, because it contains information about only a single plane. In contrast, a 3D map also contains height values, and therefore can be used not only for generating routes but also for determining vehicle-accessible space. Nevertheless, an autonomous vehicle using 3D maps has difficulty recognizing the environment in real time. Accordingly, this paper proposes a technique for generating 2D maps that guarantees real-time recognition. The proposed technique uses only the color information obtained by removing the height values from 3D maps generated from the fusion of 2D laser scanner and camera data.
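The core step described above, collapsing a colored 3D map into a 2D map by discarding height, can be sketched as follows. The point format `(x, y, z, color)`, the grid-cell keying, and the overwrite rule for repeated cells are illustrative assumptions; the paper's actual maps come from fused laser scanner and camera data.

```python
def project_to_2d(points):
    """Collapse colored 3D points (x, y, z, color) onto a 2D cell map,
    discarding the height value z and keeping the color per (x, y) cell."""
    grid = {}
    for x, y, z, color in points:
        grid[(x, y)] = color    # height z is dropped; later points overwrite
    return grid

# Two points fall in cell (0, 0) at different heights; only color survives.
points = [(0, 0, 1.2, "red"), (1, 0, 0.4, "gray"), (0, 0, 2.0, "red")]
map2d = project_to_2d(points)
```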

FUSION OF LASER SCANNING DATA, DIGITAL MAPS, AERIAL PHOTOGRAPHS AND SATELLITE IMAGES FOR BUILDING MODELLING

  • Han, Seung-Hee;Bae, Yeon-Soung;Kim, Hong-Jin;Bae, Sang-Ho
    • Proceedings of the KSRS Conference / v.2 / pp.899-902 / 2006
  • For quick and accurate 3D modelling of a building, laser scanning data, digital maps, aerial photographs and satellite images should be fused. Moreover, establishing a library according to a standard building structure and an effective texturing method are required in order to determine the structure of a building. In this study, we built a standard library by categorizing Korean village forms and presented a model that can predict the structure of a building from the shape of its roof in an aerial photograph. We made an ortho image using a high-definition digital image and a considerable amount of ground-scanned point cloud data, and mapped this image. These methods enabled quicker and more accurate building modelling.

Cooperative Localization in 2D for Multiple Mobile Robots by Optimal Fusion of Odometer and Inexpensive GPS data (다중 이동 로봇의 주행 계와 저가 GPS 데이터의 최적 융합을 통한 2차원 공간에서의 위치 추정)

  • Jo, Kyoung-Hwan;Lee, Ji-Hong;Jang, Choul-Soo
    • The Journal of Korea Robotics Society / v.2 no.3 / pp.255-261 / 2007
  • We propose an optimal fusion method for localization of multiple robots, utilizing the correlation between the GPS data of each robot in a common workspace. Each mobile robot in the group collects position data from its odometer and GPS receiver and shares the position data with the other robots. Each robot then utilizes the position data of the other robots to obtain a more precise estimate of its own position. Because GPS data errors in a common workspace are closely correlated, they contribute to improving the localization accuracy of all robots in the group. In this paper, we validate the proposed optimal fusion of odometer and GPS data through simulations with virtual robots and position data.
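The basic per-axis building block of such optimal fusion is inverse-variance weighting of two position estimates; the sketch below shows that standard rule, not the paper's multi-robot method, which additionally exploits the correlation of GPS errors across robots. The variance values are illustrative assumptions.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance (inverse-variance weighted) fusion of two estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)       # always below either input variance
    return fused, fused_var

# Odometer estimate (low variance) fused with a noisier GPS fix, one axis.
pos, var = fuse(10.0, 0.5, 10.8, 2.0)
```

The fused estimate lands closer to the lower-variance sensor, and the fused variance is smaller than both inputs, which is why sharing estimates helps every robot in the group.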

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.505-518 / 2023
  • 3D point cloud semantic segmentation is a computer vision task that involves dividing a point cloud into different objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models have limitations in performing sufficient fusion of multi-modal features while preserving both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. Therefore, in this paper, we propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the two heterogeneous feature types, 2D visual and 3D geometric, by using an intermediate fusion strategy and a multi-modal cross attention-based fusion operation. The proposed model also extracts context-rich 3D geometric features from an input point cloud of irregularly distributed points by adopting PTv2 as the 3D geometric encoder. We conducted both quantitative and qualitative experiments on the benchmark dataset ScanNetv2 to analyze the performance of the proposed model. In terms of mIoU, the proposed model showed a 9.2% improvement over the PTv2 model, which uses only 3D geometric features, and a 12.12% improvement over the MVPNet model, which uses 2D-3D multi-modal features. These results demonstrate the effectiveness and usefulness of the proposed model.
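The cross attention-based fusion mentioned above can be sketched in its simplest single-head form: each 3D geometric feature acts as a query attending over 2D visual keys and values. The two-dimensional toy features, the single head, and the absence of learned projection matrices are all simplifying assumptions; MMCA-Net's actual operation is a learned multi-modal module.

```python
import math

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Each 3D query attends over 2D keys/values (single head, no projections)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]         # scaled dot-product scores
        weights = softmax(scores)
        fused = [sum(w * v[j] for w, v in zip(weights, values))
                 for j in range(len(values[0]))]
        out.append(fused)
    return out

q3d = [[1.0, 0.0]]                       # one 3D geometric feature as query
k2d = [[1.0, 0.0], [0.0, 1.0]]           # 2D visual features as keys
v2d = [[10.0, 0.0], [0.0, 10.0]]         # and as values
fused = cross_attention(q3d, k2d, v2d)
```

The query aligned with the first key pulls most of its mass from the first value, which is the mechanism that lets each point select the relevant view features.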

Data-driven Approach to Explore the Contribution of Process Parameters for Laser Powder Bed Fusion of a Ti-6Al-4V Alloy

  • Jeong Min Park;Jaimyun Jung;Seungyeon Lee;Haeum Park;Yeon Woo Kim;Ji-Hun Yu
    • Journal of Powder Materials / v.31 no.2 / pp.137-145 / 2024
  • To predict the process window of laser powder bed fusion (LPBF) for printing metallic components, the volumetric energy density (VED) has been widely calculated for controlling process parameters. However, because the VED assumes that the process parameters contribute equally to heat input, it still has limitations in predicting the process window of LPBF-processed materials. In this study, an explainable machine learning (xML) approach was adopted to predict and understand the contribution of each process parameter to defect evolution in Ti alloys during the LPBF process. Various ML models were trained, and the Shapley additive explanations method was adopted to quantify the importance of each process parameter. This study can offer effective guidelines for fine-tuning process parameters to fabricate high-quality products using LPBF.
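For reference, the conventional VED definition that the abstract critiques divides laser power by the product of scan speed, hatch spacing, and layer thickness, so each parameter enters with equal weight. The numeric parameter values below are illustrative, not taken from the study.

```python
def volumetric_energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """VED [J/mm^3] = P / (v * h * t): laser power over the swept volume rate.
    Note every parameter contributes with equal weight, the limitation the
    study addresses with explainable ML."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Illustrative LPBF parameters: 280 W, 1200 mm/s, 0.14 mm hatch, 0.03 mm layer.
ved = volumetric_energy_density(280.0, 1200.0, 0.14, 0.03)
```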

Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems / v.7 no.8 / pp.656-663 / 2001
  • This paper presents a sensor fusion method to recognize a cylindrical object using a CCD camera, a laser slit beam and ultrasonic sensors on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light on the object surface. The 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses an ultrasonic transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, also called a matched filter. The distance of flight is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and radius of cylindrical objects, we use statistical sensor fusion. Experimental results show that the fused data increase the reliability of object recognition.
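The matched-filter step described above can be sketched as a sliding cross-correlation: the lag of maximum correlation gives the time of flight, and multiplying by the speed of sound gives the distance, as the abstract states. The sample rate, the pulse template, and the noiseless synthetic echo are all illustrative assumptions.

```python
def matched_filter(signal, template):
    """Sliding cross-correlation; returns (lag, value) at maximum correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        val = sum(signal[lag + i] * template[i] for i in range(len(template)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag, best_val

fs = 100000.0                        # sample rate [Hz], illustrative
template = [0.0, 1.0, -1.0, 0.5]     # stored pulse template
signal = [0.0] * 20                  # synthetic received signal
for i, v in enumerate(template):
    signal[12 + i] = v               # echo arriving at sample 12

lag, amp = matched_filter(signal, template)
tof = lag / fs                       # time of flight [s]
distance = tof * 343.0               # distance of flight via speed of sound
```

The peak amplitude `amp` plays the role the abstract assigns to the filtered signal's maximum, determining the face angle to the object.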

A Study on Performance Improvement of Target Motion Analysis using Target Elevation Tracking and Fusion in Conformal Array Sonar (컨포멀 소나에서의 표적고각 추적 및 융합을 이용한 표적기동분석 성능향상 연구)

  • Lee, HaeHo;Park, GyuTae;Shin, KeeCheol;Cho, SungIl
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.3 / pp.320-331 / 2019
  • In this paper, we propose a method to improve TMA (Target Motion Analysis) performance using target elevation tracking and fusion in a conformal array sonar. One of the most important characteristics of a conformal array sonar is its ability to detect target elevation with a vertical beam. Using this characteristic, it is possible to obtain a target range that maximizes the advantages of the proposed TMA technique. The proposed techniques include target tracking, target fusion, and calculation of target range from multipath, as well as TMA. A simulation study demonstrates the outstanding performance of the proposed techniques.