• Title/Summary/Keyword: 3D Object Detection

Simulation of Deformable Objects using GLSL 4.3

  • Sung, Nak-Jun;Hong, Min;Lee, Seung-Hyun;Choi, Yoo-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.4120-4132 / 2017
  • In this research, we implement a deformable object simulation system using OpenGL's shading language, GLSL 4.3. The deformable object simulation is implemented with a volumetric mass-spring system, which is suitable for real-time simulation. The compute shader in GLSL 4.3, which provides access to GPU resources, is used to parallelize the operations of existing deformable object simulation systems. The proposed system is implemented with a compute shader for parallel processing and includes a bounding box-based collision detection solution. In general, collision detection is one of the severe computational bottlenecks in the simulation of multiple deformable objects. To validate the efficiency of the system, we performed experiments using 3D volumetric objects. We compared the performance of multiple deformable object simulations between the CPU and GPU to analyze the effectiveness of parallel processing with GLSL. Moreover, we measured the computation time of the bounding box-based collision detection to show that it can be processed in real time. Experiments using 3D volumetric models with 10K faces showed that the GPU-based parallel simulation improves performance by 98% over the CPU-based simulation, and that the overall pipeline, including collision detection and rendering, runs at a real-time frame rate of 218.11 FPS.
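
The broad-phase collision test mentioned in this abstract can be illustrated with a minimal NumPy sketch. This is an illustration only, not the paper's GLSL 4.3 compute-shader implementation: two deformable meshes can only collide if their axis-aligned bounding boxes overlap on every axis.

```python
import numpy as np

def aabb(vertices):
    """Axis-aligned bounding box (min corner, max corner) of a vertex array."""
    v = np.asarray(vertices, dtype=float)
    return v.min(axis=0), v.max(axis=0)

def aabb_overlap(box_a, box_b):
    """True if two axis-aligned bounding boxes intersect on all three axes."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))

# Hypothetical broad-phase check between two deformable meshes' vertices.
mesh_a = np.random.rand(100, 3)        # placeholder vertex positions
mesh_b = np.random.rand(100, 3) + 0.5  # second, partially overlapping mesh
if aabb_overlap(aabb(mesh_a), aabb(mesh_b)):
    pass  # only now run the more expensive per-spring collision response
```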

Lightweight Deep Learning Model for Real-Time 3D Object Detection in Point Clouds (실시간 3차원 객체 검출을 위한 포인트 클라우드 기반 딥러닝 모델 경량화)

  • Kim, Gyu-Min;Baek, Joong-Hwan;Kim, Hee Yeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1330-1339 / 2022
  • 3D object detection generally targets relatively large objects such as automobiles, buses, persons, and furniture, so it is weak at detecting small objects. In addition, the huge amount of computation makes such models difficult to apply in resource-limited environments such as embedded devices. In this paper, the accuracy of small object detection is improved by focusing on local features using only one layer, and the inference speed is improved through the proposed knowledge distillation method, which transfers knowledge from a large pre-trained network to a small network, and an adaptive quantization method based on parameter size. The proposed model was evaluated on the SUN RGB-D Val set and a self-made apple tree dataset. It achieved 62.04% at mAP@0.25 and 47.1% at mAP@0.5 with an inference speed of 120.5 scenes per second, demonstrating fast real-time processing.
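
As a rough illustration of the response-based knowledge distillation idea referenced above (a generic PyTorch sketch under assumed names; it is not the paper's specific distillation or quantization scheme), a small student network can be trained against both the ground truth and the softened outputs of a large pre-trained teacher:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Blend a softened KL term against the teacher with the usual
    cross-entropy against the ground-truth labels (generic sketch)."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce

# Dummy logits for an assumed 10-class head.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```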

A Statistical Approach to 3D Object Detection Applied to Faces and Cars

  • Schneiderman, Henry
    • Proceedings of the Korea Information Convergence Society Conference / 2008.06a / pp.103-110 / 2008
  • In this thesis, we describe a statistical method for 3D object detection. In this method, we decompose the 3D geometry of each object into a small number of viewpoints. For each viewpoint, we construct a decision rule that determines if the object is present at that specific orientation. Each decision rule uses the statistics of both object appearance and "non-object" visual appearance. We represent each set of statistics using a product of histograms. Each histogram represents the joint statistics of a subset of wavelet coefficients and their position on the object. Our approach is to use many such histograms representing a wide variety of visual attributes. Using this method, we have developed the first algorithm that can reliably detect faces that vary from frontal view to full profile view and the first algorithm that can reliably detect cars over a wide range of viewpoints.
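
The viewpoint-specific decision rule described here reduces to a log-likelihood-ratio test in which both the object and non-object appearance models are products of histograms over quantized wavelet-coefficient/position features. A minimal NumPy illustration (the bin indexing, histogram layout, and threshold are assumptions, not the paper's exact formulation) might look like this:

```python
import numpy as np

def log_likelihood_ratio(feature_bins, object_hists, nonobject_hists):
    """Sum of per-attribute log ratios; feature_bins[k] is the quantized
    (wavelet-pattern, position) bin observed for visual attribute k."""
    llr = 0.0
    for k, b in enumerate(feature_bins):
        p_obj = object_hists[k][b]      # P(feature | object) from histogram k
        p_non = nonobject_hists[k][b]   # P(feature | non-object)
        llr += np.log(p_obj + 1e-12) - np.log(p_non + 1e-12)
    return llr

def viewpoint_detector(feature_bins, object_hists, nonobject_hists, threshold=0.0):
    """Decision rule for one viewpoint: declare the object present if the
    product of histogram ratios (a sum in log space) exceeds the threshold."""
    return log_likelihood_ratio(feature_bins, object_hists, nonobject_hists) > threshold

# Hypothetical toy attribute with 4 bins (probabilities per histogram).
obj_h = [np.array([0.1, 0.2, 0.3, 0.4])]
non_h = [np.array([0.4, 0.3, 0.2, 0.1])]
print(viewpoint_detector([3], obj_h, non_h))  # True: bin 3 favors the object model
```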

AR Anchor System Using Mobile Based 3D GNN Detection

  • Jeong, Chi-Seo;Kim, Jun-Sik;Kim, Dong-Kyun;Kwon, Soon-Chul;Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.54-60 / 2021
  • AR (Augmented Reality) is a technology that overlays virtual content on the real world and provides additional information about objects in real time through 3D content. In the past, a high-performance device was required to experience AR, but improvements in mobile performance and the addition of sensors such as ToF (Time-of-Flight) have made AR easier to implement. Also, the importance of mobile augmented reality is growing with the commercialization of high-speed wireless Internet such as 5G. Thus, this paper proposes a system that can provide AR services via a GNN (Graph Neural Network) using the cameras and sensors on mobile devices. The ToF sensor of the mobile device is used to capture depth maps, and a 3D point cloud is created from RGB images to distinguish the specific colors of objects. The point cloud created from the RGB image and depth map is downsampled for smooth communication between the mobile device and the server. The point cloud sent to the server is used for 3D object detection. The detection process determines the class of each object and uses one point of its 3D bounding box as an anchor point. AR content is then provided through the app and the web based on the class and anchor point of the detected object.
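
The mobile-side steps of this pipeline, back-projecting the ToF depth map into a colored point cloud and downsampling it before sending it to the server, can be sketched as follows; the pinhole intrinsics, voxel size, and function names are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a colored 3D point cloud,
    assuming a pinhole camera with intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0                              # drop invalid ToF returns
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid]                        # per-point RGB for color cues
    return points, colors

def voxel_downsample(points, voxel_size=0.05):
    """Keep one point per occupied voxel to shrink the cloud before upload."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```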

Position Detection of a Scattering 3D Object by Use of the Axially Distributed Image Sensing Technique

  • Cho, Myungjin;Shin, Donghak;Lee, Joon-Jae
    • Journal of the Optical Society of Korea / v.18 no.4 / pp.414-418 / 2014
  • In this paper, we present a method to detect the position of a 3D object in scattering media by using the axially distributed sensing (ADS) method. To handle the scattering noise in the elemental images recorded by the ADS method, we apply a statistical image processing algorithm that converts the scattering elemental images into scatter-reduced ones. With the scatter-reduced elemental images, we reconstruct the 3D images using a digital reconstruction algorithm based on ray back-projection. The reconstructed images are then used for position detection of the 3D object in the scattering medium. We perform preliminary experiments and present the experimental results.
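
A highly simplified sketch of ray back-projection reconstruction for axially distributed sensing is given below; the magnification model (ratio of capture distance to reconstruction distance), the resampling, and the averaging are assumptions for illustration and omit the scatter-reduction step the paper applies first.

```python
import numpy as np
from scipy.ndimage import zoom

def center_fit(img, h, w):
    """Center-crop or zero-pad an image to shape (h, w)."""
    out = np.zeros((h, w), dtype=float)
    ih, iw = img.shape
    ch, cw = min(ih, h), min(iw, w)
    sy, sx = (ih - ch) // 2, (iw - cw) // 2
    dy, dx = (h - ch) // 2, (w - cw) // 2
    out[dy:dy + ch, dx:dx + cw] = img[sy:sy + ch, sx:sx + cw]
    return out

def reconstruct_plane(elemental_images, capture_distances, recon_distance):
    """Rescale each elemental image by an assumed magnification ratio and
    average the back-projected images at the chosen reconstruction depth."""
    h, w = elemental_images[0].shape
    accum = np.zeros((h, w), dtype=float)
    for img, z_k in zip(elemental_images, capture_distances):
        m = z_k / recon_distance               # assumed magnification model
        scaled = zoom(np.asarray(img, dtype=float), m, order=1)
        accum += center_fit(scaled, h, w)
    return accum / len(elemental_images)
```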

3D Multiple Objects Detection and Tracking on Accurate Depth Information for Pose Recognition (자세인식을 위한 정확한 깊이정보에서의 3차원 다중 객체검출 및 추적)

  • Lee, Jae-Won;Jung, Jee-Hoon;Hong, Sung-Hoon
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.963-976 / 2012
  • Aside from voice, gesture is the most intuitive means of communication, and much research on controlling computers with gestures is in progress. User detection and tracking is one of the most important processes in these studies. Conventional 2D object detection and tracking methods are sensitive to changes in the environment or lighting, and methods that mix 2D and 3D information suffer from high computational complexity. In addition, conventional 3D methods cannot segment objects at similar depths. In this paper, we propose an object detection and tracking method using a Depth Projection Map, which accumulates depth and motion information. Simulation results show that our method is robust to changes in lighting or environment, operates faster, and works well for detecting and tracking objects at similar depths.
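
The core data structure here, a projection of depth pixels onto a plan-view grid so that objects overlapping in the image but lying at different depths separate, can be illustrated with a small NumPy sketch; the bin sizes and the exact accumulation (the paper also folds in motion information) are assumptions:

```python
import numpy as np

def depth_projection_map(depth_mm, num_depth_bins=64, max_depth_mm=4000):
    """Accumulate each valid depth pixel into a (depth-bin x image-column)
    grid, so objects overlapping in the 2D image but lying at different
    depths fall into different rows of the map."""
    h, w = depth_mm.shape
    dpm = np.zeros((num_depth_bins, w), dtype=np.int32)
    cols = np.tile(np.arange(w), h)                       # column of each pixel
    clipped = np.clip(depth_mm, 0, max_depth_mm - 1).astype(np.int64)
    bins = clipped * num_depth_bins // max_depth_mm       # quantized depth
    valid = depth_mm.ravel() > 0
    np.add.at(dpm, (bins.ravel()[valid], cols[valid]), 1)
    return dpm
```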

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International Journal of Advanced Smart Convergence / v.9 no.3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying beyond visual line of sight (BVLOS). We use a vision sensor and a LiDAR to detect objects: the CNN-based YOLOv2 architecture detects objects in 2D images, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in specific situations depending on the sensor's characteristics, so if the result of a single-sensor detection algorithm is missing or false, its accuracy needs to be complemented. To complement the accuracy of the single-sensor detection algorithms, we use a Kalman filter and fuse the results of the individual sensors. We then estimate the 3D position of the object using its pixel position and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation using the Gazebo simulator.
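
The final 3D position estimate described above, combining a camera-pixel detection with a LiDAR range, can be sketched as a simple inverse projection along the pixel's viewing ray; the pinhole intrinsics and the assumption that the range is measured along that ray are illustrative, not the paper's exact geometry:

```python
import numpy as np

def pixel_and_range_to_3d(u, v, r, fx, fy, cx, cy):
    """Estimate a 3D position in the camera frame from a detected pixel
    (u, v) and a range r measured along that pixel's viewing ray."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)
    return r * ray

# Hypothetical example: detection at pixel (400, 260), LiDAR range of 35 m.
print(pixel_and_range_to_3d(400, 260, 35.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```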

Tracking of 2D or 3D Irregular Movement by a Family of Unscented Kalman Filters

  • Tao, Junli;Klette, Reinhard
    • Journal of Information and Communication Convergence Engineering / v.10 no.3 / pp.307-314 / 2012
  • This paper reports on the design of an object tracker that utilizes a family of unscented Kalman filters, one for each tracked object. This is a more efficient design than having one unscented Kalman filter for the family of all moving objects. The performance of the designed and implemented filter is demonstrated by using simulated movements, and also for object movements in 2D and 3D space.
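
The design point of this paper, one filter instance per tracked object rather than a single joint filter over all objects, can be sketched with a simple manager class. Here a linear constant-velocity Kalman filter stands in for the paper's unscented Kalman filter, and association of detections to object ids is assumed to be given:

```python
import numpy as np

class ConstantVelocityKF:
    """Linear constant-velocity Kalman filter, used as a simple stand-in
    for the per-object unscented Kalman filter."""

    def __init__(self, position, dt=1.0, q=1e-2, r=1e-1):
        dim = len(position)
        self.x = np.concatenate([np.asarray(position, dtype=float), np.zeros(dim)])
        self.P = np.eye(2 * dim)
        self.F = np.eye(2 * dim)
        self.F[:dim, dim:] = dt * np.eye(dim)          # position += velocity * dt
        self.H = np.hstack([np.eye(dim), np.zeros((dim, dim))])
        self.Q = q * np.eye(2 * dim)
        self.R = r * np.eye(dim)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P

class MultiObjectTracker:
    """A family of filters, one per tracked object, keyed by object id."""

    def __init__(self):
        self.filters = {}

    def step(self, detections):
        """detections: dict mapping object id -> measured 2D/3D position."""
        for obj_id, z in detections.items():
            z = np.asarray(z, dtype=float)
            if obj_id not in self.filters:
                self.filters[obj_id] = ConstantVelocityKF(z)
            f = self.filters[obj_id]
            f.predict()
            f.update(z)
        return {k: f.x[:len(f.x) // 2] for k, f in self.filters.items()}
```

Because each object id owns its own filter, adding or removing a tracked object never disturbs the state estimates of the others, which is the efficiency argument made in the abstract.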

Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection (강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool)

  • Jeon, MyungHwan;Lee, Yeongjun;Shin, Young-Sik;Jang, Hyesu;Yeu, Taekyeong;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.139-149 / 2019
  • In this paper, we present an auto-annotation tool and a synthetic dataset generated from 3D CAD models for deep learning-based object detection. To be used as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of the object are needed. We therefore propose an automated annotation tool together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the synthetic dataset, we use Mask R-CNN, a state-of-the-art deep learning object detection model. For the experiments, we build an experimental environment that reflects the actual underwater environment. We show that the object detection model trained on our dataset achieves significantly accurate and robust results in the underwater environment. Lastly, we verify that our synthetic dataset is suitable for training deep learning models for underwater environments.
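
Part of the auto-annotation idea, deriving class, bounding-box, and segmentation labels directly from a rendered binary object mask of the synthetic scene, can be sketched as follows; the dictionary layout and the example class name are assumptions, and the paper's tool additionally exports contour and pose annotations from the 3D CAD model:

```python
import numpy as np

def annotate_from_mask(mask, class_name):
    """Derive bounding-box and segmentation annotations from a rendered
    binary object mask of a synthetic image."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # object fully occluded or outside the frame
    x_min, x_max = int(xs.min()), int(xs.max())
    y_min, y_max = int(ys.min()), int(ys.max())
    return {
        "class": class_name,
        "bbox": [x_min, y_min, x_max - x_min + 1, y_max - y_min + 1],  # x, y, w, h
        "area": int(mask.sum()),
        "segmentation": mask.astype(np.uint8),  # per-pixel mask for Mask R-CNN-style training
    }

# Hypothetical usage with one object's mask from a synthetic render.
mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 150:300] = True
print(annotate_from_mask(mask, "drum")["bbox"])  # -> [150, 100, 150, 100]
```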

A Review of 3D Object Tracking Methods Using Deep Learning (딥러닝 기술을 이용한 3차원 객체 추적 기술 리뷰)

  • Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.22 no.1 / pp.30-37 / 2021
  • Accurate 3D object tracking with camera images is a key enabling technology for augmented reality applications. Motivated by the impressive success of convolutional neural networks (CNNs) in computer vision tasks such as image classification, object detection, and image segmentation, recent studies on 3D object tracking have focused on leveraging deep learning. In this paper, we review deep learning approaches for 3D object tracking. We describe the key methods in this field and discuss potential future research directions.