• Title/Summary/Keyword: 3차원 객체 검출 (3D object detection)


3D Multiple Objects Detection and Tracking on Accurate Depth Information for Pose Recognition (자세인식을 위한 정확한 깊이정보에서의 3차원 다중 객체검출 및 추적)

  • Lee, Jae-Won;Jung, Jee-Hoon;Hong, Sung-Hoon
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.963-976 / 2012
  • Gesture, apart from voice, is the most intuitive means of communication, so much research on controlling computers with gestures is in progress. User detection and tracking is one of the most important processes in these studies. Conventional 2D object detection and tracking methods are sensitive to changes in the environment or lighting, and methods that mix 2D and 3D information have the disadvantage of high computational complexity. In addition, conventional 3D information methods cannot segment objects at similar depths. In this paper, we propose an object detection and tracking method using a Depth Projection Map, the cumulative value of depth and motion information. Simulation results show that our method is robust to changes in lighting or environment, has faster operation speed, and works well for detecting and tracking objects at similar depths.
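
The Depth Projection Map idea can be sketched as a top-view occupancy map: projecting each image column's depth values into bins separates objects that sit at a similar depth but different lateral positions. This is an illustrative reconstruction, not the paper's implementation; the bin count, depth range, and the omission of the motion-accumulation term are assumptions.

```python
import numpy as np

def depth_projection_map(depth, depth_bins=64, max_depth=4.0):
    """Project a depth image onto the (column, depth-bin) plane: for each
    image column, count how many pixels fall into each depth bin. Objects
    at the same depth but different horizontal positions become separate
    blobs in this top-view map. Bin count and range are assumptions."""
    h, w = depth.shape
    # quantize depth values into bins, ignoring invalid (zero) measurements
    idx = np.clip((depth / max_depth * depth_bins).astype(int), 0, depth_bins - 1)
    dpm = np.zeros((w, depth_bins), dtype=np.int32)
    for col in range(w):
        valid = depth[:, col] > 0
        np.add.at(dpm[col], idx[valid, col], 1)  # unbuffered accumulation
    return dpm
```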

A Review of 3D Object Tracking Methods Using Deep Learning (딥러닝 기술을 이용한 3차원 객체 추적 기술 리뷰)

  • Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.22 no.1 / pp.30-37 / 2021
  • Accurate 3D object tracking with camera images is a key enabling technology for augmented reality applications. Motivated by the impressive success of convolutional neural networks (CNNs) in computer vision tasks such as image classification, object detection, and image segmentation, recent studies on 3D object tracking have focused on leveraging deep learning. In this paper, we review deep learning approaches for 3D object tracking, describe key methods in this field, and discuss potential future research directions.

3D Coordinates Transformation in Orthogonal Stereo Vision (직교식 스테레오 비젼 시스템에서의 3차원 좌표 변환)

  • Yoon, Hee-Joo;Cha, Sun-Hee;Cha, Eui-Young
    • Proceedings of the Korea Information Processing Society Conference / 2005.05a / pp.855-858 / 2005
  • This system simultaneously acquires independent images from an orthogonal stereo vision system to track the movement of fish in a fish tank, processes the acquired images to obtain coordinates, and generates 3D coordinates from them. The proposed method consists of three main parts: simultaneous image acquisition from two cameras, processing of the acquired images with object position detection, and 3D coordinate generation. Using a frame grabber, two images (a front view and a top view) are acquired at 8 frames per second; moving objects are extracted through difference images against a background image updated in real time, clustered via labeling, and the center coordinates of each cluster are detected. The detected coordinates are then corrected using line equations to generate the 3D coordinates of the moving object.
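
The final 3D-coordinate step can be illustrated with a minimal merge of the two views' centroids. The axis convention (the front view observes horizontal position and height, the top view observes horizontal position and depth) and the simple averaging of the shared axis are assumptions; the paper's actual correction uses calibrated line equations.

```python
def merge_orthogonal_views(front_xz, top_xy):
    """Combine centroids from an orthogonal camera pair into one 3D point.
    Assumed convention: the front view yields (x, z) -- horizontal and
    height -- while the top view yields (x, y) -- horizontal and depth.
    The shared x axis, seen by both cameras, is averaged."""
    fx, fz = front_xz
    tx, ty = top_xy
    x = (fx + tx) / 2.0  # reconcile the axis both cameras observe
    return (x, ty, fz)
```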


3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.10B no.2 / pp.135-142 / 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The object's 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. An inherent error of depth recovery due to the paraperspective camera model is removed by using stereo image analysis. A 3D synthetic object with 21 features reflecting various positions was designed and tested to show the performance of the proposed algorithm by comparing the recovered shape and motion data with measured values.

Hand Gesture Interface for Manipulating 3D Objects in Augmented Reality (증강현실에서 3D 객체 조작을 위한 손동작 인터페이스)

  • Park, Keon-Hee;Lee, Guee-Sang
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.20-28 / 2010
  • In this paper, we propose a hand gesture interface for manipulating augmented objects in 3D space using a camera. Generally, a marker is used to detect 3D movement in 2D images. However, marker-based systems have obvious drawbacks: markers must always be included in the image, or additional equipment is needed to control the objects, which reduces immersion. To overcome this problem, we replace the marker with a planar hand shape by estimating the hand pose. A Kalman filter is used for robust tracking of the hand shape. The experimental results indicate the feasibility of the proposed algorithm for hand-based AR interfaces.
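
The Kalman tracking stage can be sketched with a generic constant-velocity filter over the hand centroid. The state model and noise settings below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1e-1):
    """One predict/update cycle of a constant-velocity Kalman filter for a
    2D centroid. State x = [px, py, vx, vy]; measurement z = [px, py].
    Process/measurement noise levels q and r are illustrative."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # constant-velocity motion
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # we observe position only
    Q = q * np.eye(4); R = r * np.eye(2)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```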

Lightweight Deep Learning Model for Real-Time 3D Object Detection in Point Clouds (실시간 3차원 객체 검출을 위한 포인트 클라우드 기반 딥러닝 모델 경량화)

  • Kim, Gyu-Min;Baek, Joong-Hwan;Kim, Hee Yeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1330-1339 / 2022
  • 3D object detection generally aims to detect relatively large objects such as automobiles, buses, persons, and furniture, so it is vulnerable to small-object detection. In addition, in environments with limited resources such as embedded devices, it is difficult to apply such models because of their huge amount of computation. In this paper, the accuracy of small-object detection was improved by focusing on local features using only one layer, and the inference speed was improved through a proposed knowledge distillation method from a large pre-trained network to a small network, together with an adaptive quantization method according to parameter size. The proposed model was evaluated on SUN RGB-D Val and a self-made apple tree dataset. It achieved 62.04% at mAP@0.25 and 47.1% at mAP@0.5, and the inference speed was 120.5 scenes per second, a fast real-time processing speed.
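
The knowledge distillation step can be illustrated with the generic soft-target loss commonly used for teacher-student training; the temperature, weighting, and NumPy formulation below are assumptions, not the paper's proposed method.

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable softmax with temperature t."""
    e = np.exp((z - z.max()) / t)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, t=2.0, alpha=0.5):
    """Blend a soft cross-entropy against the teacher's temperature-softened
    outputs with the ordinary hard-label loss (generic KD recipe; t and
    alpha are illustrative choices)."""
    p_teacher = softmax(teacher_logits, t)
    p_student = softmax(student_logits, t)
    soft = -np.sum(p_teacher * np.log(p_student + 1e-12)) * t * t
    hard = -np.log(softmax(student_logits)[label] + 1e-12)
    return alpha * soft + (1 - alpha) * hard
```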

Planar-Object Position Estimation by using Scale & Affine Invariant Features (불변하는 스케일-아핀 특징 점을 이용한 평면객체의 위치 추정)

  • Lee, Seok-Jun;Jung, Soon-Ki
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.795-800 / 2008
  • Recognizing objects in camera images has long been an active research topic in computer vision. To recognize an object appearing in an image, and to locate the current view within the larger image that contains that object, the objects expected to appear must first be trained. In this paper, feature points of the target objects are detected, and the pixel-gradient direction vectors at each point, together with those of their neighbors, are normalized using a difference-of-Gaussians (DoG) function. These are later used to estimate the positions of feature points detected in input images, by comparison using parameters such as the distances and scale ratios between each point and its neighbors. In this work, initial feature points of buildings inside a large facility complex are detected from satellite imagery and stored in a database. After training, a specific building in a printed satellite image is captured with a camera, and the feature points of the input image are analyzed and compared against those in the database. Using the DoG-normalized vectors, the matched feature points are used to estimate the position of the building, and a pre-built 3D model of the building is registered to the image and visualized using augmented reality techniques.


A Study on Detection and Resolving of Occlusion Area by Street Tree Object using ResNet Algorithm (ResNet 알고리즘을 이용한 가로수 객체의 폐색영역 검출 및 해결)

  • Park, Hong-Gi;Bae, Kyoung-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.77-83 / 2020
  • Technologies for 3D spatial information, such as Smart City and Digital Twin, are developing rapidly for managing land and solving urban problems scientifically. In constructing such 3D spatial information, objects are built into a digital DB from aerial photo images. In practice, extracting a texturing image, an actual image of an object's wall, and attaching it to the object wall are important tasks. However, occluded areas occur in texturing images. In this study, the ResNet algorithm, a deep learning technique, was tested to solve these problems. A dataset was constructed, and street trees were detected using the ResNet algorithm. The algorithm's ability to detect street trees depended on the brightness of the image, and it can detect street trees in images taken at side and inclination angles.

Design of ToF-Stereo Fusion Sensor System for 3D Spatial Scanning (3차원 공간 스캔을 위한 ToF-Stereo 융합 센서 시스템 설계)

  • Yun Ju Lee;Sun Kook Yoo
    • Smart Media Journal / v.12 no.9 / pp.134-141 / 2023
  • In this paper, we propose a ToF-Stereo fusion sensor system for 3D space scanning that increases the recognition rate of 3D objects, guarantees object detection quality, and is robust to the environment. The system fuses the sensing values of a ToF sensor and a stereo RGB sensor, so that even if one sensor fails, the other can continue detecting objects. Since the quality of each sensor varies with sensing distance, resolution, light reflectivity, and illuminance, a module is included that adjusts sensor operation based on reliability estimation. By estimating the reliability of the two sensing values and adjusting the sensors accordingly before fusing them, the system improves the quality of the 3D space scan.
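
The reliability-weighted fusion described above can be sketched for a single pixel; the linear confidence-weighted average and the drop-out handling below are illustrative assumptions rather than the paper's actual module.

```python
def fuse_depth(tof_z, stereo_z, tof_conf, stereo_conf):
    """Reliability-weighted fusion of ToF and stereo depth for one pixel.
    If one sensor drops out (confidence 0), the other is used alone;
    if neither is valid, no measurement is produced. The linear
    confidence weighting is an illustrative assumption."""
    total = tof_conf + stereo_conf
    if total == 0:
        return None  # no valid measurement from either sensor
    return (tof_conf * tof_z + stereo_conf * stereo_z) / total
```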

Object Detection and 3D Position Estimation based on Stereo Vision (스테레오 영상 기반의 객체 탐지 및 객체의 3차원 위치 추정)

  • Son, Haengseon;Lee, Seonyoung;Min, Kyoungwon;Seo, Seongjin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.4 / pp.318-324 / 2017
  • We introduce a stereo camera on an aircraft to detect flying objects and estimate their 3D positions. A saliency map algorithm based on PCT is proposed to detect small objects between clouds, and a stereo matching algorithm is then applied to find the disparity between the left and right cameras. To extract accurate disparity, the cost aggregation region is treated as a variable region that adapts to the detected object; in this paper, we use the detection result as the cost aggregation region. To extract more precise disparity, sub-pixel interpolation is used to obtain floating-point disparity at the sub-pixel level. We also propose a method to estimate the spatial position of an object using camera parameters. This is expected to be applicable to image-based object detection and collision avoidance systems for autonomous aircraft in the future.
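
The last step, estimating spatial position from camera parameters, follows the standard pinhole stereo relation Z = f·B/d. The sketch below uses conventional calibration parameter names (fx, fy, cx, cy, baseline), which are assumptions since the abstract does not give the paper's calibration details.

```python
def disparity_to_point(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with stereo disparity into 3D camera coordinates
    using the standard pinhole model: Z = fx * B / d, then X and Y from the
    principal point offset. Parameter names follow the usual calibration
    conventions, not values from the paper."""
    z = fx * baseline / disparity        # depth from disparity
    x = (u - cx) * z / fx                # lateral offset
    y = (v - cy) * z / fy                # vertical offset
    return (x, y, z)
```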