• Title/Abstract/Keyword: bounding box

Search results: 156 items (processing time: 0.034 seconds)

조직화되지 않은 점군으로부터의 3차원 완전 형상 복원 (Complete 3D Surface Reconstruction from Unstructured Point Cloud)

  • 이일섭;김석일
    • 대한기계학회논문집A
    • /
    • Vol. 29, No. 4
    • /
    • pp.570-577
    • /
    • 2005
  • In this study, a complete 3D surface reconstruction method is proposed based on the concept that the vertices of the surface model can be completely matched to the unstructured point cloud. In order to generate the initial mesh model from the point cloud, mesh subdivision of the bounding box and a shrink-wrapping algorithm are introduced. The control mesh model, which represents the topology of the point cloud well, is derived from the initial mesh model by using a mesh simplification technique based on the original QEM algorithm, and the parametric surface model, which approximately represents the geometry of the point cloud, is derived by applying a local subdivision surface fitting scheme to the control mesh model. To reconstruct the completely matching surface model, isolated points are inserted on the parametric surface model and mesh optimization is carried out. In particular, fast 3D surface reconstruction is realized by introducing a voxel-based nearest-point search algorithm, and the simulation results demonstrate the effectiveness of the proposed surface reconstruction method.
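
A minimal sketch of the kind of voxel-based nearest-point search mentioned in the abstract for matching mesh vertices to the point cloud: points are bucketed into a uniform voxel grid so that a query only inspects nearby cells. The function names and cell size are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict
import numpy as np

def build_voxel_grid(points, cell):
    """Bucket each point index into the voxel that contains it."""
    grid = defaultdict(list)
    for idx, p in enumerate(points):
        key = tuple((p // cell).astype(int))
        grid[key].append(idx)
    return grid

def nearest_point(query, points, grid, cell):
    """Return the index of the point closest to `query`, searching only
    the query voxel and its 26 neighbours."""
    key = np.floor(query / cell).astype(int)
    best, best_d2 = -1, np.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for idx in grid.get((key[0] + dx, key[1] + dy, key[2] + dz), []):
                    d2 = np.sum((points[idx] - query) ** 2)
                    if d2 < best_d2:
                        best, best_d2 = idx, d2
    return best
```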

설계대상물의 외부공간을 이용한 3차원 CAD 시스템에 의한 설계지원 (Design Support Based on 3D-CAD System using functional Space Surrounding Design Object)

  • 남윤의;석천청웅
    • 산업경영시스템학회지
    • /
    • Vol. 32, No. 1
    • /
    • pp.102-110
    • /
    • 2009
  • Concurrent Engineering (CE) has presented new possibilities for successful product development by incorporating various product life-cycle functions from the earlier stages of design. In product design, geometric representation is vital not only in its traditional role as a means of communicating design information but also as a means of externalizing the designer's thought process by visualizing the design product. Over the last few decades, there has been extraordinary development of computer-aided tools intended to generate, present, or communicate 3D models. However, there has not been comparable progress in the development of 3D-CAD systems intended to represent and manipulate a variety of product life-cycle information in a consistent manner. This paper proposes a novel concept, Minus Volume (MV), to incorporate various design information relevant to product life-cycle functions. MV is a functional shape that is extracted from a design object within a bounding box. A prototype 3D-CAD system is implemented based on the MV concept and illustrated with a successful implementation of concurrent design and manufacturing.
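
A rough illustration of the Minus Volume idea under stated assumptions: the MV is approximated as the part of the bounding box not occupied by the design object, tested here on a voxel grid with a placeholder inside/outside function. This is only a sketch of the concept, not the paper's CAD implementation.

```python
import numpy as np

def minus_volume(inside_object, bbox_min, bbox_max, resolution=64):
    """Return a boolean voxel grid that is True where the bounding box is
    NOT occupied by the design object (the Minus Volume)."""
    axes = [np.linspace(bbox_min[i], bbox_max[i], resolution) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    occupied = inside_object(X, Y, Z)      # True inside the design object
    return ~occupied                       # complement within the bounding box

# Example with a unit sphere as a stand-in design object
mv = minus_volume(lambda x, y, z: x**2 + y**2 + z**2 <= 1.0,
                  bbox_min=(-1, -1, -1), bbox_max=(1, 1, 1))
```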

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 3
    • /
    • pp.1121-1141
    • /
    • 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color histogram based tracker to track objects for mixing immersive audio. Our algorithm addresses the occlusion and large-movement problems of the CNN-based GOTURN generic object tracker. The key idea is the offline training of a binary classifier on the color histogram similarity values estimated by both trackers, which is used to select the appropriate tracker for the target and to update both trackers with the predicted bounding box position of the target so that tracking continues. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with one of the prominent unsupervised monocular depth estimation algorithms to obtain the 3D position needed to mix the immersive audio onto that object. Our proposed algorithm demonstrates about 2% improved accuracy over the GOTURN algorithm on the existing VOT2014 tracking benchmark. Additionally, our tracker also works for tracking multiple objects by applying the single object tracker per target, although it has not been evaluated on any MOT benchmark.
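
A hedged sketch of the fusion step: compare the colour-histogram similarity of the boxes predicted by the CNN tracker and the colour-histogram tracker against the target model, pick the more similar one, and accept the update only if it passes a similarity constraint. Function names and the threshold are assumptions, not the authors' API.

```python
import cv2
import numpy as np

def hist_similarity(patch, target_hist):
    """Correlation between the patch's HSV histogram and the target histogram."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    cv2.normalize(h, h)
    return cv2.compareHist(h, target_hist, cv2.HISTCMP_CORREL)

def select_box(frame, cnn_box, hist_box, target_hist, min_sim=0.3):
    def crop(box):
        x, y, w, h = box
        return frame[y:y + h, x:x + w]
    s_cnn = hist_similarity(crop(cnn_box), target_hist)
    s_hist = hist_similarity(crop(hist_box), target_hist)
    best = cnn_box if s_cnn >= s_hist else hist_box
    # only accept (and propagate to both trackers) when the constraint holds
    return best if max(s_cnn, s_hist) >= min_sim else None
```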

효율적인 이동 객체 궤적 색인을 위한 최소 전파 TB-tree (Minimal Propagation TB-tree for Efficient Indexing of Moving Objects Trajectories)

  • 고주일;김명근;정원일;김재홍;배해영
    • 한국공간정보시스템학회:학술대회논문집
    • /
    • 한국공간정보시스템학회 2003년도 추계학술대회
    • /
    • pp.141-146
    • /
    • 2003
  • Objects whose positions change continuously over time are called moving objects. An index is needed to efficiently search the large volume of location data of such moving objects, and the TB-tree has been proposed as a representative index. However, the conventional TB-tree, based on the R-tree, must search for the leaf node containing a record's predecessor every time a record is inserted because of its strict trajectory-preservation policy, and it carries the update overhead of propagating the change of the leaf node's MBB caused by the insertion to the MBBs of the intermediate nodes. This paper proposes a minimal-propagation TB-tree for efficient indexing of large volumes of moving-object trajectory data. The proposed technique first reflects in the tree an Expected Minimum Bounding Box (EMBB) that covers the trajectory segments expected to be inserted; then, instead of updating the MBBs of intermediate nodes on every record insertion, it adjusts them only when the object leaves the EMBB, reducing the number of MBB adjustments in the TB-tree. In addition, by keeping a separate table structure alongside the TB-tree, it reduces the cost of searching for the leaf node during record insertion, lowering the overall update cost of the TB-tree.
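
A minimal sketch of the EMBB idea described above: the node's bounding box is pre-enlarged to an expected MBB, and an insertion triggers MBB propagation to parent nodes only when the new position falls outside that EMBB. The classes are simplified assumptions, not the actual TB-tree implementation.

```python
class Node:
    def __init__(self, embb):
        self.embb = embb          # Expected MBB reflected in the tree up front
        self.mbb = embb           # parent-visible MBB, pre-enlarged to the EMBB
        self.records = []

def contains(box, point):
    xmin, ymin, xmax, ymax = box
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

def insert(node, point):
    node.records.append(point)
    if contains(node.embb, point):
        return False              # inside the EMBB: no MBB propagation needed
    # only when the object leaves the EMBB are the MBBs adjusted
    xmin, ymin, xmax, ymax = node.embb
    node.embb = (min(xmin, point[0]), min(ymin, point[1]),
                 max(xmax, point[0]), max(ymax, point[1]))
    node.mbb = node.embb
    return True                   # propagate the adjustment to parent nodes
```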


Thickness and clearance visualization based on distance field of 3D objects

  • Inui, Masatomo;Umezun, Nobuyuki;Wakasaki, Kazuma;Sato, Shunsuke
    • Journal of Computational Design and Engineering
    • /
    • Vol. 2, No. 3
    • /
    • pp.183-194
    • /
    • 2015
  • This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation, so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes in most cases. After the distance field construction, thickness/clearance visualization at a near-interactive rate is achieved.
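
The GPU construction described above amounts, per voxel, to a nearest-distance query against the model's polygons. The brute-force CPU sketch below conveys the computation being parallelized; the triangle-centroid shortcut and the array shapes are simplifying assumptions, not the paper's culling-based algorithm.

```python
import numpy as np

def distance_field(voxel_centres, triangles):
    """voxel_centres: (N, 3) array of voxel centre coordinates;
    triangles: (M, 3, 3) array of triangle vertices.
    Returns the distance from each voxel centre to the nearest triangle
    centroid as a rough stand-in for the exact point-to-triangle distance."""
    centroids = triangles.mean(axis=1)                                   # (M, 3)
    d = np.linalg.norm(voxel_centres[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1)                                                 # (N,)
```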

깊이 맵을 이용한 객체 분리 방법 (Object Segmentation Using Depth Map)

  • 유경민;조용주
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2013년도 추계학술대회
    • /
    • pp.639-640
    • /
    • 2013
  • This study implements a method for locating the region of a desired object so that the object can be rendered with higher quality in DIBR-based multi-view intermediate image synthesis. The method complements the conventional GrabCut approach, which requires the user to specify the region, by finding the bounding box automatically through image processing. After applying the GrabCut algorithm, the histogram of the depth image is used to separate the foreground and background more clearly. We confirmed that this yields better results than the existing method. This paper describes the method and discusses future work.
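
A hedged sketch of the pipeline outlined in the abstract: derive a bounding box automatically (here from a thresholded depth map) and pass it to OpenCV's GrabCut instead of asking the user to draw it. The depth threshold and the closer-is-smaller depth convention are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_with_depth(color_img, depth_map, depth_thresh=128):
    # foreground candidates: pixels closer than the threshold (assumed convention)
    fg = (depth_map < depth_thresh).astype(np.uint8)
    x, y, w, h = cv2.boundingRect(fg)                    # automatic bounding box

    mask = np.zeros(color_img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(color_img, mask, (x, y, w, h), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

    # GrabCut labels: 0/2 = background, 1/3 = foreground
    return np.where((mask == 1) | (mask == 3), 1, 0).astype(np.uint8)
```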


Z-Buffer와 간략화된 모델을 이용한 효율적인 가려지는 물체 제거 기법(Occlusion Culling)에 관한 연구 (A Study on the Efficient Occlusion Culling Using Z-Buffer and Simplified Model)

  • 정성준;이규열;최항순;성우제;조두연
    • 한국CDE학회논문집
    • /
    • 제8권2호
    • /
    • pp.65-74
    • /
    • 2003
  • For virtual reality, virtual manufacturing systems, or simulation-based design, we need to visualize very large and complex 3D models comprising a very large number of polygons. To overcome limited hardware performance and attain smooth real-time visualization, there has been much research on algorithms that reduce the number of polygons to be processed by the graphics hardware. One of these algorithms, occlusion culling, is a method of rejecting objects that are not visible because they are occluded by other objects, and passing only the visible objects to the graphics hardware. Existing occlusion culling algorithms have shortcomings such as long preprocessing times, limitations on occluder shape, or the need for special hardware. In this study, an efficient occlusion culling algorithm is proposed. The proposed algorithm reads and analyzes the Z-buffer of the graphics hardware using Microsoft DirectX and then determines each object's visibility. It speeds up visualization by reading the Z-buffer through DirectX, which can access the hardware directly compared to OpenGL, and by reading only the region onto which each object is projected instead of the whole Z-buffer; it also performs a more exact visibility test by using a simplified model instead of a bounding box. For evaluation, the proposed algorithm was applied to very large polygonal models, and smooth real-time visualization was attained.
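
A simplified sketch of the visibility test described above: read back only the depth-buffer region covered by the object's screen-space footprint and cull the object if everything already drawn there is closer than the object's nearest depth. The depth read-back itself is a placeholder for the DirectX Z-buffer access used in the paper.

```python
import numpy as np

def is_occluded(zbuffer_region, object_min_depth):
    """zbuffer_region: depth values already in the Z-buffer over the object's
    screen-space bounding rectangle (smaller = closer to the camera);
    object_min_depth: the nearest depth of the object's simplified model."""
    return bool(np.all(zbuffer_region < object_min_depth))

# Example: everything already drawn in this region is nearer than the object,
# so the object can be culled before it is sent to the graphics hardware.
region = np.array([[0.20, 0.22], [0.21, 0.19]])
print(is_occluded(region, object_min_depth=0.5))   # True
```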

효율적인 이동 객체의 궤적 색인을 위한 TB-tree 갱신 기법 (TB-tree Update Technique for Efficient Indexing Trajectories of Moving Objects)

  • 고주일;김명근;정원일;김재홍;배해영
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2003년도 가을 학술발표논문집 Vol.30 No.2 (2)
    • /
    • pp.145-147
    • /
    • 2003
  • Objects whose positions change continuously over time are called moving objects. An index is needed to efficiently search the large volume of trajectory data of such moving objects, and the TB-tree is a representative index. However, the conventional TB-tree, based on the R-tree spatial index, must search for the leaf node containing a record's predecessor whenever a record is inserted because of its strict trajectory-preservation policy, and it carries the update overhead of propagating the change of a leaf node's MBB caused by the insertion from that leaf node up to the root node. This paper proposes a TB-tree update technique for efficient indexing of large volumes of trajectory data. The technique first reflects in the tree an Expected Minimum Bounding Box (EMBB) covering the trajectory segments expected to be inserted. Afterwards, instead of updating the MBBs of intermediate nodes on every record insertion, it resets the EMBB only when an inserted record's MBB falls outside the EMBB, and adjusts the intermediate-node MBBs to cover both the MBBs of the actually inserted records and the reset EMBB, reducing the number of MBB adjustments in the TB-tree. In addition, a separate predecessor table structure allows direct access to the leaf node containing the predecessor, reducing the cost of searching for that leaf node during record insertion and thus lowering the overall index update cost.
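
A sketch of the predecessor-table idea described above: keep, for each moving object, a direct reference to the leaf node holding its latest trajectory segment, so inserting the next segment needs no search from the root. The Leaf class and its capacity are simplified assumptions.

```python
class Leaf:
    def __init__(self, capacity=8, predecessor=None):
        self.entries = []
        self.capacity = capacity
        self.predecessor = predecessor       # link that preserves the trajectory

    def is_full(self):
        return len(self.entries) >= self.capacity

predecessor_table = {}                       # object id -> its current leaf node

def insert_segment(obj_id, segment):
    leaf = predecessor_table.get(obj_id)     # direct access instead of a tree search
    if leaf is None or leaf.is_full():
        leaf = Leaf(predecessor=leaf)        # new leaf chained to the previous one
    leaf.entries.append(segment)
    predecessor_table[obj_id] = leaf
```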


딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정 (Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning)

  • 김현우;박상현
    • 로봇학회논문지
    • /
    • Vol. 14, No. 4
    • /
    • pp.357-362
    • /
    • 2019
  • This paper proposes a model and training method that detect objects and estimate their distances in real time from a monocular camera by applying deep learning. It uses the YOLOv2 model, which is applied to autonomous vehicles and robots owing to its fast image processing speed. We modified the loss function and trained the model so that YOLOv2 can detect objects and estimate distances at the same time. A term for learning the distance value z was added to the YOLOv2 loss function alongside the bounding box values x, y, w, h and the classification losses. In addition, the distance term was multiplied by a weighting parameter to balance the learning. We trained the model with object locations and classes recognized by the camera and with distance data measured by lidar, so that it can estimate distances and detect objects from a monocular camera even when the vehicle is going up or down a hill. To evaluate the performance of object detection and distance estimation, mAP (mean Average Precision) and adjusted R-squared were used, and the performance was compared with previous research papers. In addition, we compared the FPS (frames per second) of the original YOLOv2 model with that of our model to measure speed.
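
A hedged sketch of the modified loss described above: the usual bounding-box regression terms plus an extra, weighted term for the distance value z. The weighting factor and the use of squared error are assumptions about the exact formulation.

```python
import torch

def box_distance_loss(pred, target, lambda_dist=1.0):
    """pred, target: tensors whose last dimension holds (x, y, w, h, z)."""
    box_loss = torch.sum((pred[..., :4] - target[..., :4]) ** 2)    # x, y, w, h
    dist_loss = torch.sum((pred[..., 4] - target[..., 4]) ** 2)     # distance z
    return box_loss + lambda_dist * dist_loss    # lambda_dist balances the distance term
```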

Real-Time Earlobe Detection System on the Web

  • Kim, Jaeseung;Choi, Seyun;Lee, Seunghyun;Kwon, Soonchul
    • International journal of advanced smart convergence
    • /
    • Vol. 10, No. 4
    • /
    • pp.110-116
    • /
    • 2021
  • This paper proposes a real-time earlobe detection system using deep learning on the web. Existing deep learning-based detection methods usually find independent objects such as cars, mugs, cats, and people. We propose a way to receive an image through the camera of the user's device in a web environment and detect the earlobe on the server. First, we take a picture of the user's face with the device camera on the web so that the user's ears are visible. The photographed face is then sent to the server to find the earlobe. Based on the detection results, an earring model is rendered on the user's earlobe on the web. We trained an existing YOLOv5 model using a dataset of about 200 images annotated with bounding boxes on the earlobe, and estimated the position of the earlobe with the trained deep learning model. Through this process, we propose a real-time earlobe detection system on the web. The proposed method detects earlobes in real time and loads the 3D models on the web in real time.
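
A minimal sketch of server-side inference with a fine-tuned YOLOv5 model loaded through torch.hub, the standard route for custom YOLOv5 weights. The weight file name and confidence threshold are assumptions; the actual training setup is the authors'.

```python
import torch

# "earlobe.pt" is a hypothetical weight file standing in for the trained model
model = torch.hub.load("ultralytics/yolov5", "custom", path="earlobe.pt")
model.conf = 0.5                                   # confidence threshold (assumed)

def detect_earlobes(image):
    """image: file path, URL, or numpy array of the user's face photo."""
    results = model(image)
    # each detection row: x1, y1, x2, y2, confidence, class
    return results.xyxy[0].tolist()
```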