• Title/Summary/Keyword: 3D view


Assembly Part Image-based 3D Shape Retrieval using Attentional View Pooling (Attentional View Pooling을 이용한 조립 부품 이미지 기반 3 차원 물체 검색)

  • Lee, Eun Ji;Kang, Isaac;Kim, Min Woo;Park, Seon Ji;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.72-75 / 2020
  • 3D CAD model matching for assembly part images has recently become necessary with advances in robotic assembly. Image-based 3D model retrieval has been studied, but prior approaches dealt with RGB images [5] or sketch images [1], whose characteristics differ from those of furniture part images. Deep-learning studies on sketch-based 3D object retrieval mostly extract features from view images rendered from the 3D model at multiple angles and pool them into a single feature. However, the existing view pooling is a simple averaging scheme, which is limited in reflecting which views matter for a given part image. This paper therefore proposes attentional view pooling for assembly-part-image-based 3D object retrieval, which attends to different view images depending on the query part image. In addition, since assembly part data provide only one CAD model per class, training data are extremely scarce, so a training data augmentation method is also proposed. Experiments on 11 kinds of chair parts demonstrate the performance of the proposed method.

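A minimal PyTorch sketch of the attention-weighted view pooling described in the abstract above: each rendered view's feature is weighted by its similarity to the query part-image feature instead of being averaged. The function name, the feature dimensions, and the use of plain dot-product attention are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attentional_view_pooling(query_feat, view_feats):
    """query_feat: (B, D) feature of the query part image.
    view_feats: (B, V, D) features of V rendered views of a CAD model.
    Returns one pooled feature of shape (B, D)."""
    # Attention scores: similarity between the query and each view feature.
    scores = torch.einsum('bd,bvd->bv', query_feat, view_feats)
    weights = F.softmax(scores, dim=1)                 # (B, V)
    # Weighted sum of view features replaces the simple average.
    return torch.einsum('bv,bvd->bd', weights, view_feats)

# Example: 2 queries, 12 rendered views, 256-dimensional features.
pooled = attentional_view_pooling(torch.randn(2, 256), torch.randn(2, 12, 256))
print(pooled.shape)  # torch.Size([2, 256])
```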

A Study on 3D Reconstruction of Urban Area

  • Park Y. M.;Kwon K. R.;Lee K. W.
    • Proceedings of the KSRS Conference / 2005.10a / pp.470-473 / 2005
  • This paper proposes a method for reconstructing the shape and color information of 3-dimensional buildings. The proposed method performs range scanning with a laser range finder and maps color information from image coordinates to the laser coordinates using a CCD camera fixed on the laser range finder. We also construct a 'Far-View' from a high-resolution satellite image: building contours are extracted and building heights are obtained from a DEM. When the user selects a region of the 'Far-View', a detailed 3D reconstruction of the buildings in that region is shown. The results can be applied to city planning, 3D-environment games, movie backgrounds, and so on.

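The color-mapping step described above (attaching CCD-camera color to laser range points) amounts to a standard pinhole projection. Below is a hedged sketch assuming known calibration between the laser and camera frames; the intrinsics K and the extrinsics R, t are assumed inputs, not values from the paper.

```python
import numpy as np

def colorize_laser_point(p_laser, image, K, R, t):
    """p_laser: (3,) point in laser coordinates; image: HxWx3 color image;
    K: 3x3 camera intrinsics; R, t: laser-to-camera rotation and translation."""
    p_cam = R @ p_laser + t                    # laser -> camera coordinates
    if p_cam[2] <= 0:                          # behind the camera
        return None
    uvw = K @ p_cam                            # perspective projection
    u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
    h, w, _ = image.shape
    if 0 <= v < h and 0 <= u < w:
        return image[v, u]                     # color attached to the 3D point
    return None                                # projects outside the image
```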

A PIM/PSM Component Modeling Technique Based on 2+1 View Integrated Metamodel (2+1 View 통합 메타모델 기반 PIM/PSM 컴포넌트 모델링 기법)

  • Song, Chee-Yang;Cho, Eun-Sook
    • The KIPS Transactions: Part D / v.16D no.3 / pp.381-394 / 2009
  • Model-driven approaches such as MDA have been applied to improve the reusability of the artifacts created during software modeling. However, hierarchical and systematic MDA-based development techniques using UML are still immature, so MDA modeling with high consistency and reusability based on an MDA metamodel has not been realized. To solve this problem, this paper proposes an MDA (PIM/PSM) component modeling technique using a 2+1 view integrated metamodel. First, a meta-architecture view model is defined that represents both a development-process view and an MVC view. Then, hierarchical integrated metamodels are defined separately per view, for the modeling process and for MVC, at the metamodel level within the hierarchy of the defined meta-architecture view model. These metamodels are defined hierarchically by layering the modeling elements of UML models and GUI models into PIM and PSM. The proposed metamodel is applied to an ISMS application system as an MDA-based component modeling case study. Through this approach, component models can be produced with consistency and hierarchy corresponding to the development process and the MVC structure, which improves the independence and reusability of the models.

3D Reconstruction of an Indoor Scene Using Depth and Color Images (깊이 및 컬러 영상을 이용한 실내환경의 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.53-61 / 2006
  • In this paper, we propose a novel method for 3D reconstruction of an indoor scene using a multi-view camera. Numerous disparity estimation algorithms have been developed, each with its own pros and cons, so the available depth images vary widely in character. Here we deal with generating a 3D surface from several 3D point clouds acquired with a generic multi-view camera. First, a 3D point cloud is estimated based on the spatio-temporal properties of several 3D point clouds. Second, the estimated 3D point clouds acquired from two viewpoints are projected onto the same image plane to find correspondences, and registration is performed by minimizing the error over these correspondences. Finally, a surface is created by fine-tuning the 3D coordinates of the point clouds acquired from several viewpoints. The proposed method reduces computational complexity by searching for corresponding points in the 2D image plane, and it remains effective even when the precision of the 3D point clouds is relatively low, by exploiting correlation with neighboring points. Furthermore, an indoor environment can be reconstructed from depth and color images captured at several positions with the multi-view camera. The reconstructed model can be used for navigation and interaction in virtual environments as well as for Mediated Reality (MR) applications.

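The registration step mentioned above, minimizing the error over correspondences between point clouds projected onto a common image plane, is commonly solved with the SVD-based (Kabsch) rigid alignment shown below. The sketch assumes correspondences are already established and is a generic solution, not necessarily the paper's exact procedure.

```python
import numpy as np

def rigid_register(src, dst):
    """src, dst: (N, 3) corresponding 3D points.
    Returns R (3x3), t (3,) minimizing ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```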

Sequential Point Cloud Generation Method for Efficient Representation of Multi-view plus Depth Data (다시점 영상 및 깊이 영상의 효율적인 표현을 위한 순차적 복원 기반 포인트 클라우드 생성 기법)

  • Kang, Sehui;Han, Hyunmin;Kim, Binna;Lee, Minhoe;Hwang, Sung Soo;Bang, Gun
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.166-173 / 2020
  • Multi-view images, which are widely used to provide free-viewpoint services, can enhance the quality of synthesized views as the number of views increases. However, an efficient representation method is needed because of the tremendous amount of data. In this paper, we propose a method for generating point cloud data for the efficient representation of multi-view color and depth images. The proposed method reconstructs point clouds sequentially at each viewpoint as a way of deleting duplicate data. A 3D point of a point cloud is projected into the frame being reconstructed, and the color and depth of the 3D point are compared with those of the pixel where it is projected. When the 3D point and the pixel are similar enough, the pixel is not used to generate a new 3D point. In this way, the number of reconstructed 3D points is reduced. Experimental results show that the proposed method generates a point cloud that can reproduce the multi-view images while minimizing the number of 3D points.
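The duplicate-removal idea above can be sketched as follows: already reconstructed 3D points are projected into the view currently being processed, and a pixel is skipped when a projected point is close enough in depth and color. The thresholds, names, and per-point loop are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def mark_covered_pixels(points, colors, depth_map, color_img, K, R, t,
                        depth_tol=0.01, color_tol=10.0):
    """Returns a boolean (H, W) mask; True means the pixel is already
    represented by an existing 3D point and need not create a new one."""
    h, w = depth_map.shape
    covered = np.zeros((h, w), dtype=bool)
    for p, c in zip(points, colors):
        q = K @ (R @ p + t)                    # project into the current view
        if q[2] <= 0:
            continue
        u, v = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
        if not (0 <= v < h and 0 <= u < w):
            continue
        depth_close = abs(depth_map[v, u] - q[2]) < depth_tol
        color_close = np.all(np.abs(color_img[v, u].astype(float) - c) < color_tol)
        if depth_close and color_close:
            covered[v, u] = True
    return covered
```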

Illumination Compensation Algorithm based on Segmentation with Depth Information for Multi-view Image (깊이 정보를 이용한 영역분할 기반의 다시점 영상 조명보상 기법)

  • Kang, Keunho;Ko, Min Soo;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.4 / pp.935-944 / 2013
  • In this paper, a new illumination compensation algorithm based on segmentation with depth information is proposed to improve the coding efficiency of multi-view images. In the proposed algorithm, a reference image is first segmented into several layers, each composed of objects with similar depth values. Objects within the same layer are then separated from each other by labeling each connected region in the layered image. Next, the labeled reference depth image is warped to the position of the distorted view using a 3D warping algorithm. Finally, an illumination compensation algorithm is applied to each pair of matched regions in the warped reference view and the distorted view. The occlusion regions that arise from 3D warping are compensated by a global compensation method. Experimental results confirm that the proposed algorithm improves coding efficiency.
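As a rough illustration of per-region compensation, the sketch below applies a generic gain/offset model that matches the mean and standard deviation of the distorted view's region to the matched region of the warped reference; the actual compensation model in the paper may differ.

```python
import numpy as np

def compensate_region(distorted, reference, mask, eps=1e-6):
    """distorted, reference: float images of the same size;
    mask: boolean mask of one matched region."""
    d, r = distorted[mask], reference[mask]
    gain = r.std() / (d.std() + eps)           # scale to match variation
    offset = r.mean() - gain * d.mean()        # shift to match brightness
    out = distorted.copy()
    out[mask] = gain * d + offset
    return out
```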

Eye-Catcher : Real-time 2D/3D Mixed Contents Display System

  • Chang, Jin-Wook;Lee, Kyoung-Il;Park, Tae-Soo
    • Proceedings of the Korean Information Display Society Conference / 2008.10a / pp.51-54 / 2008
  • In this paper, we propose a practical method for displaying 2D/True3D mixed contents in real time. Many companies have recently released 3D displays, but the cost of producing True3D contents is still very high. Since a large amount of 2D content already exists and it is more effective to mix True3D objects into 2D contents than to produce True3D contents from scratch, there is growing interest in mixing 2D/True3D contents. Moreover, real-time 2D/True3D mixing is helpful for 3D displays because the scenario of the contents can easily be changed at playback time by interactively adjusting the 3D effects and the motion of the True3D objects. In our system, True3D objects are rendered into multiple view-point images, which are composed with the 2D contents using depth information and then multiplexed with pre-generated view masks. All processing is performed on a graphics processor, and we were able to play 2D/True3D mixed contents at Full HD resolution in real time on an ordinary graphics processor.

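The final multiplexing step, selecting each output pixel from one of the composed view images according to a pre-generated view mask, can be sketched as below. The integer-mask representation and array shapes are illustrative assumptions; the paper performs this step on the GPU.

```python
import numpy as np

def multiplex_views(views, view_mask):
    """views: (V, H, W, 3) composed view images;
    view_mask: (H, W) integer map with values in [0, V).
    Returns the (H, W, 3) multiplexed frame for the 3D display."""
    V, H, W, _ = views.shape
    rows, cols = np.mgrid[0:H, 0:W]            # per-pixel coordinates
    return views[view_mask, rows, cols]        # pick each pixel's source view
```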

Adaptive Block-based Depth-map Coding Method (적응적 블록기반 깊이정보 맵 부호화 방법)

  • Kim, Kyung-Yong;Park, Gwang-Hoon;Suh, Doug-Young
    • Journal of Broadcast Engineering / v.14 no.5 / pp.601-615 / 2009
  • This paper proposes an efficient depth-map coding method for generating virtual-view images in 3D-Video. Virtual-view images can be generated by view interpolation based on the depth-map of the image. Conventional video coding methods such as H.264/AVC have been used for depth maps, but they do not consider the image characteristics of the depth-map. Therefore, this paper proposes an adaptive depth-map coding method that selects, on a block-by-block basis, between the H.264/AVC coding scheme and the proposed gray-coded bit-plane-based coding scheme. This improves the coding efficiency of the depth-map data. Simulation results show that, compared with the H.264/AVC coding scheme, the proposed method achieves average BD-rate savings of 7.43% and average BD-PSNR gains of 0.5 dB. It also improves the subjective picture quality of virtual-view images synthesized from the decoded depth-maps.
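The gray-coded bit-plane representation on which the proposed block mode is based can be sketched as follows; the block contents and bit depth are illustrative. Gray coding makes neighboring depth values differ in fewer bit planes, which helps bit-plane coding.

```python
import numpy as np

def gray_code_bit_planes(depth_block, bits=8):
    """depth_block: uint8 block of depth values.
    Returns the bit planes of the Gray-coded block, most significant first."""
    gray = depth_block ^ (depth_block >> 1)    # binary -> Gray code
    return [(gray >> b) & 1 for b in range(bits - 1, -1, -1)]

# 127 (01111111) and 128 (10000000) differ in all 8 binary bit planes,
# but their Gray codes (01000000 and 11000000) differ in only one plane.
planes = gray_code_bit_planes(np.array([[127, 128]], dtype=np.uint8))
```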

Development of a Three Dimensional Last Data Generation System using FFD (FFD를 이용한 3차원 라스트 데이터 생성 시스템)

  • 박인덕;임창현;김시경
    • Journal of Institute of Control, Robotics and Systems / v.9 no.9 / pp.700-706 / 2003
  • This paper presents a 3D last design system that generates 3-dimensional last data using the FFD (Free Form Deformation) method. The proposed system uses lattice control points as the deformation factor to convert measured 3D point-cloud foot data into 3D point-cloud last data. The deformation factor of the FFD is derived from conventional last design practice, and the FFD lattice is constructed from the bottom and lateral views of the measured 3D point-cloud foot data. In addition, the control points of the FFD lattice are placed at anatomical points of the foot. The deformed 3D last obtained by the proposed FFD is saved as 3D DXF data. Experimental results, rendered in an OpenGL window, demonstrate that the proposed system produces decent 3D last data.
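The FFD evaluation named in the abstract blends displaced lattice control points with trivariate Bernstein weights. Below is a minimal sketch that assumes the foot point has already been expressed in local lattice coordinates (s, t, u) in [0, 1]^3; the lattice size and control-point placement are left to the caller.

```python
import numpy as np
from math import comb

def bernstein(n, i, x):
    """Bernstein basis polynomial B_{i,n}(x)."""
    return comb(n, i) * (x ** i) * ((1 - x) ** (n - i))

def ffd_deform(point_stu, control_points):
    """point_stu: (s, t, u) local coordinates in [0, 1]^3.
    control_points: (l+1, m+1, n+1, 3) lattice of (possibly displaced) controls.
    Returns the deformed 3D position of the point."""
    l, m, n = (d - 1 for d in control_points.shape[:3])
    s, t, u = point_stu
    out = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                out += w * control_points[i, j, k]
    return out
```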

3D View Synthesis with Feature-Based Warping

  • Hu, Ningning;Zhao, Yao;Bai, Huihui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.11 / pp.5506-5521 / 2017
  • Three-dimensional video (3DV), as the new generation of video format standard, can provide viewers with a vivid sense of depth and a realistic stereo impression, and view synthesis has become an important issue for 3DV applications. Unlike conventional depth-based methods, in this paper we propose a new view synthesis algorithm that exploits the correlation among views and warps in the image domain only. There are two main contributions. One is the incorporation of Sobel edge points into feature extraction and matching, which yields a more stable homography and a visually more comfortable synthesized view than using SIFT points alone. The other is a novel image blending method that produces a better synthesized image. Experimental results demonstrate that the proposed method improves synthesis quality both subjectively and objectively.
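The feature-matching and homography-fitting step can be sketched with standard OpenCV calls, as below. Augmenting the keypoint set with Sobel edge points, as the paper proposes, is omitted here, so this is generic SIFT + RANSAC usage rather than the paper's exact pipeline.

```python
import cv2
import numpy as np

def estimate_homography(img_left, img_right):
    """Estimate a homography mapping img_left onto img_right from matched
    SIFT keypoints, using RANSAC to reject outliers."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```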