• Title/Summary/Keyword: 3D object view

Search Result 178

Cooperative recognition using multi-view images

  • Kojoh, Toshiyuki;Nagata, Tadashi;Zha, Hong-Bin
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1993.10b
    • /
    • pp.70-75
    • /
    • 1993
  • We present a method of 3-D object recognition using multi-view images. The recognition process proceeds as follows. Object models, serving as prior knowledge, are generated and stored on a computer. To extract features of the object to be recognized, three CCD cameras are placed at the vertices of a regular triangle and capture images of the object. The object is recognized by comparing the extracted features with the generated models. In general, 3-D object recognition is difficult because of the following problems: how to establish correspondence between stereo images, how to generate and store object models suited to the recognition process, and how to effectively collate the information obtained from the input images. We resolve these problems by collating on the basis of viewpoint-independent features, by generating object models through enumerating candidate models at an early recognition stage, and by executing a tight cooperative process among the results obtained by analyzing each image. We have carried out experiments on real images using polyhedral objects as the recognition targets. The results demonstrate the usefulness of the proposed method.
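The cooperative collation described in this abstract can be illustrated with a toy sketch: each view contributes the set of candidate models consistent with its viewpoint-independent features, and the model supported by the most views wins. The feature names and model table below are hypothetical, not the paper's actual representation.

```python
# Toy cooperative recognition across three views (hypothetical features).
from collections import Counter

# Viewpoint-independent features per stored model (assumed toy values).
MODELS = {
    "cube":    {"faces": 6, "parallel_edge_pairs": 6},
    "pyramid": {"faces": 5, "parallel_edge_pairs": 2},
}

def candidates(view_features):
    """Return the models consistent with the features seen in one view."""
    return {name for name, f in MODELS.items()
            if all(f.get(k) == v for k, v in view_features.items())}

def recognize(views):
    """Cooperatively combine candidate sets from all cameras:
    the model supported by the most views wins."""
    votes = Counter()
    for v in views:
        votes.update(candidates(v))
    best, _ = votes.most_common(1)[0]
    return best

# Three cameras each report partial, viewpoint-independent evidence.
views = [{"faces": 6}, {"parallel_edge_pairs": 6}, {"faces": 6}]
print(recognize(views))  # cube
```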


Development of Automatic System for 3D Visualization of Biological Objects

  • Choi, Tae Hyun;Hwang, Heon;Kim, Chul Su
    • Agricultural and Biosystems Engineering
    • /
    • v.1 no.2
    • /
    • pp.95-99
    • /
    • 2000
  • Nondestructive methods such as ultrasonic and magnetic resonance imaging systems have many advantages but are still very expensive. They also do not provide exact color information and may miss some details. If some destruction of the biological object is permitted in order to obtain interior and exterior information, constructing a 3D image from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D model generator was developed. The system was composed of three modules. The first was the object handling and image acquisition module, which fed and sliced the object sequentially, kept the paraffin cool so that it remained solid, and captured the sectional images consecutively. The second was the system control and interface module, which controlled the actuators for feeding, slicing, and image capturing. The last was the image processing and visualization module, which processed the series of acquired sectional images and generated the 3D volumetric model. The handling module consisted of a gripper, which grasped and fed the object, and a cutting device, which cut the object by moving the cutting edge forward and backward. Sliced sectional images were acquired and saved as bitmap files. The 2D sectional images were segmented from the paraffin background and used to generate the 3D model. Once the 3D model was constructed on the computer, the user could manipulate it with various transformation methods such as translation, rotation, and scaling, including arbitrary sectional views.
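The core of the visualization module, segmenting each section from the paraffin background and stacking the sections into a volume, can be sketched in a few lines of NumPy; the threshold value and array sizes below are assumed for illustration.

```python
# Stack background-segmented 2D sections into a 3D volume (NumPy sketch).
import numpy as np

def segment(slice_img, paraffin_level=200):
    """Mask out bright paraffin pixels, keeping the darker object."""
    return np.where(slice_img < paraffin_level, slice_img, 0)

def build_volume(slices):
    """Stack consecutively acquired 2D sections into a (z, y, x) volume."""
    return np.stack([segment(s) for s in slices], axis=0)

# Three fake 4x4 sections: object value 50 on a paraffin background of 255.
sec = np.full((4, 4), 255, dtype=np.uint8)
sec[1:3, 1:3] = 50
volume = build_volume([sec, sec, sec])
print(volume.shape)       # (3, 4, 4)
print(int(volume.max()))  # 50 (paraffin background removed)
```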


The Analysis of View and Daylights for the Design of Public Housing Complexes Using a Residential Environment Analysis System Integrated into a CAD System (주거환경분석시스템의 CAD 시스템 통합을 통한 공동주택단지설계 시 일조 및 조망분석에 관한 연구)

  • Park, Soo-Hoon;Ryu, Jeong-Won
    • Korean Journal of Computational Design and Engineering
    • /
    • v.12 no.2
    • /
    • pp.137-145
    • /
    • 2007
  • This paper concerns the implementation of a residential environment analysis program for the design and analysis of public housing complexes, in which view and daylight analysis processes are automated and integrated into the existing design routine to improve design efficiency. Considering current architectural design trends, this paper chooses ArchiCAD as the CAD platform, which embodies concepts such as integrated object-oriented CAD, the virtual building, and BIM. The residential environment analysis system consists of three components. The first is the 3D modeling part, which defines 3D form information for external geographic contour models, site models, and the interior/exterior of apartment buildings. The second is the parametric library part, which handles the design parameters for view and daylight analysis. The last is the user interface for the input/output and integration of data for the environment analysis. The daylight analysis produces rendered images as well as daylight reports and grades per time period, and performs floor shadow calculations; it separates site-only analysis from analysis that also includes exterior environmental parameters. The view analysis considers horizontal and vertical view angles to produce a view image from each unit and uses a bitmap analysis method to determine the opening ratio, scenery ratio, and void ratio. This system can be expected to offer better performance and precision than the existing 2D-drawing-based view and daylight analysis methods, and it overcomes the existing one-way flow of design information from 3D form to analysis reports, so that site design modifications are automatically reflected in the analysis results. Each part is developed as a module so that further integration and extension into other related estimation and construction management systems are possible.
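The bitmap analysis mentioned for the view component can be sketched as simple pixel counting over a labeled view image; the label encoding below (0 = void/sky, 1 = scenery, 2 = obstruction) is an assumption for illustration, not the system's actual format.

```python
# Pixel-counting sketch of bitmap-based view analysis (assumed labels).
import numpy as np

def view_ratios(label_map):
    """Compute per-category pixel ratios from a labeled view bitmap."""
    total = label_map.size
    return {
        "void_ratio":    np.count_nonzero(label_map == 0) / total,
        "scenery_ratio": np.count_nonzero(label_map == 1) / total,
    }

# A 3x4 toy view bitmap rendered from one apartment unit.
labels = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 2],
                   [2, 2, 2, 2]])
r = view_ratios(labels)
print(round(r["scenery_ratio"], 3))  # 0.333
print(round(r["void_ratio"], 3))     # 0.25
```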

Shape Recognition of 3-D Object Using Texels (텍셀을 이용한 3차원 물체의 형상 인식)

  • Kim, Do-Nyun;Cho, Dong-Sub
    • Proceedings of the KIEE Conference
    • /
    • 1990.11a
    • /
    • pp.460-464
    • /
    • 1990
  • Texture provides an important source of information about the local orientation of visible surfaces. An important task in many computer vision systems is the reconstruction of three-dimensional depth information from two-dimensional images. Here, the surface orientation of each texel is classified by an artificial neural network. This classification-based approach to recognizing the shape of a 3D object requires less development time than conventional methods. The segmentation problem is assumed to be solved, and the surface in view is assumed to be smooth and covered with repeated texture elements. In this study, the 3D shape is reconstructed using an interpolation method.
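The final step, reconstructing the 3D shape by interpolation, can be sketched along one scanline: depths inferred from classified texel orientations at sparse positions are interpolated to a dense profile. The sample positions and depth values below are invented for illustration.

```python
# Interpolate a dense depth profile from sparse per-texel depth estimates.
import numpy as np

texel_x = np.array([0.0, 5.0, 10.0])     # texel centers along a scanline
texel_depth = np.array([1.0, 3.0, 2.0])  # depth inferred per texel orientation

dense_x = np.arange(0, 11)
dense_depth = np.interp(dense_x, texel_x, texel_depth)
print(dense_depth[5])  # 3.0 (matches the texel sample at x = 5)
```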


CUDA-based Object Oriented Programming Techniques for Efficient Parallel Visualization of 3D Content (3차원 콘텐츠의 효율적인 병렬 시각화를 위한 CUDA 환경 기반 객체 지향 프로그래밍 기법)

  • Park, Tae-Jung
    • Journal of Digital Contents Society
    • /
    • v.13 no.2
    • /
    • pp.169-176
    • /
    • 2012
  • This paper presents a parallel object-oriented programming (OOP) platform for efficient visualization of three-dimensional content in CUDA environments. To this end, it discusses the features and limitations of implementing C++ object-oriented code with CUDA and proposes solutions. It also shows how to implement a 3D parallel visualization platform based on the MVC (Model/View/Controller) design pattern, and provides sample implementations for integral MLS (iMLS) and signed distance fields (SDFs) based on Marching Cubes and ray tracing. The proposed approach enables GPU parallel processing simply by implementing simple interfaces. Developers can therefore expect the general benefits of OOP techniques, including abstraction and inheritance. Although only two specific samples are implemented in this paper, the approach should be widely applicable to general computer graphics problems.
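Since this listing contains no code, here is a minimal Python sketch (standing in for the paper's C++/CUDA classes) of the MVC split the abstract describes; all class and method names are illustrative, not the paper's API.

```python
# Illustrative MVC split for a 3D visualization platform.
from abc import ABC, abstractmethod

class Model(ABC):
    """Owns the 3D content; subclasses supply the geometry."""
    @abstractmethod
    def vertices(self): ...

class SDFModel(Model):  # e.g. a signed distance field sampled to vertices
    def vertices(self):
        return [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]

class View:
    """Renders a model; stands in for the GPU (CUDA) kernel launch."""
    def render(self, model):
        return f"rendered {len(model.vertices())} vertices"

class Controller:
    """Wires a model to a view behind a simple interface."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def run(self):
        return self.view.render(self.model)

print(Controller(SDFModel(), View()).run())  # rendered 2 vertices
```

The point of the split is that a new content type only has to implement the `Model` interface to gain rendering, mirroring the inheritance benefit the abstract claims.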

3D SCENE EDITING BY RAY-SPACE PROCESSING

  • Lv, Lei;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.732-736
    • /
    • 2009
  • In this paper we focus on the EPI (Epipolar-Plane Image), the horizontal cross section of Ray-Space, and propose a novel method that selects desired objects and edits scenes using multi-view images. On an EPI acquired by camera arrays uniformly distributed along a line, every object is represented as a straight line, and the slope of each line is determined by the distance between the object and the camera plane. Detecting a straight line of a specific slope and removing it therefore means that an object at a specific depth has been detected and removed. We propose a scheme that makes a layer of a specific slope compete with other layers, instead of extracting layers sequentially from front to back. This enables effective removal of obstacles, object manipulation, and the synthesis of a clearer 3D scene containing only what we want to see.
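The depth-to-slope relation on the EPI can be demonstrated with a toy shear-and-score test: shearing the EPI by the correct slope makes an object's line collapse into a constant column, so the column variance drops. This only illustrates the principle; it is not the paper's layer-competition scheme.

```python
# Detect an EPI line of a given slope by shearing and scoring column variance.
import numpy as np

def shear_score(epi, slope):
    """Lower score = columns become constant after shearing, i.e. the slope
    matches a line (an object at the corresponding depth) in the EPI."""
    rows, _ = epi.shape
    sheared = np.stack([np.roll(epi[r], -int(round(slope * r)))
                        for r in range(rows)])
    return float(sheared.var(axis=0).mean())

# Toy grayscale EPI with one diagonal line of slope 1 on a zero background.
epi = np.zeros((4, 8))
for r in range(4):
    epi[r, (2 + r) % 8] = 1.0

print(shear_score(epi, 1.0))  # 0.0 (slope matches the object's depth)
print(shear_score(epi, 1.0) < shear_score(epi, 0.0))  # True
```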


Recent Technologies for the Acquisition and Processing of 3D Images Based on Deep Learning (딥러닝기반 입체 영상의 획득 및 처리 기술 동향)

  • Yoon, M.S.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.5
    • /
    • pp.112-122
    • /
    • 2020
  • In 3D computer graphics, a depth map is an image that provides information about the distance from the viewpoint to the subject's surface. Stereo sensors, depth cameras, and imaging systems that use an active illumination source and a time-resolved detector can perform accurate depth measurements with their own light sources. The 3D image information obtained through depth maps is useful in 3D modeling, autonomous vehicle navigation, object recognition and remote gesture detection, resolution-enhanced medical imaging, aviation and defense technology, and robotics. In addition, depth map information is important for extracting and restoring multi-view images and for extracting the phase information required for digital hologram synthesis. This study reviews recent research trends in deep-learning-based 3D data analysis methods and in depth map extraction using convolutional neural networks. It further focuses on 3D image processing technology related to digital holograms and multi-view image extraction/reconstruction, which are becoming more popular as hardware computing power rapidly increases.
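The stereo geometry behind the depth maps discussed here reduces to the standard relation Z = fB/d (focal length f in pixels, baseline B, disparity d in pixels); the camera values below are assumed for illustration only.

```python
# Depth from disparity, Z = f * B / d (assumed camera parameters).
def depth_from_disparity(d, focal_px=672.0, baseline_m=0.125):
    """Return metric depth for a pixel disparity d."""
    return focal_px * baseline_m / d

print(depth_from_disparity(42.0))  # 2.0 (metres)
```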

RAY-SPACE INTERPOLATION BY WARPING DISPARITY MAPS

  • Mori, Yuji;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.583-587
    • /
    • 2009
  • In this paper we propose a new method of Depth-Image-Based Rendering (DIBR) for Free-viewpoint TV (FTV). In the proposed method, virtual viewpoint images are rendered by 3D warping instead of by estimating view-dependent depth, since depth estimation is usually costly and it is desirable to eliminate it from the rendering process. However, 3D warping causes problems that do not occur with view-dependent depth estimation, such as holes in the rendered image and depth discontinuities on object surfaces at the virtual image plane; these discontinuities cause artifacts in the rendered image. In this paper, these problems are solved by reconstructing the disparity information at the virtual camera position from the two neighboring real cameras. In the experiments, high-quality arbitrary viewpoint images were obtained.
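The warping-and-merging idea can be sketched in one dimension: each real camera's disparity samples are forward-shifted toward the virtual viewpoint, nearer samples win on collision, and holes left by one camera are filled from the other. The arrays and blending rule below are toy assumptions, not the paper's exact algorithm.

```python
# 1-D sketch of disparity warping to a virtual viewpoint with hole filling.
import numpy as np

HOLE = -1.0

def warp(disp, alpha):
    """Shift each sample by alpha * disparity toward the virtual viewpoint;
    on collision the nearer (larger-disparity) sample wins."""
    out = np.full_like(disp, HOLE)
    for x, d in enumerate(disp):
        tx = int(round(x + alpha * d))
        if 0 <= tx < len(disp) and d > out[tx]:
            out[tx] = d
    return out

left = np.array([2.0, 2.0, 2.0, 0.0, 0.0])   # disparity row, left camera
right = np.array([0.0, 0.0, 2.0, 2.0, 2.0])  # disparity row, right camera
mid_l = warp(left, 0.5)    # virtual camera halfway between the two
mid_r = warp(right, -0.5)
merged = np.where(mid_l != HOLE, mid_l, mid_r)  # fill holes from other view
print(merged.tolist())  # [0.0, 2.0, 2.0, 2.0, 0.0]
```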


Implementation of Web3D using VR Authoring Tool (가상현실 저작툴을 이용한 Web3D 구현)

  • 김성태;김윤호;송학현;류광렬
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.239-242
    • /
    • 2003
  • Although most modern Web content is delivered through passive 2D interfaces, the development of high-speed Internet access and 3D compression technology has gradually shifted the Web toward VR-based Web3D. In this paper, we review recent VR technology and present an implementation of Web3D based on VR authoring tools.


A Mode Selection Algorithm using Scene Segmentation for Multi-view Video Coding (객체 분할 기법을 이용한 다시점 영상 부호화에서의 예측 모드 선택 기법)

  • Lee, Seo-Young;Shin, Kwang-Mu;Chung, Ki-Dong
    • Journal of KIISE:Information Networking
    • /
    • v.36 no.3
    • /
    • pp.198-203
    • /
    • 2009
  • With the growing demand for multimedia services and advances in display technology, new applications for 3-D scene communication have emerged. While the multi-view video used by these emerging applications can provide users with a more realistic scene experience, the drastic increase in bandwidth is a major problem to solve. In this paper, we propose a fast prediction mode decision algorithm that can significantly reduce the complexity and time consumption of the encoding process. It is based on object segmentation, which can effectively identify fast-moving foreground objects. As a fast-moving foreground object is more likely to be encoded in the view-directional prediction mode, we can limit motion-compensated coding for such blocks accordingly. As a result, the proposed algorithm achieved average time savings of up to 45% without significant loss in the quality of the image sequence.
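The mode-selection rule the abstract describes can be sketched as follows: blocks on the segmented fast-moving foreground are steered to view-directional (inter-view) prediction and the rest to temporal prediction. The motion threshold and mode labels are illustrative, not the paper's actual parameters.

```python
# Segmentation-driven prediction-mode selection per macroblock (toy sketch).
import numpy as np

def segment_foreground(motion_mag, thresh=4.0):
    """Simple motion-based segmentation: fast-moving blocks are foreground."""
    return motion_mag > thresh

def select_modes(foreground_mask):
    """Foreground blocks -> view-directional prediction, rest -> temporal."""
    return np.where(foreground_mask, "view", "temporal")

# Per-macroblock motion magnitudes for a 2x2 toy frame.
motion = np.array([[1.0, 6.0],
                   [7.0, 0.5]])
modes = select_modes(segment_foreground(motion))
print(modes.tolist())  # [['temporal', 'view'], ['view', 'temporal']]
```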