• Title/Summary/Keyword: 3D view


View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.507-520 / 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering is meant to generate synthetic images by processing the camera view with a graphics engine, little has been reported on how to feed the given images and depth information into the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space from camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views, together with their depth images, in real time.
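The geometric core described above — back-projecting a depth image into 3D using the camera parameters and reprojecting the points into a virtual view — can be sketched roughly as follows (a minimal NumPy illustration with made-up intrinsics, not the paper's OpenGL pipeline):

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map into 3D camera-space points.
    depth : (H, W) array of metric depths
    K     : (3, 3) camera intrinsic matrix
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix          # normalized viewing rays
    pts = rays * depth.reshape(1, -1)      # scale each ray by its depth
    return pts                             # 3 x N points

def project(pts, K, R, t):
    """Project 3D points into a virtual camera with pose (R, t)."""
    cam = R @ pts + t.reshape(3, 1)
    uv = K @ cam
    return uv[:2] / uv[2]

# Hypothetical example: a flat plane at 2 m seen by a toy camera.
K = np.array([[1.0, 0, 2.0], [0, 1.0, 2.0], [0, 0, 1.0]])
depth = np.full((4, 4), 2.0)
pts = backproject(depth, K)
uv = project(pts, K, np.eye(3), np.zeros(3))  # identity pose reprojects exactly
```

With a non-identity pose the same `project` call produces the virtual view's pixel coordinates, which is the warping step a depth-image-based renderer performs per frame.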

Hybrid 3DTV Systems Based on the Cross-View SHVC (양안 교차 SHVC 기반 융합형 3DTV 시스템)

  • Kang, Dong Wook;Jung, Kyeong Hoon;Kim, Jin Woo;Kim, Jong Ho
    • Journal of Broadcast Engineering / v.23 no.2 / pp.316-319 / 2018
  • When a terrestrial UHD broadcasting service and a mobile HD broadcasting service are provided using the PLP (Physical Layer Pipe) feature of ATSC 3.0 and the Korean terrestrial UHD broadcasting standard, a small amount of additional data can be transmitted to provide a high-quality UHD 3D broadcasting service as well. The left and right views of a stereoscopic image are input: one view is encoded with SHVC, and the other is encoded with SHVC in a two-view cross-referencing arrangement. Since the base layers (BL) of the two encoders are shared, the pair effectively forms an encoder that produces one BL stream and two enhancement-layer (EL) streams. The average coding efficiency is 16% higher than adding a third, independently HEVC-encoded stream for the UHD 3D broadcast service. The proposed scheme also reduces per-frame PSNR fluctuation and raises the quality of the minimum-PSNR frame by 0.6 dB.

A New Calibration of 3D Point Cloud using 3D Skeleton (3D 스켈레톤을 이용한 3D 포인트 클라우드의 캘리브레이션)

  • Park, Byung-Seo;Kang, Ji-Won;Lee, Sol;Park, Jung-Tak;Choi, Jang-Hwan;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.26 no.3 / pp.247-257 / 2021
  • This paper proposes a new technique for calibrating a multi-view RGB-D camera rig using a 3D skeleton. Calibrating a multi-view camera requires consistent feature points, and a high-accuracy calibration requires those feature points to be accurate. We use the human skeleton, which can be obtained easily with state-of-the-art pose estimation algorithms, as the source of feature points. The proposed RGB-D calibration algorithm uses the joint coordinates of the 3D skeleton produced by the pose estimation algorithm as feature points. Because the body information captured by each camera may be incomplete, the skeletons predicted from those images may be incomplete as well. After efficiently merging many incomplete skeletons into a single skeleton, the multi-view cameras can be calibrated by using the merged skeleton to compute a camera transformation matrix. To increase calibration accuracy, multiple skeletons are used for optimization over temporal iterations. We demonstrate through experiments that a multi-view camera rig can be calibrated from a large number of incomplete skeletons.
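The step of computing a camera transformation matrix from corresponding skeleton joints can be illustrated with the standard Kabsch/Procrustes rigid alignment (a sketch on synthetic joints; the paper's optimization over temporal iterations is more involved):

```python
import numpy as np

def rigid_transform(A, B):
    """Kabsch algorithm: find R, t such that R @ A + t ~= B.
    A, B : 3 x N corresponding joint coordinates in two camera frames."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: transform a toy 15-joint "skeleton" and recover the motion.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 15))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([[0.5], [1.0], [-0.2]])
B = R_true @ A + t_true
R, t = rigid_transform(A, B)
```

Given joints seen by two cameras, the recovered `(R, t)` is exactly the camera-to-camera transformation the calibration needs; with noisy or incomplete skeletons one would run this inside a robust estimator.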

2D-3D Pose Estimation using Multi-view Object Co-segmentation (다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정)

  • Kim, Seong-heum;Bok, Yunsu;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.1 / pp.33-41 / 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap of regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, the final results are accurate model indices and pose parameters for the extracted object. We demonstrate the effectiveness of the proposed method on various examples.
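The region term described above — rewarding overlap between co-segmentation masks and the projected model masks — can be approximated by a simple multi-view IoU score (an illustrative stand-in, not the authors' exact energy function):

```python
import numpy as np

def overlap_score(seg_masks, proj_masks):
    """Average intersection-over-union between co-segmentation masks and
    projected model masks across viewpoints (a toy region term)."""
    scores = []
    for s, p in zip(seg_masks, proj_masks):
        inter = np.logical_and(s, p).sum()
        union = np.logical_or(s, p).sum()
        scores.append(inter / union if union else 0.0)
    return float(np.mean(scores))

# Hypothetical single-view example: two half-overlapping strips.
s = np.zeros((4, 4), bool); s[:, :2] = True   # "co-segmentation": left half
p = np.zeros((4, 4), bool); p[:, 1:3] = True  # "projected model": middle half
score = overlap_score([s], [p])               # intersection 4 px, union 12 px
```

A pose search would evaluate this score for candidate model indices and pose parameters across all views and keep the maximizer.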

Making of View Finder for Drone Photography (드론 촬영을 위한 뷰파인더 제작)

  • Park, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.12 / pp.1645-1652 / 2018
  • The drone, first developed for military purposes, has expanded into various civilian areas as the technology has matured. Among drones developed for these diverse purposes, the camera-equipped photography drone is actively used in producing a wide variety of image content, beyond film and broadcasting. A photography drone makes it possible to shoot immersive, dynamic images that were hard to capture with conventional photography techniques. This study built a viewfinder that helps a drone operator control the drone while directly viewing the subject through the drone's camera. The viewfinder is glasses-shaped; it was produced by 3D-printing the model data designed in 3D MAX and fitting it with an ultra-small LCD monitor. The viewfinder makes it possible to fly a drone safely and to frame the subject accurately.

Implementation of 3D View Web Service based on ASE File Parsing and Model Database Linking (ASE 파일 파싱 및 모델 데이터베이스 연동을 통한 3D View 웹 서비스 구현)

  • Yeo, Yun-Seok;Park, Jong-Koo
    • Proceedings of the Korea Information Processing Society Conference / 2003.11b / pp.685-688 / 2003
  • To move Internet users beyond browsing static information pages and provide dynamic information that they can interpret, view, modify, and newly create by running programs on the Web, we implement a 3D Viewer program that parses and renders ASE-format files — the text output of 3D MAX-Studio and one of the most common 3D data formats — and package it as an ActiveX OCX component so that it can run inside a web page. For efficient data management and user interaction, we also build a database for the ASE models, realizing an interactive 3D View web service. Through this, we foresee the possibility of real-time information exchange over the Internet and of collaborative work within a shared virtual space on the network.

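Extracting vertex data from an ASE text dump, as the 3D Viewer described above must do, can be sketched like this (assuming the common `*MESH_VERTEX index x y z` line layout; a real ASE file carries many more sections such as faces, normals, and materials):

```python
def parse_ase_vertices(text):
    """Collect vertex coordinates from ASE text.
    Assumes lines of the form '*MESH_VERTEX  index  x  y  z'."""
    verts = []
    for line in text.splitlines():
        tokens = line.split()
        if tokens and tokens[0] == "*MESH_VERTEX":
            verts.append(tuple(float(c) for c in tokens[2:5]))
    return verts

# A tiny hand-written fragment standing in for a 3D MAX-Studio export.
sample = """
*MESH_VERTEX    0   0.0  0.0  0.0
*MESH_VERTEX    1   1.0  0.0  0.0
*MESH_VERTEX    2   0.0  1.0  0.0
"""
vs = parse_ase_vertices(sample)
```

A full viewer would parse the face list the same way and hand both arrays to the renderer; the database side would then key each parsed model for retrieval.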

3D Coordinates Acquisition by using Multi-view X-ray Images (다시점 X선 영상을 이용한 3차원 좌표 획득)

  • Yi, Sooyeong;Rhi, Jaeyoung;Kim, Soonchul;Lee, Jeonggyu
    • Journal of Institute of Control, Robotics and Systems / v.19 no.10 / pp.886-890 / 2013
  • In this paper, a 3D coordinate acquisition method for a mechanical assembly is developed using multi-view X-ray images. The multi-view X-ray images of an object are obtained with a rotary table. From the known rotation transformation, the 3D coordinates of corresponding edge points across the multi-view X-ray images can be obtained by triangulation. The edge detection algorithm in this paper is based on the attenuation characteristic of X-rays. The 3D coordinates of the object points are shown on a graphic display, which is used for the inspection of the mechanical assembly.
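The triangulation step — recovering a 3D point from corresponding edge points in two views related by a known turntable rotation — can be sketched with the standard linear (DLT) method (synthetic camera parameters chosen for illustration):

```python
import numpy as np

def triangulate(uv1, uv2, P1, P2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of the stacked constraints
    return X[:3] / X[3]             # dehomogenize

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
theta = np.deg2rad(15)              # one turntable step
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, np.array([[0.0], [0.0], [0.1]])])

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 2.0])  # a synthetic edge point
X_hat = triangulate(proj(P1, X_true), proj(P2, X_true), P1, P2)
```

In the X-ray setting the "cameras" are the source/detector geometry at successive table angles; noise-free correspondences recover the point exactly, while real edge detections would be averaged over more than two views.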

Design and Implementation of Multi-View 3D Video Player (다시점 3차원 비디오 재생 시스템 설계 및 구현)

  • Heo, Young-Su;Park, Gwang-Hoon
    • Journal of Broadcast Engineering / v.16 no.2 / pp.258-273 / 2011
  • This paper designs and implements a multi-view 3D video player that runs faster than existing player systems. To process the large volume of multi-view image data at high speed, we propose a structure that approaches optimal speed in a multi-processor environment by parallelizing the component modules. To exploit concurrency at the bottlenecks, the image decoding, synthesis, and rendering modules are designed as a pipeline. For load balancing, the decoder module is divided per viewpoint, and the image synthesis module is divided geometrically over the synthesized images. In our experiments, multi-view images were synthesized correctly, and a 3D effect was perceived when viewing them on a multi-view autostereoscopic display. The proposed processing structure can process large volumes of multi-view image data at high speed while using the multiple processors to their full capacity.
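The pipeline structure described above — decoding, synthesis, and rendering as concurrent stages connected by bounded queues — can be sketched like this (toy stage functions stand in for the real modules):

```python
import queue
import threading

def stage(name, fn, q_in, q_out):
    """One pipeline stage: pull an item, process it, push it downstream."""
    def run():
        while True:
            item = q_in.get()
            if item is None:              # sentinel: shut down, propagate it
                if q_out is not None:
                    q_out.put(None)
                break
            out = fn(item)
            if q_out is not None:
                q_out.put(out)
    t = threading.Thread(target=run, name=name)
    t.start()
    return t

results = []
# Bounded queues give back-pressure between stages, as in a real player.
q1, q2, q3 = queue.Queue(2), queue.Queue(2), queue.Queue(2)
t1 = stage("decode", lambda f: f * 2, q1, q2)      # stand-in for decoding
t2 = stage("synthesize", lambda f: f + 1, q2, q3)  # stand-in for view synthesis
t3 = stage("render", results.append, q3, None)     # stand-in for rendering

for frame in range(5):                             # feed five "frames"
    q1.put(frame)
q1.put(None)
for t in (t1, t2, t3):
    t.join()
```

Because each stage runs in its own thread and the queues are FIFO, frame order is preserved while all three stages work on different frames at once; the per-viewpoint decoder split would add more worker threads feeding the same synthesis queue.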

Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.746-754 / 2008
  • A 3D stereoscopic image is generated by interleaving, with video editing tools, scenes rendered from two camera views in 3D modeling tools such as Autodesk MAX(R) and Autodesk MAYA(R). However, the depth of objects in a static scene and the continuity of the stereo effect under view transformations are not reproduced naturally: after choosing an arbitrary convergence angle and the distance between the model and the two cameras, the user must render the view from both cameras, so adjusting the camera interval and re-rendering must be repeated, which takes too much time. In this paper, we therefore propose a 3D stereoscopic image editing system that addresses these problems and their inherent limitations. The system generates the two camera views and confirms the stereo effect in real time inside the 3D modeling tool, so with the real-time stereoscopic preview function the user can intuitively judge the immersiveness of the 3D stereoscopic image.

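Why the stereo effect depends so directly on the camera interval — and why the adjust-and-render cycle described above must be repeated — follows from the disparity relation for a parallel stereo rig (hypothetical numbers purely for illustration):

```python
def disparity(f_px, baseline, depth):
    """Horizontal screen disparity (in pixels) of a point at a given
    depth for a parallel stereo rig; convergence is ignored here."""
    return f_px * baseline / depth

# A point 3 m away, 6.5 cm camera interval, 1000 px focal length.
d = disparity(1000.0, 0.065, 3.0)
```

Because disparity scales linearly with the interval, any change to it shifts the perceived depth of every object, which is exactly what a real-time preview lets the artist evaluate without re-rendering.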

Production of fusion-type realistic contents using 3D motion control technology (3D모션 컨트롤 기술을 이용한 융합형 실감 콘텐츠 제작)

  • Jeong, Sun-Ri;Chang, Seok-Joo
    • Journal of Convergence for Information Technology / v.9 no.4 / pp.146-151 / 2019
  • In this paper, we developed multi-view video content based on immersive-media technology, along with a pilot using the production technique, providing realistic content production technology that lets users choose a desired viewing direction from their own viewpoint by offering images from multiple viewpoints. We also created multi-view video content through which local cultural tourism resources can be experienced indirectly, and produced cyber-tour content based on multi-view (immersive) video. This technology can be used to create 3D interactive immersive content for public education settings such as libraries, kindergartens, elementary and middle schools, universities for the elderly, housewives' classrooms, and lifelong education centers. The domestic VR market is still in its infancy and is expected to develop in combination with the 3D market for games and shopping malls. As domestic educational trends and the demand for a social public-education system grow, it is expected to expand gradually.