• Title/Summary/Keyword: 3D video

Search Results: 1,152

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks are used to provide unmanned ground vehicles (UGVs) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system of automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules, namely, multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects by using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply the GPU parallel computation method to implement the applied computer graphics and image processing algorithms in parallel.
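
The clustering step this abstract describes — grouping non-ground points into individual objects with connected component labeling — can be sketched on a 2D occupancy grid, a common projection of LiDAR points after ground removal. This is a minimal illustration, not the authors' GPU implementation; the grid representation and 4-connectivity are assumptions.

```python
from collections import deque

def label_components(grid):
    """Label 4-connected clusters of occupied cells (1s) in a 2D grid.

    Returns (labels, count): a grid of the same shape where each occupied
    cell carries a positive component id (free cells stay 0), plus the
    number of components found.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                next_id += 1
                labels[r][c] = next_id
                queue = deque([(r, c)])
                while queue:  # BFS flood fill over 4-neighbours
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_id
                            queue.append((ny, nx))
    return labels, next_id
```

Each labeled component would correspond to one obstacle candidate; in practice the grid cell size trades segmentation detail against memory and processing time.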

Power Estimation of The Embedded 3D Graphics Renderer (내장형 3차원 그래픽 렌더링 처리기의 전력소모)

  • Jang, Tae-Hong;Lee, Moon-Key
    • Journal of Korea Game Society
    • /
    • v.4 no.3
    • /
    • pp.65-70
    • /
    • 2004
  • Conventional 3D graphics accelerators focus mainly on high performance for computer graphics and 3D video games. However, the existing 3D architecture is not suitable for portable devices because of its large power consumption. We therefore analyze the embedded 3D graphics renderer. Based on this analysis, to reduce power consumption, the triangle set-up stage and the edge-walking stage are executed sequentially, while the scan-line processing stage and the span processing stage, which determine the performance of the 3D graphics accelerator, are executed in parallel.


Emotion fusion video communication services for real-time avatar matching technology (영상통신 감성융합 서비스를 위한 실시간 아바타 정합기술)

  • Oh, Dong Sik;Kang, Jun Ku;Sin, Min Ho
    • Journal of Digital Convergence
    • /
    • v.10 no.10
    • /
    • pp.283-288
    • /
    • 2012
  • 3D is one of the business sectors currently in the spotlight as a source of future revenue. Converting existing flat 2D images into 3D shapes and textures adds a dimension that makes the real world and the virtual world appear to coexist. Public interest in 3D has spread through film, led by the 3D movie Avatar, and major companies pioneering the 3D TV market have pushed the market into a new era. At the same time, the smartphone has become a necessity for modern users and has driven a new wave of innovation in the IT and mobile phone markets. A smartphone is, in effect, a small computer, and the speed and aftermath of its spread rival the innovations of the telephone and the Internet; smartphones running iPhone, Android, and Windows Phone platforms are now widely available. Against this background, and looking at the overall outlook and business service models for the future, we develop an emotion-fused real-time video communication service: the smartphone camera recognizes the user's facial expression, the expression is synthesized onto a virtual 3D avatar character in real time, and the matched avatar is transmitted to other mobile phone users so that they can communicate emotionally in real time.

A Study on the video production reflecting the characteristic of 3D stereoscopic (3D 영상의 특징을 반영한 영상제작에 대한 연구)

  • Lee, Yongwhan;Kang, Changhoon;Shin, Jinseob
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2013.07a
    • /
    • pp.303-306
    • /
    • 2013
  • In this paper, we examine the characteristic aspects of 3D video production and, through them, the overall process of shooting and editing 3D video. Producing 3D video requires considering many factors that differ from conventional video shooting. This paper therefore presents the basic characteristics of 3D video shooting and the points to watch during shooting, based on actual 3D filming. In particular, we examine both theory and actual shooting results, focusing on composing the frame to express depth and on the resulting footage.


Analysis on the Backgrounds Expression for 3D Animation (3D 애니메이션의 배경 표현에 관한 분석)

  • Park, Sung-Dae;Jung, Yee-Ji;Kim, Cheeyong
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.2
    • /
    • pp.268-276
    • /
    • 2015
  • This article analyzes background representation in 3D animation and examines what constitutes proper background expression. With the development of computer graphics technology, the backgrounds of 3D animations can be rendered as realistically as live-action footage. By contrast, the recently released "The Smurfs" was created by compositing characters onto actual live-action backgrounds. However, 3D animation with real backgrounds is not appropriate in terms of the creative expressive space that is a main role of animation. In this study, we analyze the characters and backgrounds of animations made with 3D graphics and, based on this, propose an appropriate way of representing 3D animation backgrounds.

R&D Opportunity Technology Selection in Intelligent Video Surveillance Industry (지능형 영상 보안 산업에서 R&D 기회 기술 선택)

  • Kang, Wonho;Choi, Gyunghyun
    • Journal of Korea Technology Innovation Society
    • /
    • v.20 no.3
    • /
    • pp.781-804
    • /
    • 2017
  • As a high-tech industry, the video surveillance industry is characterized by slow development of source technologies but fast development of application technologies. While domestic companies have shown technological excellence in video surveillance hardware, global leading companies lead in intelligent video surveillance technologies, which will shape the future of the industry. The technology choices of domestic companies therefore determine their viability in terms of strengthening market competitiveness. To support these choices, we identify the technology areas on which global leading companies focus through an analysis of global patents. After the patent analysis, we identify the status of domestic technologies and analyze the gap between the global leading companies and the domestic companies. By decomposing the technologies with an element technology-application matrix, obtained through panel discussions with CTOs, CEOs, and other experts of domestic SMEs, we derive the R&D opportunity technologies needed to ensure the future competitiveness of these companies.

The Kinematic Analysis and Comparison of Foreign and Domestic 100m Elite Woman's Hurdling Techniques (국내외 우수 여자선수 100m 허들동작의 운동학적 비교 분석)

  • Ryu, Jae-Kyun;Yeo, Hong-Chul;Chang, Jae-Kwan
    • Korean Journal of Applied Biomechanics
    • /
    • v.17 no.4
    • /
    • pp.157-167
    • /
    • 2007
  • The purpose of this study was to analyze the kinematics of the women's 100 m hurdles. To obtain the kinematic parameters, a 3D video analysis system, Kwon3D Motion Analysis Program Version 3.1, was used. Eight JVC video cameras (GR-HD1KR) filmed the performance of Lee Yeon-Kyoung at a rate of 60 fields/s. The kinematic characteristics from the first hurdle to the last hurdle were analyzed at the hurdle-clearance points in terms of distances, velocities, heights, and angles. The real-life three-dimensional coordinates of 20 body landmarks during each phase were collected using a Direct Linear Transformation procedure. After analyzing the kinematic variables of the 100 m hurdle run, the following conclusions were obtained: Lee Yeon-Kyoung had to maintain constant stride lengths between hurdles, increase the takeoff distance before clearance, and shorten the landing distance after clearance. She also had to hit the correct takeoff point in front of the hurdle and extend the lead leg at the moment of landing in order to minimize the loss of velocity. She had to sprint between hurdles as fast as possible, at over 8 m/s, with a powerful first stride and a shortened third stride in preparation for the following hurdle clearances.

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.549-556
    • /
    • 2001
  • The general problem of recovering 3D structure from 2D imagery requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and costly. The goal of this paper is to simplify the depth estimation algorithm that extracts the depth information of every region from a monocular image sequence with camera translation, in order to implement 3D video in real time. The paper is based on the property that the motion of every point within an image taken under camera translation depends on the depth information. Full-search motion estimation based on a block matching algorithm is exploited as a first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing a monocular motion picture and also calculates the average frame depth and the depth of each region relative to that average. Simulation results show that the estimated depth of a region belonging to a near or distant object accords with the relative depth that the human visual system perceives.
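
The full-search block matching this abstract builds on can be sketched directly: for one block in the current frame, scan every candidate offset within a search window of the previous frame and keep the offset with the smallest sum of absolute differences (SAD). The nested-list frame representation, block size, and search range below are illustrative assumptions, not the paper's parameters.

```python
def full_search_motion(prev, curr, by, bx, bsize, srange):
    """Full-search block matching: return the (dy, dx) displacement that
    minimizes the SAD between the bsize x bsize block at (by, bx) in curr
    and candidate blocks within +/-srange in prev."""
    h, w = len(prev), len(prev[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + bsize > h or x0 + bsize > w:
                continue  # candidate block falls outside the frame
            sad = sum(
                abs(curr[by + i][bx + j] - prev[y0 + i][x0 + j])
                for i in range(bsize) for j in range(bsize)
            )
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Under pure camera translation, the magnitude of the returned vector is roughly inversely related to the depth of the block, which is the property the algorithm exploits once rotation and zoom are compensated.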


High-Quality Depth Map Generation of Humans in Monocular Videos (단안 영상에서 인간 오브젝트의 고품질 깊이 정보 생성 방법)

  • Lee, Jungjin;Lee, Sangwoo;Park, Jongjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2014
  • The quality of 2D-to-3D conversion depends on the accuracy of the depth assigned to scene objects. Manual depth painting for given objects is labor intensive, as each frame must be painted. A human is one of the most challenging objects for high-quality conversion, as the human body is an articulated figure with many degrees of freedom (DOF). In addition, various styles of clothes, accessories, and hair create a very complex silhouette around the 2D human object. We propose an efficient method to estimate visually pleasing depths of a human at every frame of a monocular video. First, a 3D template model is matched to the person in the video using a small number of user-specified correspondences. Our pose estimation with sequential joint angular constraints reproduces a wide range of human motions (e.g., spine bending) by allowing the use of a fully skinned 3D model with a large number of joints and DOFs. The initial depth of the 2D object in the video is assigned from the matching results and then propagated toward areas where depth is missing to produce a complete depth map. For effective handling of complex silhouettes and appearances, we introduce a partial depth propagation method based on color segmentation to preserve the detail of the results. We compared our results with depth maps painted by experienced artists; the comparison shows that our method efficiently produces viable depth maps of humans in monocular videos.

A method of Level of Details control table for 3D point density scalability in Video based Point Cloud Compression (V-PCC 기반 3차원 포인트 밀도 확장성을 위한 LoD 제어 테이블 방법)

  • Im, Jiheon;Kim, Junsik;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.178-181
    • /
    • 2019
  • Point cloud content is 3D data composed of a set of 3D points. In general, a 3D point cloud requires hundreds of thousands to millions of 3D points to represent a single object, and each point consists of (x, y, z) coordinates in a 3D coordinate system together with attributes such as color, reflectance, and normal vector. Therefore, to provide users with point clouds, which have a higher dimension and more diverse attributes than conventional 2D video, research on highly efficient encoding/decoding technology is needed, along with quality scalability functions that can provide differentiated services according to bandwidth, device, and region of interest. Accordingly, this paper presents a method for V-PCC, the video-based point cloud compression scheme, that changes the density between points in the 3D space of the point cloud, achieving new quality levels and additionally supporting bit-rate changes.
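
The density control this abstract proposes can be illustrated with a plain voxel-grid subsampling pass: quantize each point to a voxel of a chosen size and keep one averaged point per voxel, so a larger voxel yields a sparser, lower-LoD cloud. This is a generic sketch of density scalability, not the V-PCC LoD control table itself; the voxel-size parameter is an assumption.

```python
from collections import defaultdict

def voxel_downsample(points, voxel):
    """Reduce point density: keep one averaged point per cubic voxel cell.

    points: iterable of (x, y, z) tuples; voxel: cell edge length.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        cells[key].append((x, y, z))
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n))
    return out
```

Stepping the voxel size through a table of values would give a ladder of density/bit-rate operating points, which is the kind of scalability an LoD control table targets.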
