• Title/Abstract/Keywords: 3D video

Search results: 1,152 items (processing time: 0.045 s)

3-D DCT를 이용한 비디오 장면 전환 검출 (Video Scene Change Detection Using a 3-D DCT)

  • 우석훈;원치선
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 신호처리소사이어티 추계학술대회 논문집 / pp. 157-160 / 2003
  • In this paper, we propose a simple and effective video scene change detection algorithm using a 3-D DCT. The 3-D DCT that we employ is a 2×2×2 DCT whose computation consists only of addition and shifting operations. The average values of the multiresolution-represented video obtained with the 2×2×2 DCT are used as the detection feature vector.

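For orientation, a 2×2×2 DCT at this block size reduces to butterfly sums and differences, so its DC term is simply the block average obtained with additions and a shift. The sketch below is my own illustration, not the authors' code; the pairing of consecutive frames and the detection threshold are assumptions about one plausible way such block averages could be collected into a feature vector and compared to flag a scene change.

```python
import numpy as np

def dct2x2x2_dc(block):
    """DC term of a 2x2x2 block: at this size the DCT butterflies are plain sums/differences,
    so the DC value is just the sum of the 8 samples, divided by 8 with a shift."""
    return int(block.sum()) >> 3

def pair_feature(frame_t, frame_t1):
    """Stack two consecutive grayscale frames and collect the 2x2x2 block averages
    (the DC terms) as a feature vector."""
    h, w = frame_t.shape
    h, w = h - h % 2, w - w % 2                                   # crop to even dimensions
    vol = np.stack([frame_t[:h, :w], frame_t1[:h, :w]]).astype(np.int64)
    return np.array([dct2x2x2_dc(vol[:, y:y + 2, x:x + 2])
                     for y in range(0, h, 2) for x in range(0, w, 2)])

def is_scene_change(prev_feature, cur_feature, threshold=12.0):
    """Flag a cut when the mean absolute difference of the feature vectors exceeds a
    (hypothetical) threshold."""
    return float(np.mean(np.abs(prev_feature - cur_feature))) > threshold
```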

Versatile Video Coding을 활용한 Video based Point Cloud Compression 방법 (Video based Point Cloud Compression with Versatile Video Coding)

  • 권대혁;한희지;최해철
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2020년도 하계학술대회 / pp. 497-499 / 2020
  • A point cloud is a way of representing 3D data with a large number of 3D points, and it has attracted attention in many fields as multimedia acquisition and processing technologies have advanced. In particular, point clouds can capture and represent 3D data precisely. However, point clouds contain enormous amounts of data, so efficient compression is essential. Accordingly, the international standardization body Moving Picture Experts Group is developing the Video based Point Cloud Compression (V-PCC) and Geometry based Point Cloud Coding standards for efficient compression of point cloud data. Among these, V-PCC has the advantage of high usability because it compresses point clouds using the existing High Efficiency Video Coding (HEVC) standard. In this paper, we show that the compression performance of V-PCC can be further improved by replacing the HEVC codec used in V-PCC with Versatile Video Coding, whose standardization was scheduled to be completed in July 2020.

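To make the codec-swap idea concrete, the sketch below is a structural illustration, not the V-PCC reference software; the Codec2D wrapper and the encode_with_hm.sh / encode_with_vtm.sh scripts are hypothetical. It shows why V-PCC can move from HEVC to VVC: the occupancy, geometry, and attribute substreams are ordinary 2D videos handed to whichever encoder is plugged in.

```python
import subprocess
from dataclasses import dataclass

@dataclass
class Codec2D:
    name: str
    command: list    # hypothetical external encoder wrapper, e.g. ["./encode_with_vtm.sh"]

    def encode(self, yuv_path: str, bitstream_path: str) -> None:
        # The concrete encoder flags (HM for HEVC, VTM for VVC) are left to the wrapper script.
        subprocess.run(self.command + [yuv_path, bitstream_path], check=True)

def encode_vpcc_substreams(substreams: dict, codec: Codec2D) -> dict:
    """Encode the projected occupancy/geometry/attribute videos with the chosen 2D codec."""
    bitstreams = {}
    for kind in ("occupancy", "geometry", "attribute"):
        bitstreams[kind] = f"{kind}_{codec.name}.bin"
        codec.encode(substreams[kind], bitstreams[kind])
    return bitstreams

# Swapping HEVC for VVC is then a one-line change of which Codec2D instance is passed in:
# hevc = Codec2D("hevc", ["./encode_with_hm.sh"])    # hypothetical wrapper scripts
# vvc  = Codec2D("vvc",  ["./encode_with_vtm.sh"])
```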

Technical Improvement Using a Three-Dimensional Video System for Laparoscopic Partial Nephrectomy

  • Komatsuda, Akari;Matsumoto, Kazuhiro;Miyajima, Akira;Kaneko, Gou;Mizuno, Ryuichi;Kikuchi, Eiji;Oya, Mototsugu
    • Asian Pacific Journal of Cancer Prevention / Vol. 17, No. 5 / pp. 2475-2478 / 2016
  • Background: Laparoscopic partial nephrectomy is one of the major surgical techniques for small renal masses. However, it is difficult to manage cutting and suturing procedures within acceptable time periods. To overcome this difficulty, we applied a three-dimensional (3D) video system to laparoscopic partial nephrectomy and evaluated its utility. Materials and Methods: We retrospectively enrolled 31 patients who underwent laparoscopic partial nephrectomy between November 2009 and June 2014. A conventional two-dimensional (2D) video system was used in 20 patients, and a 3D video system in 11. Patient characteristics and video system type (2D or 3D) were recorded, and correlations with perioperative outcomes were analyzed. Results: Mean age of the patients was 55.8 ± 12.4 years, mean body mass index was 25.7 ± 3.9 kg/m², mean tumor size was 2.0 ± 0.8 cm, mean R.E.N.A.L. nephrometry score was 6.9 ± 1.9, and clinical stage was T1a in all patients. There were no significant differences in operative time (p=0.348), pneumoperitoneum time (p=0.322), cutting time (p=0.493), estimated blood loss (p=0.335), or Clavien grade >II complication rate (p=0.719) between the two groups. However, warm ischemic time was significantly shorter in the 3D group than in the 2D group (16.1 min vs. 21.2 min, p=0.021), which resulted from a shorter suturing time (9.1 min vs. 15.2 min, p=0.008). No open conversion occurred in either group. Conclusions: A 3D video system allows shortening of the warm ischemic time in laparoscopic partial nephrectomy and thus may be useful in improving the procedure.
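
The group comparisons above are reported only as p-values, and this abstract does not state which statistical test was used. The helper below is a hedged sketch; the choice of a Mann-Whitney U test and the alpha level are assumptions about how such a two-group comparison of a perioperative outcome could be computed.

```python
from scipy import stats

def compare_groups(outcome_2d, outcome_3d, alpha=0.05):
    """Two-sided Mann-Whitney U test comparing one perioperative outcome between the
    2D-system and 3D-system groups (test choice and alpha are assumptions)."""
    statistic, p_value = stats.mannwhitneyu(outcome_2d, outcome_3d, alternative="two-sided")
    return {"U": statistic, "p": p_value, "significant": p_value < alpha}

# Example call (with the per-patient warm ischemic times, which are not reproduced here):
# compare_groups(warm_ischemic_time_2d, warm_ischemic_time_3d)
```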

Implementation of AR Remote Rendering Techniques for Real-time Volumetric 3D Video

  • Lee, Daehyeon;Lee, Munyong;Lee, Sang-ha;Lee, Jaehyun;Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication / Vol. 12, No. 2 / pp. 90-97 / 2020
  • Recently, with the growth of the mixed reality industrial infrastructure, related convergence research has been proposed. For real-time mixed reality services such as remote video conferencing, research on real-time acquisition-processing-transfer methods is required. This paper aims to implement an AR remote rendering method for volumetric 3D video data. We proposed and implemented two modules: a parsing module that brings the volumetric 3D video into a game engine, and a server rendering module. The experimental results showed that volumetric 3D video sequence data of about 15 MB was compressed by 6-7%. The remote module was streamed at 27 fps at a resolution of 1200 × 1200. The results of this paper are expected to be applied to AR cloud services.
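
As a rough illustration of the server rendering module described above, the sketch below is my own stand-in: the render_frame and send callables and the zlib compression are placeholders, not the paper's implementation. It shows a render-compress-send loop paced to a target frame rate.

```python
import time
import zlib

def stream_rendered_frames(render_frame, send, num_frames, target_fps=30):
    """Render, compress, and push frames at a target rate.

    render_frame(t) -> bytes : server-side rendering of volumetric frame t (placeholder).
    send(payload)            : pushes the compressed frame to the AR client (placeholder).
    """
    frame_period = 1.0 / target_fps
    for t in range(num_frames):
        start = time.monotonic()
        rgba = render_frame(t)            # render the volumetric 3D video frame on the server
        payload = zlib.compress(rgba)     # stand-in for a real texture/video encoder
        send(payload)
        # Sleep off whatever is left of the frame budget so the stream holds the target rate.
        elapsed = time.monotonic() - start
        if elapsed < frame_period:
            time.sleep(frame_period - elapsed)
```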

Media GIS Web Service Architecture using Three-Dimensional GIS Database

  • Kim, Sung-Soo;Kim, Kyong-Ho;Kim, Kyung-Ok
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2002년도 Proceedings of International Symposium on Remote Sensing / pp. 117-122 / 2002
  • In this paper, we propose a Media GIS web service architecture using a 3D geographical database and GPS-related data obtained from the 4S-Van. We introduce a novel interoperable geographical data service concept, called Virtual World Mapping (VWM), that can map the 3D graphic world onto real-world video. Our proposed method can easily retrieve geographical information and attributes to reconstruct 3D virtual space for a given frame in a video sequence. Our proposed system architecture also has the advantage of providing a geographical information service on a video stream without any image processing procedures. In addition to describing the details of our components, we present a Media GIS web service system using the GeoVideo Server, which performs the VWM technique.

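The core of Virtual World Mapping, as described above, is that a known camera pose per frame lets 3D GIS features be projected directly into the video. The sketch below is a minimal illustration under standard pinhole-camera assumptions (the data layout of gis_features is hypothetical), not the GeoVideo Server implementation.

```python
import numpy as np

def project_gis_point(point_xyz, R, t, K):
    """Project a 3D world point into pixel coordinates for one video frame.

    R (3x3), t (3,) : camera rotation and translation for this frame, e.g. from the GPS/INS log.
    K (3x3)         : intrinsic matrix of the video camera.
    Returns (u, v) or None if the point lies behind the camera.
    """
    X_cam = R @ np.asarray(point_xyz, dtype=float) + t
    if X_cam[2] <= 0:
        return None
    u, v, w = K @ X_cam
    return u / w, v / w

def overlay_attributes(frame_pose, gis_features, K):
    """For one frame, return on-screen positions of visible GIS features with their attributes.

    gis_features is assumed (hypothetically) to be a list of {"xyz": (...), "attributes": {...}}."""
    R, t = frame_pose
    visible = []
    for feature in gis_features:
        uv = project_gis_point(feature["xyz"], R, t, K)
        if uv is not None:
            visible.append((uv, feature["attributes"]))
    return visible
```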

Web-Based Media GIS Architecture Using the Virtual World Mapping Technique

  • Kim, Sung-Soo;Kim, Kyong-Ho;Kim, Kyoung-Ok
    • 대한원격탐사학회지 / Vol. 19, No. 1 / pp. 71-80 / 2003
  • In this paper, we propose a web-based Media GIS architecture using a 3D geographical database and GPS-related data obtained from the 4S-Van. We introduce a novel interoperable geographical data service concept, called Virtual World Mapping (VWM), that can map the 3D graphic world onto real-world video. Our proposed method can easily retrieve geographical information and attributes to reconstruct 3D virtual space for a given frame in a video sequence. Our proposed system architecture also has the advantage of providing a geographical information service on a video stream without any image processing procedures. In addition to describing the details of our components, we present a Media GIS web service system using the GeoVideoServer, which performs the VWM technique.

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Sang, Hong-Ki
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 1998년도 Proceedings of International Workshop on Advanced Image Technology / pp. 147-153 / 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics or augmented reality, 3D information about a model object is needed to generate 2D views of the model, which are then inserted into or overlaid on environment views or real video frames. In our approach, however, no three-dimensional model is required; instead, images of the model object at several locations are used to render views according to the motion of the video camera, which is calculated by an SFM algorithm using point matches under a weak-perspective (scaled-orthographic) projection model. Thus, a linear view interpolation algorithm is applied, rather than a 3D ray-tracing method, to obtain views of the model at viewpoints different from those of the model views. In order to obtain novel views in a way that agrees with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time on the basis of 3D information recovered from the video images and the model views, respectively. During the sequence, motion parameters from the video frames are used to compute the interpolation parameters, and the rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given. Finally, a discussion of the limitations of the method and topics for future research is provided.

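The linear view interpolation step can be illustrated very compactly: under a weak-perspective model, corresponding image points in two model views may be blended linearly. The sketch below assumes the correspondences and the interpolation parameter (which the SFM step would supply) are already available; it is an illustration, not the authors' renderer.

```python
import numpy as np

def interpolate_views(points_a, points_b, alpha):
    """Blend matched 2D feature points of two model views.

    points_a, points_b : (N, 2) arrays of corresponding points in model views A and B.
    alpha              : interpolation weight in [0, 1], taken here to be the quantity the
                         SFM step derives from the estimated camera motion.
    """
    points_a = np.asarray(points_a, dtype=float)
    points_b = np.asarray(points_b, dtype=float)
    return (1.0 - alpha) * points_a + alpha * points_b

# The interpolated point set can then drive an image warp (e.g. piecewise-affine over a
# triangulation of the points) to synthesize the in-between model view that gets overlaid
# on the corresponding video frame.
```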

Video Mosaics in 3D Space

  • Chon, Jaechoon;Fuse, Takashi;Shimizu, Eihan
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2003년도 Proceedings of ACRS 2003 ISRS / pp. 390-392 / 2003
  • Video mosaicing techniques have been widely used in virtual reality environments. Especially in the GIS field, video mosaics are becoming more and more common for representing urban environments. Such applications mainly use spherical or panoramic mosaics, which are based on images taken from a camera rotating around its nodal point. The viewpoint, however, is limited to locations within a small area. On the other hand, 2D mosaics, which are based on images taken from a translating camera, can acquire data over a wide area. 2D mosaics still have a limitation: they cannot be applied to images taken from a camera rotating through large angles. To overcome these problems, we propose a novel method for creating video mosaics in 3D space. The proposed algorithm consists of four steps: feature-based optical flow detection, camera orientation, 2D image projection, and image registration in 3D space. All of the processes are fully automatic and were successfully implemented and tested with real images.

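The first stages of the four-step pipeline above can be sketched with standard tools. The snippet below is an OpenCV-based illustration, not the authors' implementation; the tracking and RANSAC parameters are arbitrary. It covers feature-based optical flow and a robust frame-to-frame transform; the camera orientation and 3D-space registration steps that the paper contributes are not reproduced here.

```python
import cv2

def frame_to_frame_motion(prev_gray, next_gray):
    """Track corner features with pyramidal Lucas-Kanade optical flow and fit a homography
    to the surviving matches with RANSAC; both parameter sets are arbitrary choices."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, corners, None)
    good_prev = corners[status.ravel() == 1]
    good_next = tracked[status.ravel() == 1]
    H, _inliers = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
    return H   # the paper goes on to orient the camera and register each frame in 3D space
```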

비디오 프레임 영상을 이용한 자유 입체 모자이크 영상 제작에 관한 연구 (A Study on Generation of Free Stereo Mosaic Image Using Video Sequences)

  • 노명종;조우석;박준구
    • 한국측량학회지 / Vol. 27, No. 4 / pp. 453-460 / 2009
  • To extract 3D information, stereo images with different viewing angles over an overlapping area are required; for video frame imagery, a stereo mosaic image can be produced by extracting left and right slice images from consecutive frames and mosaicking them. Accordingly, the purpose of this paper is to generate stereo mosaic images from which 3D information can be extracted, in order to maximize the usability of imagery captured by a video camera mounted on an aircraft. Because motion parameters describing the positional relationship between adjacent video frames must be determined to produce a mosaic image, this study uses a free-mosaic approach based on the relative orientation parameters between adjacent frames, without GPS/INS data. After determining the motion parameters, the final stereo mosaic images were produced through image registration, image slicing, seam-line extraction, and 3D mosaicking. To evaluate the generated free stereo mosaic images and their applicability, the y-parallax and x-parallax were analyzed and presented.
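
The slicing step can be pictured as cutting a left-looking and a right-looking column strip from every frame and concatenating each set into a mosaic. The sketch below is a simplified illustration (frames are assumed already registered, and the strip width and offset are hypothetical), not the paper's processing chain.

```python
import numpy as np

def build_stereo_mosaics(frames, strip_width=8, offset=100):
    """Cut a left-looking and a right-looking column strip from every frame and concatenate
    each set into a mosaic; the pair forms the stereo mosaic.

    frames      : iterable of H x W (or H x W x 3) arrays, assumed already motion-compensated.
    strip_width : width in pixels of each slice (hypothetical value).
    offset      : distance of each slice from the frame centre; it controls the stereo base.
    """
    left_strips, right_strips = [], []
    for frame in frames:
        centre = frame.shape[1] // 2
        left_strips.append(frame[:, centre - offset: centre - offset + strip_width])
        right_strips.append(frame[:, centre + offset: centre + offset + strip_width])
    left_mosaic = np.concatenate(left_strips, axis=1)
    right_mosaic = np.concatenate(right_strips, axis=1)
    return left_mosaic, right_mosaic   # residual y-parallax between the two indicates registration error
```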

실감형 다시점 스케일러블 비디오 코딩 방법의 설계 및 구현 (Design and Implementation of a Realistic Multi-View Scalable Video Coding Scheme)

  • 박민우;박광훈
    • 방송공학회논문지 / Vol. 14, No. 6 / pp. 703-720 / 2009
  • This paper proposes a realistic multi-view scalable video coding scheme as a new video coding method that satisfies users' demand for 3D content services while fitting future computing environments. Future video coding methods must support realistic services that let users perceive three-dimensional stereoscopic imagery through stereoscopic or multi-view video, while also achieving 'one-source multi-use' to support diverse communication environments and various types of terminals in an integrated manner. Unlike previous video coding methods that support only 2D displays, the realistic multi-view scalable video coding scheme proposed in this paper can support such realistic services. The proposed coding scheme was designed and implemented by integrating the functions of multi-view video coding and scalable video coding, and its applicability to actual 3D services was examined through performance evaluation. The evaluation confirmed that the proposed coding structure greatly improves inter-view random access performance while efficiently maintaining coding efficiency.
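
One way to picture a combined multi-view + scalable prediction structure is sketched below. This is a hypothetical layout, not the proposed codec: inter-view references are restricted to the lowest temporal layer so a decoder can switch views at those pictures, which is the random-access property the entry above emphasizes.

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Picture:
    view: int
    time: int
    temporal_layer: int
    refs: List[Tuple[int, int]] = field(default_factory=list)   # (view, time) of reference pictures

def hierarchical_layer(t: int, gop_size: int) -> int:
    """Hierarchical-B temporal layer of picture t in a GOP whose size is a power of two."""
    if t % gop_size == 0:
        return 0
    return int(math.log2(gop_size)) - (t & -t).bit_length() + 1

def build_prediction_structure(num_views: int, gop_size: int = 8) -> List[Picture]:
    """Lay out one GOP per view; inter-view references are allowed only at temporal layer 0,
    so those pictures double as view-switching (random-access) points."""
    pictures = []
    for v in range(num_views):
        for t in range(gop_size):
            layer = hierarchical_layer(t, gop_size)
            refs = []
            if t > 0:
                refs.append((v, t - (t & -t)))   # temporal reference inside the view (left anchor only)
            if layer == 0 and v > 0:
                refs.append((v - 1, t))          # inter-view reference, restricted to layer 0
            pictures.append(Picture(v, t, layer, refs))
    return pictures
```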