• Title/Summary/Keyword: 3D video


Design of Wavelet-Based 3D Comb Filter for Composite Video Decoder (컴포지트 비디오 디코더를 위한 웨이블릿 기반 3차원 콤 필터의 설계)

  • Kim Nam-Sub;Cho Won-Kyung
    • Journal of Korea Multimedia Society / v.9 no.5 / pp.542-553 / 2006
  • Because the Y and C signals in a composite video signal overlap in the same frequency band, it is impossible to separate them completely. Therefore, an efficient separation technique is needed to minimize the degradation of video quality. In this paper, we propose a wavelet-based 3D comb filter algorithm and architecture for separating the Y and C signals from a composite video signal. The proposed algorithm uses a wavelet transform and thresholding of compared lines to achieve the maximum video quality. Simulation results show that the proposed algorithm yields better image quality and a higher PSNR than previous algorithms. For practical application of the proposed algorithm, we developed a hardware architecture and implemented it in VHDL. Finally, a VLSI layout of the proposed architecture was generated using a 0.25 micrometer CMOS process.

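The abstract describes choosing how to comb the composite signal based on a wavelet-domain comparison of corresponding lines. The Python sketch below illustrates only that decision logic; the Haar wavelet, threshold value, and comb weights are assumptions, not the paper's filter design.

```python
# Minimal sketch of the decision logic in a 3D comb filter: if a scan line is
# nearly unchanged between frames (judged by thresholding wavelet detail
# coefficients), chroma is cancelled temporally; otherwise the filter falls
# back to combing adjacent lines. Wavelet, threshold and weights are
# illustrative assumptions, not the paper's design.
import numpy as np
import pywt

def comb_filter_line(prev_line, cur_line, above, below, threshold=8.0):
    """Return (Y, C) estimates for one scan line of an NTSC-like composite
    signal, where chroma phase inverts from line to line and frame to frame."""
    _, d_prev = pywt.dwt(prev_line, 'haar')
    _, d_cur = pywt.dwt(cur_line, 'haar')
    if np.mean(np.abs(d_cur - d_prev)) < threshold:
        # Static area: frame combing averages out the phase-inverted chroma.
        y = 0.5 * (cur_line + prev_line)
    else:
        # Moving area: fall back to a line comb within the current frame.
        y = 0.5 * (cur_line + 0.5 * (above + below))
    return y, cur_line - y
```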

3D DCT Video Information Hiding

  • Kim, Young-Gon;Jie Yang;Lee, Hye-Joo;Hong, Jin-Woo;Lee, Moon-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2002.11a / pp.169-172 / 2002
  • Embedding information into video data is a topic that has recently gained increasing attention. This paper proposes a new approach for digital watermarking and secure copyright protection of video, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method presented here is based on the three-dimensional discrete cosine transform of a video scene, in contrast with previous works on video watermarking where each video frame was marked separately, or where only intra-frames or motion compensation parameters were marked in MPEG compressed videos. The watermark is an encrypted pseudo-noise signal added to the video. The performance of the presented technique is evaluated experimentally.

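As a rough illustration of scene-level 3D DCT watermarking with a key-driven pseudo-noise signal, the sketch below embeds and detects such a mark; the coefficient masking, strength, and correlation detector are assumptions rather than the paper's exact scheme.

```python
# Illustrative sketch: embed an encrypted (key-driven) pseudo-noise watermark
# in the 3D DCT of a short video scene and detect it by correlation.
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(scene, key, strength=2.0):
    """scene: (frames, height, width) float array for one video scene."""
    coeffs = dctn(scene, norm='ortho')           # 3D DCT over t, y, x
    rng = np.random.default_rng(key)             # key-driven pseudo-noise
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)
    # Spare the lowest-frequency corner so the mark stays visually unobtrusive.
    mask = np.ones_like(coeffs)
    mask[:2, :8, :8] = 0.0
    return idctn(coeffs + strength * pn * mask, norm='ortho')

def detect_watermark(scene, key):
    """Correlate the scene's 3D DCT with the key's pseudo-noise pattern."""
    coeffs = dctn(scene, norm='ortho')
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return float(np.sum(coeffs * pn) / coeffs.size)
```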

Watermarking Method using Similarity between Frames in the Scene (장면내의 프레임간 유사성을 이용한 워터마킹 방법)

  • Ahn, I.Y.
    • Journal of the Institute of Electronics Engineers of Korea TE / v.42 no.4 / pp.21-26 / 2005
  • This paper presents a watermarking method that uses the similarity between frames in a scene to resist various attacks and to improve video quality. The method inserts and detects a watermark in a frame pair every 3 frames. Experimental simulations show that the video quality is improved by more than 45 dB compared with previous methods, and that the watermark is resistant to frame dropping, MPEG compression, and low-pass filtering attacks.
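
A minimal sketch of the frame-pair idea follows, under the assumption that the pair carries opposite-sign copies of the watermark so that subtracting the two similar frames cancels the content and leaves the mark; the pairing interval and embedding strength are illustrative, not the paper's settings.

```python
# Sketch: embed +W in one frame and -W in its neighbour within the same scene;
# detection subtracts the pair so the (similar) content cancels out.
import numpy as np

def embed_pair(frames, watermark, strength=3.0, step=3):
    """frames: (n, h, w) array; watermark: (h, w) array of +/-1 values."""
    marked = frames.astype(float)
    for i in range(0, len(frames) - 1, step):
        marked[i] += strength * watermark       # first frame of the pair
        marked[i + 1] -= strength * watermark   # its pair in the same scene
    return marked

def detect_pair(frames, watermark, step=3):
    scores = []
    for i in range(0, len(frames) - 1, step):
        diff = frames[i].astype(float) - frames[i + 1].astype(float)
        scores.append(np.mean(diff * watermark))  # content cancels, W remains
    return float(np.mean(scores))
```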

Intelligent Composition of CG and Dynamic Scene (CG와 동영상의 지적합성)

  • 박종일;정경훈;박경세;송재극
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1995.06a / pp.77-81 / 1995
  • Video composition is the integration of multiple image materials into one scene. It considerably enhances the degree of freedom in producing various scenes. However, high-quality video composition requires adjusting the viewpoints and image planes of the image materials. In this paper, we propose an intelligent video composition technique concentrating on the composition of CG and a real scene. We first model the camera system: the projection is assumed to be perspective, and the camera motion is assumed to consist of 3D rotation and 3D translation. Then, we automatically extract the camera parameters of this model from the real scene with a dedicated algorithm. After that, the CG scene is generated according to the camera parameters of the real scene. Finally, the two are composited into one scene. Experimental results justify the validity of the proposed method.
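
The camera model named in the abstract (perspective projection with 3D rotation and translation) can be written compactly; the sketch below shows only that projection step with placeholder parameters and does not reproduce the paper's parameter-extraction algorithm.

```python
# Minimal pinhole-camera sketch: project CG vertices with rotation R,
# translation t and focal length f. R, t and f here are placeholders standing
# in for parameters recovered from the real footage.
import numpy as np

def project_points(points_3d, R, t, focal, principal_point):
    """Project Nx3 CG vertices into image coordinates with the given camera."""
    cam = points_3d @ R.T + t                  # world -> camera coordinates
    x = focal * cam[:, 0] / cam[:, 2] + principal_point[0]
    y = focal * cam[:, 1] / cam[:, 2] + principal_point[1]
    return np.stack([x, y], axis=1)

# Example: project two CG points with placeholder camera parameters, then
# overlay the rendered CG on the live frame at these image coordinates.
R = np.eye(3)                                  # rotation (placeholder)
t = np.array([0.0, 0.0, 5.0])                  # translation (placeholder)
uv = project_points(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]),
                    R, t, focal=800.0, principal_point=(320.0, 240.0))
```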

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • Journal of the Korea Society of Computer and Information / v.24 no.6 / pp.21-28 / 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep networks have been deployed in all fields of computer vision. Action recognition, an important branch of human perception and computer vision research, has attracted more and more attention. It is a challenging task due to the particular complexity of human movement: the same movement can differ across individuals. Human actions appear as continuous image frames in a video, so action recognition requires more computational power than processing static images, and the simple use of a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance; it also intuitively explains which part of the input the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose a 3D dense convolutional network based on an attention mechanism (ADD-Net) for recognizing human motion in video.
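
For orientation, the sketch below combines the two ingredients named in the abstract, 3D dense connectivity and a channel-attention gate, in PyTorch; the layer sizes and the squeeze-and-excitation-style gate are assumptions, not the actual ADD-Net architecture.

```python
# Illustrative building blocks: a 3D dense layer (concatenates its input with
# new features) and a channel-attention gate applied to a video clip tensor.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (N, C, T, H, W)
        w = x.mean(dim=(2, 3, 4))              # squeeze over time and space
        w = self.fc(w).view(x.size(0), x.size(1), 1, 1, 1)
        return x * w                           # re-weight motion-relevant channels

class DenseLayer3D(nn.Module):
    def __init__(self, in_channels, growth=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm3d(in_channels), nn.ReLU(inplace=True),
            nn.Conv3d(in_channels, growth, kernel_size=3, padding=1))

    def forward(self, x):
        return torch.cat([x, self.conv(x)], dim=1)   # dense connectivity

clip = torch.randn(2, 3, 16, 112, 112)         # batch of 16-frame RGB clips
block = nn.Sequential(DenseLayer3D(3), DenseLayer3D(19), ChannelAttention3D(35))
features = block(clip)                          # (2, 35, 16, 112, 112)
```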

Design of 3D Stereoscopic Electronic Book Authoring Tool Based on DirectX (DirectX기반 3차원 입체 eBook 영상 및 이미지 저작 도구 설계)

  • Park, Jinwoo;Lee, Keunhyoung;Kim, Jinmo;Hwang, Soyoung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.171-173 / 2015
  • This paper proposes the design of an authoring tool for making 3D e-books using DirectX development tools. The proposed authoring tool provides several functions, such as generation and modification of 3D objects, modification of textures, stereoscopic modes and pictures, video export, and so on. To support these functions, we propose a design scheme covering data structures for generating 3D objects, an anaglyph method using color differences, and a video export method using the BandiCap library.

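The anaglyph method mentioned above fuses a left/right view pair by color channel. A minimal sketch under the common red-cyan convention is shown below; the channel assignment is an assumption, since the paper itself targets a DirectX implementation.

```python
# Sketch: build a red-cyan anaglyph by taking the red channel from the left
# eye and the green/blue channels from the right eye.
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: (H, W, 3) uint8 images of the same scene."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]        # red from the left view
    anaglyph[..., 1:] = right_rgb[..., 1:]     # green and blue from the right view
    return anaglyph
```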

Developing a Sensory Ride Film 'Dragon Dungeon Racing' (효율적인 입체 라이드 콘텐츠 제작을 위한 연구)

  • Chae, Eel-Jin;Choi, Chul-Young;Choi, Kyu-Don;Kim, Ki-Hong
    • The Journal of the Korea Contents Association / v.11 no.2 / pp.178-185 / 2011
  • The recent development of 3D technology and its application contents has made it possible for people to experience a wide variety of 3D contents, such as 3D/4D films, VR, 3D ride films, IMAX, and sensory 3D games, at theme parks, large-scale exhibitions, 4D cinemas, and video rides. Among them, the video ride, a motion-based genre in which viewers are immersed in virtual reality and experience it indirectly, is gaining particular popularity. This study introduces the production process of the sensory 3D image genre and ride films. In selecting material for the 3D images, the spaces and settings suited to the fierce movement of rides are studied, and examples are given of realizing creative direction ideas and effective techniques using the stereo camera functions first introduced in MAYA 2009. When experts in 3D image production create more interesting stories with cultural diversity and adopt enhanced 3D production techniques for excellent contents, domestic companies will be able to compete with their foreign counterparts and establish their own unique and strong domains in the image contents sector.

Similarity-Based Patch Packing Method for Efficient Plenoptic Video Coding in TMIV

  • Kim, HyunHo;Kim, Yong-Hwan
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.250-252 / 2022
  • As immersive video contents have started to emerge in the commercial market, research on them is required. To this end, efficient coding methods for immersive video are being studied in the MPEG-I Visual workgroup, which released the Test Model for Immersive Video (TMIV). In the current TMIV, patches are packed into the atlas in order of patch size. However, this simple patch packing method can reduce coding efficiency from the viewpoint of the 2D encoder. In this paper, we propose a patch packing method that packs patches into atlases by using the similarity of each patch, in order to improve the coding efficiency of 3DoF+ video. Experimental results show a 0.3% BD-rate saving on average over the TMIV anchor.

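To illustrate what similarity-driven packing might look like, the sketch below greedily orders patches so that similar ones land next to each other in the atlas; the fixed-size signature and greedy chaining are assumptions, and TMIV's actual packer is considerably more involved.

```python
# Sketch: order texture patches by similarity before packing, so a 2D encoder
# sees neighbouring atlas blocks with similar content.
import numpy as np

def patch_signature(patch, size=16):
    """Downsample a (h, w) texture patch to a fixed-size signature."""
    h, w = patch.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return patch[np.ix_(ys, xs)].astype(float)

def order_by_similarity(patches):
    """Greedily chain patches: each next patch is the most similar unused one."""
    sigs = [patch_signature(p) for p in patches]
    order = [0]
    remaining = set(range(1, len(patches)))
    while remaining:
        last = sigs[order[-1]]
        nxt = min(remaining, key=lambda j: np.mean(np.abs(sigs[j] - last)))
        order.append(nxt)
        remaining.remove(nxt)
    return order  # pack patches into the atlas in this order
```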

Vanishing point-based 3D object detection method for improving traffic object recognition accuracy

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.93-101 / 2023
  • In this paper, we propose a method of creating a 3D bounding box for an object using vanishing points, in order to increase the accuracy of object recognition when traffic objects are recognized from a video camera. When vehicles captured by a traffic video camera are detected using artificial intelligence, this 3D bounding box generation algorithm can be applied. The vertical vanishing point (VP1) and horizontal vanishing point (VP2) are derived by analyzing the camera installation angle and the viewing direction of the captured image, and based on these, the moving object in the video under analysis is specified. With this algorithm it is easy to detect object information such as the location, type, and size of the detected object, and when it is applied to a moving object such as a car, the location, coordinates, movement speed, and direction of each object can be determined by tracking it. As a result of application to actual roads, tracking improved by 10%; in particular, the recognition rate and tracking of shaded areas (extremely small vehicle parts hidden by large cars) improved by 100%, and the accuracy of traffic data analysis was improved.
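
The geometric primitive behind this method is that the images of parallel scene lines meet at a vanishing point; the sketch below computes such a point with homogeneous coordinates, using placeholder lane lines rather than the paper's camera-angle derivation.

```python
# Sketch: a vanishing point is the intersection of the images of parallel
# scene lines, computed with homogeneous line/point cross products.
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line_a, line_b):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    x, y, w = np.cross(line_a, line_b)
    return np.array([x / w, y / w])

# Example: two lane markings that are parallel on the road converge at a
# horizontal vanishing point (placeholder coordinates).
vp = vanishing_point(line_through((100, 700), (400, 400)),
                     line_through((1100, 700), (800, 400)))
# A 3D bounding box for a detected vehicle can then be sketched by extending
# the 2D box corners along the directions toward the vertical and horizontal
# vanishing points.
```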

Stereoscopic Video Services for Terrestrial DMB (지상파 DMB를 위한 스테레오스코픽 영상 서비스)

  • Kim, Yong-Han
    • Journal of Broadcast Engineering / v.14 no.1 / pp.85-88 / 2009
  • Recently "DMB Video-Associated Stereoscopic Data Services" standard has been published by TTA. The standard enables DMB broadcasters to provide 3D or stereoscopic interactive data services based on MPEG-4 BIFS (Binary Format for Scenes). The purpose is to entice viewers to utilize DMB interactive data services more often by providing realistic and protrusive image objects overlaid on top of the main video in the background. This paper provides the background, technical analysis, and in-depth considerations for the standard. Also the results of standard verification are provided including the results of interoperability test with the existing terrestrial DMB receivers.