• Title/Summary/Keyword: 비트표현 (bit representation)

283 search results, processing time 0.023 seconds

MMT-based V3C data packetizing method (MMT 기반 V3C 데이터 패킷화 방안)

  • Moon, Hyeongjun;Kim, Yeonwoong;Park, Seonghwan;Nam, Kwijung;Kim, Kyuhyeon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.836-838 / 2022
  • 3D point clouds are a data format for representing 3D content more realistically. Point cloud data exists in three-dimensional space and is far larger than conventional 2D video, so compression techniques such as V-PCC (Video-based Point Cloud Compression), one of the various methods recently proposed for large point cloud data, are required for efficient transmission and storage. V-PCC decomposes the point cloud into patches and projects them onto 2D planes, converting the 3D content into 2D video, which is then compressed with conventional 2D codecs. Video converted by V-PCC can therefore be transmitted over networks using existing 2D video delivery methods. This paper proposes a packetization method that builds MPEG Media Transport (MMT) packets so that V3C data compressed with V-PCC can be transmitted and consumed over broadcast networks, and verifies it by comparing the V3C (visual volumetric video-based coding) bitstreams exchanged between the server and the client.

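
The packetize-and-verify workflow in the abstract above can be sketched as follows. This is only an illustrative stand-in, not the MMT packet format or the paper's actual method: the header here is a bare 4-byte sequence number, and `MTU_PAYLOAD` is an assumed size.

```python
# Illustrative sketch (not the real MMT format): split a bitstream into
# fixed-size payloads with a minimal header, then reassemble and compare.
import struct

MTU_PAYLOAD = 1400  # hypothetical payload size per packet

def packetize(bitstream: bytes, payload_size: int = MTU_PAYLOAD) -> list[bytes]:
    """Split a bitstream into packets, each with a 4-byte sequence-number header."""
    packets = []
    for seq, off in enumerate(range(0, len(bitstream), payload_size)):
        header = struct.pack(">I", seq)          # big-endian sequence number
        packets.append(header + bitstream[off:off + payload_size])
    return packets

def depacketize(packets: list[bytes]) -> bytes:
    """Reorder packets by sequence number and concatenate the payloads."""
    ordered = sorted(packets, key=lambda p: struct.unpack(">I", p[:4])[0])
    return b"".join(p[4:] for p in ordered)

v3c_bitstream = bytes(range(256)) * 20           # stand-in for a V3C bitstream
pkts = packetize(v3c_bitstream)
assert depacketize(pkts) == v3c_bitstream        # bitstreams match, as in the paper's check
```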

Design of a 3D Graphics Geometry Accelerator using the Programmable Vertex Shader (Programmable Vertex Shader를 내장한 3차원 그래픽 지오메트리 가속기 설계)

  • Ha Jin-Seok;Jeong Hyung-Gi;Kim Sang-Yeon;Lee Kwang-Yeob
    • Journal of the Institute of Electronics Engineers of Korea SD / v.43 no.9 s.351 / pp.53-58 / 2006
  • A Vertex Shader is designed to support richer 3D graphics effects and to increase the flexibility of the fixed-function T&L (Transform and Lighting) engine. The design is based on Vertex Shader 1.1 of DirectX 8.1 and the OpenGL ARB specification. The Vertex Shader consists of four floating-point ALUs for vector operations. The conventional 32-bit floating-point data type is replaced with a 24-bit floating-point data type so that the Vertex Shader consumes less power and occupies a smaller area. A Xilinx Virtex2 300M-gate module is used to verify the behavior of the core. Synopsys synthesis results show that the proposed Vertex Shader runs at 115 MHz in a TSMC 0.13 um process and can operate at a rate of 12.5 M polygons/sec, with a complexity of 110,000 gates in the same process.
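
The 32-bit to 24-bit floating-point reduction mentioned above can be illustrated in software. The exact bit layout of the paper's 24-bit type is not given here, so this sketch assumes 1 sign, 8 exponent, and 15 mantissa bits, obtained by truncating a binary32 value.

```python
# Hypothetical 24-bit float layout (1 sign, 8 exponent, 15 mantissa bits);
# the paper's exact format is not specified here, so this is an assumption.
import struct

def f32_to_f24_bits(x: float) -> int:
    """Truncate an IEEE 754 binary32 value to 24 bits (drop 8 mantissa LSBs)."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 8                  # keep sign, exponent, top 15 mantissa bits

def f24_bits_to_f32(bits24: int) -> float:
    """Expand the 24-bit representation back to binary32 with zero-padded mantissa."""
    return struct.unpack(">f", struct.pack(">I", bits24 << 8))[0]

x = 3.14159265
y = f24_bits_to_f32(f32_to_f24_bits(x))
assert abs(x - y) / x < 2 ** -15        # relative error bounded by the 15-bit mantissa
```

Truncation is the cheapest rounding choice in hardware; a real design might round to nearest instead, at the cost of an extra adder in the datapath.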

An Efficient Compression Method of Integral Images Using Adaptive Block Modes (적응적인 블록 모드를 이용한 집적 영상의 효율적인 압축 방법)

  • Jeon, Ju-Il;Kang, Hyun-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.1-9 / 2010
  • In this paper, we propose an efficient compression method for integral images. Integral imaging is a well-known technique for representing and recording three-dimensional images. The proposed method is based on the three-dimensional discrete cosine transform (3D-DCT), which has been reported as an efficient coding approach for integral images because it reduces the redundancy between adjacent elemental images. On top of the 3D-DCT, the proposed method introduces an adaptive block mode (ABM): 3D-DCT blocks adapted to the characteristics of the integral image are constructed, yielding variable-size 3D blocks, and 3D-DCTs are then performed on these blocks. Experimental results show that the proposed method gives a significant improvement in coding efficiency. The gain is especially large at high bit rates, since the overhead incurred by the proposed method accounts for a smaller share of the total bits.
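
The 3D-DCT at the heart of the method above is separable: the 1-D DCT-II below, applied along each of the three axes of an elemental-image stack, yields the 3D-DCT. This is a minimal orthonormal reference implementation, not the paper's coder.

```python
# Minimal orthonormal DCT-II and its inverse (DCT-III); a 3D-DCT applies this
# 1-D transform separably along each axis of an elemental-image stack.
import math

def dct2(x):
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct2(X):
    N = len(X)
    out = []
    for n in range(N):
        s = math.sqrt(1 / N) * X[0]
        s += sum(math.sqrt(2 / N) * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                 for k in range(1, N))
        out.append(s)
    return out

block = [52, 55, 61, 66, 70, 61, 64, 73]        # one 8-sample line of a block
rec = idct2(dct2(block))
assert all(abs(a - b) < 1e-9 for a, b in zip(block, rec))   # perfect reconstruction
```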

3-D Wavelet Compression with Lifting Scheme for Rendering Concentric Mosaic Image (동심원 모자이크 영상 표현을 위한 Lifting을 이용한 3차원 웨이브렛 압축)

  • Jang Sun-Bong;Jee Inn-Ho
    • Journal of Broadcast Engineering / v.11 no.2 s.31 / pp.164-173 / 2006
  • The data structure of a concentric mosaic can be regarded as a video sequence captured by a slowly panning camera, so a concentric mosaic can be built by matching and aligning video frames. Because a concentric mosaic requires a huge amount of memory, compression is essential, and an algorithm is needed that can decode a scene while keeping the compressed data structure intact. In this paper, we use a 3D lifting transform to compress the concentric mosaic. The lifting transform retains the merits of the wavelet transform while reducing computation and memory. Since the frames are highly correlated, extracting a single scene from a fully 3D-transformed bitstream is complex. To achieve higher performance while reducing this complexity, we first apply the 3D lifting transform and then compress the transformed data frame by frame, allowing a flexible bit rate for each frame. We also propose an algorithm that, by exploiting the lifting structure, decodes a scene while maintaining the compressed data structure.
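
A single lifting step of the kind used above can be sketched with the Haar wavelet: split the signal into even and odd samples, predict the odd ones from the even ones, then update the even ones. The predict/update filters here are the simplest possible choice, not necessarily the ones used in the paper.

```python
# One level of a Haar-style lifting transform: split, predict, update.
# Lifting computes the wavelet transform in place, which is why the paper
# cites its low computation and memory cost.
def haar_lift(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict: odd from even
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update: preserve the mean
    return approx, detail

def haar_unlift(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]   # undo update
    odd = [d + e for d, e in zip(detail, even)]          # undo predict
    x = []
    for e, o in zip(even, odd):
        x += [e, o]
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_lift(signal)
assert haar_unlift(a, d) == signal                        # lifting is exactly invertible
```

Because every lifting step is invertible by simply reversing its operations, perfect reconstruction holds by construction, even with integer arithmetic.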

Inter-frame vertex selection algorithm for lossy coding of shapes in video sequences (동영상에서의 모양 정보 부호화를 위한 정점 선택 알고리즘)

  • Suh, Jong-Yeul;Kim, Kyong-Joong;Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.4 / pp.35-45 / 2000
  • The vertex-based boundary encoding scheme is widely used in object-based video coding and computer graphics because of its scalability and natural-looking approximation. Existing single-frame vertex encoding algorithms are inefficient for temporally correlated video sequences because they do not remove temporal redundancy. In the proposed method, a vertex point is selected not only from the boundary points of the current frame but also from the vertex points of the previous frame, removing the temporal redundancy of shape information in video sequences. The problem of selecting optimal vertex points is modeled as finding the shortest path in a weighted directed acyclic graph. The boundary is approximated by a polygon that can be encoded with the smallest number of bits for a given maximum distortion. The proposed scheme efficiently removes the temporal redundancy between two successive frames, resulting in a lower bit rate than conventional algorithms.

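
The shortest-path formulation described above can be sketched for a single frame: treat each boundary point as a node, add an edge i → j whenever the chord from point i to point j keeps every skipped point within the distortion bound, and take the path with the fewest edges. The inter-frame part of the paper (reusing previous-frame vertices) is omitted from this sketch.

```python
# Fewest-vertex polygon approximation as a shortest path in a DAG whose
# nodes are boundary points; an edge i -> j exists if the chord i-j keeps
# all skipped points within dmax.
import math

def point_to_segment(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def select_vertices(boundary, dmax):
    n = len(boundary)
    best = [math.inf] * n      # best[i]: fewest edges from point 0 to point i
    prev = [-1] * n
    best[0] = 0
    for i in range(n):
        for j in range(i + 1, n):
            if all(point_to_segment(boundary[k], boundary[i], boundary[j]) <= dmax
                   for k in range(i + 1, j)):
                if best[i] + 1 < best[j]:
                    best[j], prev[j] = best[i] + 1, i
    path, i = [], n - 1
    while i != -1:             # walk the predecessor chain back to the start
        path.append(i)
        i = prev[i]
    return path[::-1]

pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (3, 1), (3, 2)]
assert select_vertices(pts, 0.2) == [0, 3, 5]   # the near-collinear run is skipped
```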

A Design of a High-Throughput IDCT Processor Using the Distributed Arithmetic Method (처리율을 개선시킨 분산연산 방식의 IDCT 프로세서 설계)

  • 김병민;배현덕;조태원
    • Journal of the Institute of Electronics Engineers of Korea SC / v.40 no.6 / pp.48-57 / 2003
  • In this paper, an 8×1 1D-IDCT processor using adder-based distributed arithmetic (DA) and a bit-serial method is presented. The bit-serial and DA methods are used to reduce hardware cost and improve operating speed. Transforming the coefficient equations reduces hardware cost and yields a regular implementation, and the sign-extension computation method reduces the number of operation clocks. Logic synthesis shows that the designed 8×1 1D-IDCT has a gate count of 17,504; the sign-extension processing block accounts for 3,620 gates, about 20% of the total architecture, but more than doubles the throughput. The designed IDCT processes 50 Mpixels per second at a clock frequency of 100 MHz.
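
The adder-based distributed arithmetic above replaces multipliers with a lookup table of coefficient partial sums, addressed by one bit-plane of the inputs per clock. A bit-level software model, with stand-in coefficients rather than actual IDCT constants:

```python
# Sketch of distributed arithmetic (DA): the inner product sum(c[i] * x[i])
# is computed bit-serially, one bit-plane of the inputs per 'clock', using a
# LUT of partial coefficient sums indexed by those bits.
COEFFS = [7, -3, 5, 1]          # stand-in constants (an IDCT row would go here)
BITS = 8                        # unsigned 8-bit inputs for simplicity

# Precompute the LUT: entry m holds the sum of coefficients selected by m's bits.
LUT = [sum(c for i, c in enumerate(COEFFS) if m >> i & 1)
       for m in range(2 ** len(COEFFS))]

def da_dot(xs):
    """Bit-serial inner product of xs with COEFFS via the DA lookup table."""
    acc = 0
    for b in range(BITS):                       # one bit-plane per clock
        addr = 0
        for i, x in enumerate(xs):
            addr |= (x >> b & 1) << i           # gather bit b of every input
        acc += LUT[addr] << b                   # shift-accumulate the partial sum
    return acc

xs = [200, 13, 77, 250]
assert da_dot(xs) == sum(c * x for c, x in zip(COEFFS, xs))
```

The LUT and shift-accumulator replace four multipliers with one small memory and one adder, which is the hardware-cost saving the abstract refers to; handling signed inputs is where the paper's sign-extension block comes in, and it is omitted here.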

Quantization Method for Normalization of JPEG Pleno Hologram (JPEG Pleno 홀로그램 데이터의 정규화를 위한 양자화)

  • Kim, Kyung-Jin;Kim, Jin-Kyum;Oh, Kwan-Jung;Kim, Jin-Woong;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.25 no.4 / pp.587-597 / 2020
  • In this paper, we analyze the normalization that occurs when processing digital holograms and propose an optimized quantization method. In JPEG Pleno, which standardizes hologram compression, full complex holograms are defined as complex numbers with 32-bit or 64-bit precision, and the range of values varies greatly depending on the hologram generation method and object type. Such high-precision, wide-dynamic-range data are converted to fixed-point or integer numbers of lower precision for signal processing and compression. In addition, to reconstruct the hologram on an SLM (spatial light modulator), the data are approximated with the precision that the SLM pixels can represent. This process can be regarded as normalization through quantization. We introduce a method for normalizing high-precision, wide-range holograms using quantization and propose an optimized version of it.
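
The normalization-by-quantization described above can be sketched as a plain min-max quantizer. The paper's optimized method is not reproduced here, and real JPEG Pleno holograms are complex-valued, so the real and imaginary planes would each be quantized like this.

```python
# Minimal min-max quantizer: normalize wide-range float data into n-bit
# integer codes and map them back.
def quantize(values, n_bits):
    lo, hi = min(values), max(values)
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((v - lo) / scale) for v in values]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return [lo + c * scale for c in codes]

data = [-1234.5, 0.002, 98765.4, 42.0]           # wide dynamic range, as in holograms
codes, lo, scale = quantize(data, 8)
rec = dequantize(codes, lo, scale)
max_err = max(abs(a - b) for a, b in zip(data, rec))
assert max_err <= scale / 2                       # error bounded by half a step
```

The reconstruction error is at most half a quantization step, which is why widening the range (larger `scale`) directly degrades precision; an optimized method would pick the range and step to match the hologram's actual value distribution.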

Adaptive TBC in Intra Prediction on Versatile Video Coding (VVC의 화면 내 예측에서 적응적 TBC를 사용하는 방법)

  • Lee, Won Jun;Park, Gwang Hoon
    • Journal of Broadcast Engineering / v.25 no.1 / pp.109-112 / 2020
  • VVC uses 67 modes in intra prediction. The most probable mode (MPM) list is used to reduce the data needed to represent the intra prediction mode: if the mode to be sent is in the MPM candidate list, the index into the MPM list is transmitted; if not, truncated binary coding (TBC) is applied. When TBC is applied in intra prediction, the three lowest-numbered modes are coded with 5 bits and the remaining modes are coded with 6 bits. In this paper, we examine the limitations of the TBC used in VVC intra prediction and propose an adaptive method that encodes more efficiently than the conventional approach. As a result, the overall coding efficiency improves by 0.01% and 0.04% in the AI and RA configurations, respectively, compared with the conventional coding method.
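
The 5-bit/6-bit split described above is exactly truncated binary coding: for n = 61 non-MPM modes, u = ceil(log2 61) = 6, so 2^6 - 61 = 3 symbols receive 5-bit codes and the remaining 58 receive 6-bit codes.

```python
# Truncated binary coding (TBC) for the 61 non-MPM intra modes in VVC:
# the first 2**u - n symbols get u-1 bits, the rest get u bits.
import math

def tbc_encode(symbol, n):
    u = math.ceil(math.log2(n))
    short = 2 ** u - n                  # number of symbols with the shorter code
    if symbol < short:
        return format(symbol, f"0{u - 1}b")
    return format(symbol + short, f"0{u}b")

n = 61                                  # 67 intra modes minus 6 MPM candidates
assert len(tbc_encode(0, n)) == 5       # first three symbols: 5 bits
assert len(tbc_encode(2, n)) == 5
assert len(tbc_encode(3, n)) == 6       # remaining 58 symbols: 6 bits
assert len(tbc_encode(60, n)) == 6
```

Shifting the long codewords by `short` keeps the code prefix-free: no 5-bit codeword is the first five bits of any 6-bit codeword. The paper's adaptive method changes which modes get the shorter codes; this sketch shows only the conventional fixed assignment.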

Research of Semantic Considered Tree Mining Method for an Intelligent Knowledge-Services Platform

  • Paik, Juryon
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.27-36 / 2020
  • In this paper, we propose a method to derive valuable but hidden information from data, the core foundation of the 4th Industrial Revolution, in pursuit of knowledge-based service fusion. Hyper-connected societies characterized by IoT inevitably produce big data, and deriving optimal services for problem situations from that data first requires discovering the valuable information within it. A data-centric IoT platform, a type of middleware, collects, stores, manages, and integrates data from diverse devices. Its purpose is to provide suitable solutions to the problems at hand after processing and analyzing the data, which depends on efficient and accurate data-analysis algorithms. To this end, we propose specially designed structures that store IoT data without losing its semantics, and provide algorithms that discover useful information, together with several definitions and proofs showing their soundness.

Colormap Construction and Combination Method between Colormaps (컬러맵의 생성과 컬러맵간의 결합 방법)

  • Kim, Jin-Hong;Jo, Cheol-Hyo;Kim, Du-Yeong
    • The Transactions of the Korea Information Processing Society / v.1 no.4 / pp.541-550 / 1994
  • A true-color image requires a large amount of data for transmission and storage, so we want to describe a color image with less data and no visible degradation. This paper presents a 256-color colormap construction method in the RGB and YIQ/YUV spaces, together with a common-colormap method for merging colormaps so that images quantized with different colormaps can be displayed on one monitor at the same time. The results in the RGB and YIQ/YUV spaces were compared using PSNR, standard deviation, and an edge preservation rate computed with the Sobel operator. Processing takes 3 seconds for colormap construction and 2 seconds for merging colormaps. RGB space yields a PSNR higher by 0.15 and 0.34 on average than YIQ and YUV space, and a standard deviation lower by 0.15 and 0.41 on average. In terms of compression, however, the YIQ/YUV spaces achieve about three times the compression efficiency of RGB because only 4 of the 8 bits of each color component are used.

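
A 256-entry colormap in its simplest form can be sketched with a fixed 3-3-2 bit allocation (3 bits red, 3 green, 2 blue). The paper constructs its colormaps adaptively in RGB and YIQ/YUV spaces, so this uniform allocation is only a stand-in for the general index/lookup mechanism.

```python
# Illustrative 256-color colormap via uniform 3-3-2 bit allocation; the
# paper's adaptive construction in RGB and YIQ/YUV spaces is not shown.
def rgb_to_index(r, g, b):
    """Map a 24-bit RGB pixel to an 8-bit colormap index."""
    return (r >> 5) << 5 | (g >> 5) << 2 | (b >> 6)

def index_to_rgb(i):
    """Reconstruct a representative RGB color from an 8-bit index."""
    r = (i >> 5 & 0b111) << 5
    g = (i >> 2 & 0b111) << 5
    b = (i & 0b11) << 6
    return r, g, b

colormap = [index_to_rgb(i) for i in range(256)]    # the full 256-entry colormap
idx = rgb_to_index(200, 100, 50)
assert 0 <= idx < 256
assert max(abs(a - b) for a, b in zip((200, 100, 50), colormap[idx])) < 64
```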