• Title/Summary/Keyword: Compression format

Search Results: 100

Standard Technology for Digital Cable Stereoscopic 3DTV Broadcasting (디지털 케이블 양안식 3DTV 방송 표준 기술)

  • You, Woong-Shik;Lee, Bong-Ho;Jung, Joon-Young;Yun, Kug-Jin;Choi, Dong-Joon;Cheong, Won-Sik;Hur, Nam-Ho;Kwon, Oh-Seok
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.9B / pp.1126-1142 / 2011
  • This paper addresses the stereoscopic 3D broadcasting technology that delivers 3DTV content over digital cable networks. To convey 3D content via the digital cable TV network, specifications for the 3D video format, compression, multiplexing, signaling, and transport have to be developed. Because 3D carries constraints that 2D does not, the design must account for the capacity required by the additional view and for backward/forward compatibility. The paper reviews the latest 3D standardization trends, requirements, and service scenarios, and then covers the 3D format, compression, multiplexing and signaling, service information, and transport/reception technologies.

Analysis of Uniqueness and Robustness Properties of Ordinal Signature for Video Matching (비디오 정합을 위한 오디널 특징의 유일성 및 강건성 분석)

  • Jeong Kwang-Min;Kim Jeong-Yeop;Hyun Ki-Ho;Ha Yeong-Ho
    • Journal of Korea Multimedia Society / v.9 no.5 / pp.576-584 / 2006
  • Content-based video matching measures the similarity between the video signature of an original clip and the signatures of its copies. In particular, matching the exact frame position is important, but it depends on the frame rate, noise conditions, and compression format of the video. The ordinal signature performs better than other video signatures under normal conditions, but previous work did not examine its uniqueness and robustness. Hua et al. performed a uniqueness test on videos compressed in different formats or at different frame sizes; however, their robustness test used images in other compression formats instead of noise. This paper proposes a robustness test method using several noise models and analyzes both the robustness and the uniqueness of the ordinal signature.
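To make the idea of an ordinal signature concrete, here is a minimal sketch (not the authors' implementation): each frame is divided into a small grid, the average intensity of each cell is computed, and the ranks of those averages form the signature; two frames are then compared by the normalized distance between their rank vectors. The grid size and the distance measure are illustrative assumptions.

```python
import numpy as np

def ordinal_signature(frame: np.ndarray, grid: int = 3) -> np.ndarray:
    """Partition a grayscale frame into grid x grid blocks and rank their mean intensities."""
    h, w = frame.shape
    means = np.empty(grid * grid)
    for i in range(grid):
        for j in range(grid):
            block = frame[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            means[i * grid + j] = block.mean()
    # Double argsort turns the block means into their ranks (0 = darkest block).
    return np.argsort(np.argsort(means))

def ordinal_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """L1 distance between rank vectors, normalized to [0, 1]; 0 means identical ordering."""
    n = len(sig_a)
    max_dist = np.abs(np.arange(n) - np.arange(n)[::-1]).sum()  # reversal maximizes L1
    return float(np.abs(sig_a - sig_b).sum()) / max_dist

# Example: a frame and a mildly noisy copy keep nearly the same block ordering.
rng = np.random.default_rng(0)
frame = rng.random((120, 160))
noisy = np.clip(frame + rng.normal(0, 0.02, frame.shape), 0, 1)
print(ordinal_distance(ordinal_signature(frame), ordinal_signature(noisy)))
```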


Forensic Analysis of HEIF Files on Android and Apple Devices (스마트폰에서 촬영된 HEIF 파일 특징 분석에 관한 연구)

  • Kwon, Youngjin;Bang, Sumin;Han, Jaehyeok;Lee, Sangjin
    • KIPS Transactions on Software and Data Engineering / v.10 no.10 / pp.421-428 / 2021
  • The High Efficiency Image File Format (HEIF) is an MPEG-developed image format that uses the H.265 video codec to store still images. The iPhone has used HEIF since 2017, and Android devices such as the Galaxy S10 have also supported the format since 2019. HEIF achieves good compression rates, but its internal structure is complex and compatibility across devices and software is limited, so it has not displaced the commonly used JPEG (or JPG) format. Nevertheless, although many devices already use HEIF, digital forensics research on it is lacking, which exposes investigators to the risk of missing potential evidence because the information contained inside the file is insufficiently understood during digital forensic investigations. Therefore, in this paper, we analyze HEIF-formatted photo files taken on the iPhone and motion photo files taken on the Galaxy to find out what information and features they contain. We also investigate whether the software we tested supports HEIF and present the requirements for forensic tools that analyze HEIF.
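HEIF inherits the box (atom) structure of ISOBMFF, which is part of why its internal layout is more involved than JPEG's. The following is a minimal sketch, under the standard ISOBMFF box layout (4-byte big-endian size, 4-byte type, optional 8-byte largesize), of listing a file's top-level boxes; it is an illustration only, not one of the forensic tools examined in the paper, and the file name is hypothetical.

```python
import struct

def list_top_level_boxes(path: str):
    """Yield (box_type, size) for each top-level ISOBMFF box in a HEIF/MP4-style file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:                 # 64-bit "largesize" follows the type field
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:               # box extends to the end of the file
                yield box_type.decode("ascii", "replace"), None
                break
            else:
                payload = size - 8
            yield box_type.decode("ascii", "replace"), size
            f.seek(payload, 1)            # skip the box body

# HEIF photos typically start with an 'ftyp' box (brand 'heic' or 'mif1'),
# followed by 'meta' and 'mdat'.
for box, size in list_top_level_boxes("IMG_0001.HEIC"):
    print(box, size)
```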

Compressing Method of NetCDF Files Based on Sparse Matrix (희소행렬 기반 NetCDF 파일의 압축 방법)

  • Choi, Gyuyeun;Heo, Daeyoung;Hwang, Suntae
    • KIISE Transactions on Computing Practices / v.20 no.11 / pp.610-614 / 2014
  • Like many types of scientific data, the results of volcanic ash diffusion simulations form clustered sparse matrices stored in the netCDF format. Because these data sets are large, they incur high storage and transmission costs. This paper suggests a new method that reduces the size of volcanic ash diffusion simulation data by converting the multi-dimensional index to a single dimension and keeping only the starting point and length of each run of consecutive zeros. The method compresses almost as well as ZIP compression while preserving the netCDF structure, so it is expected to use storage space efficiently by reducing both the data size and the network transmission time.
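A minimal sketch of the idea described above (flatten the multi-dimensional index, then store only the start and length of each run of zeros together with the non-zero values, in order); the exact encoding used in the paper may differ.

```python
import numpy as np

def compress_zero_runs(array: np.ndarray):
    """Flatten the array and record (start, length) for every run of zeros,
    keeping the non-zero values in order for the gaps in between."""
    flat = array.ravel()                        # multi-dimensional index -> single dimension
    is_zero = (flat == 0)
    edges = np.diff(is_zero.astype(np.int8))    # +1 where a zero run starts, -1 where it ends
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if is_zero[0]:
        starts = np.r_[0, starts]
    if is_zero[-1]:
        ends = np.r_[ends, flat.size]
    zero_runs = list(zip(starts.tolist(), (ends - starts).tolist()))
    return array.shape, zero_runs, flat[~is_zero]

def decompress_zero_runs(shape, zero_runs, nonzero_values):
    flat = np.empty(int(np.prod(shape)), dtype=nonzero_values.dtype)
    mask = np.zeros(flat.size, dtype=bool)
    for start, length in zero_runs:
        mask[start:start + length] = True
    flat[mask] = 0
    flat[~mask] = nonzero_values
    return flat.reshape(shape)

# Round-trip check on a clustered sparse 3-D field.
data = np.zeros((4, 5, 6)); data[1:3, 2:4, 1:5] = 7.5
assert np.array_equal(decompress_zero_runs(*compress_zero_runs(data)), data)
```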

A Study on the Efficiency of ASTC Texture Format in Mobile Game Environment (모바일 게임 환경의 ASTC 텍스쳐 포맷 효용성 연구)

  • Hong, Seong-Chan;Kim, Tae-Gyu;Jung, Won-Joe
    • Journal of Korea Game Society / v.19 no.6 / pp.91-98 / 2019
  • This study compared the memory occupancy, CPU processing speed, and average frame rate of the ASTC and ETC texture formats on the mobile Android OS. A virtual game scene was implemented as the experimental environment and built for the Android platform, and comparative measurements were extracted from it. ASTC used 36% less memory for 2D textures than ETC, CPU processing was 18% faster, and the average frame rate of 54 frames was 58% higher. In the mobile game environment tested, ASTC showed a clear advantage over ETC.
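The memory gap follows from the fixed block sizes of the two formats: an ASTC block always occupies 128 bits regardless of its pixel footprint, while ETC1/ETC2 RGB uses 64 bits per 4x4 block (4 bpp). A small sketch of that arithmetic; the block sizes and texture resolution below are examples, not the settings used in the paper's experiment.

```python
def astc_bits_per_pixel(block_w: int, block_h: int) -> float:
    """Every ASTC block is 128 bits, so bpp depends only on the block footprint."""
    return 128 / (block_w * block_h)

def texture_bytes(width: int, height: int, bpp: float) -> int:
    """Approximate GPU memory for one mip level of a width x height texture."""
    return int(width * height * bpp / 8)

ETC2_RGB_BPP = 4.0   # 64-bit block covering 4x4 pixels
for block in [(4, 4), (6, 6), (8, 8)]:
    bpp = astc_bits_per_pixel(*block)
    print(f"ASTC {block[0]}x{block[1]}: {bpp:.2f} bpp, "
          f"1024x1024 texture = {texture_bytes(1024, 1024, bpp) // 1024} KiB "
          f"(ETC2 RGB = {texture_bytes(1024, 1024, ETC2_RGB_BPP) // 1024} KiB)")
```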

Dynamic Rank Subsetting with Data Compression

  • Hong, Seokin
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.1-9 / 2020
  • In this paper, we propose the Dynamic Rank Subsetting (DRAS) technique, which enhances the energy efficiency and performance of the memory system through data compression. The goal of this technique is to enable partial chip access by storing data in a compressed format within a subset of the DRAM chips. To this end, a memory rank is dynamically configured as two independent sub-ranks. When a data block is written, it is compressed with a data compression algorithm and stored in one of the two sub-ranks. To service a memory request for compressed data, only one sub-rank is accessed, whereas a request for uncompressed data accesses both sub-ranks, as in conventional memory systems. Since the DRAS technique requires minimal hardware modification, it can be used in conventional memory systems with low hardware overhead. Through experimental evaluation with a memory simulator, we show that the proposed technique improves memory system performance by 12% on average and reduces memory system power consumption by 24% on average.
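A toy sketch of the access decision described above, using zlib purely as a stand-in for whatever hardware-friendly compressor a real DRAS-style design would use; the block size, sub-rank size, and metadata layout are illustrative assumptions, not the paper's design.

```python
import zlib

BLOCK_BYTES = 64          # one cache-line-sized memory block (assumption)
SUBRANK_BYTES = 32        # half of the chips hold half of the block

class ToyDRASRank:
    """Stores 64-byte blocks; a block that compresses into 32 bytes lives in a
    single sub-rank, so reading it back touches only half of the DRAM chips."""

    def __init__(self):
        self.subrank = [dict(), dict()]   # address -> bytes held by each sub-rank
        self.compressed = dict()          # address -> True if stored compressed

    def write(self, addr: int, block: bytes):
        assert len(block) == BLOCK_BYTES
        packed = zlib.compress(block)
        if len(packed) <= SUBRANK_BYTES:          # fits in one sub-rank
            self.subrank[addr % 2][addr] = packed
            self.compressed[addr] = True
        else:                                     # fall back to both sub-ranks
            self.subrank[0][addr] = block[:SUBRANK_BYTES]
            self.subrank[1][addr] = block[SUBRANK_BYTES:]
            self.compressed[addr] = False

    def read(self, addr: int) -> tuple[bytes, int]:
        """Return the block and the number of sub-ranks that had to be accessed."""
        if self.compressed[addr]:
            return zlib.decompress(self.subrank[addr % 2][addr]), 1
        return self.subrank[0][addr] + self.subrank[1][addr], 2

rank = ToyDRASRank()
rank.write(0, bytes(64))                  # all zeros: compresses easily
rank.write(1, bytes(range(64)))           # counting pattern: zlib may not shrink it enough
print(rank.read(0)[1], rank.read(1)[1])   # sub-ranks touched per access
```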

Compression of Multispectral Images (멀티 스펙트럴 영상들의 압축)

  • Enrico Piazza
    • Journal of Korea Multimedia Society / v.6 no.1 / pp.28-39 / 2003
  • This paper is an overview of the authors' research contributions on the use of compression techniques to handle high-resolution, multi-spectral images. Originally developed in the remote sensing context, the same techniques are here applied to food and medical images. The objective is to point out the potential of this kind of processing in different contexts such as remote sensing, food monitoring, and medical imaging, and to stimulate new research exploitation. Compression is based on the simple assumption that a relationship can be found between pixels close to each other in a multi-spectral image: there is a certain degree of correlation among pixels belonging to the same band in a close neighbourhood. Once such a correlation, described by certain coefficients, is found in one band, those coefficients are quite probably similar to the ones calculated in the other bands. Based on this second observation, an algorithm was developed that reduces the number of bits per pixel from 16 to 4 in satellite remote-sensed multi-spectral images. A comparison of the speed and compression ratio of different methods is carried out, taking as reference three common algorithms, LZW (Lempel-Ziv-Welch), Huffman, and RLE (Run-Length Encoding), as used in common graphic formats such as GIF, JPEG, and PCX. The presented methods give results similar to the commonly used programs in both speed and compression ratio, and are to be preferred when decompression must be carried out on-line, inside a main program, or when a custom-made compression algorithm is needed.
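A minimal sketch of the correlation idea described above, under strong simplifying assumptions: each pixel is predicted from its left neighbour with a single coefficient fitted on one band, the same coefficient is reused for the other bands, and the prediction residuals are quantized to 4-bit symbols. This illustrates the principle only; it is not the authors' algorithm.

```python
import numpy as np

def fit_left_neighbour_coeff(band: np.ndarray) -> float:
    """Least-squares coefficient a such that pixel ~= a * left_neighbour, fitted on one band."""
    x = band[:, :-1].ravel()
    y = band[:, 1:].ravel()
    return float((x @ y) / (x @ x))

def quantized_residuals(band: np.ndarray, a: float, step: float) -> np.ndarray:
    """Predict each pixel from its left neighbour with the shared coefficient
    and quantize the prediction residual to a 4-bit symbol (16 levels)."""
    pred = np.zeros_like(band)
    pred[:, 0] = band[:, 0]          # first column would be sent verbatim
    pred[:, 1:] = a * band[:, :-1]
    return np.clip(np.round((band - pred) / step), -8, 7).astype(np.int8)

# Synthetic 16-bit-style cube: three bands that are scaled versions of one smooth scene,
# so the coefficient fitted on band 0 transfers reasonably well to the other bands.
yy, xx = np.mgrid[0:64, 0:64]
scene = 2000 + 1000 * np.sin(xx / 10.0) + 500 * np.cos(yy / 7.0)
cube = np.stack([scene, 0.8 * scene, 1.2 * scene])

a = fit_left_neighbour_coeff(cube[0])
for b in range(cube.shape[0]):
    q = quantized_residuals(cube[b], a, step=64.0)
    print(f"band {b}: shared coefficient {a:.3f}, residual symbols span [{q.min()}, {q.max()}]")
```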


The Header Compression Scheme for Real-Time Multimedia Service Data in All IP Network (All IP 네트워크에서 실시간 멀티미디어 서비스 데이터를 위한 헤더 압축 기술)

  • Choi, Sang-Ho;Ho, Kwang-Chun;Kim, Yung-Kwon
    • Journal of IKEEE / v.5 no.1 s.8 / pp.8-15 / 2001
  • This paper reviews the IETF-based requirements for IP/UDP/RTP header compression raised in the 3GPP2 All IP Ad Hoc Meeting and the protocol stacks of the next-generation mobile station in the All IP network, for real-time applications such as Voice over IP (VoIP) multimedia services based on 3GPP2 3G cdma2000. Frames for the various protocols expected in the All IP network Mobile Station (MS) are explained with several figures, including bit-for-bit notation of the header format based on the IETF draft of the Robust Header Compression (ROHC) Working Group. In particular, the paper discusses problems of the IS-707 Radio Link Protocol (RLP) for header compression, which is expected to be modified in the All IP network MS's medium access layer to accommodate real-time packet data service [1]. Since PPP also has many problems with header compression and mobility in MS protocol stacks for the 3G cdma2000 packet data network based on Mobile IP (PN-4286) [2], we also introduce the problems of and solutions for PPP header compression. Finally, we suggest guidelines for All IP network MS header compression regarding the expected protocol stacks, radio resource efficiency, and performance.
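The gain that makes IP/UDP/RTP header compression worthwhile comes from the fact that most header fields are static within a flow (addresses, ports, SSRC) while the rest change by small, predictable amounts (sequence number, timestamp). The toy sketch below illustrates that static/dynamic split; it is a conceptual illustration only and does not reproduce ROHC packet formats, context identifiers, or CRCs.

```python
from dataclasses import dataclass

@dataclass
class RtpFlowHeader:
    # Static per flow: these never change once the flow is established.
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    ssrc: int
    # Dynamic per packet: these change by small, predictable increments.
    seq: int
    timestamp: int

class ToyHeaderCompressor:
    """First packet of a flow carries the full context ('IR' in ROHC terms);
    subsequent packets carry only the increments of the dynamic fields."""

    def __init__(self):
        self.context = None

    def compress(self, hdr: RtpFlowHeader):
        if self.context is None:
            self.context = hdr
            return ("IR", hdr)                     # initialization/refresh: full header
        compact = ("CO", hdr.seq - self.context.seq,
                         hdr.timestamp - self.context.timestamp)
        self.context = hdr
        return compact                             # compressed: two small deltas

comp = ToyHeaderCompressor()
pkt1 = RtpFlowHeader("10.0.0.1", "10.0.0.2", 5004, 5004, 0xCAFE, seq=100, timestamp=16000)
pkt2 = RtpFlowHeader("10.0.0.1", "10.0.0.2", 5004, 5004, 0xCAFE, seq=101, timestamp=16160)
print(comp.compress(pkt1))   # full context for the first packet
print(comp.compress(pkt2))   # ('CO', 1, 160)
```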


SHVC-based V-PCC Content ISOBMFF Encapsulation and DASH Configuration Method (SHVC 기반 V-PCC 콘텐츠 ISOBMFF 캡슐화 및 DASH 구성 방안)

  • Nam, Kwijung;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.27 no.4 / pp.548-560 / 2022
  • Video-based Point Cloud Compression (V-PCC) is one of the methods for compressing point clouds, and it is highly efficient for dynamic point clouds with motion because it compresses point cloud data using an existing video codec. Accordingly, V-PCC is drawing attention as a core technology for immersive content services such as AR/VR. To serve V-PCC content effectively through a media streaming platform, it must be encapsulated in the existing media file format, the ISO Base Media File Format (ISOBMFF). However, to serve it through an adaptive streaming platform such as Dynamic Adaptive Streaming over HTTP (DASH), V-PCC content must be encoded at various qualities and stored on the server, which places a far greater burden on the encoder and the server than existing 2D media does. One way to address this problem is to build the streaming platform on content obtained by encoding the V-PCC content with SHVC. Therefore, this paper encapsulates the SHVC-based V-PCC bitstream into ISOBMFF suitable for the DASH service, proposes a configuration method for serving it effectively, and confirms the approach through verification experiments.
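As a sketch of what the DASH side of such a configuration might look like, the fragment below builds a minimal MPD in which an enhancement-layer representation declares its dependency on the base layer through the standard dependencyId attribute. The ids, bandwidths, durations, and file names are hypothetical illustrations, not values from the paper.

```python
import xml.etree.ElementTree as ET

# Minimal two-representation MPD skeleton: a base layer plus an enhancement layer
# that depends on it (dependencyId is the standard DASH mechanism for layered coding).
mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "type": "static",
    "mediaPresentationDuration": "PT30S",
    "profiles": "urn:mpeg:dash:profile:isoff-on-demand:2011",
})
period = ET.SubElement(mpd, "Period")
aset = ET.SubElement(period, "AdaptationSet", {"mimeType": "video/mp4"})

base = ET.SubElement(aset, "Representation",
                     {"id": "vpcc-base", "bandwidth": "8000000"})
ET.SubElement(base, "BaseURL").text = "vpcc_base.mp4"          # hypothetical file name

enh = ET.SubElement(aset, "Representation",
                    {"id": "vpcc-enh", "bandwidth": "16000000",
                     "dependencyId": "vpcc-base"})              # decoded on top of the base layer
ET.SubElement(enh, "BaseURL").text = "vpcc_enh.mp4"             # hypothetical file name

print(ET.tostring(mpd, encoding="unicode"))
```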

Huffman Code Design and PSIP Structure of Hangul Data for Digital Broadcasting (디지털 방송용 한글 허프만 부호 설계 및 PSIP 구조)

  • 황재정;진경식;한학수;최준영;이진환
    • Journal of Broadcast Engineering / v.6 no.1 / pp.98-107 / 2001
  • In this paper we derive an optimal Huffman code set with escape coding that maximizes coding efficiency for Hangul text data. Hangul can be represented in the standard Wansung or Unicode format, and we generate a set of Huffman codes for both. The current Korean digital TV standard does not define a Hangul compression algorithm, which may lead to a serious data-rate problem for the digital data broadcasting system; generating the optimal Huffman code set solves this data transmission problem. A relevant PSIP structure for the digital broadcasting standard is also proposed. As a result, characters whose probability is less than 0.0043 are escape coded, yielding an optimum compression efficiency of 46%.
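A minimal sketch of Huffman coding with an escape symbol, in the spirit of the scheme described above: characters whose probability falls below a threshold are grouped under a single ESC symbol (and would be transmitted verbatim after the ESC codeword), while frequent characters receive their own Huffman codewords. The threshold 0.0043 is taken from the abstract; the symbol probabilities are illustrative, not real Hangul statistics.

```python
import heapq

ESC = "<ESC>"

def huffman_code(probs: dict) -> dict:
    """Build a Huffman code book {symbol: bitstring} from symbol probabilities."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)                      # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)
        p1, _, code1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

def escape_coded_book(probs: dict, threshold: float = 0.0043) -> dict:
    """Frequent symbols keep their own codewords; rare ones share a single ESC codeword."""
    frequent = {s: p for s, p in probs.items() if p >= threshold}
    rare_mass = sum(p for s, p in probs.items() if p < threshold)
    if rare_mass > 0:
        frequent[ESC] = rare_mass   # rare characters are sent as ESC + a fixed-length raw code
    return huffman_code(frequent)

# Tiny illustrative distribution over a handful of syllables (not real Hangul statistics).
probs = {"가": 0.32, "이": 0.27, "는": 0.21, "다": 0.19, "궯": 0.0042, "뷁": 0.0038}
book = escape_coded_book(probs)
for sym, code in sorted(book.items(), key=lambda kv: len(kv[1])):
    print(sym, code)
```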
