Title/Summary/Keyword: encoding


Discriminative and Non-User Specific Binary Biometric Representation via Linearly-Separable SubCode Encoding-based Discretization

  • Lim, Meng-Hui;Teoh, Andrew Beng Jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.2 / pp.374-388 / 2011
  • Biometric discretization is a process of transforming the continuous biometric features of an identity into a binary bit string. This paper focuses on improving global discretization, a discretization method that does not rely on user-specific information when extracting the bit string, which is important in applications that prioritize strong security and strong privacy protection. In particular, we demonstrate how the actual performance of a global discretization scheme can be further improved by incorporating a global discriminative feature selection method and a Linearly Separable Subcode (LSSC)-based encoding technique. In addition, we examine a number of discriminative feature selection measures that can reliably be used for such discretization. Lastly, encouraging empirical results demonstrate the feasibility of our approach.
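
  • A minimal sketch of the encoding idea, assuming LSSC behaves like a unary/thermometer code in which interval index d of S quantization intervals maps to an (S-1)-bit string whose first d bits are 1, so the Hamming distance between two codewords equals their index difference; the boundary values below are illustrative, not from the paper.

    # Hedged sketch: global (non-user-specific) discretization followed by an
    # LSSC-style unary encoding of the interval index.
    def lssc_encode(index: int, num_intervals: int) -> str:
        return '1' * index + '0' * (num_intervals - 1 - index)

    def discretize(feature: float, boundaries: list[float]) -> int:
        """Map a continuous feature to an interval index using globally
        shared boundaries."""
        return sum(feature >= b for b in boundaries)

    boundaries = [-1.0, 0.0, 1.0]        # illustrative: 4 intervals, 3-bit codes
    for x in (-1.5, 0.3, 2.0):
        d = discretize(x, boundaries)
        print(x, d, lssc_encode(d, len(boundaries) + 1))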

Investigation of Relationship between Reflection Resonance and Applied Current Density in Bragg Photonic Crystal

  • Kim, Bumseok
    • Journal of Integrative Natural Science / v.5 no.1 / pp.27-31 / 2012
  • The relationship between reflection resonance and applied current density in a Bragg photonic crystal has been investigated. Multiple-bit encodings of distributed Bragg reflector structures were prepared by electrochemical etching of crystalline silicon using various square-wave current densities. Optical characterization of the multi-encoded distributed Bragg reflectors on porous silicon was carried out with an Ocean Optics 2000 spectrometer to explore possible applications of multiple-bit encoding, such as multiplexed assays and chemical sensors. The morphology and cross-sectional structure of the multi-encoded distributed Bragg reflectors were investigated by field-emission scanning electron microscopy.

A Context-based Fast Encoding Quad Tree Plus Binary Tree (QTBT) Block Structure Partition

  • Marzuki, Ismail;Choi, Hansol;Sim, Donggyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.175-177 / 2018
  • This paper proposes an algorithm to speed up the quad tree plus binary tree (QTBT) block-structure partitioning in the Joint Exploration Test Model (JEM) encoder. The proposed fast QTBT partitioning employs three spatially neighboring coded blocks, namely the left, top-left, and top neighbors of the current block, to prune the QTBT block-structure search early. The proposed algorithm is based on the statistical similarity of those spatially neighboring blocks, such as their block depths and coded block types, which are coded with overlapped block motion compensation (OBMC) and adaptive multiple transform (AMT). The experimental results demonstrate about a 30% encoding-time reduction with a 1.3% BD-rate loss on average compared to the anchor JEM-7.1 software under the random access configuration.
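
  • A hedged sketch of how such a neighbor-based early termination could be organized; the decision rule (prune deeper partitions when all available neighbors are coded at the same or shallower depth with the same coding type) and the block fields are illustrative assumptions, not the exact JEM-7.1 logic.

    # Illustrative early-termination check for QTBT partitioning based on the
    # left, top-left, and top neighboring coded blocks.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CodedBlock:
        depth: int          # QTBT depth of the already-coded neighbor
        mode: str           # e.g. "OBMC" or "AMT"

    def terminate_qtbt_early(current_depth: int,
                             left: Optional[CodedBlock],
                             top_left: Optional[CodedBlock],
                             top: Optional[CodedBlock]) -> bool:
        neighbors = [n for n in (left, top_left, top) if n is not None]
        if not neighbors:
            return False                              # no context, keep searching
        same_mode = len({n.mode for n in neighbors}) == 1
        shallow = all(n.depth <= current_depth for n in neighbors)
        return same_mode and shallow                  # prune deeper partitions

    print(terminate_qtbt_early(2, CodedBlock(2, "OBMC"),
                               CodedBlock(1, "OBMC"), CodedBlock(2, "OBMC")))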


Multi-Layer Perceptron Based Ternary Tree Partitioning Decision Method for Versatile Video Coding (다목적 비디오 부/복호화를 위한 다층 퍼셉트론 기반 삼항 트리 분할 결정 방법)

  • Lee, Taesik;Jun, Dongsan
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.783-792 / 2022
  • Versatile Video Coding (VVC) is the latest video coding standard, developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) in 2020. Although VVC provides powerful coding performance, it requires tremendous computational complexity to determine the optimal block structures during the encoding process. In this paper, we propose a fast ternary tree decision method using two neural networks based on the multi-layer perceptron structure, named STH-NN and STV-NN, each taking a 7-node input vector. After training, the STH-NN and STV-NN achieved accuracies of 85% and 91%, respectively. Experimental results show that the proposed method reduces the encoding complexity by up to 25% with negligible coding loss compared to the VVC test model (VTM).
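
  • A minimal sketch of a 7-input multi-layer perceptron used as a split / no-split classifier; the hidden width, the untrained random weights, and the feature list are illustrative assumptions (the paper trains two separate models, STH-NN for horizontal and STV-NN for vertical ternary splits).

    # Illustrative 7-input MLP producing a ternary-split probability.
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_forward(x, w1, b1, w2, b2):
        h = np.maximum(w1 @ x + b1, 0.0)       # ReLU hidden layer
        z = float(w2 @ h + b2)
        return 1.0 / (1.0 + np.exp(-z))        # split probability

    # Untrained, illustrative parameters: 7 inputs -> 16 hidden -> 1 output
    w1, b1 = rng.normal(size=(16, 7)), np.zeros(16)
    w2, b2 = rng.normal(size=16), 0.0

    features = rng.normal(size=7)              # e.g. block size, depth, RD costs
    p_split = mlp_forward(features, w1, b1, w2, b2)
    print(p_split, "-> skip ternary split" if p_split < 0.5 else "-> test ternary split")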

Fast Affine Motion Estimation Method for Versatile Video Coding (다목적 비디오 부호화를 위한 고속 어파인 움직임 예측 방법)

  • Jung, Seong-Won;Jun, Dong-San
    • Journal of the Korean Society of Industry Convergence / v.25 no.4_2 / pp.707-714 / 2022
  • Versatile Video Coding (VVC) is the most recent video coding standard, developed by the Joint Video Experts Team (JVET). It significantly improves coding performance compared to the previous standard, High Efficiency Video Coding (HEVC). Although VVC achieves powerful coding performance, its encoder requires tremendous computational complexity. In particular, affine motion compensation (AMC) adopts block-based 4-parameter or 6-parameter affine prediction to overcome the limits of the translational motion model, at the cost of higher encoding complexity. In this paper, we propose an early termination of AMC that determines whether the affine motion estimation for AMC is performed or not. Experimental results show that the proposed method reduces the encoding complexity of affine motion estimation (AME) by up to 16% compared to the VVC Test Model 17 (VTM17).
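
  • A hedged sketch of what an early-termination check before affine motion estimation could look like; the specific rule (skip AME when the best translational cost is already small and no neighbor chose affine) and the threshold are illustrative assumptions, not the exact criterion of the paper.

    # Illustrative gate deciding whether to run 4/6-parameter affine ME at all.
    def run_affine_me(translational_cost: float,
                      skip_threshold: float,
                      any_neighbor_affine: bool) -> bool:
        if translational_cost < skip_threshold and not any_neighbor_affine:
            return False              # early terminate: AME unlikely to help
        return True                   # otherwise perform affine motion estimation

    print(run_affine_me(1200.0, 1500.0, False))   # -> False (skip AME)
    print(run_affine_me(3000.0, 1500.0, False))   # -> True  (run AME)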

A Study of Big Time Series Data Compression based on CNN Algorithm (CNN 기반 대용량 시계열 데이터 압축 기법연구)

  • Sang-Ho Hwang;Sungho Kim;Sung Jae Kim;Tae Geun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.1 / pp.1-7 / 2023
  • In this paper, we implement a lossless compression technique for time-series data generated by IoT (Internet of Things) devices in order to reduce disk space. The proposed technique reduces the size of the encoded data by selectively applying CNN (Convolutional Neural Network) prediction or delta encoding, depending on the situation, within a forecasting algorithm that predicts the time-series data. In addition, the proposed technique sequentially performs zigzag encoding, splitting, and bit packing to increase the compression ratio. We show that the proposed compression method achieves a compression ratio of up to 1.60 relative to the original data.
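
  • A minimal sketch of the residual pipeline named above, with delta encoding of the prediction error, zigzag mapping to unsigned integers, and fixed-width bit packing per block; the CNN/forecasting predictor is replaced here by a trivial previous-value predictor, and the block layout is an illustrative assumption.

    # Illustrative delta -> zigzag -> bit-packing pipeline for one block.
    def zigzag(n: int) -> int:
        return (n << 1) if n >= 0 else ((-n) << 1) - 1

    def compress_block(samples: list[int]) -> bytes:
        prev = 0
        residuals = []
        for s in samples:
            residuals.append(zigzag(s - prev))   # delta + zigzag encoding
            prev = s
        width = max(r.bit_length() for r in residuals) or 1
        packed = 0
        for r in residuals:
            packed = (packed << width) | r       # bit packing
        header = bytes([width, len(residuals)])  # per-block metadata
        n_bytes = (width * len(residuals) + 7) // 8
        return header + packed.to_bytes(n_bytes, "big")

    data = [100, 101, 103, 102, 105, 105, 104]
    print(len(compress_block(data)), "bytes vs", len(data) * 8, "bytes as raw int64")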

Performance Comparison According to Image Generation Method in NIDS (Network Intrusion Detection System) using CNN

  • Sang Hyun, Kim
    • International journal of advanced smart convergence / v.12 no.2 / pp.67-75 / 2023
  • Recently, many studies have been conducted on ways to utilize AI technology in NIDS (Network Intrusion Detection Systems). In particular, CNN-based NIDS generally shows excellent performance. A CNN fundamentally exploits the correlation between pixels in an image, so the method of generating the image is very important. In this paper, we compare the performance of CNN-based NIDS according to the image generation method. The image generation methods used in the experiment are a direct conversion method and a one-hot encoding based method. The experiments show that NIDS performance differs depending on the image generation method. In particular, the method proposed in this paper, which combines the direct conversion method and the one-hot encoding based method, showed the best performance.
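
  • A hedged sketch of the two image-generation styles compared above: directly converting scaled numeric features into pixel rows, and converting categorical features with one-hot encoding; the feature names, vocabularies, and image width are illustrative assumptions rather than the paper's exact layout.

    # Illustrative construction of a small 2D "image" from network-flow features.
    import numpy as np

    def direct_row(numeric, width):
        row = np.zeros(width)
        vals = np.asarray(numeric, dtype=float)
        row[:len(vals)] = vals / (vals.max() + 1e-9)          # scale to [0, 1]
        return row

    def one_hot_row(value, vocabulary, width):
        row = np.zeros(width)
        row[vocabulary.index(value)] = 1.0                    # one-hot position
        return row

    width = 8
    image = np.stack([
        direct_row([443.0, 1500.0, 12.0], width),             # numeric features
        one_hot_row("tcp", ["tcp", "udp", "icmp"], width),    # protocol one-hot
        one_hot_row("http", ["http", "dns", "ssh", "ftp"], width),
    ])
    print(image.shape)    # rows of the image fed to the CNN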

Real-Time Applications of Video Compression in the Field of Medical Environments

  • K. Siva Kumar;P. Bindhu Madhavi;K. Janaki
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.73-76 / 2023
  • We introduce DCNN and DRAE approaches for the compression of medical videos. There is an increasing need for medical video compression in order to decrease file size and storage requirements. Using a lossy compression technique, a higher compression ratio can be attained, but information is lost and diagnostic mistakes may follow. This leads to the requirement of storing medical video in a lossless format. The aim of utilizing a lossless compression tool is to maximize compression, because traditional lossless compression techniques yield poor compression ratios. The temporal and spatial redundancy present in video sequences can be successfully exploited by the proposed DCNN and DRAE encoding. This paper describes the lossless encoding mode and shows how a compression ratio greater than 2 (2:1) can be achieved.

Thermodynamics-Based Weight Encoding Methods for Improving Reliability of Biomolecular Perceptrons (생체분자 퍼셉트론의 신뢰성 향상을 위한 열역학 기반 가중치 코딩 방법)

  • Lim, Hee-Woong;Yoo, Suk-I.;Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.34 no.12 / pp.1056-1064 / 2007
  • Biomolecular computing is a new computing paradigm that uses biomolecules such as DNA for information representation and processing. The huge number of molecules in a small volume and their innate massive parallelism have inspired novel computation methods, and various computation models and molecular algorithms have been developed for problem solving. At the same time, the use of biomolecules for information processing makes DNA computing applicable to biological problems; it has potential as an analysis tool for biochemical information such as gene expression patterns. In this context, a DNA computing-based model of a biomolecular perceptron has previously been proposed together with the results of its experimental implementation. The weight encoding and weighted-sum operations, which are the main components of a biomolecular perceptron, are based on competitive hybridization reactions between the input molecules and weight-encoding probe molecules. However, thermodynamic symmetry is assumed in the competitive hybridizations, so the weight representation can contain errors depending on the probe species in use. Here we suggest a generalized model of hybridization reactions that considers the asymmetric thermodynamics of competitive hybridizations, and we present a weight encoding method for the reliable implementation of a biomolecular perceptron based on this model. We compare the accuracy of our weight encoding method with that of the previous one via computer simulations and present the conditions on probe composition required to satisfy a given error limit.
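
  • A hedged sketch of the concentration-correction idea: if probe i binds its input with relative equilibrium constant K_i, a simple competitive model gives a hybridized fraction of K_i*c_i / sum_j(K_j*c_j), so the probe concentrations c_i can be rescaled to realize the intended weights despite the asymmetry; this simplified model and the constants below are illustrative assumptions, not the paper's thermodynamic derivation.

    # Illustrative weight encoding under asymmetric competitive hybridization.
    def probe_concentrations(weights, K):
        raw = [w / k for w, k in zip(weights, K)]   # undo the per-probe K bias
        total = sum(raw)
        return [r / total for r in raw]

    def hybridized_fractions(conc, K):
        eff = [k * c for k, c in zip(K, conc)]      # effective binding strength
        total = sum(eff)
        return [e / total for e in eff]

    weights = [0.5, 0.3, 0.2]                       # intended perceptron weights
    K = [1.0, 2.0, 0.5]                             # asymmetric binding constants
    conc = probe_concentrations(weights, K)
    print(hybridized_fractions(conc, K))            # recovers ~[0.5, 0.3, 0.2]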

Noise-Added Image Encoding by the EZW Algorithm (EZW를 이용한 잡음 영상의 부호화)

  • 김형준;김재필;김향진;김영애;임재윤
    • Proceedings of the IEEK Conference / 2000.06d / pp.27-30 / 2000
  • In this paper, we propose a new method for denoising while performing image compression. Usually, to compress a noisy image, a denoising step must be carried out before encoding. However, the proposed method has an embedded character, so an additional noise eliminator is not needed. In the SAQ (successive approximation quantization) step, the embedded signal is quantized in more detail while the remainder is suppressed. Compared with the conventional method, we obtain enhanced image quality.
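
  • A hedged sketch of the successive approximation quantization (SAQ) behaviour that gives the method its embedded, noise-suppressing character: coefficients above the current threshold are refined pass by pass, while small noise-like coefficients stay below the threshold and remain zero; this illustrates the thresholding only and is not the full EZW zerotree coder.

    # Illustrative SAQ refinement passes over wavelet-like coefficients.
    def saq_passes(coeffs, num_passes):
        t = max(abs(c) for c in coeffs) / 2.0
        recon = [0.0] * len(coeffs)
        for _ in range(num_passes):
            for i, c in enumerate(coeffs):
                if abs(c - recon[i]) >= t:
                    recon[i] += t if c > recon[i] else -t   # refine by +/- t
            t /= 2.0                                        # halve threshold
        return recon

    coeffs = [90.0, -3.0, 41.0, 2.5, -60.0, 1.0]   # large signal + small noise
    print(saq_passes(coeffs, 3))                    # noise-like entries stay 0.0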
