• Title/Summary/Keyword: low-complexity domain


Fast Inverse Transform Considering Multiplications (곱셈 연산을 고려한 고속 역변환 방법)

  • Hyeonju Song;Yung-Lyul Lee
    • Journal of Broadcast Engineering / v.28 no.1 / pp.100-108 / 2023
  • In hybrid block-based video coding, transform coding converts spatial-domain residual signals into frequency-domain data and concentrates energy in the low-frequency band, which yields high compression efficiency in entropy coding. The state-of-the-art video coding standard VVC (Versatile Video Coding) uses DCT-2 (Discrete Cosine Transform type 2), DST-7 (Discrete Sine Transform type 7), and DCT-8 (Discrete Cosine Transform type 8) as primary transforms. In this paper, noting that DCT-2, DST-7, and DCT-8 are all linear transforms, we propose an inverse transform method that exploits this linearity to reduce the number of multiplications in the inverse transform. Compared to VTM-8.2, the proposed method reduces encoding and decoding time by an average of 26% and 15% in the AI configuration and 4% and 10% in the RA configuration, with no increase in bitrate.
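
The savings described above come from the fact that an inverse transform is a linear map, so the reconstructed block is just a weighted sum of basis columns and zero-valued (quantized-out) coefficients need no multiplications at all. The NumPy sketch below illustrates this pruning idea on a small orthonormal DCT-2; the 4-point floating-point kernel and the zero-skipping strategy are illustrative assumptions, not the integer kernels or the exact method used in VVC/VTM.

```python
import numpy as np

def inverse_transform_naive(T_inv, coeffs):
    """Full matrix-vector product: N*N multiplications."""
    return T_inv @ coeffs

def inverse_transform_pruned(T_inv, coeffs):
    """Exploit linearity: the result is a weighted sum of basis columns, so
    columns whose coefficient is zero contribute nothing and their
    multiplications can be skipped entirely."""
    out = np.zeros(T_inv.shape[0])
    for k, c in enumerate(coeffs):
        if c != 0:                      # typical after quantization
            out += c * T_inv[:, k]      # N multiplications per nonzero coefficient
    return out

# Illustrative 4-point orthonormal DCT-2, not VVC's integer kernels.
N = 4
T = np.array([[np.cos(np.pi * (2 * n + 1) * k / (2 * N)) for n in range(N)]
              for k in range(N)])
T[0] *= 1 / np.sqrt(2)
T *= np.sqrt(2 / N)
T_inv = T.T                             # orthonormal: inverse = transpose

coeffs = np.array([10.0, 0.0, -3.0, 0.0])   # sparse, as is common after quantization
assert np.allclose(inverse_transform_naive(T_inv, coeffs),
                   inverse_transform_pruned(T_inv, coeffs))
```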

An adaptive watermarking for remote sensing images based on maximum entropy and discrete wavelet transformation

  • Yang Hua;Xu Xi;Chengyi Qu;Jinglong Du;Maofeng Weng;Bao Ye
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.192-210 / 2024
  • Most frequency-domain remote sensing image watermarking algorithms embed watermarks at random locations, which has a negative impact on watermark invisibility. In this study, we propose an adaptive watermarking scheme for remote sensing images that uses information complexity to select where to embed the watermark, improving invisibility without affecting robustness. The scheme converts the remote sensing image from RGB to YCbCr color space, performs a two-level DWT on the luminance channel Y, and selects the high-frequency coefficients of the low-frequency component (HHY2) as the embedding domain. To achieve adaptive embedding, HHY2 is divided into 8×8 blocks, the entropy of each sub-block is computed, and the block with the maximum entropy is chosen as the embedding location. During the embedding phase, the watermark image is also decomposed by a two-level DWT, and the resulting high-frequency coefficients (HHW2) are embedded into the maximum-entropy block using α-blending. Experimental results show that the watermarked remote sensing images have high fidelity, indicating good invisibility. Under varying degrees of geometric, cropping, filtering, and noise attacks, the proposed method consistently extracts highly identifiable watermark images and remains stable regardless of attack intensity.
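
The embedding pipeline in the abstract (RGB to YCbCr luminance, two-level DWT, maximum-entropy 8×8 block of the level-2 HH subband, α-blending) can be sketched with NumPy and PyWavelets as below. The wavelet ('haar'), the blending rule, and the blending factor alpha are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np
import pywt

def block_entropy(block, bins=32):
    """Shannon entropy of a coefficient block (histogram-based)."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def embed(rgb, watermark, alpha=0.05):
    # RGB -> luminance Y (ITU-R BT.601 weights)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    # Two-level DWT of Y; coeffs[1][2] is the HH (diagonal) subband at level 2
    coeffs = pywt.wavedec2(y, 'haar', level=2)
    hh2 = coeffs[1][2].copy()

    # Pick the 8x8 block of HH2 with maximum entropy
    best, best_pos = -1.0, (0, 0)
    for r in range(0, hh2.shape[0] - 7, 8):
        for c in range(0, hh2.shape[1] - 7, 8):
            e = block_entropy(hh2[r:r + 8, c:c + 8])
            if e > best:
                best, best_pos = e, (r, c)

    # Two-level DWT of the watermark; embed its HH2 by alpha-blending
    w_hh2 = pywt.wavedec2(watermark, 'haar', level=2)[1][2]
    r, c = best_pos
    hh2[r:r + 8, c:c + 8] = (1 - alpha) * hh2[r:r + 8, c:c + 8] + alpha * w_hh2[:8, :8]

    coeffs[1] = (coeffs[1][0], coeffs[1][1], hh2)
    return pywt.waverec2(coeffs, 'haar'), best_pos

# Hypothetical usage with random arrays standing in for real images
rgb = np.random.rand(256, 256, 3)
wm = np.random.rand(32, 32)
y_marked, pos = embed(rgb, wm)
```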

An Interpolation Filter Design for the Full Digital Audio Amplifier (완전 디지털 오디오 증폭기를 위한 보간 필터 설계)

  • Heo, Seo-Weon;Sung, Hyuk-Kee
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.2 / pp.253-258 / 2012
  • A computationally efficient interpolation filter with low distortion is a key component for realizing naturally sampled pulse width modulation (NPWM) in the digital domain. To obtain such a filter, we propose a novel design based on the recently proposed modified Farrow filter. The proposed filter shows better pass-band distortion performance while maintaining a degree of complexity similar to that of the conventional Lagrange interpolation filter. We achieve a maximum distortion deviation of 10^-3 dB over the 20-kHz audible frequency range and reduce distortion to about 1/6 of that of the Lagrange interpolation filter.
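
A Farrow-structure interpolator evaluates a polynomial in the fractional delay whose coefficients come from a small bank of fixed FIR sub-filters, which is what keeps the per-sample multiplication count low. The sketch below uses plain cubic Lagrange coefficients in Farrow form; the paper's modified Farrow design differs, so this is only a generic illustration under that assumption.

```python
import numpy as np

# Farrow structure for cubic Lagrange interpolation over x[n-1..n+2].
# Row m gives the FIR sub-filter producing the coefficient of mu**m.
FARROW_C = np.array([
    [0.0,   1.0,  0.0,  0.0],   # mu^0
    [-1/3, -1/2,  1.0, -1/6],   # mu^1
    [1/2,  -1.0,  1/2,  0.0],   # mu^2
    [-1/6,  1/2, -1/2,  1/6],   # mu^3
])

def farrow_interpolate(x, n, mu):
    """Signal value at fractional position n + mu (0 <= mu < 1).

    Each sub-filter output needs 4 multiplications and the Horner evaluation
    in mu needs 3 more, which is the structural advantage of the Farrow form
    over recomputing Lagrange weights for every output sample."""
    taps = x[n - 1:n + 3]                 # x[n-1], x[n], x[n+1], x[n+2]
    branch = FARROW_C @ taps              # sub-filter outputs c0..c3
    # Horner's rule in the fractional delay mu
    return ((branch[3] * mu + branch[2]) * mu + branch[1]) * mu + branch[0]

# Hypothetical usage: 8x oversampling of a sine for a PWM-style modulator
fs, L = 48_000, 8
t = np.arange(0, 256) / fs
x = np.sin(2 * np.pi * 1_000 * t)
upsampled = [farrow_interpolate(x, n, m / L)
             for n in range(1, len(x) - 2) for m in range(L)]
```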

Resilient Reduced-State Resource Reservation

  • Csaszar Andras;Takacs Attila;Szabo Robert;Henk Tamas
    • Journal of Communications and Networks / v.7 no.4 / pp.509-524 / 2005
  • Due to the strict requirements of emerging applications, per-flow admission control is gaining importance. One way to implement per-flow admission control is an on-path resource reservation protocol, where the admission decision is made hop by hop after a new flow request arrives at the network boundary. The Next Steps in Signaling (NSIS) working group of the Internet Engineering Task Force (IETF) is standardising such an on-path signaling protocol. One of the reservation methods considered by NSIS is the reduced-state mode, which, in keeping with the differentiated services (DiffServ) concept, allows only per-class states in the interior nodes of a domain. Although not keeping per-flow states in interior nodes has clear benefits, such as scalability and low complexity, handling re-routed flows without per-flow states, e.g., after a failure, is a demanding and highly non-trivial task. To be applicable in carrier-grade networks, the protocol needs to be resilient in this situation. In this article, we explain the consequences of a route failover for resource reservation protocols: severe congestion and incorrect admission decisions due to outdated reservation states. We set out requirements that handling solutions need to fulfill and propose extensions to reduced-state protocols accordingly. We show with a set of simulated scenarios that, with the proposed solutions, reduced-state protocols can handle re-routed flows practically as quickly and robustly as stateful protocols.
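
A toy sketch of the reduced-state idea follows: an interior node keeps one aggregate reservation per DiffServ class and nothing per flow, which is exactly why it cannot tell which reservations belong to flows that were re-routed away (stale, inflated counters on the old path) or re-routed in (traffic arriving before any admission decision on the new path). The class names, units, and API below are hypothetical and are not the NSIS or RMD protocol logic.

```python
from dataclasses import dataclass, field

@dataclass
class InteriorNode:
    """Reduced-state interior node: one aggregate reservation per DiffServ
    class, no per-flow entries (hypothetical sketch, not the NSIS spec)."""
    capacity: dict                                  # class -> capacity (Mbps)
    reserved: dict = field(default_factory=dict)    # class -> reserved (Mbps)

    def admit(self, traffic_class: str, bandwidth: float) -> bool:
        """Per-class admission check performed when a reservation message passes."""
        used = self.reserved.get(traffic_class, 0.0)
        if used + bandwidth <= self.capacity[traffic_class]:
            self.reserved[traffic_class] = used + bandwidth
            return True
        return False

    def release(self, traffic_class: str, bandwidth: float) -> None:
        self.reserved[traffic_class] = max(
            0.0, self.reserved.get(traffic_class, 0.0) - bandwidth)

# After a route failover, re-routed data arrives on the new path before any
# reservation message does, so this node's aggregate counter does not yet cover
# it (risk of severe congestion), while nodes on the old path keep stale,
# inflated counters that cause incorrect admission decisions.
node_on_new_path = InteriorNode(capacity={"EF": 10.0})
refresh_requests = [2.0] * 6          # 12 Mbps of re-routed flows re-signalling
print([node_on_new_path.admit("EF", bw) for bw in refresh_requests])
```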

An Adaptive Estimation Model for Propagation Errors Incurred by CD in FD-CD Transcoding (FD-CD 트랜스코딩기법에서 CD에 의한 전파 왜곡의 적응적 예측 모델)

  • Kim Jin-soo;Kim Jae-Gon
    • Journal of Korea Multimedia Society / v.7 no.11 / pp.1571-1579 / 2004
  • Recently, FD (frame dropping)-CD (coefficient dropping) transcoding has drawn attention mainly because of its low computational complexity and simple implementation. However, CD errors in the FD-CD transcoding scheme tend to propagate and significantly affect the quality of decoded images. In this paper, we derive the error characteristics incurred by CD operations and propose an effective estimation model that adaptively describes the propagation and accumulation of errors in the compressed domain. Furthermore, we apply the proposed model to distortion control, achieving a nearly constant distortion allocation among frames. Simulation results show that the proposed model is quite accurate in estimating the overall distortion and is effectively applied to distortion control over a range of sequences with varying scene types.
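
As a purely illustrative view of why CD errors matter more than their per-frame size suggests, the snippet below accumulates distortion across a predicted frame chain with a simple first-order leakage term: each frame inherits part of its reference frame's error through motion-compensated prediction and adds its own coefficient-dropping error. The leakage factor and error values are hypothetical and are not the estimation model proposed in the paper.

```python
def propagated_distortion(cd_errors, leakage=0.9):
    """cd_errors[i]: distortion newly introduced by CD in frame i (e.g., MSE).
    Returns the accumulated distortion seen at each frame of the chain."""
    totals, acc = [], 0.0
    for e in cd_errors:
        acc = leakage * acc + e      # inherited error + freshly dropped coefficients
        totals.append(acc)
    return totals

# Hypothetical per-frame CD errors for a short predicted chain
print(propagated_distortion([1.0, 0.5, 0.5, 0.0, 0.8]))
```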

Low-Complexity Distributed Algorithms for Uplink CoMP in Heterogeneous LTE Networks

  • Annavajjala, Ramesh
    • Journal of Communications and Networks / v.18 no.2 / pp.150-161 / 2016
  • Coordinated multi-point transmission (CoMP) techniques are being touted as enabling technologies for interference mitigation in next-generation heterogeneous wireless networks (HetNets). In this paper, we present a comparative performance study of uplink (UL) CoMP algorithms for 3GPP LTE HetNets. Focusing on a distributed and functionally split architecture, we consider six distinct UL-CoMP algorithms: (1) joint reception in the frequency domain (JRFD), (2) two-stage equalization (TSEQ), (3) log-likelihood ratio exchange (LLR-E), (4) symmetric TSEQ (S-TSEQ), (5) transport block selection diversity (TBSD), and (6) coordinated scheduling with adaptive interference mitigation (CS-AIM). JRFD, TSEQ, S-TSEQ, TBSD, and CS-AIM are the main contributions of this paper, and we quantify their relative performance via the post-processing signal-to-interference-plus-noise ratio distributions. We also compare the CoMP-specific front-haul rate requirements of all the schemes considered. Our results indicate that, with a linear minimum mean-square error receiver, JRFD and TSEQ have identical performance, whereas S-TSEQ relaxes the front-haul latency requirements while approaching the performance of TSEQ. Furthermore, in a HetNet environment, we find that CS-AIM provides an attractive alternative to TBSD and LLR-E with a significantly reduced CoMP-specific front-haul rate requirement.
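
For the joint-reception style of UL CoMP, the post-processing SINR of a linear MMSE receiver over the stacked antennas of all cooperating reception points can be computed as in the sketch below. The antenna counts, unit signal power, and interferer setup are assumptions for illustration; the paper's JRFD/TSEQ algorithms and their front-haul splits are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lmmse_post_sinr(h_desired, H_interf, noise_var):
    """Post-processing SINR of an LMMSE receiver that jointly processes the
    antennas of all cooperating reception points (channels simply stacked into
    one vector). Transmit powers are normalized to 1 for brevity."""
    n_rx = h_desired.shape[0]
    # Interference-plus-noise covariance seen by the stacked receiver
    R = noise_var * np.eye(n_rx, dtype=complex)
    for h_i in H_interf.T:
        R += np.outer(h_i, h_i.conj())
    # For an LMMSE receiver the post-processing SINR is h^H R^{-1} h
    return np.real(h_desired.conj() @ np.linalg.solve(R, h_desired))

# Hypothetical setup: 2 reception points x 2 antennas each = 4 stacked antennas,
# one desired user and two out-of-cluster interferers (Rayleigh channels).
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
H_i = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
print("post-processing SINR (dB):",
      10 * np.log10(lmmse_post_sinr(h, H_i, noise_var=0.1)))
```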

A New User Interface for Mobile Devices Based on Image Displacement (영상 이동변위 기반의 휴대 장치의 새로운 사용자 인터페이스)

  • Jin, Hong-Yik;Park, Sea-Nae;Sim, Dong-Gyu;NamKung, Jae-Chan
    • Journal of Korea Multimedia Society / v.11 no.4 / pp.454-461 / 2008
  • This paper presents a new input interface based on the displacement of a mobile device equipped with a camera. The mobile device captures consecutive images with its camera, and the displacement of the device is estimated in real time by computing the displacement between consecutive images. The proposed system extracts feature points with the SUSAN corner detector, which has low computational complexity. We build a Voronoi domain using the two-pass algorithm to match the extracted features. Finally, the displacement of the mobile device is estimated by calculating SAD values between two consecutive images. We evaluated the performance of the proposed algorithm on 1,500 images. The true-match accuracy of the proposed algorithm is 90%, and each image is processed in about 5 ms.
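
The SAD matching step can be illustrated with a minimal global-displacement search, shown below; the SUSAN corner detection and Voronoi-based feature matching described in the abstract are omitted, and the block size and search range are arbitrary choices.

```python
import numpy as np

def sad_displacement(prev, curr, block=16, search=8):
    """Estimate a global (dx, dy) shift between two grayscale frames by
    minimizing the sum of absolute differences (SAD) of a central block over a
    small search window. Only the SAD criterion of the paper is illustrated."""
    h, w = prev.shape
    cy, cx = h // 2, w // 2
    ref = prev[cy:cy + block, cx:cx + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[cy + dy:cy + dy + block,
                        cx + dx:cx + dx + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best

# Hypothetical usage: the current frame is the previous one shifted by (3, -2)
prev = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
curr = np.roll(np.roll(prev, -2, axis=0), 3, axis=1)
print(sad_displacement(prev, curr))   # expected (3, -2)
```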

An Efficient Machine Learning-based Text Summarization in the Malayalam Language

  • P Haroon, Rosna;Gafur M, Abdul;Nisha U, Barakkath
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1778-1799 / 2022
  • Automatic text summarization is a procedure that condenses a large document into a shorter text that retains the significant information. Malayalam is one of the more difficult languages used in certain areas of India, most commonly in Kerala and Lakshadweep. Natural language processing for Malayalam is relatively underdeveloped because of the complexity of the language as well as the scarcity of available resources. In this paper, we propose an approach to summarizing Malayalam documents by training a model based on the Support Vector Machine classification algorithm. Different features of the text are taken into account when training the model so that the system can output the most important information in the input text. The classifier assigns sentences to most important, important, average, and least significant classes, and based on this the system creates a summary of the input document. The user can select a compression ratio, and the system outputs that fraction of the document as the summary. Model performance is measured on different genres of Malayalam documents as well as documents from the same domain. The model is evaluated using the content evaluation measures precision, recall, F-score, and relative utility. The obtained precision and recall values show that the model is trustworthy and more relevant than the other summarizers compared.
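
The classification framing of the summarizer (score sentences with an SVM, then keep a fraction set by the compression ratio) can be sketched as below. The three toy features, the binary labels, and the English dummy sentences are stand-ins; the paper's Malayalam-specific feature set and four-class labeling are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

def sentence_features(sentences):
    """Toy per-sentence features: position, relative length, and overlap with
    the document's most frequent words."""
    words = [s.lower().split() for s in sentences]
    vocab, counts = np.unique(sum(words, []), return_counts=True)
    top = set(vocab[np.argsort(counts)[-10:]])
    feats = []
    for i, w in enumerate(words):
        feats.append([
            i / max(1, len(sentences) - 1),          # position in the document
            len(w) / max(len(x) for x in words),     # relative sentence length
            len(top & set(w)) / max(1, len(w)),      # keyword density
        ])
    return np.array(feats)

def summarize(sentences, importance_labels, compression=0.3):
    """Train an SVM on labeled sentences, then keep the top-scoring fraction
    given by the compression ratio (binary labels here instead of four classes)."""
    X = sentence_features(sentences)
    clf = SVC(kernel="rbf").fit(X, importance_labels)
    scores = clf.decision_function(X)            # higher = more "important"
    k = max(1, int(round(compression * len(sentences))))
    keep = sorted(np.argsort(scores)[-k:])       # keep original sentence order
    return [sentences[i] for i in keep]

# Hypothetical usage with dummy English sentences and 0/1 importance labels
doc = ["Transform coding compacts energy.", "The weather was pleasant.",
       "Entropy coding exploits the compacted energy.", "Lunch was served.",
       "Compression efficiency therefore improves."]
print(summarize(doc, [1, 0, 1, 0, 1], compression=0.4))
```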

Enhancing Robustness of Information Hiding Through Low-Density Parity-Check Codes

  • Yi, Yu;Lee, Moon-Ho;Kim, Ji-Hyun;Hwang, Gi-Yean
    • Journal of Broadcast Engineering / v.8 no.4 / pp.437-451 / 2003
  • With the rapid growth of Internet technologies and the wide availability of multimedia computing facilities, enforcing multimedia copyright protection has become an important issue. Digital watermarking is viewed as an effective way to deter content users from illegal distribution and has been studied intensively in recent years. However, when watermarked media is transmitted over channels modeled as additive white Gaussian noise (AWGN) channels, the watermark information is often corrupted by channel noise, producing a large number of errors. Many error-correcting codes have therefore been applied in digital watermarking systems to protect the embedded message from noise, such as BCH codes, Reed-Solomon (RS) codes, and Turbo codes. Recently, low-density parity-check (LDPC) codes were demonstrated to be good error-correcting codes that achieve near-Shannon-limit performance and outperform turbo codes with low decoding complexity. In this paper, in order to mitigate the channel conditions and improve the quality of the watermark, we propose applying LDPC codes to implement a fairly robust digital image watermarking system. The implemented watermarking system operates in the spectral domain, where a subset of the discrete wavelet transform (DWT) coefficients is modified by the watermark, and the original image is not needed during watermark extraction. The quality of the watermark is evaluated by taking into account the trade-off between the chip rate and the rate of the LDPC code. Extensive simulation results are presented, and they indicate that the quality of the watermark is greatly improved and that the proposed LDPC-based system is very robust to attacks.
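
The DWT-domain embedding side can be sketched as generic spread-spectrum watermarking of an already channel-coded bit sequence, with blind extraction by correlation, as below. The LDPC encoding and decoding are assumed to happen outside this snippet, and the subband choice, chip rate, and embedding strength are illustrative, not the paper's parameters.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)

def embed_bits(image, bits, strength=8.0, chips_per_bit=64):
    """Spread each (already LDPC-encoded) bit over `chips_per_bit` PN chips and
    add it to the level-2 HH coefficients. Generic spread-spectrum embedding,
    not the paper's exact scheme."""
    coeffs = pywt.wavedec2(image, 'db1', level=2)
    hh2 = coeffs[1][2].copy()
    flat = hh2.ravel()                                   # view into hh2
    pn = rng.choice([-1.0, 1.0], size=(len(bits), chips_per_bit))
    for i, b in enumerate(bits):
        sl = slice(i * chips_per_bit, (i + 1) * chips_per_bit)
        flat[sl] += strength * (2 * b - 1) * pn[i]       # antipodal bit on PN chips
    coeffs[1] = (coeffs[1][0], coeffs[1][1], hh2)
    return pywt.waverec2(coeffs, 'db1'), pn

def extract_soft(image, pn, chips_per_bit=64):
    """Blind extraction: correlate the received coefficients with the PN chips.
    The soft correlations would normally feed an LDPC decoder; here we just
    take their sign."""
    hh2 = pywt.wavedec2(image, 'db1', level=2)[1][2].ravel()
    return np.array([hh2[i * chips_per_bit:(i + 1) * chips_per_bit] @ pn[i]
                     for i in range(pn.shape[0])])

# Hypothetical usage: a random 'image', 16 codeword bits, an AWGN channel
img = rng.normal(128.0, 20.0, (128, 128))
bits = rng.integers(0, 2, 16)                            # stand-in for an LDPC codeword
marked, pn = embed_bits(img, bits)
noisy = marked + rng.normal(0.0, 2.0, marked.shape)
recovered = (extract_soft(noisy, pn) > 0).astype(int)
print("bit errors before LDPC decoding:", int(np.sum(recovered != bits)))
```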

A Research about Time Domain Estimation Method for Greenhouse Environmental Factors based on Artificial Intelligence (인공지능 기반 온실 환경인자의 시간영역 추정)

  • Lee, JungKyu;Oh, JongWoo;Cho, YongJin;Lee, Donghoon
    • Journal of Bio-Environment Control / v.29 no.3 / pp.277-284 / 2020
  • To increase the utility of intelligent smart-farm management methods, estimation modeling techniques are required to assess crop and environment changes in real time. A key environmental factor such as CO2 is challenging to model reliably in the time domain for indoor agricultural facilities, where many correlated variables are highly coupled. This study therefore developed an artificial neural network that reduces time complexity by using environmental information from adjacent time periods as input variables and CO2 as the output variable. The environmental factors in the smart farm were measured continuously with integrated sensor devices. Model 1, trained on the mean data of the experiment period, and Model 2, trained on day-by-day data, were constructed to predict CO2. Model 2, trained on the previous day's data, performed better than Model 1, trained on the 60-day average. Up to 30 days, most cases showed a coefficient of determination between 0.70 and 0.88, with Model 2 about 0.05 higher. After 30 days, however, the coefficients of determination of both models dropped below 0.50. Comparing the coefficients of determination for the two approaches showed that data from adjacent time periods gave relatively high performance at the points requiring prediction, compared with a fixed neural network model.
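
A minimal version of the "adjacent-time inputs, CO2 output" setup can be sketched with a small multilayer perceptron, as below. The synthetic temperature/humidity/radiation series, the two-step input window, and the network size are assumptions for illustration only, not the sensors, features, or architecture used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)

# Synthetic stand-in series (temperature, humidity, radiation); the "true" CO2
# is an arbitrary function of them, not real greenhouse data.
n = 1000
temp = 20 + 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 0.3, n)
hum = 60 + 10 * np.cos(np.linspace(0, 15, n)) + rng.normal(0, 0.5, n)
rad = np.clip(np.sin(np.linspace(0, 40, n)), 0, None) + rng.normal(0, 0.05, n)
co2 = 400 + 3 * temp - 0.5 * hum + 30 * rad + rng.normal(0, 2, n)

# Inputs: environmental variables at the current and previous time step
# (adjacent-time information); target: the current CO2 concentration.
X = np.column_stack([temp[1:], hum[1:], rad[1:], temp[:-1], hum[:-1], rad[:-1]])
y = co2[1:]

split = int(0.7 * len(y))                      # train on the earlier period
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
y_mean, y_std = y[:split].mean(), y[:split].std()
model.fit(X[:split], (y[:split] - y_mean) / y_std)     # standardized target
pred = model.predict(X[split:]) * y_std + y_mean
print("R^2 on the held-out period:", round(r2_score(y[split:], pred), 3))
```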