• Title/Summary/Keyword: block-based coding

Performance Comparison of Space-Time Block Coding in High-speed Railway Channel (고속 철도 채널 환경에서 시공간 블록 부호 성능 비교)

  • Park, Seong-Guen; Lee, Jong-Woo; Jeon, Taehyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.3 / pp.291-297 / 2014
  • Due to the rapid increase in demand for passenger and freight transportation in modern railway systems, the CBTC system has been proposed as a solution for improving the line capacity that is limited by the conventional track-circuit-based train control system. In the CBTC system, high reliability of the communication system must be guaranteed for the safety of passengers and trains. However, due to the inherent characteristics of the wireless channel environment, performance degradation is inevitable. Diversity techniques can increase the reliability of data transmission by using multiple antennas. In this paper, we investigate the performance of STBC in the railway channel environment. A Rician fading model is used for the viaduct scenarios, which play an important role in railway systems, and the Doppler effect, an important factor in mobile communication systems, is also considered. Simulations are performed to analyze the performance of STBC in various channel environments. The results show that the performance degradation due to phase error in viaduct scenarios is independent of the diversity order but is affected by the modulation constellation.
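As a rough illustration of the kind of link-level simulation described in this abstract, the sketch below runs a 2x1 Alamouti space-time block code over a flat Rician fading channel and estimates the BPSK bit error rate. The Rician K-factor, SNR points, and power normalization are illustrative assumptions, not the authors' simulation parameters, and the Doppler-induced phase error studied in the paper is not modeled here.

```python
# Minimal sketch: Alamouti 2x1 STBC over flat Rician fading with BPSK.
# Hypothetical parameters; not the authors' simulation setup.
import numpy as np

rng = np.random.default_rng(0)

def rician_gain(K, size):
    """Flat Rician fading coefficients with K-factor K (unit average power)."""
    los = np.sqrt(K / (K + 1))
    nlos = np.sqrt(1 / (2 * (K + 1))) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))
    return los + nlos

def alamouti_ber(snr_db, K=6.0, n_pairs=200_000):
    bits = rng.integers(0, 2, 2 * n_pairs)
    s = 1.0 - 2.0 * bits                        # BPSK mapping: 0 -> +1, 1 -> -1
    s1, s2 = s[0::2], s[1::2]
    h1, h2 = rician_gain(K, n_pairs), rician_gain(K, n_pairs)
    sigma = np.sqrt(10 ** (-snr_db / 10) / 2)   # per-dimension noise std
    n1 = sigma * (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs))
    n2 = sigma * (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs))
    # Two received samples per Alamouti block; total transmit power normalized to 1.
    r1 = h1 * s1 / np.sqrt(2) + h2 * s2 / np.sqrt(2) + n1
    r2 = -h1 * np.conj(s2) / np.sqrt(2) + h2 * np.conj(s1) / np.sqrt(2) + n2
    # Linear combining separates the two symbols with diversity order 2.
    y1 = np.conj(h1) * r1 + h2 * np.conj(r2)
    y2 = np.conj(h2) * r1 - h1 * np.conj(r2)
    det = np.concatenate([(np.real(y1) < 0).astype(int), (np.real(y2) < 0).astype(int)])
    sent = np.concatenate([(s1 < 0).astype(int), (s2 < 0).astype(int)])
    return np.mean(det != sent)

for snr in (0, 5, 10, 15):
    print(f"SNR {snr:2d} dB  BER {alamouti_ber(snr):.4e}")
```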

Connection between Fourier of Signal Processing and Shannon of 5G SmartPhone (5G 스마트폰의 샤논과 신호처리의 푸리에의 표본화에서 만남)

  • Kim, Jeong-Su; Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.6 / pp.69-78 / 2017
  • Shannon of the 5G smartphone and Fourier of signal processing meet in the sampling theorem (sampling at twice the highest frequency). In this paper, we note that the original Shannon theorem gives the channel capacity of a point-to-point link, whereas 5G shows, on the relay channel, that the technology has evolved into multi-point MIMO. The Fourier transform is signal processing with fixed parameters; for the multimedia age, we analyze performance by proposing a 2N-1 multivariate Fourier-Jacket transform. In this study, the authors tackle this signal-processing complexity issue by proposing a Jacket-based fast method for reducing the precoding/decoding complexity in terms of computation time. Jacket transforms have found applications in signal processing and coding theory. Jacket transforms are defined as $n{\times}n$ matrices $A=(a_{jk})$ over a field F with the property $AA^{\dot{+}}=nI_n$, where $A^{\dot{+}}$ is the transpose of the element-wise inverse of A, that is, $A^{\dot{+}}=(a^{-1}_{kj})$; they generalise Hadamard transforms and centre-weighted Hadamard transforms. In particular, exploiting the Jacket transform properties, the authors propose a new eigenvalue decomposition (EVD) method with application to precoding and decoding of distributive multi-input multi-output channels in relay-based DF cooperative wireless networks in which the transmission uses single-symbol decodable space-time block codes. The authors show that the proposed Jacket-based EVD method offers a significant reduction in computational time compared to the conventional EVD method. Performance in terms of computational-time reduction is evaluated quantitatively through mathematical analysis and numerical results.
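To make the Jacket definition above concrete, the sketch below numerically checks the property $AA^{\dot{+}}=nI_n$ for a 4x4 Hadamard matrix and for a centre-weighted variant. The weight w=2 and the particular construction are illustrative choices, not taken from the paper.

```python
# Minimal sketch checking the Jacket property A @ A_dagger == n * I, where
# A_dagger is the transpose of the element-wise inverse of A. A +/-1 Hadamard
# matrix is the simplest Jacket matrix; the centre-weighted example scales the
# inner 2x2 block by an assumed weight w = 2.
import numpy as np

def jacket_dagger(A):
    """Transpose of the element-wise inverse: (a_jk) -> (1/a_kj)."""
    return (1.0 / A).T

def is_jacket(A, tol=1e-9):
    n = A.shape[0]
    return np.allclose(A @ jacket_dagger(A), n * np.eye(n), atol=tol)

# 4x4 Hadamard matrix (Sylvester construction).
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H4 = np.kron(H2, H2)

# Centre-weighted Hadamard: scale the inner 2x2 block by the weight w.
w = 2.0
CW4 = H4.copy()
CW4[1:3, 1:3] *= w

print(is_jacket(H4))   # True
print(is_jacket(CW4))  # True for the centre-weighted form as well
```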

Applications of Regularized Dequantizers for Compressed Images (압축된 영상에서 정규화 된 역양자화기의 응용)

  • Lee, Gun-Ho; Sung, Ju-Seung; Song, Moon-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.5 / pp.11-20 / 2002
  • Based on regularization principles, we propose a new dequantization scheme for DCT-based transform coding that reduces blocking artifacts and minimizes the quantization error. Conventional image dequantization simply multiplies the received quantized DCT coefficients by the quantization matrix; therefore, for each DCT coefficient, the quantization noise can be as large as half the quantizer step size (in the DCT domain). Our approach is based on two basic constraints: the quantization error is bounded to ${\pm}$(quantizer spacing/2), and there should be no high-frequency components corresponding to discontinuities across block boundaries of the image. Through regularization, the proposed dequantization scheme sharply reduces blocking artifacts in decoded images while guaranteeing that each dequantized DCT coefficient remains within its quantization interval. The scheme is evaluated against the standard JPEG, MPEG-1 and H.263 (with Annex J deblocking filter) decoding processes. The experimental results show visual improvements as well as numerical improvements in terms of the peak signal-to-noise ratio (PSNR) and a blockiness measure (BM) defined in the paper.
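A minimal sketch of the coefficient-interval constraint described above: each dequantized DCT coefficient is repeatedly projected back into the interval of width ±(quantizer spacing/2) around the transmitted level. The smoothing step is a crude single-block placeholder for the paper's regularization (which enforces smoothness across block boundaries), so this only illustrates the constraint set, not the proposed algorithm.

```python
# Sketch of constrained dequantization for one 8x8 block. The smoothing step is
# a placeholder, not the paper's regularization functional; only the clipping of
# DCT coefficients back into their quantization intervals follows the abstract.
import numpy as np
from scipy.fft import dctn, idctn

def constrained_dequantize(q_indices, Q, n_iter=10, smooth=0.25):
    """q_indices: 8x8 quantized DCT indices; Q: 8x8 quantization step matrix."""
    coeffs = q_indices * Q                                   # conventional dequantization
    lo, hi = (q_indices - 0.5) * Q, (q_indices + 0.5) * Q    # quantization intervals
    for _ in range(n_iter):
        pixels = idctn(coeffs, norm='ortho')
        # placeholder smoothness step: mild vertical low-pass inside the block
        pixels = (1 - smooth) * pixels + smooth * np.roll(pixels, 1, axis=0)
        # project back into the quantization intervals so the block stays
        # consistent with what the encoder actually transmitted
        coeffs = np.clip(dctn(pixels, norm='ortho'), lo, hi)
    return idctn(coeffs, norm='ortho')
```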

A Study on the Activation Plan for Early Childhood SW·AI Education Based on Actual Condition Survey of Kindergarten SW·AI Education (유치원 SW·AI 교육 실태조사를 기초로 한 유아 SW·AI 교육 활성화 방안에 관한 연구)

  • Pyun, Youngshin
    • Journal of Internet of Things and Convergence / v.8 no.6 / pp.93-97 / 2022
  • The purpose of this study is to suggest implications for early childhood SW·AI education, reflecting the characteristics of early childhood education, through a survey on SW·AI education in kindergartens. For this study, data were collected from 194 kindergartens through convenience sampling. The data were analyzed using frequency distributions. It was found that 44% of the kindergartens conduct SW·AI education; 22% conduct it as part of the regular curriculum and 70% as special after-school activities. SW·AI education was found to be conducted mainly by external instructors (97%) in the classroom (80%). For SW·AI education, block-coding-based programs such as Naver's Clova were used, and all of these programs came in a package format including teaching aids and materials developed by the companies. The remaining 56% answered that they are not currently conducting SW·AI education, citing a lack of awareness of SW·AI education and a lack of human and environmental infrastructure as the main factors. Based on this survey, to realize SW·AI education suited to the characteristics of early childhood education: first, SW·AI education programs should be developed to foster play-centered computational thinking skills; second, systematic teacher education should be conducted at the national level; finally, a department dedicated to early childhood SW·AI, consisting of early childhood education experts and SW·AI education experts, should be established, with financial support provided at the national level.

Optimal Scheduling of SAD Algorithm on VLIW-Based High Performance DSP (VLIW 기반 고성능 DSP에서의 SAD 알고리즘 최적화 스케줄링)

  • Yu, Hui-Jae; Jung, Sou-Hwan; Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.7 no.12 / pp.262-272 / 2007
  • SAD (Sum of Absolute Difference) is the most frequently executed routine in motion estimation, which is the most demanding process in motion picture encoding. To enhance the performance of motion picture encoding on a VLIW processor, an optimal implementation of the SAD algorithm on the VLIW processor must be achieved. In this paper, we propose an optimally scheduled implementation of the SAD algorithm with a conditional branch on a VLIW-based high-performance DSP. We first transform the nested loop with a conditional branch of the SAD algorithm into a single loop whose body is large enough to fully utilize the ILP capability of the VLIW DSP and whose conditional branch allows the loop to be exited as early as possible. We then apply modulo scheduling to the transformed single loop. We test the proposed implementation on the TMS320C6713 and analyze the code size and the performance in terms of processing time. The experiments show that the proposed SAD implementation has a small code size appropriate for embedded applications, and that an H.263 encoder using the proposed SAD implementation performs better than H.263 encoders using other SAD implementations.
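The loop transformation described above can be sketched as follows: the nested row/column SAD loops are flattened into a single loop, and a conditional branch compares the partial SAD against the best SAD found so far so that a hopeless candidate is abandoned early. This Python sketch only illustrates the control structure; the paper's implementation is modulo-scheduled code for the TMS320C6713, and the per-row check interval here is an assumption.

```python
# Sketch of the flattened SAD loop with an early-exit conditional branch.
def sad_early_exit(cur, ref, best_so_far, block=16):
    """cur, ref: flat sequences of block*block pixel values (row-major)."""
    sad = 0
    for i in range(block * block):              # single flattened loop
        sad += abs(cur[i] - ref[i])
        # check once per row so the loop body stays large (assumed interval)
        if (i + 1) % block == 0 and sad >= best_so_far:
            return sad                          # candidate cannot beat the current best
    return sad
```

The caller keeps the smallest returned value as `best_so_far`, so later candidates are rejected sooner as the search converges.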

A Blind Watermarking Algorithm using CABAC for H.264/AVC Main Profile (H.264/AVC Main Profile을 위한 CABAC-기반의 블라인드 워터마킹 알고리즘)

  • Seo, Young-Ho; Choi, Hyun-Jun; Lee, Chang-Yeul; Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.2C / pp.181-188 / 2007
  • This paper proposes a watermark embedding/extraction method using CABAC (Context-based Adaptive Binary Arithmetic Coding), the entropy coder for the main profile of MPEG-4 Part 10 H.264/AVC. The algorithm selects the blocks, and the coefficients within a block, on the basis of the contexts extracted from the relationship with adjacent blocks and coefficients. A watermark bit is embedded either without any modification of the coefficient or by replacing the LSB (Least Significant Bit) of the coefficient with the watermark bit, depending on both the absolute value of the selected coefficient and the watermark bit. This makes it hard for an attacker to find the watermarked locations. By selecting a few coefficients near the DC coefficient according to the contexts, the algorithm satisfies the robustness requirement. In experiments with attacks of various kinds and strengths, the error ratio of the extracted watermark was at most 5.02%, which confirms that the proposed algorithm has a very high level of robustness. Because the watermark is embedded during the context modeling and binarization process of CABAC, the additional computation for locating and selecting the coefficients to embed the watermark is very small. Consequently, the method is expected to be very useful in applications where video must be compressed right after acquisition.
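The embedding rule described above, with the CABAC context modeling abstracted away, can be sketched as follows: for each coefficient chosen by context, a watermark bit is carried either by leaving the coefficient unchanged or by replacing its LSB. The `selected` index list stands in for the context-based selection, and the small-magnitude guard is an assumption for illustration, not the paper's exact rule.

```python
# Sketch of LSB-based watermark embedding into selected quantized coefficients.
def embed_bits(coeffs, bits, selected):
    """coeffs: quantized levels; selected: indices chosen by context (assumed given);
    bits: watermark bits (0/1). Returns the watermarked levels."""
    out = list(coeffs)
    bit_iter = iter(bits)
    for idx in selected:
        c = out[idx]
        if abs(c) < 2:                           # assumed guard: never create or destroy a zero level
            continue
        b = next(bit_iter)
        sign = 1 if c > 0 else -1
        mag = abs(c)
        out[idx] = sign * ((mag & ~1) | b)       # set the LSB to the bit (no change if it already matches)
    return out

def extract_bits(coeffs, selected):
    """Recover the embedded bits from the same context-selected positions."""
    return [abs(coeffs[i]) & 1 for i in selected if abs(coeffs[i]) >= 2]
```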

A Fast Motion Estimation Algorithm Based on Multi-Resolution Frame Structure (다 해상도 프레임 구조에 기반한 고속 움직임 추정 기법)

  • Song, Byung-Cheol; Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.5 / pp.54-63 / 2000
  • We present a multi-resolution block matching algorithm (BMA) for fast motion estimation. At the coarsest level, a motion vector (MV) having the minimum matching error is chosen via a full search, and an MV with the minimum matching error is concurrently found among the MVs of the spatially adjacent blocks. Here, to examine the spatial MVs accurately, we propose an efficient method for searching full-resolution MVs without MV quantization, even at the coarsest level. The two chosen MVs are used as the initial search centers at the middle level. At the middle level, a local search is performed within a much smaller search area around each search center. If the method used at the coarsest level is adopted here, the local searches can be done at integer-pel accuracy. An MV having the minimum matching error is selected within the local search areas, and then the final-level search is performed around this initial search center. Since the local searches are performed at integer-pel accuracy at the middle level, the local search at the finest level has little effect on the overall performance, so we can skip the final-level search without performance degradation, thereby increasing the search speed. Simulation results show that, in comparison with the full-search BMA, the proposed BMA without the final-level search achieves a speed-up factor of over 200 with a minor PSNR degradation of at most 0.2 dB under a normal MPEG-2 coding environment. Furthermore, our scheme is also suitable for hardware implementation due to its regular data flow.
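A rough sketch of the multi-resolution search described above: a full search at a 2x-downsampled (coarse) level and the spatially adjacent blocks' MVs yield two candidate vectors, each of which is then refined by a small local search at full resolution. The block size, search ranges, two-level pyramid, and simple subsampling are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of multi-resolution block matching with coarse full search plus
# spatial-MV candidates, each refined by a small full-resolution local search.
import numpy as np

def block_sad(frame_ref, frame_cur, bx, by, mv, bs):
    x, y = bx + mv[0], by + mv[1]
    if x < 0 or y < 0 or x + bs > frame_ref.shape[1] or y + bs > frame_ref.shape[0]:
        return np.inf                                  # candidate outside the frame
    cur = frame_cur[by:by + bs, bx:bx + bs].astype(int)
    ref = frame_ref[y:y + bs, x:x + bs].astype(int)
    return np.abs(cur - ref).sum()

def full_search(frame_ref, frame_cur, bx, by, bs, rng, center=(0, 0)):
    best, best_mv = np.inf, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            mv = (center[0] + dx, center[1] + dy)
            cost = block_sad(frame_ref, frame_cur, bx, by, mv, bs)
            if cost < best:
                best, best_mv = cost, mv
    return best_mv

def multires_me(frame_ref, frame_cur, bx, by, spatial_mvs, bs=16, rng=16):
    # Coarse level: 2x subsampled frames, full search over a halved range.
    ref_c, cur_c = frame_ref[::2, ::2], frame_cur[::2, ::2]
    mv_coarse = full_search(ref_c, cur_c, bx // 2, by // 2, bs // 2, rng // 2)
    cand1 = (mv_coarse[0] * 2, mv_coarse[1] * 2)
    # Second candidate: best MV among the spatially adjacent blocks (full resolution).
    cand2 = min(spatial_mvs, default=(0, 0),
                key=lambda mv: block_sad(frame_ref, frame_cur, bx, by, mv, bs))
    # Refine each candidate with a small (+/-2) local search; keep the better one.
    refined = [full_search(frame_ref, frame_cur, bx, by, bs, 2, center=c) for c in (cand1, cand2)]
    return min(refined, key=lambda mv: block_sad(frame_ref, frame_cur, bx, by, mv, bs))
```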