• Title/Summary/Keyword: Reconstruction error

Rank-weighted reconstruction feature for a robust deep neural network-based acoustic model

  • Chung, Hoon;Park, Jeon Gue;Jung, Ho-Young
    • ETRI Journal
    • /
    • v.41 no.2
    • /
    • pp.235-241
    • /
    • 2019
• In this paper, we propose a rank-weighted reconstruction feature to improve the robustness of a feed-forward deep neural network (FFDNN)-based acoustic model. In the FFDNN-based acoustic model, an input feature is constructed by vectorizing a submatrix created by slicing the feature vectors of the frames within a context window. In this type of feature construction, the context window size is important because it determines the amount of trivial or discriminative information, such as redundancy or temporal context, in the input features. However, it is questionable whether this single parameter can sufficiently control the quantity of information. Therefore, we investigated input feature construction from the perspectives of rank and nullity, and propose a rank-weighted reconstruction feature that retains the speech information components and reduces the trivial components. The proposed method was evaluated on the TIMIT phone recognition and Wall Street Journal (WSJ) tasks, where it reduced the phone error rate on TIMIT from 18.4% to 18.0% and the word error rate on WSJ from 4.70% to 4.43%.
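
A minimal sketch of the idea, assuming a simple SVD-based variant (the paper's exact weighting may differ): each rank-one component of the context-window submatrix is weighted by its normalised singular value, so dominant speech components are retained and near-null components are attenuated before the submatrix is vectorised for the FFDNN input.

```python
import numpy as np

def rank_weighted_reconstruction(frames, center, context=5):
    """Rank-weighted reconstruction feature for one frame.

    A submatrix is sliced from the feature vectors inside the context
    window, decomposed by SVD, and re-synthesised with each rank-one
    component weighted by its normalised singular value (an assumed
    weighting, used here only for illustration).
    """
    lo, hi = center - context, center + context + 1
    X = frames[lo:hi].T                      # (feat_dim, window) submatrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = s / s.max()                          # rank weights in (0, 1]
    X_rw = (U * (w * s)) @ Vt                # weighted reconstruction
    return X_rw.T.reshape(-1)                # vectorise as the FFDNN input

# toy usage: 100 frames of 40-dimensional filterbank features
frames = np.random.randn(100, 40)
feat = rank_weighted_reconstruction(frames, center=50, context=5)
print(feat.shape)                            # (11 * 40,) = (440,)
```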

Efficient Sampling of Graph Signals with Reduced Complexity (저 복잡도를 갖는 효율적인 그래프 신호의 샘플링 알고리즘)

  • Kim, Yoon Hak
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.2
    • /
    • pp.367-374
    • /
    • 2022
• A sampling set selection algorithm is proposed to reconstruct original graph signals from the signals sampled at the nodes in the sampling set. Instead of directly minimizing the reconstruction error, we focus on minimizing an upper bound on the reconstruction error to reduce the algorithm complexity. The bound is manipulated using QR factorization to produce an upper triangular matrix, and an analytic result is derived that enables greedy selection of the next node at each iteration from the diagonal entries of that matrix, leading to an efficient sampling process with reduced complexity. Experiments on various graphs demonstrate that the proposed algorithm achieves competitive reconstruction performance while running about 3.5 times faster than one of the previous selection methods.
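
A generic sketch of the greedy selection idea (not the paper's exact bound or update): working with the first K Laplacian eigenvectors, the next sample node is the one whose eigenvector row has the largest residual norm after orthogonalising against the rows already chosen, which is the quantity that appears on the diagonal of a row-pivoted QR factor.

```python
import numpy as np

def greedy_sampling(U_K, m):
    """Greedily pick m sample nodes for a K-bandlimited graph signal.

    U_K : (N, K) matrix of the first K Laplacian eigenvectors.
    At each step the node whose row has the largest residual norm
    (after projecting out the rows already selected) is added.
    """
    N, K = U_K.shape
    R = U_K.copy()                           # residual rows
    selected = []
    for _ in range(m):
        norms = np.linalg.norm(R, axis=1)
        norms[selected] = -1.0               # never re-pick a node
        i = int(np.argmax(norms))
        selected.append(i)
        q = R[i] / np.linalg.norm(R[i])      # orthogonalise the remaining rows
        R -= np.outer(R @ q, q)
    return selected

# toy usage: path graph, K = 5 bandlimit, pick 5 sample nodes
N = 20
A_adj = np.eye(N, k=1) + np.eye(N, k=-1)     # path-graph adjacency
L = np.diag(A_adj.sum(axis=1)) - A_adj       # graph Laplacian
w, V = np.linalg.eigh(L)
print(greedy_sampling(V[:, :5], m=5))
```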

Modified Raised-Cosine Interpolation and Application to Image Processing (변형된 상승여현 보간법의 제안과 영상처리에의 응용)

  • 하영호;김원호;김수중
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.25 no.4
    • /
    • pp.453-459
    • /
    • 1988
• A new interpolation function, named modified raised-cosine interpolation, is proposed. It is derived as a linear combination of weighted triangular and raised-cosine functions to reduce the effect of the side lobes that cause interpolation error. Higher-order convolutional interpolation functions reduce the interpolation error significantly, but at the expense of resolution error due to attenuation of the main lobe. The proposed interpolation function, however, reduces the side lobes while preserving the main lobe. To demonstrate its practicality, the function is applied to image reconstruction and enlargement.
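
A minimal sketch of the kernel, assuming an illustrative blend weight `alpha` and unit support; the paper derives specific weights that suppress the side lobes while preserving the main lobe.

```python
import numpy as np

def modified_raised_cosine(x, alpha=0.5):
    """Blend of a triangular (linear) kernel and a raised-cosine kernel,
    both supported on [-1, 1].  `alpha` is an illustrative blend weight,
    not the weight derived in the paper."""
    x = np.abs(np.asarray(x, dtype=float))
    tri = np.clip(1.0 - x, 0.0, None)                          # triangular
    rc = np.where(x < 1.0, 0.5 * (1.0 + np.cos(np.pi * x)), 0.0)
    return alpha * tri + (1.0 - alpha) * rc

def interpolate_1d(samples, t):
    """Reconstruct a value at (possibly fractional) position t from
    unit-spaced samples using the blended kernel above."""
    n = np.arange(len(samples))
    return float(np.sum(samples * modified_raised_cosine(t - n)))

# toy usage: upsample a short ramp by a factor of 2
s = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print([round(interpolate_1d(s, t), 3) for t in np.arange(0, 4.01, 0.5)])
```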

ECG Data Coding Using Piecewise Fractal Interpolation

  • Jun, Young-Il;Jung, Hyun-Meen;Yoon, Young-Ro;Yoon, Hyung-Ro
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1994 no.12
    • /
    • pp.134-137
    • /
    • 1994
• In this paper, we describe an approach to ECG data coding based on a fractal theory of iterated contractive transformations defined piecewise. The main characteristic of this approach is that it relies on the assumption that signal redundancy can be efficiently captured and exploited through piecewise self-transformability on a block-wise basis. A variable range size technique is employed to reduce the reconstruction error: large ranges are used to encode smooth parts of the waveform for high compression efficiency, and smaller ranges are used to encode rapidly varying parts of the signal to preserve signal quality. The suggested algorithm was evaluated using the MIT/BIH arrhythmia database and achieves a high compression ratio with a relatively low reconstruction error.
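
A rough sketch of piecewise fractal coding with a variable range size, under illustrative block sizes and thresholds (not the paper's): each range block is approximated by a contractively scaled and shifted copy of a longer domain block, ranges that fit poorly are halved, and decoding iterates the maps from an arbitrary starting signal.

```python
import numpy as np

def encode_range(signal, r_start, r_len, d_step=8):
    """Best contractive map (domain start, scale, offset) for one range block.
    Domain blocks are twice the range length, shrunk by pairwise averaging."""
    r = signal[r_start:r_start + r_len]
    d_len, best = 2 * r_len, None
    for d_start in range(0, len(signal) - d_len + 1, d_step):
        d = signal[d_start:d_start + d_len].reshape(-1, 2).mean(axis=1)
        A = np.vstack([d, np.ones_like(d)]).T          # fit r ~ s*d + o
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        s = float(np.clip(s, -0.9, 0.9))               # keep the map contractive
        err = float(np.mean((s * d + o - r) ** 2))
        if best is None or err < best[0]:
            best = (err, d_start, s, o)
    return best

def encode(signal, max_range=64, min_range=8, tol=1e-3):
    """Variable-range coder: large ranges where the waveform is smooth,
    halved ranges where the collage error exceeds `tol`."""
    code, start = [], 0
    while start < len(signal):
        r_len = min(max_range, len(signal) - start)
        while True:
            err, d_start, s, o = encode_range(signal, start, r_len)
            if err <= tol or r_len <= min_range:
                break
            r_len //= 2
        code.append((start, r_len, d_start, s, o))
        start += r_len
    return code

def decode(code, n, iters=10):
    """Reconstruct by iterating the piecewise contractive maps."""
    x = np.zeros(n)
    for _ in range(iters):
        y = np.empty(n)
        for start, r_len, d_start, s, o in code:
            d = x[d_start:d_start + 2 * r_len].reshape(-1, 2).mean(axis=1)
            y[start:start + r_len] = s * d + o
        x = y
    return x

# toy usage on a synthetic ECG-like trace
t = np.linspace(0, 4 * np.pi, 512)
sig = np.sin(t) + 0.3 * np.sin(7 * t)
code = encode(sig)
rec = decode(code, len(sig))
print(len(code), float(np.mean((rec - sig) ** 2)))
```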

Robust Terrain Reconstruction Using Minimal Window Technique (최소 윈도우 기법을 이용한 강인한 지형 복원)

  • Kim Dong-Gyu;Woo Dong-Min;Lee Kyu-Won
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.3
    • /
    • pp.163-172
    • /
    • 2003
• Stereo matching has been an important tool for the reconstruction of 3D terrain, and the current state of the technology can produce a very elaborate DEM (Digital Elevation Map). However, many factors still cause DEM errors in stereo matching. This paper proposes a new method to reduce the error caused by the lack of significant features in the correlation window. The proposed algorithm keeps the correlation window as small as possible, as long as the window contains a significant feature. Experimental results indicate that the proposed method increases the DEM accuracy by 72.65% in the plain area and 41.96% in the mountain area over the conventional scheme. Comparisons with Kanade's results show that the proposed method eliminates spike-type errors more effectively than Kanade's adaptive window technique and produces a reliable DEM.
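
A minimal sketch of the minimal-window idea, using local intensity variance as a stand-in for the paper's feature-significance test: the correlation window grows from its smallest size only until it contains a significant feature, and the disparity is then found by normalised cross-correlation.

```python
import numpy as np

def minimal_window(img, y, x, min_half=1, max_half=7, var_thresh=25.0):
    """Grow a square window around (y, x) until its intensity variance
    indicates a significant feature, or the maximum size is reached."""
    for h in range(min_half, max_half + 1):
        win = img[y - h:y + h + 1, x - h:x + h + 1]
        if win.var() >= var_thresh:
            return h
    return max_half

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match(left, right, y, x, max_disp=32):
    """Disparity of pixel (y, x) using the smallest window that still
    contains a significant feature."""
    h = minimal_window(left, y, x)
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_s = 0, -2.0
    for d in range(0, min(max_disp, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        s = ncc(ref, cand)
        if s > best_s:
            best_d, best_s = d, s
    return best_d

# toy usage: the left image is the right image shifted by 5 pixels
rng = np.random.default_rng(0)
right_img = rng.uniform(0, 255, (64, 96))
left_img = np.roll(right_img, 5, axis=1)
print(match(left_img, right_img, y=32, x=48))   # expected disparity: 5
```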

Reconstruction Scheme of lost Blocks in Block Coded Images (블록 코딩 영상에서 손실 블록의 재구성 기법)

  • Yoo Kyeong-Jung;Lee Bu-Kwon
    • Journal of Digital Contents Society
    • /
    • v.6 no.2
    • /
    • pp.113-118
    • /
    • 2005
• In this paper, a scheme for reconstructing blocks lost during the transmission of block-coded images is proposed. Because of the differential coding of the DC coefficients in JPEG images, transmission errors over wireless channels can destroy entire blocks of the image. We therefore aim to reconstruct the lost information using the correlation between a lost block and its neighboring blocks. To evaluate the performance of the scheme, intentional block errors were inserted into test images, and the experiments yielded good objective and subjective results.
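
The paper's correlation-based reconstruction is not reproduced here; the sketch below is a simple stand-in that fills a lost 8x8 block by linearly interpolating between the pixel rows and columns that border it in the four neighbouring blocks.

```python
import numpy as np

def conceal_block(img, by, bx, B=8):
    """Fill a lost BxB block at block coordinates (by, bx) by blending
    linear interpolations between its top/bottom and left/right
    neighbouring pixels (an illustrative stand-in, not the paper's method)."""
    y0, x0 = by * B, bx * B
    top    = img[y0 - 1, x0:x0 + B]        # last row of the block above
    bottom = img[y0 + B, x0:x0 + B]        # first row of the block below
    left   = img[y0:y0 + B, x0 - 1]        # last column of the block to the left
    right  = img[y0:y0 + B, x0 + B]        # first column of the block to the right
    w = (np.arange(1, B + 1) / (B + 1))[:, None]             # interpolation weights
    v_est = (1 - w) * top[None, :] + w * bottom[None, :]     # vertical blend
    h_est = (1 - w.T) * left[:, None] + w.T * right[:, None]  # horizontal blend
    img[y0:y0 + B, x0:x0 + B] = 0.5 * (v_est + h_est)
    return img

# toy usage: wipe out one block of a smooth image and conceal it
img = np.add.outer(np.arange(64.0), np.arange(64.0))
truth = img[16:24, 24:32].copy()
img[16:24, 24:32] = 0.0                     # simulate a lost block
conceal_block(img, by=2, bx=3)
print(float(np.abs(img[16:24, 24:32] - truth).max()))   # small residual error
```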

Content Based Mesh Motion Estimation in Moving Pictures (동영상에서의 내용기반 메쉬를 이용한 모션 예측)

  • 김형진;이동규;이두수
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.35-38
    • /
    • 2000
• Content-based triangular mesh representation of moving pictures gives better prediction error and visual quality than classical block matching. In particular, if the background and the objects can be separated from the image, the objects are modeled with an irregular mesh, which further increases video coding efficiency. This paper presents techniques for mesh generation and mesh-based motion estimation, uses an image warping transform such as the affine transform for image reconstruction, and evaluates the content-based mesh design through computer simulation.
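
A minimal sketch of the warping step, assuming the mesh and the motion of its nodes are already known: each triangular patch is predicted from the reference frame by the six-parameter affine map fitted to its three vertices.

```python
import numpy as np

def affine_from_triangle(src_tri, dst_tri):
    """Solve the 6-parameter affine map that sends the three source
    vertices onto the three destination vertices (the warp used to
    predict one triangular mesh patch from the reference frame)."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (u, v)) in enumerate(zip(src_tri, dst_tri)):
        A[2 * i]     = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    p = np.linalg.solve(A, b)
    return p.reshape(2, 3)                   # [[a, b, tx], [c, d, ty]]

# toy usage: triangle vertices and where motion estimation moved them
src = [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)]
dst = [(1.0, 0.5), (9.0, 1.0), (0.5, 8.5)]
M = affine_from_triangle(src, dst)
pt = np.array([4.0, 4.0, 1.0])               # a pixel inside the patch
print(M @ pt)                                # its predicted position
```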

Bayesian Image Reconstruction Using Edge Detecting Process for PET

  • Um, Jong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.12
    • /
    • pp.1565-1571
    • /
    • 2005
• Images reconstructed with the Maximum-Likelihood Expectation-Maximization (MLEM) algorithm exhibit checkerboard effects and noise artifacts near edges as the iterations proceed. To compensate for this ill-posed nature, numerous penalized maximum-likelihood methods have been proposed. We suggest a simple algorithm that applies an edge-detecting process to MLEM and Bayesian Expectation-Maximization (BEM) to reduce the noise artifacts near edges and remove the checkerboard effects. Simulations show that the algorithm removes the checkerboard effects, improves the clarity of the reconstructed image, and performs well in terms of root mean square (RMS) error.
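
A minimal sketch of the MLEM update with a crude edge-gated smoothing step standing in for the paper's edge-detecting process; the example is a tiny 1-D problem with an assumed random system matrix, not a PET geometry.

```python
import numpy as np

def mlem(A, y, n_iter=50, edge_thresh=None):
    """MLEM reconstruction:  x <- x / (A^T 1) * A^T( y / (A x) ).

    If `edge_thresh` is given, pixels whose local gradient magnitude is
    below the threshold are replaced by a 3-point moving average after
    each update, while pixels on detected edges are left untouched
    (an illustrative stand-in for the paper's edge-detecting process)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity image A^T 1
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / np.maximum(A @ x, 1e-12)))
        if edge_thresh is not None:
            g = np.abs(np.diff(x, prepend=x[:1]))    # crude 1-D gradient
            sm = np.convolve(x, np.ones(3) / 3, mode="same")
            x = np.where(g < edge_thresh, sm, x)
    return x

# toy usage: small 1-D emission problem with Poisson projection data
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (40, 16))                  # assumed system matrix
x_true = np.zeros(16)
x_true[5:11] = 4.0                                   # a flat hot region with edges
y = rng.poisson(A @ x_true).astype(float)
print(np.round(mlem(A, y, edge_thresh=0.5), 2))
```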

Reconstruction of a 3D Model using the Midpoints of Line Segments in a Single Image (한 장의 영상으로부터 선분의 중점 정보를 이용한 3차원 모델의 재구성)

  • Park Young Sup;Ryoo Seung Taek;Cho Sung Dong;Yoon Kyung Hyun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.4
    • /
    • pp.168-176
    • /
    • 2005
• We propose a method for reconstructing a 3D object from a single image using line segments that include midpoint information. A predefined polygon is used as the primitive, and recovery proceeds from a single image by mapping the corresponding points of the primitive model onto the photograph. Previous model-based 3D reconstruction relied on recovering camera parameters or on iterative error-minimization methods. In contrast, the proposed method reconstructs a primitive composed of segments and their midpoints, and requires only the focal length among the camera parameters during the segment reconstruction process.
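
A sketch of the midpoint constraint under a pinhole camera with known focal length (assumed notation, not the paper's full primitive reconstruction): writing the 3D midpoint as the average of the two endpoint rays gives a homogeneous 3x3 system whose null space yields the endpoint depths up to a global scale.

```python
import numpy as np

def depths_from_midpoint(p1, p2, pm, f):
    """Recover the endpoints of a 3D segment (up to a global scale) from
    the image positions of the endpoints (p1, p2) and of the segment's
    3D midpoint (pm), for a pinhole camera with focal length f and the
    principal point at the image origin.

    Each image point (u, v) back-projects to the ray lam * (u, v, f).
    The midpoint condition lam1*d1 + lam2*d2 - 2*lam_m*dm = 0 is a
    homogeneous system solved via its null space."""
    d1 = np.array([p1[0], p1[1], f])
    d2 = np.array([p2[0], p2[1], f])
    dm = np.array([pm[0], pm[1], f])
    A = np.column_stack([d1, d2, -2.0 * dm])
    _, _, Vt = np.linalg.svd(A)
    lam = Vt[-1]                      # null vector: (lam1, lam2, lam_m)
    lam = lam / lam[2]                # fix the scale (sets the midpoint depth to f)
    return lam[0] * d1, lam[1] * d2   # 3D endpoints up to a global scale

# toy usage: synthesise a segment, project it, then recover it
f = 800.0
P1, P2 = np.array([1.0, 0.5, 10.0]), np.array([-0.5, 1.0, 14.0])
Pm = 0.5 * (P1 + P2)
proj = lambda P: f * P[:2] / P[2]
Q1, Q2 = depths_from_midpoint(proj(P1), proj(P2), proj(Pm), f)
print(np.round(Q1 / Q1[2] * P1[2], 3))   # rescaled: should match P1
```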

Optimized Integer Cosine Transform (최적화 정수형 여현 변환)

  • 이종하;김혜숙;송인준;곽훈성
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.9
    • /
    • pp.1207-1214
    • /
    • 1995
• We present an optimized integer cosine transform (OICT) as an alternative to the conventional discrete cosine transform (DCT), together with a fast computational algorithm. In the actual implementation of the OICT, we use techniques similar to those of the orthogonal integer transform (OIT). The normalization factors are approximated by a single value while keeping the reconstruction error at the best tolerable level; with a single normalization factor, both the forward and inverse transforms are performed using only integers. Because many sets of integers can be selected in this manner, the best OICT matrix is obtained by minimizing the Hilbert-Schmidt norm while still admitting a fast computational algorithm. Using matrix decomposition, a fast algorithm for the order-8 OICT is developed that requires only 20 integer multiplications, which makes it possible to implement a high-performance 2-D DCT processor by replacing floating-point operations with integer operations. Simulations of the order-8 OICT in terms of transform efficiency, maximum reducible bits, and mean square error for the Wiener filter show that it outperforms both the DCT and the OIT. Furthermore, when the conventional DCT coefficients are reduced to 7 bits as in the OICT, the reconstructed images are critically impaired because the DCT loses its orthogonality, whereas the 7-bit OICT maintains a zero mean-square reconstruction error.
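
A minimal sketch of an order-8 integer cosine transform, using the well-known integer set (10, 9, 6, 2, 3, 1) that satisfies the row-orthogonality condition a*b = a*c + b*d + c*d; the integers actually selected by the paper's Hilbert-Schmidt optimisation, its single normalisation factor, and its 20-multiplication fast algorithm are not reproduced, and the per-row normalisation factors are applied explicitly here.

```python
import numpy as np

def ict_matrix(a=10, b=9, c=6, d=2, e=3, f=1):
    """Order-8 integer cosine transform matrix with mutually orthogonal
    rows (the integer set is illustrative, not the paper's optimum)."""
    return np.array([
        [1,  1,  1,  1,  1,  1,  1,  1],
        [a,  b,  c,  d, -d, -c, -b, -a],
        [e,  f, -f, -e, -e, -f,  f,  e],
        [b, -d, -a, -c,  c,  a,  d, -b],
        [1, -1, -1,  1,  1, -1, -1,  1],
        [c, -a,  d,  b, -b, -d,  a, -c],
        [f, -e,  e, -f, -f,  e, -e,  f],
        [d, -c,  b, -a,  a, -b,  c, -d],
    ], dtype=np.int64)

T = ict_matrix()
K = np.diag(T @ T.T).astype(float)        # squared row norms (diagonal of T T^T)

def forward(block):
    """Integer-only forward transform of an 8x8 block: Y = T X T^T."""
    return T @ block @ T.T

def inverse(Y):
    """Inverse with normalisation folded in: X = T^T K^-1 Y K^-1 T."""
    return T.T @ (Y / K[:, None] / K[None, :]) @ T

block = np.arange(64, dtype=np.int64).reshape(8, 8)
print(np.abs(inverse(forward(block)) - block).max())   # ~0, floating-point round-off
```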
