• Title/Summary/Keyword: amount of computation


The Computation Reduction Algorithm Independent of the Language for CELP Vocoders (각국 언어 특성에 독립적인 CELP 계열 보코더에서의 계산량 단축 알고리즘)

  • Ju, Sang-Gyu
    • Proceedings of the KAIS Fall Conference / 2010.05a / pp.257-260 / 2010
  • In this paper, we propose computation reduction methods for the LSP (line spectrum pairs) transformation that is widely used in CELP vocoders. To decrease the computation time of the real root method, four schemes are proposed: the first reduces the LSP transformation time by using the mel scale; the second controls the search order according to the distribution characteristics of the LSP parameters; the third exploits voice characteristics; and the fourth controls both the search interval and the search order based on the LSP parameter distribution. Evaluated in terms of search time, computational amount, transformed LSP parameters, SNR, MOS tests, synthesized-speech waveforms, and spectrogram analysis, the four schemes reduce search time by about 37.5%, 46.21%, 46.3%, and 51.29% on average, and computational amount by about 44.76%, 49.44%, 47.03%, and 57.40%, while the transformed LSP parameters remain identical to those of the real root method. (A minimal sketch of the underlying real-root search step appears after this entry.)

  • PDF
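
As a rough illustration of the baseline this entry speeds up: the real root method converts LPC coefficients to LSPs by scanning x = cos(ω) over [-1, 1] for sign changes of the sum/difference polynomials and refining each bracketed root by bisection. The sketch below shows only that generic grid-scan-plus-bisection step; `grid_step`, `refine_iters`, and the toy polynomial are illustrative assumptions, and the paper's actual savings (restricting and reordering the scan based on the typical LSP distribution) are not reproduced here.

```python
import numpy as np

def real_root_scan(poly, lo=-1.0, hi=1.0, grid_step=0.02, refine_iters=20):
    """Find real roots of the callable `poly` in [lo, hi] by grid scan + bisection."""
    roots = []
    xs = np.arange(lo, hi, grid_step)
    for a, b in zip(xs[:-1], xs[1:]):
        if poly(a) * poly(b) > 0:        # no sign change in this cell, skip it
            continue
        for _ in range(refine_iters):    # bisection refinement of the bracketed root
            m = 0.5 * (a + b)
            if poly(a) * poly(m) <= 0:
                b = m
            else:
                a = m
        roots.append(0.5 * (a + b))
    return roots

# Toy usage: a cubic with known roots stands in for the LSP sum/difference polynomial.
print(real_root_scan(lambda x: (x - 0.3) * (x + 0.5) * (x - 0.9)))
```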

A new template matching algorithm and its ASIC chip implementation (Template matching을 위한 새로운 알고리즘 및 ASIC 칩 구현)

  • 서승완;선우명훈
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.1 / pp.15-24 / 1998
  • This paper proposes a new template matching algorithm and its chip design. The CC and SAD (sum of absolute differences) algorithms require a massive amount of computation, so several algorithms using quantization schemes have been proposed to reduce the computation and its hardware cost. The proposed algorithm, called EMPPM, improves the noise margin by at least 22% compared with the MPPM algorithm. In addition, the proposed architecture reduces the gate count by more than 60% relative to the SAD algorithm without quantization schemes and by 28% relative to the MPPM algorithm. The VHDL models were simulated with Cadence™ tools, and logic synthesis was performed with Synopsys™ tools using a 0.6 μm SOG (sea-of-gates) cell library. The implemented chip consists of 35,829 gates, operates at 100 MHz (worst case 53 MHz), and performs template matching at 200 Mpixels/sec. (A plain-SAD baseline sketch follows this entry.)

  • PDF
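
For context on the computation the EMPPM hardware targets, the sketch below implements only plain SAD template matching, the baseline named in the abstract; the quantization-based MPPM/EMPPM schemes themselves are not reproduced. Sliding an N×N template over a W×H image costs on the order of W·H·N² absolute differences, which is the load the quantized algorithms and the ASIC architecture aim to cut. The test image and template position are illustrative.

```python
import numpy as np

def sad_match(image, template):
    """Return (row, col) of the best SAD match of `template` inside `image`."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw].astype(np.int32)
            sad = np.abs(window - template.astype(np.int32)).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
tmpl = img[20:28, 30:38].copy()       # 8x8 patch cut from the image itself
print(sad_match(img, tmpl))           # expected: (20, 30)
```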

Finite Element Analysis of Externally Round Grooved Profile Ring Rolling Process (외부에 둥근 홈이 있는 형상환상압연공정의 유한요소해석)

  • 김광희;김병탁;석한길
    • Transactions of Materials Processing / v.12 no.7 / pp.631-639 / 2003
  • The ring rolling process is simulated using the general-purpose commercial finite element analysis software MSC.Superform. Because the deforming region is restricted to the vicinity of the roll gap, only a ring segment spanning the roll gap is analyzed in order to save computation time and cost. First, plain ring rolling of a rectangular cross-section is simulated; comparisons between computation and experiment show good agreement in the cross-sectional configuration of the deformed ring. Then, a profile ring with an external round groove is analyzed, and rolls with and without the groove are compared with respect to the amount of side spread. The grooves in the rolls are found to be effective in reducing side spread.

A Computational Improvement of Otsu's Algorithm by Estimating Approximate Threshold (근사 임계값 추정을 통한 Otsu 알고리즘의 연산량 개선)

  • Lee, Youngwoo;Kim, Jin Heon
    • Journal of Korea Multimedia Society / v.20 no.2 / pp.163-169 / 2017
  • There are various algorithms for determining a threshold for image segmentation. Among them, Otsu's algorithm sets the threshold based on the image histogram: it computes the between-class variance over all gray levels and selects the level that maximizes it as the optimal threshold, which requires a considerable amount of computation. In this paper, we reduce this computational load by estimating an approximate Otsu threshold instead of evaluating every threshold candidate. The proposed algorithm is compared with the original one in terms of computation amount and accuracy. We confirm that the proposed algorithm is about 29 times faster than the conventional method on a single processor and about 4 times faster on a parallel processing machine. (A sketch of the baseline full sweep follows.)
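
For reference, a minimal sketch of the full-sweep Otsu computation that the paper replaces with an estimated threshold: the between-class variance is evaluated at every gray level and the maximizing level is returned. The synthetic bimodal image at the bottom is only a usage example, not data from the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu: test every gray level and maximize between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):                               # all candidate thresholds
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)])
pixels = np.clip(pixels, 0, 255).astype(np.uint8)
print(otsu_threshold(pixels))   # roughly halfway between the two intensity modes
```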

Method of Fast Interpolation of B-Spline Volumes for Reconstructing the Heterogeneous Model of Bones from CT Images (CT 영상에서 뼈의 불균질 모델 생성을 위한 B-스플라인 볼륨의 빠른 보간 방법)

  • Park, Jun Hong;Kim, Byung Chul
    • Transactions of the Korean Society of Mechanical Engineers A / v.40 no.4 / pp.373-379 / 2016
  • It is known that it is expedient to represent the distribution of the properties of a bone with complex heterogeneity as B-spline volume functions. For B-spline-based representation, the pixel values of CT images are interpolated by B-spline volume functions. However, the CT images of a bone are three-dimensional and very large, and hence a large amount of memory and long computation time for the interpolation are required. In this study, a method for resolving these problems is proposed. In the proposed method, the B-spline volume interpolation problem is simplified by using the uniformity of pixel spacing of the image and the properties of B-spline basis functions. This results in a reduction in computation time and the amount of memory used. The proposed method was implemented and it was verified that the computation time and the amount of memory used were reduced.
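
The sketch below is an assumption-laden illustration, not the authors' interpolation code: with uniform knot spacing, evaluating a tensor-product cubic B-spline volume is separable, so each evaluation needs only four basis values per axis and touches 4×4×4 = 64 coefficients. This regularity is the kind of structure the paper exploits to cut computation time and memory; the actual fitting of control points to CT pixel values is omitted.

```python
import numpy as np

def cubic_bspline_weights(t):
    """Uniform cubic B-spline basis values for the local parameter t in [0, 1)."""
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ])

def eval_bspline_volume(coeffs, x, y, z):
    """Evaluate a uniform cubic B-spline volume defined by `coeffs` at (x, y, z)."""
    ix, iy, iz = int(x), int(y), int(z)
    wx = cubic_bspline_weights(x - ix)     # only 4 nonzero weights per axis
    wy = cubic_bspline_weights(y - iy)
    wz = cubic_bspline_weights(z - iz)
    out = 0.0
    for a in range(4):
        for b in range(4):
            for c in range(4):
                out += wx[a] * wy[b] * wz[c] * coeffs[ix + a, iy + b, iz + c]
    return out

coeffs = np.ones((16, 16, 16))                        # constant coefficients
print(eval_bspline_volume(coeffs, 5.3, 6.7, 7.1))     # ~1.0, since the basis sums to 1
```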

Symmetric Searchable Encryption with Efficient Conjunctive Keyword Search

  • Jho, Nam-Su;Hong, Dowon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.5 / pp.1328-1342 / 2013
  • Searchable encryption is a cryptographic protocol for searching documents in encrypted databases. A simple searchable encryption protocol that can use only one keyword at a time is very limited and cannot satisfy the demands of various applications. Thus, designing a searchable encryption scheme with useful additional functions, such as conjunctive keyword search, is one of the most important goals. There have been many attempts to construct searchable encryption with conjunctive keyword search. However, most of the previously proposed protocols are based on public-key cryptosystems, which incur a large computational cost. Moreover, their amount of computation in the search procedure depends on the number of documents stored in the database, so they are not suitable for extremely large data sets. In this paper, we propose a new searchable encryption protocol with conjunctive keyword search based on a linked tree structure instead of public-key techniques. The protocol requires a remarkably small computational cost, particularly when applied to extremely large databases: the amount of computation in the search procedure depends on the number of documents matched by the query rather than the size of the entire database.
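
The toy below is not the paper's linked-tree protocol; it is a heavily simplified keyword index meant only to illustrate the abstract's final point, that search cost can track the matching documents rather than the whole database. Keywords are mapped through an HMAC keyed by the client to opaque tags, the server stores tag-to-document postings, and a conjunctive query intersects the postings for the client-supplied trapdoors. A real SSE scheme also hides access and result patterns, which this sketch does not; `KEY` and all names are hypothetical.

```python
import hmac, hashlib

KEY = b"client-secret-key"          # hypothetical symmetric key held by the client

def trapdoor(keyword):
    """Client side: deterministic PRF tag for a keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(docs):
    """Client side: docs maps doc_id -> set of keywords; returns the server's index."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(w), set()).add(doc_id)
    return index

def conjunctive_search(index, trapdoors):
    """Server side: intersect the postings of the client-supplied trapdoors."""
    postings = [index.get(t, set()) for t in trapdoors]
    return set.intersection(*postings) if postings else set()

docs = {"d1": {"tax", "2013"}, "d2": {"tax", "2014"}, "d3": {"salary", "2013"}}
index = build_index(docs)
print(conjunctive_search(index, [trapdoor("tax"), trapdoor("2013")]))   # {'d1'}
```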

Secure Multiparty Computation of Principal Component Analysis (주성분 분석의 안전한 다자간 계산)

  • Kim, Sang-Pil;Lee, Sanghun;Gil, Myeong-Seon;Moon, Yang-Sae;Won, Hee-Sun
    • Journal of KIISE / v.42 no.7 / pp.919-928 / 2015
  • In recent years, many research efforts have been made on privacy-preserving data mining (PPDM) over large volumes of data. In this paper, we propose a PPDM solution based on principal component analysis (PCA), which is widely used for computing correlation among sensitive data sets. The usual way of computing PCA is to collect all the data spread across multiple nodes into a single node before starting the computation; however, this approach discloses the sensitive data of individual nodes, involves a large amount of computation, and incurs large communication overheads. To solve this problem, we present an efficient method that securely computes PCA without collecting all the data. The proposed method shares only limited information among individual nodes, yet obtains the same result as the original PCA. In addition, we present a dimensionality reduction technique for the proposed method and use it to improve the performance of secure similar-document detection. Finally, through various experiments, we show that the proposed method works effectively and efficiently on large amounts of multi-dimensional data.
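
As a simplified, non-secure illustration of why pooling raw records is unnecessary (the paper's actual protocol additionally protects what the nodes share): if each node contributes only its row count, feature sums, and scatter matrix, these aggregates determine the global covariance exactly, so its eigendecomposition reproduces the centralized PCA. All function names and the random data below are illustrative assumptions.

```python
import numpy as np

def local_summary(X):
    """What each node shares: (row count, feature sums, scatter matrix X^T X)."""
    return len(X), X.sum(axis=0), X.T @ X

def global_covariance(summaries):
    n = sum(s[0] for s in summaries)
    total = sum(s[1] for s in summaries)
    scatter = sum(s[2] for s in summaries)
    mean = total / n
    return (scatter - n * np.outer(mean, mean)) / (n - 1)   # sample covariance

def pca_from_summaries(summaries, k=2):
    eigvals, eigvecs = np.linalg.eigh(global_covariance(summaries))
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]        # top-k principal axes

rng = np.random.default_rng(2)
nodes = [rng.normal(size=(100, 5)) for _ in range(3)]       # 3 nodes, 5 features each
summaries = [local_summary(X) for X in nodes]
# Aggregated covariance matches the one computed from the pooled data.
print(np.allclose(global_covariance(summaries), np.cov(np.vstack(nodes).T)))  # True
print(pca_from_summaries(summaries).shape)                  # (5, 2)
```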

Adaptive Matching Scan Algorithm Based on Gradient Magnitude and Sub-blocks in Fast Motion Estimation of Full Search (전영역 탐색의 고속 움직임 예측에서 기울기 크기와 부 블록을 이용한 적응 매칭 스캔 알고리즘)

  • 김종남;최태선
    • Proceedings of the IEEK Conference / 1999.11a / pp.1097-1100 / 1999
  • Due to the significant computation required by full search in motion estimation, extensive research on fast motion estimation algorithms has been carried out. However, most of these algorithms degrade the predicted images compared with the full search algorithm. To reduce this large amount of computation while keeping exactly the prediction quality of full search, we propose a fast block-matching algorithm based on the gradient magnitude of the reference block, without any degradation of the predicted image. Using a Taylor series expansion, we show that the block matching error between the reference block and a candidate block is proportional to the gradient magnitude of the matching block. Based on this result, we propose a fast full search algorithm in which the scan direction of the block matching is determined adaptively. Experimentally, the proposed algorithm is very efficient in terms of computational speedup and requires the least computation among the conventional full search algorithms, making it useful for VLSI implementations of video encoders with real-time requirements. (A sketch of gradient-ordered matching with early termination follows this entry.)

  • PDF
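
A minimal sketch of the general idea, not the paper's exact derivation: the SAD against each candidate is accumulated row by row in order of decreasing gradient magnitude of the reference block, and a candidate is rejected as soon as its partial SAD exceeds the current best. Every candidate is either fully evaluated or provably worse, so the selected motion vector is identical to plain full search. Block size, search radius, and the synthetic frame are assumptions for the example.

```python
import numpy as np

def row_order_by_gradient(block):
    """Rows of the reference block sorted by decreasing horizontal-gradient energy."""
    grad = np.abs(np.diff(block.astype(np.int32), axis=1)).sum(axis=1)
    return np.argsort(grad)[::-1]

def full_search_adaptive(ref, frame, top, left, radius=7):
    order = row_order_by_gradient(ref)
    best, best_mv = np.inf, (0, 0)
    n = ref.shape[0]
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + n > frame.shape[0] or c + n > frame.shape[1]:
                continue
            cand = frame[r:r + n, c:c + n].astype(np.int32)
            partial = 0
            for row in order:                       # adaptive scan order
                partial += np.abs(ref[row].astype(np.int32) - cand[row]).sum()
                if partial >= best:                 # early termination
                    break
            if partial < best:
                best, best_mv = partial, (dy, dx)
    return best_mv

rng = np.random.default_rng(3)
frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
ref = frame[24:40, 24:40].copy()                    # 16x16 block taken at (24, 24)
print(full_search_adaptive(ref, frame, top=20, left=20))   # expected: (4, 4)
```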

Imprecise Computation based Scheduling for QoS Support in Multimedia Systems (멀티미디어 시스템에서 QoS 지원을 위한 불확정계산 기반의 스케줄링)

  • Kim, Tae-Su;Kim, Yong-Seok
    • Proceedings of the IEEK Conference / 2005.11a / pp.995-998 / 2005
  • A task in the imprecise computation model consists of a mandatory part and an optional part. The optional part can be executed only partially, and the quality of service is measured by how much of it is executed. Many papers have shown that multimedia systems are good applications of imprecise computation, and guaranteeing QoS is critical in such systems. Previous works did not consider QoS, and processor slack was assigned to tasks randomly. This paper presents a systematic slack assignment method based on the QoS levels of tasks. Simulation results show that our method is a good choice for multimedia systems with QoS requirements. (A toy slack-assignment sketch follows this entry.)

  • PDF
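
A toy illustration of the scheduling idea, not the paper's algorithm: each task has a mandatory part that must always run and an optional part that may be truncated, and the leftover processor slack is distributed in proportion to QoS weights rather than randomly. The task set, field names, and capacity below are hypothetical.

```python
def assign_slack(tasks, capacity):
    """tasks: list of dicts with 'mandatory', 'optional', 'qos_weight' (all > 0)."""
    slack = capacity - sum(t["mandatory"] for t in tasks)
    if slack < 0:
        raise ValueError("mandatory parts alone exceed the processor capacity")
    total_weight = sum(t["qos_weight"] for t in tasks)
    schedule = []
    for t in tasks:
        share = slack * t["qos_weight"] / total_weight     # QoS-weighted slack share
        optional_run = min(t["optional"], share)           # cannot run more than exists
        schedule.append({
            "mandatory": t["mandatory"],
            "optional_run": optional_run,
            "quality": optional_run / t["optional"],       # delivered QoS level
        })
    return schedule

tasks = [
    {"mandatory": 2.0, "optional": 4.0, "qos_weight": 3},  # high-priority stream
    {"mandatory": 1.0, "optional": 4.0, "qos_weight": 1},  # background decode
]
for entry in assign_slack(tasks, capacity=7.0):
    print(entry)
```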

An Application-Level Fault Tolerant System For Synchronous Parallel Computation (동기 병렬연산을 위한 응용수준의 결함 내성 연산시스템)

  • Park, Pil-Seong
    • Journal of Internet Computing and Services / v.9 no.5 / pp.185-193 / 2008
  • The MTBF (mean time between failures) of large-scale parallel systems is known to be on the order of only a few hours, so large computations sometimes waste a huge amount of CPU time. However, the MPI (Message Passing Interface), the de facto standard for message-passing parallel programming, provides no mechanism for handling such failures. In this paper, we propose an application-level fault-tolerant computation system for general scientific synchronous parallel computation, built purely on the current MPI standard without using any non-standard fault-tolerant MPI library. (A minimal checkpoint/restart sketch follows this entry.)

  • PDF
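
A minimal single-process sketch of the application-level checkpoint/restart idea the paper builds on top of standard MPI (the MPI communication itself is omitted, and the file name, iteration counts, and update step are assumptions): loop state is written to disk every few iterations with a write-then-rename, and a restarted run resumes from the newest checkpoint, so at most one checkpoint interval of work is lost.

```python
import os, pickle

CHECKPOINT = "state.ckpt"          # hypothetical checkpoint file name
CHECKPOINT_EVERY = 100

def load_checkpoint():
    """Resume from the newest checkpoint if one exists, otherwise start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"iteration": 0, "x": 0.0}

def save_checkpoint(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:          # write-then-rename keeps the previous
        pickle.dump(state, f)           # checkpoint valid if we crash mid-write
    os.replace(tmp, CHECKPOINT)

state = load_checkpoint()
for i in range(state["iteration"], 1000):
    state["x"] = 0.5 * (state["x"] + 2.0)   # stand-in for one synchronous step
    state["iteration"] = i + 1
    if state["iteration"] % CHECKPOINT_EVERY == 0:
        save_checkpoint(state)

print(state["iteration"], state["x"])
```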