• Title/Summary/Keyword: computation time reduction


Reduction of Computing Time through FDM using Implicit Method and Latent Heat Treatment in Solidification Analysis (FDM에 의한 응고해석시 계산시간 단축을 위한 음적해법의 적용과 잠열처리방법)

  • Kim, Tae-Gyu;Choi, Jung-Kil;Hong, Jun-Pyo;Lee, Zin-Hyoung
    • Journal of Korea Foundry Society / v.13 no.4 / pp.323-332 / 1993
  • An implicit finite difference formulation with three methods of latent heat treatment, namely the equivalent specific heat method, the temperature recovery method, and the enthalpy method, was applied to solidification analysis. The Neumann problem was solved to compare the numerical results with the exact solution. The implicit solutions with the equivalent specific heat method and the temperature recovery method were reasonably consistent with the Neumann exact solution for smaller time steps, but their errors increased with increasing time step, especially in predicting the solidification beginning time. Although solving the energy equation with the temperature recovery method took less computing time than with the enthalpy method, its way of releasing latent heat is not realistic and causes error. The implicit formulation of the phase change problem requires the enthalpy method to treat the release of latent heat reasonably. We have modified the enthalpy formulation so that the enthalpy gradient term is not needed; as a result, both the computational stability and the computing time were improved.

  • PDF
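The latent-heat treatment the abstract compares can be illustrated with the enthalpy method. The sketch below uses an explicit 1D update for brevity (the paper's scheme is implicit and solves a linear system per step), and all material values are invented for illustration; `temp_from_enthalpy` inverts the enthalpy-temperature relation so latent heat is absorbed or released at the melting temperature.

```python
L = 100.0      # latent heat (illustrative)
c = 1.0        # specific heat
Tm = 0.0       # melting temperature
k = 1.0        # conductivity
rho = 1.0
dx, dt = 0.1, 0.001

def temp_from_enthalpy(H):
    """Invert H(T): solid below Tm, mushy plateau at Tm, then liquid."""
    if H < c * Tm:
        return H / c
    if H <= c * Tm + L:
        return Tm          # latent heat absorbed/released at constant T
    return (H - L) / c

def step(H):
    """One explicit FDM step on the enthalpy field (boundaries fixed)."""
    T = [temp_from_enthalpy(h) for h in H]
    Hn = H[:]
    for i in range(1, len(H) - 1):
        lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
        Hn[i] = H[i] + dt * k / rho * lap
    return Hn
```

Because temperature is recovered from enthalpy, no enthalpy gradient term appears in the update, which is the spirit of the modification the abstract describes.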

Reduced Computation Using the DFT Property in the Phase Weighting Method (위상 조절 방법에서 DFT 특성을 이용한 계산량 저감)

  • Ryu Heung-Gyoon;Hieu Nguyen Thanh
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.16 no.10 s.101 / pp.1028-1035 / 2005
  • OFDM systems suffer from a high PAPR (Peak-to-Average Power Ratio). In this paper, we present a low-complexity phase weighting method that reduces the computational load, and thus the processing time, of the SPW method. The proposed method is derived from the DFT property of periodic sequences, by which the PAPR can be reduced efficiently. Simulation results show that the proposed method achieves the same PAPR reduction efficiency as conventional methods: it reduces the PAPR by 2.15 dB with two phase factors and by 3.95 dB with four phase factors. The computation analysis shows a significant improvement for the low-complexity phase weighting method.
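The phase weighting idea can be sketched as a small PTS-style search: apply candidate phase factors to a sub-block of the frequency-domain symbol and keep the factor that minimizes the PAPR of the time-domain signal. The symbol, the two-way sub-block split, and the phase set {1, -1} below are assumptions made for the sketch; the paper's DFT-periodicity shortcut for reducing this search's cost is not reproduced.

```python
import cmath

def idft(X):
    """Direct inverse DFT (fine for small toy symbols)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr(x):
    """Peak-to-average power ratio of a complex time-domain signal."""
    powers = [abs(v) ** 2 for v in x]
    return max(powers) / (sum(powers) / len(powers))

def best_phase(X, phases=(1, -1)):
    """Try each phase factor on the second sub-block; keep the lowest PAPR."""
    half = len(X) // 2
    best = None
    for b in phases:
        Xw = X[:half] + [b * v for v in X[half:]]
        p = papr(idft(Xw))
        if best is None or p < best[0]:
            best = (p, b)
    return best
```

For an all-ones 8-carrier symbol the unweighted PAPR is 8 (a pure impulse), and flipping the second sub-block's sign lowers it, which is the effect the search exploits.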

The Study of Comparison of DCT-based H.263 Quantizer for Computative Quantity Reduction (계산량 감축을 위한 DCT-Based H.263 양자화기의 비교 연구)

  • Shin, Kyung-Cheol
    • Journal of the Institute of Convergence Signal Processing / v.9 no.3 / pp.195-200 / 2008
  • To compress moving-picture data effectively, the spatial and temporal redundancy of the input image data must be reduced. Motion estimation/compensation effectively reduces temporal redundancy, but it increases computational complexity because of the inter-frame prediction, so algorithms for computation reduction and real-time processing are needed. This paper presents a quantizer that effectively quantizes DCT coefficients by considering human visual sensitivity. The proposed DCT-based H.263 quantizer can transmit more frames than TMN5 at the same transfer speed and decreases the frame-drop effect. In the objective image quality evaluation, the luminance signal showed a difference of -0.3 to +0.65 dB in average PSNR, and the chrominance signal showed an improvement of about 1.73 dB compared with TMN5. The proposed method reduces the computational load by 30-31% compared with NTSS and by 20-21% compared with 4SS.

  • PDF
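The perceptual idea in the abstract, coarser quantization where the eye is less sensitive, can be sketched with a frequency-dependent step size over an 8x8 DCT block. The linear weight formula below is a hypothetical stand-in, not the paper's quantizer.

```python
def quantize(block, q=8):
    """Quantize an 8x8 DCT block; step grows with spatial frequency (u+v)."""
    out = []
    for u in range(8):
        row = []
        for v in range(8):
            step = q * (1 + u + v)   # coarser step for higher frequencies
            row.append(round(block[u][v] / step))
        out.append(row)
    return out

def dequantize(qblock, q=8):
    """Invert the quantization grid (reconstruction, with rounding loss)."""
    return [[qblock[u][v] * q * (1 + u + v) for v in range(8)] for u in range(8)]
```

Low-frequency coefficients (small u+v) survive almost intact, while high-frequency ones are aggressively rounded, which is where most of the bit and computation savings come from.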

Improved Quality Keyframe Selection Method for HD Video

  • Yang, Hyeon Seok;Lee, Jong Min;Jeong, Woojin;Kim, Seung-Hee;Kim, Sun-Joong;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3074-3091 / 2019
  • With the widespread use of the Internet, services providing large-capacity multimedia data, such as video-on-demand (VOD) services and video uploading sites, have greatly increased. VOD service providers want to provide users with high-quality keyframes of high-quality videos within a few minutes after a broadcast ends. However, existing keyframe extraction tends to select keyframes whose quality as keyframes is insufficiently considered, and it takes a long computation time because it is not designed for HD-resolution video. In this paper, we propose a keyframe selection method that flexibly applies multiple keyframe quality metrics and improves the computation time. The main procedure is as follows. After shot boundary detection is performed, the first frame of each shot is extracted as an initial keyframe. The user sets evaluation metrics and priorities by considering the genre and attributes of the video. According to the evaluation metrics and priorities, low-quality keyframes are selected as replacement targets, and each target keyframe is replaced with a high-quality frame from its shot. The proposed method was evaluated subjectively with 23 votes: approximately 45% of the replaced keyframes were improved and about 18% were adversely affected. Summarizing a one-hour video took about 10 minutes, a reduction of more than 44.5% in execution time.
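The replacement step described above can be sketched as a scoring loop: each shot's initial keyframe is scored with weighted quality metrics, and a poorly scoring keyframe is swapped for the best-scoring frame in the same shot. The metric functions, weights, and threshold below are placeholders invented for the sketch.

```python
def score(frame, metrics, weights):
    """Weighted sum of quality metrics applied to one frame."""
    return sum(w * m(frame) for m, w in zip(metrics, weights))

def select_keyframes(shots, metrics, weights, threshold):
    """For each shot (a list of frames), keep the first frame as keyframe
    unless it scores below the threshold; then replace it with the shot's
    best-scoring frame."""
    keyframes = []
    for shot in shots:
        key = shot[0]                       # initial keyframe: first frame
        if score(key, metrics, weights) < threshold:
            key = max(shot, key=lambda f: score(f, metrics, weights))
        keyframes.append(key)
    return keyframes
```

The user-set priorities in the paper would correspond to choosing the `metrics`/`weights` lists per video genre.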

A Semi-supervised Dimension Reduction Method Using Ensemble Approach (앙상블 접근법을 이용한 반감독 차원 감소 방법)

  • Park, Cheong-Hee
    • The KIPS Transactions:PartD / v.19D no.2 / pp.147-150 / 2012
  • While LDA is a supervised dimension reduction method that finds projective directions maximizing the separability between classes, its performance is severely degraded when the number of labeled data is small. Recently, semi-supervised dimension reduction methods have been proposed that utilize abundant unlabeled data to overcome the shortage of labeled data. However, the matrix computation usually used in statistical dimension reduction methods becomes a hindrance to utilizing a large number of unlabeled data, and too much information from unlabeled data may not be helpful enough to justify the increase in processing time. To solve these problems, we propose an ensemble approach for semi-supervised dimension reduction. Extensive experimental results in text classification demonstrate the effectiveness of the proposed method.
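One way to picture the ensemble idea, without the paper's actual construction, is to let each ensemble member see only a small random chunk of the unlabeled data, compute a projection direction from it (here a naive class-mean-difference direction, a loose two-class LDA stand-in), and average the members' directions. Everything below, including the pseudo-labeling rule, is an assumption made for the sketch.

```python
import random

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def direction(pos, neg):
    """Naive LDA-like direction: difference of the two class means."""
    mp, mn = mean(pos), mean(neg)
    return [a - b for a, b in zip(mp, mn)]

def ensemble_direction(pos, neg, unlabeled, members=5, chunk=10, seed=0):
    """Average member directions, each computed from the labeled data plus a
    small pseudo-labeled chunk of the unlabeled data."""
    rng = random.Random(seed)
    dirs = []
    for _ in range(members):
        sample = rng.sample(unlabeled, min(chunk, len(unlabeled)))
        mp, mn = mean(pos), mean(neg)
        p2, n2 = list(pos), list(neg)
        for x in sample:  # pseudo-label by the nearer class mean
            dp = sum((a - b) ** 2 for a, b in zip(x, mp))
            dn = sum((a - b) ** 2 for a, b in zip(x, mn))
            (p2 if dp < dn else n2).append(x)
        dirs.append(direction(p2, n2))
    return mean(dirs)
```

Keeping each member's chunk small avoids the large matrix computation the abstract identifies as the bottleneck.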

Parameter Analysis for Time Reduction in Extracting SIFT Keypoints in the Aspect of Image Stitching (영상 스티칭 관점에서 SIFT 특징점 추출시간 감소를 위한 파라미터 분석)

  • Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.23 no.4 / pp.559-573 / 2018
  • Recently, omni-directional (panorama) images have become one of the most actively applied image media in fields such as virtual reality (VR). Such an image is generated by stitching images obtained by various methods, and extracting the keypoints necessary for stitching takes most of the time in this process. In this paper, we analyze the parameters involved in extracting SIFT keypoints, with the aim of reducing the computation time of this most widely used keypoint extractor. The parameters considered are the initial standard deviation of the Gaussian kernel used for Gaussian filtering, the number of Gaussian difference-image sets used for extracting local extrema, and the number of octaves. As SIFT implementations, the originally proposed Lowe scheme and the Hess scheme, a convolution cascade scheme, are considered. First, the effect of each parameter value on the computation time is analyzed; then the effect of each parameter on stitching performance is analyzed through actual stitching experiments. Finally, based on the two analyses, we extract the parameter value set that minimizes computation time without degrading stitching performance.
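A back-of-the-envelope cost model shows why these parameters dominate the computation time: the number of Gaussian-blurred pixels grows with the octave count and the number of intervals per octave (Lowe's pyramid uses s+3 Gaussian images per octave), while each octave halves the resolution. The pixel count below is an illustration of this scaling, not the paper's measurement.

```python
def pyramid_cost(width, height, octaves, intervals):
    """Count pixels processed when building a SIFT Gaussian pyramid."""
    blurs_per_octave = intervals + 3        # Lowe: s+3 Gaussian images
    pixels = 0
    for o in range(octaves):
        w, h = width >> o, height >> o      # each octave halves resolution
        pixels += blurs_per_octave * w * h
    return pixels
```

Because each octave contributes a quarter of the previous one's pixels, cutting the interval count saves proportionally everywhere, while dropping the last octaves saves comparatively little, which matches the intuition that the first octave dominates.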

Computation cost reduction method of EBCOT using upper subband search information in the wavelet domain (웨이블릿 영역에서의 상위 부대역 탐색정보를 이용한 EBCOT의 연산량 감소 방법)

  • Choi, Hyun-Jun;Paik, Yaeung-Min;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.8 / pp.1497-1504 / 2009
  • This paper proposes a method to reduce the calculation time in JPEG2000: when a coefficient can be estimated from the upper-level subband, the scan process for the coefficient and its descendants is skipped. There is a trade-off between the calculation time and the image quality or the amount of output data: as the calculation time and the amount of output data decrease, the image degradation increases. The experimental results showed that the reduction in calculation time was 35% on average; the gain in calculation time and output data is obtained at the cost of an acceptable image quality degradation.
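The skipping idea can be sketched on a wavelet parent-child layout: a coefficient whose co-located parent in the upper (coarser) subband is insignificant is skipped during the EBCOT scan. The significance threshold and the 2:1 index mapping below are illustrative assumptions, not the paper's exact criterion.

```python
def scan_with_skip(parent, child, threshold):
    """Scan a child subband, skipping coefficients whose co-located parent
    (half the size in each dimension) is below the significance threshold."""
    scanned = []
    for i, row in enumerate(child):
        for j, coef in enumerate(row):
            if abs(parent[i // 2][j // 2]) < threshold:
                continue          # parent insignificant: skip this scan
            scanned.append((i, j, coef))
    return scanned
```

Skipped coefficients never enter the coding passes, which is where both the time saving and the quality loss in the trade-off come from.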

A Scheme of Computational Time Reduction on Back-End Server Using Computational Grid (계산 그리드를 이용한 백엔드 서버의 계산시간 단축 방안)

  • Hong, Seong-Pyo;Han, Seung-Jo
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.12 / pp.2695-2701 / 2012
  • To protect users' privacy in RFID systems, we need privacy protection protocols that satisfy three essential security requirements: confidentiality, indistinguishability, and forward security. The hash-chain based protocol proposed by Ohkubo et al. is the most secure among existing protocols, satisfying all of these requirements, but it has the disadvantage that identifying a tag on the back-end server takes a very long time. In this paper, we propose a scheme that keeps the security intact while reducing the computation time for identifying a tag on the back-end server. The proposed scheme reduces the identification time considerably compared with the hash-chain based protocol.
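The back-end bottleneck being attacked here can be seen in a simplified Ohkubo-style hash chain: naively, the server must walk every tag's chain until an output matches (O(N·M) hashes per query). One standard time-memory trade, sketched below with invented chain details, precomputes all possible outputs once so each identification becomes a single table lookup; the paper's grid-based scheme is a different route to the same goal.

```python
import hashlib

def h(x, tag=b""):
    """Domain-separated SHA-256, standing in for the protocol's hash."""
    return hashlib.sha256(tag + x).digest()

def tag_output(secret, step):
    """What a tag emits at a given step: G advances the internal state,
    H derives the emitted value (Ohkubo's two-hash construction)."""
    s = secret
    for _ in range(step):
        s = h(s, b"G")
    return h(s, b"H")

def build_table(secrets, max_steps):
    """Precompute every possible tag output: O(N*M) hashes up front,
    then each identification is one dictionary lookup."""
    table = {}
    for tid, secret in enumerate(secrets):
        s = secret
        for step in range(max_steps):
            table[h(s, b"H")] = tid
            s = h(s, b"G")
    return table
```

Forward security is preserved because the server stores only derived values; compromising a tag's current state does not reveal past outputs.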

Fast Fuzzy Inference Algorithm for Fuzzy System constructed with Triangular Membership Functions (삼각형 소속함수로 구성된 퍼지시스템의 고속 퍼지추론 알고리즘)

  • Yoo, Byung-Kook
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.1 / pp.7-13 / 2002
  • Almost all applications of fuzzy theory are based on fuzzy inference. However, fuzzy inference requires much calculation time for a fuzzy system with many input variables or many fuzzy labels defined on each variable, and the inference time depends on the number of arithmetic products in the computation process. In particular, inference time is a primary constraint in fuzzy control applications using a microprocessor or PC-based controller. In this paper, a simple fast fuzzy inference algorithm (FFIA), without loss of information, is proposed to reduce the inference time for fuzzy systems with triangular membership functions in the antecedent part of the fuzzy rules. The proposed algorithm was derived by partitioning the input state space and using simple geometrical analysis. This scheme achieves the same effect as fuzzy rule reduction.
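The geometrical fact such algorithms exploit is that with evenly spaced triangular membership functions, a crisp input fires at most two adjacent labels, so only those rules need evaluating instead of the whole rule base. The label layout below (peaks at `centers`, each triangle spanning its neighbors) is an illustrative assumption.

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def active_labels(x, centers):
    """Return the (at most two) labels with nonzero membership for input x.
    centers: sorted peaks of evenly spaced triangular labels (>= 2)."""
    fired = []
    for i, b in enumerate(centers):
        a = centers[i - 1] if i > 0 else b - (centers[1] - centers[0])
        c = centers[i + 1] if i < len(centers) - 1 else b + (b - centers[i - 1])
        mu = tri(x, a, b, c)
        if mu > 0:
            fired.append((i, mu))
    return fired
```

Because adjacent memberships sum to one on such a partition, the inference can look up the active cell directly instead of evaluating every rule, which is the source of the speed-up.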

Comparison of Projection-Based Model Order Reduction for Frequency Responses (주파수응답에 대한 투영기반 모델차수축소법의 비교)

  • Won, Bo Reum;Han, Jeong Sam
    • Transactions of the Korean Society of Mechanical Engineers A / v.38 no.9 / pp.933-941 / 2014
  • This paper provides a comparison between the Krylov subspace method (KSM) and modal truncation method (MTM), which are typical projection-based model order reduction methods. The frequency responses are compared to determine the numerical accuracies and efficiencies. In order to compare the numerical accuracies of the KSM and MTM, the frequency responses and relative errors according to the order of the reduced model and frequency of interest are studied. Subsequently, a numerical examination shows whether a reduced order can be determined automatically with the help of an error convergence indicator. As for the numerical efficiency, the computation time needed to generate the projection matrix and the solution time to perform a frequency response analysis are compared according to the reduced order. A finite element model for a car suspension is considered as an application example of the numerical comparison.
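For the modal truncation side of the comparison, the core computation can be shown on a system already in modal coordinates: the receptance is a sum of modal contributions, and the reduced model simply keeps the first few modes. The mode data below are illustrative numbers, not from the paper's suspension model.

```python
def frf(omega, modes):
    """Frequency response as a sum of modal terms r / (wn^2 - w^2).
    modes: list of (natural_frequency, residue) pairs."""
    return sum(r / (wn ** 2 - omega ** 2) for wn, r in modes)

# three modes; the third is far above the frequency of interest
modes = [(1.0, 1.0), (5.0, 1.0), (50.0, 1.0)]
full = frf(0.5, modes)
reduced = frf(0.5, modes[:2])   # modal truncation: keep only 2 modes
```

Far-away modes contribute little at low frequency, so the truncated response stays close to the full one there; the Krylov subspace method instead matches the response locally around an expansion frequency, which is why the paper compares their accuracy over the frequency range of interest.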