• Title/Summary/Keyword: Pre-computation


Pre-Computation Based Selective Probing (PCSP) Scheme for Distributed Quality of Service (QoS) Routing with Imprecise State Information

  • Lee Won-Ick;Lee Byeong-Gi
    • Journal of Communications and Networks
    • /
    • v.8 no.1
    • /
    • pp.70-84
    • /
    • 2006
  • We propose a new distributed QoS routing scheme called pre-computation based selective probing (PCSP). The PCSP scheme is designed to provide an exact solution to the constrained optimization problem with moderate overhead, considering the practical environment where the state information available for routing decisions is imprecise. Rather than limiting the number of probe messages, it employs a qualitative (or conditional) selective probing approach: it considers both the cost and QoS metrics of the least-cost and best-QoS paths to calculate the end-to-end cost of the feasible paths found and to identify QoS-satisfying least-cost paths. It defines a strict probing condition that excludes not only non-feasible paths but also non-optimal paths. In addition, it pre-computes the QoS variation arising from the impreciseness of the state information and applies two modified QoS-satisfying conditions to the selection rules. The strict probing condition and the carefully designed probing approach strictly limit the set of neighbor nodes involved in the probing process, reducing message overhead without sacrificing optimality. In the worst case, however, the PCSP scheme may still suffer from high message overhead because of its conservative search process. To bound this overhead, we extend the PCSP algorithm with additional quantitative heuristics. Computer simulations show that the PCSP scheme reduces message overhead and achieves an ideal success ratio with a guaranteed optimal search, and that the quantitative extensions bound the worst-case message overhead with only a slight performance degradation.
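
The probing decision described above can be sketched in code. The following is a minimal illustration with hypothetical names (a `state` table of pre-computed per-neighbor estimates, delay as the QoS metric, a `delay_margin` for imprecision); it is not the authors' actual algorithm, only the shape of a strict probing condition that discards provably non-feasible and provably non-optimal directions.

```python
# Hypothetical sketch of a PCSP-style selective-probing condition.
# All identifiers (state table layout, delay as the QoS metric, margins)
# are illustrative assumptions, not the paper's notation.

def should_probe(neighbor, dest, acc_cost, acc_delay,
                 delay_bound, best_cost_so_far, state):
    """Return True if a probe message should be forwarded to `neighbor`.

    `state[neighbor][dest]` is assumed to hold pre-computed estimates:
      'lc_cost'      cost of the least-cost path from neighbor to dest
      'bq_delay'     delay of the best-QoS (minimum-delay) path to dest
      'delay_margin' pre-computed variation bound reflecting the
                     impreciseness of the advertised state
    """
    est = state[neighbor][dest]

    # Feasibility: even the best-QoS continuation via this neighbor must be
    # able to meet the end-to-end delay bound (imprecision margin included).
    if acc_delay + est['bq_delay'] - est['delay_margin'] > delay_bound:
        return False   # provably non-feasible direction, do not probe

    # Optimality: the cheapest continuation via this neighbor must be able
    # to beat the best feasible path found so far.
    if acc_cost + est['lc_cost'] >= best_cost_so_far:
        return False   # provably non-optimal direction, do not probe

    return True
```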

Improving Scalability using Parallelism in RFID Privacy Protection (RFID 프라이버시 보호에서 병행성을 이용한 확장성 개선)

  • Shin Myeong-Sook;Lee Joon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.8
    • /
    • pp.1428-1434
    • /
    • 2006
  • In this paper, we propose a scheme that mitigates privacy infringement in RFID systems while improving the scalability of the back-end server. As RFID/USN becomes an important subject, many approaches have been proposed and applied; however, the limits of RFID tags, namely low computation power and small storage, make privacy protection difficult. The hash chain scheme is known to guarantee forward security, confidentiality, and indistinguishability, but it requires a large amount of computation at the back-end server to identify a tag. We therefore introduce an efficient key search method, the Hellman method, to reduce the computational complexity at the back-end server. The Hellman method proceeds in two phases, pre-computation and (re)search. After applying the Hellman method to the hash chain scheme, we analyze the storage and key search costs and parallelize the search while preserving the security requirements of the existing privacy-protection scheme. The resulting key search reduces the server's computation, lowering the complexity from O(m) to $O(\frac{m^{2/3}}{w})$ compared with the existing scheme.
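
As a rough illustration of the pre-computation/online-search trade-off borrowed from Hellman, the sketch below builds chains of an iterated one-way function offline and stores only their endpoints, then recovers a preimage online by walking forward until a stored endpoint is hit. It is a simplified single-table version with hypothetical names; the paper's parallelized, hash-chain-specific construction is more involved.

```python
import hashlib

def h(x: bytes) -> bytes:
    """Toy one-way function (truncated SHA-256) standing in for the tag hash."""
    return hashlib.sha256(x).digest()[:4]

def build_table(start_points, chain_len):
    """Pre-computation phase: iterate h chain_len times from each start point
    and store only (end point -> start point). This is the memory side of the
    time-memory trade-off."""
    table = {}
    for sp in start_points:
        x = sp
        for _ in range(chain_len):
            x = h(x)
        table[x] = sp
    return table

def invert(target, table, chain_len):
    """Online (re)search phase: walk forward from the target; on hitting a
    stored end point, replay that chain from its start point to recover a
    preimage of the target (false alarms simply continue the walk)."""
    y = target
    for _ in range(chain_len):
        if y in table:
            x = table[y]
            for _ in range(chain_len):
                if h(x) == target:
                    return x
                x = h(x)
        y = h(y)
    return None
```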

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal
    • /
    • v.31 no.4
    • /
    • pp.429-437
    • /
    • 2009
  • This paper proposes a novel technique for 3D scene relighting with interactive viewpoint changes. The proposed technique is based on a deep framebuffer framework for fast relighting computation and adopts image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep framebuffer pixel and then computes the illumination at each pixel from the cached values. All relighting computations except the deep framebuffer pre-computation are carried out at interactive rates on the GPU.
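
A CPU-side sketch of the per-pixel relighting step, assuming the cached deep framebuffer is available as NumPy arrays ('albedo', 'normal', 'kd', 'ks', 'shininess' are hypothetical keys) and using a simple Blinn-Phong model; the paper performs the equivalent computation per light on the GPU.

```python
import numpy as np

def relight(cache, light_dir, light_color, view_dir):
    """Re-evaluate shading per pixel from a pre-computed deep framebuffer.

    `cache` is assumed to hold HxWx3 arrays ('albedo', 'normal') and HxW
    arrays ('kd', 'ks', 'shininess') captured in the preprocessing pass.
    """
    n = cache['normal'] / np.linalg.norm(cache['normal'], axis=-1, keepdims=True)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    half = (l + v) / np.linalg.norm(l + v)

    ndotl = np.clip(np.sum(n * l, axis=-1, keepdims=True), 0.0, None)
    ndoth = np.clip(np.sum(n * half, axis=-1, keepdims=True), 0.0, None)

    diffuse  = cache['kd'][..., None] * cache['albedo'] * ndotl
    specular = cache['ks'][..., None] * ndoth ** cache['shininess'][..., None]
    return (diffuse + specular) * light_color
```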

The Motion Estimator Implementation with Efficient Structure for Full Search Algorithm of Variable Block Size (다양한 블록 크기의 전역 탐색 알고리즘을 위한 효율적인 구조를 갖는 움직임 추정기 설계)

  • Hwang, Jong-Hee;Choe, Yoon-Sik
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.11
    • /
    • pp.66-76
    • /
    • 2009
  • Motion estimation occupies the largest portion of the computation in a video encoding system, so a motion estimator with an efficient structure is required for real-time operation, and it is desirable to implement it as a dedicated hardware module that performs the encoding process at high speed. This paper proposes a motion estimation detection (MED) block, a block computing the 41 SADs (sums of absolute differences), and a minimum-SAD calculation and motion vector generation block, all based on parallel processing, which effectively reduces the amount of computation. The minimum-SAD calculation and MED blocks use the pre-computation technique to reduce the switching activity of the input signals, which results in high-speed operation. The MED and 41-SAD calculation blocks are composed of adder trees, which create the critical path, so the ripple carry adders (RCA) most commonly used in the adder trees are replaced with carry skip adders (CSA), enabling the adder trees to operate at high speed. In addition, since key variables such as the search-range control signal can easily be set from the outside, the efficiency of the hardware structure is increased. Simulation and FPGA verification results show that the delay of the MED block, which generates the critical path of the motion estimator, is reduced by about 19.89% compared with the conventional structure.
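
The reuse behind the 41-SAD block can be shown in software: compute the sixteen 4x4 SADs once and obtain every larger partition by adding previously computed results, so no pixel difference is evaluated twice. This is only an algorithmic sketch with an assumed output ordering, not the paper's hardware architecture.

```python
import numpy as np

def sad_41(cur_mb, ref_mb):
    """Compute the 41 SADs of H.264 variable block sizes (16x 4x4, 8x 8x4,
    8x 4x8, 4x 8x8, 2x 16x8, 2x 8x16, 1x 16x16) for one 16x16 macroblock,
    reusing the 4x4 results for every larger partition."""
    cur = cur_mb.astype(np.int64)
    ref = ref_mb.astype(np.int64)

    # Base level: SAD of every 4x4 sub-block, computed once.
    sad4 = np.zeros((4, 4), dtype=np.int64)
    for by in range(4):
        for bx in range(4):
            diff = cur[4*by:4*by+4, 4*bx:4*bx+4] - ref[4*by:4*by+4, 4*bx:4*bx+4]
            sad4[by, bx] = np.abs(diff).sum()

    # Every larger partition is a sum of already-computed 4x4 SADs.
    sad8x4   = sad4[:, 0::2] + sad4[:, 1::2]              # 8 blocks, 8 wide x 4 tall
    sad4x8   = sad4[0::2, :] + sad4[1::2, :]              # 8 blocks, 4 wide x 8 tall
    sad8x8   = sad4.reshape(2, 2, 2, 2).sum(axis=(1, 3))  # 4 blocks
    sad16x8  = sad8x8.sum(axis=1)                         # 2 blocks
    sad8x16  = sad8x8.sum(axis=0)                         # 2 blocks
    sad16x16 = sad8x8.sum()                               # 1 block

    return np.concatenate([sad4.ravel(), sad8x4.ravel(), sad4x8.ravel(),
                           sad8x8.ravel(), sad16x8, sad8x16, [sad16x16]])
```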

A Fast Normalized Cross-Correlation Computation for WSOLA-based Speech Time-Scale Modification (WSOLA 기반의 음성 시간축 변환을 위한 고속의 정규상호상관도 계산)

  • Lim, Sangjun;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.31 no.7
    • /
    • pp.427-434
    • /
    • 2012
  • The waveform-similarity-based overlap-add (WSOLA) method is known to be an efficient, high-quality algorithm for time-scaling of speech signals. The computational load of WSOLA is concentrated in the repeated normalized cross-correlation (NCC) calculations used to evaluate the similarity between two signal waveforms. To reduce this complexity, this paper proposes a fast NCC computation method in which the NCC is obtained from pre-calculated sum tables that eliminate the redundancy of repeated NCC calculations over adjacent regions. While the denominator of the NCC contains much redundancy irrespective of the time-scale factor, the numerator contains less redundancy, and the amount depends on both the time-scale factor and the optimal shift value, requiring a more sophisticated algorithm for fast computation. Simulation results show that the proposed method reduces the WSOLA execution time by about 40%, 47%, and 52% for time-scale compression and for 2x and 3x time-scale expansion, respectively, while maintaining exactly the same speech quality as the conventional WSOLA.
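
A minimal sketch of the denominator-side redundancy removal: with a prefix table of squared samples pre-computed once, the energy of any candidate window is two lookups and a subtraction, so only the numerator dot product remains per shift. Function and variable names are illustrative; the paper additionally exploits redundancy in the numerator.

```python
import numpy as np

def ncc_candidates(template, search, shifts):
    """Normalized cross-correlation of `template` against `search` at each
    candidate shift, with the denominator energies taken from a pre-computed
    cumulative-sum table."""
    t = template.astype(np.float64)
    s = search.astype(np.float64)
    n = len(t)

    # Pre-computation: prefix sums of squared samples of the search signal.
    csum_sq = np.concatenate(([0.0], np.cumsum(s * s)))
    t_energy = np.sqrt(np.sum(t * t))

    ncc = np.empty(len(shifts))
    for i, k in enumerate(shifts):
        win = s[k:k + n]
        win_energy = np.sqrt(csum_sq[k + n] - csum_sq[k])  # O(1) per window
        ncc[i] = np.dot(t, win) / (t_energy * win_energy + 1e-12)
    return ncc
```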

A study of vehicle structure analysis (자동차의 차체강도 해석)

  • 이종원;조영호;박관흠
    • Journal of the korean Society of Automotive Engineers
    • /
    • v.5 no.1
    • /
    • pp.54-62
    • /
    • 1983
  • This paper presents structural analyses performed on the body-in-white of a vehicle using the finite element method and attempts to derive design criteria for the body. Computation time is reduced by applying substructuring and restart techniques to the structural model. The overall process from modelling to graphic visualization is handled by several subprograms, namely various pre- and post-processors. Based on a model of a domestically produced vehicle, typical accident and service-load cases are analyzed and discussed. The results obtained will guide the designer toward an optimal structural design.


Robust Constrained Predictive Control without On-line Optimizations

  • Lee, Y. I.;B. Kouvaritakis
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.27.4-27
    • /
    • 2001
  • A stabilizing control method for linear systems with model uncertainties and hard input constraints is developed which does not require on-line optimization. This work is motivated by the constrained robust MPC (CRMPC) approach [3], which adopts the dual-mode prediction strategy (i.e., free control moves plus an invariant set) and minimizes a worst-case performance criterion. Based on the observation that a feasible control sequence for a particular state can be found as a linear combination of feasible sequences for other states, we suggest a stabilizing control algorithm that provides sub-optimal, feasible control sequences using pre-computed optimal sequences for some canonical states. The on-line computation of the proposed method reduces to simple matrix multiplication.
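
A minimal sketch of the online step as described: express the current state through pre-computed canonical states and apply the same combination to their stored optimal sequences, leaving only matrix algebra online. The names and the least-squares combination are assumptions for illustration; the actual scheme additionally enforces feasibility and invariance conditions.

```python
import numpy as np

def online_control(x, canonical_states, canonical_sequences):
    """Form a control sequence for state x as a linear combination of
    pre-computed optimal sequences for canonical states.

    canonical_states:    n x m matrix whose columns are the canonical states
    canonical_sequences: (N*nu) x m matrix of the stored optimal sequences
    """
    # Coefficients lam such that canonical_states @ lam approximates x.
    lam, *_ = np.linalg.lstsq(canonical_states, x, rcond=None)
    # The same combination applied to the stored sequences gives the moves.
    return canonical_sequences @ lam
```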


A Simple Tandem Method for Clustering of Multimodal Dataset

  • Cho C.;Lee J.W.;Lee J.W.
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2003.05a
    • /
    • pp.729-733
    • /
    • 2003
  • The presence of local features within clusters, incurred by the multi-modal nature of data, prevents many conventional clustering techniques from working properly. In particular, clustering datasets with non-Gaussian distributions within a cluster can be problematic when a technique with an implicit assumption of Gaussian distributions is used. The current study proposes a simple tandem clustering method composed of a k-means-type algorithm and a hierarchical method to solve such problems. The multi-modal dataset is first divided into many small pre-clusters by the k-means or fuzzy k-means algorithm. The pre-clusters found in the first step are then clustered again using an agglomerative hierarchical clustering method with the Kullback-Leibler divergence as the measure of dissimilarity. This method is not only effective at extracting multi-modal clusters but also fast and simple in terms of computational complexity, and it is relatively robust in the presence of outliers. The performance of the proposed method was evaluated on three generated datasets and six publicly known real-world datasets.
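
A compact sketch of the tandem idea with scikit-learn and SciPy: over-segment with k-means, fit a diagonal Gaussian to each pre-cluster, and merge the pre-clusters agglomeratively under a symmetric KL divergence. Parameter values and the diagonal-Gaussian simplification are assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

def tandem_cluster(X, n_pre=30, n_final=3):
    """Two-stage clustering: k-means pre-clusters, then agglomerative merging
    of the pre-clusters using a symmetric Gaussian KL divergence."""
    labels = KMeans(n_clusters=n_pre, n_init=10).fit_predict(X)

    # Fit a diagonal Gaussian to each pre-cluster.
    means, varis = [], []
    for c in range(n_pre):
        pts = X[labels == c]
        means.append(pts.mean(axis=0))
        varis.append(pts.var(axis=0) + 1e-6)
    means, varis = np.array(means), np.array(varis)

    # Symmetric KL divergence between diagonal Gaussians i and j.
    def sym_kl(i, j):
        m = means[i] - means[j]
        kl_ij = 0.5 * np.sum(np.log(varis[j] / varis[i]) +
                             (varis[i] + m ** 2) / varis[j] - 1)
        kl_ji = 0.5 * np.sum(np.log(varis[i] / varis[j]) +
                             (varis[j] + m ** 2) / varis[i] - 1)
        return kl_ij + kl_ji

    # Condensed distance matrix over pre-clusters, then hierarchical merging.
    d = np.array([sym_kl(i, j) for i in range(n_pre) for j in range(i + 1, n_pre)])
    merged = fcluster(linkage(d, method='average'), n_final, criterion='maxclust')
    return merged[labels]  # map every point to its merged cluster label
```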


Parametric Macro for Two-Dimensional Layout on the Auto-CAD System

  • Kim, Yunyong;Park, Jewoong
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference
    • /
    • 2000.10a
    • /
    • pp.253-260
    • /
    • 2000
  • In recent years, a number of successful nesting approaches have been developed using various heuristic algorithms, and owing to their application potential several commercial CAD/CAM packages include a nesting module for solving the layout problem. Since a large portion of the complexity of the part nesting problem results from the overlap computation, the geometric representation is one of the most important factors in reducing the complexity of the problem. The proposed part representation method can easily handle parts and raw materials with widely varying geometrical shapes by using the redesigning modules, which considerably reduces the amount of processed data and consequently the run time of the computer. The aim of this research is to develop a parametric macro for two-dimensional layout on the Auto-CAD system; therefore, this research can be called "pre-nesting".
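
Since most of the nesting cost lies in the overlap computation, a simplified geometric representation lets most part pairs be rejected cheaply before any exact test. The bounding-box pre-check below is a generic illustration of that idea, not the paper's AutoCAD macro.

```python
def bounding_box(points):
    """Axis-aligned bounding box of a part outline given as (x, y) tuples."""
    xs, ys = zip(*points)
    return min(xs), min(ys), max(xs), max(ys)

def may_overlap(box_a, box_b):
    """Cheap rejection test: if the pre-computed boxes do not intersect,
    the parts cannot overlap and the exact geometric test is skipped."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```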


A Study on the Multidisciplinary Design Optimization Using Collaborative Optimization Approach (협동 최적화 접근 방법에 의한 타분야 최적 설계에 관한 연구)

  • 노명일;이규열
    • Korean Journal of Computational Design and Engineering
    • /
    • v.5 no.3
    • /
    • pp.263-275
    • /
    • 2000
  • Multidisciplinary design optimization (MDO) can yield an optimal design that considers all disciplinary requirements concurrently. A method to implement the collaborative optimization (CO) approach, one of the MDO methodologies, is developed using a pre-compiler “EzpreCompiler”, a design optimization library “EzOptimizer”, and the common object request broker architecture (CORBA) in a distributed computing environment. The CO approach is applied to a mathematical example to show its applicability and its equivalence to the standard optimization (SO) formulation. In realistic engineering problems such as the optimal design of a two-member hub frame, the optimal design of a speed reducer, and the initial design of a bulk carrier, CO yields better results than SO. Furthermore, CO allows distributed processing using CORBA, which reduces the overall computation time.
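
A toy sketch of the collaborative optimization structure on a made-up two-discipline problem, using SciPy rather than the paper's EzOptimizer/CORBA setup: the system level chooses target variables, each discipline minimizes its discrepancy from the targets subject to its own constraints, and the system level drives those discrepancies toward zero. The constraints, tolerance, and starting point are arbitrary assumptions, and the nested optimization may need tuning to converge cleanly.

```python
import numpy as np
from scipy.optimize import minimize

def discipline(z, constraints):
    """Discipline-level subproblem: minimize ||x - z||^2 subject to the
    discipline's local constraints; returns the discrepancy J(z)."""
    res = minimize(lambda x: np.sum((x - z) ** 2), z, constraints=constraints)
    return res.fun

# Two hypothetical disciplines, each with one local inequality constraint.
d1 = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 2.0}]   # x1 + x2 >= 2
d2 = [{'type': 'ineq', 'fun': lambda x: x[0] - 0.5}]          # x1 >= 0.5

def system_objective(z):
    return z[0] ** 2 + z[1] ** 2   # shared design objective f(z)

eps = 1e-6
compat = [{'type': 'ineq', 'fun': lambda z, c=c: eps - discipline(z, c)}
          for c in (d1, d2)]       # compatibility: J_i(z) <= eps for each discipline

z_opt = minimize(system_objective, np.array([0.0, 0.0]),
                 method='SLSQP', constraints=compat)
print(z_opt.x)   # should approach roughly (1, 1) for this toy setup
```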
