• Title/Summary/Keyword: sequential update

Search Results: 49

A Finite Element Formulation for the Inverse Estimation of an Isothermal Boundary in Two-Dimensional Slab (상단 등온조건을 갖는 이차원 슬랩에서의 경계위치 역추정을 위한 유한요소 정식화)

  • Kim, Sun-Kyoung;Hurh, Hoon;Lee, Woo-Il
    • Transactions of the Korean Society of Mechanical Engineers B / v.25 no.6 / pp.829-836 / 2001
  • A dependable boundary reconstruction technique is proposed. The finite element method is used to analyze the direct heat conduction problem on a deformable grid system, and an appropriate strategy for grid updates is suggested. A complete sensitivity analysis is performed to obtain the derivatives required for restoring the optimal boundary. With the results of the sensitivity analysis, the unknown boundary is sought using sequential quadratic programming. The method is applied to the reconstruction of boundaries with sinusoidal, step, and cavity forms. The overall performance of the proposed method is examined by comparing the estimated and the exact boundaries.
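The sensitivity-driven inverse search described in this abstract can be illustrated with a toy one-parameter sketch. Everything here is an assumption standing in for the paper's machinery: the linear `forward` function replaces the FEM direct solver, and a plain gradient descent with finite-difference sensitivities replaces sequential quadratic programming.

```python
def forward(b, xs):
    # toy "direct problem": temperature at sensor position x for boundary parameter b
    return [b * (1.0 - x) for x in xs]

def estimate_boundary(measured, xs, b0=0.0, lr=0.5, iters=200, eps=1e-6):
    b = b0
    for _ in range(iters):
        pred = forward(b, xs)
        pred_eps = forward(b + eps, xs)
        # sensitivity dT/db by finite differences, then one gradient step on
        # the least-squares mismatch (a stand-in for the SQP search)
        grad = sum((p - m) * (pe - p) / eps
                   for p, m, pe in zip(pred, measured, pred_eps))
        b -= lr * grad
    return b
```

With noise-free synthetic "measurements" the sketch recovers the generating parameter exactly, which is the same consistency check the paper performs against exact boundaries.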

The Study for Construction of the Improved Optimization Algorithm by the Response Surface Method (반응표면법의 향상된 최적화 알고리즘 구성에 관한 연구)

  • Park, J.S.;Lee, D.J.;Im, J.B.
    • Journal of the Korean Society for Aviation and Aeronautics / v.13 no.3 / pp.22-33 / 2005
  • The Response Surface Method (RSM) constructs approximate response surfaces from sample data obtained by experiments or simulations and finds the optimum levels of the process variables within the fitted response surface of the interest region. Obtaining the most suitable response surface is necessary for the accuracy of the optimization, and RSM relies on planned experimental designs; here, RSM is used in a sequential optimization process. The first goal of this study is to improve the planning of central composite designs of experiments with various locations of the axial points. The second is to increase the optimization efficiency by applying a modified method to update the interest regions.
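The sequential RSM loop the abstract describes — sample the interest region, fit a local surface, move to its optimum, shrink the region — can be sketched in one dimension. This is a hypothetical simplification: the paper works with central composite designs in several variables, whereas here three points fit an exact quadratic.

```python
def fit_quadratic(pts):
    # exact quadratic y = a + b*x + c*x**2 through three design points
    (x1, y1), (x2, y2), (x3, y3) = pts
    c = (y1 / ((x1 - x2) * (x1 - x3))
         + y2 / ((x2 - x1) * (x2 - x3))
         + y3 / ((x3 - x1) * (x3 - x2)))
    b = -(y1 * (x2 + x3) / ((x1 - x2) * (x1 - x3))
          + y2 * (x1 + x3) / ((x2 - x1) * (x2 - x3))
          + y3 * (x1 + x2) / ((x3 - x1) * (x3 - x2)))
    return b, c

def rsm_step(f, center, h):
    # one sequential-RSM iteration: sample the interest region, fit the
    # surface, jump to its stationary point, then shrink the region
    pts = [(x, f(x)) for x in (center - h, center, center + h)]
    b, c = fit_quadratic(pts)
    new_center = -b / (2 * c) if c > 0 else center + h  # move toward the optimum
    return new_center, h / 2
```

For a quadratic objective one step already lands on the optimum; for general responses the shrinking interest region is what drives convergence.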

A Potts Automata algorithm for Edge detection (Potts Automata를 이용한 영상의 에지 추출)

  • Lee, Seok-Ki;Kim, Seok-Tae;Cho, Sung-Jin
    • Proceedings of the Korea Information Processing Society Conference / 2001.10a / pp.767-770 / 2001
  • Edge detection is an issue of essential importance in the area of image processing. An edge in an image is a boundary or contour at which a significant change in image intensity occurs. In this paper, we present edge detection algorithms based on Potts automata. The dynamical behavior of these automata is completely determined by Lyapunov operators for sequential and parallel update. If a Potts automaton converges to fixed points, it can be used for image processing. From the generalized Potts automata point of view, we propose a Potts automata technique for detecting edges. Based on the experimental results, we discuss its advantages and efficiency.
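As a rough illustration of the state-based idea only — not the Lyapunov-operator automaton of the paper — the following sketch quantizes intensities into a few Potts states and marks a pixel as an edge when its 4-neighborhood contains a different state. The quantization rule and the parameter `q` are assumptions.

```python
def potts_edges(img, q=4):
    # quantize intensities into q Potts states, then mark a cell as an edge
    # when any 4-neighbor carries a different state
    h, w = len(img), len(img[0])
    hi = max(max(row) for row in img) or 1
    states = [[min(q - 1, v * q // (hi + 1)) for v in row] for row in img]
    edges = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and states[ni][nj] != states[i][j]:
                    edges[i][j] = 1
    return edges
```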

Development of an Operation Software for the ASRI-FMS/CIM (ASRI-FMS/CIM 을 위한 운용 소프트웨어의 구축)

  • Park, Chan-Kwon;Park, Jin-Woo;Kang, Suk-Ho
    • IE interfaces / v.6 no.2 / pp.53-65 / 1993
  • This paper deals with the development of a software module for the production planning and scheduling activities of an existing Flexible Machining and Assembly System (FMAS). The Production Planning Module uses a hierarchical and sequential scheme based on the "divide and conquer" philosophy: routes are determined from the production order, orders are screened, tools are allocated, and order adjustments are executed according to the allocated tools. The Scheduling Module allocates the resources and determines the task priorities and the start and completion times of tasks. Re-scheduling can be done to handle unforeseen situations such as lumpy demands and machine breakdowns. Since all modules are integrated through a central database and interface with it independently, it is easy to append new modules or update existing ones. The result of this study is used to operate a real FMAS consisting of a machining cell with 2 domestic NC machines and a part-feeding robot, an assembly cell with a conveyor and 3 robots, an inspection cell, an AGV, an AS/RS, and a central control computer.
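The Scheduling Module's allocation of resources, priorities, and start/completion times can be sketched as a greedy list scheduler. This is a hypothetical simplification of the actual module: the task tuples and the highest-priority-first dispatch rule are assumptions.

```python
import heapq

def schedule(tasks, machines):
    # tasks: (name, duration, priority); assign each task, highest priority
    # first, to the machine that becomes free earliest
    free = [(0.0, m) for m in range(machines)]   # (next-free time, machine id)
    heapq.heapify(free)
    plan = {}
    for name, dur, prio in sorted(tasks, key=lambda t: -t[2]):
        t, m = heapq.heappop(free)
        plan[name] = (m, t, t + dur)             # machine, start, completion
        heapq.heappush(free, (t + dur, m))
    return plan
```

Re-scheduling after a breakdown would amount to re-running the same rule on the remaining tasks with updated machine-free times.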

A Study on Adjustment Optimization for Dynamic Balancing Test of Helicopter Main Rotor Blade (헬리콥터 주로터 블레이드 동적밸런싱 시험을 위한 조절변수 최적화 연구)

  • Song, KeunWoong;Choi, JongSoo
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.26 no.6_spc / pp.736-743 / 2016
  • This study describes optimization methods for the adjustment of helicopter main rotor tracking and balancing (RTB). RTB is an essential process for helicopter operation and maintenance. Linear and non-linear models were developed from past RTB test results to estimate the RTB adjustments, and global and sequential optimization methods were applied to each of the models. Using an individual optimization method with a single model can hardly fulfill the RTB requirements because of the different characteristics of each blade. Therefore, an ensemble model was used to integrate all estimated adjustment results, and an adaptive method was applied to the adjustment values of the linear model to update it for the next estimations. The goal of the developed RTB adjustment optimization program is to achieve the requirements within two runs. However, additional tests to compare the weight factors of the ensemble model are still necessary.
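The linear-model part of such an adjustment estimation can be sketched as solving a small sensitivity system: if vibration responds (approximately) linearly to the adjustments, pick the adjustment that drives the predicted vibration to zero. The 2x2 setting and its coefficients are assumptions for illustration, not the paper's identified model.

```python
def solve_adjustment(S, vib):
    # linear RTB model: vib_after = vib + S @ delta; solve S @ delta = -vib
    # for two vibration measures and two adjusters (2x2 Cramer's rule)
    (a, b), (c, d) = S
    det = a * d - b * c
    dx = (-vib[0] * d + vib[1] * b) / det
    dy = (-vib[1] * a + vib[0] * c) / det
    return dx, dy
```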

A Design of Pipelined-parallel CABAC Decoder Adaptive to HEVC Syntax Elements (HEVC 구문요소에 적응적인 파이프라인-병렬 CABAC 복호화기 설계)

  • Bae, Bong-Hee;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.5 / pp.155-164 / 2015
  • This paper describes the design and implementation of a CABAC decoder that handles HEVC syntax elements in an adaptively pipelined, parallel manner. Even though CABAC offers a high compression rate, its decoding performance is limited by context-based sequential computation, strong data dependency between context models, and bin-by-bin decoding. In order to enhance HEVC CABAC decoding, flag-type syntax elements are adaptively pipelined by precomputing consecutive flags, and multi-bin syntax elements are decoded by processing up to three bins in parallel. Further, to accelerate the Binary Arithmetic Decoder by reducing the critical-path delay, the update and renormalization of the context model are precomputed in parallel for both the LPS and MPS cases, and the context-model renewal is then selected by the preceding decoding result. Simulations show that the new HEVC CABAC architecture achieves a maximum performance of 1.01 bins/cycle, which is two times faster than the conventional approach. In an ASIC design with a 65nm library, the CABAC architecture handles 224 Mbins/sec, enough to decode QFHD HEVC video data in real time.
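The speculative context-update idea — compute both the MPS and LPS successor states ahead of time, then let the decoded bin merely select one — can be sketched as follows. The state-transition rules here are simplified stand-ins, not the real HEVC probability-state tables.

```python
def update_context(state, mps, bin_is_mps):
    # precompute both candidate updates, as parallel hardware would
    next_if_mps = (min(state + 1, 62), mps)        # probability state rises
    next_if_lps = (max(state - 2, 0),
                   mps if state > 0 else 1 - mps)  # MPS flips at state 0
    # the decoded bin only selects between the precomputed results,
    # keeping the update off the critical path
    return next_if_mps if bin_is_mps else next_if_lps
```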

Implementation of LDPC Decoder using High-speed Algorithms in Standard of Wireless LAN (무선 랜 규격에서의 고속 알고리즘을 이용한 LDPC 복호기 구현)

  • Kim, Chul-Seung;Kim, Min-Hyuk;Park, Tae-Doo;Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.12 / pp.2783-2790 / 2010
  • In this paper, we first review LDPC codes in general and a belief propagation algorithm that works in the logarithm domain. LDPC codes, which were chosen for the IEEE 802.11n wireless local area network (WLAN) standard, require a large amount of computation due to the large coded-block size and the number of iterations. Therefore, we present three low-complexity algorithms for LDPC decoding. First, sequential decoding with partial groups is proposed; it has the same hardware complexity, and fewer iterations are required for the same performance compared with the conventional decoding algorithm. Second, we apply an early-stop algorithm, which reduces the number of unnecessary iterations. Third, an early detection method for reducing the computational complexity is proposed: using a confidence criterion, some bit nodes and check-node edges are detected early during decoding. Simulations show that the number of iterations is halved by the subset algorithm, the early-stop algorithm saves more than one iteration, and the early detection method cuts about 30% to 94% of the check-node update computation compared with the conventional scheme. The LDPC decoder has been implemented in Xilinx System Generator and targeted to a Xilinx Virtex-5 xc5vlx155t FPGA. When the three algorithms are used, device utilization is about 45% lower and the decoding speed about two times faster than the conventional scheme.
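The early-stop idea — halt iterating as soon as the hard decision satisfies every parity check — can be sketched independently of the message-passing details. The `iterate` callback below stands in for one belief-propagation (or min-sum) update and is an assumption; only the syndrome test is shown concretely.

```python
def syndrome_ok(H, bits):
    # H * c^T == 0 (mod 2): the hard decision is a valid codeword
    return all(sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H)

def decode_with_early_stop(H, llr, iterate, max_iter=50):
    bits = [1 if l < 0 else 0 for l in llr]  # hard decision from LLRs
    for it in range(max_iter):
        if syndrome_ok(H, bits):
            return bits, it                  # early stop: skip remaining iterations
        llr = iterate(H, llr)                # one message-passing update
        bits = [1 if l < 0 else 0 for l in llr]
    return bits, max_iter
```

On a clean received word the loop exits at iteration 0, which is exactly the saving the abstract reports for high-SNR frames.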

Fast XML Encoding Scheme Using Reuse of Deleted Nodes (삭제된 노드의 재사용을 이용한 Fast XML 인코딩 기법)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.835-843 / 2023
  • Given the structure of XML data, path and tree pattern matching algorithms play an important role in XML query processing. To facilitate decisions about relationships between nodes, the nodes of an XML tree are typically labeled in a way that allows the ancestor-descendant relationship between two nodes to be established quickly. However, these techniques have the disadvantage of re-labeling existing nodes or recalculating certain values when insertions occur during sequential updates, so in current labeling techniques the cost of updating labels is very high. In this paper, we propose a new labeling technique called Fast XML encoding, which supports order-sensitive updates of XML documents without re-labeling or recalculation. It also controls the length of labels by reusing deleted labels at the same location in the XML tree. The proposed reuse algorithm can reduce label length when all deleted labels are re-inserted at the same location. The experimental results show that the proposed technique can efficiently handle order-sensitive queries and updates.
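The reuse idea can be sketched with an illustrative labeler (not the paper's exact encoding): fresh labels are midpoints between two neighbors, and a deleted label is parked in a free list keyed by its gap, so a later insert at the same location reuses it instead of generating a longer label.

```python
class GapLabeler:
    # illustrative sketch: midpoint labels between neighbors, with deleted
    # labels kept per gap so re-insertion at the same spot reuses them
    def __init__(self):
        self.free = {}

    def insert_between(self, left, right):
        freed = self.free.get((left, right))
        if freed:
            return freed.pop()        # reuse: label length does not grow
        return (left + right) / 2     # fresh midpoint label

    def delete(self, left, right, label):
        self.free.setdefault((left, right), []).append(label)
```

Repeated delete/insert cycles at one position keep handing back the same label, which is the bounded-length property the abstract claims.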

A Study on Interaction Modes among Populations in Cooperative Coevolutionary Algorithm for Supply Chain Network Design (공급사슬 네트워크 설계를 위한 협력적 공진화 알고리즘에서 집단들간 상호작용방식에 관한 연구)

  • Han, Yongho
    • Korean Management Science Review / v.31 no.3 / pp.113-130 / 2014
  • The cooperative coevolutionary algorithm (CCEA) has proven to be a very powerful means of solving optimization problems through problem decomposition. A CCEA uses several populations, each aiming to find a partial solution for one component of the considered problem. Populations evolve separately and interact only when individuals are evaluated: interactions build complete solutions by combining partial solutions, or collaborators, from each of the populations, and various interaction modes are conceivable. The goal of this research is to develop a CCEA for a supply chain network design (SCND) problem and to identify which interaction mode performs best for this problem. We present general design principles of a CCEA for the SCND problem, which requires several co-evolving populations. We classify these populations into two groups and the collaborator selection schemes into two types, random-based and best-fitness-based. By combining the two groups of populations with the two types of collaborator selection schemes, we consider four possible interaction modes. We also consider two modes of updating populations, the sequential mode and the parallel mode. Combining the interaction modes with the update modes, we investigate seven possible solution algorithms. Experiments with each of these algorithms are conducted on a few test problems. The results show that the best-fitness-based collaborator scheme applied to both groups of populations, combined with the sequential update mode, outperforms the other modes on all the test problems.
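The winning combination reported above — best-fitness-based collaborators plus sequential population updates — can be sketched on a toy two-variable problem. All parameters (population size, Gaussian mutation, the separable test function) are assumptions; the paper's SCND problem is far richer.

```python
import random

def ccea_minimize(fitness, bounds, pop_size=20, gens=60, seed=1):
    # two co-evolving populations, one per decision variable; each individual
    # is evaluated by combining it with the best collaborator from the other
    # population, and the populations are updated one after another
    rng = random.Random(seed)
    pops = [[rng.uniform(*bounds) for _ in range(pop_size)] for _ in (0, 1)]
    best = [pops[0][0], pops[1][0]]
    half = pop_size // 2
    for _ in range(gens):
        for i in (0, 1):                         # sequential update mode
            def score(x):
                trial = list(best)
                trial[i] = x                     # best-fitness-based collaborator
                return fitness(trial)
            pops[i].sort(key=score)
            best[i] = pops[i][0]
            # survivors plus mutated copies form the next population
            pops[i] = pops[i][:half] + [pops[i][k % half] + rng.gauss(0, 0.1)
                                        for k in range(half)]
    return best
```

Switching the inner `for i in (0, 1)` loop to compute both scores against the *old* `best` before updating either population would give the parallel update mode the paper compares against.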

Context-data Generation Model using Probability functions and Situation Propagation Network (확률 함수와 상황 전파 네트워크를 결합한 상황 데이터 생성 모델)

  • Cheon, Seong-Pyo;Kim, Sung-Shin
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.7 / pp.1444-1452 / 2009
  • A data generation method based on probability distribution functions is very effective. Such distribution functions are defined under the assumption that daily routine contexts mainly depend on a time-based schedule. However, daily-life contexts are frequently determined by previous contexts, because contexts have consistency and/or sequential flows. In order to reflect the effect of previous contexts, a situation propagation network is proposed in this paper. As the proposed situation propagation network updates the parameters of the related probability distribution functions, the generated contexts become more realistic and natural. In a simulation study, the proposed context-data generation model generated a typical outworker's data for 11 daily contexts at home. The generated data are evaluated with respect to the reduction of ambiguity and conflict using newly defined indexes of the ambiguity and conflict of sequential contexts. In conclusion, when the situation propagation network is combined with probability distribution functions, the ambiguity and conflict of the data can be reduced by 6.45% and 4.60%, respectively.
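The combination the abstract describes — base probabilities per context, with the previous context boosting the weight of its natural successors — can be sketched as follows. The context names, base probabilities, and boost values are assumptions for illustration, not the paper's 11-context model.

```python
import random

def generate_contexts(base_prob, propagation, steps, seed=7):
    # base_prob: time-independent weight of each context;
    # propagation: {prev: {ctx: boost}} raising the chance of contexts that
    # naturally follow the previous one (the situation-propagation idea)
    rng = random.Random(seed)
    prev, out = None, []
    for _ in range(steps):
        weights = {c: p + propagation.get(prev, {}).get(c, 0.0)
                   for c, p in base_prob.items()}
        total = sum(weights.values())
        r = rng.random() * total          # weighted draw over the contexts
        for c, w in weights.items():
            r -= w
            if r <= 0:
                break
        out.append(c)
        prev = c
    return out
```

Because the boost keeps related contexts together, the generated sequences show fewer abrupt switches than sampling from the base probabilities alone, which is what the ambiguity/conflict indexes measure.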