• Title/Summary/Keyword: approximate algorithm

Exploring the Issues and Improvements of the Quotient and the Remainder of the Decimal Division (소수 나눗셈의 몫과 나머지에 대한 논점과 개선 방안)

  • Lee, Hwayoung
    • Education of Primary School Mathematics
    • /
    • v.24 no.2
    • /
    • pp.103-114
    • /
    • 2021
  • In this study I identified problems with the use of the terms 'quotient' and 'remainder' in the division of decimals and explored ways to improve them. Prior studies and current textbooks were critically analyzed, since researchers hold different views on the use of the terms 'quotient' and 'remainder' even though they take the same view of the values in the division calculation. As a result of this study, I proposed viewing the results q and r of the division of decimals under the division algorithm b = a×q + r as the 'quotient' and 'remainder', the amount equal to or smaller than q that fits the problem context as the final 'result value', and the residual amount as the 'remained value'. It was also proposed that the approximate value obtained by rounding the quotient should not be referred to as the 'quotient'.
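
The distinction the abstract draws between the quotient-and-remainder pair of b = a×q + r and a rounded approximate value can be illustrated with a short sketch. This is an editorial example, not from the paper; the sample numbers (7.5 ÷ 2.4) and the helper name `decimal_division` are assumptions.

```python
from decimal import Decimal, ROUND_DOWN

def decimal_division(b, a, quotient_places=0):
    """Return (q, r) such that b = a * q + r, with q truncated to the
    requested number of decimal places (0 -> whole-number quotient)."""
    b, a = Decimal(b), Decimal(a)
    step = Decimal(1).scaleb(-quotient_places)       # 1, 0.1, 0.01, ...
    q = (b / a).quantize(step, rounding=ROUND_DOWN)
    r = b - a * q
    return q, r

# 7.5 divided by 2.4 with a whole-number quotient: q = 3 and r = 0.3,
# so 7.5 = 2.4 * 3 + 0.3 holds exactly.
print(decimal_division("7.5", "2.4"))       # (Decimal('3'), Decimal('0.3'))

# Rounding the exact ratio (7.5 / 2.4 = 3.125) to one decimal place gives the
# approximate value 3.1, which the paper argues should not be called a quotient.
print((Decimal("7.5") / Decimal("2.4")).quantize(Decimal("0.1")))   # 3.1
```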

Balancing assembly line in an electronics company

  • 박경철;강석훈;박성수;김완희
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1993.10a
    • /
    • pp.12-19
    • /
    • 1993
  • In general, the line balancing problem is defined as finding an assignment of the given jobs to the workstations under the precedence constraints given on the set of jobs. Usually, the objective is either minimizing the cycle time under a given number of workstations or minimizing the number of workstations under a given cycle time. In this paper, we present a new type of assembly line balancing problem which occurs in an electronics company manufacturing home appliances. The main difference of the problem compared to the general line balancing problem lies in the structure of the precedence given on the set of jobs. In the problem, the set of jobs is partitioned into two disjoint subsets. One is called the set of fixed jobs and the other, the set of floating jobs. The fixed jobs should be processed in a linear order, and some pairs of jobs should not be assigned to the same workstation. To each floating job, a set of ranges is given. A range is given in terms of two fixed jobs, and it means that the floating job can be processed after the first job is processed and before the second job is processed. There can be more than one range associated with a floating job. We present a procedure to find an approximate solution to the problem. The procedure consists of two major parts. One is to find the assignment of the floating jobs under a given (feasible) assignment of the fixed jobs. This problem can be viewed as a constrained bin packing problem. The other is to find the assignment of the whole set of jobs under a given linear precedence on the set of the floating jobs. The first problem is NP-hard, and we devise a heuristic procedure for it based on the transportation problem and the matching problem. The second problem can be solved in polynomial time by the shortest path method. The algorithm works in an iterative manner; one step is composed of two phases. In the first phase, we solve the constrained bin packing problem. In the second phase, the shortest path problem is solved using the phase 1 result. The result of phase 2 is used as an input to the phase 1 problem at the next step. We test the proposed algorithm on a set of real data from a washing machine assembly line.
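
As a rough, generic illustration of the shortest-path step mentioned above (assigning a linearly ordered job sequence to the fewest workstations under a given cycle time), the sketch below uses a simple dynamic-programming form of the shortest-path idea. The job times and cycle time are made-up values, and this is not the authors' implementation.

```python
def min_workstations(times, cycle_time):
    """Jobs 1..n must be processed in the given linear order.  Node i means
    'the first i jobs are already assigned'; an edge i -> j exists when jobs
    i+1..j fit into one workstation, so the minimum number of workstations is
    the length of the shortest path (fewest edges) from node 0 to node n."""
    n = len(times)
    INF = float("inf")
    dist = [0] + [INF] * n          # dist[i] = fewest stations for first i jobs
    prev = [None] * (n + 1)
    for i in range(n):
        load = 0.0
        for j in range(i + 1, n + 1):
            load += times[j - 1]
            if load > cycle_time:
                break
            if dist[i] + 1 < dist[j]:
                dist[j], prev[j] = dist[i] + 1, i
    stations, j = [], n             # recover the station boundaries
    while j > 0:
        stations.append(list(range(prev[j] + 1, j + 1)))
        j = prev[j]
    return dist[n], stations[::-1]

# 6 jobs with made-up processing times and a cycle time of 10
print(min_workstations([4, 3, 5, 2, 6, 1], 10))
# -> (3, [[1], [2, 3], [4, 5, 6]]): three workstations suffice
```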

Comparison of Artificial Neural Network and Empirical Models to Determine Daily Reference Evapotranspiration (기준 일증발산량 산정을 위한 인공신경망 모델과 경험모델의 적용 및 비교)

  • Choi, Yonghun;Kim, Minyoung;O'Shaughnessy, Susan;Jeon, Jonggil;Kim, Youngjin;Song, Weon Jung
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.60 no.6
    • /
    • pp.43-54
    • /
    • 2018
  • The accurate estimation of reference crop evapotranspiration ($ET_o$) is essential in irrigation water management to assess the time-dependent status of crop water use and irrigation scheduling. The importance of $ET_o$ has resulted in many direct and indirect methods to approximate its value, including pan evaporation, meteorological-based estimations, lysimetry, soil moisture depletion, and soil water balance equations. Artificial neural networks (ANNs) have been intensively implemented for process-based hydrologic modeling due to their superior performance in nonlinear modeling, pattern recognition, and classification. This study adapted two well-known ANN algorithms, the backpropagation neural network (BPNN) and the generalized regression neural network (GRNN), to evaluate their capability to accurately predict $ET_o$ using daily meteorological data. All data were obtained from two automated weather stations (Chupungryeong and Jangsu) located in Yeongdong-gun (2002-2017) and Jangsu-gun (1988-2017), respectively. Daily $ET_o$ was calculated using the Penman-Monteith equation as the benchmark method. These calculated values of $ET_o$ and the corresponding meteorological data were separated into training, validation and test datasets. The performance of each ANN algorithm was evaluated against $ET_o$ calculated from the benchmark method and a multiple linear regression (MLR) model. The overall results showed that the BPNN algorithm performed best, followed by the MLR and GRNN, in a statistical sense; this could provide valuable information to farmers, water managers and policy makers for effective agricultural water governance.
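
A minimal sketch of the kind of BPNN regression described above, using scikit-learn's MLPRegressor on synthetic daily weather data; the feature set, network size, and data are editorial assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic daily inputs: Tmax, Tmin, relative humidity, wind speed, solar radiation
X = rng.uniform([-5, -15, 20, 0, 0], [35, 25, 100, 10, 30], size=(1000, 5))
# synthetic "ET_o" target standing in for Penman-Monteith values
y = 0.05 * X[:, 0] - 0.02 * X[:, 2] + 0.3 * X[:, 3] + 0.1 * X[:, 4] + rng.normal(0, 0.2, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
bpnn = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0))
bpnn.fit(X_tr, y_tr)
print("test R^2:", round(bpnn.score(X_te, y_te), 3))
```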

Optimal Active-Control & Development of Optimization Algorithm for Reduction of Drag in Flow Problems (3) - Construction of the Formulation for True Newton Method and Application to Viscous Drag Reduction of Three-Dimensional Flow (드래그 감소를 위한 유체의 최적 엑티브 제어 및 최적화 알고리즘의 개발(3) - 트루 뉴턴법을 위한 정식화 개발 및 유체의 3차원 최적 엑티브 제어)

  • Bark, Jai-Hyeong
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.20 no.6
    • /
    • pp.751-759
    • /
    • 2007
  • We have developed several methods for optimization problems that involve large-scale and highly nonlinear systems. First, a step-by-step method in the optimization process was employed to improve convergence. In addition, we used techniques for furnishing good initial guesses for the analysis using sensitivity information acquired from optimization iterations, and for adjusting the analysis/optimization convergence criteria, motivated by the simultaneous technique. We applied these to flow control problems and verified their efficiency and robustness. However, they are based on a quasi-Newton method that approximates the Hessian matrix using exact first derivatives. Since solving the Navier-Stokes equations is very costly, we want to improve the efficiency of the optimization algorithm as much as possible. Thus we develop a true Newton method that uses the exact Hessian matrix, and we apply it to the three-dimensional problem of flow around a sphere. This problem is certainly intractable with existing methods for optimal flow control, but it can be attacked with the methods we developed previously together with the true Newton method.
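
To make the quasi-Newton versus true Newton distinction concrete, here is a generic Newton iteration that solves with the exact Hessian at every step; the toy objective is an editorial stand-in, not the flow-control formulation.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """True Newton iteration: each step solves H(x) dx = -g(x) with the
    exact Hessian, rather than a quasi-Newton approximation built from
    first derivatives."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)
    return x

# toy objective f(x, y) = (x - 1)^4 + (y + 2)^2 with analytic gradient/Hessian
grad = lambda v: np.array([4 * (v[0] - 1) ** 3, 2 * (v[1] + 2)])
hess = lambda v: np.array([[12 * (v[0] - 1) ** 2, 0.0], [0.0, 2.0]])
print(newton_minimize(grad, hess, [3.0, 3.0]))   # -> close to [1, -2]
```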

Automated Finite Element Analyses for Structural Integrated Systems (통합 구조 시스템의 유한요소해석 자동화)

  • Chongyul Yoon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.1
    • /
    • pp.49-56
    • /
    • 2024
  • An automated dynamic structural analysis module stands as a crucial element within a structural integrated mitigation system. This module must deliver prompt real-time responses to enable timely actions, such as evacuation or warnings, in response to the severity of the hazard posed to the structural system. The finite element method, a widely adopted approximate structural analysis approach globally, owes its popularity in part to its user-friendly nature. However, the computational efficiency and accuracy of results depend on the user-provided finite element mesh, with the number of elements and their quality playing pivotal roles. This paper introduces a computationally efficient adaptive mesh generation scheme that optimally combines the r-method of node movement and the h-method of element division for mesh refinement. Adaptive mesh generation schemes automatically create finite element meshes, and in this case, representative strain values for a given mesh are employed for error estimates. When applied to dynamic problems analyzed in the time domain, meshes need to be modified at each time step, which may number a few hundred or thousand steps. The algorithm's specifics are demonstrated through a standard cantilever beam example subjected to a concentrated load at the free end. Additionally, a portal frame example showcases the generation of various robust meshes. These examples illustrate the adaptive algorithm's capability to produce robust meshes, ensuring reasonable accuracy and efficient computing time. Moreover, the study highlights the potential for the scheme's effective application in complex structural dynamic problems, such as those subjected to seismic or erratic wind loads. It also emphasizes its suitability for general nonlinear analysis problems, establishing the versatility and reliability of the proposed adaptive mesh generation scheme.
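
The refinement loop can be sketched in one dimension: elements whose strain-based error indicator is too large are split, and the pass repeats until the indicator is acceptable everywhere. This is an editorial toy, not the paper's scheme; the strain field and tolerance are assumptions.

```python
def refine_1d(nodes, strain_of, tol=0.05, max_passes=10):
    """nodes: sorted node coordinates; strain_of(x0, x1): representative strain
    for element [x0, x1], assumed to come from the analysis step.  An element
    is split (h-refinement) when its length-weighted strain exceeds tol."""
    for _ in range(max_passes):
        new_nodes, refined = [nodes[0]], False
        for x0, x1 in zip(nodes, nodes[1:]):
            if abs(strain_of(x0, x1)) * (x1 - x0) > tol:   # error indicator
                new_nodes.append(0.5 * (x0 + x1))          # split the element
                refined = True
            new_nodes.append(x1)
        nodes = new_nodes
        if not refined:                                    # mesh accepted
            break
    return nodes

# toy strain field concentrated near x = 0 (e.g. the fixed end of a cantilever)
strain = lambda x0, x1: 1.0 / (1.0 + 10.0 * (0.5 * (x0 + x1)))
print(refine_1d([0.0, 0.25, 0.5, 0.75, 1.0], strain))
# -> nodes cluster near x = 0, where the strain (and thus the error) is largest
```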

Development of Optimum Traffic Safety Evaluation Model Using the Back-Propagation Algorithm (역전파 알고리즘을 이용한 최적의 교통안전 평가 모형개발)

  • Kim, Joong-Hyo;Kwon, Sung-Dae;Hong, Jeong-Pyo;Ha, Tae-Jun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.35 no.3
    • /
    • pp.679-690
    • /
    • 2015
  • In order to minimize damage from traffic accidents, the causes of accidents must be eliminated through engineering improvements of vehicles and road systems. Generally, traffic accidents are highly likely to occur on roads that lack safety measures, and such roads can only be improved with tremendous time and cost. In particular, traffic accidents at intersections are on the rise due to inappropriate environmental factors and are causing great losses for the nation as a whole. This study aims to present safety countermeasures against the causes of accidents by developing an intersection traffic safety evaluation model. It also diagnoses vulnerable traffic points using the back-propagation algorithm (BPA), one of the artificial neural network techniques recently investigated in the area of artificial intelligence. Furthermore, it aims to pursue more efficient traffic safety improvement projects in terms of operating signalized intersections and establishing traffic safety policies. As a result of this study, the approximate mean square error between the predicted values from the BPA and the actual measured values of traffic accidents is estimated to be 3.89. The BPA appeared to have excellent traffic safety evaluation ability compared to the multiple regression model. In other words, the BPA can be effectively utilized in diagnosing the safety of actual signalized intersections and in establishing practical transportation policies.
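
The model comparison described above (BPA versus multiple regression, scored by mean square error) can be sketched as follows on synthetic intersection data; the features, constants, and data are editorial assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# assumed intersection features: traffic volume, approach speed, lanes, signal cycle
X = rng.uniform([500, 30, 2, 60], [3000, 80, 6, 180], size=(400, 4))
accidents = 0.002 * X[:, 0] + 0.05 * X[:, 1] + np.sin(X[:, 3] / 30.0) + rng.normal(0, 0.5, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, accidents, test_size=0.25, random_state=1)
models = {
    "back-propagation (MLP)": make_pipeline(StandardScaler(),
                                            MLPRegressor((10,), max_iter=5000, random_state=1)),
    "multiple regression": LinearRegression(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MSE:", round(mean_squared_error(y_te, model.predict(X_te)), 3))
```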

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information from huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns from huge databases. Frequent pattern mining, one branch of pattern mining, extracts patterns having higher frequencies than a minimum support threshold from databases, and these patterns are called frequent patterns. Traditional frequent pattern mining is based on a single minimum support threshold for the whole database. This single support model implicitly supposes that all of the items in the database have the same nature. In real world applications, however, each item in a database can have its own characteristics, and thus an appropriate pattern mining technique which reflects these characteristics is required. In the framework of frequent pattern mining, where the natures of items are not considered, the single minimum support threshold must be set to a very low value in order to mine patterns containing rare items; this, however, leads to too many patterns including meaningless items. In contrast, no such pattern can be mined if too high a threshold is used. This dilemma is called the rare item problem. To solve this problem, initial research proposed approximate approaches which split the data into several groups according to item frequencies or group related rare items. However, these methods cannot find all of the frequent patterns, including rare frequent patterns, because they are based on approximate techniques. Hence, the pattern mining model with multiple minimum supports was proposed in order to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called the MIS (Minimum Item Support), which is calculated based on item frequencies in the database. The multiple minimum supports model finds all of the rare frequent patterns without generating meaningless patterns or losing significant patterns by applying the MIS. Meanwhile, candidate patterns are extracted during the process of mining frequent patterns, and in the single minimum support model only the single minimum support is compared with the frequencies of the candidate patterns. Therefore, the characteristics of the items that constitute the candidate patterns are not reflected, and the rare item problem again occurs. In order to address this issue in the multiple minimum supports model, the minimum MIS value among the items of a candidate pattern is used as the minimum support threshold of that candidate pattern, so that its characteristics are considered. To efficiently mine frequent patterns including rare frequent patterns under this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, where items are ordered in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one while demanding more memory for the MIS information. Moreover, both compared algorithms show good scalability.
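
The MIS rule described above (a candidate is tested against the smallest MIS among its items) is illustrated below on a tiny transaction set. The MIS recipe of a fraction of the item frequency with a floor is one common choice and, like the data, is an editorial assumption rather than the paper's exact tree-based algorithm.

```python
from itertools import combinations

transactions = [{"milk", "bread"}, {"milk", "caviar"}, {"bread", "eggs"},
                {"milk", "bread", "eggs"}, {"caviar", "eggs"}]

def mis(item, beta=0.5, floor=0.2):
    """MIS derived from item frequency (one common recipe: beta * frequency,
    clamped below by a floor); both constants are assumptions."""
    freq = sum(item in t for t in transactions) / len(transactions)
    return max(beta * freq, floor)

def support(pattern):
    return sum(pattern <= t for t in transactions) / len(transactions)

items = sorted({i for t in transactions for i in t})
for size in (1, 2):
    for pattern in map(frozenset, combinations(items, size)):
        threshold = min(mis(i) for i in pattern)   # MIS rule for the pattern
        if support(pattern) >= threshold:
            print(set(pattern), round(support(pattern), 2), ">=", round(threshold, 2))
# rare-item patterns such as {'caviar', 'eggs'} survive because the pattern
# threshold follows the lower MIS of 'caviar' instead of a single global support
```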

Development of Ideal Model Based Optimization Procedure with Heuristic Knowledge (정위적 방사선 수술에서의 이상표적모델과 경험적 지식을 활용한 수술계획 최적화 방법 개발)

  • 오승종;송주영;최경식;김문찬;이태규;서태석
    • Progress in Medical Physics
    • /
    • v.15 no.2
    • /
    • pp.84-93
    • /
    • 2004
  • Stereotactic radiosurgery (SRS) is a technique that delivers a high dose to a target lesion and a low dose to critical organs through only one or a few irradiations. For this purpose, many mathematical methods for optimization have been proposed. There are some limitations to using these methods: the long calculation time and the difficulty of finding a unique solution for differently shaped tumors. In this study, many clinical target shapes were examined to find typical patterns of tumor shapes, from which some ideal geometrical shapes, such as spheres, cylinders, cones or their combinations, were assumed to approximate real tumor shapes. Using arrangements of multiple isocenters, optimum variables, such as isocenter positions or collimator sizes, were determined, and a database was formed from these results. The optimization procedure consisted of the following steps: any tumor shape was first matched to an ideal model through a geometry comparison algorithm; the optimum variables for that ideal geometry were then chosen from the predetermined database; and the optimum parameters were finally adjusted using the real tumor shape. Although the result of applying the database to other patients was not superior to the result of optimizing each case individually, it is acceptable as a plan starting point.
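
As a rough illustration of the "match the target to an ideal shape, then look up pre-optimized parameters" idea, the sketch below classifies a voxelized target by comparing its volume with ideal sphere/cylinder/cone volumes fitted to its extents and then pulls starting parameters from a small table; the table entries and geometry heuristics are hypothetical, not the paper's database.

```python
import numpy as np

def classify_target(mask, voxel_mm=1.0):
    """mask: 3D boolean array of target voxels; pick the ideal shape whose
    volume (fitted to the target's bounding extents) best matches the target."""
    volume = mask.sum() * voxel_mm ** 3
    idx = np.argwhere(mask)
    dx, dy, dz = (idx.max(axis=0) - idx.min(axis=0)) * voxel_mm
    r = 0.25 * (dx + dy)                        # rough transverse radius
    candidates = {
        "sphere":   4.0 / 3.0 * np.pi * (max(dx, dy, dz) / 2.0) ** 3,
        "cylinder": np.pi * r ** 2 * dz,
        "cone":     np.pi * r ** 2 * dz / 3.0,
    }
    return min(candidates, key=lambda k: abs(candidates[k] - volume))

# hypothetical pre-optimized starting parameters per ideal model
plan_db = {"sphere":   {"isocenters": 1, "collimator_mm": 14},
           "cylinder": {"isocenters": 3, "collimator_mm": 8},
           "cone":     {"isocenters": 2, "collimator_mm": 10}}

# toy spherical target of radius 5 voxels
z, y, x = np.ogrid[-8:9, -8:9, -8:9]
mask = x ** 2 + y ** 2 + z ** 2 <= 5 ** 2
shape = classify_target(mask)
print(shape, plan_db[shape])        # -> sphere, one isocenter as a starting plan
```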

Time- and Frequency-Domain Block LMS Adaptive Digital Filters: Part Ⅱ - Performance Analysis (시간영역 및 주파수영역 블럭적응 여파기에 관한 연구 : 제 2 부- 성능분석)

  • Lee, Jae-Chon;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.7 no.4
    • /
    • pp.54-76
    • /
    • 1988
  • In Part Ⅰ of the paper, we developed various block least mean-square (BLMS) adaptive digital filters (ADFs) based on a unified matrix treatment. In Part Ⅱ we analyze the convergence behaviors of the self-orthogonalizing frequency-domain BLMS (FBLMS) ADF and the unconstrained FBLMS (UFBLMS) ADF, both for the overlap-save and overlap-add sectioning methods. We first show that, unlike the FBLMS ADF with a constant convergence factor, the convergence behavior of the self-orthogonalizing FBLMS ADF is governed by the same autocorrelation matrix as that of the UFBLMS ADF. We then show that the optimum solution of the UFBLMS ADF is the same as that of the constrained FBLMS ADF when the filter length is sufficiently long. The mean of the weight vector of the UFBLMS ADF is also shown to converge to the optimum Wiener weight vector under a proper condition. However, the steady-state mean-squared error (MSE) of the UFBLMS ADF turns out to be slightly worse than that of the constrained algorithm if the same convergence constant is used in both cases. On the other hand, when the filter length is not sufficiently long, the constrained FBLMS ADF yields poor performance, while the performance of the UFBLMS ADF can be improved to some extent by utilizing its extended filter-length capability. As for the self-orthogonalizing FBLMS ADF, we study how the autocorrelation matrix can be approximated by a diagonal matrix in the frequency domain. We also analyze the steady-state MSEs of the self-orthogonalizing FBLMS ADFs with and without the constraint. Finally, we present various simulation results to verify our analytical results.
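
A generic overlap-save frequency-domain block LMS loop (the textbook constrained form, not the paper's derivation) is sketched below for system identification on synthetic data; the step size, block length, and channel are assumptions.

```python
import numpy as np

def fblms(x, d, N, mu=0.01, constrained=True):
    """Overlap-save FBLMS with block length = filter length = N.  Setting
    constrained=False skips the gradient constraint, giving the unconstrained
    (UFBLMS-style) update."""
    W = np.zeros(2 * N, dtype=complex)            # frequency-domain weights
    y_out = np.zeros(len(d))
    for k in range(N, len(x) - N + 1, N):
        U = np.fft.fft(x[k - N:k + N])            # last 2N input samples
        y = np.real(np.fft.ifft(U * W))[N:]       # overlap-save: keep last N
        e = d[k:k + N] - y
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))
        g = np.real(np.fft.ifft(np.conj(U) * E))  # time-domain gradient estimate
        if constrained:
            g[N:] = 0.0                           # gradient constraint
        W = W + mu * np.fft.fft(g)
        y_out[k:k + N] = y
    return W, y_out

# identify an unknown 4-tap FIR channel from noisy observations (synthetic data)
rng = np.random.default_rng(0)
h = np.array([0.6, -0.3, 0.1, 0.05])
x = rng.standard_normal(4096)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
W, _ = fblms(x, d, N=8)
print(np.round(np.real(np.fft.ifft(W))[:4], 2))   # -> approximately h
```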

Performance Evaluation of Output Queueing ATM Switch with Finite Buffer Using Stochastic Activity Networks (SAN을 이용한 제한된 버퍼 크기를 갖는 출력큐잉 ATM 스위치 성능평가)

  • Jang, Kyung-Soo;Shin, Ho-Jin;Shin, Dong-Ryeol
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.8
    • /
    • pp.2484-2496
    • /
    • 2000
  • High speed switches have been developed to interconnect a large number of nodes, and it is important to analyze switch performance under various conditions to satisfy the requirements. Queueing analysis, in general, has the intrinsic problems of a large state space and complex computation. The Petri net is a graphical and mathematical model suitable for various applications, in particular manufacturing systems; it can deal with parallelism, concurrency, deadlock avoidance, and asynchronism, and it has recently been applied to the performance analysis of computer networks and protocol verification. This paper presents a framework for modeling and analyzing an ATM switch using stochastic activity networks (SANs). We provide an ATM switch model using SANs that is easy to extend, together with an approximate analysis method applicable to ATM switch models, which significantly reduces the complexity of the model solution. The cell arrival process in the output-buffered queueing ATM switch with finite buffer is modeled as a Markov Modulated Poisson Process (MMPP), which is able to accurately represent real traffic and capture the characteristics of bursty traffic. We analyze the performance of the switch in terms of cell-loss ratio (CLR), mean queue length and mean delay time. We show that the SAN model is very useful for ATM switch modeling in that the gates are capable of implementing scheduling algorithms.
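
A toy slot-by-slot simulation (not the SAN or analytical model) of a finite output buffer fed by a two-state MMPP source gives a feel for the CLR and mean queue length metrics mentioned above; the buffer size, rates, and switching probabilities are assumptions.

```python
import numpy as np

def simulate_mmpp_queue(slots=200_000, buffer_size=16,
                        arrival_rates=(0.3, 1.8), switch_prob=(0.02, 0.05), seed=0):
    """Two-state MMPP: in state s, cell arrivals per slot ~ Poisson(arrival_rates[s])
    and the state flips with probability switch_prob[s]; one cell is served per slot."""
    rng = np.random.default_rng(seed)
    state, queue, lost, arrived, qsum = 0, 0, 0, 0, 0
    for _ in range(slots):
        if rng.random() < switch_prob[state]:    # phase change of the bursty source
            state = 1 - state
        a = rng.poisson(arrival_rates[state])
        arrived += a
        lost += max(0, queue + a - buffer_size)  # overflow -> lost cells
        queue = min(buffer_size, queue + a)
        queue = max(0, queue - 1)                # serve one cell in this slot
        qsum += queue
    return lost / arrived, qsum / slots

clr, mean_q = simulate_mmpp_queue()
print(f"CLR ~ {clr:.4f}, mean queue length ~ {mean_q:.2f}")
```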
