In this study, I identified problems with the use of the terms 'quotient' and 'remainder' in the division of decimals and explored ways to improve them. Prior studies and current textbooks were critically analyzed, because researchers hold different views on the use of the terms 'quotient' and 'remainder' despite sharing the same view of the values produced in the division calculation. As a result of this study, I proposed viewing the results q and r of the division of decimals by the division algorithm b = a×q + r as the 'quotient' and 'remainder', the amount equal to or smaller than q that fits the problem context as the final 'result value', and the residual amount as the 'remained value'. It was also proposed that the approximate value obtained by rounding the quotient should not be referred to as the 'quotient'.
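As a hedged illustration (not from the paper), the proposed reading of q and r in b = a×q + r can be computed exactly with Python's standard decimal module; the example values 7.3 ÷ 2.4 are my own:

```python
from decimal import Decimal

def decimal_division(b, a):
    """Return whole-number quotient q and remainder r with b = a*q + r."""
    b, a = Decimal(b), Decimal(a)
    q = int(b // a)      # whole-number quotient
    r = b - a * q        # exact decimal remainder, 0 <= r < a
    return q, r

q, r = decimal_division("7.3", "2.4")
print(q, r)  # -> 3 0.1
```

Because Decimal arithmetic is exact here, the identity b = a×q + r holds without the binary floating-point drift that would make the 'remained value' ambiguous.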
Proceedings of the Korean Operations and Management Science Society Conference / 1993.10a / pp.12-19 / 1993
In general, the line balancing problem is defined as that of finding an assignment of given jobs to workstations under the precedence constraints imposed on the set of jobs. Usually, the objective is either to minimize the cycle time for a given number of workstations or to minimize the number of workstations for a given cycle time. In this paper, we present a new type of assembly line balancing problem that occurs in an electronics company manufacturing home appliances. The main difference from the general line balancing problem lies in the structure of the precedence imposed on the set of jobs. In the problem, the set of jobs is partitioned into two disjoint subsets. One is called the set of fixed jobs and the other the set of floating jobs. The fixed jobs must be processed in a linear order, and some pairs of these jobs may not be assigned to the same workstation. To each floating job, on the other hand, a set of ranges is given. A range is specified in terms of two fixed jobs and means that the floating job can be processed after the first job and before the second job; more than one range can be associated with a floating job. We present a procedure for finding an approximate solution to the problem. The procedure consists of two major parts. One is to find an assignment of the floating jobs under a given (feasible) assignment of the fixed jobs; this problem can be viewed as a constrained bin packing problem. The other is to find an assignment of all jobs under a given linear precedence on the set of floating jobs. The first problem is NP-hard, and we devise a heuristic procedure for it based on the transportation problem and the matching problem. The second problem can be solved in polynomial time by a shortest path method. The algorithm works in an iterative manner, each step consisting of two phases. In the first phase, we solve the constrained bin packing problem. In the second phase, the shortest path problem is solved using the result of phase 1. The result of phase 2 is then used as input to the phase 1 problem at the next step. We test the proposed algorithm on a set of real data from a washing machine assembly line.
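As a hedged sketch of the shortest-path idea behind the second subproblem — partitioning linearly ordered jobs into the fewest workstations under a cycle-time limit — the toy code below treats job boundaries as nodes; the job times and cycle time are illustrative, and the paper's additional pairing constraints are omitted:

```python
def min_workstations(times, cycle):
    """dist[i] = fewest stations needed for the first i jobs; an implicit
    edge i -> j (cost 1) opens one station for jobs i+1..j if they fit."""
    n = len(times)
    INF = float("inf")
    dist = [0] + [INF] * n
    for i in range(n):
        load = 0
        for j in range(i + 1, n + 1):
            load += times[j - 1]
            if load > cycle:          # station capacity exceeded
                break
            dist[j] = min(dist[j], dist[i] + 1)
    return dist[n]

print(min_workstations([4, 3, 5, 2, 6], cycle=8))  # -> 3
```

Because every edge costs one station, the shortest path from boundary 0 to boundary n directly yields the minimum number of workstations, in polynomial time as the abstract states.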
Journal of The Korean Society of Agricultural Engineers / v.60 no.6 / pp.43-54 / 2018
The accurate estimation of reference crop evapotranspiration ($ET_o$) is essential in irrigation water management to assess the time-dependent status of crop water use and irrigation scheduling. The importance of $ET_o$ has led to many direct and indirect methods for approximating its value, including pan evaporation, meteorologically based estimation, lysimetry, soil moisture depletion, and soil water balance equations. Artificial neural networks (ANNs) have been widely applied to process-based hydrologic modeling because of their strong performance in nonlinear modeling, pattern recognition, and classification. This study adapted two well-known ANN algorithms, the backpropagation neural network (BPNN) and the generalized regression neural network (GRNN), to evaluate their capability to accurately predict $ET_o$ from daily meteorological data. All data were obtained from two automated weather stations (Chupungryeong and Jangsu) located in Yeongdong-gun (2002-2017) and Jangsu-gun (1988-2017), respectively. Daily $ET_o$ was calculated using the Penman-Monteith equation as the benchmark method. These calculated values of $ET_o$ and the corresponding meteorological data were separated into training, validation, and test datasets. The performance of each ANN algorithm was evaluated against $ET_o$ calculated from the benchmark method and a multiple linear regression (MLR) model. The overall results showed that the BPNN algorithm performed best in a statistical sense, followed by the MLR and GRNN; these findings could provide valuable information to farmers, water managers, and policy makers for effective agricultural water governance.
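A minimal, hedged sketch of the backpropagation idea behind a BPNN — one sigmoid hidden layer, a linear output, stochastic gradient descent — is shown below; the toy target y = x² merely stands in for the $ET_o$-versus-weather mapping, and the layer sizes and learning rate are illustrative assumptions:

```python
import math, random

def train_bpnn(data, hidden=4, lr=0.1, epochs=2000, seed=0):
    """One-hidden-layer backpropagation network trained on squared error."""
    rnd = random.Random(seed)
    w1 = [rnd.uniform(-1, 1) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rnd.uniform(-1, 1) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0

    def forward(x):
        h = [1.0 / (1.0 + math.exp(-(w1[j] * x + b1[j]))) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            err = y - t                                 # d(loss)/dy
            for j in range(hidden):
                dh = err * w2[j] * h[j] * (1.0 - h[j])  # backpropagated delta
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * dh
                w1[j] -= lr * dh * x
            b2 -= lr * err
    return lambda x: forward(x)[1]

# toy nonlinear target (stand-in for ET_o vs. a weather input): y = x**2
data = [(i / 10.0, (i / 10.0) ** 2) for i in range(11)]
model = train_bpnn(data)
```

The GRNN counterpart would instead form a kernel-weighted average of training targets; only the BPNN is sketched here.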
Journal of the Computational Structural Engineering Institute of Korea / v.20 no.6 / pp.751-759 / 2007
We have developed several methods for optimization problems involving large-scale and highly nonlinear systems. First, a step-by-step method in the optimization process was employed to improve convergence. In addition, techniques for furnishing good initial guesses for the analysis using sensitivity information acquired from the optimization iteration, and for manipulating the analysis/optimization convergence criterion, motivated by the simultaneous technique, were used. We applied them to a flow control problem and verified their efficiency and robustness. However, they are based on a quasi-Newton method that approximates the Hessian matrix using exact first derivatives. Since solving the Navier-Stokes equations is very costly, we want to improve the efficiency of the optimization algorithm as much as possible. Thus we develop a true Newton method that uses the exact Hessian matrix, and we apply it to the three-dimensional problem of flow around a sphere. This problem is intractable with existing methods for optimal flow control, but it can be attacked by combining the methods that we developed previously with the true Newton method.
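The contrast between a quasi-Newton step (curvature approximated from first derivatives) and a true Newton step (exact second derivative) can be sketched on a toy one-dimensional problem; the function f(x) = x⁴/4 − x is purely illustrative and has nothing to do with the flow-control solver itself:

```python
def newton_min(grad, hess, x0, tol=1e-12, max_iter=50):
    """True Newton: uses the exact second derivative at every iterate."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant_min(grad, x0, x1, tol=1e-12, max_iter=200):
    """Quasi-Newton flavor: curvature is approximated from successive
    exact first derivatives (a secant update), no Hessian needed."""
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, grad(x1)
        if abs(x1 - x0) < tol:
            break
    return x1

# minimize f(x) = x**4 / 4 - x, whose unique minimizer is x = 1
grad = lambda x: x ** 3 - 1.0
hess = lambda x: 3.0 * x ** 2
```

Both iterations reach the minimizer, but the Newton iteration converges quadratically once close, which is the payoff the abstract seeks when each residual evaluation (a Navier-Stokes solve) is expensive.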
Journal of the Computational Structural Engineering Institute of Korea / v.37 no.1 / pp.49-56 / 2024
An automated dynamic structural analysis module stands as a crucial element within a structural integrated mitigation system. This module must deliver prompt real-time responses to enable timely actions, such as evacuation or warnings, in response to the severity of the hazard posed to the structural system. The finite element method, a widely adopted approximate structural analysis approach, owes its popularity in part to its user-friendly nature. However, the computational efficiency and accuracy of the results depend on the user-provided finite element mesh, with the number of elements and their quality playing pivotal roles. This paper introduces a computationally efficient adaptive mesh generation scheme that optimally combines the r-method of node movement and the h-method of element division for mesh refinement. Adaptive mesh generation schemes create finite element meshes automatically; in this case, representative strain values for a given mesh are employed for error estimation. When applied to dynamic problems analyzed in the time domain, meshes need to be modified at each of a few hundred or thousand time steps. The algorithm's specifics are demonstrated through a standard cantilever beam example subjected to a concentrated load at the free end. Additionally, a portal frame example showcases the generation of various robust meshes. These examples illustrate the adaptive algorithm's capability to produce robust meshes that ensure reasonable accuracy and efficient computing time. Moreover, the study highlights the potential for the scheme's effective application to complex structural dynamic problems, such as those subjected to seismic or erratic wind loads, and emphasizes its suitability for general nonlinear analysis problems, establishing the versatility and reliability of the proposed adaptive mesh generation scheme.
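A hedged one-dimensional sketch of error-driven element division is given below; the midpoint interpolation-error indicator and the tolerance are illustrative stand-ins for the paper's strain-based estimator, and node movement is omitted:

```python
def refine_mesh(nodes, f, tol, max_passes=10):
    """Split every element whose midpoint interpolation-error estimate
    |f(mid) - linear interpolant at mid| exceeds tol; repeat to convergence."""
    for _ in range(max_passes):
        new_nodes, refined = [nodes[0]], False
        for a, b in zip(nodes, nodes[1:]):
            mid = 0.5 * (a + b)
            err = abs(f(mid) - 0.5 * (f(a) + f(b)))  # simple error indicator
            if err > tol:
                new_nodes.append(mid)                # h-refinement: divide element
                refined = True
            new_nodes.append(b)
        nodes = new_nodes
        if not refined:                              # all elements within tolerance
            break
    return nodes

mesh = refine_mesh([0.0, 0.5, 1.0], lambda x: x ** 4, tol=1e-3)
```

The refinement clusters nodes where the solution curvature is largest (near x = 1 here), which mirrors how strain-based indicators concentrate elements in highly stressed regions.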
Kim, Joong-Hyo;Kwon, Sung-Dae;Hong, Jeong-Pyo;Ha, Tae-Jun
KSCE Journal of Civil and Environmental Engineering Research / v.35 no.3 / pp.679-690 / 2015
In order to minimize damage from traffic accidents, the causes of accidents must be eliminated through engineering improvements of vehicles and road systems. Generally, traffic accidents are more likely to occur on roads that lack safety measures, which can be improved only with tremendous time and cost. In particular, traffic accidents at intersections are on the rise due to inappropriate environmental factors and are causing great losses for the nation as a whole. This study aims to present safety countermeasures against the causes of accidents by developing an intersection traffic safety evaluation model. It also diagnoses vulnerable traffic points through the back-propagation algorithm (BPA), one of the artificial neural network techniques recently investigated in the area of artificial intelligence. Furthermore, it aims to pursue a more efficient traffic safety improvement project in terms of operating signalized intersections and establishing traffic safety policies. As a result of this study, the mean square error between the predicted values and the actual measured values of traffic accidents derived from the BPA is estimated to be approximately 3.89. The BPA appeared to have excellent traffic safety evaluation ability compared to the multiple regression model. In other words, the BPA can be effectively utilized in diagnosing the safety of actual signalized intersections and in establishing practical transportation policies.
Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns in such databases. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold; these patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly supposes that all of the items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus a pattern mining technique that reflects these characteristics is required. In the framework of frequent pattern mining, where the natures of items are not considered, the single minimum support threshold must be set to a very low value to mine patterns containing rare items; this, however, leads to too many patterns including meaningless items. In contrast, if too high a threshold is used, no such pattern can be mined. This dilemma is called the rare item problem. To solve it, early studies proposed approximate approaches that split the data into several groups according to item frequencies or that group related rare items. However, these methods cannot find all of the frequent patterns, including rare frequent patterns, because they are based on approximate techniques. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called the MIS (Minimum Item Support), which is calculated from item frequencies in the database.
The multiple minimum supports model finds all of the rare frequent patterns without generating meaningless patterns or losing significant patterns by applying the MIS. Meanwhile, candidate patterns are extracted during the process of mining frequent patterns, and in the single minimum support model only the single threshold is compared with the frequencies of the candidate patterns. Therefore, the characteristics of the items that constitute the candidate patterns are not reflected, and the rare item problem occurs in that model. To address this issue, the multiple minimum supports model uses the minimum MIS value among all items in a candidate pattern as the minimum support threshold for that pattern, thereby considering its characteristics. To efficiently mine frequent patterns, including rare frequent patterns, using this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, where the items are ordered in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one but demands more memory for the MIS information. Moreover, both compared algorithms show good scalability in the results.
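A small sketch of the multiple-minimum-supports test follows; the β-style MIS assignment, the lower-support floor LS, and the toy transactions are illustrative assumptions, not the paper's exact MIS calculation:

```python
from itertools import combinations

def item_supports(db):
    """Support (relative frequency) of each item over the transactions."""
    sup = {}
    for txn in db:
        for item in txn:
            sup[item] = sup.get(item, 0) + 1
    return {i: c / len(db) for i, c in sup.items()}

def mis(freq, beta=0.5, ls=0.1):
    """MIS(i) = max(beta * f(i), LS): rarer items get lower thresholds."""
    return max(beta * freq, ls)

def frequent_patterns(db, beta=0.5, ls=0.1, max_len=2):
    sup = item_supports(db)
    items = sorted(sup)
    result = []
    for k in range(1, max_len + 1):
        for cand in combinations(items, k):
            count = sum(all(i in txn for i in cand) for txn in db) / len(db)
            # a pattern's threshold is the minimum MIS among its items
            if count >= min(mis(sup[i], beta, ls) for i in cand):
                result.append(cand)
    return result

db = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"a", "d"}]
```

Here the rare items c and d receive thresholds near LS, so patterns containing them survive without forcing one globally low support that would flood the output for frequent items like a.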
Stereotactic radiosurgery (SRS) is a technique that delivers a high dose to a target lesion and a low dose to critical organs through only one or a few irradiations. For this purpose, many mathematical optimization methods have been proposed. There are some limitations to using these methods: the long calculation time and the difficulty of finding a unique solution for differently shaped tumors. In this study, many clinical target shapes were examined to find typical patterns of tumor shapes, from which ideal geometrical shapes, such as spheres, cylinders, cones, or combinations of these, were assumed to approximate real tumor shapes. Using arrangements of multiple isocenters, optimum variables, such as isocenter positions and collimator sizes, were determined, and a database was formed from these results. The optimization procedure consisted of the following steps: any tumor shape was first matched to an ideal model through a geometry comparison algorithm; the optimum variables for the ideal geometry were then chosen from the predetermined database; and finally the optimum parameters were adjusted using the real tumor shape. Although the result of applying the database to other patients was not superior to the result of case-by-case optimization, it is acceptable as a plan starting point.
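The geometry-comparison step can be caricatured as a nearest-volume lookup; the sphere-only 'database' below is an illustrative assumption, not the paper's shape set:

```python
import math

SPHERE_RADII_DB = [0.5, 1.0, 1.5, 2.0]   # hypothetical predetermined database (cm)

def closest_ideal_sphere(target_volume):
    """Return the database radius whose sphere volume best matches the
    measured target volume -- a crude stand-in for shape matching."""
    return min(SPHERE_RADII_DB,
               key=lambda r: abs(4.0 / 3.0 * math.pi * r ** 3 - target_volume))
```

In the actual procedure the matched ideal geometry only seeds the plan; its stored isocenter and collimator settings are then fine-tuned against the real tumor shape.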
In Part Ⅰ of the paper, we developed various block least mean-square (BLMS) adaptive digital filters (ADFs) based on a unified matrix treatment. In Part Ⅱ we analyze the convergence behaviors of the self-orthogonalizing frequency-domain BLMS (FBLMS) ADF and the unconstrained FBLMS (UFBLMS) ADF for both the overlap-save and overlap-add sectioning methods. We first show that, unlike the FBLMS ADF with a constant convergence factor, the convergence behavior of the self-orthogonalizing FBLMS ADF is governed by the same autocorrelation matrix as that of the UFBLMS ADF. We then show that the optimum solution of the UFBLMS ADF is the same as that of the constrained FBLMS ADF when the filter length is sufficiently long. The mean of the weight vector of the UFBLMS ADF is also shown to converge to the optimum Wiener weight vector under a proper condition. However, the steady-state mean-squared error (MSE) of the UFBLMS ADF turns out to be slightly worse than that of the constrained algorithm if the same convergence constant is used in both cases. On the other hand, when the filter length is not sufficiently long, the constrained FBLMS ADF yields poor performance, while the performance of the UFBLMS ADF can be improved to some extent by exploiting its extended filter-length capability. As for the self-orthogonalizing FBLMS ADF, we study how the autocorrelation matrix can be approximated by a diagonal matrix in the frequency domain. We also analyze the steady-state MSEs of the self-orthogonalizing FBLMS ADFs with and without the constraint. Finally, we present various simulation results to verify our analytical results.
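As a hedged time-domain sketch of the block LMS idea — one weight update per block of samples — the code below identifies an illustrative two-tap FIR system; it is not the frequency-domain FBLMS implementation analyzed in the paper:

```python
import random

def block_lms(x, d, num_taps, block, mu):
    """Time-domain block LMS: the gradient is accumulated over a block of
    samples and the weights are updated once per block."""
    w = [0.0] * num_taps
    for start in range(0, len(x) - block + 1, block):
        grad = [0.0] * num_taps
        for n in range(start, start + block):
            u = [x[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)]
            y = sum(wk * uk for wk, uk in zip(w, u))   # filter output
            e = d[n] - y                               # a priori error
            for k in range(num_taps):
                grad[k] += e * u[k]
        for k in range(num_taps):                      # one update per block
            w[k] += mu * grad[k] / block
    return w

# identify the FIR system h = [0.5, -0.3] from white-noise input
rnd = random.Random(1)
x = [rnd.uniform(-1, 1) for _ in range(2000)]
d = [0.5 * x[n] - 0.3 * (x[n - 1] if n else 0.0) for n in range(len(x))]
w = block_lms(x, d, num_taps=2, block=8, mu=0.5)
```

The FBLMS ADFs of the paper compute the same per-block convolution and gradient via FFTs (overlap-save or overlap-add), which is what makes the block structure computationally attractive.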
The Transactions of the Korea Information Processing Society / v.7 no.8 / pp.2484-2496 / 2000
High-speed switches have been developed to interconnect a large number of nodes, and it is important to analyze switch performance under various conditions to satisfy the requirements. Queueing analysis, in general, has the intrinsic problems of a large state space and complex computation. The Petri net is a graphical and mathematical model suitable for various applications, in particular manufacturing systems. It can deal with parallelism, concurrency, deadlock avoidance, and asynchronism, and it has recently been applied to the performance analysis of computer networks and to protocol verification. This paper presents a framework for modeling and analyzing an ATM switch using stochastic activity networks (SANs). We provide an ATM switch model using SANs that is easy to extend, together with an approximate analysis method applicable to ATM switch models, which significantly reduces the complexity of the model solution. The cell arrival process in an output-buffered ATM switch with finite buffers is modeled as a Markov Modulated Poisson Process (MMPP), which can accurately represent real traffic and capture the characteristics of bursty traffic. We analyze the performance of the switch in terms of cell-loss ratio (CLR), mean queue length, and mean delay time. We show that the SAN model is very useful for ATM switch modeling in that the gates are capable of implementing scheduling algorithms.
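A hedged simulation sketch of the setting follows: a discrete-time finite-buffer queue fed by a two-state (on/off) bursty source, a crude stand-in for the MMPP arrivals of the paper, with all rates and sizes chosen for illustration only:

```python
import random

def simulate_clr(buffer_size, p_on=0.1, p_off=0.3, rate_on=0.9,
                 slots=200_000, seed=42):
    """Finite-buffer queue with a two-state bursty source. One cell is
    served per slot; the ON state emits two cells per arrival event, so
    bursts overload the server. Returns the measured cell-loss ratio."""
    rnd = random.Random(seed)
    queue, burst_on, arrived, lost = 0, False, 0, 0
    for _ in range(slots):
        # two-state Markov modulation of the source
        if burst_on:
            if rnd.random() < p_off:
                burst_on = False
        elif rnd.random() < p_on:
            burst_on = True
        # batch arrival while the source is ON
        if burst_on and rnd.random() < rate_on:
            for _ in range(2):
                arrived += 1
                if queue < buffer_size:
                    queue += 1
                else:
                    lost += 1                    # buffer full: cell dropped
        queue = max(0, queue - 1)                # serve one cell per slot
    return lost / arrived if arrived else 0.0
```

With a fixed seed the arrival sample path is identical across buffer sizes, so enlarging the buffer can only reduce the measured CLR, matching the qualitative behavior an analytical SAN/MMPP model is built to quantify.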