• Title/Summary/Keyword: iterative processes


An XPDL-Based Workflow Control-Structure and Data-Sequence Analyzer

  • Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1702-1721
    • /
    • 2019
  • A workflow (or business) process management system helps define, execute, monitor, and manage the workflow models deployed across a workflow-supported enterprise; in general, such a system is divided into a modeling subsystem and an enacting subsystem. The modeling subsystem discovers and analyzes workflow models via a theoretical modeling methodology such as ICN, defines them graphically via a representation notation such as BPMN, and systematically deploys the graphically defined models onto the enacting subsystem by transforming them into textual models expressed in a standardized workflow process definition language such as XPDL. Before deploying the defined workflow models, it is very important to inspect their syntactic correctness as well as their structural properness, so as to minimize losses of effectiveness and efficiency in managing the corresponding workflow models. In this paper, we are particularly interested in verifying very large-scale and massively parallel workflow models, which calls for a sophisticated analyzer that can automatically analyze these specialized and complex styles of workflow models. The analyzer devised in this paper analyzes not only structural complexity but also data-sequence complexity. Structural complexity arises from the combined usage of control-structure constructs such as subprocess, exclusive-OR, parallel-AND, and iterative-LOOP primitives while preserving the matched-pairing and proper-nesting properties, whereas data-sequence complexity arises from the combined usage of the relevant data repositories, such as data-definition sequences and data-use sequences. With the analyzer devised and implemented in this paper, we can systematically verify the syntactic correctness and effectively validate the structural properness of such complicated, large-scale workflow models. As an experimental study, we apply the implemented analyzer to an exemplary large-scale and massively parallel workflow process model, the Large Bank Transaction Workflow Process Model, and show the structural complexity analysis results via a series of operational screens captured from the implemented analyzer.
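
As a rough illustration of the matched-pairing and proper-nesting property this analyzer checks, the following Python sketch validates a simplified token stream of control-structure constructs with a stack. The token names are illustrative stand-ins, not the XPDL element names the paper uses.

```python
# Minimal sketch of a matched-pairing / proper-nesting check on a
# simplified token stream. Token names (XOR_SPLIT, AND_JOIN, ...) are
# illustrative, not the paper's XPDL elements.

OPEN_TO_CLOSE = {
    "XOR_SPLIT": "XOR_JOIN",        # exclusive-OR block
    "AND_SPLIT": "AND_JOIN",        # parallel-AND block
    "LOOP_BEGIN": "LOOP_END",       # iterative-LOOP block
    "SUBPROC_BEGIN": "SUBPROC_END", # subprocess block
}
CLOSERS = set(OPEN_TO_CLOSE.values())

def is_properly_nested(tokens):
    """True iff every split/begin has a matching join/end and
    blocks are properly nested (no interleaving)."""
    stack = []
    for tok in tokens:
        if tok in OPEN_TO_CLOSE:
            stack.append(OPEN_TO_CLOSE[tok])  # expect this closer later
        elif tok in CLOSERS:
            if not stack or stack.pop() != tok:
                return False                  # mismatched or interleaved
    return not stack                          # no dangling opens

# An AND block properly nested inside a LOOP passes; interleaving fails.
print(is_properly_nested(["LOOP_BEGIN", "AND_SPLIT", "AND_JOIN", "LOOP_END"]))  # True
print(is_properly_nested(["AND_SPLIT", "LOOP_BEGIN", "AND_JOIN", "LOOP_END"]))  # False
```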

The Credit Information Feature Selection Method in Default Rate Prediction Model for Individual Businesses (개인사업자 부도율 예측 모델에서 신용정보 특성 선택 방법)

  • Hong, Dongsuk;Baek, Hanjong;Shin, Hyunjoon
    • Journal of the Korea Society for Simulation
    • /
    • v.30 no.1
    • /
    • pp.75-85
    • /
    • 2021
  • In this paper, we present a deep neural network-based prediction model that processes and analyzes the corporate and personal credit information of individual business owners as a new method to predict the default rate of individual businesses more accurately. In modeling research across many fields, feature selection techniques have been actively studied as a way to improve performance, especially in predictive models with many features. In this paper, after statistically verifying the macroeconomic indicators (macro variables) and credit information (micro variables) used as input variables in the default rate prediction model, we additionally identified, through the proposed credit information feature selection method, the final feature set that improves prediction performance. The proposed method is an iterative, hybrid method combining filter-based and wrapper-based approaches: it builds submodels, constructs subsets by extracting the important variables of the best-performing submodels, and determines the final feature set through prediction performance analysis of each subset and of the set combining the subsets.
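
The filter-plus-wrapper loop described above can be sketched generically as below. This is only a stand-in under assumptions: scikit-learn with a logistic-regression submodel on synthetic data replaces the paper's deep-network submodels on credit data, and the ranking criterion and greedy acceptance rule are illustrative.

```python
# A generic filter-then-wrapper feature selection loop, in the spirit of
# the method above; model, data, and thresholds are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# Filter step: rank features by mutual information with the target.
ranked = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

# Wrapper step: grow the subset greedily, keeping a feature only if it
# improves the cross-validated performance of the submodel.
selected, best = [], 0.0
for f in ranked:
    trial = selected + [f]
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, trial], y, cv=5).mean()
    if score > best:
        selected, best = trial, score

print(f"final feature set {sorted(selected)} with CV accuracy {best:.3f}")
```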

Multi-channel analyzer based on a novel pulse fitting analysis method

  • Wang, Qingshan;Zhang, Xiongjie;Meng, Xiangting;Wang, Bao;Wang, Dongyang;Zhou, Pengfei;Wang, Renbo;Tang, Bin
    • Nuclear Engineering and Technology
    • /
    • v.54 no.6
    • /
    • pp.2023-2030
    • /
    • 2022
  • A novel pulse fitting analysis (PFA) method is presented for the acquisition of nuclear spectra. The charging process of the feedback capacitor in a resistive-feedback charge-sensitive preamplifier is equivalent to an impulsive pulse, and its impulse response function (IRF) can be obtained by non-linear fitting of the falling edge of the nuclear pulse. The integral of the IRF, excluding the baseline, represents the energy deposited by the particle in the detector. In addition, since the non-linear fitting process of the PFA method is difficult to realize in the conventional spectroscopy-system architecture, a new multi-channel analyzer (MCA) based on a Zynq SoC is proposed, which transmits all nuclear-pulse data from the programmable logic (PL) to the processing system (PS) over a high-speed AXI-Stream in order to implement the PFA method precisely. The linearity of the new MCA has been tested. The spectrum of 137Cs was obtained using a LaBr3(Ce) scintillator detector and compared with a commercial MCA from ORTEC. The test results indicate that the PFA-based MCA matches the performance of the commercial MCA based on the pulse height analysis (PHA) method and shows excellent linearity for γ-rays of different energies, suggesting that PFA is an effective and promising method for spectrum acquisition. Furthermore, it provides a new solution for nuclear pulse processing algorithms involving regression and iterative processes.
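
The core PFA step can be sketched as below: fit the falling edge of a preamplifier pulse by non-linear regression and integrate the fitted response, minus the baseline, as the energy estimate. The single-exponential model, time scale, and all constants are assumptions for illustration, not the paper's exact IRF.

```python
# Sketch of the PFA idea: non-linear fit of a pulse's falling edge,
# then integrate the fitted response (baseline excluded) as energy.
import numpy as np
from scipy.optimize import curve_fit

def falling_edge(t, A, tau, baseline):
    return A * np.exp(-t / tau) + baseline

# Synthetic falling edge: tau = 50 (arbitrary units) on a noisy baseline.
t = np.linspace(0, 400, 400)
pulse = falling_edge(t, A=1.0, tau=50.0, baseline=0.1)
pulse += np.random.default_rng(0).normal(0, 0.01, t.size)

popt, _ = curve_fit(falling_edge, t, pulse, p0=(0.5, 30.0, 0.0))
A, tau, baseline = popt

# Integral of A * exp(-t/tau) over [0, inf) is A * tau: the energy proxy.
energy = A * tau
print(f"fitted tau = {tau:.1f}, energy estimate = {energy:.2f}")
```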

A Study on the Automatic Detection of Railroad Power Lines Using LiDAR Data and RANSAC Algorithm (LiDAR 데이터와 RANSAC 알고리즘을 이용한 철도 전력선 자동탐지에 관한 연구)

  • Jeon, Wang Gyu;Choi, Byoung Gil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.331-339
    • /
    • 2013
  • LiDAR has become one of the most widely used and important technologies for 3D modeling of the ground surface and objects because it provides dense and accurate range measurements. The objective of this research is to develop a method for automatic detection and modeling of railroad power lines using high-density LiDAR data and RANSAC algorithms. To detect railroad power lines, the multi-echo properties of laser data and shape knowledge of railroad power lines are employed. The main processes are cuboid analysis for detecting seed line segments, line tracking, connecting, and labeling. To model the railroad power lines, iterative RANSAC and least-squares adjustment are carried out to estimate the line parameters. Validating the result is challenging because actual references on the ground surface are difficult to determine; standard deviations of 8 cm and 5 cm for the x-y and z coordinates, respectively, are satisfactory outcomes. Regarding completeness, visual inspection shows that all the lines are detected and modeled well compared with the original point clouds. The overall process is fully automated, and the method handles any configuration of railroad wires efficiently.
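
A minimal RANSAC line fit in 3D, in the spirit of the iterative RANSAC step above, might look as follows; the inlier tolerance, iteration count, and synthetic point cloud are invented for illustration. In practice a least-squares adjustment over the inliers, as the paper describes, would refine the sampled model.

```python
# Minimal 3D RANSAC line fit on a synthetic "wire plus outliers" cloud.
import numpy as np

def ransac_line_3d(points, n_iter=200, tol=0.08, rng=None):
    """Fit a 3D line to points (N, 3); returns (point, direction, inlier mask)."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        # Point-to-line distance: ||(p - p1) - ((p - p1)·d) d||
        v = points - p1
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p1, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic wire: noisy points along a straight line, plus random outliers.
rng = np.random.default_rng(1)
s = np.linspace(0, 10, 100)
wire = np.c_[s, 0.02 * s, 5 + 0.01 * s] + rng.normal(0, 0.02, (100, 3))
noise = rng.uniform([0, -1, 4], [10, 1, 6], (30, 3))
pts = np.vstack([wire, noise])

p0, d0, mask = ransac_line_3d(pts, rng=0)
print(f"{mask.sum()} inliers of {len(pts)} points; direction {d0.round(3)}")
```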

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required; the tool must give fast and accurate velocity analyses. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point; generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. This technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphics workstations became more popular, various interactive velocity analysis programs were developed. Although these programs enabled faster picking of velocity nodes with a mouse, their main improvement was simply replacing the paper plot with a graphics screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. The velocity analysis must also be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must usually be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is strongly needed. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique of the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted waves, but it has two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. The program references Geobit utility libraries and can be installed in a Geobit-preinstalled environment. It runs in an X-Window/Motif environment, with its menus designed according to the Motif style guide. Brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing AVO (Amplitude Versus Offset)-based DHI (Direct Hydrocarbon Indicator) and for producing high-quality seismic sections.
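
The two kernels such a program wraps interactively, NMO correction of a CMP gather and the semblance velocity spectrum, can be sketched compactly as below. Array shapes, sampling, the linear interpolation, and the window normalization are assumptions of this sketch, not details taken from the xva implementation.

```python
# Sketch of NMO correction and semblance spectrum for a CMP gather.
import numpy as np

def nmo_correct(gather, offsets, dt, velocity):
    """gather: (n_samples, n_traces); returns the NMO-corrected gather."""
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt                   # zero-offset times
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t = np.sqrt(t0**2 + (x / velocity) ** 2)     # hyperbolic moveout
        out[:, j] = np.interp(t, t0, gather[:, j], left=0.0, right=0.0)
    return out

def semblance(gather, offsets, dt, velocities, win=5):
    """Semblance S(t0, v) over a sliding window of `win` samples."""
    n_samples, n_traces = gather.shape
    spec = np.zeros((n_samples, len(velocities)))
    for k, v in enumerate(velocities):
        g = nmo_correct(gather, offsets, dt, v)
        num = np.convolve(g.sum(axis=1) ** 2, np.ones(win), "same")
        den = np.convolve((g ** 2).sum(axis=1), np.ones(win), "same")
        spec[:, k] = num / (n_traces * den + 1e-12)
    return spec                                      # peaks pick the velocity

# Synthetic one-reflector gather at t0 = 0.4 s with v = 2000 m/s.
dt, offsets = 0.004, np.arange(0, 1600, 100.0)
gather = np.zeros((500, len(offsets)))
for j, x in enumerate(offsets):
    gather[int(np.sqrt(0.4**2 + (x / 2000.0) ** 2) / dt), j] = 1.0

spec = semblance(gather, offsets, dt, velocities=np.arange(1500, 2600, 100.0))
bt, bv = np.unravel_index(spec.argmax(), spec.shape)
print(f"peak at t0 = {bt * dt:.2f} s, v = {1500 + 100 * bv:.0f} m/s")
```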


A Survey of Elementary school teachers' perceptions of mathematics instruction (수학수업에 대한 초등교사의 인식 조사)

  • Kwon, Sungyong
    • Education of Primary School Mathematics
    • /
    • v.20 no.4
    • /
    • pp.253-266
    • /
    • 2017
  • The purpose of this study was to investigate the perceptions of elementary school teachers regarding mathematics instruction. To this end, seven test items were developed to obtain data on teachers' perceptions of mathematics instruction, and 73 teachers taking lectures on mathematics lesson analysis were selected and surveyed. Since the data obtained were all qualitative, they were analyzed through coding, with similar responses grouped into the same category. The survey revealed several findings. First, when teachers think about 'mathematics', the first words that come to mind are 'calculation', 'difficult', and 'logic'. Teachers need to hold positive views of mathematics and mathematics learning, and this should be sufficiently stressed in teacher education and retraining. Second, the most common reason given for mathematics being an important subject was 'because it is related to real life', followed by 'because it develops logical thinking ability' and 'because it develops mathematical thinking ability'. These ideas relate to the mind-cultivating and practical values of mathematics; for students to understand the various values of mathematics, teachers must understand them first. Third, the reasons given for why elementary school students dislike mathematics and find it hard were that teachers demand 'thinking', that students 'repeat simple calculations', that 'children hate complicated things', that it is a 'bother', that 'mathematics itself is difficult', that 'the level of the curriculum and textbooks is high', and that 'the amount of time and activity is too much'. These problems are likely to improve under the revised 2015 national curriculum, which emphasizes core competences and process-based evaluation, including mathematical processes. Fourth, the most common reasons cited for the failure of elementary school mathematics instruction were 'because the process is difficult' and 'because of results-based evaluation'; 'results-oriented evaluation', 'iterative calculation', 'infused education', 'failure to consider level differences', and 'lack of concept- and principle-centered education' were also mentioned as failure factors. Most of these factors can be changed by improving teachers' teaching practice. Fifth, responses on what desirable mathematics instruction looks like included 'classes related to real life', 'easy and fun mathematics lessons', and 'classes emphasizing understanding of principles'. Therefore, training courses for improving teachers' teaching practice need to treat these topics in depth, and not only one-time training but also teachers' continuous professional development should be supported.

Case of assembly process review and improvement for mega-diameter slurry shield TBM through the launching area (발진부지를 이용한 초대구경 이수식 쉴드TBM 조립공정 검토 및 개선 사례)

  • Park, Jinsoo;Jun, Samsu
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.6
    • /
    • pp.637-658
    • /
    • 2022
  • TBM tunnelling is simple, consisting of the iterative process of excavating the ground, building a segment ring, and backfilling. Drill & blast, the conventional tunnel construction method, is more complicated than TBM tunnelling and carries some restrictions because it repeats inspection, drilling, charging, blasting, ventilation, muck treatment, and installation of support materials. However, compared to drill & blast, which reinforces the ground and forms the tunnel after the tunnel portal is formed, the preparation work for TBM excavation requires time and cost based on a very detailed plan, because the TBM equipment chosen for the target ground determines the success or failure of the construction. If the TBM, an expensive made-to-order machine, is configured incorrectly at the assembly stage, excavation becomes difficult from the initial stage through the main excavation stage; and if the assembled shield TBM has to be dismantled and re-assembled, the entire construction suffers losses of both time and money. Therefore, in this study, for the assembly of the slurry shield TBM constructing the largest-diameter road tunnel in Korea, passing under the Han River, the site layout and plan and the assembly process for each major part of the TBM equipment were reviewed; interference with other processes was minimized, and the efficiency of cutterhead assembly and transport was analyzed and improved to suit the site conditions.

Unsupervised Non-rigid Registration Network for 3D Brain MR images (3차원 뇌 자기공명 영상의 비지도 학습 기반 비강체 정합 네트워크)

  • Oh, Donggeon;Kim, Bohyoung;Lee, Jeongjin;Shin, Yeong-Gil
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.15 no.5
    • /
    • pp.64-74
    • /
    • 2019
  • Although non-rigid registration is in high demand in clinical practice, it has high computational complexity, and ensuring its accuracy and robustness is very difficult. This study proposes a method for applying non-rigid registration to 3D brain magnetic resonance images in an unsupervised learning setting using a deep-learning network. The network receives images from two different patients as inputs, produces a feature vector between the two images, and transforms the target image to match the source image by creating a displacement vector field. The network is designed on a U-Net shape so that feature vectors capturing both global and local differences between the two images can be constructed during registration. Because a regularization term is added to the loss function, a transformation similar to real brain motion is obtained after trilinear interpolation is applied. This method enables non-rigid registration with a single-pass deformation from just two arbitrary input images, trained through unsupervised learning; it can therefore run faster than non-learning-based registration methods that require iterative optimization. Our experiment was performed with 3D magnetic resonance images of 50 human brains, and the Dice similarity coefficient confirmed an approximately 16% improvement in similarity after registration with our method. It also showed performance similar to the non-learning-based method while running about 10,000 times faster. The proposed method can be used for non-rigid registration of various kinds of medical image data.
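
The warping step at the end of such a network, applying a predicted displacement vector field to a volume with trilinear interpolation, can be sketched as below. This is an assumption-laden illustration in PyTorch, not the paper's code: shapes, the identity-grid construction, and the flow channel order are choices of this sketch, and grid_sample's 'bilinear' mode performs trilinear interpolation on 5D inputs.

```python
# Sketch of a spatial-transformer warp: volume + displacement field,
# resampled with trilinear interpolation.
import torch
import torch.nn.functional as F

def warp(volume, flow):
    """volume: (N,1,D,H,W); flow: (N,3,D,H,W), channels = (x,y,z) voxel shifts."""
    _, _, d, h, w = volume.shape
    # Identity grid in voxel coordinates, channel order (x, y, z).
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    ident = torch.stack((xx, yy, zz)).float()        # (3, D, H, W)
    coords = ident.unsqueeze(0) + flow               # displaced sample positions
    # Normalize each axis to [-1, 1], the range grid_sample expects.
    for i, size in enumerate((w, h, d)):
        coords[:, i] = 2.0 * coords[:, i] / (size - 1) - 1.0
    grid = coords.permute(0, 2, 3, 4, 1)             # (N, D, H, W, 3)
    # 'bilinear' on a 5D input is trilinear interpolation.
    return F.grid_sample(volume, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Sanity check: a zero displacement field reproduces the input volume.
vol = torch.rand(1, 1, 8, 8, 8)
assert torch.allclose(warp(vol, torch.zeros(1, 3, 8, 8, 8)), vol, atol=1e-5)
print("identity warp OK")
```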

A Study on Designing an Innovation Model for Communication-centered Public Services: Focusing on KOMIPO (소통중심의 공공서비스디자인 혁신 모델 연구: 한국중부발전(주) 사례를 중심으로)

  • Hyemi Hwang;DonHee Lee
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.2
    • /
    • pp.169-190
    • /
    • 2024
  • The purpose of this study is to design an effective innovation model for communication-centered public services, based on the case of Korea Midland Power Co., Ltd (KOMIPO). This study analyzed customer and employee participation activities at KOMIPO, focusing on communication activities, to derive best practices. The study comprised the following stages: (1) a preparation stage to assess the current situation and promote change management; (2) a problem-solving stage for improving public services; (3) a problem-solving stage for improving work processes; (4) a problem-solving stage for strengthening collaboration; and (5) a design stage for the innovation model. Based on the results of this study, an innovation model for public services was developed by applying the double-diamond design process. The proposed model presents a process structure derived through an iterative process of primary divergence (discovery) and convergence (definition), followed by secondary divergence (development) and convergence (delivery). This study also proposed applying the Idea-Power-Plant activities from KOMIPO's best practices to the proposed model. While this study designed the innovation model through the analysis of one specific public company, its results provide broad insights into effective operations management through efficient communication based on the participation of customers and employees in public institutions.

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to create the initial GA population. As genetic operators, the elitist strategy in an enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the region to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks; of these, exactly one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost follows from the opening or closing decision at each center and secondary market: for example, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and collection center 1 is opened while the others are closed, the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; it lacks any local search technique such as the IHCM of the HGA approach. CPU time, optimal solution, and optimal setting are used as performance measures. Two types of the RLNCC, with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets, are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two RLNCC types are programmed in Visual Basic 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are 10,000 generations in total, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and an IHCM search range of 2.0. A total of 20 runs are made to average out the randomness of the searches. With performance comparisons, network representations by opening/closing decisions, and convergence processes on the two RLNCC types, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker in terms of CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on both RLNCC types, since the former has a local search process in addition to the GA search, while the latter has the GA search alone. In future work, much larger RLNCCs will be tested to examine the robustness of our approach.
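
The structure of such a hybrid, a bit-string GA whose incumbent is refined by an iterative hill-climbing pass inside the search loop, can be sketched as below. The toy facility-opening objective and most parameters are illustrative, not the paper's MIP model; only the bit-string encoding, two-point crossover, elitist selection, and mutation rate of 0.1 echo the setup described above.

```python
# Structural sketch of a hybrid GA: bit-string encoding + elitist
# selection + two-point crossover + mutation, with hill climbing
# refining the best individual each generation.
import random

random.seed(0)
N = 16                                       # bits: open/close decisions
COST = [random.uniform(1.0, 10.0) for _ in range(N)]

def fitness(bits):
    # Toy objective: each opened facility earns 5.0 minus its opening cost.
    return sum(5.0 - COST[i] for i, b in enumerate(bits) if b)

def hill_climb(bits, iters=50):
    # Iterative hill climbing: accept single-bit flips that improve fitness.
    best = list(bits)
    for _ in range(iters):
        trial = best[:]
        trial[random.randrange(N)] ^= 1
        if fitness(trial) > fitness(best):
            best = trial
    return best

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(100):                         # GA generations
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10]                           # elitist selection
    while len(pop) < 20:
        a, b = random.sample(pop[:10], 2)
        c1, c2 = sorted(random.sample(range(1, N), 2))
        child = a[:c1] + b[c1:c2] + a[c2:]   # two-point crossover
        if random.random() < 0.1:            # mutation rate 0.1, as above
            child[random.randrange(N)] ^= 1
        pop.append(child)
    pop[0] = hill_climb(pop[0])              # local search inside the GA loop

print("best fitness:", round(fitness(max(pop, key=fitness)), 3))
```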