• Title/Summary/Keyword: Process Re-execution


Execution Technology for Collaborative Business Process among Manufacturing Enterprises (제조기업 간 협업프로세스 실행 기술)

  • Kim, Hyun-Woo;Kim, Bo-Hyun;Baek, Jae-Yong;Jung, So-Young;Choi, Hon-Zong
    • Korean Journal of Computational Design and Engineering / v.15 no.3 / pp.204-211 / 2010
  • Recently, business process management (BPM) has become an important concept for defining and executing business processes. During the execution of collaborative business processes defined by consensus among manufacturing enterprises, many variations can occur due to various internal and external business factors. For this reason, manufacturing enterprises have sought a technology to define and execute collaborative business processes systematically under dynamic situations that permit process variation. This study first defines the collaborative business process among manufacturing enterprises and then proposes an execution technology for it under dynamic situations. The proposed execution technology includes authority management of each process, sub-process, and activity for security; forced execution of an incomplete process containing an undefined sub-process; re-execution of a certain range of a business process to correct errors; and dynamic selection of sub-processes. Furthermore, this study implements a prototype system to check the validity of its application under dynamic situations.
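The re-execution of a bounded range of a process, as described above, can be illustrated with a minimal sketch (the activity names and `lambda` bodies are hypothetical stand-ins, not the paper's implementation):

```python
# Sketch: run a process as an ordered list of (name, callable) activities,
# then re-execute only a faulty range while keeping earlier results.

def execute_process(activities, results=None, start=0):
    """Run activities in order from `start`, recording each result."""
    results = results if results is not None else {}
    for i in range(start, len(activities)):
        name, run = activities[i]
        results[name] = run()
    return results

def re_execute_range(activities, results, first, last):
    """Re-run only the activities with indices in [first, last]."""
    for i in range(first, last + 1):
        name, run = activities[i]
        results[name] = run()
    return results

activities = [
    ("receive_order", lambda: "order-001"),
    ("check_stock",   lambda: "in_stock"),
    ("quote_price",   lambda: 120),
    ("confirm",       lambda: "confirmed"),
]

results = execute_process(activities)
# Suppose quote_price produced a wrong value: fix it and re-execute
# only that range instead of the whole process.
activities[2] = ("quote_price", lambda: 95)
results = re_execute_range(activities, results, 2, 2)
```

Earlier activity results survive untouched; only the corrected range is recomputed.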

The study of a full cycle semi-automated business process re-engineering: A comprehensive framework

  • Lee, Sanghwa;Sutrisnowati, Riska A.;Won, Seokrae;Woo, Jong Seong;Bae, Hyerim
    • Journal of the Korea Society of Computer and Information / v.23 no.11 / pp.103-109 / 2018
  • This paper presents an idea and framework for automating full-cycle business process management and re-engineering by integrating traditional business process management systems, process mining, data mining, machine learning, and simulation. We build our framework on a cloud-based platform so that various data sources can be incorporated, and we design our systems to be extensible, benefiting not only BPM practitioners but also researchers, who can use the framework as a test bed without the complication of system integration. The automation of the redesigning phase and the selection of a baseline process model for deployment are the two main contributions of this study. In the redesigning phase, we handle both the analysis of the existing process model and what-if analysis of how to improve the process at the same time; moreover, improving a business process must often proceed case by case, which requires much trial and error and a large amount of data. In selecting the baseline process model, we compare many probable routes of business execution and calculate the most efficient one with respect to production cost and execution time. We also discuss the challenges and limitations of the framework, including system adoptability, technical difficulties, and human factors.

Probabilistic Soft Error Detection Based on Anomaly Speculation

  • Yoo, Joon-Hyuk
    • Journal of Information Processing Systems / v.7 no.3 / pp.435-446 / 2011
  • Microprocessors are becoming increasingly vulnerable to soft errors due to current trends in semiconductor technology scaling. Traditional redundant multi-threading architectures provide perfect fault tolerance by re-executing all computations. However, such full re-execution significantly increases the verification workload on processor resources, resulting in severe performance degradation. This paper presents a proactive verification management approach that mitigates the verification workload to increase performance with minimal effect on overall reliability. An anomaly-speculation-based filter checker is proposed to guide verification priority before the re-execution process starts. This technique exploits a value similarity property, defined by the frequent occurrence of partially identical values. Based on the biased distribution of the similarity distance measure, this paper further investigates exploiting similar values for soft error tolerance with anomaly speculation. Extensive measurements show that the majority of instructions produce values that differ from the previous result value in only a few bits. Experimental results show that the proposed scheme makes the processor 180% faster than a traditional fully fault-tolerant processor with minimal impact on the overall soft error rate.
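The value-similarity idea behind the filter checker can be sketched in software (the threshold value and function names are illustrative assumptions, not the paper's hardware design): a result that differs from the previous value in only a few bits is treated as "normal", while a large bit-distance is flagged for priority verification.

```python
# Sketch of an anomaly-speculation filter based on value similarity:
# small Hamming distance to the previous result -> likely benign,
# large distance -> anomalous, verify first.

def bit_distance(a, b, width=32):
    """Hamming distance between two integers over `width` bits."""
    mask = (1 << width) - 1
    return bin((a ^ b) & mask).count("1")

def filter_checker(prev_value, new_value, threshold=4):
    """Return True if the new value looks anomalous (verify first)."""
    return bit_distance(prev_value, new_value) > threshold

# A 1-bit change is "similar"; a 16-bit change is flagged as anomalous.
similar = filter_checker(0x00FF, 0x00FE)
anomalous = filter_checker(0x00FF, 0xFF00)
```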

A Life Cycle-Based Performance-Centric Business Process Management Framework For Continuous Process Improvement (지속적 프로세스 개선을 위한 성과 중심의 생애 주기 기반 비즈니스 프로세스 관리 프레임워크)

  • Han, Kwan Hee
    • The Journal of the Korea Contents Association / v.17 no.7 / pp.44-55 / 2017
  • Many enterprises have recently been pursuing process innovation or improvement to attain their performance goals. To comprehensively support business process execution, the concept of business process management (BPM) has been widely adopted. The BPM life cycle is composed of process diagnosis, (re)design, and enactment. To align with enterprise strategies, all BPM activities must be closely related to performance metrics, because the metrics are the drivers and evaluators of business process operations. The objective of this paper is to propose a life cycle-based BPM framework integrated with a process-based performance measurement model, in which business processes are systematically interrelated with key performance indicators (KPIs) during the entire BPM life cycle. By using the proposed framework, company practitioners involved in process innovation projects can easily and efficiently find the processes that most influence enterprise performance in the diagnosis phase, evaluate the performance of a newly designed process in the (re)design phase, and monitor the KPIs of the new business process and adjust its activities in the execution phase throughout the BPM life cycle.

A Case Study on R&D Process Innovation Using PI6sigma Methodology (PI6sigma를 이용한 R&D 프로세스 혁신 사례 연구)

  • Kim, Young-Jin;Jeong, Woo-Cheol;Choi, Young-Keun
    • Journal of Korean Society of Industrial and Systems Engineering / v.33 no.1 / pp.17-23 / 2010
  • Corporate R&D (Research and Development) plays a primary role in new product development, and its potential is the most crucial factor in estimating corporate future value. However, systemic inadequacies and inefficiencies are beginning to surface in R&D business processes, driven by product life cycles shortened to satisfy customer needs, global operations based on outsourcing strategies, and pressure to reduce product cost. A three-phased execution strategy for R&D innovation is introduced to establish a master plan for a new R&D model. From an information technology point of view, PLM (Product Life-cycle Management) is one of the total business solutions in the product development area. It is not a system but a strategic business approach that collaboratively manages a product from its beginning to its end of life across all business areas; PLM functions and capabilities are usually used as references for re-designing a new R&D process. BPA (Business Process Assessment) and 5DP (Design Parameters) in PI6sigma, developed by the Samsung SDS consulting division, are introduced to establish the R&D master plan and re-design the process, respectively. This research provides a case study of R&D process innovation: how process assessment and PMM (Process Maturity Model) can be applied to business processes, and how the process is re-designed by the 5DP method.

Implementation Strategy and Effect Analysis of MES for a Small and Medium PCB Production Company based on BPR Methodology (BPR 방법론에 기반한 중소 PCB 제조업체의 MES 구축 전략과 효과분석)

  • Kim, Gun-Yeon;Jin, Yoo-Eui;Noh, Sang-Do;Choi, Sang-Su;Jo, Yong-Ju;Choi, Seog-Ou
    • IE interfaces / v.24 no.3 / pp.231-240 / 2011
  • Manufacturing enterprises have made every effort to obtain competitiveness using various methodologies, such as information technology. To achieve competitiveness, they are adopting manufacturing execution systems (MES). An MES is a total management system that manages production from the initial product order to the quality inspection of the finished product, and it acts as an intermediary that fills the information gap between ERP and inspection machines and equipment. This paper describes the establishment of an effective strategy based on the BPR methodology and the implementation of an MES in a small and medium PCB manufacturing company with multiple product types and mixed process flows. We then propose an evaluation model based on the balanced scorecard (BSC) that considers non-financial as well as financial elements, and use it to analyze the benefits and effects of the MES.

Task failure resilience technique for improving the performance of MapReduce in Hadoop

  • Kavitha, C;Anita, X
    • ETRI Journal / v.42 no.5 / pp.748-760 / 2020
  • MapReduce is a framework that can process huge datasets in parallel and distributed computing environments. However, a single machine failure during the runtime of MapReduce tasks can increase completion time by 50%. MapReduce handles task failures by restarting the failed task and re-computing all input data from scratch, regardless of how much data had already been processed. To solve this issue, we need the computed key-value pairs to persist in a storage system to avoid re-computing them during the restarting process. In this paper, the task failure resilience (TFR) technique is proposed, which allows the execution of a failed task to continue from the point it was interrupted without having to redo all the work. Amazon ElastiCache for Redis is used as a non-volatile cache for the key-value pairs. We measured the performance of TFR by running different Hadoop benchmarking suites. TFR was implemented using the Hadoop software framework, and the experimental results showed significant performance improvements when compared with the performance of the default Hadoop implementation.
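The core TFR idea, persisting computed key-value pairs so a restarted task resumes where it stopped rather than recomputing from scratch, can be sketched as follows (a plain in-memory dict stands in for the Redis cache, and the word-count workload and key names are hypothetical):

```python
# Sketch of task-failure resilience via checkpointing: each processed
# record's partial counts and a progress index are persisted, so a
# restarted task skips everything already done.

cache = {}  # stand-in for a non-volatile key-value cache (e.g. Redis)

def run_map_task(records, progress_key="task1:progress"):
    """Count words, resuming from the last checkpointed record index."""
    start = cache.get(progress_key, 0)
    counts = cache.setdefault("task1:counts", {})
    for i in range(start, len(records)):
        for word in records[i].split():
            counts[word] = counts.get(word, 0) + 1
        cache[progress_key] = i + 1  # checkpoint after each record
    return counts

records = ["a b a", "b c"]
run_map_task(records)           # first run processes records 0..1
counts = run_map_task(records)  # a "restart" finds nothing left to redo
```

On the simulated restart, the progress index already equals the record count, so no input is re-processed, which is the behavior TFR aims for after a real task failure.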

A Study on Data Caching and Updates for Efficient Spatial Query Processing in Client/Server Environments (클라이언트/서버 환경에서 효율적인 공간질의 처리를 위한 데이터 캐싱과 변경에 관한 연구)

  • Moon, Sang-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.6 / pp.1269-1275 / 2003
  • This paper addresses several issues in data caching and the consistency of cached data, in order to process client queries efficiently in client/server environments. To this end, materialized spatial views are first adopted on the client side for data caching; these are called client views. An incremental update scheme using derivation relationships is also applied to keep the cached data of clients consistent with the rest of the server database. Materialized views support efficient query processing on the client side, but it is difficult to keep their contents consistent as the server database is updated. We devise cost functions for query execution and view maintenance, based on the cost of spatial operators, so as to process client queries efficiently. In our query processing scheme, when a client issues a query, the server decides whether or not to materialize it as a view by evaluating the related cost functions. Since the scheme supports a hybrid approach based on both view materialization and re-execution, it should improve query execution times in client/server environments.
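The materialize-or-re-execute decision described above can be sketched as a simple cost comparison (the cost figures and the function's shape are illustrative assumptions, not the paper's actual cost functions):

```python
# Sketch: materialize a query as a client view only when repeatedly
# re-executing it would cost more than building the view once and
# incrementally maintaining it across the expected reuses.

def should_materialize(exec_cost, maintenance_cost, expected_reuses):
    """Compare total cost of pure re-execution vs. materialization."""
    re_execution_total = exec_cost * expected_reuses
    materialized_total = exec_cost + maintenance_cost * expected_reuses
    return materialized_total < re_execution_total

# Hypothetical numbers: a query costing 10 to run, maintained at cost 2
# per update, reused 5 times -> materializing wins (20 < 50).
frequent = should_materialize(10, 2, 5)
# Costly maintenance (9) with few reuses (2) -> re-execution wins.
rare = should_materialize(10, 9, 2)
```

This captures the hybrid strategy's trade-off: expensive, frequently reused spatial queries become client views, while cheap or volatile ones are simply re-executed.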

Re-interpretation on the Making of the Guro Exporting Industrial Complex (구로 수출산업공단 조성의 재해석)

  • Chang, Sehoon
    • Journal of the Korean Geographical Society / v.49 no.2 / pp.160-177 / 2014
  • The Guro Exporting Industrial Complex became a core success story of the Korean economy in the 1960s. Re-examining the making of the Guro Complex, this paper intends to disclose the real and fictional aspects of this myth. For this purpose, the study inquires into the process, divided into the dimensions of conception, execution, and evaluation, from the viewpoint of political sociology. The results are as follows. The making of the Guro Complex was not propelled unilaterally by the state but passed through a process of conflicts and conciliations among various social forces such as the state, business groups, and local communities. As the complex was built on the basis of the state's full support, it is difficult to regard it as a case of 'parasitic industrialization'. And in spite of its ostensible success, it is difficult to conclude that its original goal, building a bonded exporting complex with investment from Japanese Koreans, was accomplished. Therefore, its whole aspect needs to be discovered from a comprehensive perspective, without being enchanted by its official results.
An Efficient Coordinator Election Algorithm in Synchronous Distributed Systems (동기적 분산 시스템에서 효율적인 조정자 선출 알고리즘)

  • Park, Sung-Hoon
    • Journal of KIISE:Computer Systems and Theory / v.31 no.10 / pp.553-561 / 2004
  • Leader election is an important problem in developing fault-tolerant distributed systems. A classic solution for leader election is Garcia-Molina's Bully algorithm, based on time-outs in synchronous systems. In this paper, we re-write the Bully algorithm to use a failure detector instead of explicit time-outs, and we show that the resulting algorithm, Bully_FD, is more efficient than Garcia-Molina's in terms of processing time. This is because Bully_FD uses the failure detector to learn quickly whether a process is up or down, which speeds up its execution. Especially when many processes are connected in the system and process crashes and recoveries are frequent, the Bully_FD algorithm is much more efficient than the classical Bully algorithm in terms of processing time.
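The advantage of an election that consults a failure detector rather than waiting out time-outs can be sketched as follows (the `alive` set standing in for the failure-detector oracle is an illustrative assumption, not the paper's FD implementation):

```python
# Sketch of a Bully-style election with a failure detector: instead of
# messaging every higher-id process and waiting for reply time-outs, a
# process asks the FD oracle which processes are up and decides at once.

def bully_fd_elect(process_ids, alive):
    """Return the highest-id process the failure detector reports alive."""
    candidates = [p for p in process_ids if p in alive]
    if not candidates:
        raise RuntimeError("no live coordinator candidate")
    # Bully rule: the live process with the largest id becomes coordinator.
    return max(candidates)

# Processes 3 and 5 have crashed; the FD reports {1, 2, 4} alive,
# so process 4 is elected without any time-out delay.
leader = bully_fd_elect([1, 2, 3, 4, 5], alive={1, 2, 4})
```

The classical algorithm pays a time-out per unresponsive higher-id process; here the election completes as soon as the FD answers, which mirrors the processing-time gain the paper reports.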