• Title/Summary/Keyword: Workflow management system


A Context-Aware System for Reliable RFID-based Logistics Management (RFID 기반 물류관리의 신뢰성 향상을 위한 상황인지 시스템 개발)

  • Jin, Hee-Ju;Kim, Hoontae;Lee, Yong-Han
    • The Journal of Society for e-Business Studies, v.18 no.2, pp.223-240, 2013
  • RFID (Radio Frequency Identification) uses tags applied to objects to identify and track them via radio waves. Recently it has been actively researched and introduced in logistics and manufacturing. RFID portals in supply chains are meant to identify all the tags within a given interrogation zone, so the hardware and software mechanisms for tag identification mostly focus on successfully reading multiple tags simultaneously. Such mechanisms, however, are ill-suited to determining the moving direction of tags, the sequence of consecutive tags, and the validity of tag reads from the viewpoint of a workflow. These problems cause many difficulties when implementing RFID portals in manufacturing environments, thereby costing RFID-system developers a considerable amount of time. In this research, we designed an RFID portal system with SDO (Sequence, Direction, and Object-flow) perception capability, using only the fundamental data supplied by ordinary RFID readers. With this work, RFID system developers can save a great amount of time when building RFID data-capturing applications in manufacturing environments.
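
The SDO idea can be illustrated with a small sketch (not the paper's actual algorithm): given raw (timestamp, antenna, tag) read events from the two sides of a portal, the moving direction follows from which antenna saw a tag first, and the tag sequence from each tag's first appearance in the zone. The two-antenna layout and all names here are illustrative assumptions.

```python
from collections import defaultdict

def infer_sdo(reads):
    """Infer per-tag direction and overall sequence from raw read events.

    `reads` is a list of (timestamp, antenna, tag_id) tuples as an ordinary
    RFID reader might emit them; antennas 'A' and 'B' are assumed to be
    mounted on the two sides of the portal.
    """
    first_seen = defaultdict(dict)          # tag -> {antenna: first timestamp}
    for ts, antenna, tag in sorted(reads):
        first_seen[tag].setdefault(antenna, ts)

    result = {}
    for tag, seen in first_seen.items():
        if 'A' in seen and 'B' in seen:
            direction = 'A->B' if seen['A'] < seen['B'] else 'B->A'
        else:
            direction = 'unknown'           # seen on one side only: invalid pass
        result[tag] = (min(seen.values()), direction)

    # Sequence of tags ordered by when each tag first entered the zone
    sequence = sorted(result, key=lambda t: result[t][0])
    return sequence, {t: d for t, (_, d) in result.items()}
```

A tag read only on one side is flagged `unknown`, which is one way a portal could reject an invalid read from the workflow's viewpoint.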

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering, v.2 no.2, pp.81-90, 2013
  • During the past decade, the computing area has seen many changes and continued attempts to develop new technologies. The brick walls in computing, especially the power wall, have shifted the computing paradigm from hardware (processors and system architecture) toward programming environments and application usage. The high-performance computing (HPC) area in particular has experienced dramatic changes, and it is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop Exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea's ICT industry is well developed and the country is considered one of the world's leaders, but not in the supercomputing area. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bio-informatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture consisting of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed for ease of use, based on integrated system-management components such as Bio Workflow management, Integrated Cluster management, and Heterogeneous Resource management. The MAHA supercomputing system was first installed in December 2011, with a theoretical performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes. The system will be upgraded to 100 TeraFLOPS in January 2013.

Dynamic Memory Allocation for Scientific Workflows in Containers (컨테이너 환경에서의 과학 워크플로우를 위한 동적 메모리 할당)

  • Adufu, Theodora;Choi, Jieun;Kim, Yoonhee
    • Journal of KIISE, v.44 no.5, pp.439-448, 2017
  • The workloads of large high-performance computing (HPC) scientific applications are steadily becoming "bursty" due to variable resource demands throughout their execution life-cycles. However, over-provisioning of virtual resources for optimal performance during execution remains a key challenge in scheduling scientific HPC applications. While over-provisioning guarantees peak performance of a scientific application in virtualized environments, it increases the amount of idle resources that are unavailable to other applications. Herein, we propose a memory-resource reconfiguration approach that allows the quick release of idle memory for new applications in OS-level virtualized systems, based on each application's resource-usage profile. We deployed a scientific workflow application in Docker, a lightweight OS-level virtualization system. In the proposed approach, the memory allocated to containers is fine-tuned at each stage of the workflow's execution life-cycle, improving overall memory utilization.
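
As a rough illustration of profile-driven reconfiguration (a sketch, not the paper's implementation), a per-stage memory cap can be derived from profiled peak usage and then applied to the container between workflow stages. The stage names, peak values, and headroom factor below are hypothetical.

```python
def stage_memory_limits(profile, headroom=1.2, floor_mb=256):
    """Compute a per-stage memory cap (MB) from profiled peak usage.

    `profile` maps workflow stage -> observed peak memory in MB
    (hypothetical numbers; real values would come from monitoring the
    container). A headroom factor guards against under-provisioning,
    and a floor keeps tiny stages runnable.
    """
    return {stage: max(int(peak * headroom), floor_mb)
            for stage, peak in profile.items()}

def docker_update_cmd(container, limit_mb):
    # In a real deployment the cap would be applied to the running
    # container, e.g. via `docker update --memory <limit>m <container>`;
    # here we only build the command string.
    return f"docker update --memory {limit_mb}m {container}"
```

Lowering the cap between stages is what releases idle memory back to the host for other applications.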

A Method of Applying Work Relationships for a Linear Scheduling Model (선형 공정계획 모델의 작업 관계성 적용 방법)

  • Ryu, Han-Guk
    • Journal of the Korea Institute of Building Construction, v.10 no.4, pp.31-39, 2010
  • Since the linear scheduling method was used for the Empire State Building schedule in 1929, it has been applied in various fields such as construction and manufacturing. When addressing concurrent critical paths in a linear construction schedule, empirical studies have stressed resource management, which should be applied to optimize workflow and to ensure flexible work productivity and continuous resource allocation. However, work relationships have usually been overlooked when deriving a linear schedule from an existing network schedule. Therefore, this research analyzes previous research on linear scheduling models and proposes a method for adopting the relationships of a network schedule in the linear schedule. To this end, it considers the work relationships that arise when changing a network schedule into a linear schedule, and then confirms how activities in the linear schedule move due to workspace changes, such as changes of physical floors. As a result, this research can serve as a basis for developing a system that generates a linear schedule from a network schedule.
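
Carrying finish-to-start relationships over from a network schedule begins with the standard CPM forward pass; the following minimal sketch (not the paper's method) computes earliest start/finish times from assumed durations and predecessor lists, which could then be mapped onto the linear schedule's time axis.

```python
def forward_pass(durations, predecessors):
    """Earliest start/finish times from finish-to-start relationships.

    `durations` maps activity -> duration; `predecessors` maps
    activity -> list of activities that must finish first. The network
    is assumed acyclic. This is the standard CPM forward pass.
    """
    start, finish = {}, {}
    remaining = dict(predecessors)
    while remaining:
        for act in list(remaining):
            preds = remaining[act]
            if all(p in finish for p in preds):      # all predecessors done
                start[act] = max((finish[p] for p in preds), default=0)
                finish[act] = start[act] + durations[act]
                del remaining[act]
    return start, finish
```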

A Conceptual Approach for Discovering Proportions of Disjunctive Routing Patterns in a Business Process Model

  • Kim, Kyoungsook;Yeon, Moonsuk;Jeong, Byeongsoo;Kim, Kwanghoon
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.2, pp.1148-1161, 2017
  • The success of a business process management system stands or falls on the quality of its business processes. Much research has therefore been devoted to the modeling and analysis of business processes in process-centered organizations. One line of work applies probabilistic theories to the analytical evaluation of business process models in order to improve their quality. In this paper, we propose a conceptual way of applying a probability theory of proportions to modeling business processes. There are four types of routing patterns in business process models - sequential, disjunctive, conjunctive, and iterative - to which the proportion theory is applicable. This paper focuses on applying the proportion theory to disjunctive routing patterns in particular, and formally names the resulting formal representation of a business process model the proportional information control net. We propose a conceptual approach to discover a proportional information control net from the enactment event histories of the corresponding business process, and describe a series of procedural frameworks and operational mechanisms, formally and graphically, that support the proposed approach. We believe this approach ought to be very useful for improving the quality of business processes by supporting the reengineering and redesign of the corresponding processes.
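
One simplified reading of discovering proportions from enactment histories (a sketch, not the paper's formal mechanism) is to count which activity immediately follows a given disjunctive (OR-split) activity in each trace and normalize the counts:

```python
from collections import Counter

def branch_proportions(traces, split_activity):
    """Estimate the proportion of each disjunctive branch.

    `traces` are enacted activity sequences from event histories; for the
    given split activity, count which activity immediately follows it and
    turn the counts into proportions.
    """
    follows = Counter()
    for trace in traces:
        for i, act in enumerate(trace[:-1]):
            if act == split_activity:
                follows[trace[i + 1]] += 1
    total = sum(follows.values())
    return {branch: n / total for branch, n in follows.items()}
```

With enough traces, these proportions annotate the OR-split of the control net with how often each branch was actually taken.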

Effect of Patient Safety Training Program of Nurses in Operating Room

  • Zhang, Peijia;Liao, Xin;Luo, Jie
    • Journal of Korean Academy of Nursing, v.52 no.4, pp.378-390, 2022
  • Purpose: This study developed an in-service training program for patient safety and aimed to evaluate the impact of the program on nurses in the operating room (OR). Methods: A pretest-posttest self-controlled survey was conducted on OR nurses from May 6 to June 14, 2020. An in-service training program for patient safety was developed on the basis of the knowledge-attitude-practice (KAP) theory through various teaching methods. The nurses' levels of safety attitude, cognition, and attitudes toward adverse event reporting were compared to evaluate the effect of the program. Nurses who attended the training were surveyed one week before the training (pretest) and two weeks after the training (posttest). Results: A total of 84 nurses participated in the study. After the training, the nurses' scores for safety attitude, cognition, and attitudes toward adverse event reporting increased significantly relative to the scores before the training (p < .001). The effects of safety training on the total score and on each dimension were above the moderate level. Conclusion: The proposed patient safety training program based on KAP theory improves the safety attitude of OR nurses. Further studies are required to develop an interprofessional patient safety training program. In addition to strengthening training, hospital managers need to focus on workflow, the management system, department culture, and other means of promoting a safety culture.

A Study on the Implement of AI-based Integrated Smart Fire Safety (ISFS) System in Public Facility

  • Myung Sik Lee;Pill Sun Seo
    • International Journal of High-Rise Buildings, v.12 no.3, pp.225-234, 2023
  • Even in the era of digital transformation, the safety sector still faces many problems that can prevent neither the occurrence nor the spread of human casualties. In an unexpected emergency, it is often difficult to respond with human physical ability alone. Casualties continue to occur at construction sites, manufacturing plants, and multi-use facilities used by many people in everyday life. When normal judgment becomes impossible during an emergency at a site with many safety blind spots, existing manual guidance methods are hard to apply. New variable guidance technology, which combines artificial intelligence and digital twins, can help prevent casualties by processing, in real time, the large amounts of data needed to derive appropriate countermeasures, going beyond merely identifying which safety accident occurred. When the simple control method of dividing and monitoring several CCTVs is digitally converted and combined with artificial intelligence and 3D digital-twin control technology, an intelligence augmentation (IA) effect can be achieved that strengthens the real-time safety decision-making ability required. With the enforcement of the Serious Disaster Enterprise Punishment Act, it has become important to deploy a smart location guidance system that resolves the decision-making delays occurring in safety accidents at various industrial sites and strengthens the real-time decision-making ability of field workers and managers. The smart location guidance system that combines artificial intelligence and digital twins consists of AIoT hardware, wireless communication network equipment, and an intelligent software platform. The intelligent software platform consists of a Builder that supports digital-twin modeling, a Watch that provides real-time control based on synchronization between real objects and digital-twin models, and a Simulator that supports the development and verification of various safety-management scenarios using intelligent agents. The system provides on-site monitoring using IoT equipment, CCTV-linked intelligent image analysis, intelligent operating procedures that support workflow modeling to immediately reflect the needs of the site, situational location guidance, and digital-twin virtual-fencing access control. This paper examines the limitations of traditional fixed, passive guidance methods, analyzes global technology trends for overcoming them, identifies the digital-transformation properties required to switch to intelligent variable smart location guidance, and explains the characteristics and components of the AI-based Integrated Smart Fire Safety (ISFS) system for public facilities.

Patent data analysis using clique analysis in a keyword network (키워드 네트워크의 클릭 분석을 이용한 특허 데이터 분석)

  • Kim, Hyon Hee;Kim, Donggeon;Jo, Jinnam
    • Journal of the Korean Data and Information Science Society, v.27 no.5, pp.1273-1284, 2016
  • In this paper, we analyzed patents on machine learning using keyword network analysis and clique analysis. To construct a keyword network, important keywords were extracted based on TF-IDF weights and their associations, and network structure analysis and clique analysis were performed. The density and clustering coefficient of the patent keyword network are low, which shows that machine-learning patent keywords are only weakly connected with each other. This is because the important machine-learning patents are mainly registered for applications of machine learning rather than for machine-learning techniques themselves. Our clique analysis also showed that the keywords found in cliques of 2005 patents cover subjects such as newsmaker verification, product forecasting, virus detection, biomarkers, and workflow management, while those in 2015 patents cover subjects such as digital imaging, payment cards, calling systems, mammogram systems, and price prediction. Clique analysis can be used not only for identifying specialized subjects but also for selecting search keywords in patent search systems.
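
A minimal sketch of the pipeline (not the paper's exact procedure): build a keyword network where an edge joins keywords that co-occur in enough patents, then enumerate maximal cliques with the basic Bron-Kerbosch algorithm. The co-occurrence threshold below stands in for the TF-IDF-weighted association used in the paper, and the keyword sets are hypothetical.

```python
from itertools import combinations

def keyword_graph(docs, min_cooccur=2):
    """Undirected keyword network from per-document keyword sets.

    An edge joins two keywords that co-occur in at least `min_cooccur`
    documents (a simple stand-in for a TF-IDF-weighted association).
    """
    count = {}
    for kws in docs:
        for a, b in combinations(sorted(set(kws)), 2):
            count[(a, b)] = count.get((a, b), 0) + 1
    graph = {}
    for (a, b), n in count.items():
        if n >= min_cooccur:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

def cliques(graph):
    """Enumerate maximal cliques with basic (pivotless) Bron-Kerbosch."""
    out = []
    def bk(r, p, x):
        if not p and not x:
            out.append(r)
            return
        for v in list(p):
            bk(r | {v}, p & graph[v], x & graph[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(graph), set())
    return out
```

Each maximal clique is a group of keywords that all co-occur pairwise, which is what surfaces specialized subjects like "virus detection" or "workflow management".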

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services, v.15 no.3, pp.45-52, 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies the human desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything/Things (IoE/IoT), and it includes many networked video cameras. These cameras, together with sensors, serve as one of the main data inputs for U-City services, and they constantly generate a huge amount of video information - genuinely big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often required to analyze accumulated video data to detect an event or find a figure, which demands a lot of computational power and usually takes a lot of time. Current research tries to reduce the processing time for big video data, and cloud computing is a good way to address this matter. Among the many applicable cloud-computing methodologies, MapReduce is interesting and attractive: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth of the data produced by networked video cameras; with good-quality cameras we are coping with real big data. Video surveillance systems were of limited use until cloud computing provided suitable methodologies, but they are now spreading widely in U-Cities. Video data are unstructured, so good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance - a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for video images, the storage client, and the streaming-IN component. The "video monitor" consists of a "video translator" and a "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives video data from the networked cameras, delivers it to the "storage client", and manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component, stores it, and helps other components access the storage. The "video monitor" component transfers video data by smooth streaming and manages the protocols: its "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video image, while its "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We suggest our own methodology for analyzing video images using MapReduce: the workflow of the video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and we found that our proposed system worked well; the results are presented with analysis. On our cluster system, we used compressed 1920x1080 (FHD) video data, the H.264 codec, and HDFS as video storage, and measured the processing time according to the number of frames per mapper. Tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found that the system's performance scales linearly.
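
The frames-per-mapper split and the map/reduce contract can be sketched locally without Hadoop (a toy illustration, not the paper's system; the per-frame `objects` field stands in for real image-analysis output, and all names are hypothetical):

```python
from collections import defaultdict

def map_frames(split):
    """Map step: a mapper receives a split of frames and emits
    (label, 1) for every object 'detected' in each frame. The `objects`
    list plays the role of a real detector's output."""
    for frame in split:
        for label in frame['objects']:
            yield label, 1

def reduce_counts(pairs):
    """Reduce step: sum the counts per label, as a MapReduce job would."""
    totals = defaultdict(int)
    for label, n in pairs:
        totals[label] += n
    return dict(totals)

def run_job(frames, frames_per_mapper):
    """Split the frames as HDFS input splits would, then map and reduce."""
    splits = [frames[i:i + frames_per_mapper]
              for i in range(0, len(frames), frames_per_mapper)]
    pairs = [pair for split in splits for pair in map_frames(split)]
    return reduce_counts(pairs)
```

Varying `frames_per_mapper` is the local analogue of the paper's experiment on processing time versus the number of frames per mapper.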

An Adaptive Business Process Mining Algorithm based on Modified FP-Tree (변형된 FP-트리 기반의 적응형 비즈니스 프로세스 마이닝 알고리즘)

  • Kim, Gun-Woo;Lee, Seung-Hoon;Kim, Jae-Hyung;Seo, Hye-Myung;Son, Jin-Hyun
    • Journal of KIISE:Computing Practices and Letters, v.16 no.3, pp.301-315, 2010
  • Recently, competition between companies has intensified, and the need to create new business value has increased. Many business organizations are beginning to realize the importance of business process management. Processes, however, often do not run the way they were initially designed, or inefficient process models are produced in the first place. This can be due to a lack of cooperation and understanding between business analysts and system developers. To solve this problem, business process mining, which can serve as the basis of business process re-engineering, has been recognized as an important concept. Current process mining research has focused only on extracting workflow-based process models from completed process logs, which limits the expression of various forms of business processes. A further disadvantage of this method is that process discovery and log scanning are themselves time-consuming, because the process logs must be re-scanned with each new update. In this paper, we present a modified FP-Tree algorithm for FP-Tree-based business processes, building on the structure used for association analysis in data mining. Our modified algorithm supports the discovery of a process model at the level of detail the user needs, without re-scanning the entire process logs when they are updated.
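
The incremental-update property the modified algorithm aims for can be illustrated with a prefix tree over process traces, in the spirit of an FP-Tree (a sketch, not the paper's algorithm): new traces are inserted one at a time, so counts stay current without re-scanning the whole log.

```python
class TraceTree:
    """A prefix tree over process traces, in the spirit of an FP-Tree.

    Each trace (a sequence of activities from a process log) is inserted
    once; new traces can be added incrementally, so the log never has to
    be re-scanned after an update.
    """
    def __init__(self):
        self.count = 0
        self.children = {}

    def insert(self, trace):
        node = self
        for act in trace:
            node = node.children.setdefault(act, TraceTree())
            node.count += 1            # one more trace passes through here

    def frequent_paths(self, min_count, prefix=()):
        """Yield activity paths supported by at least `min_count` traces;
        filtering by `min_count` gives the 'appropriate level' of model
        detail without revisiting the raw logs."""
        for act, child in self.children.items():
            if child.count >= min_count:
                path = prefix + (act,)
                yield path, child.count
                yield from child.frequent_paths(min_count, path)
```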