• Title/Summary/Keyword: data process

Data Extraction of Manufacturing Process for Data Mining (데이터 마이닝을 위한 생산공정 데이터 추출)

  • Park H.K.;Lee G.A.;Choi S.;Lee H.W.;Bae S.M.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.06a
    • /
    • pp.118-122
    • /
    • 2005
  • Data mining is the process of autonomously extracting useful information or knowledge from large data stores or sets. To analyze manufacturing process data with data mining, source data should be collected from the production process and transformed into an appropriate form. To extract such data from a database, a dedicated program normally has to be written for each database. This paper presents a program that makes it easy to extract data from databases in industry. Its advantage is that the user can extract data from all types of databases and database tables and interface with Teamcenter Manufacturing.
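
As a rough illustration of the kind of extraction step this entry describes, the sketch below pulls a process table out of a relational database and pivots it into a flat feature table ready for mining. It uses Python with sqlite3 and pandas as stand-ins; the table and column names (process_log, part_id, operation, value) are hypothetical, and the Teamcenter Manufacturing interface is not modeled.

```python
import sqlite3
import pandas as pd

def extract_process_data(db_path: str, table: str) -> pd.DataFrame:
    """Read a raw process table and pivot it into one row per part."""
    with sqlite3.connect(db_path) as conn:
        raw = pd.read_sql_query(f"SELECT part_id, operation, value FROM {table}", conn)
    # One column per operation, so mining algorithms see a flat feature table.
    return raw.pivot_table(index="part_id", columns="operation", values="value", aggfunc="mean")

if __name__ == "__main__":
    features = extract_process_data("plant.db", "process_log")   # hypothetical database and table
    print(features.head())
```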

The use of Local API(Anomaly Process Instances) Detection for Analyzing Container Terminal Event (로컬 API(Anomaly Process Instances) 탐지법을 이용한 컨테이너 터미널 이벤트 분석)

  • Jeon, Daeuk;Bae, Hyerim
    • The Journal of Society for e-Business Studies
    • /
    • v.20 no.4
    • /
    • pp.41-59
    • /
    • 2015
  • Information systems have been developed and used in various business areas, and as a result an abundance of history data (log data) has been stored, which in turn needs to be analyzed. Previous studies have focused on discovering relationships between events and have not identified anomalous instances; such instances were treated as noise and simply ignored. However, this kind of anomalous instance can occur repeatedly, so a new methodology to detect them is needed. In this paper, we propose LAPID (Local Anomaly Process Instance Detection), a methodology for discriminating anomalous process instances in log data. We specify a distance metric on the activity relation matrix of each instance and use it to detect APIs (Anomaly Process Instances). To verify the suggested methodology, we discovered characteristics of exceptional situations from log data, and we demonstrate the proposed methodology with an experiment on real data from a domestic port terminal.
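
A minimal sketch of the idea behind LAPID as summarized above: each process instance (trace) is represented by its activity directly-follows relation matrix, and an instance is scored by its average distance to its k nearest neighbours. The Frobenius distance, the choice of k, and the toy traces are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def relation_matrix(trace, activities):
    """Directly-follows count matrix of a single process instance."""
    idx = {a: i for i, a in enumerate(activities)}
    m = np.zeros((len(activities), len(activities)))
    for a, b in zip(trace, trace[1:]):
        m[idx[a], idx[b]] += 1
    return m

def local_anomaly_scores(traces, k=3):
    """Mean distance of each instance's matrix to its k nearest neighbours."""
    activities = sorted({a for t in traces for a in t})
    mats = [relation_matrix(t, activities) for t in traces]
    scores = []
    for i, mi in enumerate(mats):
        dists = sorted(np.linalg.norm(mi - mj) for j, mj in enumerate(mats) if j != i)
        scores.append(float(np.mean(dists[:k])))
    return scores

traces = [list("ABCD")] * 8 + [list("ABDC"), list("AXCD")]   # two deviating instances
for t, s in zip(traces, local_anomaly_scores(traces)):
    print("".join(t), round(s, 2))
```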

A Selection of the Point Rainfall Process Model Considered on Temporal Clustering Characteristics (시간적 군집특성을 고려한 강우모의모형의 선정)

  • Kim, Kee-Wook;Yoo, Chul-Sang
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.7
    • /
    • pp.747-759
    • /
    • 2008
  • In this study, a point rainfall process model that appropriately represents observed rainfall data was selected. Three point process models based on the Poisson process were considered as candidate rainfall models: the rectangular pulses Poisson process model (RPPM), the Neyman-Scott rectangular pulses Poisson process model (NS-RPPM), and a modified NS-RPPM. Statistical analyses were performed on the rainfall data simulated by each model. As a result, rainfall data simulated with the NS-RPPM and the modified NS-RPPM appropriately reproduce the statistics of the observed data at several aggregation levels. Rainfall data simulated with the modified NS-RPPM also show occurrence characteristics similar to the observed rainfall. In particular, the modified NS-RPPM most closely reproduces the high-intensity rainfall events that contribute heavily to natural hazards such as floods and landslides, and it gives the best results with respect to total rainfall amount, duration, and inter-event time. In conclusion, the modified NS-RPPM was found to be the most appropriate model for long-term rainfall simulation.
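
For orientation, the sketch below simulates the simplest of the models compared above, a rectangular pulses Poisson process (storms arrive as a Poisson process, each contributing a rectangular pulse with exponential duration and intensity). The parameter values are arbitrary, the hourly discretisation is coarse, and the Neyman-Scott variants would additionally generate a cluster of cells per storm origin.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rppm(hours, lam=0.02, mean_dur=6.0, mean_int=2.0):
    """Hourly rainfall depths (mm) from a rectangular pulses Poisson process."""
    rain = np.zeros(hours)
    t = rng.exponential(1.0 / lam)               # first storm arrival (Poisson process)
    while t < hours:
        dur = rng.exponential(mean_dur)          # pulse duration (h)
        inten = rng.exponential(mean_int)        # pulse intensity (mm/h)
        start, end = int(t), min(hours, int(np.ceil(t + dur)))
        rain[start:end] += inten                 # coarse hourly discretisation of the pulse
        t += rng.exponential(1.0 / lam)          # next arrival
    return rain

hourly = simulate_rppm(24 * 365)
print("annual total (mm):", round(hourly.sum(), 1))
print("wet-hour fraction:", round(float((hourly > 0).mean()), 3))
```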

Process analysis in Supply Chain Management with Process Mining: A Case Study (프로세스 마이닝 기법을 활용한 공급망 분석: 사례 연구)

  • Lee, Yonghyeok;Yi, Hojeong;Song, Minseok;Lee, Sang-Jin;Park, Sera
    • The Journal of Bigdata
    • /
    • v.1 no.2
    • /
    • pp.65-78
    • /
    • 2016
  • In today's rapidly changing business environment, it is crucial that several companies with core competences cooperate in order to deliver competitive products to the market faster. Many companies therefore participate in supply chains, and SCM (Supply Chain Management) becomes more important. To manage supply chains efficiently, the data from SCM systems must be analyzed. In this paper, we explain how to analyze SCM-related data with process mining techniques. After discussing the data requirements for process mining, several process mining techniques for the analysis are explained. To show the applicability of the techniques, we performed a case study with a company in South Korea. The case study shows that process mining is a useful tool for analyzing SCM data: an overall process, several performance measures, and social networks can easily be discovered and analyzed with the techniques.
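
A minimal sketch of the kind of analysis the case study applies: discovering a directly-follows graph and a simple case-level performance measure from an event log, here hand-rolled with pandas rather than a process mining toolkit. The column names (case_id, activity, timestamp) follow common process-mining conventions, and the tiny purchase-order log is invented for illustration.

```python
from collections import Counter
import pandas as pd

# A tiny invented purchase-order event log (case_id, activity, timestamp).
log = pd.DataFrame({
    "case_id":   ["PO1", "PO1", "PO1", "PO2", "PO2", "PO2"],
    "activity":  ["Order", "Ship", "Invoice", "Order", "Ship", "Invoice"],
    "timestamp": pd.to_datetime(["2016-01-01", "2016-01-03", "2016-01-04",
                                 "2016-01-02", "2016-01-06", "2016-01-07"]),
})

log = log.sort_values(["case_id", "timestamp"])
dfg = Counter()
for _, trace in log.groupby("case_id"):
    acts = trace["activity"].tolist()
    dfg.update(zip(acts, acts[1:]))                       # directly-follows relations

throughput = log.groupby("case_id")["timestamp"].agg(lambda s: s.max() - s.min())
print(dfg)            # e.g. Counter({('Order', 'Ship'): 2, ('Ship', 'Invoice'): 2})
print(throughput)     # per-case throughput time, a simple performance measure
```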

Analysis of Drought Spatial Distribution Using Poisson Process (포아송과정을 이용한 가뭄의 공간분포 분석)

  • Yoo, Chul-Sang;Ahn, Jae-Hyun;Ryoo, So-Ra
    • Journal of Korea Water Resources Association
    • /
    • v.37 no.10
    • /
    • pp.813-822
    • /
    • 2004
  • This study quantifies and compares drought return and duration characteristics by applying the Poisson process and by analyzing the observed data directly, and it also compares the resulting drought spatial distributions for the Gyunggi province. Monthly rainfall data are used to construct the SPI as a drought index. In particular, this study focuses on evaluating the Poisson process model when it is applied to records of various lengths, as in the spatial analysis of drought. The results are summarized as follows. (1) The Poisson process is found to be effective for the quantification of drought, especially when the record is short. Under the Poisson process, two neighboring sites show similar drought characteristics regardless of the record length, so the overall drought pattern becomes smoother than that derived directly from the observed data. (2) When the record length differs greatly from site to site, a spatial analysis of drought based on the model seems better than one based on direct data analysis. This study also found a more obvious spatial pattern of drought occurrence and duration when applying the Poisson process.
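
As a worked example of the quantification described above, under assumed data, the sketch below counts drought onsets in a synthetic SPI series, fits a homogeneous Poisson occurrence rate, and reports the implied mean return period and mean duration. The SPI values and the -1.0 threshold are illustrative only.

```python
import numpy as np

def drought_events(spi, threshold=-1.0):
    """Return (number of onsets, list of durations in months) of runs below the threshold."""
    dry = spi < threshold
    prev_wet = np.concatenate(([True], ~dry[:-1]))
    onsets = np.flatnonzero(dry & prev_wet)
    durations = []
    for start in onsets:
        end = start
        while end < len(spi) and dry[end]:
            end += 1
        durations.append(end - start)
    return len(onsets), durations

rng = np.random.default_rng(1)
spi = rng.normal(size=600)                      # a synthetic 50-year monthly SPI series
n, durs = drought_events(spi)
lam = n / (len(spi) / 12)                       # drought occurrences per year (Poisson rate)
print(f"rate = {lam:.2f}/yr, mean return period = {1/lam:.1f} yr, mean duration = {np.mean(durs):.1f} months")
```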

An XPDL-Based Workflow Control-Structure and Data-Sequence Analyzer

  • Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1702-1721
    • /
    • 2019
  • A workflow process (or business process) management system helps to define, execute, monitor and manage workflow models deployed on a workflow-supported enterprise, and the system is in general compartmentalized into a modeling subsystem and an enacting subsystem. The modeling subsystem's functionality is to discover and analyze workflow models via a theoretical modeling methodology like ICN, to define them graphically via a graphical representation notation like BPMN, and to deploy those graphically defined models onto the enacting subsystem by transforming them into textual models represented by a standardized workflow process definition language like XPDL. Before deploying the defined workflow models, it is very important to inspect their syntactical correctness as well as their structural properness in order to minimize the loss of effectiveness and the depreciation of efficiency in managing the corresponding workflow models. In this paper, we are particularly interested in verifying very large-scale and massively parallel workflow models, so we need a sophisticated analyzer that automatically analyzes these specialized and complex styles of workflow models. The analyzer devised in this paper is able to analyze not only the structural complexity but also the data-sequence complexity. The structural complexity is based upon combinational usages of control-structure constructs such as subprocess, exclusive-OR, parallel-AND and iterative-LOOP primitives while preserving the matched pairing and proper nesting properties, whereas the data-sequence complexity is based upon combinational usages of the relevant data repositories, such as data definition sequences and data use sequences. Through the analyzer devised and implemented in this paper, we are eventually able to achieve systematic verification of syntactical correctness as well as effective validation of structural properness on these complicated and large-scale workflow models. As an experimental study, we apply the implemented analyzer to an exemplary large-scale and massively parallel workflow process model, the Large Bank Transaction Workflow Process Model, and show the structural complexity analysis results via a series of operational screens captured from the implemented analyzer.
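
The matched-pairing and proper-nesting check mentioned above can be illustrated on a simplified token representation of a control flow (this is not real XPDL parsing): every split or loop-begin must be closed by a join or loop-end of the same kind, in last-in-first-out order.

```python
# Each closing construct must match the most recently opened construct (LIFO order).
PAIRS = {"and_join": "and_split", "xor_join": "xor_split", "loop_end": "loop_begin"}

def properly_nested(tokens):
    stack = []
    for tok in tokens:
        if tok in PAIRS.values():                 # opening: split or loop-begin
            stack.append(tok)
        elif tok in PAIRS:                        # closing: join or loop-end
            if not stack or stack.pop() != PAIRS[tok]:
                return False
    return not stack                              # every opening was closed

print(properly_nested(["and_split", "xor_split", "xor_join", "and_join"]))  # True
print(properly_nested(["and_split", "xor_split", "and_join", "xor_join"]))  # False: improper nesting
```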

Analyzing Production Data using Data Mining Techniques (데이터마이닝 기법의 생산공정데이터에의 적용)

  • Lee H.W.;Lee G.A.;Choi S.;Bae K.W.;Bae S.M.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.06a
    • /
    • pp.143-146
    • /
    • 2005
  • Many data mining techniques have proved useful in revealing important patterns from large data sets. In particular, data mining plays an important role in customer data analysis in the financial industry and in electronic commerce, there are many data-mining-related research papers in the semiconductor and automotive industries, and data mining techniques are also applied in bioinformatics. To satisfy customers' various requirements, each industry must develop new processes with more accurate production criteria and spend more to guarantee product quality. In this context, we apply data mining techniques to production-related data such as test data, field claim data, and POP (point of production) data in the automotive parts industry. Data collection and transformation techniques should be applied to enhance the analysis results. We also classify various types of manufacturing processes and propose an analysis scheme according to the type of manufacturing process. As a result, we could find inter- and intra-process relationships and critical features to monitor the current status of each process, which helps an industry raise its profit and reduce its failure cost.
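
One plausible reading of the feature analysis described above is a supervised ranking of process variables against a pass/fail outcome; the sketch below does this with a random forest on synthetic data. The feature names and the failure mechanism are invented, and the paper does not necessarily use this particular technique.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "weld_current": rng.normal(200, 10, n),      # invented process measurements
    "press_force":  rng.normal(50, 5, n),
    "cycle_time":   rng.normal(30, 3, n),
})
fail = (data["weld_current"] < 190).astype(int)  # toy failure mechanism: low weld current

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data, fail)
for name, imp in sorted(zip(data.columns, model.feature_importances_), key=lambda x: -x[1]):
    print(f"{name:>13}: {imp:.2f}")              # weld_current should dominate the ranking
```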

XML-Based Network Services for Real-Time Process Data (실시간 공정 데이터를 위한 XML 기반 네트워크 서비스)

  • Choo, Young-Yeol;Song, Myoung-Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.2
    • /
    • pp.184-190
    • /
    • 2008
  • This paper describes a message model based on XML (eXtensible Markup Language) for presenting real-time data from sensors and instruments in manufacturing processes as a web service. HTML (Hyper Text Markup Language) is suitable for displaying non-real-time multimedia data on the web but inadequate for describing real-time data from process control plants. For an XML-based web service of process data, an XML format for data presentation was proposed after investigating data from various instruments at steel-making plants. Considering the transmission delay inevitably caused by the increased message length and the processing delay from transforming raw data into the defined format, both of which are critical for the operation of a real-time system, the performance was evaluated by simulation. In the simulation, we assumed two implementation models for the transformation function: in one model, transformation is done at an SCC (Supervisory Control Computer) after it receives real-time data from the instruments; in the other, transformation is carried out at the instruments before the data are transmitted to the SCC. Various tests were conducted under different offered loads and data lengths, and their results are described.
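
A minimal sketch of an XML message for one real-time process reading, in the spirit of the format described above; the element and attribute names are hypothetical rather than the schema proposed in the paper. Whether encoding happens on the instrument side or at the SCC corresponds to the two implementation models whose delays the paper compares.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def encode_reading(tag: str, value: float, unit: str) -> bytes:
    """Build one XML message for a single instrument reading."""
    msg = ET.Element("reading", {"tag": tag, "unit": unit})
    ET.SubElement(msg, "timestamp").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(msg, "value").text = f"{value:.3f}"
    return ET.tostring(msg)

def decode_reading(payload: bytes) -> dict:
    """Parse the message back into a plain dictionary."""
    msg = ET.fromstring(payload)
    return {"tag": msg.get("tag"), "unit": msg.get("unit"),
            "timestamp": msg.findtext("timestamp"), "value": float(msg.findtext("value"))}

payload = encode_reading("FURNACE_TEMP_01", 1532.7, "degC")
print(payload.decode())
print(decode_reading(payload))
```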

Delaunay triangulation for efficient reduction of measured point data (측정데이터의 효율적 감소를 위한 Delaunay 삼각형 분할의 적용)

  • 허성민;김호찬;이석희
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2001.04a
    • /
    • pp.53-56
    • /
    • 2001
  • Reverse engineering has been widely used for shape reconstruction of an object without CAD data; it includes steps such as scanning a clay or wood model and generating manufacturing data in an STL file. A new approach that removes point data with Delaunay triangulation is introduced to deal with the size problem of STL files and the difficulties in operating the RP process. This approach can be used to reduce the number of measured points from a laser scanner within a specified tolerance, and thus it saves the time for handling point data during the modeling process and the time for verifying and slicing the STL model during the RP process. The developed software enables the user to specify the criteria for selecting groups of triangles either by the angle between triangles or by the percentage of triangles reduced, so that accurate RP models can support an automated process.
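
A simplified 2.5D sketch of angle-based point reduction in the spirit of the approach above: triangulate the (x, y) projection with a Delaunay triangulation, then drop points whose surrounding triangles are nearly coplanar. The 5-degree tolerance and the height-field assumption are illustrative simplifications; selection by percentage of triangles reduced, which the developed software also supports, is not shown here.

```python
import numpy as np
from scipy.spatial import Delaunay

def reduce_points(points, angle_tol_deg=5.0):
    """Keep only points where the triangulated surface bends more than the tolerance."""
    tri = Delaunay(points[:, :2])                # triangulate the (x, y) projection
    normals = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        n = np.cross(b - a, c - a)
        if n[2] < 0:                             # orient all normals upward (height-field assumption)
            n = -n
        normals.append(n / np.linalg.norm(n))
    normals = np.asarray(normals)

    keep = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        adj = np.flatnonzero((tri.simplices == i).any(axis=1))   # triangles touching point i
        if len(adj) < 2:
            keep[i] = True                       # keep sparse/boundary points
            continue
        worst_cos = np.clip((normals[adj] @ normals[adj].T).min(), -1.0, 1.0)
        keep[i] = np.degrees(np.arccos(worst_cos)) > angle_tol_deg
    return points[keep]

pts = np.random.default_rng(0).random((500, 3))
pts[:, 2] = 0.05 * pts[:, 0]                     # an almost planar surface: most points are redundant
print(len(pts), "->", len(reduce_points(pts)))
```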

The Development of Monitoring System in the Scrubber of Semiconductor Manufacture Processing (반도체 공정의 SCRUBBER 감시 시스템 개발)

  • Kim, Joohn-Hwan;Kim, Sang-Woo;Kim, Beung-Jin;Moon, Hak-Yong;Jeon, Hee-Jong
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2390-2392
    • /
    • 1998
  • In this paper, we discuss the development of a monitoring system with data-processing equipment that transfers data from Remote Terminal Units (RTUs) to a monitoring computer. The RTUs sense the temperature, pressure, and PLC (Programmable Logic Controller) node conditions of the scrubbers in a semiconductor manufacturing process. The data-processing equipment is connected to every RTU and to the monitoring computer through serial communication; it receives information from the RTUs, processes the data, and transfers them to the monitoring computer. To avoid congestion in data communication, a task scheduling algorithm using a real-time operating system (RTOS) is embedded in ROM as part of the data-processing equipment.
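
A minimal sketch of the polling loop such data-processing equipment might run, assuming pyserial and a simple "KEY=VALUE;..." frame format. The port names, baud rate, and frame layout are assumptions for illustration, not the protocol used in the paper, and the real system would forward the parsed readings to the monitoring computer under an RTOS scheduler rather than print them.

```python
import serial  # pyserial

RTU_PORTS = ["/dev/ttyUSB0", "/dev/ttyUSB1"]     # hypothetical serial ports, one per RTU

def poll_rtu(port: str) -> dict:
    """Read one frame such as TEMP=23.5;PRESS=1.2;PLC=OK and parse it into a dict."""
    with serial.Serial(port, baudrate=9600, timeout=1.0) as link:
        frame = link.readline().decode(errors="replace").strip()
    return dict(field.split("=", 1) for field in frame.split(";") if "=" in field)

if __name__ == "__main__":
    for port in RTU_PORTS:
        print(port, poll_rtu(port))              # the real system forwards this to the monitoring computer
```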
