• Title/Summary/Keyword: Large Scale Data

An XPDL-Based Workflow Control-Structure and Data-Sequence Analyzer

  • Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1702-1721 / 2019
  • A workflow process (or business process) management system helps to define, execute, monitor and manage workflow models deployed on a workflow-supported enterprise, and such a system is generally compartmentalized into a modeling subsystem and an enacting subsystem. The modeling subsystem's functionality is to discover and analyze workflow models via a theoretical modeling methodology like ICN, to define them graphically via a notation like BPMN, and to deploy the graphically defined models onto the enacting subsystem by transforming them into textual models represented in a standardized workflow process definition language like XPDL. Before deploying the defined workflow models, it is very important to inspect their syntactical correctness as well as their structural properness, so as to minimize losses of effectiveness and efficiency in managing the corresponding workflow models. In this paper, we are particularly interested in verifying very large-scale and massively parallel workflow models, which require a sophisticated analyzer capable of automatically analyzing these specialized and complex styles of workflow models. The analyzer devised in this paper analyzes not only the structural complexity but also the data-sequence complexity. The structural complexity is based upon combinational usages of control-structure constructs such as the subprocess, exclusive-OR, parallel-AND and iterative-LOOP primitives, subject to the matched-pairing and proper-nesting properties, whereas the data-sequence complexity is based upon combinational usages of the relevant data repositories, such as data definition sequences and data use sequences. Through the analyzer devised and implemented in this paper, we eventually achieve systematic verification of syntactical correctness as well as effective validation of structural properness for such complicated and large-scale workflow models. As an experimental study, we apply the implemented analyzer to an exemplary large-scale and massively parallel workflow process model, the Large Bank Transaction Workflow Process Model, and show the structural complexity analysis results via a series of operational screens captured from the implemented analyzer.
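
A central check the abstract names is that split/join constructs must satisfy matched pairing and proper nesting. Below is a minimal sketch of such a check over a linearized stream of hypothetical construct tokens; the paper's actual analyzer parses XPDL documents, which this sketch does not attempt.

```python
# Stack-based matched-pairing / proper-nesting check for workflow
# control-structure constructs. Token names are illustrative, not XPDL.

OPENERS = {"AND-split": "AND-join", "XOR-split": "XOR-join", "LOOP-begin": "LOOP-end"}
CLOSERS = set(OPENERS.values())

def is_properly_nested(tokens):
    """True if every split/begin has a matching join/end and no
    two constructs interleave (proper nesting)."""
    stack = []
    for tok in tokens:
        if tok in OPENERS:
            stack.append(OPENERS[tok])      # remember the required closer
        elif tok in CLOSERS:
            if not stack or stack.pop() != tok:
                return False                # mismatched or crossing pair
    return not stack                        # no unclosed construct left

# An AND block properly nested inside a LOOP block -> True
print(is_properly_nested(["LOOP-begin", "AND-split", "task", "AND-join", "LOOP-end"]))
# Crossing LOOP and AND constructs -> False
print(is_properly_nested(["AND-split", "LOOP-begin", "AND-join", "LOOP-end"]))
```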

Analysis on Economies of Scale in Offshore Fishery Using a Translog Cost Function (초월대수비용함수를 이용한 근해어업의 규모의 경제성 분석)

  • Shin, Yongmin;Sim, Seonghyun
    • Ocean and Polar Research / v.39 no.1 / pp.61-71 / 2017
  • This study estimates a cost function from offshore fishery cost data and analyzes the economies of scale of Korea's offshore fishery. A translog cost function was used for the estimation, and a panel analysis was implemented on the panel data. In addition, the annual economies of scale of the offshore fishery and the economies of scale of 14 offshore fisheries in 2015 were analyzed using the estimated translog cost function coefficients. The analysis of economies of scale of Korea's offshore fishery showed that, with the exception of 2003, economies of scale exist in all periods. However, as the industry has almost reached the minimum efficient scale, further scale expansion would bring inefficiency. Thus, according to the analysis, Korea's offshore fishery requires a scale-reduction policy rather than a scale-expansion policy, which coincides with the current government's fishery restructuring policy and its practice, such as the fishing vessel buyback program. The analysis of economies of scale of each offshore fishery in 2015 showed that economies of scale exist for each fishery, with the exception of five fisheries, including the large pair-trawl, the large otter trawl, and the large purse seine. This strongly suggests that these five fisheries, which do not exhibit economies of scale, need urgent scale reduction and should be the first targets of the government's fishery restructuring policy.
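
For reference, a standard translog cost specification and the scale-economies measure it implies; the paper's exact variable set (input prices, output measure) may differ from this illustrative form.

```latex
% Standard translog cost specification (illustrative variable set):
\ln C \;=\; \alpha_0 \;+\; \beta_y \ln y \;+\; \tfrac{1}{2}\,\beta_{yy}(\ln y)^2
\;+\; \sum_i \alpha_i \ln p_i
\;+\; \tfrac{1}{2}\sum_i\sum_j \gamma_{ij}\,\ln p_i \ln p_j
\;+\; \sum_i \rho_{iy}\,\ln p_i \ln y
% Scale-economies measure implied by the cost elasticity of output:
SCE \;=\; 1 \;-\; \frac{\partial \ln C}{\partial \ln y}
    \;=\; 1 \;-\; \Bigl(\beta_y + \beta_{yy}\ln y + \sum_i \rho_{iy}\ln p_i\Bigr)
```

Under this convention, SCE > 0 indicates unexploited economies of scale, while SCE near zero marks the minimum efficient scale that the abstract refers to.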

Android phone app. development for large scale classes (대규모 강의를 위한 안드로이드 폰 앱 개발)

  • Kim, Il-Min
    • Journal of Digital Convergence / v.9 no.6 / pp.343-354 / 2011
  • Interaction between a professor and students is very important in humanities classes. In large-scale classes in particular, a professor cannot exchange opinions with all students. We have developed a smartphone app to overcome this limit on interaction between a professor and students in large-scale classes. The app developed in this paper exploits the smartphone's data processing and wireless communication abilities. In this paper, we show that the smartphone can be a useful tool for increasing the educational effect of large-scale classes.

EVALUATING CRITICAL SUCCESS FACTORS FOR ACCURATE FIRST COST ESTIMATES OF LARGE-SCALE CONSTRUCTION PROJECTS

  • Jin-Lee Kim;Ok-Kyue Kim
    • International conference on construction engineering and project management / 2009.05a / pp.354-360 / 2009
  • The demand for large-scale construction projects such as mega-projects is increasing greatly, due to rapid population growth as well as the need to replace existing buildings and infrastructure. Increasing costs of materials, supplies, and labor require first cost estimates at the preliminary planning stage to be as accurate as possible. This paper presents the results of a survey evaluating nine critical success factors that influence accurate first cost estimates for large-scale projects, drawn from practical experience. It then examines the current cost structures of construction companies for large-scale projects, followed by the causes of cost and schedule overruns. Twenty completed surveys were collected, and the Analytic Hierarchy Process was applied to analyze the data. The results indicate that technology issues, contract type, and social and environmental impacts are the significant leading factors for accurate first cost estimates of large-scale construction projects.
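
The ranking step relies on the Analytic Hierarchy Process. A minimal sketch of the AHP weight computation on an illustrative 3x3 pairwise-comparison matrix follows; the paper's nine-factor survey data are not reproduced here.

```python
# AHP priority weights = principal eigenvector of the pairwise-comparison
# matrix, plus the consistency ratio (CR < 0.1 is conventionally acceptable).
import numpy as np

def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                       # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                   # normalized weights
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 9: 1.45}[n]   # Saaty's random index
    return w, ci / ri

# Illustrative judgments: factor 1 moderately dominates factors 2 and 3.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 3))
```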

Parameter Tuning in Support Vector Regression for Large Scale Problems (대용량 자료에 대한 서포트 벡터 회귀에서 모수조절)

  • Ryu, Jee-Youl;Kwak, Minjung;Yoon, Min
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.1 / pp.15-21 / 2015
  • In support vector machines, the values of the parameters included in the kernels strongly affect generalization ability, and it is often difficult to determine appropriate values for those parameters in advance. Our studies have shown that the burden of deciding these parameter values in support vector regression can be reduced by utilizing ensemble learning. However, the straightforward application of this method to large-scale problems is too time-consuming. In this paper, we propose a method in which the original data set is decomposed into a number of subsets in order to reduce the burden of parameter tuning in support vector regression, particularly for large-scale and imbalanced data sets.
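
A minimal sketch of the subset idea on synthetic data, assuming scikit-learn; aggregating the per-subset parameter choices with a median is an assumption standing in for the paper's ensemble-learning step.

```python
# Tune SVR parameters on small random subsets, aggregate the choices,
# then fit the full data once with the aggregated parameters.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (5000, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 5000)

grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
picks = []
for _ in range(5):                               # 5 random sub data sets
    idx = rng.choice(len(X), 500, replace=False)
    gs = GridSearchCV(SVR(kernel="rbf"), grid, cv=3).fit(X[idx], y[idx])
    picks.append((gs.best_params_["C"], gs.best_params_["gamma"]))

C, gamma = np.median(picks, axis=0)              # aggregate subset choices
model = SVR(kernel="rbf", C=C, gamma=gamma).fit(X, y)
print(f"C={C}, gamma={gamma}, R^2={model.score(X, y):.3f}")
```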

A Study on Light-weight Algorithm of Large scale BIM data for Visualization on Web based GIS Platform (웹기반 GIS 플랫폼 상 가시화 처리를 위한 대용량 BIM 데이터의 경량화 알고리즘 제시)

  • Kim, Ji Eun;Hong, Chang Hee
    • Spatial Information Research / v.23 no.1 / pp.41-48 / 2015
  • BIM technology captures data across a facility's life cycle through 3D modeling, and as a result a single building produces huge files containing massive amounts of data. IFC, the standard format, is one such case, and it raises issues in large-scale data processing based on the geometry and property information of objects. Such large-scale data increases rendering time and burdens the graphics card, making on-screen visualization inefficient for the user. Light-weighting of large-scale BIM data is therefore essential for both program performance and quality. This paper surveys and confirms light-weighting techniques from domestic and international research. To control and visualize large-scale BIM data effectively, we propose and verify a technique that can optimize BIM characteristics. For operating large-scale facility data on a web-based GIS platform, the quality of screen transitions on the user side and effective memory operation were secured.
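
The abstract does not detail the algorithm, so the sketch below shows only one generic light-weighting step often applied to geometry before web visualization, vertex deduplication after coordinate quantization; the paper's BIM-specific optimizations (e.g., IFC property filtering) are not reproduced here.

```python
# Merge vertices that coincide after rounding, then reindex the faces.
# Shared-vertex meshes are smaller to transmit and render on the web.
import numpy as np

def dedup_vertices(vertices, faces, decimals=3):
    """Quantize coordinates, merge duplicates, remap face indices."""
    rounded = np.round(vertices, decimals)
    unique, inverse = np.unique(rounded, axis=0, return_inverse=True)
    return unique, inverse[faces]

verts = np.array([[0, 0, 0], [1, 0, 0], [1.0004, 0, 0], [0, 1, 0]])
faces = np.array([[0, 1, 3], [0, 2, 3]])        # two nearly identical triangles
v2, f2 = dedup_vertices(verts, faces)
print(len(verts), "->", len(v2), "vertices")    # 4 -> 3
```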

Sampled-Data Observer-Based Decentralized Fuzzy Control for Nonlinear Large-Scale Systems

  • Koo, Geun Bum;Park, Jin Bae;Joo, Young Hoon
    • Journal of Electrical Engineering and Technology / v.11 no.3 / pp.724-732 / 2016
  • In this paper, a sampled-data observer-based decentralized fuzzy control technique is proposed for a class of nonlinear large-scale systems that can be represented as a Takagi-Sugeno fuzzy system. The premise variable is assumed to be measurable for the design of the observer-based fuzzy controller, and the closed-loop system is obtained. Based on an exact discretized model of the closed-loop system, a stability condition is derived for the closed-loop system, and this condition is converted into the linear matrix inequality (LMI) format. Finally, an example is provided to verify the effectiveness of the proposed techniques.
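
For orientation, the Takagi-Sugeno form typically assumed for each subsystem, and a representative discrete-time Lyapunov condition of the kind that gets cast as LMIs; the paper derives its own conditions from its exact discretized closed-loop model, so this is notation-level context only.

```latex
% T-S fuzzy model of a nonlinear subsystem (r rules, premise variable z(t)):
\dot{x}(t) \;=\; \sum_{i=1}^{r} \theta_i\bigl(z(t)\bigr)\,\bigl(A_i x(t) + B_i u(t)\bigr),
\qquad \theta_i \ge 0,\quad \sum_{i=1}^{r} \theta_i = 1
% Representative discrete-time Lyapunov condition cast as LMIs,
% with G_i the exactly discretized closed-loop matrices over one sampling period:
\exists\, P = P^{\mathsf{T}} \succ 0:\qquad
G_i^{\mathsf{T}} P\, G_i - P \prec 0, \qquad i = 1,\dots,r
```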

Development of the design methodology for large-scale database based on MongoDB

  • Lee, Jun-Ho;Joo, Kyung-Soo
    • Journal of the Korea Society of Computer and Information / v.22 no.11 / pp.57-63 / 2017
  • The recent explosive growth of big data is characterized by the continuous generation of data in large amounts and in unstructured formats. Existing relational database technologies are inadequate for handling such big data due to their limited processing speed and the significant cost of storage expansion. Thus, big data processing technologies, normally based on distributed file systems, distributed database management, and parallel processing technologies, have arisen as core technologies for implementing big data repositories. In this paper, we propose a design methodology for large-scale databases based on MongoDB, by extending the information engineering methodology based on the E-R data model.
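
A minimal sketch of the core mapping decision when extending an E-R design to MongoDB, embedding 1:1 and 1:N related entities in a single document; collection and field names are illustrative, and a local `pymongo` setup is assumed.

```python
# A 1:N relationship (Order -< OrderLine) becomes an embedded array
# instead of a joined table. Requires a local MongoDB instance.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

order = {
    "orderNo": 1001,
    "customer": {"name": "Kim", "grade": "VIP"},   # embedded 1:1 entity
    "lines": [                                     # embedded 1:N entities
        {"item": "keyboard", "qty": 2, "price": 35000},
        {"item": "mouse", "qty": 1, "price": 12000},
    ],
}
db.orders.insert_one(order)

# One read returns the whole aggregate -- no join needed.
print(db.orders.find_one({"orderNo": 1001}, {"_id": 0}))
```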

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management / 2022.06a / pp.1243-1244 / 2022
  • In recent years, the growing interest in off-site construction has led factories to scale up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges remain in deploying this technology in large-scale field applications. One issue is collecting and transmitting vast amounts of video data: continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate information about objects of different sizes and scales into a single scene. Objects of various sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with existing object detection algorithms, it is difficult to simultaneously detect objects with significant differences in size, because collecting and training on massive amounts of object image data at various scales would be necessary. This study therefore developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein image or video data is processed near the originating source, not on a centralized server or cloud. By running inference on the AI computing modules attached to the CCTVs and communicating only the processed information to the server, excessive network traffic can be reduced. Small-object detection is a method for detecting objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting set according to the target object size; small objects are detected in the cropped and magnified images and then expressed in the original image. In the inference process, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of a construction site, which existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, and to enable the optimization of safety routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
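
A minimal sketch of the tiling idea behind the small-object detection step: split the frame into a grid, detect per tile, and shift boxes back into full-frame coordinates. The `detect` callback stands in for any detector (the paper uses YOLO-v5); its `(x1, y1, x2, y2, label)` output format is an assumption.

```python
# Crop the frame into rows x cols tiles, run the detector on each tile,
# then translate tile-local boxes into full-frame coordinates.
import numpy as np

def detect_tiled(image, detect, rows=2, cols=3):
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols
    boxes = []
    for r in range(rows):
        for c in range(cols):
            tile = image[r*th:(r+1)*th, c*tw:(c+1)*tw]
            for x1, y1, x2, y2, label in detect(tile):
                boxes.append((x1 + c*tw, y1 + r*th,
                              x2 + c*tw, y2 + r*th, label))
    return boxes

# Dummy detector that "finds" one box per tile, for demonstration only.
frame = np.zeros((720, 1920, 3), dtype=np.uint8)
print(detect_tiled(frame, lambda t: [(10, 10, 50, 50, "worker")]))
```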

A High-rate GPS Data Processing for Large-scale Structure Monitoring (대형구조물 모니터링을 위한 high-rate GPS 자료처리)

  • Bae, Tea-Suk
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2010.04a / pp.181-182 / 2010
  • For real-time displacement monitoring of large-scale structures, high-rate (>1 Hz) GPS data processing is necessary, which is not possible even with scientific GPS data processing software packages. Since the baseline is generally very short in this case, most of the atmospheric effects cancel out, leaving position and integer ambiguity as the unknowns. The number of unknowns in real-time kinematic GPS positioning makes positioning impossible with the usual approach, so a two-step approach is tested in this study.
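
For context, the double-differenced carrier-phase observation over a short baseline, which motivates splitting the problem in two; the notation is standard rather than taken from the paper.

```latex
% Double-differenced carrier phase over a short baseline:
\nabla\Delta\phi \;=\; \frac{1}{\lambda}\,\nabla\Delta\rho(\mathbf{x})
\;+\; \nabla\Delta N \;+\; \varepsilon
```

Ionospheric and tropospheric delays largely cancel under the double-difference operator on a short baseline, leaving the baseline coordinates x and the integer ambiguities as unknowns. A natural two-step split, presumably along the lines the abstract alludes to, is to resolve and fix the integer ambiguities once, after which each high-rate epoch requires only a small position-only solve.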
