• Title/Summary/Keyword: workflow system


Design of Standard Metadata Schema for Computing Resource Management (컴퓨팅 리소스 관리를 위한 표준 메타데이터 스키마 설계)

  • Lee, Mikyoung;Cho, Minhee;Song, Sa-Kwang;Yim, Hyung-Jun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.433-435
    • /
    • 2022
  • In this paper, we introduce a standard metadata schema design for registering, retrieving, and managing the computing resources used for research data analysis and utilization in the Korea Research Data Commons (KRDC). KRDC is a joint utilization system for research data and computing resources that aims to maximize their sharing and use. Computing resources refer to all resources in the computing environment, such as analysis infrastructure and analysis software, needed to analyze and utilize research data throughout the research process. The standard metadata schema for KRDC computing resource management is designed by considering attributes common to all computing resources together with attributes specific to each resource's features. It consists of a computing resource metadata schema and a computing resource provider metadata schema, and each of these is further divided into a service schema group and a system schema group according to its characteristics. The standard metadata schema designed in this paper is used for computing resource registration, retrieval, management, and workflow services for computing resource providers and users through the KRDC web service, and is designed in a scalable form to support the linkage of various computing resources.
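The resource/provider split into service and system schema groups can be sketched as plain data classes. This is only an illustrative sketch: the field names below are assumptions, not the actual KRDC attributes.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceMetadata:
    # Service-group attributes: what users see when registering or searching
    resource_name: str
    resource_type: str          # e.g. "analysis infrastructure" or "analysis software"
    description: str = ""
    keywords: list = field(default_factory=list)

@dataclass
class SystemMetadata:
    # System-group attributes: what the platform needs to link the resource
    endpoint_url: str
    auth_method: str = "token"  # assumed default, for illustration only

@dataclass
class ComputingResource:
    identifier: str
    provider_id: str
    service: ServiceMetadata
    system: SystemMetadata
```

Separating the user-facing service group from the system group lets the retrieval UI and the linkage machinery evolve independently, which is one way to read the "scalable form" claim in the abstract.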


An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous-City (U-City) is a smart or intelligent city that satisfies human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything/Things (IoE/IoT) and includes many video cameras networked together. Together with sensors, these networked cameras provide one of the main input data streams for many U-City services, generating a huge amount of video information, real big data, for the U-City all the time. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often required that the accumulated video data be analyzed to detect an event or find a figure, which demands a lot of computational power and usually takes a long time. Current research tries to reduce the processing time of big video data, and cloud computing can be a good solution to this problem. Among the many cloud computing methodologies that could address it, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth of the data produced by networked cameras; we are coping with real big data when we deal with video produced by high-quality cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to these methodologies. Video data are unstructured, so good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance, a cloud-computing based video data management system that is easy to deploy, flexible, and reliable.
It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The "video monitor" consists of the "video translator" and the "protocol manager". The "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component, stores them in the storage, and helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocols. The "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology to analyze the video images using MapReduce: the workflow of video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and we found that our proposed system worked well; the evaluation results are presented in this paper with analysis. With our cluster system, we used compressed 1920×1080 (FHD) resolution video data, the H.264 codec, and HDFS as video storage, and we measured the processing time according to the number of frames per mapper. Tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
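The mapper/reducer division of labor the abstract describes can be illustrated with a toy, in-process MapReduce over per-frame data. This is only a sketch of the programming model, not the paper's actual Hadoop job; the brightness-threshold "event" and all names here are made up for illustration.

```python
from collections import defaultdict

def mapper(frame_chunk):
    # Each mapper gets a chunk of (frame_id, mean_brightness) pairs and
    # emits a (key, 1) record for every frame that triggers the toy "event".
    for frame_id, brightness in frame_chunk:
        if brightness > 128:
            yield ("bright_frame", 1)

def reducer(key, values):
    # Reducers aggregate all values emitted under one key.
    return key, sum(values)

def run_job(chunks):
    # Shuffle phase: group mapper output by key, then reduce each group.
    grouped = defaultdict(list)
    for chunk in chunks:
        for key, value in mapper(chunk):
            grouped[key].append(value)
    return dict(reducer(k, vs) for k, vs in grouped.items())
```

In the paper's setting, the "frames per mapper" knob corresponds to the chunk size handed to each `mapper` call, which is exactly the splitting-size parameter whose optimum the authors trace.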

A Design and Analysis of Pressure Predictive Model for Oscillating Water Column Wave Energy Converters Based on Machine Learning (진동수주 파력발전장치를 위한 머신러닝 기반 압력 예측모델 설계 및 분석)

  • Seo, Dong-Woo;Huh, Taesang;Kim, Myungil;Oh, Jae-Won;Cho, Su-Gil
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.11
    • /
    • pp.672-682
    • /
    • 2020
  • Nowadays, research on digital twin technology for efficient operation of various industrial/manufacturing sites is being actively conducted, and the gradual depletion of fossil fuels and environmental pollution issues call for new renewable/eco-friendly power generation methods, such as wave power plants. In wave power generation, however, which generates electricity from the energy of waves, it is very important to understand and predict the amount of power generated and operational efficiency factors, such as breakdowns, because these are closely tied to highly variable wave energy. Therefore, it is first necessary to derive a meaningful correlation between highly volatile data, such as wave height data and sensor data from an oscillating water column (OWC) chamber. Secondly, a methodological study that can predict the desired information should be conducted by learning the prediction task with data extracted on the basis of the derived correlation. This study designed a workflow-based training model using a machine learning framework to predict the pressure of the OWC. In addition, the validity of the pressure prediction analysis was verified through a verification and evaluation dataset using IoT sensor data, to enable smart operation and maintenance with a digital twin of the wave generation system.
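A minimal sketch of such a train-and-validate workflow, using synthetic wave-height/pressure pairs and an ordinary-least-squares line in place of the paper's actual machine learning framework. Every number, name, and the linear model itself are assumptions made for illustration.

```python
import random

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Synthetic data: chamber pressure assumed roughly proportional to wave height.
random.seed(0)
heights = [0.5 + 0.1 * i for i in range(40)]
pressures = [2.0 * h + 1.0 + random.gauss(0, 0.05) for h in heights]

# Split into training and held-out validation sets, as the abstract describes.
train_h, val_h = heights[:30], heights[30:]
train_p, val_p = pressures[:30], pressures[30:]

a, b = fit_linear(train_h, train_p)
# Mean absolute error on the validation set verifies the fitted predictor.
mae = sum(abs(a * h + b - p) for h, p in zip(val_h, val_p)) / len(val_p)
```

The point of the sketch is the workflow shape (correlate, train, then verify on a separate evaluation set), not the model, which in the paper would be a learned predictor rather than a straight line.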

BPEL Engine Generator for adding New Functions to BPEL based on Attribute Grammar and Aspect-Oriented Programming (속성문법과 관점지향 프로그래밍 기법을 이용한 BPEL에 새로운 기능을 추가하는 BPEL 엔진 생성기)

  • Kwak, Dongkyu;Kim, Jongho;Choi, Jaeyoung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.5
    • /
    • pp.209-218
    • /
    • 2015
  • BPEL is used in various domains since it can describe the flow of work according to conditions and rules and can call web services in service-oriented computing environments. However, specific domains require new functions that the BPEL grammar does not provide. Generally, when new functions are required, a domain-specific language must be newly defined and developed, which incurs high development cost. It is therefore preferable to define and add new functions to BPEL itself instead of developing a new domain-specific language. Existing approaches, however, only allow the addition of a single fixed function, and it is difficult to design and add new functions as needs arise. This paper defines the XAS4B document, which extends the BPEL grammar through an XML schema in order to add new functions, and proposes a BPEL engine generator that processes the document to generate a BPEL engine with the new functions added. The XAS4B document enables the creation of new grammar added to BPEL using an XML schema. The paper also shows the process of adding new functions to the BPEL engine using AspectJ, a Java implementation of aspect-oriented programming. The proposed system can add new functions with AspectJ without modifying the BPEL engine, allowing new functions to be provided at low cost in various domains.
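The key idea, weaving new behavior into an engine without editing its source, can be imitated in Python with a decorator standing in for AspectJ's "around" advice. This is only an analogy sketched under assumptions: the engine class and its method are invented, and real AspectJ weaves at the bytecode level using pointcuts rather than reassigning attributes.

```python
import functools

def around_advice(advice):
    # Wrap a target function so that `advice` runs around it, receiving the
    # original function ("proceed") plus the call's arguments.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return advice(fn, *args, **kwargs)
        return wrapper
    return decorator

class BpelEngine:
    # Hypothetical stand-in for a generated BPEL engine.
    def execute_activity(self, name):
        return f"executed {name}"

calls = []

def log_calls(proceed, self, name):
    calls.append(name)            # the newly added function (here: logging)
    return proceed(self, name)    # original engine behavior is untouched

# Weave the new function in without modifying the engine's source code.
BpelEngine.execute_activity = around_advice(log_calls)(BpelEngine.execute_activity)
```

The engine keeps its original contract while the aspect adds the cross-cutting behavior, which mirrors how the paper's generator layers new XAS4B functions onto the BPEL engine.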

Implementation of KV Cone Beam CT for Image Guided Radiation Therapy (영상유도 방사선치료에서의 KV 콘빔CT 이용)

  • Yoo, Young-Seung;Lee, Hwa-Jung;Kim, Dae-Young;Yu, Ri
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.19 no.1
    • /
    • pp.43-49
    • /
    • 2007
  • Purpose: The aim of this study was the clinical implementation of IGRT using kV CBCT for setup correction in radiation therapy. Materials and Methods: We selected 9 patients (3 patients for each region: head, body, pelvis) and acquired 135 CBCT images with a CLINAC iX (Varian Medical Systems, USA). During the scans, the required time was measured. We analyzed the results in 3 directions: vertical, longitudinal, and lateral. Results: The mean setup errors at the couch position in the vertical, lateral, and longitudinal directions were 0.07, 0.12, and 0.1 cm in the head region, 0.3, 0.26, and 0.22 cm in the body region, and 0.21, 0.18, and 0.15 cm in the pelvis region, respectively. The mean time required for CBCT was 6~7 minutes. Conclusion: CBCT on the LINAC provides the capacity for soft-tissue imaging in the treatment position and real-time monitoring during treatment delivery. With the presented workflow, setup correction within a reasonable time for more accurate radiation therapy is possible, and its images can be very useful for adaptive radiation therapy (ART) in the future with improved image quality.
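The per-direction mean setup errors reported above are averages of the measured couch corrections per region. A small sketch of that computation, with made-up shift values (the study's raw per-patient shifts are not given in the abstract):

```python
def mean_setup_error(shifts):
    # shifts: list of (vertical, longitudinal, lateral) couch corrections in cm.
    # Returns the mean absolute correction per direction, rounded to 2 decimals.
    n = len(shifts)
    return tuple(round(sum(abs(s[i]) for s in shifts) / n, 2) for i in range(3))
```

Averaging absolute values is one common convention for reporting setup error magnitude; the abstract does not state whether the authors used absolute or signed means, so that choice is an assumption here.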


Workcase based Very Large Scale Workflow System Architecture (워크케이스 기반의 초대형 워크플로우 시스템 아키텍쳐)

  • 심성수;김광훈
    • Proceedings of the Korea Database Society Conference
    • /
    • 2002.10a
    • /
    • pp.403-416
    • /
    • 2002
  • Workflow management systems raise work efficiency and cut costs by computerizing the business processes through which organizations such as governments and enterprises handle their work. Organizations that use workflow systems are growing ever larger, and with the development of networks and the advent of the Internet, the number of tasks, customers, and workers that a workflow system must handle is increasing rapidly. This trend calls for a workflow system architecture suited to very large organizational environments. This paper therefore designs and implements the architecture of a workcase-based very large scale workflow system that can manage such environments. We classify and analyze existing workflow system architectures to identify their strengths and weaknesses, and on that basis predict architectural performance, arguing that the workcase-based architecture is the very large scale workflow architecture this paper proposes. We also adopt EJB (Enterprise Java Beans) as the underlying infrastructure for the very large scale workflow system and explain the reasons for this choice. The design and implementation proceed in three stages: a conceptual stage, a design stage, and an implementation stage. The conceptual stage describes the workcase-based workflow system architecture in detail; the design stage defines the overall functions and designs the structure of the very large scale workflow system; and the implementation stage selects the environment for actually implementing the workcase-based architecture and describes the implementation problems encountered and their solutions.
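The abstract does not detail what "workcase-based" means concretely, but the usual reading is that each process instance (workcase) carries its own state and scheduling, so the system scales by distributing workcases rather than funneling everything through one central engine. A minimal sketch under that assumption (all names invented):

```python
class Workcase:
    # One process instance that owns its own activity list and state,
    # rather than being driven step-by-step by a central engine.
    def __init__(self, case_id, activities):
        self.case_id = case_id
        self.pending = list(activities)
        self.done = []

    def advance(self):
        # Complete the next activity within this workcase's own control flow.
        if self.pending:
            self.done.append(self.pending.pop(0))
        return not self.pending   # True once the case is finished
```

Because each `Workcase` is self-contained, instances can be deployed as independent components (the paper uses EJB for this role), which is the property that makes the architecture a candidate for very large scale workloads.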


Study on image quality improvement using Non-Linear Look-Up Table (비선형 Look-Up Table을 통한 영상 화질 개선에 관한 연구)

  • Kim, Sun-Chil;Lee, Jun-Il
    • Korean Journal of Digital Imaging in Medicine
    • /
    • v.5 no.1
    • /
    • pp.32-44
    • /
    • 2002
  • The role of the radiology department has greatly increased in the past few years as the technology of medical imaging devices has improved, and the introduction of PACS (Picture Archiving and Communication System) into the conventional film-based diagnostic structure is a truly remarkable milestone in medical history. In addition, the value of using digital information in medical imaging is expected to grow as computer and network technology improves. However, current medical practice using PACS is somewhat limited compared to conventional film-based practice due to poor image quality. Image quality is the most important and unavoidable factor in the PACS environment, and improving it is one of the most necessary steps toward wider adoption of digital imaging. Existing image quality control tools are limited in controlling images produced by the medical modalities because they cannot display the real image-changing status; thus the image quality is distorted and the ability to diagnose is hindered compared to film-based practice. In addition, the workload of the radiologist greatly increases, as every doctor has to perform his or her own image quality control every time they view images produced by the medical modalities. To resolve these problems and enhance current medical practice under the PACS environment, we have developed a program that displays better image quality by using the ROI optical density of the existing gray-level values. When the LUT is used properly, small detailed regions that cannot be seen with the existing image quality controls are easily displayed, greatly improving digital medical practice.
The purpose of this study is to provide physicians with an easier medical practice by applying the technique of converting the H-D curves of the analog film screen to digital imaging, and by presetting image quality control values for each exposed body part, modality, and group of physicians. We asked 5 well-known professional physicians to compare the image quality of the same set of exams using two different methods: the existing image quality control and the LUT technique. As a result, the LUT technique was strongly favored over the existing image quality control method. All the physicians pointed out the superiority of the LUT and praised its ability to display small detailed regions that cannot be displayed by the existing image quality control tools. Two physicians noted the necessity of presetting the LUT values for each exposed body part. Overall, the LUT technique drew great interest among the physicians and was praised for its ability to overcome currently embedded problems of PACS. We strongly believe that the LUT technique can enhance current medical practice and open a new chapter in medical imaging.
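A non-linear LUT of the kind described can be sketched as a 256-entry table applied to 8-bit gray levels. The gamma curve below is an illustrative stand-in for the film H-D curve, not the authors' actual mapping, and the gamma value is an assumption.

```python
def build_gamma_lut(gamma, levels=256):
    # Non-linear look-up table mapping each input gray level to an output
    # level; gamma > 1 lifts dark regions, revealing detail in the shadows.
    max_v = levels - 1
    return [round(max_v * (v / max_v) ** (1.0 / gamma)) for v in range(levels)]

def apply_lut(pixels, lut):
    # Applying a LUT is a pure table lookup per pixel, so it is cheap enough
    # to preset per body part or modality and apply at display time.
    return [lut[p] for p in pixels]
```

Presetting one such table per exposed body part, as two of the surveyed physicians requested, then reduces per-exam quality control to a single lookup pass.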


An Adaptive Business Process Mining Algorithm based on Modified FP-Tree (변형된 FP-트리 기반의 적응형 비즈니스 프로세스 마이닝 알고리즘)

  • Kim, Gun-Woo;Lee, Seung-Hoon;Kim, Jae-Hyung;Seo, Hye-Myung;Son, Jin-Hyun
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.16 no.3
    • /
    • pp.301-315
    • /
    • 2010
  • Recently, competition between companies has intensified, and the necessity of creating new business value has increased. A number of business organizations are beginning to realize the importance of business process management. Processes, however, often do not proceed the way they were initially designed, or an inefficient process model may have been designed in the first place, often due to a lack of cooperation and understanding between business analysts and system developers. To solve this problem, business process mining, which can serve as the basis of business process re-engineering, has been recognized as an important concept. Current process mining research has focused only on extracting workflow-based process models from completed process logs, and is therefore limited in expressing various forms of business processes. A further disadvantage of this approach is that process discovery and log scanning take a considerable amount of time, because the process logs are re-scanned with each new update. In this paper, we present a modified FP-Tree algorithm for FP-Tree based business processes; FP-Trees are used for association analysis in data mining. Our modified algorithm supports the discovery of a process model at the level of detail the user needs, without re-scanning the entire process logs on update.
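An FP-Tree of the kind the paper modifies stores event traces with shared prefixes and per-node counts, so support for frequent activity sequences accumulates incrementally instead of requiring a full log re-scan. A minimal sketch of the base structure (activity names are made up; the paper's modifications are not reproduced here):

```python
class FPNode:
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}

def insert_trace(root, trace):
    # Insert one completed trace: walk down the tree, sharing common
    # prefixes and incrementing counts along the path.
    node = root
    for activity in trace:
        node = node.children.setdefault(activity, FPNode(activity))
        node.count += 1

root = FPNode(None)
for trace in [["register", "check", "approve"],
              ["register", "check", "reject"],
              ["register", "approve"]]:
    insert_trace(root, trace)
```

Because new traces only touch the nodes along their own path, the tree can absorb log updates incrementally, which is the property the modified algorithm exploits to avoid re-scanning the entire log.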