• Title/Summary/Keyword: User Web Interface

A Design of Integrated Scientific Workflow Execution Environment for A Computational Scientific Application (계산 과학 응용을 위한 과학 워크플로우 통합 수행 환경 설계)

  • Kim, Seo-Young;Yoon, Kyoung-A;Kim, Yoon-Hee
    • Journal of Internet Computing and Services / v.13 no.1 / pp.37-44 / 2012
  • Numerous scientists engaged in compute-intensive research require more computing facilities than before, even as computing resources and techniques grow increasingly advanced. For this reason, much work on e-Science environments has been actively invested in and established around the world, yet scientists still look for an intuitive experimental environment that guarantees improved facilities without additional configuration or installation. In this paper, we present an integrated scientific workflow execution environment for scientific applications that supports workflow design on a high-performance computing infrastructure and is accessible from a web browser. The portal supports automated, consecutive execution of computation jobs in the order defined by the workflow design tool, with an execution service that considers the characteristics of each job when batching over distributed grid resources. The portal's workflow editor presents a high-level, easy-to-use front end together with a monitoring service that shows the status of workflow execution in real time, so that users can check intermediate data during experiments. Scientists can therefore take advantage of the environment to improve the productivity of HTC-based research.
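The dependency-ordered, consecutive job execution this abstract describes can be illustrated with a short sketch. The job model below (WorkflowJob, executeWorkflow) is hypothetical, not the portal's actual API; it only shows the pattern of launching each job once its predecessors finish while reporting status in real time.

```typescript
// Hypothetical sketch of dependency-ordered workflow execution with
// real-time status reporting; names are illustrative, not the portal's API.
type JobStatus = "pending" | "running" | "done" | "failed";

interface WorkflowJob {
  id: string;
  dependsOn: string[];       // jobs that must complete first
  run: () => Promise<void>;  // stands in for submission to a grid resource
}

async function executeWorkflow(jobs: WorkflowJob[]): Promise<void> {
  const status = new Map<string, JobStatus>();
  for (const job of jobs) status.set(job.id, "pending");

  // Launch every job whose dependencies are done; repeat until no job
  // is pending, mirroring the "automated consecutive execution" above.
  while (Array.from(status.values()).includes("pending")) {
    const ready = jobs.filter(
      (j) =>
        status.get(j.id) === "pending" &&
        j.dependsOn.every((d) => status.get(d) === "done"),
    );
    if (ready.length === 0) {
      throw new Error("workflow stalled: cyclic or failed dependency");
    }
    await Promise.all(
      ready.map(async (j) => {
        status.set(j.id, "running");
        try {
          await j.run();
          status.set(j.id, "done");
        } catch {
          status.set(j.id, "failed");
        }
        // Monitoring hook: report status so intermediate results
        // can be checked while the experiment runs.
        console.log(`[monitor] ${j.id}: ${status.get(j.id)}`);
      }),
    );
  }
}
```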

Design and Implementation of Web GIS Server Using Node.js (Node.js를 활용한 웹GIS 서버의 설계와 구현)

  • Jun, Sang Hwan;Doh, Kyoung Tae
    • Spatial Information Research / v.21 no.3 / pp.45-53 / 2013
  • Web GIS, based on the latest web technology, has evolved to provide efficient and accurate spatial information to users, and Web GIS servers have steadily improved their performance in responding to user web requests and offering spatial information services. This research designs and implements a Web GIS server named NodeMap using Node.js, an emerging framework noted for its event-driven, non-blocking I/O model for writing server-side JavaScript. NodeMap is a Web GIS server that supports the OGC implementation specifications. It is designed to process GIS data using a DBMS that supports spatial indexes and standard spatial query functions; it uses the node-canvas module, which provides an HTML5 canvas, to render spatial information onto tile maps; and it is built on the Express module, a framework based on the Connect module. Benchmarking of NodeMap confirmed the feasibility of Node.js as a technology for developing future Web GIS servers. Having completed its quality tests of NodeMap, this study demonstrates the suitability and potential of Node.js for Web GIS server development and the bright future of Internet GIS services.
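The tile-rendering pattern the abstract describes, an Express route that draws a tile with node-canvas and returns it as a PNG, can be sketched as follows. The route path and placeholder drawing are illustrative assumptions; a real server would run a spatial query against the DBMS for the tile's extent and draw the returned geometries.

```typescript
// Minimal sketch of Express + node-canvas tile serving; the drawing is
// a placeholder, not NodeMap's actual rendering code.
import express from "express";
import { createCanvas } from "canvas";

const app = express();

app.get("/tiles/:z/:x/:y.png", (req, res) => {
  const canvas = createCanvas(256, 256);
  const ctx = canvas.getContext("2d");

  // Placeholder drawing; NodeMap would render queried geometries here.
  ctx.fillStyle = "#eef3ee";
  ctx.fillRect(0, 0, 256, 256);
  ctx.strokeStyle = "#333333";
  ctx.strokeText(`${req.params.z}/${req.params.x}/${req.params.y}`, 8, 16);

  res.type("png").send(canvas.toBuffer("image/png"));
});

app.listen(3000); // single-threaded, event-driven request handling
```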

Container Identifier Recognition Using a Self-Generating Supervised Learning Algorithm (자가 생성 지도 학습 알고리즘을 이용한 컨테이너 식별자 인식)

  • Kim, Jae-Yong;Park, Chung-Sik;Kim, Gwang-Baek
    • Proceedings of the Korea Intelligent Information System Society Conference / 2005.11a / pp.500-506 / 2005
  • In this paper, we propose a shipping-container identifier recognition system that uses a self-generating supervised learning algorithm. In general, the identifiers on shipping containers are printed in either black or white. Exploiting this property, a fuzzy inference method is applied to the original container image to treat everything except black and white as noise, distinguishing identifier regions from background regions. Regions classified as identifier regions are kept as they are, while regions classified as background are replaced with the average pixel value of the whole image. Edges are then detected with a Sobel mask, and vertical and horizontal blocks derived from the extracted edges are used to locate the container identifier area, which is then binarized. Using the frequency of black pixels in the binarized identifier area, white and plain backgrounds are distinguished, and a 4-directional contour tracking algorithm extracts the individual identifiers. To recognize the individual identifiers, we propose a self-generating supervised learning algorithm that applies a refined ART-1 between the input and hidden layers, and the generalized delta learning rule together with the Delta-bar-Delta algorithm between the hidden and output layers, improving learning and recognition performance. Experiments on 80 real container images showed that the proposed extraction method improves the identifier extraction rate over the previous method, and that the proposed self-generating supervised learning algorithm improves on the FCM-based self-generating supervised learning algorithm in learning and recognizing container identifiers.
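The Sobel edge-detection step of the extraction pipeline can be sketched as below. This is a generic Sobel implementation operating on a row-major grayscale buffer, not the authors' code; the surrounding steps (fuzzy inference, block detection, binarization, contour tracking) are omitted.

```typescript
// Generic Sobel edge detection over a w-by-h grayscale image stored
// row-major in a Uint8ClampedArray; returns the gradient magnitude image.
function sobel(gray: Uint8ClampedArray, w: number, h: number): Uint8ClampedArray {
  const out = new Uint8ClampedArray(w * h);
  const gxMask = [-1, 0, 1, -2, 0, 2, -1, 0, 1]; // horizontal gradient mask
  const gyMask = [-1, -2, -1, 0, 0, 0, 1, 2, 1]; // vertical gradient mask

  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      let gx = 0;
      let gy = 0;
      let k = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++, k++) {
          const p = gray[(y + dy) * w + (x + dx)];
          gx += gxMask[k] * p;
          gy += gyMask[k] * p;
        }
      }
      // Gradient magnitude, clamped to 0-255 on assignment.
      out[y * w + x] = Math.hypot(gx, gy);
    }
  }
  return out;
}
```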

Development of Pre-Service and In-Service Information Management System (iSIMS) (원전 가동전/중 검사정보관리 시스템 개발)

  • Yoo, H.J.;Choi, S.N.;Kim, H.N.;Kim, Y.H.;Yang, S.H.
    • Journal of the Korean Society for Nondestructive Testing / v.24 no.4 / pp.390-395 / 2004
  • The iSIMS is a web-based integrated information system supporting the Pre-Service and In-Service Inspection (PSI/ISI) processes for the nuclear power plants of KHNP (Korea Hydro & Nuclear Power Co., Ltd.). The system covers the full spectrum of the inspection process, from the planning stage to the final examination report, in accordance with applicable codes, standards, and regulatory requirements. Its major functions include inspection planning, examination, reporting, project control and status reporting, and resource management, as well as object search and navigation. The system also provides a two- or three-dimensional visualization interface, working with database applications, to identify the location and geometry of the components and weld areas subject to examination. The iSIMS is implemented with commercial software packages such as a database management system and 2-D and 3-D visualization tools, which provide open, up-to-date, and verified foundations. This paper describes the key functions of the iSIMS and the technologies used to implement it.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer-system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas prevent node expansion by distributing stored data across nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, the representative document-oriented model, which has a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects the data, classifies them by log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module for each analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insertion performance evaluation of MongoDB for various chunk sizes.
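The log collector's routing step, sending real-time logs to the MySQL module and aggregated unstructured logs to MongoDB, might look roughly like the following sketch, which uses the official Node.js "mongodb" driver. The BankLog shape, the database and collection names, and the MySQL stub are assumptions for illustration only, not the paper's implementation.

```typescript
// Rough sketch of log routing by type: real-time logs to a MySQL stub,
// everything else to schema-free MongoDB collections (one per log type).
import { MongoClient } from "mongodb";

type BankLog = {
  type: string;                      // category assigned by the collector
  timestamp: Date;
  payload: Record<string, unknown>;  // free-form fields: no fixed schema
};

async function collectLog(client: MongoClient, log: BankLog): Promise<void> {
  if (log.type === "realtime") {
    // Logs needing real-time analysis go to the MySQL module (stubbed here).
    await insertIntoMySql(log);
  } else {
    // Aggregated unstructured logs go to MongoDB, whose schema-free
    // documents absorb arbitrary payload fields.
    await client.db("banklogs").collection<BankLog>(log.type).insertOne(log);
  }
}

async function insertIntoMySql(log: BankLog): Promise<void> {
  console.log(`[mysql stub] ${log.type} @ ${log.timestamp.toISOString()}`);
}
```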