• Title/Summary/Keyword: Web Structure Optimization

Development of Web-based High Throughput Computing Environment and Its Applications (웹기반 대용량 계산환경 구축 및 응용연구)

  • Jeong, Min-Joong; Kim, Byung-Sang
    • Journal of the Computational Structural Engineering Institute of Korea / v.20 no.3 / pp.365-370 / 2007
  • Many engineering problems require large amounts of computing resources for iterative simulations involving many parameters and input files. To address this, this paper proposes an e-Science based computational system. The system exploits Grid computing technology to establish an integrated web service environment that supports distributed high-throughput computational simulations and remote execution. The proposed system provides an easy-to-use parametric study service in which the computational service includes real-time monitoring. To verify the usability of the proposed system, two applications are introduced. The first is the Aerospace Integrated Research System (e-AIRS), which adapts the proposed computational system to solve CFD problems. The second is the design and optimization of three-dimensional protein structures in structural biology.
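As a concrete illustration of the parametric-study pattern this abstract describes (many independent simulation runs fanned out to workers, with progress reported as results arrive), here is a minimal local sketch in Python. A process pool stands in for the Grid middleware, and run_simulation is a hypothetical placeholder for a CFD or protein-structure solver, not the e-AIRS API.

    # Minimal sketch: fan out one job per parameter set and monitor completions.
    from concurrent.futures import ProcessPoolExecutor, as_completed
    from itertools import product

    def run_simulation(params):
        """Hypothetical stand-in for one CFD (or protein-structure) run."""
        mach, aoa = params
        return {"mach": mach, "aoa": aoa, "drag": 0.1 * mach + 0.01 * aoa}

    def parametric_study(machs, aoas, max_workers=4):
        cases = list(product(machs, aoas))   # one job per parameter combination
        results = []
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            futures = [pool.submit(run_simulation, c) for c in cases]
            for done, fut in enumerate(as_completed(futures), start=1):
                results.append(fut.result())
                print(f"monitor: {done}/{len(cases)} cases finished")  # progress feed
        return results

    if __name__ == "__main__":
        parametric_study(machs=[0.6, 0.8], aoas=[0.0, 2.0, 4.0])

In the paper's setting the same fan-out/monitor loop is carried out by Grid job management behind a web interface rather than by a local pool.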

A Study on GPU Support and Function Structures Suitable for AI Model Inference on Serverless Platforms (서버리스 플랫폼에서 GPU 지원 및 인공지능 모델 추론에 적합한 함수 구조에 관한 연구)

  • Hwang, Dong-Hyun; Kim, Dongmin; Choi, Young-Yoon; Han, Seung-Ho; Jeon, Gi-Man; Son, Jae-Gi
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.19-20 / 2019
  • A serverless framework realizes the ideas of the microservice architecture on top of clouds and containers, and its use has grown as public cloud platforms such as Amazon's AWS (Amazon Web Services) have begun offering it as a service. However, existing platforms provide little support for serving AI models that depend on hardware such as GPUs. In this paper, we therefore implement a GPU-enabled serverless platform by applying nvidia-docker and the k8s-device-plugin to a container-based open-source serverless platform. We also propose a structure that reduces the repeated loading of model weights when an AI model runs in a container. The implemented platform was evaluated in comparative performance experiments using the SSD (Single Shot Multibox Detector) object detection model, and the results confirm that the function response time of the serverless platform serving the AI model is improved.
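The proposed "reduce repeated weight loads" structure lends itself to a short sketch: keep the loaded model in a module-level cache so that a warm container reuses it across invocations instead of deserializing the weights on every call. This is a minimal illustration under stated assumptions; load_ssd_model, the cache layout, and the handler signature are hypothetical, not the paper's code.

    # Module-level cache: survives across invocations within a warm container.
    _MODEL_CACHE = {}

    def load_ssd_model(path):
        """Hypothetical loader; deserializing SSD weights is the costly step."""
        print(f"loading weights from {path} ...")
        return object()  # placeholder for the in-memory (GPU-resident) model

    def get_model(path="/models/ssd.weights"):
        # The first invocation in a container pays the load cost; warm calls reuse it.
        if path not in _MODEL_CACHE:
            _MODEL_CACHE[path] = load_ssd_model(path)
        return _MODEL_CACHE[path]

    def handler(request):
        """Serverless entry point: inference without a repeated weight load."""
        model = get_model()
        return {"detections": [], "served_by": id(model)}  # placeholder inference

    # Two calls in the same warm container load the weights only once.
    handler({})
    handler({})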

XML View Indexing Using an RDBMS based XML Storage System (관계 DBMS 기반 XML 저장시스템 상에서의 XML 뷰 인덱싱)

  • Park, Dae-Sung; Kim, Young-Sung; Kang, Hyunchul
    • Journal of Internet Computing and Services / v.6 no.4 / pp.59-73 / 2005
  • Caching query results and reusing them to process subsequent queries is an important query optimization technique; materialized views and view indexing are representative examples. Both schemes have received much attention for relational databases and have been investigated for XML data since XML emerged as the standard for data exchange on the Web. In XML view indexing, an XML view xv, the result of an XML query, is represented as an XML view index (XVI), a structure containing the identifiers of xv's underlying XML elements together with information about xv. Since the XVI for xv stores just the identifiers of the XML elements, not the elements themselves, when xv is requested its XVI must be materialized against xv's underlying XML documents. In this paper, we address the problem of integrating an XML view index management system with an RDBMS-based XML storage system. The proposed system was implemented in Java on Windows 2000 Server with each of two different commercial RDBMSs and was used to evaluate the performance improvement from XML view indexing as well as its overheads. The experimental results revealed that XML view indexing was very effective with an RDBMS-based XML storage system while its overhead was negligible.
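A rough sketch of the XVI mechanism may help: because the view index stores only element identifiers, materializing the view means joining those identifiers back to the element table of the RDBMS-based XML store and re-serializing the elements. The xml_element table and its columns below are assumptions for illustration, not the paper's actual storage schema.

    # Materialize an XML view from its view index (a list of element ids).
    import sqlite3

    def materialize_view(conn, element_ids):
        """Fetch and re-serialize the elements named by an XVI."""
        placeholders = ",".join("?" for _ in element_ids)
        rows = conn.execute(
            f"SELECT id, tag, content FROM xml_element "
            f"WHERE id IN ({placeholders}) ORDER BY id",
            element_ids,
        ).fetchall()
        return "".join(f"<{tag}>{content}</{tag}>" for _, tag, content in rows)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE xml_element (id INTEGER PRIMARY KEY, tag TEXT, content TEXT)")
    conn.executemany("INSERT INTO xml_element VALUES (?, ?, ?)",
                     [(1, "title", "XML View Indexing"), (2, "year", "2005")])
    print(materialize_view(conn, [1, 2]))  # the XVI holds ids, not elements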

A Comparative Study of Classification Systems for Organizing a KOS Registry (KOS 레지스트리 구조화를 위한 분류체계 비교 연구)

  • Ziyoung Park
    • Journal of the Korean Society for Library and Information Science / v.58 no.2 / pp.269-288 / 2024
  • To structure a KOS registry, it is necessary to select a classification system suited to the characteristics of the collected KOS. This study aimed to classify domestic KOS using various classification schemes and, based on the results, to provide insights for selecting a classification system when structuring a KOS registry. A total of 313 KOS records collected via web searches were categorized using five classification systems and a thesaurus, and the results were analyzed. The analysis indicated that foreign classification systems should be applied for international linkage of the KOS registry, while domestic classification systems are needed for optimization with domestic knowledge resources or to serve domestic researchers. Additionally, depending on the field-specific characteristics of the KOS, research-area KOS should apply classification systems based on academic disciplines, while public-sector KOS should consider classification systems based on government functions. Lastly, the linkage between domestic and international KOS needs to be strengthened, which also requires applying multiple classification systems.

Optimum Design of Two Hinged Steel Arches with I Sectional Type (SUMT법(法)에 의(依)한 2활절(滑節) I형(形) 강재(鋼材) 아치의 최적설계(最適設計))

  • Jung, Young Chae
    • KSCE Journal of Civil and Environmental Engineering Research / v.12 no.3 / pp.65-79 / 1992
  • This study concerns the optimal design of two-hinged steel arches with an I-type cross section, aiming at both exact analysis of the arches and a safe, economical structural design. The analysis method, which introduces the finite difference method and accounts for the displacements of the structure during the analysis, is used to eliminate analysis error and to determine the member forces of the structure. The optimization problems for the arches are formulated with objective functions and constraints that take the sectional dimensions (B, D, $t_f$, $t_w$) as design variables. The objective function is the total weight of the arch, and the constraints are derived from criteria on working stress and on the minimum dimensions of the flange and web based on the steel-bridge section of the Korean standard code for road bridges, together with an economic-depth constraint for the I-shaped section, an upper limit on the web depth, and a lower limit on the flange breadth. The SUMT method with a modified Newton-Raphson direction method is introduced to solve the resulting nonlinear programming problems, and the approach developed in this study is tested through numerical examples. The developed optimal design program was examined through numerical examples for various arches, and the results were compared and analyzed to assess the feasibility of the optimization and the applicability and convergence of the algorithm, including against the numerical examples of reference (30). Correlation equations between the optimal sectional areas and moments of inertia are derived from the various numerical optimal design results of this study.
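Since the design method rests on SUMT, a toy sketch of the general scheme may clarify it: fold each constraint g(x) <= 0 into an interior penalty, minimize the penalized objective, then shrink the penalty weight and repeat. A golden-section search stands in here for the paper's modified Newton-Raphson direction method, and the one-variable weight-versus-stress problem is purely illustrative, not the arch formulation.

    def golden_min(phi, lo, hi, tol=1e-8):
        """Minimize phi on [lo, hi] by golden-section search."""
        g = (5 ** 0.5 - 1) / 2
        c, d = hi - g * (hi - lo), lo + g * (hi - lo)
        while hi - lo > tol:
            if phi(c) < phi(d):
                hi, d = d, c
                c = hi - g * (hi - lo)
            else:
                lo, c = c, d
                d = lo + g * (hi - lo)
        return (lo + hi) / 2

    def sumt(f, g, lo, hi, r=1.0, shrink=0.1, cycles=8):
        """min f(x) s.t. g(x) <= 0 via the interior penalty phi = f - r/g."""
        x = None
        for _ in range(cycles):
            phi = lambda x, r=r: f(x) - r / g(x)  # barrier blows up as g -> 0-
            x = golden_min(phi, lo, hi)
            r *= shrink                           # tighten the barrier each cycle
        return x

    # Weight-like objective f(A) = A under a stress-like constraint
    # 10/A - 2 <= 0 (i.e., A >= 5); SUMT approaches A = 5 from inside.
    print(sumt(f=lambda a: a, g=lambda a: 10 / a - 2, lo=5.000001, hi=50.0))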

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used for many purposes, from system inspection and process optimization to providing customized services to users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data, and such schemas make it hard to expand to additional nodes when rapidly growing data must be distributed across them. NoSQL does not provide the complex computations that relational databases offer but can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over a bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's advantages. Moreover, an optimal chunk size is identified through MongoDB log-insert performance evaluations over various chunk sizes.
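As a minimal sketch of the log-collector path described above, assuming a local MongoDB instance reachable via pymongo (the connection URI, collection name, and the is_realtime routing rule are illustrative, not the paper's):

    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # assumed local instance
    logs = client["bank"]["unstructured_logs"]

    def is_realtime(record):
        # Placeholder routing rule; the paper classifies by log type.
        return record.get("type") == "transaction"

    def collect(records):
        batch = []
        for rec in records:
            rec["ingested_at"] = datetime.now(timezone.utc)
            if is_realtime(rec):
                pass  # would be routed to the MySQL module for real-time graphs
            else:
                batch.append(rec)  # free-schema documents need no fixed columns
        if batch:
            logs.insert_many(batch)  # bulk insert; Auto-Sharding spreads the chunks

    collect([{"type": "batch_job", "msg": "EOD settlement started"},
             {"type": "transaction", "msg": "transfer ok", "amount": 150000}])

The free schema is what lets heterogeneous bank log records coexist in one collection without migrations, which is the reason the abstract gives for choosing a document store.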