• Title/Summary/Keyword: 평가 DB (evaluation DB)


Design and Implementation of Performance Diagnosis Tool for DB Connection Pool Management (DB Connection Pool 관리를 위한 성능 진단도구의 설계 및 구현)

  • Lee, Jae-Hwan;Jung, In-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2003.05c / pp.1507-1510 / 2003
  • As the use of database systems in web application development grows, managing the connection resources that access those systems has become increasingly important. This paper proposes a tool for evaluating and diagnosing the performance of the database connection pool used when building web applications. Based on its performance measurements and diagnosis, the tool suggests how to configure the DB connection pool best suited to a given web application. The paper also describes the results of managing a database connection pool effectively using the proposed tool.

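
The paper's tool itself is not described in detail here, but the kind of measurement it automates can be sketched as follows: a minimal, hypothetical benchmark that times how long requests wait to check a connection out of a fixed-size pool, which is the core signal when diagnosing and sizing a DB connection pool. The pool here is simulated with a thread-safe queue of dummy connections, not a real DB driver.

```python
import queue
import threading
import time

def benchmark_pool(pool_size, workers, query_time=0.01, requests_per_worker=5):
    """Average wait time to check a connection out of a fixed-size pool.

    Illustrative only: a real diagnosis tool would wrap an actual DB-API/JDBC pool.
    """
    pool = queue.Queue()
    for i in range(pool_size):
        pool.put(f"conn-{i}")          # dummy connection objects

    waits = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_worker):
            t0 = time.perf_counter()
            conn = pool.get()          # blocks while the pool is exhausted
            wait = time.perf_counter() - t0
            time.sleep(query_time)     # simulated query execution
            pool.put(conn)
            with lock:
                waits.append(wait)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(waits) / len(waits)

# An undersized pool should make clients wait noticeably longer.
small = benchmark_pool(pool_size=2, workers=8)
large = benchmark_pool(pool_size=8, workers=8)
```

Comparing such wait times across candidate pool sizes under a realistic request mix is essentially what a pool-diagnosis tool automates.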

A Study on Providing Relative Keyword using The Social Network Analysis Technique in Academic Database (학술DB에서 SNA(Social Network Analysis) 기법을 이용한 연관검색어 제공방안 연구)

  • Kim, Kyoung-Yong;Seo, Jung-Yun;Seon, Choong-Nyoung
    • Annual Conference on Human and Language Technology / 2011.10a / pp.79-82 / 2011
  • This paper aims to provide related search terms that are strongly associated with a query by applying SNA (Social Network Analysis) techniques to the keyword information of an academic DB covering diverse subject areas. To this end, weights between keywords are computed, related keywords associated with the query are extracted through ego network analysis, and the results are compared against the related search terms provided by the existing academic DB. The compiled results are then comparatively evaluated in terms of relatedness using association rule mining and similarity coefficients.

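
The pipeline the abstract describes (keyword co-occurrence weights, then an ego network around the query term) can be sketched with a toy corpus; the articles and keywords below are invented for illustration, not the paper's data.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each record lists the keywords attached to one article (invented data).
articles = [
    ["information retrieval", "ranking", "evaluation"],
    ["information retrieval", "query expansion", "ranking"],
    ["query expansion", "thesaurus"],
    ["information retrieval", "evaluation"],
]

# Weight between two keywords = number of articles in which they co-occur.
weights = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        weights[(a, b)] += 1

def ego_network(query, top_n=3):
    """Return the query's strongest neighbours, i.e. candidate related search terms."""
    neighbours = Counter()
    for (a, b), w in weights.items():
        if a == query:
            neighbours[b] = w
        elif b == query:
            neighbours[a] = w
    return [kw for kw, _ in neighbours.most_common(top_n)]

related = ego_network("information retrieval")
```

A real system would additionally score these candidates with association-rule measures (support, confidence) or similarity coefficients, as the paper's evaluation does.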

Project Performance Comparison Based on Different Types of Project Delivery System (사례연구 분석을 통한 발주방식별 성과비교)

  • Lee, Soo-Kyong;Jung, Young-Soo
    • Proceedings of the Korean Institute of Building Construction Conference / 2011.05a / pp.207-209 / 2011
  • Numerous reports show that alternative project delivery systems (PDSs) such as design-build (DB), construction management at risk (CMR), and design-build-maintain (DBM) are increasingly used in many countries. This study compared the characteristics of each PDS (design-bid-build (DBB), DB, CMR, and DBM) by analyzing quantitative data from nine research articles. To compare DBB with the alternative PDSs, the study focused on three principal factors: time, cost, and quality. DB showed the best time performance, and also the best cost performance depending on facility type and project size, while quality performance differed only slightly among the PDSs. These results support the conclusion that selecting a PDS appropriate to the characteristics of a project yields high efficiency and productivity.


A Study on the Advancement Planning of the Total DB for the Urban Regeneration (도시재생종합 DB의 고도화 방안 연구)

  • Yang, Dong-Suk;Yu, Yeong-Hwa
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.1429-1432 / 2011
  • To promote the practical use of information systems in the urban regeneration field, a specialized model for producing and managing information is needed. The comprehensive urban regeneration DB is based on the information demands of tasks such as diagnosing urban decline, assessing potential, and selecting priority sites, as well as on the diverse service demands of experts and the general public. This study analyzed existing research results and the system model currently in progress to define the information to be built, and established an information model and implementation plan that restructure this information on a spatial-information basis so that it can be used effectively. The proposed information model is expected to increase the usability of urban regeneration information and to extend the functions of the comprehensive urban regeneration information system.

A Study on Developing and Applying of Integrated Performance Evaluation Model for Public Information Projects (공공 정보화사업에 대한 통합성과 평가모델 개발과 적용에 관한 연구)

  • Yoo, Si-Hyeong;Yoo, Hae-Young
    • The KIPS Transactions: Part D / v.15D no.3 / pp.387-398 / 2008
  • Recently, evaluations of information systems analyze the effect of an investment both quantitatively and qualitatively, and various performance evaluation methods have been applied to overcome the limitations of approaches based solely on traditional investment returns. However, performance evaluation in the public sector typically yields only a yes-or-no qualitative judgment for each evaluation domain, and therefore does not improve the quality of the information system itself. In this paper, we improve the existing performance evaluation methods used in the public sector by adding a quality evaluation and a quantitative evaluation. We propose an integrated evaluation model that guarantees the objectivity and reliability of the evaluation results and supports improvement of the information system. To show the validity of the proposed model, we apply it to an Administration Information DB Construction Project promoted as a public informatization project and derive an evaluation result that reflects the reflection ratio and weight of each evaluation domain.
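
The scoring step the abstract describes, per-domain results combined by a reflection ratio and a domain weight, might look like the following sketch. The domain names, weights, ratios, and scores are all hypothetical, not values from the paper.

```python
# Hypothetical evaluation domains, each with qualitative, quantitative, and
# quality scores on a 0-100 scale, plus a weight per domain (weights sum to 1).
domains = {
    "planning":  {"scores": {"qualitative": 80, "quantitative": 70, "quality": 75}, "weight": 0.3},
    "execution": {"scores": {"qualitative": 90, "quantitative": 60, "quality": 85}, "weight": 0.4},
    "outcome":   {"scores": {"qualitative": 70, "quantitative": 80, "quality": 65}, "weight": 0.3},
}

# Reflection ratio: how much each evaluation type contributes within a domain.
reflection = {"qualitative": 0.4, "quantitative": 0.4, "quality": 0.2}

def integrated_score(domains, reflection):
    """Weighted sum of per-domain scores, each domain first blended by the
    reflection ratio of its evaluation types."""
    total = 0.0
    for d in domains.values():
        blended = sum(d["scores"][k] * r for k, r in reflection.items())
        total += blended * d["weight"]
    return round(total, 2)

score = integrated_score(domains, reflection)
```

The design choice worth noting is that blending inside each domain before weighting across domains lets the quantitative and quality additions coexist with the original qualitative judgments instead of replacing them.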

Study of MongoDB Architecture by Data Complexity for Big Data Analysis System (빅데이터 분석 시스템 구현을 위한 데이터 구조의 복잡성에 따른 MongoDB 환경 구성 연구)

  • Hyeopgeon Lee;Young-Woon Kim;Jin-Woo Lee;Seong Hyun Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.5 / pp.354-361 / 2023
  • Big data analysis systems apply NoSQL databases like MongoDB to store, process, and analyze diverse forms of large-scale data. MongoDB offers scalability and fast data processing speeds through distributed processing and data replication, depending on its configuration. This paper investigates the suitable MongoDB environment configurations for implementing big data analysis systems. For performance evaluation, we configured both single-node and multi-node environments. In the multi-node setup, we expanded the number of data nodes from two to three and measured the performance in each environment. According to the analysis, the processing speeds for complex data structures with three or more dimensions are approximately 5.75% faster in the single-node environment compared to an environment with two data nodes. However, a setting with three data nodes processes data about 25.15% faster than the single-node environment. On the other hand, for simple one-dimensional data structures, the multi-node environment processes data approximately 28.63% faster than the single-node environment. Further research is needed to practically validate these findings with diverse data structures and large volumes of data.
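
The paper's notion of "data structure complexity", the number of dimensions of nesting in the stored documents, can be made concrete with a small helper; the documents below are invented examples, not the paper's dataset.

```python
def doc_depth(value):
    """Nesting depth of a MongoDB-style document: scalars have depth 0,
    and each level of dict/list nesting adds one."""
    if isinstance(value, dict):
        return 1 + max((doc_depth(v) for v in value.values()), default=0)
    if isinstance(value, list):
        return 1 + max((doc_depth(v) for v in value), default=0)
    return 0

flat_doc = {"sensor": "s1", "value": 3.2}                            # one-dimensional
nested_doc = {"sensor": {"meta": {"site": {"id": 7}}}, "value": 3.2}  # deeply nested
```

Under the paper's measurements, deeply nested documents like `nested_doc` favoured a three-data-node deployment, while flat documents like `flat_doc` favoured multi-node configurations throughout; a deployment script could use a check like this to choose a topology, though the thresholds would need validating against one's own data.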

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data, and they cannot easily add nodes when rapidly growing data must be distributed across many nodes. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily expand through node dispersion when the amount of data increases rapidly; they are non-relational databases whose structure is appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system: its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log-analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the superiority of the proposed system. Moreover, an optimal chunk size is confirmed through a MongoDB insert-performance evaluation over various chunk sizes.
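
The routing rule of the log collector module, real-time logs to the MySQL store and everything else to the MongoDB store, can be sketched as below. The log type names and record fields are assumptions for illustration; the paper does not specify them.

```python
# Hypothetical routing rule of a log collector: logs flagged for real-time
# analysis go to the MySQL batch, everything else to the MongoDB batch.
REALTIME_TYPES = {"transaction", "auth"}      # assumed type names

def route_logs(records):
    """Classify log records by type and split them into per-store batches."""
    mysql_batch, mongo_batch = [], []
    for rec in records:
        (mysql_batch if rec["type"] in REALTIME_TYPES else mongo_batch).append(rec)
    return {"mysql": mysql_batch, "mongodb": mongo_batch}

logs = [
    {"type": "transaction", "msg": "withdraw ok"},
    {"type": "batch", "msg": "nightly settlement"},
    {"type": "auth", "msg": "login"},
]
routed = route_logs(logs)
```

In the proposed architecture the MongoDB batch would additionally be handed to the Hadoop-based module for parallel-distributed analysis.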

DB Construction of Activation Temperature and Response Time Index for Domestic Fixed-temperature Heat Detectors in Ceiling Jet Flow (천장제트기류에 대한 국내 정온식 열감지기의 작동온도 및 반응시간지수(RTI)에 관한 DB 구축)

  • Yoon, Ga-Yeong;Han, Ho-Sik;Mun, Sun-Yeo;Park, Chung-Hwa;Hwang, Cheol-Hong
    • Fire Science and Engineering / v.34 no.3 / pp.35-42 / 2020
  • The accurate prediction of fire detector activation time is required to ensure the reliability of fire modeling in the safety assessment of performance-based fire safety design. The main objective of this study is to determine the activation temperature and the response time index (RTI) of fixed-temperature heat detectors, which are the main input parameters for such detectors in the Fire Dynamics Simulator (FDS), a representative fire model. A fire detector evaluator, an experimental apparatus for fire detectors, was employed, and 10 types of domestic fixed-temperature heat detectors were selected through a product recognition survey. Significant differences in activation temperature and RTI were found among the detectors. With the measured DB, the detector activation time predicted by the FDS becomes more accurate. Finally, a reliable DB of the activation temperatures and RTIs of the fixed-temperature heat detectors was provided.
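
RTI enters fire models through the standard lumped-mass heat detector equation, dT_d/dt = sqrt(u) · (T_g − T_d) / RTI, where T_d is the sensing-element temperature, T_g the ceiling-jet gas temperature, and u the gas velocity. A minimal Euler integration shows how activation time follows from the two quantities the paper's DB provides (activation temperature and RTI); the gas temperature, velocity, and RTI values below are hypothetical examples, not measurements from the paper.

```python
import math

def activation_time(t_act, rti, gas_temp, velocity, t0=20.0, dt=0.1, t_max=600.0):
    """Integrate dT_d/dt = sqrt(u) * (T_g - T_d) / RTI (standard lumped-mass model)
    and return the time at which the detector element reaches its activation
    temperature, or None if it never does within t_max seconds."""
    t_d, t = t0, 0.0
    while t < t_max:
        t_d += dt * math.sqrt(velocity) * (gas_temp - t_d) / rti
        t += dt
        if t_d >= t_act:
            return round(t, 1)
    return None

# Hypothetical ceiling-jet conditions: 120 degC gas at 1.5 m/s past a detector
# with a 70 degC activation temperature.
fast = activation_time(t_act=70.0, rti=30.0, gas_temp=120.0, velocity=1.5)
slow = activation_time(t_act=70.0, rti=120.0, gas_temp=120.0, velocity=1.5)
```

A four-fold higher RTI delays activation roughly four-fold under the same ceiling jet, which is why per-detector RTI data matter as FDS input.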

Development of Hybrid Spatial Information Model for National Base Map (국가기본도용 Hybrid 공간정보 모델 개발)

  • Hwang, Jin Sang;Yun, Hong Sik;Yoo, Jae Yong;Cho, Seong Hwan;Kang, Seong Chan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.4_1 / pp.335-341 / 2014
  • The main goal of this study is to develop a new national base map dataset and database (DB) model suited to new information technology environments. To achieve this goal, we designed a new hybrid spatial information model characterized by a spatio-temporal map structure, a framework map for information integration, and a multi-layered topology structure. The DB structure was designed to reflect changes of objects by adding a 'time' dimension to the spatial information, while unique IDs and a multi-scale fusion map structure allow the infrastructure to connect and converge with other information. Furthermore, the topology and multi-visualization structure, including indoor and basement information, was designed to overcome the limitations of two-dimensional map representation. A performance test based on the hybrid spatial information model confirmed its potential as an advanced national base map and DB model supporting various information and spatio-temporal connections.
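
The model's two key design decisions, a unique ID for cross-dataset linkage and a time dimension on every feature, can be illustrated with a minimal record structure. All field names here are invented for illustration; the actual DB schema is not given in the abstract.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SpatioTemporalFeature:
    """Illustrative base-map feature: a stable unique ID plus a valid-time
    interval, so historical states of the same object can coexist in one DB."""
    uid: str                        # unique ID used to link with other datasets
    geometry: list                  # e.g. polygon vertex list
    valid_from: str                 # ISO date the state became current
    valid_to: Optional[str] = None  # None means the state is still current
    attrs: dict = field(default_factory=dict)

# Two temporal versions of one building: the object keeps its uid across changes.
v1 = SpatioTemporalFeature("BLDG-001", [(0, 0), (0, 10), (10, 10), (10, 0)],
                           "2010-01-01", "2014-06-30", {"floors": 3})
v2 = SpatioTemporalFeature("BLDG-001", [(0, 0), (0, 12), (10, 12), (10, 0)],
                           "2014-07-01", None, {"floors": 5})

def current_state(versions):
    """Pick the still-valid version of an object from its history."""
    return next(v for v in versions if v.valid_to is None)
```

Keeping superseded versions instead of overwriting them is what lets such a DB answer "what did this object look like in 2012?" as well as "what is it now".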

Vanishing Points Detection in Indoor Scene Using Line Segment Classification (선분분류를 이용한 실내영상의 소실점 추출)

  • Ma, Chaoqing;Gwun, Oubong
    • The Journal of the Korea Contents Association / v.13 no.8 / pp.1-10 / 2013
  • This paper proposes a method for detecting the vanishing points of an indoor scene using line segment classification. A two-stage process is used to detect the vanishing points efficiently. In the first stage, the method determines whether the image composition is a one-point or a two-point perspective projection; if it is a two-point projection, a horizontal line through the detected vanishing point is found for line segment classification. In the second stage, the method detects the two vanishing points precisely using line segment classification. The method was evaluated on synthetic images and an image DB. On synthetic images with added noise, the vanishing point detection error stays under 16 pixels until the noise reaches 60% of the image. The vanishing point detection rate on A. Quattoni and A. Torralba's image DB is over 87%.
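
The core geometric step after classification, intersecting one group of line segments in a least-squares sense to get its vanishing point, can be sketched as below. This is a generic formulation, not necessarily the paper's exact estimator, and the segments are synthetic, constructed to lie on lines through a known point.

```python
def vanishing_point(segments):
    """Least-squares intersection of line segments, each given as ((x1,y1),(x2,y2)).
    Solves the 2x2 normal equations for the point minimizing the summed squared
    distance to every segment's supporting line."""
    sxx = sxy = syy = bx = by = 0.0
    for (x1, y1), (x2, y2) in segments:
        # Unit normal (a, b) of the supporting line a*x + b*y = c.
        dx, dy = x2 - x1, y2 - y1
        norm = (dx * dx + dy * dy) ** 0.5
        a, b = -dy / norm, dx / norm
        c = a * x1 + b * y1
        sxx += a * a; sxy += a * b; syy += b * b
        bx += a * c;  by += b * c
    det = sxx * syy - sxy * sxy
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# Synthetic segments all lying on lines through (100, 50).
segs = [((0, 0), (50, 25)), ((0, 100), (50, 75)), ((0, 50), (60, 50))]
vp = vanishing_point(segs)
```

In a full pipeline, the line segment classification the paper describes would first split detected segments into groups, and an intersection like this would then be computed per group.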