• Title/Summary/Keyword: DB data

Search Results: 1,774 (Processing Time: 0.026 seconds)

A Study on the Construction of the Framework Spatial DB for Developing Watershed Management System Based on River Network (하천 네트워크 기반의 유역관리시스템 개발을 위한 프레임워크 공간 DB 구축에 관한 연구)

  • Kim, Kyung-Tak;Choi, Yun-Seok;Kim, Joo-Hun
    • Journal of the Korean Association of Geographic Information Studies / v.7 no.2 / pp.87-96 / 2004
  • When a watershed spatial database is constructed from a DEM, the hydrological and geographic characteristics of the watershed can be extracted easily, and those characteristics can be assigned and managed as attributes of the spatial database. This study examines a scheme for constructing the framework spatial database that serves as the basic information for managing watershed information. We established the framework spatial data, defined the relationships among the data, and constructed a framework spatial database for a test site. HyGIS (Hydrological Geographic Information System), developed with domestic technology for producing hydrological spatial data and building water resources systems, was used to extract the hydrological geographic characteristics and spatial data, and the HyGIS output was used to construct the framework spatial database of the test site. Finally, this study suggests a strategy for constructing a framework spatial database for developing a watershed management system based on a river network.

  • PDF

Design Technique of a Game DB Optimized for Android (안드로이드 기반의 최적화된 게임DB 설계기법)

  • Ryu, Chang-su;Hur, Chang-wu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.10a / pp.465-468 / 2012
  • The national smartphone game service industry has been growing rapidly, and its economic and cultural impact has increased steadily. In client/server MMO games such as smartphone games, which accumulate large amounts of data and support many simultaneous players, DB design for data persistence is very important. Considering operation and scalability suitable for the Android OS, this paper uses an RDBMS to handle very short, numerous, and complex transactions over large amounts of game data, and suggests fully qualified MMORPG game DB design techniques that are widely applicable beyond the online game industry.

  • PDF

Implementation of motor control system using NodeJS and MongoDB (NodeJS와 MongoDB를 활용한 모터 동작 제어시스템 구현)

  • Kang, Jin Young;Lee, Young-dong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.748-750 / 2017
  • With the development of intelligent technologies, the Internet of Things (IoT) has been applied in various domains. A platform technology including a sensor-server-DB chain is required to easily manage data at remote sites. In this paper, we implemented a servo motor control system, using NodeJS and MongoDB, that moves according to the tilt value of a smartphone. The system consists of a Raspberry Pi, a servo motor, and a smartphone; the servo motor sensor data are transmitted to NodeJS so that they can be stored in the database.

  • PDF
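The core step the abstract describes, turning a smartphone tilt reading into a servo position, can be sketched in a few lines of Python. The function name, tilt range, and servo range below are illustrative assumptions, not values from the paper:

```python
def tilt_to_servo_angle(tilt_deg, tilt_range=(-90.0, 90.0), servo_range=(0.0, 180.0)):
    """Map a smartphone tilt reading (degrees) to a servo angle.

    Clamps the tilt to the assumed sensor range, then scales it
    linearly onto the assumed servo range.
    """
    lo, hi = tilt_range
    t = min(max(tilt_deg, lo), hi)          # clamp out-of-range readings
    s_lo, s_hi = servo_range
    return s_lo + (t - lo) / (hi - lo) * (s_hi - s_lo)
```

In the described system, an angle computed like this would be sent to the Raspberry Pi's PWM driver while the raw reading is persisted to MongoDB.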

Performance Comparison and Analysis between Open-Source DBMS (오픈소스 DBMS 성능비교분석)

  • Jang, Rae-Young;Bae, Jung-Min;Jung, Sung-Jae;Soh, Woo-Young;Sung, Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.805-808 / 2014
  • A DBMS is a software system for managing and accessing databases. DBMSs include open-source products such as MySQL and commercial products such as Oracle. Since MySQL was acquired by Oracle, demand for MariaDB has increased, and interest in NoSQL systems is also growing. A performance comparison between open-source DBMSs on the same type of mass data is therefore needed; this study compared the performance of MariaDB and MongoDB and proposes a DBMS for big data processing.

  • PDF

Development of an Integrated DataBase System of Marine Geological and Geophysical Data Around the Korean Peninsula (한반도 해역 해양지질 및 지구물리 자료 통합 DB시스템 개발)

  • KIM, Sung-Dae;BAEK, Sang-Ho;CHOI, Sang-Hwa;PARK, Hyuk-Min
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.2 / pp.47-62 / 2016
  • An integrated database (DB) system was developed to manage the marine geological data and geophysical data acquired from around the Korean peninsula from 2009 to 2013. Geological data such as size analysis data, columnar section images, X-ray images, heavy metal data, and organic carbon data of sediment samples were collected in the form of text files, Excel files, PDF files, and image files. Geophysical data such as seismic data, magnetic data, and gravity data were gathered in the form of SEG-Y binary files, image files, and text files. We collected scientific data from research projects funded by the Ministry of Oceans and Fisheries, data produced by domestic marine organizations, and public data provided by foreign organizations. All the collected data were validated manually and stored in the archive DB according to data processing procedures. A geographic information system was developed to manage the spatial information and provide data effectively using the map interface. Geographic information system (GIS) software was used to import the position data from text files, manipulate spatial data, and produce shape files. A GIS DB was set up using the Oracle database system and the ArcGIS spatial data engine. A client/server GIS application was developed to support data search, data provision, and visualization of scientific data. It provided complex search functions and on-the-fly visualization using ChartFX and specially developed programs. The system is currently being maintained, and newly collected data are added to the DB system every year.

A Study on Metadata Development for Establishing International Research Cooperation Information Database (국제연구협력정보 DB 구축을 위한 메타데이터 개발에 관한 연구)

  • Noh, Younghee
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.29 no.2 / pp.5-34 / 2018
  • In this research, we intended to identify all types of information related to international research cooperation, collect information of each type, and build a database. To this end, we developed metadata in a primary phase in consultation with metadata experts, and conducted a survey of experts in international research cooperation. We then collected and entered data into the meta fields for each type of information source and verified each meta field, for example by checking whether actual data existed for it. The databases designed in this research are the international research cooperation information source database, project database, expert database, and institution database, the international organization database, and other databases. Validating the fields by entering data and conducting the survey showed a high rate of consistency between the survey results and the data entry rate by field. In the international organization database alone, however, about 25% of the fields had a data entry rate below 10% despite being rated highly significant by users.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as the MySQL databases have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL systems are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require a real-time log data analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation proves the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
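The routing step performed by the log collector module, sending each record either to the MySQL module (real-time analysis) or to the MongoDB module (per-unit-time aggregation), can be sketched as a minimal Python example. The class, field names, and classification rule below are illustrative assumptions, not the paper's actual implementation:

```python
class LogCollector:
    """Toy stand-in for the described log collector module: classifies
    incoming bank log records and distributes them to a real-time store
    or a batch store."""

    def __init__(self):
        self.realtime_store = []  # stands in for the MySQL module
        self.batch_store = []     # stands in for the MongoDB module

    def collect(self, record):
        # Records flagged for real-time analysis go to the MySQL side;
        # everything else is aggregated on the MongoDB side.
        if record.get("realtime", False):
            self.realtime_store.append(record)
        else:
            self.batch_store.append(record)

collector = LogCollector()
collector.collect({"type": "transfer", "realtime": True})
collector.collect({"type": "login", "realtime": False})
collector.collect({"type": "balance"})
```

In the real system the two in-memory lists would be replaced by inserts into MySQL and MongoDB, with the Hadoop-based analysis module consuming the MongoDB side.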

A Comparative Study on Blockchain Data Management Systems: BigchainDB vs FalconDB

  • Abrar Alotaibi;Sarah Alissa;Salahadin Mohammed
    • International Journal of Computer Science & Network Security / v.23 no.5 / pp.128-134 / 2023
  • The widespread usage of blockchain technology in cryptocurrencies has led to the adoption of the blockchain concept in data storage management systems for secure and effective data storage and management. Several innovative studies have proposed solutions that integrate blockchain with distributed databases. In this article, we review current blockchain databases, then focus on two well-known blockchain databases, BigchainDB and FalconDB, to illustrate their architecture and design aspects in more detail. BigchainDB is a distributed database that integrates blockchain properties to enhance immutability and decentralization as well as a high transaction rate, low latency, and accurate queries. Its architecture consists of three layers: the transaction layer, consensus layer, and data model layer. FalconDB, on the other hand, is a shared database that allows multiple clients to collaborate on the database securely and efficiently, even if they have limited resources. It has two layers: the authentication layer and the consensus layer, which are used with client requests and results. Finally, a comparison is made between the two blockchain databases, revealing that they share some characteristics such as immutability, low latency, permission, horizontal scalability, decentralization, and the same consensus protocol. However, they vary in terms of database type, concurrency mechanism, replication model, cost, and the usage of smart contracts.

Development of GIS-based Integrated DB Management System for the Analysis of Climate Environment Change (기후.환경 변화 분석을 위한 GIS기반의 통합DB 관리시스템 개발)

  • Kim, Na-Young;Kim, Kye-Hyun;Park, Yong-Gil
    • Spatial Information Research / v.19 no.6 / pp.101-109 / 2011
  • Climate change affects all components of the global environment system; in turn, those components mutually interact and affect climate change through non-linear feedback processes. It is thus necessary to study the interaction between the climate and the environment in order to comprehensively understand and predict climate and environment change. However, current relevant systems are limited to particular areas and do not sufficiently support the mutual linking of research studies. Therefore, this study develops a prototype GIS-based integrated DB management system to support the storage, management, and distribution of climate and environment data. The integrated DB management system was developed using the VB.NET language and ArcObjects components. First, considering the demands of climate and environment experts, the study areas were selected and the methods of data management and utilization were defined. In addition, a location-based GIS DB was created to aid in understanding climate change through visual representation. Finally, the integrated DB management system provides efficient data management and distribution, creating a synergistic effect on climate and environment studies. It also contributes significantly to the comprehensive diagnosis and prediction of climate change and environment systems.

A Study on Improving DB Security Problems through Masking by Security Grade (DB 보안의 문제점 개선을 위한 보안등급별 Masking 연구)

  • Baek, Jong-Il;Park, Dea-Woo
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.101-109 / 2009
  • Oracle DBMS has shipped with a built-in encryption module since version 8i, but encryption causes a performance decrease and restricts users. In this paper, we analyze DB security problems technique by technique: whether index search remains possible, object management problems, the serious DB performance decrease caused by encryption, whether real-time data encryption is supported, and whether IP-based data access control is available. We then present a comprehensive security framework that uses the DB masking technique as an alternative to encryption in order to improve the availability of DB security. As an alternative measure, we use virtual accounts and set up DB masking policies by security grade; through the virtual accounts, we check user authentication in advance, approve SQL queries, and verify integrity after the fact, and by collecting audit logs we enable administrators to operate the DB safely.
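The grade-based masking idea can be illustrated with a short Python sketch. The specific policy below (grade 1: no masking, grade 2: mask the second half, grade 3: keep only the first character) is a hypothetical example, not the grading scheme defined in the paper:

```python
def mask_by_grade(value: str, grade: int) -> str:
    """Mask a sensitive string according to a security grade.

    The policy is illustrative: higher grades mask more characters,
    leaving a visible prefix so queries and display still work.
    """
    n = len(value)
    if grade <= 1 or n == 0:
        return value                        # grade 1: data visible as-is
    keep = n // 2 if grade == 2 else 1      # grade 2: half; grade 3+: first char
    return value[:keep] + "*" * (n - keep)
```

Serving masked values like this avoids decrypting data on every read, which matches the abstract's argument that masking sidesteps the performance cost of full encryption.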