• Title/Summary/Keyword: DB 구조 (DB structure)

An Implementation of NEIS′DB Security Using RBAC based on PMI (PMI기반의 RBAC를 이용한 NEIS의 DB 보안 구현)

  • Ryoo Du-Gyu; Moon Bong-Keun; Jun Moon-Seog
    • Journal of the Korea Institute of Information Security & Cryptology / v.14 no.6 / pp.31-45 / 2004
  • Public Key Infrastructure (PKI) provides strong authentication. Privilege Management Infrastructure (PMI) is a newer technology that can provide a user's attribute information; its main function is to grant users more specific authorities and roles. Digital signatures are used to authenticate users and their roles, and Role Based Access Control (RBAC) is implemented by means of these signatures; RBAC gives security management some flexibility. NEIS (National Education Information System) cannot always provide a satisfactory quality of security management. The main idea of the proposed RNEIS (Role Based NEIS) is that a user's role is stored in an attribute certificate (AC), and access control decisions are driven by the authentication policy and that role. The security manager lets a user refer to the role stored in the user's AC, grants access accordingly, and suggests DB encryption by digital signature.
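
A minimal sketch of the role-driven access decision described above, assuming hypothetical role and permission names; the AC is reduced to a plain record, and digital-signature verification is a placeholder flag rather than real cryptography.

```python
# Minimal sketch of an RBAC access decision driven by a role stored in an
# attribute certificate (AC). Role and permission names are hypothetical;
# signature verification is reduced to a placeholder check.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "teacher": {"read_student_record"},
    "registrar": {"read_student_record", "write_student_record"},
}

@dataclass
class AttributeCertificate:
    user: str
    role: str
    signature_valid: bool  # stands in for real digital-signature verification

def is_access_allowed(ac: AttributeCertificate, permission: str) -> bool:
    # Reject if the AC's signature cannot be verified (authentication policy),
    # then decide purely from the role stored in the AC.
    if not ac.signature_valid:
        return False
    return permission in ROLE_PERMISSIONS.get(ac.role, set())

ac = AttributeCertificate(user="kim", role="teacher", signature_valid=True)
print(is_access_allowed(ac, "write_student_record"))  # False: teachers read only
```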

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive volume of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate system is needed to gather, store, categorize, and analyze the log data generated while that business is processed. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze it. In this study, we therefore use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are hard to process with the existing infrastructure's analysis tools and management systems. The proposed system runs in an IaaS (Infrastructure as a Service) cloud and can flexibly expand resources such as storage space and memory when storage must be extended or log data increase rapidly. Moreover, to overcome the limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores the aggregated log data by replicating its blocks, the system offers automatic restore functions that let it keep operating after recovering from a malfunction. Finally, by establishing a distributed database on the NoSQL MongoDB, the system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and such strict schemas cannot expand nodes when rapidly growing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases offer, but it expands easily through node dispersion when data grow rapidly; it is a non-relational database whose structure suits unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, the representative document-oriented model with a free schema structure: its flexible schema makes unstructured log data easy to process, it facilitates node expansion when data grow rapidly, and its Auto-Sharding function expands storage automatically.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects them, classifies them by log type, and distributes them to the MongoDB and MySQL modules. The log graph generator module generates the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module by analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions, and the Hadoop-based analysis module processes them in a parallel-distributed manner. A comparative evaluation of log-insert and query performance against a log processing system that uses only MySQL demonstrates the proposed system's advantages, and an optimal chunk size is confirmed by evaluating MongoDB's log-insert performance for various chunk sizes.
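
As a hedged illustration of the schema-free storage the abstract credits to MongoDB, the following sketch stores two differently shaped bank log documents and runs a small aggregation; the pymongo calls are standard, but the database, collection, and field names are invented.

```python
# Hedged sketch: storing schema-free bank log documents in MongoDB via
# pymongo. Database/collection names and log fields are hypothetical.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["raw_logs"]

# Documents need not share a schema: each log type carries its own fields.
logs.insert_many([
    {"type": "login", "user": "u1042", "ts": datetime.now(timezone.utc)},
    {"type": "transfer", "account": "110-22-33", "amount": 50000,
     "ts": datetime.now(timezone.utc), "channel": "mobile"},
])

# Aggregate counts per log type, as the log graph generator might request.
per_type = logs.aggregate([
    {"$group": {"_id": "$type", "count": {"$sum": 1}}},
])
for row in per_type:
    print(row)
```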

Optimal Input Database Construction for 3D Dredging Quantification (3차원 준설물량 산출을 위한 최적의 입력DB 구축방안)

  • Gang, ByeungJu; Hwang, Bumsik; Park, Heonwoo; Cho, Wanjei
    • Journal of the Korean GEO-environmental Society / v.19 no.5 / pp.23-31 / 2018
  • Dredging projects have become more important with the recent construction of offshore structures and reclamation projects, so more exact quantitative estimation of the dredged amount is required. Sub-sea ground information is generally obtained from boring investigations, and the dredged amount can be estimated from the depth, or the lower boundary of a certain layer, via a 3D visualization program. During this estimation, an input DB must be constructed from the 1D elevation information of the borings in order to spatially approximate the distribution of the lower boundary of each ground layer. The input DB varies with the choice of borings and approximation targets. Therefore, the 3D visualized ground profiles and dredged amounts obtained with different input DB construction methods are compared for actively dredged sites in the vicinity of the Saemangeum area and the outer port area of Gunsan. In conclusion, the input DB based on spatially approximated depths yields higher-precision results and more reasonable 3D visualized ground profiles.
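
The paper does not name its spatial approximation method, but a common way to turn 1D boring elevations into a layer-boundary surface is inverse-distance weighting (IDW); the sketch below, with made-up boring coordinates and elevations, shows the idea.

```python
# Hedged sketch of one way to spatially approximate a layer's lower boundary
# from 1D boring elevations: inverse-distance weighting (IDW). Boring
# coordinates and depths are invented for illustration.
import numpy as np

borings = np.array([  # x (m), y (m), layer lower-boundary elevation (m)
    [0.0, 0.0, -12.3],
    [50.0, 10.0, -13.1],
    [20.0, 60.0, -11.8],
])

def idw_elevation(x, y, power=2.0):
    d = np.hypot(borings[:, 0] - x, borings[:, 1] - y)
    if np.any(d < 1e-9):                 # exactly on a boring: return it
        return float(borings[np.argmin(d), 2])
    w = 1.0 / d**power
    return float(np.sum(w * borings[:, 2]) / np.sum(w))

# Estimated boundary elevation at one grid cell; summing (design depth minus
# boundary) over grid cells would accumulate into a dredged-volume estimate.
print(idw_elevation(25.0, 25.0))
```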

Design of a Fast 256Kb EEPROM for MCU (MCU용 Fast 256Kb EEPROM 설계)

  • Kim, Yong-Ho; Park, Heon; Park, Mu-Hun; Ha, Pan-Bong; Kim, Young-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.3 / pp.567-574 / 2015
  • In this paper, a 50 ns 256-kb EEPROM IP for MCU (microcontroller unit) ICs is designed. Data sensing in the read mode is made faster by a proposed differential-amplifier-type DB sensing circuit that uses a reference voltage, and the switching speed is raised by separating the distributed DB structure into eight segments, which reduces the total DB parasitic capacitance. The access time is further reduced by removing a 5 V NMOS transistor from the conventional RD switch, which shortens the BL precharge time in the read mode, and the reliability of the output data is secured by setting the differential voltage (ΔV) between the DB and reference voltages to 0.2 × VDD. The access time of the designed 256-kb EEPROM IP is 45.8 ns, and the layout size is 1571.625 μm × 798.540 μm in MagnaChip's 0.18 μm EEPROM process.
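
A back-of-the-envelope sketch of the two quantities the abstract leans on: the sensing margin ΔV = 0.2 × VDD, and the RC benefit of splitting the DB into eight segments. The supply voltage, driver resistance, and capacitance values below are assumptions, not figures from the paper.

```python
# Hedged back-of-the-envelope sketch. (1) the sensing margin ΔV = 0.2 * VDD;
# (2) the RC effect of splitting the data bus (DB) into eight segments so each
# driver sees only a fraction of the parasitic capacitance. Values are assumed.
VDD = 1.8                      # supply voltage (V), assumed for a 0.18 um process
delta_v = 0.2 * VDD
print(f"sensing margin ΔV = {delta_v:.2f} V")

R_DRIVER = 5e3                 # DB driver resistance (ohm), assumed
C_DB_TOTAL = 2e-12             # total DB parasitic capacitance (F), assumed
for segments in (1, 8):
    tau = R_DRIVER * (C_DB_TOTAL / segments)   # one segment switches at a time
    print(f"{segments} segment(s): RC time constant = {tau * 1e9:.2f} ns")
```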

Design of User Interface on Decommissioning DB (연구로 해체 DB User Interface 설계)

  • 박희성; 정관성; 이근우; 백삼태; 이규일; 박진호
    • Proceedings of the Korean Radioactive Waste Society Conference / 2003.11a / pp.681-686 / 2003
  • A GUI (graphic user interface) has been designed to make data entry convenient and to allow flexible retrieval of dismantling information from the decommissioning DB of KRR1&2. Based on a facility code, the GUI supports entering materials and searching and outputting the data saved on the server, and it also provides an explorer function that can locate the lower-level dismantling objects within each facility. A multimedia component has been added to the GUI so that a series of dismantling activities can be shown with MPEG video and pictures. In future work, the decommissioning DB and user interface are intended to provide functions for evaluating and analyzing dismantling activities on an engineering basis.

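A minimal sketch of the explorer behavior described above: walking from a facility code down to its lower-level dismantling objects. The codes, names, and in-memory tree are hypothetical stand-ins for the server-side DB.

```python
# Hedged sketch of an explorer-style lookup: retrieving lower-level
# dismantling objects by facility code. Codes and names are invented.
FACILITY_TREE = {
    "RX-01": {"name": "reactor hall", "children": ["RX-01-01", "RX-01-02"]},
    "RX-01-01": {"name": "biological shield", "children": []},
    "RX-01-02": {"name": "primary piping", "children": []},
}

def list_dismantling_objects(code, depth=0):
    node = FACILITY_TREE[code]
    print("  " * depth + f"{code}: {node['name']}")
    for child in node["children"]:          # recurse into lower-level objects
        list_dismantling_objects(child, depth + 1)

list_dismantling_objects("RX-01")
```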

A Study on the Database Structure for Utilizing Classical Literature Knowledge (고문헌 지식활용을 위한 DB구조에 관한 고찰)

  • Woo, Dong-Hyun; Kim, Ki-Wook; Lee, Byung-Wook
    • The Journal of Korean Medical History / v.33 no.2 / pp.89-104 / 2020
  • The purpose of this research is to organize the oriental medicine knowledge contained in classical literature in a form that new information technologies can exploit, so as to build a database structure useful for evidence-based medical practice. As a method, published studies in the field of oriental medicine were searched using "database" as a keyword, and those that deal with classical literature knowledge and describe their data structures were selected and analyzed. In conclusion, the original-text DB, which preserves the original texts and presents supporting passages, should include 'Contents Text', 'Tree Structure', 'Herbal Structure', 'Medicine Manufacture', and 'Disease Structure' tables. In order to search, compute over, and automatically extract the expressions written in the original texts of old literature, the tool DB should include 'Unit List', 'Capacity Notation List', 'CUI', 'LUI', and 'SUI' tables. In addition, in order to manage integrated knowledge covering herbs, medicines, acupuncture, diseases, and literature, and to implement search functions such as comparing the similarity of prescription compositions, the knowledge DB must contain 'dose-controlled medicine name', 'dose-controlled medicine composition', 'relational knowledge', 'knowledge structure', and 'computational knowledge' tables.
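
As a hedged illustration, a few of the tables the study names can be rendered as relational DDL. SQLite is used here only for self-containment, and all column definitions are assumptions, since the paper specifies table names rather than schemas.

```python
# Hedged sketch of a few of the named tables as SQLite DDL. Column names are
# hypothetical; the study lists table names only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- original-text DB: preserves source passages
CREATE TABLE contents_text (
    text_id   INTEGER PRIMARY KEY,
    book      TEXT,          -- source classic
    passage   TEXT           -- original passage
);
-- tool DB: unit handling for doses written in classical notation
CREATE TABLE unit_list (
    unit_id   INTEGER PRIMARY KEY,
    unit_name TEXT,          -- e.g. a classical dose unit
    grams     REAL           -- modern equivalent, if known
);
-- knowledge DB: dose-controlled composition of a prescription
CREATE TABLE dose_controlled_medicine_composition (
    medicine_id INTEGER,
    herb_name   TEXT,
    dose_grams  REAL
);
""")
print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
```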

A Study on Welding Joint Parts of Heavy Equipment (중장비용 용접구조물의 신뢰성 평가)

  • Ko, Jung; Cho, Yong-Geun
    • Proceedings of the Korean Reliability Society Conference / 2002.06a / pp.111-111 / 2002
  • For the large welded structures used in our construction heavy equipment, durability is evaluated in several stages, from structural analysis at the design phase to rig tests of first articles and full-vehicle tests. However, because of the nature of welded parts, life evaluation and prediction are not easy, so design changes made to improve durability or to reduce weight for higher efficiency often proceed by trial and error. This inevitably delays development schedules and raises test costs, and it also hampers market expansion through longer warranty lives. Existing rig tests are likewise limited in predicting field service life because they differ from the actual operating environment. Our center is therefore building a DB of usage conditions through statistical analysis and building and integrating a product quality DB, in order to improve the reliability of an accelerated life evaluation method for welded structures that reflects the characteristics of fabricated parts, and to develop an in-house tool for reaching optimal designs that meet market targets. As the first task, we analyzed the correlations in the data already secured.

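As a hedged illustration of that first task, the sketch below runs a correlation analysis over a toy table mixing usage-condition, quality, and life data; every field and value is invented.

```python
# Hedged sketch of correlation analysis over already-secured data.
# Field names and values are made up for illustration.
import pandas as pd

records = pd.DataFrame({
    "payload_cycles":   [1.2e5, 3.4e5, 2.1e5, 4.8e5],   # usage-condition DB
    "weld_defect_rate": [0.8, 1.9, 1.1, 2.7],           # quality DB (%)
    "rig_test_life":    [9.1e5, 4.2e5, 7.3e5, 3.1e5],   # cycles to failure
})
# Pearson correlations between usage conditions, quality, and measured life.
print(records.corr().round(2))
```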

Construction of Road Information Database for Urban Disaster Management : Focused on Evacuation Vulnerability (방재관점의 도로 데이터베이스 구축 : 대피위험도를 중심으로)

  • Kim, Ji-Young; Kim, Jung-Ok; Kim, Yong-Il; Yu, Ki-Yun
    • 한국공간정보시스템학회:학술대회논문집 / 2007.06a / pp.212-216 / 2007
  • The purpose of this study is to analyze the risk evaluation factors of roads, which serve as important evacuation routes when an earthquake strikes an urban area, and to build a database (DB) of them. Korea has developed an integrated road management system for systematic road maintenance, but it is limited to computerized road registers, a pavement management system, a bridge management system, and a road cut-slope management system. In other words, there is little understanding of roads as corridors for evacuation and rescue through which people can secure their safety during a disaster. Based on a review of previous studies, this study therefore proposes items for disaster-prevention-oriented road management, grouped into the natural environment, the social environment, roads and facilities, and triggering factors, and builds them into a DB for the area around Seoul National University. Because it considers roads and their surroundings together, this DB is expected to serve as base data for disaster-prevention planning of evacuation and rescue activities, as well as for regional risk assessment and disaster risk mapping. Since the DB was built from field surveys rather than from existing GIS data, the significance of this study lies in demonstrating the need for an integrated disaster-prevention DB.

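A hedged sketch of what one road-segment record in such a DB might hold, grouped by the four proposed factor categories; all field names and values are invented.

```python
# Hedged sketch of a road-segment record for an evacuation-risk DB,
# organized by the study's four factor categories. Fields are hypothetical.
from dataclasses import dataclass

@dataclass
class RoadSegmentRecord:
    segment_id: str
    # natural environment
    slope_deg: float
    liquefaction_prone: bool
    # social environment
    population_density: float        # persons per km^2 nearby
    # road and facilities
    width_m: float
    overpass_nearby: bool            # collapse could block evacuation
    # triggering factors
    old_roadside_buildings: int      # buildings likely to shed debris

record = RoadSegmentRecord("SNU-012", 4.5, False, 18000.0, 12.0, True, 7)
print(record)
```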

A Study on the DB Construction Method for Analyzing Housing Demand Analysis Based on Big-Data (빅데이터 기반 주택수요 분석을 위한 DB 구축 방안 연구)

  • Yang, Dong-Suk; Lee, Sang-Hoon; Lim, Jae-Bin
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.778-780 / 2017
  • Appropriate housing supply and housing policy require accurate forecasts of housing demand that reflect changes in population and household structure. This study examines the problems of the DBs used in existing housing demand forecasting and proposes improvements, together with a DB construction plan that can exploit big data. In future work, a pilot system will be developed and its feasibility reviewed so that housing demand can be analyzed using sources that have not been utilized so far, such as officially assessed housing prices, building registers, the Household Income and Expenditure Survey, and the Population and Housing Census.
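
As a hedged illustration of the linkage such a DB would need, the sketch below joins toy extracts of the named sources on a shared region code; all codes and figures are invented.

```python
# Hedged sketch: joining so-far-unused housing data sources by region code.
# All codes and values are invented for illustration.
import pandas as pd

assessed_prices = pd.DataFrame(
    {"region": ["11110", "11140"], "avg_price_mw": [620, 540]})
building_register = pd.DataFrame(
    {"region": ["11110", "11140"], "units": [42000, 31000]})
census = pd.DataFrame(
    {"region": ["11110", "11140"], "households": [39000, 28000]})

base = (assessed_prices
        .merge(building_register, on="region")
        .merge(census, on="region"))
base["units_per_household"] = base["units"] / base["households"]  # crude supply ratio
print(base)
```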

A Knowledge Management Tool for ETRI Korean/English and English/Korean Automatic Translation System (자동 번역용 대용량 번역 지식 DB 관리 시스템 설계 및 구현)

  • 장현숙; 임점미; 유원경; 홍기형; 박상규
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.363-365 / 2000
  • This paper describes the development of a system for effectively managing the large-scale translation knowledge used in English-Korean and Korean-English machine translation. We analyze the development environment of ETRI's English-Korean and Korean-English machine translation systems, which are under development, and summarize the problems of managing translation knowledge with gdbm. The translation knowledge management system presented here is a client/server system based on MS SQL Server. The translation knowledge is modeled as a relational DB, and its schema is designed and implemented. The management system consists of a migration tool that converts the knowledge previously built as gdbm files into the translation knowledge DB, a search tool for retrieving the translation knowledge stored in the DB, and a construction tool that supports inserting, deleting, and updating translation knowledge.

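A hedged sketch of the migration tool's job: iterating a legacy gdbm file and loading its key/value translation entries into a relational table. SQLite stands in for the MS SQL Server used in the paper, and the file, table, and column names are invented.

```python
# Hedged sketch: migrating gdbm key/value translation entries into a
# relational table. dbm.gnu is POSIX-only; names/paths are hypothetical,
# and SQLite stands in for MS SQL Server.
import dbm.gnu as gdbm
import sqlite3

sql = sqlite3.connect("translation_knowledge.db")
sql.execute("""CREATE TABLE IF NOT EXISTS transfer_dict (
    source_word TEXT PRIMARY KEY, target_word TEXT)""")

with gdbm.open("ko_en_dict.gdbm", "r") as legacy:
    key = legacy.firstkey()
    while key is not None:                      # walk gdbm's keyspace
        sql.execute("INSERT OR REPLACE INTO transfer_dict VALUES (?, ?)",
                    (key.decode("utf-8"), legacy[key].decode("utf-8")))
        key = legacy.nextkey(key)
sql.commit()
```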