• Title/Summary/Keyword: Web-based Database Model (웹기반 데이터베이스 모델)

Search results: 128

Design and Implementation of Web-based SWOT Analysis Supporting Tool (웹 기반의 SWOT 분석 지원도구 설계 및 구현)

  • Hwang, Jeena;Seo, Ju Hwan;Lim, Jung-Sun;Yoo, Hyoung Sun;Park, Jinhan;Kim, You-eil;Kim, Ji Hui
    • The Journal of the Korea Contents Association / v.17 no.7 / pp.1-11 / 2017
  • The best business strategy, leading to innovation and productivity, can be achieved by carefully analyzing the internal and external environments of a company. Many companies need, but have difficulty finding, a tool to determine their own internal and external environmental factors, namely their strengths, weaknesses, opportunities, and threats (SWOT). SWOT is the analytical base model used in this research to design a semi-automated environmental analysis process. This study investigates a SWOT generation system built on an analysis database created in advance by experts in each field. Companies can search the database and choose the environmental elements that best express their situation. This semi-automated SWOT tool is expected to help companies recognize their internal capabilities more accurately and to consider the external environmental changes around them.
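A minimal sketch of how such an expert-curated element database might be queried; the table layout, sample rows, and keywords below are hypothetical, not taken from the paper.

```python
# Sketch of querying an expert-curated SWOT element database.
# Table/column names and sample data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE swot_elements ("
    "  category TEXT,"          # 'S', 'W', 'O', or 'T'
    "  description TEXT,"       # expert-written environmental factor
    "  industry TEXT)"          # industry the factor applies to
)
conn.executemany(
    "INSERT INTO swot_elements VALUES (?, ?, ?)",
    [("S", "Strong in-house R&D capability", "manufacturing"),
     ("T", "New entrants with low-cost products", "manufacturing")],
)

def candidate_elements(industry: str, keyword: str):
    """Return expert-written factors matching a company's search terms."""
    rows = conn.execute(
        "SELECT category, description FROM swot_elements "
        "WHERE industry = ? AND description LIKE ?",
        (industry, f"%{keyword}%"),
    )
    return rows.fetchall()

# A company picks the elements that best express its environment.
print(candidate_elements("manufacturing", "R&D"))
```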

Full Stack Platform Design with MongoDB (MongoDB를 활용한 풀 스택 플랫폼 설계)

  • Hong, Sun Hag;Cho, Kyung Soon
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.12 / pp.152-158 / 2016
  • In this paper, we implemented a full stack platform with a MongoDB database on the open source Raspberry Pi 3 board. We experimented with event-driven triggering by logging acceleration sensor data over wireless communication. We captured images from a USB camera (MS LifeCam Cinema) at 28 frames per second under the Linux distribution Raspbian Jessie and extended the wireless communication function with Bluetooth so that Android mobile devices can interface with the platform. We thereby implemented the full stack platform functions for recognizing event triggers from acceleration sensor movement and for gathering temperature and humidity sensor data in an IoT environment. In particular, we developed the platform with the MEAN stack, since the MEAN stack works more naturally with MongoDB than a conventional database does. In future work, we plan to enhance the platform with IoT cloud functionality and a more practical MongoDB-based web design.
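As a rough illustration of the event-driven sensor logging described above, the sketch below stores accelerometer samples in MongoDB and flags threshold crossings; the database names and the trigger level are assumptions, not details from the paper.

```python
# Sketch of event-driven sensor logging into MongoDB, assuming a local
# MongoDB instance; collection names and the 1.5 g threshold are illustrative.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
sensors = client["iot_platform"]["sensor_log"]

ACCEL_THRESHOLD = 1.5  # g; hypothetical trigger level

def log_reading(ax: float, ay: float, az: float, temp: float, humid: float):
    """Store one sensor sample and flag it if acceleration spikes."""
    magnitude = (ax**2 + ay**2 + az**2) ** 0.5
    doc = {
        "ts": datetime.now(timezone.utc),
        "accel": {"x": ax, "y": ay, "z": az, "magnitude": magnitude},
        "temperature": temp,
        "humidity": humid,
        "event": magnitude > ACCEL_THRESHOLD,  # event-driven trigger
    }
    sensors.insert_one(doc)
    if doc["event"]:
        print("event triggered: start camera capture / notify mobile client")

log_reading(0.1, 0.2, 1.9, 24.5, 41.0)
```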

XSTAR: XQuery to SQL Translation Algorithms on RDBMS (XSTAR: XML 질의의 SQL 변환 알고리즘)

  • Hong, Dong-Kweon;Jung, Min-Kyoung
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.3 / pp.430-433 / 2007
  • Since XML has been adopted in many areas, there have been several research efforts to process XML queries efficiently. The majority of them adopt relational databases as the underlying system, because the relational model is the one most widely used for managing large volumes of data efficiently. In this paper we develop XQuery-to-SQL translation algorithms, called XSTAR, that can efficiently handle XPath, XQuery FLWORs with nested iteration expressions, element constructors, and keyword retrieval on relational databases, as well as construct XML fragments from the transformed SQL results. All of the algorithms in XSTAR have been implemented as the XQuery processor engine of the XML management system XPERT, and its prototype can be tested at http://dblab.kmu.ac.kr/project.jsp.
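The abstract does not include the algorithms themselves; the toy sketch below only conveys the general path-to-SQL idea, assuming XML shredded into a hypothetical edge(id, parent_id, tag, value) table. XSTAR's actual translation is far more general (FLWORs, constructors, keyword retrieval).

```python
# Toy sketch of translating a child-axis XPath into SQL over an edge-table
# encoding of XML; the schema and generated SQL are illustrative only.
def xpath_to_sql(path: str) -> str:
    """Translate a simple path like /site/item/name to a SQL query."""
    tags = [t for t in path.split("/") if t]
    # One self-join per location step; t0 matches the document root.
    joins = ["FROM edge t0"]
    where = [f"t0.tag = '{tags[0]}'", "t0.parent_id IS NULL"]
    for i, tag in enumerate(tags[1:], start=1):
        joins.append(f"JOIN edge t{i} ON t{i}.parent_id = t{i-1}.id")
        where.append(f"t{i}.tag = '{tag}'")
    last = len(tags) - 1
    return (f"SELECT t{last}.value " + " ".join(joins)
            + " WHERE " + " AND ".join(where))

print(xpath_to_sql("/site/item/name"))
# SELECT t2.value FROM edge t0 JOIN edge t1 ... WHERE t0.tag = 'site' AND ...
```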

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized services to users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and it can flexibly expand resources such as storage space and memory when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it hard to expand nodes by distributing the stored data across them when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented database with a schema-free structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
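A rough sketch of the log collector's routing rule described above: real-time log types go to the MySQL module, bulk data to the MongoDB module. The record format, type names, and the MySQL stand-in are hypothetical, meant only to show the split.

```python
# Sketch of routing classified log records to MongoDB or MySQL, assuming
# JSON-like records with a "type" field; names here are illustrative.
from pymongo import MongoClient

mongo_logs = MongoClient("mongodb://localhost:27017")["bank"]["raw_logs"]
REALTIME_TYPES = {"transaction_error", "auth_failure"}  # illustrative set

def insert_into_mysql(record: dict) -> None:
    """Placeholder for the MySQL module's real-time insert path."""
    print("MySQL insert:", record["type"])

def route_log(record: dict) -> str:
    """Send real-time log types to MySQL, bulk log data to MongoDB."""
    if record.get("type") in REALTIME_TYPES:
        insert_into_mysql(record)       # real-time graphs read from MySQL
        return "mysql"
    mongo_logs.insert_one(record)       # bulk store; Hadoop jobs read this
    return "mongodb"

route_log({"type": "auth_failure", "branch": "Seoul", "ts": "2013-06-01"})
route_log({"type": "page_view", "branch": "Busan", "ts": "2013-06-01"})
```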

Text-mining Techniques for Metabolic Pathway Reconstruction (대사경로 재구축을 위한 텍스트 마이닝 기법)

  • Kwon, Hyuk-Ryul;Na, Jong-Hwa;Yoo, Jae-Soo;Cho, Wan-Sup
    • Journal of Korea Society of Industrial Information Systems / v.12 no.4 / pp.138-147 / 2007
  • A metabolic pathway is a series of chemical reactions occurring within a cell, and pathway information can be used for drug development and for understanding life phenomena. Many biologists try to extract metabolic pathway information from a huge body of literature for their studies of metabolic-circuit regulation. We propose a text-mining technique based on keywords and sentence patterns. The proposed technique uses a web robot to collect a large number of papers and stores them in a local database. We use the Gene Ontology to increase the compound recognition rate and the NCBI tokenizer library to recognize useful information without breaking compound names apart. Furthermore, we obtain useful sentence patterns representing metabolic pathways from papers and from KEGG, a representative metabolic database. We extracted 66 patterns from 20,000 documents on glycosphingolipid species from KEGG and verified our system on nineteen compounds of the glycosphingolipid species. The results show a recall of 95.1%, a precision of 96.3%, and a processing time of 15 seconds. The proposed text-mining system is expected to be used for metabolic pathway reconstruction.
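The sketch below illustrates the keyword/pattern idea with a single hand-written pattern; the actual system mines 66 such patterns from papers and KEGG, and the regex and example sentence here are illustrative only.

```python
# Sketch of pattern-based extraction of pathway relations from a sentence.
# One illustrative pattern; the mined pattern set is much larger.
import re

# Pattern: "<compound A> is converted (in)to <compound B> by <enzyme>"
PATTERN = re.compile(
    r"(?P<src>[\w\-]+) is converted (?:in)?to (?P<dst>[\w\-]+)"
    r"(?: by (?P<enzyme>[\w\-]+))?",
    re.IGNORECASE,
)

def extract_reactions(sentence: str):
    """Return (substrate, product, enzyme) triples found in a sentence."""
    return [(m.group("src"), m.group("dst"), m.group("enzyme"))
            for m in PATTERN.finditer(sentence)]

print(extract_reactions(
    "Glucosylceramide is converted to lactosylceramide by B4GALT5."))
# [('Glucosylceramide', 'lactosylceramide', 'B4GALT5')]
```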

Automatic Training Corpus Generation Method of Named Entity Recognition Using Knowledge-Bases (개체명 인식 코퍼스 생성을 위한 지식베이스 활용 기법)

  • Park, Youngmin;Kim, Yejin;Kang, Sangwoo;Seo, Jungyun
    • Korean Journal of Cognitive Science / v.27 no.1 / pp.27-41 / 2016
  • Named entity recognition classifies elements in text into predefined categories and is used in various applications that receive natural language input. In this paper, we propose a method that automatically generates a named entity training corpus using knowledge bases. We apply two different corpus generation methods depending on the knowledge base. One method attaches named entity labels to text data using Wikipedia; the other crawls data from the web and labels named entities in the web text using Freebase. We conduct two experiments to evaluate the corpus quality and our proposed method for automatically generating a named entity recognition corpus. We randomly extract sentences from the two corpora, called the Wikipedia corpus and the Web corpus, and label them manually to validate both automatically labeled corpora. We also report the performance of a named entity recognizer trained on the corpus generated by our method. The results show that our method adapts well to new corpora that reflect diverse sentence structures and the newest entities.
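A minimal sketch of the Wikipedia-based labeling idea: anchor texts of internal links are tagged with the entity type of the linked page. The type lookup table below is a hypothetical stand-in for the knowledge base.

```python
# Sketch of labeling wiki-linked anchor texts with entity types.
# ENTITY_TYPE is an illustrative stand-in for a knowledge-base lookup.
import re

WIKI_LINK = re.compile(r"\[\[(?P<target>[^\]|]+)(?:\|(?P<anchor>[^\]]+))?\]\]")
ENTITY_TYPE = {"Seoul": "LOCATION", "Samsung": "ORGANIZATION"}  # stand-in KB

def label_sentence(wikitext: str) -> str:
    """Replace wiki links with (surface form, entity type) tokens."""
    def tag(m: re.Match) -> str:
        surface = m.group("anchor") or m.group("target")
        etype = ENTITY_TYPE.get(m.group("target"), "O")
        return f"<{surface}:{etype}>"
    return WIKI_LINK.sub(tag, wikitext)

print(label_sentence("[[Samsung]] moved its offices to [[Seoul|the capital]]."))
# <Samsung:ORGANIZATION> moved its offices to <the capital:LOCATION>.
```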

Linkage Base of Geo-based Processing Service and Open PaaS Cloud (오픈소스 PaaS 클라우드와 공간정보 처리서비스 연계 기초)

  • KIM, Kwang-Seob;LEE, KI-Won
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.4 / pp.24-38 / 2017
  • The awareness of and demand for cloud computing technologies and their application models have increased, and cloud-based information systems are being expanded for use in many applications. These advances in information technology apply directly to spatial information. PaaS is an important platform for building a substantial cloud ecosystem in which geo-based application services can be developed, so PaaS cloud technology needs to be analyzed before SaaS is developed. A PaaS cloud supports sharing of related extensions, database operation and management, and application development and deployment. Development of geo-spatial information systems or services based on PaaS, both domestically and overseas, is in the initial stages of research and application. In this study, state-of-the-art cloud computing is reviewed and a conceptual design for geo-based applications is presented. The proposed model is based on containers, the core element of open source PaaS cloud technology. These technologies are expected to contribute to the applicability and scalability of the geo-spatial information industry as it adopts cloud computing, and the results of this study can provide a technological base for practical service implementation and experimentation for geo-based applications.
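As a conceptual illustration of linking an application to a container-hosted geo-processing service, the sketch below posts a request to a REST route; the endpoint, payload shape, and port are hypothetical, not part of the paper's design.

```python
# Sketch of calling a container-hosted geo-processing service over HTTP;
# the route and request format are hypothetical assumptions.
import json
from urllib import request

def call_geo_service(operation: str, geometry: dict) -> dict:
    """POST a processing request to a container-hosted geo service."""
    payload = json.dumps({"operation": operation, "geometry": geometry})
    req = request.Request(
        "http://localhost:8080/geo/process",  # hypothetical container route
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running service): buffer a point by 500 m.
# result = call_geo_service("buffer",
#                           {"type": "Point", "coordinates": [127.0, 37.5]})
```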

Development of Mobile Application for Ship Officers' Job Stress Measurement and Management (해기사 직무스트레스 측정 및 관리 모바일 애플리케이션 개발)

  • Yang, Dong-Bok;Kim, Joo-Sung;Kim, Deug-Bong
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.2 / pp.266-274 / 2021
  • Ship officers are subject to excessive job stress, which has negative physical and psychological impacts and may adversely affect the smooth supply and demand of human resources. In this study, a mobile web application was developed as a tool for the systematic measurement and management of officers' job stress and was verified through quality evaluation. A requirement analysis was performed with ship officers and the human resources staff of shipping companies, and the results were reflected in the application design. The application was designed according to the waterfall model, a traditional software development process, and its functions were implemented using JSP and the Spring Framework. Performance evaluation confirmed that the user interface produced the proper input and output results and that the administrator interface correctly presented respondent results and the underlying database. In a quality evaluation of the interfaces based on the ISO/IEC 9126-2 metrics, questionnaire scores were 4.60 for the user interface and 4.65 for the administrator interface on a 5-point scale. In the future, follow-up research is needed on a data analysis system that exploits the collected big-data sets.
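The abstract does not specify the stress instrument, so the sketch below only shows the kind of score aggregation such a questionnaire backend might perform; the categories, 5-point scale conversion, and cutoff are all hypothetical.

```python
# Sketch of aggregating Likert-scale answers into per-category stress
# scores; the instrument, scaling, and cutoff are hypothetical.
from statistics import mean

def stress_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Convert 1-5 Likert answers per category to 0-100 scores."""
    return {
        category: round((mean(items) - 1) / 4 * 100, 1)
        for category, items in responses.items()
    }

answers = {
    "job_demand": [4, 5, 3, 4],
    "interpersonal_conflict": [2, 2, 3, 1],
}
for category, score in stress_scores(answers).items():
    flag = "manage" if score >= 50 else "ok"  # illustrative cutoff
    print(f"{category}: {score} ({flag})")
```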