• Title/Summary/Keyword: Server-Based Computing (서버 기반 컴퓨팅)

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized services to users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data generated by banks. Most of the log data generated during banking operations come from handling a client's business; therefore, a separate system is needed to gather, store, categorize, and analyze them. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of the existing computing infrastructure. The proposed system runs in an IaaS (Infrastructure as a Service) cloud environment, which allows computing resources such as storage space and memory to be expanded flexibly when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic recovery functions that let it continue operating after a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to distribute stored data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented database with a schema-free structure. MongoDB is adopted because its flexible schema makes unstructured log data easy to process, it facilitates node expansion when the amount of data is increasing rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB and MySQL modules. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified by evaluating MongoDB's log insertion performance for various chunk sizes.
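
Although the abstract includes no source code, the collector-to-MongoDB path it describes can be sketched briefly. The snippet below is a minimal illustration, assuming a pymongo client, a mongos router at localhost:27017, and hypothetical database/collection names (logdb, logs); the Auto-Sharding commands mirror the flexible storage expansion the abstract mentions.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Connect through a mongos router so the sharding commands below are available.
client = MongoClient("mongodb://localhost:27017")

# Enable sharding for the (hypothetical) log database and hash-shard the
# collection so inserts spread across shards as log volume grows.
client.admin.command("enableSharding", "logdb")
client.admin.command("shardCollection", "logdb.logs", key={"_id": "hashed"})

logs = client["logdb"]["logs"]

def collect(raw_line: str, log_type: str) -> None:
    """Classify a raw log line by type and store it as a schema-free document."""
    logs.insert_one({
        "type": log_type,                  # e.g. "transaction", "auth"
        "raw": raw_line,                   # unstructured payload kept as-is
        "ts": datetime.now(timezone.utc),  # key for per-unit-time aggregation
    })

collect("2013-06-01 09:12:01 TXN client=1042 amount=35000", "transaction")
```

A hashed shard key spreads the insert load evenly across shards, which matches the abstract's emphasis on flexible node expansion as log volume grows.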

Satellite Imagery and AI-based Disaster Monitoring and Establishing a Feasible Integrated Near Real-Time Disaster Monitoring System (위성영상-AI 기반 재난모니터링과 실현 가능한 준실시간 통합 재난모니터링 시스템)

  • KIM, Junwoo;KIM, Duk-jin
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.236-251 / 2020
  • As remote sensing technologies evolve and more satellites are placed in orbit, the demand for using satellite data for disaster monitoring is rapidly increasing. Although natural and social disasters have been monitored using satellite data, the constraints on establishing an integrated satellite-based near real-time disaster monitoring system have not yet been identified, and a framework for establishing such a system remains to be presented. This research identifies these constraints by devising and testing a new conceptual framework for disaster monitoring, and then presents a feasible disaster monitoring system that relies mainly on acquirable satellite data. Implementing near real-time disaster monitoring by satellite remote sensing is constrained by technological and economic factors and, more significantly, by interactions between organisations and policy that hamper the timely acquisition of appropriate satellite data, as well as by institutional factors related to satellite data analysis. Such constraints could be eased by employing an integrated computing platform, such as Amazon Web Services (AWS), which enables obtaining, storing and analysing satellite data, and by developing a toolkit for analysing which satellites' sensors and orbits are required for monitoring specific types of disaster. It is anticipated that the findings of this research can serve as a meaningful reference when establishing a satellite-based near real-time disaster monitoring system in any country.

Randomness based Static Wear-Leveling for Enhancing Reliability in Large-scale Flash-based Storage (대용량 플래시 저장장치에서 신뢰성 향상을 위한 무작위 기반 정적 마모 평준화 기법)

  • Choi, Kilmo;Kim, Sewoog;Choi, Jongmoo
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.126-131 / 2015
  • As flash-based storage systems are actively employed in large-scale servers and data centers, reliability has become an indispensable element. One promising technique for enhancing reliability is static wear-leveling, which distributes erase operations evenly among blocks so that the lifespan of the storage system is prolonged. However, as capacity increases, the processing overhead of this technique becomes non-trivial, mainly due to searching all blocks for the one whose erase count is minimum (or maximum). To reduce this overhead, we introduce a new randomized block selection method for static wear-leveling. Specifically, instead of an exhaustive search, it chooses n blocks at random and selects the block with the maximal/minimal erase count among the chosen set. Our experimental results reveal that wear-leveling effects are already obtained when n is 2, and that for n of 4 or more the effect is close to that of traditional static wear-leveling. To evaluate the processing overhead quantitatively, the scheme was implemented on an FPGA board, and an overhead reduction of more than a factor of three was observed. This implies that the proposed scheme is as effective as traditional static wear-leveling while reducing overhead.
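
As a rough illustration of the randomized selection described above (not the authors' code), the following sketch picks n candidate blocks at random and takes the one with the minimum erase count; the list-based erase-count table is a simplifying assumption.

```python
import random

def select_victim(erase_counts, n=4):
    """Randomized static wear-leveling selection: sample n candidate blocks
    and return the one with the minimum erase count, avoiding a full scan."""
    candidates = random.sample(range(len(erase_counts)), n)
    return min(candidates, key=lambda b: erase_counts[b])

# Toy erase-count table for a large device (one entry per block).
erase_counts = [random.randint(0, 1000) for _ in range(1_000_000)]
victim = select_victim(erase_counts, n=4)  # likely-cold block to reuse
print(victim, erase_counts[victim])
```

Sampling n blocks costs O(n) instead of O(total blocks), which is why the paper's FPGA measurement shows the overhead shrinking while the leveling effect stays close to the exhaustive search for n of 4 or more.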

Recommendation of Best Empirical Route Based on Classification of Large Trajectory Data (대용량 경로데이터 분류에 기반한 경험적 최선 경로 추천)

  • Lee, Kye Hyung;Jo, Yung Hoon;Lee, Tea Ho;Park, Heemin
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.101-108 / 2015
  • This paper presents the implementation of a system that recommends the empirical best route based on classification of large trajectory data. As location-based services proliferate, we expect location and trajectory data to grow into big data, from which the best empirical routes can be extracted. Large trajectory data are clustered into groups of similar routes using the Hadoop MapReduce framework. The clustered route groups are stored and managed by a DBMS, which supports rapid responses to end-user requests. We aim to find the best routes based on collected real data, not the ideal shortest path on a map. We have implemented 1) an Android application that collects trajectories from users, 2) an Apache Hadoop MapReduce program that clusters large trajectory data, and 3) a service application that queries a start and destination through a web server and displays the recommended routes on mobile phones. We validated our approach using real data collected over five days and compared the results with commercial navigation systems. Experimental results show that the empirical best route is better than the routes recommended by commercial navigation systems.
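
The grouping stage can be illustrated with a small, single-process map/reduce sketch. The paper runs this on Hadoop MapReduce; the grid-cell similarity key below is our simplifying assumption, not the authors' clustering criterion.

```python
from collections import defaultdict

def grid(p, cell=0.01):
    """Map a (lat, lon) point to a coarse grid cell (hypothetical cell size)."""
    return (round(p[0] / cell), round(p[1] / cell))

def map_phase(trajectories):
    # Emit (start cell, destination cell) as the grouping key for each trip.
    for start, end, points in trajectories:
        yield (grid(start), grid(end)), points

def reduce_phase(pairs):
    # Gather trips that share a start/destination key into a route group.
    groups = defaultdict(list)
    for key, points in pairs:
        groups[key].append(points)
    return groups

trips = [((37.501, 127.001), (37.601, 127.101), ["a", "b"]),
         ((37.502, 127.002), (37.602, 127.102), ["a", "c"])]
for key, routes in reduce_phase(map_phase(trips)).items():
    print(key, len(routes))  # each group feeds the best-route selection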

Real-Time Remote Display Technique based on Wireless Mobile Environments (무선 모바일 환경 기반의 실시간 원격 디스플레이 기법)

  • Seo, Jung-Hee;Park, Hung-Bog
    • The KIPS Transactions:PartC / v.15C no.4 / pp.297-302 / 2008
  • Because mobile devices have limited bandwidth and small screens, systems are being developed that display their information on remote devices such as TVs, with the mobile device acting as a remote controller. Designing and developing interfaces for each kind of remote display device is costly. In this paper, a real-time remote display system based on a wireless mobile environment is proposed for continuous monitoring of status data identified by unique 'Mote IDs'. Remote data are collected and monitored through sensor network devices such as ZigbeX, applying status-aware real-time remote display to ubiquitous computing environment data, and a real-time remote display application is implemented on a wireless PDA. The proposed system consists of a PDA for remote display and control, embedded mote application programming for data collection and radio frequency communication, server modules that analyze and process the collected data, and virtual prototyping for monitoring and control by virtual machines. The implementation results indicate that the system not only provides good mobility and usable access to information from a human-oriented viewpoint but also transmits data efficiently.

Study on Improvement of Weil Pairing IBE for Secret Document Distribution (기밀문서유통을 위한 Weil Pairing IBE 개선 연구)

  • Choi, Cheong-Hyeon
    • Journal of Internet Computing and Services / v.13 no.2 / pp.59-71 / 2012
  • PKI-based public key schemes are outstanding in terms of authenticity and privacy, but applying them brings a heavy burden of certificate and key management, and their high encryption complexity makes them difficult to apply to the limited computing devices of a WSN. Bilinear pairing, which emerged from the original IBE to eliminate certificates, is a significant future cryptosystem: it is based on the DDH (Decisional Diffie-Hellman) assumption, efficient in terms of computation, and secure enough for authentication. The practical EC Weil pairing has a simple encryption algorithm and satisfies IND/NM security constraints against CCA. A Random Oracle Model based IBE PKG suits the structure of our target system, which operates with a single secret file server. Our work proposes a modification of the Weil pairing appropriate to a closed network for secret file distribution[2]. First, we propose an improved scheme that computes both encryption and message/user authentication as fast as the O(DES) level, satisfying privacy, authenticity, and integrity. Second, by using an identity as the public key as effectively as PKI, our improved IBE variant reduces the risk of key exposure.
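
For orientation, the toy sketch below shows the algebraic structure of Boneh-Franklin-style IBE encryption, the family that Weil-pairing IBE belongs to. It simulates the pairing arithmetically by representing curve points as integer exponents, so it illustrates only the algebra and is NOT secure; all parameters and names are our assumptions, not the paper's improved scheme.

```python
import hashlib

p = 2**127 - 1   # modulus of the toy target group (illustrative assumption)
gt = 3           # generator of the toy target group
s = 123456789    # PKG master secret; conceptually P_pub = s*P

def e(a: int, b: int) -> int:
    """Toy bilinear map: e(a*P, b*P) = gt^(a*b). Real IBE uses the Weil pairing."""
    return pow(gt, a * b, p)

def H1(identity: str) -> int:
    """Map an identity string to a 'point' (its exponent), like hashing to the curve."""
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big")

def H2(g: int, n: int) -> bytes:
    """Derive an n-byte mask from a target-group element."""
    return hashlib.shake_256(str(g).encode()).digest(n)

def encrypt(identity: str, msg: bytes, r: int):
    g_id = pow(e(H1(identity), s), r, p)           # e(Q_id, P_pub)^r
    return r, bytes(m ^ k for m, k in zip(msg, H2(g_id, len(msg))))

def decrypt(identity: str, ct):
    r, c = ct                                      # r stands in for U = r*P
    g_id = e(s * H1(identity), r)                  # e(d_id, U), d_id = s*Q_id
    return bytes(m ^ k for m, k in zip(c, H2(g_id, len(c))))

ct = encrypt("alice@bank.example", b"secret document key", r=987654321)
assert decrypt("alice@bank.example", ct) == b"secret document key"
```

The identity string itself acts as the public key, which is the property the abstract credits with removing PKI certificate management.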

A Customization Method for Mobile App.'s Performance Improvement (모바일 앱의 성능향상을 위한 커스터마이제이션 방안)

  • Cho, Eun-Sook;Kim, Chul-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.208-213 / 2016
  • In the fourth industrial revolution, customization has become a topic of conversation in various domains. Industry 4.0 applies cyber-physical systems (CPS), the Internet of Things (IoT), and cloud computing to manufacturing businesses, and one of its main phrases is mass customization: optimized products or services are developed and provided through customization, which enhances a product's competitiveness and improves satisfaction. In particular, as IoT technology spreads, customization is essential for smooth service connections between various devices and things. Customized services in mobile applications are assembled and operated on various mobile devices in the mobile environment. Therefore, this paper proposes a method for improving customized cloud server-based mobile architectures, processes, and metrics, and for measuring the performance improvement of the customized architectures operating on various mobile devices based on the Android or iOS platforms. Applying the proposed customized architectures, processes, and metrics on various devices reduced the total time required for customization by half.

A Study on the Introduction of Green IT Based on the Cases of Implementing Green Internet Data Center (그린 데이터센터 구축 사례에 기반한 그린 IT 도입 방안에 관한 연구)

  • Song, Gil-Heon;Shin, Taek-Soo
    • Information Systems Review / v.11 no.2 / pp.147-167 / 2009
  • As the global climate changes, interest in the environmental crisis is increasing, and a number of international agreements and regulations against this crisis are being established. Global information technology (IT) corporations are building their own pro-environmental Green IT strategies to cope with these regulatory measures. Green IT broadly refers to pro-environmental technologies designed to replace hazardous materials, maximize energy efficiency, and find alternative energies. At the current stage of the IT industry's development, Green IT specifically refers to technologies that deal with server heat generation and energy reduction in data centers. This study defines the concept of Green IT and reviews its origin and necessity. It then examines the issues surrounding the Green IT industry in Korea and other countries and compares the Green IT strategies developed in each country. A review of recent developments in the IT and data center markets shows that overall Green IT strategies focus on the establishment of Green Internet Data Centers. Therefore, this study analyzes cases in which domestic and foreign corporations introduced Green Data Centers, in order to examine the protocol and legal requirements for building Green IT, the aspects of environmental evaluation and design, and specific strategies for launching Green IT and its future assignments. The conclusions of this study are as follows. First, to introduce a Green Data Center as a strategy for building Green IT, the government and corporations should cooperate with each other; partial introduction at the initial stage is desirable because, through that process, mutual trust between the two parties can be built more smoothly. Second, CEOs' determination to build Green IT and continue its operation is indispensable. CEOs must clearly understand why Green IT needs to be built and how it should be constructed. Those who initiate the construction of a Green Data Center need to know the definition and necessity of Green IT, understand its implicit meanings, be aware of the future-oriented values of Green Data Centers, and readjust their corporate business activities in a pro-environmental direction. Finally, realizing Green IT requires not only the CEOs' pro-environmental activities but also a change of mind on the part of all corporate employees. It should be remembered that pro-environmental Green IT starts with minor activities.

LxBSM: Loadable Kernel Module for the Creation of C2 Level Audit Data based on Linux (LxBSM: C2 수준의 감사 자료 생성을 위한 리눅스 기반 동적 커널 모듈)

  • 전상훈;최재영;김세환;심원태
    • Journal of KIISE:Computing Practices and Letters / v.10 no.2 / pp.146-155 / 2004
  • Currently, most commercial operating systems contain high-level audit features to increase their security level. Linux does not fall behind other commercial operating systems in performance and stability, but it lacks a good audit feature. To be used as a server operating system, Linux must support security features above the C2 level of the TCSEC, which requires a kernel-level audit facility that provides system-call auditing and audit events. In this paper, we present LxBSM, a kernel module that provides kernel-level audit features. The audit record format of LxBSM is compatible with that of Sunshield BSM. LxBSM is implemented as a loadable kernel module, which enhances its usability. It provides rich audit records, including user-level audit events such as login/logout, and supports both pipe and file interfaces to improve connectivity with intrusion detection systems (IDS). The performance of LxBSM was compared with that of a Linux kernel without audit features. Response time increased for system calls that generate audit data, such as fork, execve, open, and close; no other performance degradation was observed.

An Efficient Management of Network Traffic using Framework-based Performance Management Tool (프레임워크 기반 성능관리 도구를 이용한 효율적인 네트워크 트래픽 관리)

  • Choi Seong-Man;Tae Gyu-Yeol;Yoo Cheol-Jung;Chang Ok-Bae
    • Journal of KIISE:Computing Practices and Letters / v.11 no.3 / pp.224-234 / 2005
  • As network technology develops, the number of Internet users and the amount of usage are increasing explosively. Campus network traffic is also increasing as university networks, following this trend, add more nodes and various networking services, yet the quality of service for users has degraded. Accordingly, core problems have appeared that cause trouble for network management, network design and expansion, and cost policy. Coping with these problems effectively requires many technicians, tools, and a large budget for analysis, but small and mid-sized colleges cannot afford such expenditure on professional consulting. Reducing cost and investment while creating an optimized environment requires analyses of tool replacement, changes to the network structure, and performance analysis for network capacity planning. For this reason, this paper uses framework-based performance management tools for all steps related to the subject of network management analysis. As the major research method, current data in detailed categories are collected, processed, and analyzed to provide solutions to the problems. As a result, we could manage the network, servers, and applications more systematically and react efficiently to the errors and performance degradation that affect networking tasks. Moreover, with scientific and organized analyses, overall efficiency is improved by optimizing the cost of managing the operation of the entire system.