• Title/Summary/Keyword: Internet File System


Design and Implementation of Database Broker system for Integrated Data Environment of Virtual Enterprises (가상 기업의 통합 데이터 환경을 위한 데이터베이스 브로커 시스템의 설계 및 구현)

  • Yun, Seon-Hui;Jeong, Jin-Uk
    • The Transactions of the Korea Information Processing Society / v.6 no.2 / pp.425-438 / 1999
  • In recent days, network computing technologies have developed rapidly, and the use of Internet applications in and between enterprises, such as intranets and extranets, has increased enormously. Business in the future will therefore be executed by virtual enterprises. A virtual enterprise, which is based on information sharing between enterprises, is composed of the work processes related to information exchange between the participating virtual enterprises, the team members who represent the organizations taking part in the actual business of the virtual enterprise, and the environment provided by supporting CALS (Continuous Acquisition and Life-cycle Support, or Commerce At Light Speed). The supporting system for the IDE (Integrated Data Environment) of a CALS implementation, which is provided as the environment of virtual enterprises, has to ensure the autonomy of local data and provide transparent access over the network to the heterogeneous databases of the enterprises, giving users a single global view of the data. This paper introduces the design and implementation of a database broker system through which users of the participating enterprises can access data transparently in the integrated data environment supporting virtual enterprises. The system uses Java/CORBA technology in a Web environment and the Object Query Language (OQL) to process queries against relational database systems, object-oriented database systems, and file information. (An illustrative client sketch follows this entry.)

  • PDF
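
For readers who want a concrete picture of the broker pattern this first paper describes, the following is a minimal, hypothetical Java/CORBA client sketch. It assumes a Java SE 8-era ORB (the org.omg.CORBA API has since been removed from the JDK); the QueryBroker interface and its executeOQL operation are invented for illustration, as the paper's actual IDL is not shown here.

import org.omg.CORBA.ORB;

public class BrokerClient {
    public static void main(String[] args) {
        // Initialize the ORB and locate the broker from a stringified IOR
        // passed on the command line.
        ORB orb = ORB.init(args, null);
        org.omg.CORBA.Object obj = orb.string_to_object(args[0]);

        // In a real system, IDL-generated stubs would narrow the reference:
        //   QueryBroker broker = QueryBrokerHelper.narrow(obj);
        // and an OQL query would be submitted through it, e.g.:
        //   String result = broker.executeOQL(
        //       "select p.name from Parts p where p.supplier = \"ACME\"");
        System.out.println("Broker reference obtained: " + (obj != null));
        orb.shutdown(false);
    }
}

The point of the pattern is that the client sees one OQL entry point while the broker fans the query out to relational, object-oriented, and file-based back ends.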

A Customized Tourism System Using Log Data on Hadoop (로그 데이터를 이용한 하둡기반 맞춤형 관광시스템)

  • Ya, Ding;Kim, Kang-Chul
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.13 no.2 / pp.397-404 / 2018
  • As Internet usage increases, a great deal of user behavior is recorded in log files, and research and industrial applications that use these log files have recently become active. This paper uses Hadoop, an open-source distributed computing platform, and proposes a customized tourism system that analyzes user behavior in log files. The proposed system uses Google Analytics to obtain users' log files from the websites they visit and stores search terms extracted by MapReduce in HDFS. It also gathers features of the sightseeing places or cities that travelers want to tour from travel guide websites with the Octopus application. It suggests customized cities by matching the search terms against the city features, and an NBP (next bit permutation) algorithm that rearranges the search terms and city features is used to increase the probability of matching. Customized cities suggested by analyzing the log files of 39 users demonstrate the performance of the proposed system.
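
As a rough illustration of the MapReduce step described above, the following sketch tallies search terms from log lines. It is an assumption-laden example, not the authors' code: the class names, the tab-separated log layout, and the position of the search-term field are all invented for illustration.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SearchTermCount {
    public static class TermMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            // Assumption: the search term is the third tab-separated field of a log line.
            String[] fields = value.toString().split("\t");
            if (fields.length > 2 && !fields[2].isEmpty()) {
                ctx.write(new Text(fields[2].toLowerCase()), ONE);
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text term, Iterable<IntWritable> counts, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) sum += c.get();
            ctx.write(term, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "search-term-count");
        job.setJarByClass(SearchTermCount.class);
        job.setMapperClass(TermMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // raw logs in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // term counts out
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}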

A Study on Standardization of Copyright Collective Management for Digital Contents (디지털콘덴츠 집중관리를 위한 표준화에 관한 연구)

  • 조윤희;황도열
    • Journal of the Korean Society for Information Management / v.20 no.1 / pp.301-320 / 2003
  • The rapidly increasing use of the Internet, the advancement of communication networks, the explosive growth of digital contents from personal home pages to professional information services, the emerging file exchange services, and the development of hacking techniques: these are some of the trends contributing to the spread of illegal reproduction and distribution of digital contents, threatening the exclusive copyrights of creative works that should be legally protected. Accordingly, there is an urgent need for a digital copyright management system designed to provide centralized management while serving as a bridge between copyright owners and users for smooth trading of the rights to digital contents, reliable billing, security measures, and monitoring of illegal use. Therefore, in this study, I examined the requirements of laws and systems for the introduction of a centralized management system to support the smooth distribution of digital contents, and surveyed the current status of domestic and international centralized copyright management systems. Furthermore, I tried to provide basic materials for the standardization of digital contents copyright management information through an examination of the essential elements of centralized digital contents management, such as the system for unique identification, the standardization of data elements, and digital rights management (DRM).

The Study on the Electronic Business System using P2P (P2P를 이용한 전자상거래 시스템 개발에 관한 연구)

  • Song, Eun-Jee
    • Journal of Digital Contents Society / v.8 no.3 / pp.403-410 / 2007
  • The P2P (Peer to Peer) business, which has swept across the world, became possible as the user environment improved through the introduction of high-performance PCs and high-speed Internet access services. P2P is a model in which anyone using a computer can be both a provider and a consumer, with personal computers searching for and connecting to one another directly, outside the server-client concept. In other words, it is a system built on techniques that let the PC of a person who has information and the PC of a person seeking that information connect and share data. Recently, various P2P systems are under development that can be applied not only to file sharing among individuals but also to electronic commerce among companies. In this paper, we propose a more effective electronic business system that makes use of P2P. (A minimal peer sketch follows this entry.)

  • PDF
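
To make the provider-and-consumer duality concrete, here is a minimal, hypothetical Java peer that can either serve a shared file or fetch one from another peer over a plain socket. The port number and command-line interface are assumptions for the sketch; a real P2P commerce system would add discovery, security, and payment layers on top.

import java.io.*;
import java.net.*;

public class SimplePeer {
    static final int PORT = 9090; // illustrative port choice

    // Provider role: answer each incoming connection with the bytes of the shared file.
    static void serve(File shared) throws IOException {
        try (ServerSocket server = new ServerSocket(PORT)) {
            while (true) {
                try (Socket s = server.accept();
                     InputStream in = new FileInputStream(shared)) {
                    in.transferTo(s.getOutputStream());
                }
            }
        }
    }

    // Consumer role: connect to another peer and save whatever it sends.
    static void fetch(String host, File dest) throws IOException {
        try (Socket s = new Socket(host, PORT);
             OutputStream out = new FileOutputStream(dest)) {
            s.getInputStream().transferTo(out);
        }
    }

    public static void main(String[] args) throws IOException {
        // Usage: java SimplePeer serve <file>  |  java SimplePeer fetch <host> <file>
        if (args[0].equals("serve")) serve(new File(args[1]));
        else fetch(args[1], new File(args[2]));
    }
}

The same process can run both roles, which is exactly the departure from the server-client split that the abstract describes.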

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer-system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating continually after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; further, the strict schemas of relational databases cannot expand nodes by distributing the stored data to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. The data models of NoSQL databases are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a schema-free structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL, measuring log insert and query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log data insert performance evaluation for various chunk sizes.
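
As a small illustration of the schema-free storage this paper relies on, the following hedged sketch inserts one unstructured log event with the official MongoDB Java (sync) driver. The connection string, database and collection names, and field layout are assumptions, not details from the paper.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.time.Instant;

public class LogInserter {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> logs =
                client.getDatabase("bank").getCollection("logs");

            // Schema-free insert: each log event can carry different fields,
            // which is what makes a document store convenient for unstructured logs.
            logs.insertOne(new Document("ts", Instant.now().toString())
                    .append("type", "transfer")
                    .append("raw", "client=1234 amount=500 status=OK"));
        }
    }
}

A relational table would force every event into one fixed schema up front; here a differently shaped event can be inserted into the same collection without any migration.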

A Study on Intelligent Self-Recovery Technologies for Cyber Assets to Actively Respond to Cyberattacks (사이버 공격에 능동대응하기 위한 사이버 자산의 지능형 자가복구기술 연구)

  • Se-ho Choi;Hang-sup Lim;Jung-young Choi;Oh-jin Kwon;Dong-kyoo Shin
    • Journal of Internet Computing and Services / v.24 no.6 / pp.137-144 / 2023
  • Cyberattack techniques are evolving to an unpredictable degree, and an attack is a matter of 'at any time' rather than 'someday'. Infrastructure that is becoming hyper-connected and global through cloud computing and the Internet of Things is an environment where cyberattacks can be more damaging than ever, and cyberattacks are still ongoing. Even when damage occurs due to external influences such as cyberattacks or natural disasters, intelligent self-recovery must evolve from a cyber-resilience perspective to minimize the downtime of cyber assets (OS, WEB, WAS, DB). In this paper, we propose an intelligent self-recovery technology that ensures sustainable cyber resilience when cyber assets fail to function properly because of a cyberattack. The original state and update history of cyber assets are managed in real time using timeslot design and snapshot backup technology. The system must automatically detect damage in conjunction with a commercial file integrity monitoring program and self-recover to an optimal state, minimizing the downtime of cyber assets by intelligently analyzing the correlation between backup data and damaged files. In the future, we plan to research a pilot system that applies the distinctive functions of this self-recovery technology and an operating model that can learn and analyze self-recovery strategies appropriate to the damaged state of cyber assets.
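
The following is a minimal sketch of the detect-then-restore idea, assuming a SHA-256 baseline recorded at snapshot time; the paths, the baseline format, and the single-file scope are invented for illustration and merely stand in for the paper's timeslot/snapshot machinery and commercial integrity monitor.

import java.nio.file.*;
import java.security.MessageDigest;
import java.util.HexFormat;

public class SelfRecover {
    static String sha256(Path p) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(p));
        return HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        Path live = Path.of(args[0]);      // monitored cyber-asset file
        Path snapshot = Path.of(args[1]);  // last-known-good snapshot copy
        String baseline = args[2];         // hash recorded when the snapshot was taken

        if (!sha256(live).equals(baseline)) {
            // Damage detected: restore from the snapshot to minimize downtime.
            Files.copy(snapshot, live, StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Integrity mismatch: restored " + live);
        } else {
            System.out.println("Integrity OK: " + live);
        }
    }
}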

Translating Java Bytecode to SPARC Code using Retargetable Code Generating Techniques (재목적 코드 생성 기법을 이용한 자바 Bytecode에서 SPARC 코드로의 번역)

  • Oh, Se-Man;Jung, Chan-Sung
    • Journal of KIISE: Computing Practices and Letters / v.6 no.3 / pp.356-363 / 2000
  • The Java programming language is designed to run effectively in Internet and distributed network environments. However, because Java programs are executed by an interpreter on each platform, they suffer a performance penalty; to execute Java programs efficiently, a code generation system that transforms Bytecode into target machine code, here SPARC code, must be developed. In this paper, we implement a code generation system that translates Bytecode into SPARC code using retargetable code generating techniques. (A simplified expander sketch follows this entry.) For the code expander, we wrote a Bytecode table describing the rules for generating SPARC code from Bytecode, and implemented an information extractor that transforms Bytecode into a suitable form while expanding source code from the class file. The information extractor determines the constant pool entry of each Bytecode instruction operand, and the code expander then translates the Bytecode into SPARC code according to the Bytecode table. Moreover, the retargetable code generation system can be systematically reconfigured to generate code for a variety of distinct target computers.

  • PDF
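
To suggest what a table-driven code expander looks like, here is a heavily simplified, hypothetical sketch mapping three JVM arithmetic opcodes to SPARC instruction templates. The register choices and the tiny table are assumptions; the paper's real Bytecode table covers the whole instruction set and consults the constant pool information extracted from the class file.

import java.util.Map;

public class CodeExpander {
    // Illustrative opcode-to-template table: iadd, isub, imul only.
    static final Map<Integer, String> TABLE = Map.of(
        0x60, "add  %l0, %l1, %l0",   // iadd: add two operand-stack slots
        0x64, "sub  %l0, %l1, %l0",   // isub
        0x68, "smul %l0, %l1, %l0"    // imul
    );

    public static void expand(byte[] bytecode) {
        for (byte b : bytecode) {
            String sparc = TABLE.get(b & 0xFF);
            System.out.println(sparc != null ? sparc
                    : "! unmapped opcode " + (b & 0xFF)); // '!' is a SPARC asm comment
        }
    }

    public static void main(String[] args) {
        expand(new byte[] {0x60, 0x64, 0x68});
    }
}

Retargeting then amounts to swapping in a different instruction-template table for a different target machine, which is the reconfigurability the abstract claims.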

A Lossless Vector Data Compression Using the Hybrid Approach of BytePacking and Lempel-Ziv in Embedded DBMS (임베디드 DBMS에서 바이트패킹과 Lempel-Ziv 방법을 혼합한 무손실 벡터 데이터 압축 기법)

  • Moon, Gyeong-Gi;Joo, Yong-Jin;Park, Soo-Hong
    • Spatial Information Research / v.19 no.1 / pp.107-116 / 2011
  • With the development of the wireless Internet environment, location-based services built on spatial data, such as real-time traffic information and CNS (Car Navigation System) route guidance to a destination for mobile users, have increased. However, current applications adopting file-based systems are limited in managing and storing the huge amount of spatial data. To overcome this limitation, research into managing large amounts of spatial data on an embedded database system is in strong demand. For this reason, this study suggests a lossless compression technique, applicable in a DBMS, that hybridizes BytePacking and Lempel-Ziv compression so as to store massive spatial data efficiently. We applied the proposed technique to the actual Seoul and Incheon metropolitan areas and compared the existing methods with the suggested one on the same data by analyzing the query processing time up to reconstruction. The comparison leads to the conclusion that the suggested technique performs far better than previous techniques on spatial data demanding high positional accuracy.
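
A minimal sketch of the hybrid idea, under stated assumptions: coordinate deltas are first byte-packed (small deltas take one byte instead of four), and the packed stream is then fed to an LZ-style compressor, with java.util.zip.Deflater standing in for the paper's Lempel-Ziv stage. The escape-marker encoding is invented for illustration.

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class HybridCompress {
    // Delta-encode: 1 byte for deltas in [-127,127], else a marker plus 4 bytes.
    static byte[] pack(int[] coords) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int prev = 0;
        for (int c : coords) {
            int d = c - prev;
            if (d >= -127 && d <= 127) {
                out.write(d);
            } else {
                out.write(-128); // escape marker announcing a full 4-byte delta
                out.write(d >>> 24); out.write(d >>> 16); out.write(d >>> 8); out.write(d);
            }
            prev = c;
        }
        return out.toByteArray();
    }

    // LZ stage: Deflater used as a stand-in Lempel-Ziv compressor.
    static byte[] lz(byte[] packed) {
        Deflater deflater = new Deflater();
        deflater.setInput(packed);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) out.write(buf, 0, deflater.deflate(buf));
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        int[] xs = {131000, 131002, 131005, 131005, 131010, 200000};
        byte[] compressed = lz(pack(xs));
        System.out.println(xs.length * 4 + " raw bytes -> " + compressed.length);
    }
}

Both stages are exactly invertible, which is what makes the scheme lossless and therefore usable where positional accuracy matters.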

A Design of Smart Banking System using Digital Signature based on Biometric Authentication (바이오인증 기반의 전자서명을 이용한 스마트 뱅킹 시스템 설계)

  • Kim, Jae-Woo;Park, Jeong-Hyo;Jun, Moon-Seog
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.9 / pp.6282-6289 / 2015
  • Today, cases in which certificate information is leaked are increasing, and accordingly electronic financial fraud is prevalent. Because certificates and private keys are file-based media that are easily accessed and duplicated, they are vulnerable to information-leaking crimes from cyber-attacks using malignant codes such as pharming, phishing, and smishing. The use of security tokens and storage tokens has therefore been encouraged, as they are much safer media, but actual users are few, for reasons such as the risk of loss and high costs. To solve the above problems and complement those shortcomings, this thesis proposes a system in which the digital signature for Internet banking is produced with a simple biometric authentication process. In conclusion, the newly proposed system showed better capability in handling financial transactions in terms of safety and convenience.
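
As a rough sketch of gating a digital signature on biometric authentication, the following Java example signs a transaction with ECDSA only after a (here simulated) biometric match. The algorithm choice and the boolean stand-in for the matcher are assumptions; the paper's concrete protocol is not reproduced here.

import java.nio.charset.StandardCharsets;
import java.security.*;

public class BioSign {
    public static void main(String[] args) throws Exception {
        boolean biometricVerified = true; // stand-in for the bio-authentication step

        // Key pair that, in the proposed design, would be unlocked by the biometric
        // match rather than sitting as an easily copied file on disk.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
        gen.initialize(256);
        KeyPair pair = gen.generateKeyPair();

        byte[] tx = "transfer:12345:KRW:50000".getBytes(StandardCharsets.UTF_8);

        if (biometricVerified) {
            Signature signer = Signature.getInstance("SHA256withECDSA");
            signer.initSign(pair.getPrivate());
            signer.update(tx);
            byte[] sig = signer.sign();

            // The bank verifies with the public key.
            Signature verifier = Signature.getInstance("SHA256withECDSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(tx);
            System.out.println("signature valid: " + verifier.verify(sig));
        }
    }
}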

Design and Implementation of an Automatic Update System for Website Maintenance (웹사이트 유지보수를 위한 자동 업데이트 시스템의 설계 및 구현)

  • Hang, DaeHyeon;Yoo, JaeSoo
    • The Journal of the Korea Contents Association / v.21 no.5 / pp.129-138 / 2021
  • Today we obtain a great deal of information and carry out various activities on websites through the Internet. These websites are maintained by individuals or by website specialists, and the basic method is to change the files that make up the running website. Replacing every file in this process takes a long time and touches files that do not need to be changed, so efficiency drops greatly. If only the files that need to be changed are replaced, much effort is required, since a person must manually search each path, check the files, and change them one by one. Automatic deployment systems were developed to solve this problem, but they require additional resources and learning, resulting in additional cost, time, and labor. Therefore, in this paper, we propose an automatic update system that minimizes resource consumption by using the resources and technologies of the existing website, without requiring new skills to be learned. This aims to improve reliability and reduce the time required compared with manual work.
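
The following is a minimal sketch of the core idea of updating only changed files, assuming a SHA-256 comparison between the newly built release tree and the deployed tree; the directory layout and hash choice are illustrative, not the paper's implementation.

import java.nio.file.*;
import java.security.MessageDigest;
import java.util.Arrays;

public class SelectiveUpdate {
    static byte[] hash(Path p) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(p));
    }

    public static void main(String[] args) throws Exception {
        Path release = Path.of(args[0]);  // newly built site files
        Path deployed = Path.of(args[1]); // live site root

        try (var paths = Files.walk(release)) {
            for (Path src : paths.filter(Files::isRegularFile).toList()) {
                Path dst = deployed.resolve(release.relativize(src));
                // Copy only files that are new or whose contents actually differ.
                if (!Files.exists(dst) || !Arrays.equals(hash(src), hash(dst))) {
                    Files.createDirectories(dst.getParent());
                    Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
                    System.out.println("updated " + dst);
                }
            }
        }
    }
}

Skipping unchanged files is what saves the wholesale-replacement time the abstract complains about, while removing the error-prone manual path-by-path search.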