• Title/Summary/Keyword: automatic data recovery (데이터 자동복구)

Search Results: 22

A Study on Intelligent Self-Recovery Technologies for Cyber Assets to Actively Respond to Cyberattacks (사이버 공격에 능동대응하기 위한 사이버 자산의 지능형 자가복구기술 연구)

  • Se-ho Choi;Hang-sup Lim;Jung-young Choi;Oh-jin Kwon;Dong-kyoo Shin
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.137-144
    • /
    • 2023
  • Cyberattack technology is evolving to an unpredictable degree, and an attack is something that can happen 'at any time' rather than 'someday'. Infrastructure that is becoming hyper-connected and global through cloud computing and the Internet of Things is an environment where cyberattacks can be more damaging than ever, and such attacks are still ongoing. Even when damage occurs due to external influences such as cyberattacks or natural disasters, intelligent self-recovery must evolve from a cyber-resilience perspective to minimize the downtime of cyber assets (OS, WEB, WAS, DB). In this paper, we propose an intelligent self-recovery technology that ensures sustainable cyber resilience when cyber assets fail to function properly because of a cyberattack. The original state and update history of cyber assets are managed in real time using timeslot design and snapshot backup technology. The technology must automatically detect damage situations in conjunction with a commercial file integrity monitoring program and minimize the downtime of cyber assets by intelligently analyzing the correlation between backup data and damaged files so that the assets self-recover to an optimal state. In the future, we plan to research a pilot system that applies the unique functions of this self-recovery technology and an operating model that can learn and analyze self-recovery strategies appropriate for cyber assets in damaged states.
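
To make the timeslot/snapshot idea in this abstract concrete, here is a minimal Python sketch of snapshot backup plus integrity-driven restore. The snapshot directory, timeslot length, and SHA-256 check are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: periodic timeslot snapshots of a cyber asset file, plus a
# self-recovery step that rolls back to the newest snapshot whose recorded
# digest still matches. Paths, timeslot length, and the hash check are assumed.
import hashlib, shutil, time
from pathlib import Path

SNAPSHOT_ROOT = Path("/var/snapshots")   # hypothetical snapshot store
TIMESLOT_SECONDS = 600                   # hypothetical timeslot length

def file_digest(path: Path) -> str:
    """SHA-256 digest used as a stand-in for a file integrity monitor."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def take_snapshot(asset: Path) -> Path:
    """Copy the asset into a timeslot-named directory and record its digest."""
    slot = int(time.time()) // TIMESLOT_SECONDS
    dest = SNAPSHOT_ROOT / str(slot) / asset.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(asset, dest)
    dest.with_suffix(dest.suffix + ".sha256").write_text(file_digest(asset))
    return dest

def self_recover(asset: Path) -> bool:
    """Restore the newest snapshot whose stored digest still matches its copy."""
    slots = sorted(SNAPSHOT_ROOT.iterdir(), key=lambda p: int(p.name), reverse=True)
    for slot_dir in slots:
        candidate = slot_dir / asset.name
        digest_file = candidate.with_suffix(candidate.suffix + ".sha256")
        if candidate.exists() and digest_file.exists() \
                and file_digest(candidate) == digest_file.read_text():
            shutil.copy2(candidate, asset)   # roll the damaged file back
            return True
    return False
```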

A Study on MPLS OAM Functions for Fast LSP Restoration on MPLS Network (MPLS 망에서의 신속한 LSP 복구를 위한 MPLS OAM 기능 연구)

  • 신해준;임은혁;장재준;김영탁
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.7C
    • /
    • pp.677-684
    • /
    • 2002
  • Today's Internet does not have an efficient traffic engineering mechanism to support QoS for explosively increasing traffic such as various multimedia traffic. This functional shortage markedly degrades the quality of service and makes it difficult to provide multimedia and real-time services. Various technologies are under development to solve these problems. The IETF (Internet Engineering Task Force) developed MPLS (Multi-Protocol Label Switching), which provides good traffic engineering capabilities and is independent of the layer 2 protocol, so MPLS is expected to be used in the Internet backbone network [1][2]. Faults occurring in a high-speed network such as MPLS may cause massive data loss and degrade the quality of service, so a fast network restoration function is an essential requirement. Because MPLS is independent of the layer 2 protocol, the fault detection and reporting mechanism for restoration should also be independent of the layer 2 protocol. In this paper, we present experimental results on MPLS OAM functions for performance monitoring and for fault detection, notification, and localization in an MPLS network, based on the OPNET network simulator.
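
As a rough illustration of the fault-detection side of such OAM functions, the toy Python model below has an egress monitor count periodic connectivity probes on an LSP and declare a defect after several missed intervals. The interval, threshold, and class names are assumptions for illustration only, not values or procedures from the MPLS OAM specifications or the paper.

```python
# Toy model of OAM-style connectivity verification on an LSP: the ingress emits
# a periodic probe; the egress declares a defect when several consecutive
# intervals pass without one, and tracks a loss ratio for performance monitoring.
import time

PROBE_INTERVAL = 1.0   # seconds between probes (assumed)
LOSS_THRESHOLD = 3     # missed intervals before declaring a defect (assumed)

class EgressMonitor:
    def __init__(self):
        self.last_seq = None
        self.last_seen = time.monotonic()
        self.received = 0
        self.expected = 0

    def on_probe(self, seq: int) -> None:
        """Record an arriving probe; sequence gaps count as lost probes."""
        self.expected += 1 if self.last_seq is None else seq - self.last_seq
        self.received += 1
        self.last_seq = seq
        self.last_seen = time.monotonic()

    def defect(self) -> bool:
        """True when no probe has arrived for LOSS_THRESHOLD intervals."""
        return time.monotonic() - self.last_seen > LOSS_THRESHOLD * PROBE_INTERVAL

    def loss_ratio(self) -> float:
        return 1.0 - self.received / self.expected if self.expected else 0.0
```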

Learning Text Chunking Using Maximum Entropy Models (최대 엔트로피 모델을 이용한 텍스트 단위화 학습)

  • Park, Seong-Bae;Zhang, Byoung-Tak
    • Annual Conference on Human and Language Technology
    • /
    • 2001.10d
    • /
    • pp.130-137
    • /
    • 2001
  • The maximum entropy model has been applied successfully to many natural language learning problems, but it has two major drawbacks. The first is that it requires a great deal of prior knowledge about the language in question, and the second is that its computational cost is very high. In this paper, we present a new method that resolves these problems when applying maximum entropy models to text chunking. As prior knowledge, we use rules generated automatically from a decision tree that is easily built from a simple language model; the maximum entropy model in the proposed method can therefore be viewed as a way of reinforcing the decision tree. To reduce the computational complexity, a form of active learning is used when training the maximum entropy model: by using only a portion of the training data rather than all of it, the computational cost can be reduced greatly. Experimental results show that the proposed method halves the number of errors made by the decision tree. Because most natural language data are highly imbalanced, the learned model can be strengthened by boosting. After boosting, the proposed method outperforms a maximum entropy model trained with features selected by experts, and performs comparably to the best machine learning algorithms reported so far. Since text chunking is generally a preliminary step to full parsing and errors made at this stage cannot be recovered in later stages, this performance is very meaningful for text chunking.
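
The sketch below illustrates the combination described in this abstract with off-the-shelf scikit-learn components: a decision tree supplies automatically derived features (its leaf indices), a maximum-entropy classifier (multinomial logistic regression) is trained on top, and uncertainty sampling stands in for the paper's active learning. Feature encoding, batch sizes, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Decision-tree-derived features + maximum entropy model + active learning.
# X_pool: numeric token feature matrix (e.g., word/POS ids); y_pool: chunk tags.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

def train_chunker(X_pool, y_pool, n_rounds=5, batch=200):
    # 1) Decision tree learned from a simple feature representation.
    tree = DecisionTreeClassifier(max_depth=8).fit(X_pool, y_pool)

    # 2) Re-encode each token by the tree leaf it falls into, so the
    #    maximum-entropy model effectively refines the tree's rules.
    leaves = tree.apply(X_pool).reshape(-1, 1)
    enc = OneHotEncoder(handle_unknown="ignore").fit(leaves)
    F_pool = enc.transform(leaves)

    # 3) Active learning: start from a small labelled seed and repeatedly add
    #    the examples the current model is least certain about.
    labelled = list(range(batch))
    maxent = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        maxent.fit(F_pool[labelled], y_pool[labelled])
        confidence = maxent.predict_proba(F_pool).max(axis=1)
        uncertain = np.argsort(confidence)            # least confident first
        new = [i for i in uncertain if i not in set(labelled)][:batch]
        labelled.extend(new)
    return tree, enc, maxent
```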


A Study of the extraction algorithm of the disaster sign data from web (재난 전조 정보 추출 알고리즘 연구)

  • Lee, Changyeol;Kim, Taehwan;Cha, Sangyeul
    • Journal of the Society of Disaster Information
    • /
    • v.7 no.2
    • /
    • pp.140-150
    • /
    • 2011
  • The living environment is changing rapidly, and large-scale disasters are increasing because of global warming. Although disaster-response resources are deployed to disaster sites, preventing disasters is the most effective countermeasure. Disaster sign data are based on Heinrich's law, under which minor warning signs precede major accidents. This paper focuses on the automatic extraction of disaster sign data from the web. We define the automatic extraction processes and the applied information, such as accident nouns, disaster filtering nouns, disaster sign nouns, and rules, and using these processes we implemented a disaster sign data management system. In the future, the applied information must be updated continuously, because it is only the result of extracting and analyzing a limited set of disaster data.
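
The following minimal Python sketch illustrates the kind of noun-list filtering rule described in this abstract: keep sentences that mention both a disaster-sign noun and an accident noun, and drop those containing a filtering noun that indicates the accident has already happened. The word lists and the rule are placeholders, not the authors' dictionaries.

```python
# Sketch of the filtering idea: keep candidate "disaster sign" sentences.
ACCIDENT_NOUNS  = {"collapse", "flood", "fire", "landslide"}        # assumed
SIGN_NOUNS      = {"crack", "leak", "tilt", "subsidence", "smell"}  # assumed
FILTERING_NOUNS = {"casualty", "funeral", "compensation"}           # assumed

def extract_sign_sentences(sentences):
    """Return sentences likely to describe a disaster sign, not a past disaster."""
    results = []
    for s in sentences:
        words = set(s.lower().split())
        if words & SIGN_NOUNS and words & ACCIDENT_NOUNS and not words & FILTERING_NOUNS:
            results.append(s)
    return results

# Example: only the first sentence survives the rule.
print(extract_sign_sentences([
    "Residents report a crack and subsidence near the flood levee.",
    "The funeral for flood casualty victims was held today.",
]))
```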

The Recovery and Analysis of Digital Data in Digital Multifunction Copiers with a Digital Forensics Perspective (디지털포렌식 관점에서의 디지털복합기내 데이터 복구 및 분석)

  • Park, Il-Shin;Kang, Cheul-Hoon;Choi, Sung-Jin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.20 no.6
    • /
    • pp.23-32
    • /
    • 2010
  • With the development of the IT environment, embedded machines are used with increasing frequency in everyday life. A typical example is the digital multifunction copier, which has various functions: it is used as a copier, scanner, fax machine, and file server. We examine whether data saved through use of the scanner function of a multifunction printer still exist, how to extract such data, and how they can be used as evidence.

Design of a Whitening Block Module for Minimizing DC Bias in Wireless Communications (무선 통신에서 DC 바이어스를 최소화하는 화이트닝 블록 설계)

  • Moon, San-Gook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.673-676
    • /
    • 2008
  • In wireless communications such as Bluetooth, the baseband must be able to minimize the DC bias of the data that has passed through the modem interface of either the transmitter or the receiver, for the reliability of the circuit and the integrity of the data. The transmitter scrambles the data so that they are sent randomly to the error-correction block, and the receiver restores the randomly spread data to their original form. In designing the whitening block, it is important to select the prime polynomial used for the filtering. In this paper, we designed an optimal whitening block using the prime polynomial $g(D)=D^7+D^4+1$ for hardware and area efficiency. The proposed whitening block was described and verified in Verilog HDL and is to be synthesized automatically. The synthesized whitening block operated at the 40 MHz nominal clock speed of the target baseband microcontroller.
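
For reference, a bit-level Python sketch of a whitening stream generated by the polynomial $g(D)=D^7+D^4+1$ is shown below; the 7-bit seed value is an assumption for illustration (in Bluetooth the initial state is derived from the clock), and this is a behavioural model, not the paper's Verilog design.

```python
def whiten(bits, seed=0b1111111):
    """XOR a bit sequence with the LFSR stream defined by g(D) = D^7 + D^4 + 1.

    `bits` is an iterable of 0/1 ints; `seed` is a non-zero 7-bit initial state
    (assumed here). Because the stream depends only on the seed, running the
    same function on the output de-whitens it at the receiver.
    """
    state = seed & 0x7F
    out = []
    for b in bits:
        msb = (state >> 6) & 1                     # tap for D^7
        feedback = msb ^ ((state >> 3) & 1)        # XOR with the D^4 tap
        state = ((state << 1) | feedback) & 0x7F   # shift the feedback bit in
        out.append(b ^ msb)                        # whitening = data XOR stream
    return out

payload = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = whiten(payload)
assert whiten(scrambled) == payload                # de-whitening restores the data
```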


Automatic Recovery Network Design for the Efficient Costs (효율적인 비용을 갖는 자동장애극복 네트워크의 설계방안)

  • Song, Myeong-Kyu
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.11
    • /
    • pp.5885-5889
    • /
    • 2013
  • In general, network survivability means that users are unaware of network faults and of the recovery from them. To achieve this, dual (or multiple) routes are provided between each pair of nodes, and it is important that these dual routes have efficient (or minimum) costs. Even if one route has the minimum cost in the fault-free case, the other route of the pair may have a very large cost when a fault occurs. We therefore need dual routes between each pair of nodes with efficient (or minimum) costs. In this paper we present a network design method that finds such dual routes. Although the method is simple and heuristic and may not be useful for some networks, it can be applied easily in various network environments. A sample design demonstrates its usefulness.
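
A simple heuristic in the spirit described above can be sketched as follows: take the minimum-cost route, remove its links, and take the minimum-cost route of what remains, so that a single link fault never breaks both. The use of networkx and the toy topology are illustrative assumptions, not part of the paper.

```python
# Greedy dual-route heuristic: shortest path, then shortest path on the
# residual graph with the first path's links removed.
import networkx as nx

def dual_routes(G: nx.Graph, src, dst):
    """Return two link-disjoint routes and their combined cost.

    Note the known weakness of this greedy approach: removing the first path's
    links can disconnect src and dst, in which case shortest_path raises
    NetworkXNoPath even though a disjoint pair may exist.
    """
    first = nx.shortest_path(G, src, dst, weight="cost")
    H = G.copy()
    H.remove_edges_from(zip(first, first[1:]))      # forbid reuse of links
    second = nx.shortest_path(H, src, dst, weight="cost")
    cost = (nx.path_weight(G, first, weight="cost")
            + nx.path_weight(G, second, weight="cost"))
    return first, second, cost

# Toy 4-node ring: both routes around the ring are found.
G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "B", 1), ("B", "D", 1), ("A", "C", 2), ("C", "D", 2)], weight="cost")
print(dual_routes(G, "A", "D"))
```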

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system continues to operate after it recovers from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
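
The routing step performed by the log collector module might look roughly like the Python sketch below: schema-free records go to MongoDB, records flagged for real-time dashboards go to MySQL. The connection string, database/collection names, field names, and the "realtime" flag are assumptions for illustration, not details from the paper.

```python
# Sketch of the collector's routing step between MongoDB and MySQL.
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")   # assumed address
logs = mongo["bank_logs"]["raw"]                   # assumed db/collection names

def route_log(record: dict, mysql_cursor=None):
    """Store one collected log record according to its analysis requirement."""
    if record.get("realtime") and mysql_cursor is not None:
        mysql_cursor.execute(
            "INSERT INTO realtime_log (ts, branch, message) VALUES (%s, %s, %s)",
            (record["ts"], record["branch"], record["message"]),
        )
    else:
        logs.insert_one(record)   # schema-free insert; fields may vary per record

# Example (requires a running MongoDB instance):
route_log({"ts": "2013-11-02T09:15:00", "branch": "Seoul-01",
           "message": "teller login", "realtime": False})
```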

Block based Smart Carving System for Forgery Analysis and Fragmented File Identification

  • Lee, Hanseong;Lee, Hyung-Woo
    • Journal of Internet Computing and Services
    • /
    • v.21 no.3
    • /
    • pp.93-102
    • /
    • 2020
  • For data obtained through all stages of a digital crime investigation to be admissible as evidence, they must satisfy legal and technical requirements. In this paper, we propose a mechanism, and implement software, that provides digital forensic evidence by automatically recovering files: it scans and inspects the unallocated area of the storage disk block by block, without relying on information provided by the file system. The proposed technique checks and analyzes the raw disk data of the system under analysis in 512-byte block units, based on information about the storage format and file structure of the various file types stored on the disk, without referring to the file system information provided by the operating system. We implemented this file carving process and propose a smart carving mechanism that intelligently restores deleted or damaged files on the storage device. As a result, we provide a block-based smart carving method that efficiently and intelligently identifies fragmented and damaged files in storage for forgery analysis in digital forensic investigations.
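
The core idea of scanning raw 512-byte blocks for file signatures, independently of the file system, can be sketched as below. Only JPEG signatures are shown, headers are assumed to start on block boundaries, and footers are assumed to fit within one block; real smart carving also validates internal structure and handles fragmentation.

```python
# Minimal signature-based carving over raw 512-byte blocks of a disk image.
SIGNATURES = {"jpeg": (b"\xff\xd8\xff", b"\xff\xd9")}   # header, footer
BLOCK = 512

def carve(image_path: str, kind: str = "jpeg"):
    """Return (start, end) byte ranges of candidate files found in the image."""
    header, footer = SIGNATURES[kind]
    candidates, start = [], None
    with open(image_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            if start is None and block.startswith(header):
                start = offset                            # header at a block boundary
            elif start is not None and footer in block:
                end = offset + block.index(footer) + len(footer)
                candidates.append((start, end))           # one recoverable file range
                start = None
            offset += BLOCK
    return candidates
```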

Blurred Image Enhancement Techniques Using Stack-Attention (Stack-Attention을 이용한 흐릿한 영상 강화 기법)

  • Park Chae Rim;Lee Kwang Ill;Cho Seok Je
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.2
    • /
    • pp.83-90
    • /
    • 2023
  • Blurred images are an important factor in lowering image recognition rates in computer vision. Blurring mainly occurs when the camera is unstable or out of focus, or when an object in the scene moves quickly during the exposure time. Blurred images greatly degrade visual quality and weaken visibility, and this phenomenon occurs frequently despite the continuous development of digital camera technology. In this paper, we modify the building module of a deep multi-patch neural network, designed with convolutional neural networks to capture the details of input images, and add attention techniques that focus on the objects in a blurred image in multiple ways to strengthen the image. The method measures and assigns weights at different scales to distinguish degrees of blurring, and restores the image from coarse to fine levels, adjusting global and local regions sequentially. This approach shows excellent results in recovering degraded image quality, extracting efficient object detections and features, and complementing color constancy.
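
As a rough illustration of an attention block of the kind stacked onto a multi-patch deblurring network, the PyTorch sketch below has a small convolutional branch produce a per-pixel weight map that re-weights the feature map. Channel counts, kernel sizes, and the residual arrangement are assumptions; this is not the paper's exact Stack-Attention module.

```python
# Toy spatial-attention block for feature maps of a deblurring network.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.features = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),                      # per-pixel weight in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.features(x)
        weight = self.attn(feat)               # (N, 1, H, W) attention map
        return x + feat * weight               # residual keeps coarse detail

blurred_feat = torch.randn(1, 32, 64, 64)       # dummy feature map
print(SpatialAttention(32)(blurred_feat).shape)  # torch.Size([1, 32, 64, 64])
```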