• Title/Summary/Keyword: Key Recovery System (키복구 시스템)


A Study of SPA Vulnerability on 8-bit Implementation of Ring-LWE Cryptosystem (8 비트 구현 Ring-LWE 암호시스템의 SPA 취약점 연구)

  • Park, Aesun;Won, Yoo-Seung;Han, Dong-Guk
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.3 / pp.439-448 / 2017
  • It is no news that post-quantum cryptography is vulnerable to side-channel analysis. Side-channel attack methods and countermeasures for the code-based McEliece cryptosystem and the lattice-based NTRU cryptosystem have already been investigated; unfortunately, the ring-LWE cryptosystem has not yet been studied sufficiently from the side-channel perspective. In this paper, we propose a chosen-ciphertext simple power analysis (SPA) attack that applies when ring-LWE cryptography runs on 8-bit devices. Our attack can recover the key with only $\lceil \log_2 q \rceil$ traces, where q is a parameter related to the security level; q = 7681 and q = 12289 are used to match the common 128- and 256-bit security levels, respectively. Through experiments on real 8-bit devices, we identify a vulnerability that reveals the secret key during the modular addition of ring-LWE decryption. We also discuss an attack that uses a similarity measure between two vectors to reduce the attack time.
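As a quick check of the trace counts claimed above, a minimal sketch (assuming the $\lceil \log_2 q \rceil$ bound as stated in the abstract):

```python
import math

# Trace count needed by the chosen-ciphertext SPA attack, per the
# abstract's ceil(log2(q)) bound; q values are the ring-LWE moduli
# used for the 128- and 256-bit security levels.
for q, level in [(7681, 128), (12289, 256)]:
    traces = math.ceil(math.log2(q))
    print(f"q={q} ({level}-bit security): {traces} traces")
# q=7681 (128-bit security): 13 traces
# q=12289 (256-bit security): 14 traces
```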

Realistic Multiple Fault Injection System Based on Heterogeneous Fault Sources (이종(異種) 오류원 기반의 현실적인 다중 오류 주입 시스템)

  • Lee, JongHyeok;Han, Dong-Guk
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1247-1254 / 2020
  • With the advent of the smart-home era, devices that provide confidentiality or perform authentication exist in many places in daily life. Accordingly, security against physical attacks is required for encryption and authentication equipment. In particular, fault injection attacks, which artificially inject a fault from the outside to recover a secret key or bypass an authentication process, are among the most threatening attack methods. Fault sources used in such attacks include lasers, electromagnetic pulses, voltage glitches, and clock glitches. Fault injection attacks are classified into single and multiple fault injection attacks according to the number of faults injected. Existing multiple fault injection systems generally use a single fault source; a system configured to inject the same fault source multiple times suffers from an unavoidable physical delay and requires additional equipment. In this paper, we propose a multiple fault injection system that uses heterogeneous fault sources. To show its effectiveness, we present the results of a multiple fault injection attack against Riscure's Piñata board.

Differential Fault Analysis on Symmetric SPN Block Cipher with Bitslice Involution S-box (비트 슬라이스 대합 S-박스에 의한 대칭 SPN 블록 암호에 대한 차분 오류 공격)

  • Kang, HyungChul;Lee, Changhoon
    • KIPS Transactions on Computer and Communication Systems / v.4 no.3 / pp.105-108 / 2015
  • In this paper, we propose a differential fault analysis of the symmetric SPN block cipher with a bitslice involution S-box proposed in 2011. The target block cipher was designed after the AES block cipher and, by using the same structure for encryption and decryption, has advantages in restricted hardware and software environments. The target block cipher must therefore be secure against side-channel attacks. However, to recover its 128-bit secret key, our attack requires only one random byte fault and an exhaustive search of $2^8$. This is the first known cryptanalytic result on the target block cipher.

Evolution and Prospects of Electrical Products for Steel Plants (제철플랜트용 전기제품의 변천과 전망)

  • Korea Electric Association (대한전기협회)
    • JOURNAL OF ELECTRICAL WORLD / s.286 / pp.64-69 / 2000
  • As the mainstay of the rapidly advancing industry of the 20th century, the steel market expanded quickly and played a leading role in developing and adopting new technologies. In particular, over the last quarter century, electrical products such as computers, plant controllers, and drive systems have advanced dramatically alongside progress in semiconductor technology. Demand for steel products is currently growing around China, Asia, and South America, and capital investment is expanding accordingly. In North America, Europe, and Japan, however, there is a gap between production capacity and demand, so growth on the scale of past markets cannot be expected. Under these circumstances, steel producers direct their investments toward rationalization, labor saving, and product quality improvement, and Mitsubishi Electric aims to provide electrical products for steel plants that support "building competitive products and minimizing total investment cost," responding with the four approaches shown in the figure. For higher quality and automation, it applies an ultra-stabilized control system that surpasses conventional quality control, realizes fully no-touch operation through process anomaly detection and automatic recovery, and provides process visualization via intelligent sensors so that operators can make final judgments easily and accurately. For higher efficiency and rational energy use, it pursues high-power-factor power supplies and provides high-efficiency drives and motors. To address global standardization, it offers a multi-vendor environment through open networks and an open HMI (Human Machine Interface) on general-purpose hardware. To realize rapid plant commissioning and a remote maintenance environment, plant simulation tests are conducted to improve factory-shipment quality, and remote monitoring and trouble-analysis support from the research center are made easy. Furthermore, through the recent rapid growth of multimedia, Internet, and intranet technologies, mobile terminals, and image compression, remote centralized monitoring, cooperative field-and-center maintenance, and virtual-reality application systems are becoming reality. These IT (Information Technology) solutions will not only greatly transform future steel-plant systems but will also be a key technology for strengthening business competitiveness. Going forward, Mitsubishi Electric intends to develop and deliver IT solutions for steel plants that meet user needs.


Immersion Testing of Navigation Device Memory for Ship Track Extraction of Sunken Fishing Vessel (침몰 선박 항해장비의 항적추출 가능성 확인을 위한 침수시험)

  • Byung-Gil Lee;Byeong-Chel Choi;Ki-Jung Jo
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2022.06a / pp.214-217 / 2022
  • In maritime digital forensics, analyzing the data and information in a vessel navigation system's binary log data is a very important and difficult process for situational awareness of a maritime accident. In recent years, analysis of a vessel navigation system's trajectory information has become an essential element of maritime accident investigation. We therefore conducted corruption experiments on the various memory devices used in navigation systems. The analysis of these seawater corruption tests provides important information on the valid salvage time of a sunken ship for acquiring useful trajectory information.


Transmission Methods Using RS Codes to Improve Spatial Relationship of Images in Reversible Data Hiding Systems (가역적 데이터 은닉 시스템에서 RS 부호를 사용한 이미지 공간상관 관계 향상을 위한 전송 기법)

  • Kim, Taesoo;Jang, Min-Ho;Kim, Sunghwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.8 / pp.1477-1484 / 2015
  • In this paper, a novel reversible data hiding scheme using Reed-Solomon (RS) codes is proposed for efficient transmission in encrypted images. To improve data recovery from the encrypted image, RS codes are used to encode messages, and the resulting codewords are embedded into the encrypted image according to an encryption key. After receiving the encrypted image that embeds the codewords, the receiver first decrypts it using the encryption key and obtains a metric for the codewords containing the messages. Owing to the recovery capability of RS codes, the data hiding system achieves better message estimation. Simulation results for two images and two RS codes show that the proposed schemes outperform the reference scheme.
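The encode-embed-recover pipeline above can be sketched with a simple stand-in for the RS code: here a 3x repetition code plays the error-correcting role, and a bitwise XOR keystream stands in for the image encryption. Both are illustrative placeholders, not the paper's actual codec or cipher:

```python
import random

def encode_repeat(bits, n=3):
    """Repetition code as a stand-in for RS encoding."""
    return [b for b in bits for _ in range(n)]

def decode_repeat(bits, n=3):
    """Majority-vote decoding recovers the message despite some bit errors."""
    return [1 if sum(bits[i:i + n]) * 2 > n else 0
            for i in range(0, len(bits), n)]

def xor_crypt(bits, key_bits):
    """Placeholder for image encryption: XOR with a key stream (self-inverse)."""
    return [b ^ k for b, k in zip(bits, key_bits)]

random.seed(0)
message = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = encode_repeat(message)

# Sender: encrypt the embedded codeword with the encryption key.
key = [random.randint(0, 1) for _ in codeword]
transmitted = xor_crypt(codeword, key)

# Channel/extraction noise: at most one flip per 3-bit symbol.
transmitted[0] ^= 1
transmitted[10] ^= 1

# Receiver: decrypt with the same key, then error-correct.
received = xor_crypt(transmitted, key)
recovered = decode_repeat(received)
print(recovered == message)  # True: the errors are corrected by majority vote
```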

Key Recovery Algorithm from Randomly-Given Bits of Multi-Prime RSA and Prime Power RSA (비트 일부로부터 Multi-Prime RSA와 Prime Power RSA의 개인키를 복구하는 알고리즘)

  • Baek, Yoo-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.6 / pp.1401-1411 / 2016
  • The Multi-Prime RSA and the Prime Power RSA are variants of the RSA cryptosystem: Multi-Prime RSA uses the modulus $N=p_1p_2{\cdots}p_r$ for distinct primes $p_1,p_2,{\cdots},p_r$ (r>2), and Prime Power RSA uses the modulus $N=p^rq$ for two distinct primes p, q and a positive integer r(>1). This paper analyzes the security of these systems using the technique of Heninger and Shacham. More specifically, it shows that if a random $2-2^{1/r}$ fraction of the bits of $p_1,p_2,{\cdots},p_r$ is given, then $N=p_1p_2{\cdots}p_r$ can be factored in expected polynomial time, and if a random $2-{\sqrt{2}}$ fraction of the bits of p, q is given, then $N=p^rq$ can be factored in expected polynomial time. The analysis is validated with experimental results for $N=p_1p_2p_3$, $N=p^2q$, and $N=p^3q$.
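As a quick check of the thresholds above, the required known-bit fractions can be computed directly (a sketch of the parameter arithmetic only; the actual attack runs a Heninger-Shacham style branch-and-prune key reconstruction, not shown here):

```python
import math

# Fraction of random bits of p1,...,pr needed to factor N = p1*...*pr:
# 2 - 2^(1/r). For r = 2 this is the classic ~0.586 RSA threshold.
for r in [2, 3, 4]:
    frac = 2 - 2 ** (1 / r)
    print(f"Multi-Prime RSA, r={r}: need {frac:.3f} of the bits")

# Fraction of random bits of p, q needed to factor N = p^r * q:
# 2 - sqrt(2), independent of r.
frac_pp = 2 - math.sqrt(2)
print(f"Prime Power RSA: need {frac_pp:.3f} of the bits")
```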

Study on Memory Data Encryption of Windows Hibernation File (윈도우 최대 절전 모드 파일의 메모리 데이터 암호화 기법 연구)

  • Lee, Kyoungho;Lee, Wooho;Noh, Bongnam
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.5 / pp.1013-1022 / 2017
  • Windows hibernation is a function that stores physical memory data on non-volatile media and restores it to physical memory when the system is powered on. Since the hibernation file holds memory data in a static state, an attacker who collects it may obtain key information from the system's physical memory. Because Windows does not protect hibernation files themselves, the memory written to the hibernation file must be protected. In this paper, we propose a method that encrypts the physical memory data in the hibernation file to protect the memory data of the processes recorded in it. We analyze the hibernation procedure in order to encrypt the memory data at hibernation time, and implement the encryption process so that it operates transparently for each process. Experimental results show that the hibernation-memory encryption tool incurs roughly 2.7x overhead due to the encryption cost; this overhead is the price of preventing an attacker from exposing the plaintext memory data of a process.
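The page-wise encryption idea can be sketched as follows. The keystream here is derived with SHA-256 in a counter-mode construction purely for illustration; a real tool would use a vetted cipher such as AES, and the 4 KiB page granularity is an assumption:

```python
import hashlib
import os

PAGE_SIZE = 4096  # assumed page granularity for the hibernation image

def keystream(key: bytes, page_no: int, length: int) -> bytes:
    """Per-page keystream: SHA-256(key || page_no || counter) blocks.
    Illustrative stand-in for a real cipher such as AES-CTR."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + page_no.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt_pages(memory: bytes, key: bytes) -> bytes:
    """XOR each page with its keystream; the same call decrypts."""
    result = bytearray()
    for offset in range(0, len(memory), PAGE_SIZE):
        page = memory[offset:offset + PAGE_SIZE]
        ks = keystream(key, offset // PAGE_SIZE, len(page))
        result.extend(b ^ k for b, k in zip(page, ks))
    return bytes(result)

key = os.urandom(32)
memory = b"secret process data " * 500   # mock hibernation memory image
encrypted = crypt_pages(memory, key)
assert encrypted != memory
assert crypt_pages(encrypted, key) == memory  # symmetric: decrypts back
```

Deriving the keystream per page keeps the transformation transparent to each process: any page can be decrypted independently when its memory is restored.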

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, strict relational schemas cannot expand nodes when rapidly growing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log-analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB log-insert performance evaluations for various chunk sizes.
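The log collector's routing step described above can be sketched as a simple classifier. The function and field names, and the use of a "realtime" flag as the routing criterion, are assumptions for illustration, not the paper's actual interface:

```python
# Route incoming log records either to the MySQL module (real-time
# analysis) or to the MongoDB module (aggregated, batch-analyzed by
# the Hadoop-based module), mirroring the log collector's role.
def route_log(record: dict) -> str:
    """Return the destination store for one log record."""
    if record.get("realtime"):          # needs immediate analysis
        return "mysql"
    return "mongodb"                    # aggregated for later analysis

logs = [
    {"type": "transaction", "realtime": True,  "msg": "transfer ok"},
    {"type": "access",      "realtime": False, "msg": "login"},
    {"type": "error",       "realtime": True,  "msg": "timeout"},
]
routed = {"mysql": [], "mongodb": []}
for rec in logs:
    routed[route_log(rec)].append(rec["msg"])

print(routed["mysql"])    # ['transfer ok', 'timeout']
print(routed["mongodb"])  # ['login']
```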