• Title/Summary/Keyword: Key Recovery (키 복구)


Development of Vegetation Structure after Forest Fire in the East Coastal Region, Korea (동해안 산불 피해지에서 산불 후 경과 년 수에 따른 식생 구조의 발달)

  • 이규송;정연숙;김석철;신승숙;노찬호;박상덕
    • The Korean Journal of Ecology / v.27 no.2 / pp.99-106 / 2004
  • We developed an estimation model for the vegetation development processes on severely burned slopes after forest fire in the east coastal region of Korea. We also calculated vegetation indices as useful parameters for developing land-management techniques in burned areas, and described how these indices change after fire. In order to estimate the woody standing biomass in the burned areas, allometric equations for 17 woody species regenerating by sprouting were derived. According to our results, about twenty years after a forest fire are needed for development into a normal forest with a four-stratum structure of tree, sub-tree, shrub, and herb layers. The height of the top vegetation layer, the basal area, and the standing biomass of woody species tended to increase linearly after fire, while ground vegetation coverage and the litter layer tended to increase logarithmically. Among the vegetation indices, Ivc and Ivcd tended to increase logarithmically, and Hcl and Hcdl tended to increase linearly. The greatest spatial variation in most vegetation factors was observed in the developmental stages within the first five years, during which secondary disasters by soil erosion are expected. Among the vegetation indices, Ivc and Ivcd were good indices for representing spatial heterogeneity in the earlier developmental stages, while Hcl and Hcdl were useful for long-term estimation of vegetation development after forest fire.
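
The linear versus logarithmic trends described above can be checked with an ordinary least-squares fit. The sketch below uses hypothetical yearly coverage values (not the paper's data) and compares a linear model y = a + b·t against a logarithmic model y = a + b·ln(t) by residual sum of squares:

```python
import math

def fit_linear(ts, ys):
    # Ordinary least squares for y = a + b*t
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / sum((t - mt) ** 2 for t in ts)
    return my - b * mt, b

def fit_log(ts, ys):
    # Least squares for y = a + b*ln(t), by fitting linearly in ln(t)
    return fit_linear([math.log(t) for t in ts], ys)

def rss(model, ts, ys, log=False):
    # Residual sum of squares of a fitted (a, b) pair
    a, b = model
    return sum((y - (a + b * (math.log(t) if log else t))) ** 2 for t, y in zip(ts, ys))

# Hypothetical ground-coverage values that saturate with time since fire
years = [1, 2, 3, 5, 10, 15, 20]
coverage = [10, 35, 48, 62, 80, 90, 96]
lin, lg = fit_linear(years, coverage), fit_log(years, coverage)
print(rss(lin, years, coverage), rss(lg, years, coverage, log=True))
```

For saturating quantities like coverage, the logarithmic model fits far better; for quantities like canopy height the linear model would win, which is the distinction the abstract draws.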

APEC Mining Task Force 개요

  • Heo, Cheol-Ho
    • 한국지구과학회:학술대회논문집 / 2010.04a / pp.110-110 / 2010
  • The APEC MTF conference held over two days, 23-24 July 2009, gave APEC member economies an opportunity to discuss the theme of sustainable development of the mining sector in the Asia-Pacific region. The conference was a key part of APEC's project on sustainable development of the mining sector, and active participation enhanced the project's success. The sustainable-development agenda has been a core part of APEC for many years and has in particular been brought before the Mining Ministers' (MRM) meetings. At the first meeting in Antofagasta, Chile, in June 2004, the mining ministers agreed that sustainable development of the mining and metals industry in the APEC region creates wealth, generates environmental business, promotes socially responsible development, and produces enhanced value for society. Among the initial action items was identifying the contribution of minerals and metals to sustainable development. Discussion of the sustainable-development agenda in mining continued at the second APEC MRM meeting in Gyeongju, Korea, in October 2005. The related action task was to encourage information exchange and cooperation on environmentally friendly mining technologies, such as energy-efficiency technologies and mining pollution control, as well as post-mining land management. At the third meeting in Perth, Australia, in 2007, the APEC MRM meeting recognized the need for closer regional cooperation on sustainable development of mineral resources in the APEC region, particularly in an era of globalization. The ministers also decided to lead work to establish APEC's position on sustainable development in the mining sector and to provide input reflecting APEC economies' common interests to the UNCSD. At the APEC MTF meeting on sustainable development of APEC mining, Australia, Canada, Chile, China, Indonesia, Japan, Malaysia, Papua New Guinea, Peru, the Philippines, Korea, Russia, Singapore, Taiwan, Thailand, the United States, and Vietnam presented or commented on their own sustainable-development activities; the World Bank and AIM also gave presentations. The important sub-topics were as follows: that the APEC MTF is an appropriate forum for pursuing sustainable development of the APEC mining sector; that corporations need to faithfully fulfill their social responsibility (CSR); that shortages of water and human resources need to be addressed; and that proper mine rehabilitation is needed. Korea proposed a project idea entitled "Balance between the environment and mining for sustainable development of the mining sector." Indonesia and Malaysia emphasized the importance of Korea's carrying out this project. The Russian Federation proposed a project idea entitled "Promoting investment in mining." In this regard, the MTF supported cooperation with the APEC Investment Experts Group and asked the APEC Secretariat to facilitate this cross-forum activity. The project will be proposed and carried out on the basis of an analysis of best practices for promoting investment in the global mining sector. Malaysia proposed a capacity-building project on sustainable-development indicators for mining and the mining industry. Thailand supported Malaysia's proposal and suggested a joint project.


Key Recovery Algorithm from Randomly-Given Bits of Multi-Prime RSA and Prime Power RSA (비트 일부로부터 Multi-Prime RSA와 Prime Power RSA의 개인키를 복구하는 알고리즘)

  • Baek, Yoo-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.6 / pp.1401-1411 / 2016
  • The Multi-Prime RSA and the Prime Power RSA are variants of the RSA cryptosystem: Multi-Prime RSA uses the modulus $N=p_1p_2{\cdots}p_r$ for distinct primes $p_1,p_2,{\cdots},p_r$ (r>2), and Prime Power RSA uses the modulus $N=p^rq$ for two distinct primes p, q and a positive integer r(>1). This paper analyzes the security of these systems using the technique of Heninger and Shacham. More specifically, it shows that if a random $2-2^{1/r}$ fraction of the bits of $p_1,p_2,{\cdots},p_r$ is given, then $N=p_1p_2{\cdots}p_r$ can be factored in expected polynomial time, and that if a random $2-{\sqrt{2}}$ fraction of the bits of p and q is given, then $N=p^rq$ can be factored in expected polynomial time. The analysis is validated with experimental results for $N=p_1p_2p_3$, $N=p^2q$ and $N=p^3q$.
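
The Heninger-Shacham-style recovery can be illustrated for the simplest case N = pq: reconstruct the factors bit by bit from the least significant bit, pruning any candidate that contradicts a known bit or the congruence p·q ≡ N (mod 2^(i+1)). The sketch below uses toy bit-lengths; the paper's setting extends the same pruning to N = p1···pr and N = p^r q:

```python
def recover_factors(N, known_p, known_q, nbits):
    """Branch-and-prune recovery of p, q with N = p*q, given partial bits.
    known_p / known_q map a bit index to its known value (0 or 1)."""
    candidates = [(1, 1)]  # RSA primes are odd, so bit 0 of each is 1
    for i in range(1, nbits):
        mod = 1 << (i + 1)
        nxt = []
        for p, q in candidates:
            for bp in (0, 1):
                if i in known_p and known_p[i] != bp:
                    continue  # contradicts a known bit of p
                for bq in (0, 1):
                    if i in known_q and known_q[i] != bq:
                        continue  # contradicts a known bit of q
                    pp, qq = p | (bp << i), q | (bq << i)
                    # prune: the true factors satisfy p*q = N (mod 2^(i+1))
                    if (pp * qq) % mod == N % mod:
                        nxt.append((pp, qq))
        candidates = nxt
    for p, q in candidates:
        if 1 < p < N and p * q == N:
            return p, q
    return None

# Toy example: N = 61 * 53 with a few known bits of each factor
print(recover_factors(3233, {1: 0, 3: 1}, {3: 0}, 6))
```

The mod-2^(i+1) pruning is what keeps the candidate tree from exploding when enough random bits are known, which is exactly the threshold behavior the abstract quantifies.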

Study on Memory Data Encryption of Windows Hibernation File (윈도우 최대 절전 모드 파일의 메모리 데이터 암호화 기법 연구)

  • Lee, Kyoungho;Lee, Wooho;Noh, Bongnam
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.5 / pp.1013-1022 / 2017
  • Windows hibernation is a function that stores the data of physical memory on a non-volatile medium and then restores that data to physical memory when the system is powered on again. Since the hibernation file holds memory data in a static state, an attacker who collects it may obtain key information residing in the system's physical memory. Because Windows provides no protection specific to hibernation files, the memory written into the hibernation file must itself be protected. In this paper, we propose a method that encrypts the physical memory data in the hibernation file to protect the memory of the processes recorded there. The hibernation procedure is analyzed so that memory data can be encrypted at hibernation time, and the encryption process is implemented to operate transparently for each process. Experimental results show that the hibernation memory encryption tool incurs about 2.7 times overhead due to the cost of encryption. This overhead is the price of preventing an attacker from exposing the plaintext memory data of a process.
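
The idea of transparently encrypting process pages before they reach the hibernation file can be sketched as follows. This is a toy illustration only: the SHA-256-based keystream stands in for the block cipher a real implementation would use, and the page size and key handling are assumptions, not the paper's design:

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size (assumption for this sketch)

def keystream(key: bytes, page_no: int, length: int) -> bytes:
    # Derive a per-page keystream by hashing (key, page number, counter).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + page_no.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return bytes(out[:length])

def crypt_page(key: bytes, page_no: int, data: bytes) -> bytes:
    # XOR with the per-page keystream; the same call encrypts and decrypts.
    ks = keystream(key, page_no, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Pages would be passed through `crypt_page` on the way into the hibernation file and again on resume; binding the keystream to the page number ensures identical pages do not produce identical ciphertext.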

A Study on Cryptography Scheme and Secure Protocol for Safety Secure Scheme Construction in 13.56Mhz RFID (13.56Mhz RFID 환경에서 안전한 보안 스킴 구축을 위한 암호 스킴 및 보안 프로토콜 연구)

  • Kang, Jung-Ho;Kim, Hyung-Joo;Lee, Jae-Sik;Park, Jae-Pyo;Jun, Moon-Seog
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.3 / pp.1393-1401 / 2013
  • RFID refers collectively to technologies in which a reader recognizes a microchip tag attached to an object and authenticates the object through communication with a server. Among the many kinds of RFID tags, 13.56 MHz RFID cards follow the ISO/IEC 14443 standard, and NXP's Mifare tags based on it occupy 72.5% of the world market. Among Mifare tags, the low-cost Mifare Classic provides only limited hardware-based security operations; its protocol has been exposed by various attacks, and key-recovery vulnerabilities exist. In this paper, we therefore design a cryptographic scheme and a secure protocol for constructing a safe security scheme in the 13.56 MHz RFID environment. In the proposed scheme, the keystream KS is generated from a mix of fixed and non-fixed values, passed through an S-Box, and crossed between an LFSR and the S-Box; it addresses vulnerabilities of existing schemes such as spoofing and replay attacks and fully satisfies general RFID security requirements. It is also designed with the tag's limited hardware computational capability and existing security schemes in mind, so it can be applied to Mifare Classic as deployed today.
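
The keystream construction described (an LFSR whose output is mixed through an S-Box) can be illustrated with a toy generator. The 16-bit LFSR polynomial and the 4-bit S-box below are arbitrary example choices, not the scheme proposed in the paper:

```python
# Example 4-bit S-box (illustrative, not the paper's)
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def lfsr_step(state: int) -> int:
    # 16-bit Galois LFSR, maximal-length polynomial x^16 + x^14 + x^13 + x^11 + 1
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def keystream_nibbles(seed: int, n: int):
    # Each step: clock the LFSR, then pass its low nibble through the S-box,
    # mimicking the "values crossed between LFSR and S-Box" construction.
    state = seed
    out = []
    for _ in range(n):
        state = lfsr_step(state)
        out.append(SBOX[state & 0xF])
    return out
```

An LFSR alone is linear and easily reconstructed from output; routing its output through a nonlinear S-box is the standard low-cost countermeasure, which matches the design goal of fitting Mifare Classic's limited hardware.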

Comparative Analysis of Rice Lodging Area Using a UAV-based Multispectral Imagery (무인기 기반 다중분광 영상을 이용한 벼 쓰러짐 영역의 특성 분석)

  • Moon, Hyun-Dong;Ryu, Jae-Hyun;Na, Sang-il;Jang, Seon Woong;Sin, Seo-ho;Cho, Jaeil
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.917-926 / 2021
  • Lodging of rice is one of the critical agro-meteorological disasters. In this study, UAV-based multispectral imagery acquired before and after rice lodging in a paddy field of the Jeollanamdo Agricultural Research and Extension Services in 2020 was analyzed. The UAV image of 14th Aug. shows paddy rice without any damage, whereas those of 4th and 19th Sep. show areas of lodging. A multispectral camera with 10 bands from 444 nm to 842 nm was used. In areas where restoration work had been carried out against lodging, reflectance from 531 nm to 842 nm was lower than in un-lodged rice. In lodged areas, reflectance around 668 nm increased slightly, and the increases in the blue and NIR (near-infrared) bands were larger. The change in reflectance, however, differed with the type of lodging. The NDVI (Normalized Difference Vegetation Index) and NDRE (Normalized Difference Red Edge) showed some sensitivity to lodged rice, but this also differed with the type of lodging. These results will be useful for building an algorithm to detect areas of lodged rice using a UAV.
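
The two indices used above follow directly from band reflectances. A minimal sketch, using the conventional definitions; the sample reflectance values and the choice of band centers (red ≈ 668 nm, red edge ≈ 717 nm, NIR ≈ 842 nm) are illustrative assumptions, not the paper's measurements:

```python
def ndvi(nir: float, red: float) -> float:
    # NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def ndre(nir: float, red_edge: float) -> float:
    # NDRE = (NIR - RedEdge) / (NIR + RedEdge)
    return (nir - red_edge) / (nir + red_edge)

# Hypothetical reflectances: upright canopy vs lodged rice
healthy = {"red": 0.04, "red_edge": 0.25, "nir": 0.45}
lodged  = {"red": 0.06, "red_edge": 0.30, "nir": 0.52}
print(ndvi(healthy["nir"], healthy["red"]), ndvi(lodged["nir"], lodged["red"]))
```

Because lodging raises red reflectance while NIR also rises, the two terms partly cancel, which is consistent with the abstract's observation that index sensitivity depends on the type of lodging.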

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer-system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured-log processing system in a cloud environment for handling the massive volumes of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze these log data, a separate log-processing system needs to be established. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to build a cloud-based log-processing system for unstructured log data that are difficult to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for fast and reliable parallel-distributed processing of massive log data. 
Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by building a distributed database on the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are ill-suited to unstructured log data, and their strict schemas prevent easy node expansion when rapidly growing data must be distributed across nodes. NoSQL does not offer the complex computations a relational database provides, but it can easily expand through node dispersion when data grow quickly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented store MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes unstructured log data easy to process, it supports flexible node expansion as data grow, and it provides an auto-sharding function that expands storage automatically. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. 
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects the data, classifies them by log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module produces the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and presents them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is confirmed through MongoDB insert-performance evaluations over various chunk sizes.
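
The log collector's classify-and-route step can be sketched in plain Python. The log types and routing rule below are illustrative assumptions, not the paper's actual taxonomy: records needing real-time analysis go to the MySQL path, everything else to the MongoDB path for batch processing:

```python
def classify(record: dict) -> str:
    # Hypothetical rule: transaction logs need real-time analysis,
    # all other log types are aggregated for batch processing.
    return "realtime" if record.get("type") == "transaction" else "batch"

def route(records):
    # Split a stream of log records into the two stores,
    # mirroring the MySQL (real-time) / MongoDB (batch) division.
    realtime, batch = [], []
    for r in records:
        (realtime if classify(r) == "realtime" else batch).append(r)
    return realtime, batch
```

In the real system the two lists would be inserts into the MySQL and MongoDB modules respectively; the free-schema side benefits from MongoDB because each log type can carry different fields without migrations.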