• Title/Summary/Keyword: 데이터복구 (data recovery)


A Simulation of Debris Flow in a Small Mountainous Catchment in Gyeongbuk Using the RAMMS Model (RAMMS 모형을 이용한 경북 소규모 산지 유역의 토석류 모의)

  • Hyung-Joon Chang;Ho-Jin Lee;Seong-Goo Kim
    • Journal of Korean Society of Disaster and Security
    • /
    • v.17 no.1
    • /
    • pp.1-8
    • /
    • 2024
  • In Korea, mountainous terrain covers 60% of the land, and triggers such as concentrated heavy rainfall and typhoons are increasing, which can result in debris flows and landslides. Despite the high risk of such disasters, most regions have tended to focus on post-damage recovery rather than damage prevention. In this study, therefore, precise topographic data were constructed through on-site surveys and drone measurements in areas where debris flows actually occurred, in order to analyze debris-flow risk zones. The RAMMS numerical analysis model was used to simulate debris flows in the prone areas, and the simulated results were compared with the actual debris-flow distribution to evaluate the model's applicability. As a result, the debris-flow generation area calculated by the RAMMS model was 18% larger than the actual area, and the travel distance was 10% smaller. However, the simulated shape of the debris-flow source area and the flow path closely resembled the observations. In future work, we aim to conduct additional research, including model verification suited to domestic conditions and the selection of damage-prediction areas through debris-flow analysis in ungauged watersheds.
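The reported over- and under-estimates are signed relative errors against the field survey. A minimal sketch in Python; the area and runout values below are invented for illustration, and only the resulting percentages mirror the abstract:

```python
def relative_error(simulated: float, observed: float) -> float:
    """Signed relative difference of a simulated quantity vs. a field observation."""
    return (simulated - observed) / observed

# Hypothetical field-survey values (illustrative only, not from the paper):
observed_area_m2 = 10_000.0
simulated_area_m2 = 11_800.0   # 18% over-estimate, as reported
observed_runout_m = 500.0
simulated_runout_m = 450.0     # 10% under-estimate, as reported

print(f"area error:   {relative_error(simulated_area_m2, observed_area_m2):+.0%}")
print(f"runout error: {relative_error(simulated_runout_m, observed_runout_m):+.0%}")
```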

A Study on the Design of Standard Code for Hazardous and Noxious Substance Accidents at Sea (해상 HNS 사고 표준코드 설계에 관한 연구)

  • Ha, Min-Jae;Jang, Ha-Lyong;Yun, Jong-Hwui;Lee, Moonjin;Lee, Eun-Bang
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.22 no.2
    • /
    • pp.228-232
    • /
    • 2016
  • As the quantity of HNS transported by sea and the number of HNS accidents at sea have both increased recently, the importance of HNS management is being emphasized; in this study we therefore develop a standard code for marine accident cases so that HNS accidents at sea can be systematically databased. First, we derived the required elements of essential accident reports from domestic and international decrees and established statistics on classified items, and we referred to analogous standard codes from developed countries in designing the code. The code design follows the general flow of a marine HNS accident, 'Accident occurrence → Initial accident information → Accident response → Accident investigation', in which accident information is entered and queried. We classified initial accident information into five categories and constructed the Preliminary Information Code (P.I.C.). In addition, we constructed the accident response in two categories and the accident investigation in three categories, which become available after the accident occurs, as the Full Information Code (F.I.C.), which includes the P.I.C. Each topic is represented in three levels by subdividing the major class into middle and minor classes. Applying the code to a typical example of a marine HNS accident confirmed that the accident could be represented sufficiently well. We expect that applying the code will make it feasible to predict possible incidents or accidents, and we consider that systematically managing data on future marine HNS accidents will be valuable for the preparedness for, response to and restoration from HNS accidents at sea.
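The three-level major/middle/minor classification described above can be sketched as a small data structure. The category names and digit widths here are assumptions for illustration, not the paper's actual scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccidentCode:
    """Illustrative three-level hierarchical accident code (major/middle/minor),
    in the spirit of the P.I.C. described above; field semantics are assumed."""
    major: int   # e.g. 1 = initial accident information
    middle: int  # e.g. 3 = substance involved
    minor: int   # e.g. 7 = specific HNS category

    def __str__(self) -> str:
        # Render as a dash-separated code with zero-padded sublevels.
        return f"{self.major}-{self.middle:02d}-{self.minor:02d}"

pic = AccidentCode(major=1, middle=3, minor=7)
print(pic)  # 1-03-07
```

A frozen dataclass keeps each code immutable and hashable, so codes can be used directly as database keys.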

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1413-1425
    • /
    • 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. They cause loss of vegetation and affect ecosystem changes depending on their intensity and occurrence. Ecosystem changes, in turn, affect wildfire occurrence, causing secondary damage. Thus, accurate estimation of the areas affected by wildfires is fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after forest fires. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. This study examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images in California. Also, we conducted a comprehensive comparison and analysis of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as normalized difference vegetation index (NDVI) and normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. The inclusion of spectral indices alongside the base wavelength bands resulted in increased metric values for all combinations, affirming that the augmentation of input data with spectral indices contributes to the refinement of pixels. 
This study can be applied to other satellite images to build a recovery strategy for fire-burnt areas.
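The NDVI and NBR input channels follow the standard normalized-difference definitions. A sketch with NumPy, assuming Landsat 8 OLI surface-reflectance bands B4 (red), B5 (NIR) and B7 (SWIR-2); the sample reflectance values are invented:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)  # epsilon guards divide-by-zero

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalized burn ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    return (nir - swir2) / (nir + swir2 + 1e-10)

# Stack the indices alongside the base bands as extra input channels
# for a segmentation model (tiny 1x2-pixel example):
red   = np.array([[0.10, 0.30]])
nir   = np.array([[0.40, 0.35]])
swir2 = np.array([[0.20, 0.45]])
x = np.stack([red, nir, swir2, ndvi(nir, red), nbr(nir, swir2)], axis=-1)
print(x.shape)  # (1, 2, 5)
```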

Construction and Application of a MicroPACS Using Open Source Software (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.51-56
    • /
    • 2009
  • Purpose: Recently, most hospitals have introduced PACS, and use of the system continues to expand, but a small-scale PACS called MicroPACS can already be built with open source programs. The aim of this study is to demonstrate the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. The study covers setting up a MicroPACS with open source programs and assessing its storage capability, stability, compatibility, and the performance of operations such as query and retrieve. Materials and Methods: 1. We first searched for open source software meeting the following criteria: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not limit storage capacity; (6) it must support DICOM. 2. (1) To evaluate data storage, we compared the time spent backing up data with the open source software against optical discs (CDs and DVD-RAMs), and likewise compared retrieval times. (2) To estimate work efficiency, we measured the time needed to find data on CDs, on DVD-RAMs, and in the MicroPACS; 7 technologists participated. 3. To evaluate the stability of the software, we examined whether any data were lost while the system was maintained for a year, against a comparison set of how many errors occurred in 500 randomly selected CDs. Result: 1. Among 11 open source packages, we chose the Conquest DICOM Server, which uses MySQL as its database management system. 2. (1) Back-up and retrieval times (min) were as follows: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) on the GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) on the GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) on the SIEMENS scanner. (2) The time (sec) needed to find data was: CD (156±46), DVD-RAM (115±21), and Conquest DICOM Server (13±6). 3. There was no data loss (0%) over the year, during which 12,741 PET/CT studies were stored in 1.81 TB; by contrast, 14 errors occurred among the 500 CDs (2.8%). Conclusions: A MicroPACS can be set up with open source software, and its performance proved excellent: the system was more efficient and more robust than back-up using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data storage device as long as its operators develop and systematize it.
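Two of the reported figures can be checked with direct arithmetic: the CD media error rate and the rough search-time speedup of the server over CDs (mean times from the abstract):

```python
# Reported in the abstract: 14 bad discs among 500 sampled CDs,
# and mean search times of 156 s (CD) vs. 13 s (Conquest DICOM Server).
cd_errors, cds_sampled = 14, 500
print(f"CD error rate: {cd_errors / cds_sampled:.1%}")         # 2.8%

cd_search_s, server_search_s = 156, 13
print(f"search speedup: {cd_search_s / server_search_s:.0f}x")  # 12x
```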


A Theoretical Study for Estimation of Oxygen Effect in Radiation Therapy (방사선 조사시 산소가 세포에 미치는 영향의 이론적 분석)

  • Rena J. Lee;HyunSuk Suh
    • Progress in Medical Physics
    • /
    • v.11 no.2
    • /
    • pp.157-165
    • /
    • 2000
  • Purpose: A mathematical model was used and tested to estimate the yields of DNA damage induced by radiation and enhanced by oxygen. Materials and Methods: The reactions of the products of water radiolysis were modeled as ordinary time-dependent differential equations. These reactions include the formation of radicals, DNA damage, damage repair, restitution, and damage fixation by oxygen and the H radical. Several rate constants were obtained from the literature, while others were calculated by fitting experimental data. Sensitivity studies were performed by changing the chemical rate constants at a constant oxygen number density and by varying the oxygen concentration. The effects of oxygen concentration as well as the damage-fixation mechanism by oxygen were investigated. The oxygen enhancement ratio (OER) was calculated to compare the simulated data with experimental data. Results: Sensitivity studies with oxygen showed that DNA survival was a function of both the oxygen concentration and the magnitude of the chemical rate constants. There was no change in survival fraction as a function of dose while the oxygen concentration changed from 0 to 1.0×10^7. When the oxygen concentration changed from 1.0×10^7 to 1.0×10^10, there was a significant decrease in cell survival. The OER values obtained from the simulation were 2.32 at the 10% cell survival level and 1.9 at the 45% level. Conclusion: The sensitivity studies demonstrated that the experimental data were reproduced, with the effects enhanced when the oxygen rate constants were largest and the oxygen concentration was increased. The OER values from the simulation showed good agreement at low cell survival levels, indicating that this semi-empirical model can predict the effect of oxygen on cell killing.
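The OER at a given survival level is the ratio of the dose required under hypoxia to the dose required under well-oxygenated conditions for that same survival. A minimal illustration using the textbook linear-quadratic survival model S(D) = exp(-(αD + βD²)), which is not the paper's reaction-kinetics model; the α/β parameter values are invented for the sketch:

```python
import math

def dose_for_survival(s: float, alpha: float, beta: float) -> float:
    """Invert S(D) = exp(-(alpha*D + beta*D^2)) for the dose D (beta > 0)."""
    # alpha*D + beta*D^2 = -ln(s) is a quadratic in D; take the positive root.
    c = -math.log(s)
    return (-alpha + math.sqrt(alpha**2 + 4.0 * beta * c)) / (2.0 * beta)

def oer(s: float, aerobic=(0.3, 0.03), hypoxic=(0.1, 0.01)) -> float:
    """Ratio of hypoxic to aerobic dose producing the same survival level s.
    Parameter pairs are (alpha, beta); the values are illustrative only."""
    return dose_for_survival(s, *hypoxic) / dose_for_survival(s, *aerobic)

print(round(oer(0.10), 2))
```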


Geological Heritage Grade Distribution Mapping Using GIS (공간정보를 이용한 지질유산 등급분포도 작성 연구)

  • Lee, Soo-Jae;Lee, Sunmin;Lee, Moung-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_3
    • /
    • pp.867-878
    • /
    • 2017
  • Interest in geological heritage has recently increased because it contains information about the past global environment and can therefore serve as basic data for predicting global environmental change. In addition, owing to its characteristics, geological heritage is easy to damage and difficult to recover without continuous preservation and management. Damage nonetheless keeps occurring because of the sporadic spatial distribution of geological heritage and the ambiguity of management authority over it. An integrated management system is therefore needed, beginning with determining the spatial distribution of geological heritage. In this study, detailed value-assessment criteria from previous studies were applied, and a geological heritage grade distribution map was generated using geospatial data for the Seoul metropolitan area. For this purpose, a list of geological heritage sites in the study area was compiled through a literature review. A geospatial database was designed and constructed by applying the detailed value-assessment criteria, and a grade map of the geological heritage was then created. The resulting map showed that more than 35% of the geological heritage lies in northern Gyeonggi Province, in cities such as Yeoncheon (18.8%), Pocheon (10.6%) and Paju (6.3%), followed by 18.1% in Incheon and 8.1% in Ansan, or approximately 26.2% in western Gyeonggi Province. By geological age, the Quaternary period of the Cenozoic era was most common, at 16.9%. Through these results, the geological heritage data of the Seoul metropolitan area were extracted from existing literature and converted into spatial information, which makes it possible to compare geological features with the spatial distribution of geological heritage. A management system was also established on the basis of the continuously accumulated spatial information on geological heritage, giving the management authority an integrated system that can serve as a basis for developing geological parks. Based on these results, it should be possible to systematically construct and utilize geological heritage data across the country.
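Once the heritage sites are tabulated, regional shares like those reported above are simple frequency ratios over the site inventory. A sketch of the computation pattern only; the per-city counts below are invented, not the study's data:

```python
from collections import Counter

# Invented site counts per city; only the percentage computation itself
# reflects the study's summary step, not these numbers.
sites = Counter({"Yeoncheon": 30, "Pocheon": 17, "Paju": 10,
                 "Incheon": 29, "Ansan": 13, "elsewhere": 61})
total = sum(sites.values())
shares = {city: round(100.0 * n / total, 1) for city, n in sites.items()}
print(shares)
```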

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1021-1028
    • /
    • 2005
  • A general solution for classification and regression problems can be found by constructing matrices that match the information in the real world and then learning these matrices in neural networks. This paper treats the primary space as the real world, and the dual space as the space to which the primary space is matched via a kernel. In practice there are two kinds of problems: complete systems, whose answer can be obtained using the inverse matrix, and ill-posed or singular systems, whose answer cannot be obtained directly from the inverse of the given matrix. Moreover, problems are often of the latter kind, so a regularization parameter must be found to convert ill-posed or singular problems into complete systems. This paper compares the performance, on both classification and regression problems, of GCV and the L-curve, which are well-known methods for obtaining a regularization parameter, with kernel methods. Both GCV and the L-curve perform excellently in obtaining regularization parameters, with similar results that differ slightly depending on the conditions of the problem. However, these methods are two-step solutions: the regularization parameter must first be calculated, and only then can the problem be passed to another solving method. Compared with GCV and the L-curve, kernel methods are a one-step solution that learns the regularization parameter simultaneously with the pattern weights during training. This paper also proposes a dynamic momentum, learned under a limited proportional condition between the learning epoch and the performance on the given problem, to increase the performance and precision of regularization. Finally, experiments show that the proposed solution achieves better or equivalent results compared with GCV and the L-curve, using the Iris data, a standard benchmark for classification; Gaussian data, typical of singular systems; and the Shaw data, a one-dimensional image restoration problem.
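The two-step baseline the paper compares against can be illustrated concretely: select a Tikhonov (ridge) regularization parameter for an ill-posed linear system by minimizing the GCV score, then solve with that parameter. This is a textbook sketch of GCV, not the paper's kernel-based one-step method; the test problem is randomly generated:

```python
import numpy as np

def gcv_score(A: np.ndarray, b: np.ndarray, lam: float) -> float:
    """GCV score n*||(I - H)b||^2 / (n - trace(H))^2 for the ridge
    solution x = (A^T A + lam*I)^{-1} A^T b, with hat matrix H."""
    n, p = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    r = b - H @ b
    return n * float(r @ r) / (n - np.trace(H)) ** 2

# Build a near-singular (ill-conditioned) test system with noisy data.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
A[:, -1] = A[:, 0] + 1e-6 * rng.normal(size=50)  # nearly dependent column
x_true = rng.normal(size=20)
b = A @ x_true + 0.1 * rng.normal(size=50)

# Step 1: pick lambda by GCV over a log grid. Step 2: solve with it.
lams = np.logspace(-6, 2, 30)
best = min(lams, key=lambda l: gcv_score(A, b, l))
x_hat = np.linalg.solve(A.T @ A + best * np.eye(20), A.T @ b)
print(f"GCV-selected lambda: {best:.2e}")
```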

Analysis and Improvement Strategies for Korea's Cyber Security Systems Regulations and Policies

  • Park, Dong-Kyun;Cho, Sung-Je;Soung, Jea-Hyen
    • Korean Security Journal
    • /
    • no.18
    • /
    • pp.169-190
    • /
    • 2009
  • Today, rapid advances in science and technology have fundamentally changed the types and levels of terrorism, while a war against more than a thousand terrorist and criminal organizations, small and large, around the world has already begun. A method highly likely to be employed by terrorist groups using 21st-century state-of-the-art technology is cyber terrorism. In many instances, things that could only be imagined become possible in cyberspace. A simple example would be randomly altering a letter in the blood type of a target in a health-care data system, which could harm individuals and contribute to overturning an opponent's system or regime. The CIH virus crisis of April 26, 1999 had significant implications in several respects: a virus program of just a few lines, written by Taiwanese college students without any specific objective, spread widely across the Internet, damaging 30,000 PCs in Korea and causing over 2 billion won in repair and data-recovery costs. Despite such risks of cyber terrorism, a great number of Korean sites employ loose security measures; indeed, there are many cases where companies with millions of subscribers run very lax security systems. Nationwide preparation for cyber terrorism is called for. In this context, this research analyzes the current status of Korea's cyber security systems and laws from a policy perspective and proposes improvement strategies. It suggests the following solutions. First, a National Cyber Security Management Act should be passed to serve as the effective national cyber security management regulation. Its establishment would enable a more efficient and proactive response to cyber security management within a nationwide framework, define its relationship with other related laws, and eliminate the inefficiencies caused by functional redundancies dispersed across individual sectors in current legislation. Second, to ensure efficient nationwide cyber security management, national cyber security standards and models should be proposed, and a national cyber security management organizational structure should be established to implement national cyber security policies across government agencies and social components. The National Cyber Security Center must serve as the comprehensive collection, analysis and processing point for information related to national cyber crises, oversee each government agency, and build collaborative relations with the private sector. A comprehensive national response system in which both the private and public sectors participate should also be set up, for early detection and prevention of cyber crisis risks and for a consolidated and timely response using national resources in times of crisis.
