• Title/Summary/Keyword: Damage reason and type


The Accuracy of the Digital Imaging System and the Frequency Dependent Type Apex Locator in Root Canal Length Measurement (근관장 측정에 있어서 디지털 영상 처리기와 주파수 의존형 측정기의 정확도)

  • Lee Byaung-Rib; Park Chang-Seo
    • Journal of Korean Academy of Oral and Maxillofacial Radiology / v.28 no.2 / pp.435-459 / 1998
  • In order to achieve successful endodontic treatment, root canals must be obturated three-dimensionally without causing any damage to the apical tissues, so accurate determination of root canal length is critical. For this reason, we used conventional periapical radiography, Digora® (a digital imaging system), and Root ZX® (a frequency-dependent apex locator) to measure canal length and compared the results with the true length, obtained by sectioning each tooth and measuring the distance between the occlusal surface and the apical foramen. From these measurements we evaluated the accuracy and clinical usefulness of each system. Whether the thickness of the files used in endodontic therapy has any effect on the measuring systems was also evaluated, in an effort to simplify the treatment-planning phase of endodontic treatment. Twenty-nine canals of 29 sound premolars were measured with #15, #20, and #25 files by 3 different dentists, each using periapical radiography, Digora®, and Root ZX®; the measurements were then compared with the true length. The results were as follows: 1. Comparing the mean discrepancies from true length for periapical radiography (mean error: -0.449±0.444 mm), Digora® (mean error: -0.417±0.415 mm), and Root ZX® (mean error: 0.123±0.458 mm), periapical radiography and the Digora® system showed statistically significant differences (p<0.05) in most cases, while Root ZX® showed none (p>0.05). 2. By subtracting the values obtained with periapical radiography, Digora®, and Root ZX® from the true length and tabulating the distribution of their absolute values, the following analysis was possible: with periapical film, 140 out of 261 measurements (53.6%) were clinically acceptable, satisfying a margin of error of less than 0.5 mm; 151 out of 261 (57.9%) were acceptable with the Digora® system; and Root ZX® had 197 out of 261 (75.5%) within the 0.5 mm margin of error. 3. The thickness of the files had no statistically significant effect on any of the measuring methods (p>0.05). 4. Comparing the measuring methods with one another, there was no statistically significant difference between periapical radiography and the Digora® system (p>0.05), but Root ZX® differed significantly from both periapical radiography (p<0.05) and the Digora® system (p<0.05). In conclusion, Root ZX® was more accurate than the Digora® system and periapical radiography and appears to be more effective clinically for determining root canal length. However, Root ZX® is limited in determining root morphology and the number of roots, and its accuracy becomes questionable when the apical foramen is open for unknown reasons; therefore, the combined use of Root ZX® and periapical radiography is mandatory. The Digora® system appears to be more effective when periapical radiographs are needed within a short period of time because of its short processing time and lower exposure.
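
The abstract does not state which statistical procedure produced the p-values. Purely as an illustration of the kind of summary reported above (mean error ± SD against true length, the share of readings within the 0.5 mm margin, and a paired comparison with true length), a minimal Python sketch with placeholder numbers might look like the following; the arrays, the choice of a paired t-test, and the use of the 0.5 mm cutoff in code are assumptions, not the authors' method.

```python
import numpy as np
from scipy import stats

# Placeholder measurements (mm) for one method and the corresponding true lengths.
# In the study, each of the 29 canals was measured with 3 file sizes by 3 dentists (261 readings).
measured = np.array([21.0, 19.5, 22.3, 20.1])
true_len = np.array([21.4, 20.0, 22.5, 20.8])

errors = measured - true_len
mean_error = errors.mean()
sd_error = errors.std(ddof=1)

# Proportion of readings within the clinically acceptable 0.5 mm margin of error
acceptable = np.mean(np.abs(errors) < 0.5)

# Paired t-test against the true length (one plausible way to obtain p-values like those reported)
t_stat, p_value = stats.ttest_rel(measured, true_len)

print(f"mean error = {mean_error:.3f} ± {sd_error:.3f} mm")
print(f"within 0.5 mm: {acceptable:.1%}")
print(f"paired t-test p = {p_value:.3f}")
```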


Managing Technological Risk and Risk Conflict : Public Debates on Health Risks of Mobile Phones EMF (기술위험 관리와 위험갈등 : 휴대전화 전자파의 인체유해성 논란)

  • Jung, Byung-Kul
    • Journal of Science and Technology Studies / v.8 no.1 / pp.97-129 / 2008
  • We are living in a time of high probability of technological risk, owing to the increased rate of technology development and the diffusion of new technologies. Resolving uncertainty, the basic attribute of risk, by accumulating knowledge about the risk factors of a given technology is critical to the management of technological risk. In many cases of technological risk, high uncertainty of knowledge is the commonly cited reason for public controversy over risk management. However, for the type of technological risk characterized by low uncertainty of knowledge but low social agreement, the main reason for public controversy is the absence of social agreement. Public debate on the risks of mobile phone electromagnetic fields (EMF) to human health falls into this category. The uncertainty of knowledge about the human health effects of mobile phone EMF has been lowered steadily by the accumulation of an enormous volume of knowledge, though scientists have not reached a final conclusion on whether it poses a risk to the physical and mental health of the general population. In contrast to civil organizations calling for regulation based on a precautionary approach, the mobile phone industry clings to the position that no regulation is needed, arguing that no clear evidence of health risks from mobile phone EMF has been found. In Korea, the government set exposure standards based on a measurement called the specific absorption rate (SAR) and requires the mobile phone industry to disclose SAR information to the public at its own discretion. From the viewpoint of the pro-regulation side grounded in the precautionary approach, the technological risk management of mobile phone EMF in Korea is highly limited and formalistic, with SAR measured only at the head and a problematic self-regulated disclosure of SAR information to the public. As long as the government gives priority to protecting the interests of the mobile phone industry over precautionary regulation of mobile phone EMF, the disagreement between civil organizations and the government will not be resolved. Because of the familiarity of mobile phone technology, the risk of mobile phone EMF to human health is highly likely to be underestimated, in both its likelihood and its damage, relative to objective estimates, and this can become the cause of destructive social dispute or devastating disaster. To prevent such results, technological risk management is required that integrates the goal of safety with economic growth in public policy and that designs and promotes risk communication.


An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against systems connected to the network occur frequently. This means that fatal damage can be caused by such intrusions at government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or of hackers' abnormal behaviors. These models perform well under normal conditions, but they show poor performance when they encounter a new or unknown pattern of network attack. For this reason, several recent studies have tried to adopt artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement of a large sample size, and the lack of transparency in the prediction process (the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider asymmetric error costs by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection. The first is the false-positive error (FPE), in which a wrong judgment may result in unnecessary fixes. The second is the false-negative error (FNE), in which malicious activity is misjudged as normal. Compared with FPE, FNE is more fatal; thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign a heavier weight to FNE than to FPE. Therefore, we designed the proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost (a sketch of this idea follows the abstract). In this case, a conventional SVM cannot be applied because it is designed to generate discrete output (i.e., a class label). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and the ANN was run using Neuroshell 4.0. For SVM, LIBSVM v2.90, a free library for training SVM classifiers, was used. Empirical results showed that the proposed SVM-based model outperformed all the comparative models in detecting network intrusions from the accuracy perspective. They also showed that our model reduced the total misclassification cost compared with the ANN-based intrusion detection model. As a result, it is expected that the intrusion detection model proposed in this paper will not only enhance the performance of IDS but also lead to better management of FNE.
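
The paper's implementation used LIBSVM with Platt's probability estimates; the sketch below is only a schematic illustration of the threshold-optimization idea in Python using scikit-learn (whose SVC also wraps LIBSVM and applies Platt scaling when probability=True). The synthetic dataset, the RBF kernel, and the 10:1 cost ratio for FNE versus FPE are placeholder assumptions, not values from the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Placeholder data standing in for the intrusion log features (1 = intrusion, 0 = normal).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=0)

# SVC(probability=True) trains a LIBSVM classifier and fits Platt scaling for probability estimates.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)
p_intrusion = clf.predict_proba(X_valid)[:, 1]

# Asymmetric costs: a missed intrusion (FNE) is assumed to cost far more than a false alarm (FPE).
COST_FNE, COST_FPE = 10.0, 1.0  # hypothetical weights

def total_cost(threshold):
    pred = (p_intrusion >= threshold).astype(int)
    fne = np.sum((pred == 0) & (y_valid == 1))  # intrusions predicted as normal
    fpe = np.sum((pred == 1) & (y_valid == 0))  # normal traffic flagged as intrusion
    return COST_FNE * fne + COST_FPE * fpe

# Grid-search the classification threshold that minimizes the total misclassification cost.
thresholds = np.linspace(0.05, 0.95, 91)
best = min(thresholds, key=total_cost)
print(f"optimal threshold = {best:.2f}, cost = {total_cost(best):.0f}")
```

With a heavier FNE weight, the optimal threshold typically moves below 0.5, so that borderline cases are flagged as intrusions rather than missed, which is the intended trade-off when FNE is the costlier error.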

The Restoration and Conservation of Indigo Paper in the Late Goryeo Dynasty: Focusing on Transcription of Saddharmapundarika Sutra(The Lotus Sutra) in Silver on Indigo Paper, Volume 7 (고려말 사경의 감지(紺紙) 재현과 수리 - 이화여자대학교 소장 감지은니묘법연화경을 중심으로 -)

  • Lee, Sanghyun
    • Korean Journal of Heritage: History & Science / v.54 no.1 / pp.52-69 / 2021
  • The transcriptions of Buddhist sutras made in the Goryeo Dynasty are more elaborate and splendid than those of any other period and occupy a very important position in Korean bibliography. Among them, the transcriptions made on indigo paper show decorative features that represent the dignity and quality the nobility would have preferred. During the Goryeo Dynasty, a large number of transcriptions were made on indigo paper, often in hand-scrolled and folded forms. Because of the structural limitations of a material that must be rolled up and opened, the hand-scrolled form caused inconvenience and damage in handling unless sufficient flexibility was guaranteed. These shortcomings could be overcome by changing from the hand-scrolled to the folded form, which offers convenience and structural stability. The folded transcription uses the same principle as a folding screen: it is a structure that can be folded and unfolded, made by connecting sections at regularly spaced intervals. No matter how small the transcription, if it is made of thin paper it is difficult to handle and to keep in shape and structure. For this reason, the folded transcription was usually made of thick paper to support the structure, and the cover was made thicker than the inner part to protect the contents; in other words, the folded form was generally manufactured to maintain strength by making the paper thick. Because a large amount of indigo paper was needed to make this type of transcription, it is assumed that there were craftsmen in charge solely of dyeing the paper its dark color. Paper dyeing usually requires much more dye than silk dyeing, and dozens of dyeing passes would have been required to obtain the deep indigo color of the base paper of the Goryeo-Dynasty sutra transcriptions. Unfortunately, there is no record of the Goryeo Dynasty's indigo paper manufacturing technique, and the craftsmen who made indigo paper no longer remain, so no one knows the exact method of making it. Recently, Hanji artisans, natural dyers, and conservators attempted to restore the Goryeo Dynasty's indigo paper, but the texture and deep colors found in the relics could not be reproduced. This study introduces the process of restoring Goryeo-Dynasty indigo paper through collaboration among dyeing artisans, Hanji artisans, and conservators for the conservation of a late-Goryeo transcription of a Buddhist sutra, and it suggests a method of making indigo paper.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and to prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and it may cause enormous damage. In particular, failures in IT facilities occur irregularly because of interdependence, and it is difficult to identify their causes. Previous studies predicting failures in data centers treated each server as a single, independent state, without assuming that the devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure on one server can cause failures on other servers, or be triggered by failures propagating from other servers. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on a specific piece of equipment, any failure that occurs on other equipment within 5 minutes of that time is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the constructed sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used, considering that the contribution of each server to a complex failure differs; this approach increases prediction accuracy by giving greater weight to the servers with greater impact on the failure (a simplified sketch of this structure follows the abstract). The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was analyzed both under the single-server assumption and under the multiple-server assumption, and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold of each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred, whereas under the multiple-server assumption all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another. Overall, the study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved the prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that the occurrence of failures can be prevented in advance by using the results of this study.
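
The abstract describes the model only at a high level; the following is a hypothetical, simplified Python (PyTorch) sketch of the idea — a shared LSTM encodes each server's metric window, and an attention layer weights servers by their estimated contribution to the complex failure. A full Hierarchical Attention Network would typically also attend within each server's time series; layer sizes, the server count, and all identifiers here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ServerAttentionFailurePredictor(nn.Module):
    """Hypothetical sketch: one shared LSTM encodes each server's metric window,
    an attention layer weights the servers, and a classifier predicts failure."""

    def __init__(self, n_features, hidden_size=64, attn_size=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.attn_proj = nn.Linear(hidden_size, attn_size)
        self.attn_vec = nn.Linear(attn_size, 1, bias=False)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features) -- one metric window per server
        b, s, t, f = x.shape
        _, (h_n, _) = self.encoder(x.reshape(b * s, t, f))
        server_repr = h_n[-1].reshape(b, s, -1)                          # (batch, n_servers, hidden)

        # Attention over servers: larger weight = larger estimated impact on the failure
        scores = self.attn_vec(torch.tanh(self.attn_proj(server_repr)))  # (batch, n_servers, 1)
        weights = torch.softmax(scores, dim=1)
        context = (weights * server_repr).sum(dim=1)                     # (batch, hidden)

        return torch.sigmoid(self.classifier(context)), weights.squeeze(-1)

# Toy usage: 8 samples, 5 servers, 60 time steps, 12 metrics per step (all placeholder sizes)
model = ServerAttentionFailurePredictor(n_features=12)
prob, server_weights = model(torch.randn(8, 5, 60, 12))
print(prob.shape, server_weights.shape)  # torch.Size([8, 1]) torch.Size([8, 5])
```

In this kind of design, the per-server attention weights returned by the forward pass indicate how strongly each server is implicated in a predicted complex failure, which is also the natural place to apply different decision thresholds per server, in the spirit of the second experiment described above.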