• Title/Summary/Keyword: rule accuracy


Accuracy of Spirometry at Predicting Restrictive Pulmonary Impairment (제한성 환기장애의 진단에서 폐활량검사의 정확성)

  • Ahn, Young Mee;Koh, Won-Jung;Kim, Cheol Hong;Lim, Seong Yong;An, Chang Hyeok;Suh, Gee Young;Chung, Man Pyo;Kim, Hojoong;Kwon, O Jung
    • Tuberculosis and Respiratory Diseases
    • /
    • v.54 no.3
    • /
    • pp.330-337
    • /
    • 2003
  • Background : A low spirometric forced vital capacity (FVC) in conjunction with a normal or high ratio of the forced expiratory volume in 1 second to the forced vital capacity ($FEV_1$/FVC%) has traditionally been classified as a restrictive abnormality. However, the gold-standard diagnosis of a restrictive pulmonary impairment requires measurement of the total lung capacity (TLC). This study was performed to determine the predictive value of spirometric FVC measurements for diagnosing a restrictive pulmonary abnormality. Methods : Test results from 1,371 adult patients who underwent both spirometry and lung volume measurements on the same visit from January 1999 to December 2000 were included in this study. An FVC or TLC below 80% of the predicted value and an $FEV_1$/FVC% below 70% were classified as abnormal. Results : Of the 1,371 patients, 353 had a reduced FVC, and 186 of these had a reduced TLC, giving a positive predictive value of 52.7%. Of the 196 patients with a normal $FEV_1$/FVC% and a reduced FVC, 148 (75.5%) had a low TLC. Thirty-eight (24.2%) of the 157 patients with a low $FEV_1$/FVC% and a low FVC showed a restrictive defect. Conclusion : Spirometry is useful to rule out a restrictive pulmonary abnormality, but a restrictive pattern on spirometry does not mean there is true restrictive disease. For patients with a low FVC, TLC measurement is essential for diagnosing a restrictive pulmonary impairment.
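The reported predictive values follow directly from the counts in the abstract. A minimal sketch, assuming those counts and the usual definition PPV = true positives / (true positives + false positives):

```python
# Positive predictive value of a low spirometric FVC for a truly reduced TLC,
# recomputed from the counts reported in the abstract (illustration only).
low_fvc = 353          # patients with a reduced FVC on spirometry
low_fvc_low_tlc = 186  # of those, patients whose TLC was also reduced

ppv = low_fvc_low_tlc / low_fvc
print(f"Positive predictive value: {ppv:.1%}")  # -> 52.7%

# Subgroup values quoted in the abstract, same definition
print(f"Normal FEV1/FVC% subgroup: {148 / 196:.1%}")  # -> 75.5%
print(f"Low FEV1/FVC% subgroup:    {38 / 157:.1%}")   # -> 24.2%
```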

Prediction of Target Motion Using Neural Network for 4-dimensional Radiation Therapy (신경회로망을 이용한 4차원 방사선치료에서의 조사 표적 움직임 예측)

  • Lee, Sang-Kyung;Kim, Yong-Nam;Park, Kyung-Ran;Jeong, Kyeong-Keun;Lee, Chang-Geol;Lee, Ik-Jae;Seong, Jin-Sil;Choi, Won-Hoon;Chung, Yoon-Sun;Park, Sung-Ho
    • Progress in Medical Physics
    • /
    • v.20 no.3
    • /
    • pp.132-138
    • /
    • 2009
  • Studies on target motion in 4-dimensional radiotherapy are being conducted worldwide to improve treatment outcomes and the protection of normal organs. Prediction of tumor motion can be very useful, or even essential, for free-breathing delivery techniques such as respiratory gating and tumor tracking. A neural network is well suited to expressing a nonlinear time series because its prediction is not governed by a fixed statistical formula but instead learns a rule from the data itself. This study assessed the applicability of a neural network method to predicting tumor motion in 4-dimensional radiotherapy. The Scaled Conjugate Gradient algorithm was employed as the learning algorithm. Using respiration data from 10 patients, predictions by the neural network were compared with measurements from the real-time position management (RPM) system. The results showed that the neural network achieved excellent accuracy, with a maximum absolute error smaller than 3 mm, except for cases in which the maximum amplitude of respiration exceeded the range of respiration used during training. This indicates insufficient learning for extrapolation, a problem that could be solved by acquiring the full range of respiration before the learning procedure. Further work is planned to verify the feasibility of practical application to a 4-dimensional treatment system, including prediction performance under various system latencies and irregular respiration patterns.
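As an illustration of the approach described above, the sketch below trains a small feedforward network to predict the next respiratory position from a short history window. It is a minimal sketch only: the sinusoidal trace, window length, and scikit-learn's MLPRegressor (a substitute for the paper's Scaled Conjugate Gradient training, which scikit-learn does not provide) are assumptions, not the authors' implementation.

```python
# Minimal sketch: next-step prediction of a respiratory trace with a small MLP.
# Synthetic data and MLPRegressor stand in for the paper's patient traces and
# Scaled Conjugate Gradient training; both are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)                       # 60 s sampled at 10 Hz
trace = 10 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 0.3, t.size)  # mm

window = 20                                     # 2 s of history per sample
X = np.array([trace[i:i + window] for i in range(len(trace) - window)])
y = trace[window:]                              # position one step ahead

split = int(0.8 * len(X))                       # train on the first 80%
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
max_abs_err = np.max(np.abs(pred - y[split:]))
print(f"Maximum absolute error: {max_abs_err:.2f} mm")
```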


Current Wheat Quality Criteria and Inspection Systems of Major Wheat Producing Countries (밀 품질평가 현황과 검사제도)

  • 이춘기;남중현;강문석;구본철;김재철;박광근;박문웅;김용호
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.47
    • /
    • pp.63-94
    • /
    • 2002
  • With the purpose of suggesting an advanced scheme for assessing domestic wheat quality, this paper reviews the inspection systems of the major wheat producing countries as well as the quality criteria used in wheat grading and classification. Most wheat producing countries adopt both class and grade classifications to provide an objective evaluation and an official certification of their wheat. There are two main purposes in wheat classification. The first is to match the wheat with market requirements so as to maximize market opportunities and returns to growers. The second is to ensure that payments to growers are made on the basis of the quality and condition of the grain delivered. Wheat classes are assigned based on a combination of distinctive cultivation-area, seed-coat color, kernel and varietal characteristics. Most reputable wheat marketers employ a similar approach, whereby varieties of a particular type are grouped together, designated by seed-coat colour, grain hardness, physical dough properties, and sometimes more precise specifications such as starch quality, all of which are genetically inherited characteristics. In simple terms, this classification is the categorization of a wheat variety into a commercial type or style of wheat that is recognizable for its end-use capabilities. All varieties registered in a class are required to have similar end-use performance so that shipments are consistent in processing quality, cargo to cargo and year to year. Grain inspectors have historically determined wheat classes according to visual kernel characteristics associated with traditional wheat varieties. In addition, any new wheat variety must not conflict with the visual distinguishability rule that is used to separate wheats of different classes. Some varieties may possess characteristics of two or more classes; therefore, knowledge of distinct varietal characteristics is necessary in making class determinations. The grading system sets maximum tolerance levels for a range of characteristics that ensure functionality and freedom from deleterious factors, as sketched after this abstract. Tests for the grading of wheat include factors such as plumpness, soundness, cleanliness, purity of type and general condition. Plumpness is measured by test weight. Soundness is indicated by the absence or presence of musty, sour or commercially objectionable foreign odors and by the percentage of damaged kernels present in the wheat. Cleanliness is measured by determining the presence of foreign material after dockage has been removed. Purity of class is measured by classification of the wheats in the test sample and by limitations on admixtures of different classes of wheat. Moisture does not influence the numerical grade; however, it is determined on all shipments and reported on the official certificate. U.S. wheat is divided into eight classes based on color, kernel hardness and varietal characteristics: Durum, Hard Red Spring, Hard Red Winter, Soft Red Winter, Hard White, Soft White, Unclassed and Mixed. Among them, Hard Red Spring, Durum and Soft White wheat are each further divided into three subclasses. Each class or subclass is divided into five U.S. numerical grades and U.S. Sample grade. Special grades are provided to emphasize special qualities or conditions affecting the value of wheat and are added to and made a part of the grade designation.
Canadian wheat is likewise divided into fourteen classes based on cultivation area, color, kernel hardness and varietal characteristics. The classes have two to five numerical grades, a feed grade and sample grades depending on the class and grading tolerances. The Canadian grading system is based mainly on visual evaluation and works on the kernel visual distinguishability concept. Australian wheat is classified based on geographical and quality differentiation. The wheat grown in Australia is predominantly white grained, and there are commonly up to 20 different segregations of wheat in a given season. Each variety grown is assigned a category and a growing area. The state governments in Australia, in cooperation with the Australian Wheat Board (AWB), issue receival standards and dockage schedules annually that list grade specifications and tolerances for Australian wheat. The AWB manages "Golden Rewards", which is designed to provide pricing accuracy and market signals for Australia's grain growers. Golden Rewards provides continuous payment scales for protein content from 6 to 16% and screenings levels from 0 to 10% based on varietal classification, and the active payment scales and prices can change with market movements.
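To make the tolerance-based grading idea concrete, here is a minimal rule-based grader sketch. The factor names follow the abstract, but every numeric limit and the function name are illustrative placeholders, not actual U.S., Canadian or Australian tolerances:

```python
# Minimal sketch of tolerance-based grading: the best grade whose limits a
# sample satisfies is assigned. All numeric limits are illustrative placeholders.
SAMPLE_GRADE = "Sample grade"

# Per-grade limits: minimum test weight (kg/hL) and maximum damaged kernels /
# foreign material (%). Grades are checked from best (1) to worst (5).
GRADE_LIMITS = {
    1: {"min_test_weight": 79.0, "max_damaged": 2.0, "max_foreign": 0.4},
    2: {"min_test_weight": 77.0, "max_damaged": 4.0, "max_foreign": 0.7},
    3: {"min_test_weight": 74.0, "max_damaged": 7.0, "max_foreign": 1.3},
    4: {"min_test_weight": 71.0, "max_damaged": 10.0, "max_foreign": 3.0},
    5: {"min_test_weight": 68.0, "max_damaged": 15.0, "max_foreign": 5.0},
}

def grade_sample(test_weight, damaged_pct, foreign_pct):
    """Return the numerical grade of a wheat sample, or Sample grade."""
    for grade, limits in GRADE_LIMITS.items():
        if (test_weight >= limits["min_test_weight"]
                and damaged_pct <= limits["max_damaged"]
                and foreign_pct <= limits["max_foreign"]):
            return grade
    return SAMPLE_GRADE

print(grade_sample(test_weight=78.2, damaged_pct=3.1, foreign_pct=0.5))  # -> 2
```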

A Combat Effectiveness Evaluation Algorithm Considering Technical and Human Factors in C4I System (NCW 환경에서 C4I 체계 전투력 상승효과 평가 알고리즘 : 기술 및 인적 요소 고려)

  • Jung, Whan-Sik;Park, Gun-Woo;Lee, Jae-Yeong;Lee, Sang-Hoon
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.55-72
    • /
    • 2010
  • Recently, the battlefield environment has changed from platform-centric warfare (PCW), which focuses on maneuvering forces, to network-centric warfare (NCW), which is based on the connectivity of each asset through the warfare information system, as information technology advances. In particular, the C4I (Command, Control, Communication, Computer and Intelligence) system can be an important factor in achieving NCW. It is generally used to provide direction across distributed forces and status feedback from those forces. It can deliver important information more quickly and in the correct format to friendly units, and it can achieve information superiority through situational awareness (SA). Most advanced countries have developed such systems and already applied them in military operations. ROK forces have therefore also been developing C4I systems such as KJCCS (Korea Joint Command Control System), and the budget for establishing warfare information systems is increasing. However, it is difficult to evaluate C4I effectiveness properly owing to a deficiency of methods, so a new combat effectiveness evaluation method suitable for NCW is needed. Existing evaluation methods lay disproportionate emphasis on technical factors while leaving something to be desired in human factors; it is therefore necessary to consider both technical and human factors when evaluating combat effectiveness. In this study, we propose a new combat effectiveness evaluation algorithm called E-TechMan (A Combat Effectiveness Evaluation Algorithm Considering Technical and Human Factors in C4I System). This algorithm uses an analogue of Newton's second law, $F=\frac{m\Delta\upsilon}{\Delta t} \Rightarrow \frac{M\,\upsilon\,I}{T}\times C$. The five factors considered in the combat effectiveness evaluation are network power (M), movement velocity (v), information accuracy (I), command and control time (T) and collaboration level (C). Previous research did not consider the value of the nodes and arcs in evaluating network power after the C4I system has been established, and the collaboration level, which can be a major factor in combat effectiveness, was also not considered. The E-TechMan algorithm is applied to the JFOS-K (Joint Fire Operating System-Korea) system, which connects the KJCCS of the Korean armed forces with the JADOCS (Joint Automated Deep Operations Coordination System) of the U.S. armed forces and achieves a real-time sensor-to-shooter capability at the JCS (Joint Chiefs of Staff) level. We compared the combat effectiveness evaluated by E-TechMan with the results of other algorithms (e.g., C2 theory, Newton's second law) and found that combat effectiveness can be evaluated more effectively and substantially by the E-TechMan algorithm. This study is meaningful because it improves how realistically combat effectiveness can be calculated for a C4I system. Part 2 describes the changes in the war paradigm and previous combat effectiveness evaluation methods such as C2 theory, while Part 3 explains the E-TechMan algorithm in detail. Part 4 presents the application to JFOS-K and analyzes the results against the other algorithms. Part 5 provides the conclusions.
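A minimal sketch of the analogy as stated in the abstract: combat effectiveness rises with network power, movement velocity, information accuracy and collaboration level, and falls as command and control time grows. The function name and the sample values are assumptions for illustration, not the paper's calibration.

```python
# Minimal sketch of the E-TechMan analogy E = (M * v * I / T) * C described in
# the abstract. Sample values are illustrative only, not the paper's data.
def combat_effectiveness(network_power, velocity, info_accuracy,
                         c2_time, collaboration):
    """E = (M * v * I / T) * C, following the abstract's variable list."""
    if c2_time <= 0:
        raise ValueError("command and control time must be positive")
    return (network_power * velocity * info_accuracy / c2_time) * collaboration

# Example: a better-connected, more collaborative force with the same C2 time.
baseline = combat_effectiveness(100, 1.0, 0.80, 10, 0.70)
networked = combat_effectiveness(140, 1.0, 0.90, 10, 0.85)
print(f"baseline:  {baseline:.1f}")
print(f"networked: {networked:.1f}")
```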

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages have been widely selected for developing modern information systems. The use of object-oriented (OO) programming concepts has reduced the effort of reusing pre-existing code, and OO concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support the features of object-oriented programming. The Unified Modeling Language (UML) has become a de facto standard for information system designers since it provides a set of visual diagrams, comprehensive frameworks and flexible expressions. In a modeling process, UML users need to consider the relationships between classes. Based on an explicit and clear representation of classes, the conceptual model from UML captures the attributes and methods needed to guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. The representation of part-whole relationships is natural in real-world domains, since many physical objects are perceived as part-whole relationships, and even abstract concepts such as roles are easily identified by part-whole perception. A representation of part-whole in UML therefore seems reasonable and useful. However, it must be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it into an aggregate or a composite association. Research efforts to develop this procedural knowledge are meaningful and timely, because misleading perceptions of part-whole relationships are hard to filter out in initial conceptual modeling and thus degrade system usability. The current method of identifying and classifying part-whole relationships relies mainly on linguistic expression. This simple approach is rooted in the idea that a has-a phrase constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composite association; otherwise, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general; nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not accumulated sufficient results to solve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on the theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared with traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the modeling process itself.
Traditional theories on evaluating part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations. Such qualification is still important, but the lack of a practical alternative may reduce the appropriateness of posterior inspection for modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating different alternative forms of guidelines, including plugins for integrated development environments.
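The aggregation/composition distinction the abstract discusses is commonly illustrated in code by whether the whole owns the lifetime of its parts. The sketch below is a generic illustration of that convention, not the paper's self-check forms or Meta-model Formalization criteria; the class names are hypothetical.

```python
# Generic illustration of the UML distinction discussed above (not the paper's
# guidelines): in a composite association the whole creates and owns its parts,
# while in an aggregate association parts exist independently and can be shared.
class Room:
    def __init__(self, name: str):
        self.name = name

class House:                       # composition: House owns its Rooms
    def __init__(self, room_names):
        # Rooms are created by, and live no longer than, the House.
        self.rooms = [Room(n) for n in room_names]

class Professor:
    def __init__(self, name: str):
        self.name = name

class Department:                  # aggregation: Professors exist on their own
    def __init__(self, members):
        # The Department only references Professors created elsewhere.
        self.members = list(members)

kim = Professor("Kim")
d1 = Department([kim])
d2 = Department([kim])             # the same part may belong to several wholes
h = House(["kitchen", "bedroom"])  # the rooms cannot outlive the house
```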

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated rapidly with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved greater technological advances than ever before, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question answering system of a smart speaker. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into the RDF triple structure. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
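As a rough illustration of the final step of the pipeline above, the sketch below turns a BIO-tagged sentence into a triple. The tag scheme, sentence, entity names, property prefix and helper function are hypothetical stand-ins, not the paper's learned classes or relations.

```python
# Illustrative only: collect a BIO-tagged value span and emit a triple
# (subject, predicate, object). The tags, prefixes and sentence are made up.
def bio_spans(tokens, tags):
    """Yield (label, text) for each contiguous B-/I- span in a tagged sentence."""
    span, label = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if span:
                yield label, " ".join(span)
            span, label = [tok], tag[2:]
        elif tag.startswith("I-") and span:
            span.append(tok)
        else:
            if span:
                yield label, " ".join(span)
            span, label = [], None
    if span:
        yield label, " ".join(span)

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
tags   = ["B-capital", "O", "O", "O", "O", "O", "O", "O"]

subject = "dbr:South_Korea"                 # from the document classification step
for prop, value in bio_spans(tokens, tags):
    print((subject, f"dbo:{prop}", value))  # -> ('dbr:South_Korea', 'dbo:capital', 'Seoul')
```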

Evaluation of Applicability of Sea Ice Monitoring Using Random Forest Model Based on GOCI-II Images: A Study of Liaodong Bay 2021-2022 (GOCI-II 영상 기반 Random Forest 모델을 이용한 해빙 모니터링 적용 가능성 평가: 2021-2022년 랴오둥만을 대상으로)

  • Jinyeong Kim;Soyeong Jang;Jaeyeop Kwon;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1651-1669
    • /
    • 2023
  • Sea ice currently covers approximately 7% of the world's ocean area, primarily concentrated in polar and high-latitude regions, and is subject to seasonal and annual variation. Analyzing the area and type of sea ice through time-series monitoring is very important because sea ice forms in various types over large spatial scales, and oil and gas exploration and other marine activities are increasing rapidly. Currently, research on sea ice type and area is conducted using high-resolution satellite images and field measurements, but there is a limit to how much sea ice monitoring can rely on field measurement data. High-resolution optical satellite images can visually detect and identify types of sea ice over a wide area and can compensate for gaps in sea ice monitoring with the Geostationary Ocean Color Imager-II (GOCI-II), an ocean satellite with high temporal resolution. This study examined the feasibility of sea ice monitoring by training a rule-based machine learning model on training data produced from high-resolution optical satellite images and performing detection on GOCI-II images. Training data were extracted for Liaodong Bay in the Bohai Sea from 2021 to 2022, and a Random Forest (RF) model using GOCI-II was constructed and compared qualitatively and quantitatively with sea ice areas obtained from the existing normalized difference snow index (NDSI) based method and from high-resolution satellite images. Unlike the NDSI-based results, which underestimated the sea ice area, this study detected relatively detailed sea ice areas and confirmed that sea ice can be classified by type, enabling sea ice monitoring. If the accuracy of the detection model is improved in the future through continuous construction of training data and consideration of the factors influencing sea ice formation, it is expected to be usable for sea ice monitoring in high-latitude ocean areas.
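A minimal sketch of the kind of pixel-level Random Forest classification described above, using scikit-learn. The band names, the three-class labeling, and the synthetic feature table are assumptions for illustration; they are not the study's GOCI-II training set or model configuration.

```python
# Illustrative sketch: Random Forest classification of pixels into open water /
# thin ice / thick ice from per-band reflectances. Data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 3000
# Hypothetical feature table: one row per pixel, columns = band reflectances.
X = rng.uniform(0.0, 1.0, size=(n, 4))          # e.g. blue, green, red, NIR
# Synthetic labels: brighter pixels are more likely to be ice (toy rule only).
brightness = X.mean(axis=1)
y = np.digitize(brightness, bins=[0.45, 0.6])   # 0=water, 1=thin ice, 2=thick ice

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print(f"Hold-out accuracy: {rf.score(X_te, y_te):.3f}")
```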

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. The predictive models are built from the perspective of two different analyses. The first concerns the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second concerns the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two or three years later. A total of six prediction models are therefore developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is a widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST and C5.0. Among them, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the others. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, comprising 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables from the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis the reliability of financial analysis indices has increased and firms' intention to conduct rights issues has become more evident. The experimental results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction, whereas the long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different industries show different patterns for rights issues.
We conclude that stakeholders should take stability-related indices into account for short-term prediction and a broader range of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy need to be compared using other data mining techniques such as neural networks, logistic regression and SVM. Second, new prediction models should be developed and evaluated that include variables which research on capital structure theory has identified as relevant to rights issues.
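The sketch below reproduces the general shape of the experiment: a 60/40 split and a tree classifier with rule-style output. scikit-learn's DecisionTreeClassifier and the synthetic index table stand in for PASW Modeler's C5.0 node and the TS2000 data, which cannot be reproduced here.

```python
# Illustrative sketch of the experimental setup: 60/40 split, tree classifier,
# printable rules. DecisionTreeClassifier substitutes for the paper's C5.0 node,
# and the feature table is synthetic rather than the TS2000 financial indices.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n, n_indices = 2000, 10                    # rows = firm-years, cols = indices
X = rng.normal(size=(n, n_indices))
# Toy target: a "rights issue" is more likely when the first two (stability-like)
# indices are low. This is a synthetic rule, not an empirical finding.
y = ((X[:, 0] + X[:, 1]) < -0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_tr, y_tr)

print(f"Test accuracy: {tree.score(X_te, y_te):.3f}")
print(export_text(tree, feature_names=[f"index_{i}" for i in range(n_indices)]))
```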

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing forward with the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. Because the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly; many officials reportedly discover relevant regulations they were unaware of only after pushing ahead with their work. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that could correct such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program related documents and the corresponding statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (taken from actual statutes) and "Edited Sentences" (sentences derived by editing an "Original Sentence"). Among the many Defense Acquisition Program related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentences" comprise the 83 clauses that actually appear in these Acts and are most frequently encountered by working-level officials in their work. The "Edited Sentences" consist of 30 to 50 similar sentences per clause that are likely to appear, in modified form, in military reports. During the creation of the edited sentences, the original sentences were modified according to 12 specific rules, and the edited sentences were produced in proportion to the number of such rules, as was the case for the original sentences. After conducting 1:1 sentence similarity performance evaluation experiments, each "Edited Sentence" could be classified as legal or illegal with considerable accuracy. However, although the "Edited Sentence" dataset used to train the neural network models contains a variety of actual statutory statements ("Original Sentences") characterized by the 12 rules, the models are not able to effectively classify other sentences that appear in actual military reports when only the "Original Sentence" and "Edited Sentence" dataset has been fed to them; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the performance of the models was reassessed after writing an additional 120 new sentences that more closely resemble those in actual military reports while still being associated with the original sentences.
Thereafter, we were able to confirm that the models' performance surpassed a certain level even when they had been trained only with the "Original Sentence" and "Edited Sentence" data. If sufficient model learning is achieved by improving and expanding the full training set with sentences that actually appear in reports, the models will be able to better classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the possibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The approach studied in this experiment can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in Defense Acquisition Program related military reports, which helps determine whether the contents of the report sentences carry a risk of illegality when compared with the law clauses.
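For readers unfamiliar with the architecture named above, the sketch below shows the core of a Siamese sentence-similarity model in PyTorch: one shared Bi-LSTM encoder applied to both sentences, with cosine similarity as the score. The vocabulary size, dimensions and threshold are arbitrary placeholders; this is not the authors' Bi-LSTM, Self-Attention or D_Bi-LSTM configuration.

```python
# Minimal Siamese sentence-similarity sketch (PyTorch): a shared Bi-LSTM encoder
# embeds both sentences; cosine similarity of the two encodings is the score.
# Dimensions, vocabulary and the 0.5 threshold are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBiLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        _, (h, _) = self.encoder(self.embed(token_ids))
        return torch.cat([h[0], h[1]], dim=-1)   # concat forward/backward states

    def forward(self, sent_a, sent_b):
        return F.cosine_similarity(self.encode(sent_a), self.encode(sent_b))

model = SiameseBiLSTM()
report_sentence = torch.randint(1, 10000, (1, 20))    # stand-in token ids
statute_sentence = torch.randint(1, 10000, (1, 20))
score = model(report_sentence, statute_sentence)
print("similar" if score.item() > 0.5 else "dissimilar", f"(score={score.item():.2f})")
```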