• Title/Abstract/Keywords: visual decision

352 search results, processing time 0.03 seconds

A Study of Improvement of School Health in Korea (학교보건(學校保健)의 개선방안(改善方案) 연구(硏究))

  • Lee, Soo Hee
    • Journal of the Korean Society of School Health
    • /
    • v.1 no.2
    • /
    • pp.118-135
    • /
    • 1988
  • This study is designed to analyze the problems of health education in schools and explore ways of enhancing health education from a historical perspective. It also sheds light on the managerial aspect of health education (including medical check-ups for students, disease management, school feeding, and the health education law and its organization) as well as its educational aspect (including curriculum, teaching and learning, and the wishes of teachers). At the same time it attempts to present ways of resolving the problems in health education as identified here. Its major findings are as follows: I. Conclusion and Summary 1. Despite the importance of health education, the area remains relatively undeveloped. Students spend a greater part of their time in schools. Hence the government should develop a keener awareness of the importance of health education and invest more in it to ensure a healthy, comfortable life for students. 2. At the moment the outcomes of medical check-ups for students, which constitute the mainstay of health education, are used only as statistical data to report to the relevant authorities. Needless to say, they should be used to help improve the wellbeing of students. Specifically, nurse-teachers and home-room teachers should share the outcomes of medical check-ups to help the students with shortcomings in growth or development, or other physical handicaps, more clearly recognize their problems and correct them if possible. 3. In the area of disease management, 62.6, 30.3 and 23.0 percent of primary, middle, and high school students, respectively, were found to suffer from dental ailments. By contrast, 2.2, 7.8, and 11.5 percent of primary, middle and high school students suffered from visual disorders. The incidence of dental ailments decreases while that of visual impairments increases as students grow up. This signifies that students are under tremendous physical strain in their efforts to be admitted to schools of higher grade. 
Accordingly, the relevant authorities should revise the current admission system as well as improve the lighting systems in classrooms. 4. Budget restraints have often been cited as a major bottleneck to the expansion of school feeding. Nevertheless, it should be extended, at least to all primary schools, even at the expense of parents, to ensure the sound growth of children by improving their diet. 5. The existing health education law should be revised in such a way as to better meet the needs of schools. Also, the manpower for health education should be strengthened. 6. A proper curriculum is essential to the effective implementation of health education. Hence it is necessary to remove those parts of the current health education curriculum that overlap with other subjects. It is also necessary to make health education a compulsory course in teachers' colleges; at the same time, the teachers in charge of health education should be given in-service training. 7. Currently health education is taught as part of physical education, science, home economics or other courses. However, these subjects tend to be overshadowed by English, mathematics, and other subjects that carry heavier weight in admission tests. It is necessary, among other things, to develop an educational plan specifying the course hours and teaching materials. 8. Health education is carried out by nurse-teachers or home-room teachers. In connection with health education, they expressed the hope that health education will be normalized with newly developed teaching materials, expanded opportunities for in-service training, and an increased budget, facilities and supply of manpower. These are the main points that decision-makers should take into account in the formation of future policy for health education. II. Recommendations for the Improvement of Health Education 1. Regular medical check-ups for students, which now constitute the mainstay of health education, should be used as educational data in an appropriate manner. 
For instance, the records of medical check-ups could be transferred between schools. 2. School feeding should be expanded, at least in primary schools, at the expense of the government or even parents. It will help improve the physical wellbeing of youths and the diet of the people. 3. At the moment the health education law is only nominal. Hence the law should be revised in such a way as to ensure the physical wellbeing of students and faculty. 4. Health education should be made a compulsory course in teachers' colleges. Also, teachers in service should be offered training in health education. 5. The curriculum of health education should be revised. Also, the course hours should be extended or readjusted to better meet the needs of students. 6. In the meantime the course hours should be strictly observed, while educational materials should be revised without delay. 7. The government should expand its investment in facilities, budget and personnel for health education in schools at all levels.


Prioritization of Species Selection Criteria for Urban Fine Dust Reduction Planting (도시 미세먼지 저감 식재를 위한 수종 선정 기준의 우선순위 도출)

  • Cho, Dong-Gil
    • Korean Journal of Environment and Ecology
    • /
    • v.33 no.4
    • /
    • pp.472-480
    • /
    • 2019
  • Selection of plant material for planting to reduce fine dust should comprehensively consider visual characteristics, such as the shape and texture of the plant leaves and the form of bark, which affect the adsorption function of the plant. However, previous studies on the reduction of fine dust through plants have focused on the absorption function rather than the adsorption function of plants, and on foliage plants, which are indoor plants, rather than outdoor plants. In particular, the criteria for selecting fine dust reduction species are not specific, so research on the selection criteria for plant materials for fine dust reduction in urban areas is needed. The purpose of this study is to identify the priorities of eight indicators that affect fine dust reduction by using a fuzzy multi-criteria decision-making model (MCDM) and to establish tree selection criteria for urban planting to reduce fine dust. For this purpose, we conducted a questionnaire survey of those who majored in fine dust-related academic fields and those with experience of researching fine dust. The results of the survey showed that the area of the leaf and the tree species received the highest scores as factors that affect fine dust reduction. They were followed by the surface roughness of leaves, tree height, growth rate, complexity of leaves, edge shape of leaves, and bark features, in that order. When selecting species that have leaves with coarse surfaces, it is better to select trees with wooly, glossy, and waxy layers on the leaves. When considering the shape of the leaves, it is better to select two-type or three-type leaves and palm-shaped leaves rather than single-type leaves, and to select serrated leaves rather than smooth-edged leaves, to increase the surface area for adsorbing fine dust in the air on the surface of the leaves. 
When considering the characteristics of the bark, it is better to select trees that have cork layers or that show, or are likely to show, bark loosening or cracks than to select those with lenticels or patterned bark. This study is significant in that it presents the priorities of the selection criteria for plant material based on the visual characteristics that affect the adsorption of fine dust, for the planning of planting to reduce fine dust in urban areas. The results of this study can be used as basic data for the selection of trees for plantation planning in urban areas.
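The fuzzy MCDM weighting described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the linguistic scale, triangular fuzzy numbers, criterion names and expert ratings are all hypothetical, and aggregation is a simple average followed by centroid defuzzification.

```python
# Sketch of fuzzy MCDM criterion weighting (hypothetical survey data).
# Each expert rates each criterion on a linguistic scale mapped to a
# triangular fuzzy number (low, mid, high); per-criterion weights are
# aggregated by averaging and defuzzified by the centroid method.

SCALE = {  # linguistic term -> triangular fuzzy number (l, m, u)
    "low":    (0.0, 0.25, 0.5),
    "medium": (0.25, 0.5, 0.75),
    "high":   (0.5, 0.75, 1.0),
}

def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def rank_criteria(ratings):
    """ratings: {criterion: [linguistic term from each expert]}"""
    scores = {}
    for crit, terms in ratings.items():
        tfns = [SCALE[t] for t in terms]
        avg = tuple(sum(x) / len(x) for x in zip(*tfns))  # element-wise mean
        scores[crit] = centroid(avg)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ratings = {  # hypothetical ratings from three experts
    "leaf area":         ["high", "high", "medium"],
    "surface roughness": ["high", "medium", "medium"],
    "bark feature":      ["low", "low", "medium"],
}
ranked = rank_criteria(ratings)
for crit, score in ranked:
    print(f"{crit}: {score:.3f}")
```

The real study derives priorities from many more respondents and eight indicators, but the ranking logic follows this shape.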

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial population of the GA. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space converged on by the GA search. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of the centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the reverse logistics network. Some assumptions are considered for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (the opening costs of collection centers 1, 2, and 3 are 10.5, 12.1, and 8.9, respectively), and collection center 1 is opened while the remainder are closed, then the fixed cost is 10.5. 
The handling cost is the cost of treating the products returned from customers at each center and secondary market that is opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiment, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. For the conventional competing approach, the GA approach by Yun (2013) is used. The GA approach does not have any local search technique such as the IHCM of the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers and secondary markets are presented for comparing the performances of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM on Windows XP. The parameters used in the HGA and GA approaches are as follows: the total number of generations is 10,000, the population size 20, the crossover rate 0.5, the mutation rate 0.1, and the search range for the IHCM 2.0. A total of 20 iterations are made to eliminate the randomness of the searches of the HGA and GA approaches. With performance comparisons, network representations by opening/closing decision, and convergence processes using the two types of the RLNCC, the experimental results show that the HGA has significantly better performance in terms of the optimal solution than the GA, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, it has been shown that the proposed HGA approach is more efficient than the conventional GA approach on the two types of the RLNCC, since the former has a GA search process as well as a local search process as an additional search scheme, while the latter has a GA search process alone. 
For future study, much larger RLNCCs will be tested to assess the robustness of our approach.
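The hybrid loop described above can be sketched in miniature: a bit-string encodes open/close decisions, children are produced by two-point crossover and random mutation, each child is refined by an iterative hill-climbing step, and the worst member of the population is replaced (elitist replacement). The single-stage, four-candidate network and its cost figures are illustrative assumptions, not the paper's MIP model.

```python
import random

# Toy sketch of the hybrid GA: a 0/1 bit-string encodes which candidate
# center is opened; exactly one center must be open to be feasible.
# Fixed and handling costs below are hypothetical.
random.seed(0)

FIXED_COST = [10.5, 12.1, 8.9, 9.7]   # opening cost per candidate center
HANDLE_COST = [3.2, 1.8, 4.0, 2.5]    # handling cost per candidate center

def cost(bits):
    # Penalize infeasible strings that do not open exactly one center.
    if sum(bits) != 1:
        return 1e9
    i = bits.index(1)
    return FIXED_COST[i] + HANDLE_COST[i]

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(bits, rate=0.1):
    return [1 - g if random.random() < rate else g for g in bits]

def hill_climb(bits):
    # Iterative hill climbing: try flipping each bit, keep improvements.
    best = bits
    for i in range(len(bits)):
        cand = best[:]
        cand[i] = 1 - cand[i]
        if cost(cand) < cost(best):
            best = cand
    return best

def hga(pop_size=20, generations=200):
    pop = [[random.randint(0, 1) for _ in FIXED_COST] for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        child = hill_climb(mutate(two_point_crossover(a, b)))
        pop.sort(key=cost)
        pop[-1] = child              # elitist replacement of the worst
    return min(pop, key=cost)

best = hga()
```

The real model minimizes transportation, fixed, and handling costs over multiple stages; the hill-climbing refinement of each GA child is the "hybrid" ingredient that the paper credits for the improved solutions.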

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish contents involving their opinions and interests in social media such as blogs, forums, chatting rooms, and discussion boards, and the contents are released in real time on the Internet. For that reason, many researchers and marketers regard social media contents as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text contents, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and are not sufficiently user-friendly for helping business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conduct opinion mining with visual deliverables. First, we described the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires different ways for analysts to gain access: there are open APIs, searching tools, DB-to-DB interfaces, purchasing contents, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis. 
If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorite, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of contents including blogs, forum contents and news articles. After collecting the social media content data, we generated instant noodle business-specific language resources for data manipulation and analysis using natural language processing. In addition, we tried to classify contents into more detailed categories such as marketing features, environment, reputation, etc. 
In these phases, we used freeware programs such as the TM, KoNLP, ggplot2 and plyr packages in the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, to provide vivid, full-colored examples using open library software packages of the R project. Business actors can detect at a swift glance which areas are weak, strong, positive, negative, quiet or loud. The heat map is able to explain the movement of sentiment or volume in a category-by-time matrix, which shows the density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation with a hierarchical structure, since a tree map can present buzz volume and sentiment in a visualized result for a certain period. This case study offers real-world business insights from market sensing which demonstrate to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
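The sentiment-polarity step of the analyzing phase can be illustrated with a minimal lexicon-based scorer. The study itself worked on Korean text with R packages such as KoNLP; the English toy lexicon and sample posts below are hypothetical, and real pipelines would apply morphological analysis before scoring.

```python
# Minimal lexicon-based sentiment scoring sketch (toy lexicon, toy posts).
POSITIVE = {"delicious", "love", "best", "tasty"}
NEGATIVE = {"salty", "bland", "worst", "expensive"}

def polarity(text):
    """Label a post by counting lexicon hits among its tokens."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

posts = [
    "this ramen is delicious and the best",
    "too salty and expensive",
    "arrived on time",
]

# Aggregate polarity counts -- the kind of tally that feeds volume and
# sentiment graphs or a valence tree map in the visualizing phase.
counts = {}
for p in posts:
    label = polarity(p)
    counts[label] = counts.get(label, 0) + 1
```

In practice the lexicon is domain-specific (the study built instant-noodle-specific language resources) and the counts are broken down by category and time period before visualization.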

Legal Issues Regarding the Civil Injunction Against the Drone Flight (토지 상공에서의 드론의 비행자유에 대한 제한과 법률적 쟁점)

  • Shin, Hong-Kyun
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.35 no.2
    • /
    • pp.75-111
    • /
    • 2020
  • The civilian drone world has evolved in recent years from one dominated by hobbyists to growing involvement by companies seeking to profit from unmanned flight in everything from infrastructure inspections to drone deliveries that are already subject to regulations. Drone flight, under the property right relation with the land owner, would be deemed legal on the condition that expeditious and innocent passage of the drone over the land is assured. The United Nations Convention on the Law of the Sea (UNCLOS) enshrines the concept of innocent passage through a coastal state's territorial sea. Passage is innocent so long as it is not prejudicial to the peace, good order or security of the coastal state. A vessel in innocent passage may traverse the coastal state's territorial sea continuously and expeditiously, not stopping or anchoring except in force majeure situations. However, the disturbances caused by drone flight may be removed where they are defined as infringements of the constitutional interest of personal rights. For example, aggressive infringement of privacy and personal freedom may be committed by drone more easily than ever before, and more easily than by other means. Cost-benefit analysis, however, has been recognized as an effective criterion regarding the removal of disturbances or an injunction decision. Applying that analysis, a civil action against such infringement may not find a suitable basis for making a good case, because the removal of such infringement through civil actions may result only in the deletion of the journal article. An injunction against drone flight before the information is taken would not be obtainable through civil action. Therefore, more detailed and meticulous regulations and criteria in the public law domain may be preferable to civil action at the present time. 
It may be suitable for legal stability and the drone industry to set up detailed public regulations restricting the free flight of drones capable of acquiring visual information amounting to an infringement of the right of personal information security.

Reproducibility of Adenosine Tc-99m sestaMIBI SPECT for the Diagnosis of Coronary Artery Disease (관동맥질환의 진단을 위한 아데노신 Tc-99m sestaMIBI SPECT의 재현성)

  • Lee, Duk-Young;Bae, Jin-Ho;Lee, Sang-Woo;Chun, Kyung-Ah;Yoo, Jeong-Soo;Ahn, Byeong-Cheol;Ha, Jeoung-Hee;Chae, Shung-Chull;Lee, Kyu-Bo;Lee, Jae-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.473-480
    • /
    • 2005
  • Purpose: Adenosine myocardial perfusion SPECT has proven to be useful in the detection of coronary artery disease, in following up the success of various therapeutic regimens, and in assessing the prognosis of coronary artery disease. The purpose of this study is to define the reproducibility of myocardial perfusion SPECT using adenosine stress testing between two consecutive Tc-99m sestaMIBI (MIBI) SPECT studies in the same subjects. Methods: Thirty patients suspected of coronary artery disease in stable condition underwent sequential Tc-99m MIBI SPECT studies using intravenous adenosine. The gamma camera, acquisition and processing protocols used for the two tests were identical, and no invasive procedures were performed between the two tests. The mean interval between the two tests was 4.1 days (range: 2-11 days). The left ventricular wall was divided into 18 segments, and the degree of myocardial tracer uptake was graded with a four-point scoring system by visual analysis. Images were interpreted by two independent nuclear medicine physicians, and a consensus was reached for the final decision if the segmental scores did not agree. Results: Hemodynamic responses to adenosine were not different between the two consecutive studies. There were no serious side effects requiring the infusion of adenosine to be stopped, and the side effect profiles were not different. When myocardial uptake was divided into normal and abnormal uptake, 481 of 540 segments were concordant (agreement rate 89%, kappa index 0.74). With the four-grade scoring system, exact agreement was 81.3% (439 of 540 segments, tau-b = 0.73). One- and two-grade differences were observed in 97 segments (18%) and 4 segments (0.7%), respectively, but a three-grade difference was not observed in any segment. Extent and severity scores were not different between the two studies. 
The extent and severity scores of the perfusion defect showed excellent positive correlation between the two tests (r = 0.982 and 0.965 for the percentage extent and severity scores, respectively; p<0.001). Conclusion: Hemodynamic responses and side effect profiles were not different between two consecutive adenosine stress tests in the same subjects. Adenosine Tc-99m sestaMIBI SPECT is highly reproducible, and could be used to assess temporal changes in myocardial perfusion in individual patients.
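The agreement statistics reported above (raw agreement rate and kappa index) are computed from a confusion matrix of the two readings. A short sketch follows; the 2x2 matrix of normal/abnormal calls is illustrative only, not the study's actual counts.

```python
def cohen_kappa(matrix):
    """Cohen's kappa from a square inter-rater confusion matrix."""
    n = sum(sum(row) for row in matrix)
    # Observed agreement: proportion of cases on the diagonal.
    po = sum(matrix[i][i] for i in range(len(matrix))) / n
    # Chance agreement: product of row and column marginals per category.
    pe = sum(
        sum(matrix[i]) * sum(row[i] for row in matrix)
        for i in range(len(matrix))
    ) / (n * n)
    return (po - pe) / (1 - pe)

# Illustrative matrix: rows = test 1 (normal/abnormal), cols = test 2.
readings = [[400, 30],
            [29, 81]]
total = sum(sum(row) for row in readings)
agreement = (readings[0][0] + readings[1][1]) / total   # raw agreement rate
kappa = cohen_kappa(readings)
```

Kappa discounts the agreement expected by chance, which is why it is lower than the raw agreement rate for the same table.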

Automatic Interpretation of Epileptogenic Zones in F-18-FDG Brain PET using Artificial Neural Network (인공신경회로망을 이용한 F-18-FDG 뇌 PET의 간질원인병소 자동해석)

  • 이재성;김석기;이명철;박광석;이동수
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.5
    • /
    • pp.455-468
    • /
    • 1998
  • For the objective interpretation of cerebral metabolic patterns in epilepsy patients, we developed a computer-aided classifier using an artificial neural network. We studied interictal brain FDG PET scans of 257 epilepsy patients who were diagnosed as normal (n=64), left TLE (n=112), or right TLE (n=81) by visual interpretation. Automatically segmented volumes of interest (VOIs) were used to reliably extract the features representing patterns of cerebral metabolism. All images were spatially normalized to the MNI standard PET template and smoothed with a 16 mm FWHM Gaussian kernel using SPM96. The mean count in the cerebral region was normalized. The VOIs for 34 cerebral regions were previously defined on the standard template, and 17 different counts of regions mirrored across the hemispheric midline were extracted from the spatially normalized images. A three-layer feed-forward error back-propagation neural network classifier with 7 input nodes and 3 output nodes was used. The network was trained to interpret metabolic patterns and produce diagnoses identical to those of the expert viewers. The performance of the neural network was optimized by testing with 5~40 nodes in the hidden layer. Randomly selected sets of 40 images from each group were used to train the network, and the remainders were used to test the learned network. The optimized neural network gave a maximum agreement rate of 80.3% with the expert viewers; it used 20 hidden nodes and was trained for 1508 epochs. The neural network also gave agreement rates of 75~80% with 10 or 30 nodes in the hidden layer. We conclude that the artificial neural network performed as well as the human experts and could be potentially useful as a clinical decision support tool for the localization of epileptogenic zones.
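The classifier described above is a standard feed-forward network trained by error back-propagation. A minimal pure-Python sketch on a toy two-input OR problem follows; the layer sizes, learning rate, and data are illustrative, not the 7-input, 3-output network of the study.

```python
import math
import random

# One-hidden-layer feed-forward network with sigmoid units, trained by
# error back-propagation (stochastic gradient descent on squared error).
random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    def __init__(self, n_in, n_hid):
        # One extra weight per unit serves as the bias.
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hid)]
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

    def forward(self, x):
        xb = x + [1.0]
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        self.o = sigmoid(sum(w * v for w, v in zip(self.w2, self.h + [1.0])))
        return self.o

    def train(self, data, epochs=3000, lr=0.5):
        for _ in range(epochs):
            for x, t in data:
                o = self.forward(x)
                delta_o = (o - t) * o * (1 - o)          # output-layer error
                hb = self.h + [1.0]
                delta_h = [delta_o * self.w2[j] * self.h[j] * (1 - self.h[j])
                           for j in range(len(self.h))]  # back-propagated error
                for j in range(len(self.w2)):
                    self.w2[j] -= lr * delta_o * hb[j]
                xb = x + [1.0]
                for j, dh in enumerate(delta_h):
                    for k in range(len(self.w1[j])):
                        self.w1[j][k] -= lr * dh * xb[k]

# Toy, linearly separable OR data (targets 0/1).
data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
net = MLP(2, 5)
net.train(data)
```

The study's network takes regional count features as input and emits three diagnostic classes; the gradient updates above are the same mechanism at a smaller scale.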


Effects of Additive materials on the Quality Characteristics of Dasik (다식의 제조시 첨가하는 부재료와 품질특성)

  • 정외숙;박금순
    • Korean journal of food and cookery science
    • /
    • v.18 no.2
    • /
    • pp.225-231
    • /
    • 2002
  • This study was carried out to investigate the possibility of improving the texture and flavor of Dasik by adding various types of sugar (syrup, honey) and flavor ingredients (omija, chija, coffee, green tea extract) to rice powder. Dasik samples were prepared, and their sensory quality and physical characteristics were compared. The moisture content of Dasik made with syrup was higher than that made with honey. Coffee Dasik with syrup was the highest (23.6) in moisture content. In sensory quality, the omija and coffee Dasik showed the highest scores in flavor quality (p<0.001). Omija Dasik with honey and coffee Dasik with syrup showed the highest scores in overall acceptability (6.4, 6.2). Green tea Dasik with syrup showed the highest value in the lightness (L) of color. Omija Dasik with syrup showed the highest value in the redness (a) of color. Chija Dasik was the highest in the yellowness (b) of color. In physical characteristics, hardness was negatively correlated with moistness, tenderness, and texture acceptability in sensory quality (p<0.001). Cohesiveness was positively correlated with overall acceptability in sensory quality (p<0.01). In the relation between texture characteristics and sensory quality, the higher the moisture content, the lower the hardness and springiness were, but the higher the brittleness and cohesiveness were (p<0.001). Overall, omija and coffee Dasik appeared to have desirable flavor, taste and overall acceptability.

Creation of Actual CCTV Surveillance Map Using Point Cloud Acquired by Mobile Mapping System (MMS 점군 데이터를 이용한 CCTV의 실질적 감시영역 추출)

  • Choi, Wonjun;Park, Soyeon;Choi, Yoonjo;Hong, Seunghwan;Kim, Namhoon;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_3
    • /
    • pp.1361-1371
    • /
    • 2021
  • Among smart city services, the crime and disaster prevention sector accounted for the highest share, 24%, in 2018. The most important platform for providing real-time situation information is CCTV (Closed-Circuit Television). Therefore, it is essential to map the actual CCTV surveillance coverage to maximize the usability of CCTV. However, the number of CCTV cameras installed in Korea exceeds one million units, including those operated by local governments, and manual identification of CCTV coverage is a time-consuming and inefficient process. This study proposed a method to efficiently construct CCTV's actual surveillance coverage and reduce the time required for the decision-maker to manage the situation. For this purpose, first, the exterior orientation parameters and focal lengths of the pre-installed CCTV cameras, which are difficult to access, were calculated using the point cloud data of an MMS (Mobile Mapping System), and the FOV (Field of View) was calculated accordingly. Second, using the FOV result calculated in the first step, CCTV's actual surveillance coverage was constructed with 1 m, 2 m, 3 m, 5 m, and 10 m grid intervals, considering the occluded regions caused by buildings. As a result of applying our approach to 5 CCTV images located in Uljin-gun, Gyeongsangbuk-do, the average re-projection error was about 9.31 pixels. The difference between the calculated CCTV location and the location obtained from the MMS was about 1.688 m on average. When the grid length was 3 m, the surveillance coverage calculated through our research matched the actual coverage obtained from visual inspection with a minimum of 70.21% and a maximum of 93.82%.
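The FOV and grid-coverage steps can be sketched in 2-D as follows. The sensor width, focal length, camera pose, and range limit are hypothetical values, and occlusion by buildings (which the study does handle) is ignored here.

```python
import math

# Horizontal FOV from the pinhole relation fov = 2*atan(w / 2f), plus a
# simple 2-D test for whether a grid point falls inside the viewing wedge.
def horizontal_fov(sensor_width_mm, focal_length_mm):
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))

def in_wedge(cam, heading_rad, fov_rad, max_range, pt):
    """True if pt lies within the camera's angular wedge and range."""
    dx, dy = pt[0] - cam[0], pt[1] - cam[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_range:
        return False
    ang = math.atan2(dy, dx)
    # Smallest signed angle between the point bearing and camera heading.
    diff = (ang - heading_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov_rad / 2.0

fov = horizontal_fov(6.4, 4.0)  # hypothetical sensor and focal length

# Enumerate visible cells on a 3 m grid, one of the intervals in the study.
visible = [
    (x, y)
    for x in range(0, 30, 3)
    for y in range(0, 30, 3)
    if in_wedge((0.0, 0.0), 0.0, fov, 25.0, (x, y))
]
```

The study additionally estimates the camera pose and focal length from MMS point clouds and subtracts cells occluded by building geometry before reporting coverage.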

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for steel plate faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, and it can cause errors of above 30% in judgment. Therefore, an accurate steel plate faults diagnosis system has been continuously required in the industry. To meet this need, this study proposed a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, the reason being that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: S-MTS establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate faults diagnosis system was developed in four main stages. In the first stage, after various reference groups and related variables are defined, data on steel plate faults are collected and used to establish the individual Mahalanobis space for each reference group and construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups. Then, the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the Signal-to-Noise (SN) ratio of the dynamic type are applied for variable optimization. 
Also, the overall SN ratio gain is derived from the SN ratios and SN ratio gains. If the derived overall SN ratio gain is negative, it means that the variable should be removed; a variable with a positive gain, however, may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed. Next, an experimental test is implemented to verify the ability of multi-class classification, and thus the accuracy of the classification is acquired. If the accuracy is acceptable, this diagnosis system can be used for future applications. This study also compared the accuracy of the proposed steel plate faults diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed steel plate faults diagnosis system based on S-MTS shows 90.79% classification accuracy. The accuracy of the proposed diagnosis system is 6-27% higher than that of MLPNN, LR, GS, GA and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has enough classification performance to be applied in the industry. In addition, the proposed system can reduce the number of measurement sensors installed in the field because of the variable optimization process. These results show that the proposed system not only performs well on steel plate faults diagnosis but can also reduce operation and maintenance costs. 
For future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve the accuracy based on those results.
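The core S-MTS idea, one Mahalanobis space per reference class with classification by the smallest distance, can be sketched as follows. The fault classes, the 2-D features, and the diagonal-covariance simplification are illustrative assumptions; the actual system uses the full covariance structure plus the orthogonal-array variable optimization described above.

```python
# Per-class Mahalanobis classification sketch: build one Mahalanobis
# space (mean, variance) per reference class, then assign a test sample
# to the class with the smallest distance.
def mean_var(samples):
    n = len(samples)
    mean = [sum(s[d] for s in samples) / n for d in range(len(samples[0]))]
    var = [sum((s[d] - mean[d]) ** 2 for s in samples) / n
           for d in range(len(mean))]
    return mean, var

def mahalanobis_sq(x, mean, var):
    # Squared Mahalanobis distance under a diagonal covariance assumption.
    return sum((x[d] - mean[d]) ** 2 / var[d] for d in range(len(x)))

def classify(x, spaces):
    """Pick the class whose Mahalanobis space is closest to x."""
    return min(spaces, key=lambda c: mahalanobis_sq(x, *spaces[c]))

reference = {  # hypothetical per-fault-type reference groups (2-D features)
    "scratch": [[1.0, 5.0], [1.2, 5.5], [0.8, 4.8]],
    "bump":    [[4.0, 1.0], [4.5, 1.2], [3.8, 0.9]],
}
spaces = {c: mean_var(s) for c, s in reference.items()}
label = classify([1.1, 5.2], spaces)
```

Establishing a separate space per class is exactly what lets the method go beyond the binary "normal vs. abnormal" decision of classical MTS.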