• Title/Summary/Keyword: actual-Based cost

675 search results, processing time 0.049 seconds

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers / v.33 no.6 / pp.265-274 / 2021
  • Outlier detection in ocean data has traditionally been performed using statistical and distance-based machine learning algorithms. Recently, AI-based methods have received a lot of attention, and so-called supervised learning methods, which require classification information for the data, are mainly used. Supervised learning requires a lot of time and cost because classification information (labels) must be manually assigned to all training data. In this study, an autoencoder based on unsupervised learning was applied for outlier detection to overcome this problem. Two experiments were designed: univariate learning, which used only SST among the observation data of Deokjeok Island, and multivariate learning, which used SST, air temperature, wind direction, wind speed, air pressure, and humidity. The data cover 25 years, from 1996 to 2020, and pre-processing that considers the characteristics of ocean data was applied. Outliers in real SST data were then detected with the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. Quantitative evaluation showed multivariate/univariate accuracies of about 96%/91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised autoencoder is expected to be useful in various settings, as it can reduce subjective classification errors and the cost and time required for data labeling.
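The reconstruction-error idea behind this abstract can be sketched in a few lines. This is an illustration only, not the paper's code: it uses a linear autoencoder (equivalent to PCA) as a stand-in for the trained network, synthetic multivariate data in place of the Deokjeok Island observations, and an arbitrary 99th-percentile threshold.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a linear autoencoder (equivalent to PCA) on the rows of X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T            # d x k encoder/decoder weights

def reconstruction_errors(X, mu, W):
    Z = (X - mu) @ W               # encode to k dimensions
    Xhat = Z @ W.T + mu            # decode back to d dimensions
    return np.linalg.norm(X - Xhat, axis=1)

rng = np.random.default_rng(0)
# Six correlated variables driven by two latent factors, standing in for
# SST, air temperature, wind, pressure, and humidity observations.
A = np.array([[1., 1., 0., 1., 0., 1.],
              [0., 1., 1., 0., 1., 1.]])
X = rng.normal(size=(500, 2)) @ A + 0.05 * rng.normal(size=(500, 6))
X[:5] += 4.0                       # artificially inserted errors

mu, W = fit_linear_autoencoder(X, k=2)
err = reconstruction_errors(X, mu, W)
threshold = np.percentile(err, 99)  # flag the largest reconstruction errors
outliers = np.where(err > threshold)[0]
print(outliers)
```

Points whose reconstruction error exceeds the threshold are flagged as outliers; a nonlinear autoencoder as in the paper would replace the SVD step with a trained encoder/decoder.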

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.141-156 / 2013
  • Research on the welfare services of local governments in Korea has largely focused on isolated issues such as the disabled, childcare, and the aging population (Kang, 2004; Jung et al., 2009). Lately, local officials have come to realize that they need more comprehensive welfare services for all residents, not just for the above-mentioned groups, yet studies taking the focused-group approach remain the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. Studies of local government expenditure have been on the rise over the last few decades since the restart of local autonomy, but they have limitations in data collection. Measurement of a local government's welfare effort (spending) has relied primarily on the expenditure or budget per individual set aside for welfare. This practice of using a monetary value per individual as a proxy for welfare effort rests on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as the dependent variable has limitations: since expenditure is greatly influenced by the total budget of a local government, relying on such monetary values may inflate or deflate the true welfare effort (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, i.e., salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011).
This paper used local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the social welfare spending of 230 local authorities in 2012, applying a multiple regression model to the pooled financial data from the system. As a true measure of a local government's welfare effort, the research model uses the welfare budget as a percentage of the local government's total budget, excluding central government subsidies and support for local welfare services, because central government support does not truly reflect a local government's own welfare effort. The dependent variable is the volume of welfare spending, and the independent variables fall into three categories: socio-demographic characteristics, the local economy, and the financial capacity of the local government. The local authorities were categorized into three groups: districts, cities, and suburban areas. A dummy variable served as the control variable (local political factor). The analysis demonstrated that the volume of welfare spending is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the financial self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ by the size of the local government. In the analysis of the determinants of self-funded welfare spending, significant effects were found for local government finance characteristics (degree of financial independence, financial independence rate, and share of the social welfare budget), for the regional economy (job opening-to-application ratio), and for population characteristics (infant rate).
The results mean that local authorities should adopt differentiated welfare strategies according to their conditions and circumstances. This paper is meaningful in that it identifies the significant factors influencing the welfare spending of local governments in Korea.
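The regression setup this abstract describes, a dependent spending variable, continuous socio-demographic and fiscal predictors, and a dummy control, can be sketched with ordinary least squares. The data below are synthetic with made-up coefficients; the variable names merely echo the paper's categories and are not its actual measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 230                                  # 230 local authorities
welfare_ratio = rng.uniform(0.1, 0.5, n) # welfare budget / total budget
infant_rate   = rng.uniform(0.02, 0.08, n)
fiscal_indep  = rng.uniform(0.1, 0.9, n) # financial self-reliance ratio
unemployment  = rng.uniform(0.01, 0.06, n)
is_city       = rng.integers(0, 2, n)    # dummy control (gov. type)

# Synthetic dependent variable generated from known coefficients + noise.
y = (2.0 * welfare_ratio + 30.0 * infant_rate - 0.5 * fiscal_indep
     + 5.0 * unemployment + 0.1 * is_city + rng.normal(0, 0.01, n))

# OLS via least squares: first column is the intercept.
X = np.column_stack([np.ones(n), welfare_ratio, infant_rate,
                     fiscal_indep, unemployment, is_city])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))
```

On real data one would add significance tests and run the model separately per local-government group, as the paper does, to see how the influential factors change.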

A Study on Efficiently Designing Customer Rewards Programs (고객 보상프로그램의 효율적 구성에 관한 연구)

  • Kim, Sang-Cheol
    • Journal of Distribution Science / v.10 no.1 / pp.5-10 / 2012
  • Currently, the rewards programs offered by many companies to strengthen customer relationships have been working quite well, and many rewards programs designed to stabilize revenue are recognized as effective. However, these programs are not significantly differentiated between companies, and no accurate conclusions can currently be drawn about their effects. Because of this, a company with a customer rewards program may not comprehend the true level of active participation. In this environment, some rewards programs inadvertently hinder business profitability as a side effect of attempting to increase customer loyalty. In fact, airline and oil companies pass the financial cost of their programs on to the customer and, as a result, have been criticized publicly; corporations with badly designed rewards programs tend to acquire a bad image. This study of stores' rewards programs focuses on program design. The main question is whether the financial value of a rewards program can create a competitive edge for companies despite its costs. Customers receiving financial rewards may be no more satisfied with a particular company or store than those who are not, in which case the program does not form a distinctive competitive advantage. We wanted to determine how much effect a valuable rewards program has on customers' decisions when they choose between competing companies. To evaluate this, we set the first hypothesis as: "based on the level of involvement of the customers, there is a difference between customers' preferences for rewards programs."
The results of Experiment 1 showed significant differences in preference for a financial compensation program between high-involvement and low-involvement groups, so Hypothesis 1 was partially supported. As for the second hypothesis, that "customers will have different preferences between a financial rewards program (SE) and a joint rewards program (JE)," the analysis showed that the preference for JE was significantly higher than that for other programs. In addition, Experiment 2 revealed a significant difference in consumers' preferences between SE and JE. The purpose of these experiments was to inform the design of rewards programs by learning how to enhance service information distribution and strengthen customer relationships, and the results should be of value for future service-related endeavors and academic research. The research is significant because the results can inform rewards program design; however, it has the following limitations. First, this study was performed as an experiment, and all experiments have limitations. Second, although there was an individual evaluation and a joint evaluation, setting proper evaluation criteria was difficult: in this study, 1,000 Korean won (KRW) had a value of 2 points in the individual evaluation and 1 point in the joint evaluation, and there may have been better ways to differentiate the evaluations. In addition, since there was no funding, the experiments were administered orally. Third, the subjects who participated in this experiment were students. Conducting the study through experimentation was unavoidable, and future research should be conducted using an actual program with the target customers.


Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea: Journal of the Korean Society of Oceanography / v.27 no.3 / pp.127-143 / 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experiments on commercial cloud systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting appropriate systems, as this can help minimize execution time and the resources consumed. The effect of cache memory is large in the processing structure of an ocean numerical model, which handles input/output of data in multidimensional array structures, and network speed is important because of the communication pattern, in which large amounts of data move between nodes. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for migrating other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes.
We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory; memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small ones. Our analysis results should serve as a reference for constructing the best cloud computing system to minimize the time and cost of numerical ocean modeling.
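The STREAM benchmark mentioned in the abstract measures sustainable memory bandwidth with simple kernels such as the triad (a = b + s*c). The sketch below is a rough, Python-level analogue of that idea, not the actual STREAM code (which is C/Fortran); the array size, repeat count, and the two-pass NumPy implementation (which moves more memory than a fused C triad would) are all simplifications.

```python
import time
import numpy as np

def triad_bandwidth(n, repeats=10):
    """Rough STREAM-triad-style bandwidth estimate: a = b + 3*c."""
    b = np.ones(n)
    c = np.ones(n)
    a = np.empty(n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, 3.0, out=a)   # a = 3*c
        a += b                       # a += b (two passes, unlike fused C code)
        best = min(best, time.perf_counter() - t0)
    # Nominal triad traffic: read b and c, write a = 3 arrays of 8-byte floats.
    return 3 * n * 8 / best / 1e9    # GB/s

bw = triad_bandwidth(10_000_000)
print(f"approximate triad bandwidth: {bw:.1f} GB/s")
```

Running such a probe on candidate cloud instance types gives a quick relative ranking of memory subsystems, which the abstract identifies as a dominant factor for ROMS performance.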

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.471-480 / 2023
  • Construction order volume in South Korea grew significantly, from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, particularly in the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project award is limited, and it is extremely challenging to review all the risk terms in an ITB document given manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but data-related problems, such as the limits of labeled data and class imbalance, restricted practical use. Therefore, this study develops an AI model that categorizes contract terms in detail based on the FIDIC Yellow 2017 (Federation Internationale Des Ingenieurs-Conseils) standard contract terms, rather than defining and classifying risk terms as in previous research. A multi-class text classification capability is necessary because the contract terms that need detailed review vary with the scale and type of project. To enhance the performance of the classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate the model's performance. As a result, an ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted average F1-score of 76% in the classification of 57 contract terms.
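A common way to ensemble two pre-trained language models, as in the ITB-ELECTRA + Legal-BERT combination above, is soft voting over their class probabilities. The sketch below shows only that combination step with invented probabilities over four toy classes (the paper classifies 57 contract terms); it is not the paper's code, and the equal weighting is an assumption.

```python
import numpy as np

def soft_vote(prob_a, prob_b, w=0.5):
    """Ensemble two models' class-probability outputs by weighted averaging."""
    return w * prob_a + (1 - w) * prob_b

# Toy per-document probabilities over 4 classes, standing in for the
# softmax outputs of ITB-ELECTRA and Legal-BERT.
p_electra = np.array([[0.6, 0.2, 0.1, 0.1],
                      [0.3, 0.4, 0.2, 0.1]])
p_bert    = np.array([[0.5, 0.3, 0.1, 0.1],
                      [0.1, 0.6, 0.2, 0.1]])

ensemble = soft_vote(p_electra, p_bert)
pred = ensemble.argmax(axis=1)   # class index chosen per document
print(pred)
```

In practice the weight `w` would be tuned on a validation set, and the predictions scored with a weighted-average F1 across the 57 classes as reported in the abstract.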

A Store Recommendation Procedure in Ubiquitous Market for User Privacy (U-마켓에서의 사용자 정보보호를 위한 매장 추천방법)

  • Kim, Jae-Kyeong;Chae, Kyung-Hee;Gu, Ja-Chul
    • Asia Pacific Journal of Information Systems / v.18 no.3 / pp.123-145 / 2008
  • Recently, as information and communication technology develops, discussion of the ubiquitous environment is occurring from diverse perspectives. A ubiquitous environment is one that can transfer data through networks regardless of physical space, virtual space, time, or location. Realizing such an environment requires Pervasive Sensing technology, which enables the recognition of users' data without a border between physical and virtual space. In addition, the latest technologies, such as Context-Awareness, are necessary to construct the context around the user by sharing data accessed through Pervasive Sensing, together with linkage technology that prevents information loss across wired and wireless networks and databases. Pervasive Sensing in particular is regarded as an essential technology for user-oriented services, because it recognizes users' needs even before they ask. The ubiquitous environment built on these technologies has many characteristics: ubiquity, abundance of data, mutuality, high information density, individualization, and customization. Among them, information density determines the accessible amount and quality of information, which is stored in bulk with assured quality through Pervasive Sensing. Using this, companies can provide personalized contents (or information) to a target customer. Above all, a growing number of studies address recommender systems that provide what customers need even when the customers do not explicitly ask. Recommender systems are well known for their positive effects in a commerce environment: they enlarge selling opportunities and reduce customers' search costs by finding and providing information matched to customers' traits and preferences in advance.
Recommender systems have proved their usefulness through various methodologies and experiments across many fields since the mid-1990s. Most research on recommender systems to date takes products or information in internet or mobile contexts as its object, but there is not enough research on recommending an adequate store to customers in a ubiquitous environment. In a ubiquitous environment it is possible to track customers' behaviors, as in an online market space, even when customers are purchasing in an offline marketplace. Unlike the existing internet space, interest is growing in stores that provide information according to customers' traffic lines: the same product can be purchased in several different stores, and the preferred store can differ across customers according to personal preferences such as the traffic line between stores, location, atmosphere, quality, and price. Krulwich (1997) developed Lifestyle Finder, which recommends a product and a store using demographic and purchasing information generated in internet commerce. Fano (1998) created Shopper's Eye, an information-providing system that shows information about the store closest to the customer's present location when the customer sends a to-buy list. Sadeh (2003) developed MyCampus, which recommends appropriate information and a store in accordance with the schedule saved on a customer's mobile device. Keegan and O'Hare (2004) proposed EasiShop, which provides suitable store information, including price, after-sales service, and accessibility, after analyzing the customer's to-buy list and current location.
However, Krulwich (1997) does not reflect the characteristics of physical space, being based on the online commerce context; Keegan and O'Hare (2004) only provide information about stores related to a product; and Fano (1998) does not fully consider the relationship between store preference and the store itself. The most recent of these, Sadeh (2003), experimented on a campus with a recommender system that reflects situation and preference information as well as the characteristics of physical space. Yet there is a potential problem, since these systems rely on customers' location and preference information, which raises invasion-of-privacy concerns. The invasion of privacy and misuse of personal information in a ubiquitous environment are a primary point of controversy, according to research by Al-Muhtadi (2002), Beresford and Stajano (2003), and Ren (2006); additionally, individuals want to remain anonymous to protect their personal information, as noted by Srivastava (2000). Therefore, in this paper we suggest a methodology for recommending stores in a U-market, on the basis of a ubiquitous environment, without using personal information, in order to protect individual information and privacy. The main idea behind the suggested methodology is the Feature Matrices model (FM model; Shahabi and Banaei-Kashani, 2003), which uses clusters of customers' similar transaction data and resembles Collaborative Filtering. Unlike Collaborative Filtering, however, this methodology overcomes the problems of personal information and privacy because it does not know exactly who the customer is. The methodology is compared with a single-trait model (vector model) such as visitor logs, examining the actual improvement in recommendation when context information is used. Since it is not easy to find real U-market data, we experimented with factual data from a real department store, augmented with context information.
The recommendation procedure for the U-market proposed in this paper is divided into four major phases. The first phase collects and preprocesses data for the analysis of customers' shopping patterns; the traits of shopping patterns are expressed as feature matrices of N dimensions. In the second phase, similar shopping patterns are grouped into clusters and the representative pattern of each cluster is derived; the distance between shopping patterns is calculated by the Projected Pure Euclidean Distance (Shahabi and Banaei-Kashani, 2003). The third phase finds the representative pattern most similar to a target customer while the customer's shopping information is traced and saved dynamically. Fourth, the next store is recommended based on the physical distance between the stores of the representative pattern and the present location of the target customer. We evaluated the accuracy of the recommendation method on factual data from a department store: because of the technological difficulty of real-time tracking, we extracted purchasing-related information and added context information to each transaction. As a result, recommendation based on the FM model, which applies purchasing and context information, is more stable and accurate than that of the vector model, and the recommendations become more precise as shopping information accumulates. Realistically, because of the limits of realizing a ubiquitous environment, we could not reflect all kinds of context, but more explicit analysis is expected to become attainable once a practical system is implemented.
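The cluster-then-match core of the four-phase procedure above can be sketched compactly. This is an illustrative simplification, not the paper's system: the patterns are toy vectors rather than N-dimensional feature matrices, the clusters are given rather than learned, plain Euclidean distance stands in for the Projected Pure Euclidean Distance, and the final step recommends the matched cluster's favorite store instead of using physical distance between stores.

```python
import numpy as np

def representatives(patterns, labels, k):
    """Mean pattern per cluster (the cluster's representative pattern)."""
    return np.array([patterns[labels == i].mean(axis=0) for i in range(k)])

# Toy shopping-pattern vectors, e.g. visit counts per store.
patterns = np.array([[5, 0, 1],
                     [4, 1, 0],
                     [0, 5, 2],
                     [1, 4, 3]], dtype=float)
labels = np.array([0, 0, 1, 1])        # two clusters of similar patterns
reps = representatives(patterns, labels, 2)

# Phase 3: match the target customer's partial trace to the nearest
# representative pattern -- no identity information is needed.
target = np.array([4.0, 0.5, 1.0])
nearest = int(np.argmin(np.linalg.norm(reps - target, axis=1)))

# Phase 4 (simplified): recommend the store the matched cluster favors most.
recommended_store = int(np.argmax(reps[nearest]))
print(nearest, recommended_store)
```

Because matching is done against anonymous cluster representatives rather than individual profiles, the procedure preserves the privacy property the paper emphasizes.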

Comparative Analysis of the Use of Leisure Resources and Leisure Activity According to the Execution of Forty-hour-a-week Working System: Based on 2012 Survey on National Leisure Activity (근로자의 주 40시간 근무제 시행 유무에 따른 여가자원 이용 및 여가활동 비교분석: 2012년 국민여가활동 조사 결과를 기초로)

  • Bark, Min-Jeng;Yoon, So-Young
    • Journal of Family Resource Management and Policy Review / v.17 no.4 / pp.19-37 / 2013
  • From the perspective of labor welfare, the forty-hour-a-week working system (FWS) has been an important goal throughout the world, and advanced countries implemented it long ago. However, opinions about FWS differ: some emphasize the improvement in quality of life, while others point out that the measure increases wage costs and has limited effectiveness. Discussions about the success of FWS have thus emerged from diverse perspectives. One thing that should be made clear before debating FWS is that reducing laborers' working hours is already a global trend, and Korea also intends to extend it. Therefore, in order to maximize the benefits of the system and to identify measures for solving problems related to it, it is necessary to examine laborers' actual use of leisure resources and whether that use has increased or decreased as a result of FWS, and to look at the differences in workers' leisure activity with and without the system. To evaluate and diagnose the policy effect of FWS on laborers' leisure satisfaction and quality of life, this study examines differences in leisure expenses, leisure hours, use of and demand for leisure resources such as leisure space, and types of leisure activity, according to whether FWS is in effect. The research is based on the "2012 Survey on National Leisure Activity" conducted by the Ministry of Culture, Sports and Tourism. In addition, by analyzing differences in leisure satisfaction and happiness levels, the study intends to confirm the necessity of executing FWS and ensuring that the system is in use. The results can be briefly summarized as follows. First, among the general findings, a significant result was found for the execution of FWS according to income level.
The finding that the execution of FWS works differently according to working environment or quality of life reinforces the common notion that working environment and quality of life may differ with social characteristics. Weekday leisure hours did not show a statistically significant difference, but on weekends, laborers under FWS had 30 minutes more leisure time than those who were not. Furthermore, laborers under FWS reported higher monthly average leisure expenses and expected leisure expenses. In terms of leisure activity, those working at companies executing FWS engaged in culture and art activities more frequently, while those at companies without FWS reported more hobbies, amusement activities, rest, and other activities. In terms of vacations, those working at companies with FWS had more vacation time, with vacations that were, on average, 1.64 days longer. Regarding leisure-life satisfaction and happiness, those working at companies with FWS indicated higher satisfaction and greater happiness than those at companies without it. The findings above are the preliminary results of this paper; the remainder of the research provides more detailed analysis results and corresponding suggestions.


A Study on Smart Accuracy Control System based on Augmented Reality and Portable Measurement Device for Shipbuilding (조선소 블록 정도관리를 위한 경량화 측정 장비 및 증강현실 기반의 스마트 정도관리 시스템 개발)

  • Nam, Byeong-Wook;Lee, Kyung-Ho;Lee, Won-Hyuk;Lee, Jae-Duck;Hwang, Ho-Jin
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.1 / pp.65-73 / 2019
  • To increase the production efficiency of a ship and shorten the production cycle, it is important to evaluate the accuracy of ship components efficiently during the construction cycle. Accuracy control of blocks is important for shortening the shipbuilding process, reducing cost, and improving the accuracy of the ship. Some accuracy control systems have been developed and used, mainly in large shipyards, but in many cases measurement and management still rely on conventional instruments such as tape measures and beams, or on optical equipment. To perform accuracy control, these tools must be combined with equipment for recording measurement data and with paper drawings that indicate the measurement positions, and the measured results are entered into the accuracy control system through manual input or a recording device. In this process, the measurement result is influenced by the work environment and the skill of the worker, and on the management side there are human errors such as missing measurement records and poorly maintained management sheets, with corresponding losses of time and cost. The purpose of this study is to improve the working environment of the existing accuracy control process by using augmented reality technology to visualize measurement information on the actual block, and to develop a smart, augmented-reality-based management system that can effectively manage accuracy control data through interworking with measurement equipment. We confirmed the applicability of the proposed system to accuracy control through a prototype implementation.

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is so low that it can cause judgment errors above 30%. An accurate steel plate fault diagnosis system has therefore been continuously demanded by the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis-Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there, which stems from only one Mahalanobis space being established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization: the overall SN ratio gain is derived from the SN ratio, a variable with a negative overall gain should be removed, and a variable with a positive gain is worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test verifies the multi-class classification ability and yields the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multi-Layer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), TreeBagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California, Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based diagnosis system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has enough classification performance to be applied in industry. In addition, the variable optimization process allows the proposed system to reduce the number of measurement sensors installed in the field, so the system can both diagnose steel plate faults well and reduce operation and maintenance costs.
In future work, it will be applied in the field to validate its actual effectiveness, and we plan to improve the accuracy based on the results.
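The core S-MTS idea described above, one Mahalanobis space per class and classification by the smallest distance, can be sketched as follows. This is a bare illustration on synthetic two-class data, not the paper's system: it omits the orthogonal-array/SN-ratio variable optimization and the measurement-scale construction stages.

```python
import numpy as np

class SimpleSMTS:
    """Sketch of S-MTS: build one Mahalanobis space per class, then
    classify each sample by its minimum Mahalanobis distance."""

    def fit(self, X, y):
        self.stats = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Regularize the covariance so it is always invertible.
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.stats[int(c)] = (mu, np.linalg.inv(cov))
        return self

    def predict(self, X):
        classes = sorted(self.stats)
        dists = []
        for c in classes:
            mu, icov = self.stats[c]
            diff = X - mu
            # Squared Mahalanobis distance of each row to class c.
            dists.append(np.einsum("ij,jk,ik->i", diff, icov, diff))
        return np.array(classes)[np.argmin(np.array(dists), axis=0)]

rng = np.random.default_rng(2)
# Two well-separated synthetic fault classes in 3 variables.
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)

model = SimpleSMTS().fit(X, y)
acc = (model.predict(X) == y).mean()
print(acc)
```

The "simultaneous" aspect is visible in `predict`: distances to every class space are computed and compared at the same time, rather than thresholding against a single reference space as in classic MTS.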

Variation Analysis of Distance and Exposure Dose in Radiation Control Area and Monitoring Area according to the Thickness of Radiation Protection Tool Using the Calculation Model: Non-Destructive Test Field (계산 모델을 활용한 방사선방어용 도구 두께에 따른 방사선관리구역 및 감시구역의 거리 및 피폭선량 변화 분석 : 방사선투과검사 분야 중심으로)

  • Gwon, Da Yeong;Park, Chan-hee;Kim, Hye Jin;Kim, Yongmin
    • Journal of the Korean Society of Radiology / v.14 no.3 / pp.279-287 / 2020
  • Recently, interest in radiation protection has been increasing because of accidents involving exposure dose, and the Nuclear Safety Act requires shields to be installed so that the dose limit is not exceeded. In particular, when workers conduct non-destructive testing (NDT) without a fixed shielding structure, access to the workplace must be monitored based on a fixed dose-rate boundary. However, when applying for permits for NDT work in such environments, the factors to consider in estimating the boundary distance and exposure dose are not legally specified. We therefore developed an Excel model that automatically calculates the distance, exposure dose, and cost from the input factors, and applied assumed data to this model. As a result of the application, the rate of change of the boundary distance was low when the thicknesses of the lead blanket and collimator were above 25 mm and 21.5 mm, respectively. However, we did not consider scattering or the build-up factor, and we assumed the shapes of the lead blanket and collimator. If these limitations are addressed and actual data are used, we expect that a database on boundary distance and exposure dose can be built.
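The distance calculation underlying such a model is the point-source inverse-square law with exponential shielding attenuation (narrow-beam, no build-up, matching the simplification the abstract notes). The sketch below is illustrative only: the gamma constant, attenuation coefficient, and boundary dose-rate limit are assumed round numbers, not values from the paper.

```python
import math

def boundary_distance(gamma, activity_gbq, limit_msv_h, mu_per_cm, t_cm):
    """Distance (m) at which the dose rate from a shielded point source
    falls to `limit_msv_h`: H(d) = gamma * A * exp(-mu * t) / d^2."""
    attenuated = gamma * activity_gbq * math.exp(-mu_per_cm * t_cm)
    return math.sqrt(attenuated / limit_msv_h)

# Assumed illustrative values: gamma constant 0.13 mSv*m^2/(GBq*h)
# (roughly Ir-192-like), a 1,000 GBq source, a 0.025 mSv/h boundary
# limit, and mu = 1.4 /cm for lead at Ir-192 energies.
d_bare     = boundary_distance(0.13, 1000, 0.025, 1.4, 0.0)   # no shield
d_shielded = boundary_distance(0.13, 1000, 0.025, 1.4, 2.5)   # 25 mm lead
print(round(d_bare, 1), round(d_shielded, 1))
```

Sweeping the shield thickness `t_cm` in such a function reproduces the kind of analysis the abstract describes: the boundary distance shrinks rapidly at first and then changes slowly beyond a certain thickness.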