• Title/Summary/Keyword: Information Application


Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.1-17
    • /
    • 2015
  • In this paper, we report on a prediction model of accidents in the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise its soldiers. Despite their efforts, many commanders have failed to prevent accidents by their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents. However, accidents are hard to prevent, so a proper method must be found. Our motivation for this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings; this requires considerable time and effort, and the results differ significantly depending on the capabilities of the commanders. In this paper, we seek to predict accidents from objective data that can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper applies data mining to internal and external (SNS) data to identify soldiers' interests and predict accidents. We propose a combination of topic analysis and the decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the SNS messages of enlisted men. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. Analyzing the SNS data requires tools such as text mining and topic analysis; we used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding soldiers' interests is composed of three main phases: collecting, topic analysis, and converting the topic analysis results into points for use as independent variables. In the first phase, we collect enlisted men's SNS data by commander's ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from it. For simplicity, 5 topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, we quantify these 5 topics as personal points. After quantifying the topics, we include these results among the independent variables, which also comprise 15 internal data sets. Then, we build two decision trees. The first tree uses their internal data only; the second tree uses their external data (SNS) as well as their internal data. We then compare the misclassification rates reported by SAS E-Miner. The first model's misclassification rate is 12.1%, whereas the second model's is 7.8%, a gap of 4.3 percentage points; the second model thus predicts accidents with an accuracy of approximately 92%. Finally, we test whether the difference between the two models is meaningful using the McNemar test; the difference is statistically significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small amount of enlisted men's data.
Second, various independent variables in the decision tree model are used as categorical rather than continuous variables, so some information is lost. Despite extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military. This study is expected to contribute to the prevention of accidents in the military through scientific analysis of enlisted men and their proper management.
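As a rough illustration of the two-model comparison described above, the following sketch substitutes scikit-learn and statsmodels for SAS Enterprise Miner: one decision tree on internal records only, one on internal records joined with SNS topic points, compared by misclassification rate and the McNemar test. All file and column names are hypothetical, as is the tree depth.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical inputs: 15 internal (cumulative-record) variables plus a
# binary accident label, and 5 SNS topic scores per soldier.
internal = pd.read_csv("internal_records.csv")   # hypothetical file name
topics = pd.read_csv("sns_topic_points.csv")     # vacation, friends, stress, training, sports
y = internal.pop("accident")

X1 = internal                  # model 1: internal data only
X2 = internal.join(topics)     # model 2: internal data + SNS topic points

idx_train, idx_test = train_test_split(internal.index, test_size=0.3,
                                       random_state=0, stratify=y)
preds = {}
for name, X in [("internal only", X1), ("internal + SNS", X2)]:
    tree = DecisionTreeClassifier(max_depth=5, random_state=0)
    tree.fit(X.loc[idx_train], y.loc[idx_train])
    preds[name] = tree.predict(X.loc[idx_test])
    print(name, "misclassification:",
          round((preds[name] != y.loc[idx_test]).mean(), 3))

# McNemar test on the two models' paired correctness over the same test set.
ok1 = preds["internal only"] == y.loc[idx_test].to_numpy()
ok2 = preds["internal + SNS"] == y.loc[idx_test].to_numpy()
table = [[np.sum(ok1 & ok2), np.sum(ok1 & ~ok2)],
         [np.sum(~ok1 & ok2), np.sum(~ok1 & ~ok2)]]
print(mcnemar(table, exact=False, correction=True))
```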

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined from a subset of user interaction behavior: whether the user is accompanying an acquaintance at close distance, and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data was generated with the sliding window method. The sequence data then becomes the input for the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consists of 3 convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are used for classification by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model is trained using the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. We apply dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decreases exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using this data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models, tailored to the training data, to transfer to evaluation data that follows a different distribution. We expect to obtain a model capable of robust recognition performance against changes in data not considered at the model learning stage.
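A minimal Keras sketch of the architecture described above follows. The abstract fixes the two LSTM layers of 128 cells, the absence of pooling, dropout on the LSTM inputs, N(0, 0.1) weight initialization, Adam with an initial learning rate of 0.001 decayed by 0.99 per epoch, and a mini-batch size of 128; the Conv1D filter counts, kernel size, window length, and dropout rate are assumptions not stated in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, CHANNELS = 128, 9   # sliding-window length x (accel + mag + gyro, each x/y/z)
init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.1)

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # Three convolutional layers, no pooling, to preserve temporal resolution.
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Dropout(0.5),                      # dropout applied to the LSTM inputs
    layers.LSTM(128, return_sequences=True, kernel_initializer=init),
    layers.LSTM(128, kernel_initializer=init),
    layers.Dense(2, activation="softmax", kernel_initializer=init),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Exponential per-epoch decay of the learning rate by a factor of 0.99.
decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-3 * 0.99 ** epoch)
# model.fit(X_train, y_train, batch_size=128, epochs=50, callbacks=[decay])
```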

Application of Westgard Multi-Rules for Improving Nuclear Medicine Blood Test Quality Control (핵의학 검체검사 정도관리의 개선을 위한 Westgard Multi-Rules의 적용)

  • Jung, Heung-Soo;Bae, Jin-Soo;Shin, Yong-Hwan;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.115-118
    • /
    • 2012
  • Purpose: The Levey-Jennings chart controls measurement values that deviate from the tolerance value (mean ±2SD or ±3SD). The upgraded Westgard Multi-Rules, on the other hand, are actively recommended as a more efficient, specialized form of internal quality control for hospital certification. Applying the Westgard Multi-Rules to quality control requires a credible quality control substance and target value. However, as specimen examinations commonly use the quality control substances provided within the test kit, calculating a target value is difficult due to frequent changes in concentration values and the insufficient credibility of the quality control substance. This study attempts to improve the professionalism and credibility of quality control by applying the Westgard Multi-Rules and calculating a credible target value using a commercialized quality control substance. Materials and Methods: This study used Immunoassay Plus Control Levels 1, 2, and 3 of Company B as the quality control substance for Total T3, the thyroid test performed at the relevant hospital. The target value was established as the mean of 295 cases collected over 1 month, excluding values deviating beyond ±2SD. The hospital's quality control calculation program was used to enter the target values. The 12s, 22s, 13s, 2 of 32s, R4s, 41s, 10x̄, and 7T Westgard Multi-Rules were applied to the Total T3 test, which was conducted 194 times over 20 days in August. Based on the applied rules, this study classified the data into random errors and systematic errors for analysis. Results: For quality control substances 1, 2, and 3, the target values of Total T3 were established as 84.2 ng/dL, 156.7 ng/dL, and 242.4 ng/dL, with standard deviations of 11.22 ng/dL, 14.52 ng/dL, and 14.52 ng/dL, respectively. Error type analysis after applying the Westgard Multi-Rules to the established target values gave the following results: for random error, 12s was flagged 48 times, 13s 13 times, and R4s 6 times; for systematic error, 22s was flagged 10 times, 41s 11 times, 2 of 32s 17 times, and 10x̄ 10 times, while 7T was never triggered. For uncontrollable random error types, the entire experimental process was rechecked and greater emphasis was placed on re-testing. For controllable systematic error types, this study searched for the cause of the error, recorded it in the action form, and reported the information to the internal quality control committee when necessary. Conclusions: This study applied the Westgard Multi-Rules using a commercialized substance as the quality control substance and establishing target values. As a result, precise analysis of random and systematic errors was achieved through the 12s, 22s, 13s, 2 of 32s, R4s, 41s, 10x̄, and 7T rules. Furthermore, ideal quality control was achieved through analysis of all data within the range of ±3SD. The quality control method based on the systematic application of the Westgard Multi-Rules is therefore more effective than the Levey-Jennings chart alone and can maximize error detection.
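The multi-rule checks above can be expressed compactly in code. The following sketch implements a subset of the rules (1-2s, 1-3s, 2-2s, R-4s, 4-1s, 10x̄), simplified to apply within a single control level rather than across levels, given the target value and SD established from the commercial control material; the function name and example measurements are made up.

```python
import numpy as np

def westgard_flags(values, mean, sd):
    """Check a QC measurement series against a subset of the Westgard multi-rules."""
    z = (np.asarray(values, dtype=float) - mean) / sd
    flags = []
    if abs(z[-1]) > 2:
        flags.append("1-2s (warning)")
    if abs(z[-1]) > 3:
        flags.append("1-3s (random error)")
    if len(z) >= 2 and (all(z[-2:] > 2) or all(z[-2:] < -2)):
        flags.append("2-2s (systematic error)")
    if len(z) >= 2 and z[-2:].max() - z[-2:].min() > 4:
        flags.append("R-4s (random error)")
    if len(z) >= 4 and (all(z[-4:] > 1) or all(z[-4:] < -1)):
        flags.append("4-1s (systematic error)")
    if len(z) >= 10 and (all(z[-10:] > 0) or all(z[-10:] < 0)):
        flags.append("10-xbar (systematic error)")
    return flags

# Example with the paper's control level 1 of Total T3
# (target 84.2 ng/dL, SD 11.22 ng/dL); the measurements are invented.
print(westgard_flags([88.1, 95.3, 107.1, 108.0], mean=84.2, sd=11.22))
# -> ['1-2s (warning)', '2-2s (systematic error)']
```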


The Current Status of the Warsaw Convention and Subsequent Protocols in Leading Asian Countries (아시아 주요국가(主要國家)들에 있어서의 바르샤바 체제(體制)의 적용실태(適用實態)와 전망(展望))

  • Lee, Tae-Hee
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.1
    • /
    • pp.147-162
    • /
    • 1989
  • The current status of the application and interpretation of the Warsaw Convention and its subsequent Protocols in Asian countries is in its fledgling stages compared to the developed countries of Europe and North America, and there is thus little published information about the various Asian governments' treatment and courts' views of the Warsaw System. Due to that limitation, the accent of this paper will be on Korea and Japan. As one will be aware, the so-called 'Warsaw System' is made up of the Warsaw Convention of 1929, the Hague Protocol of 1955, the Guadalajara Convention of 1961, the Guatemala City Protocol of 1971 and the Montreal Additional Protocols Nos. 1, 2, 3 and 4 of 1975. Among these instruments, most of the countries in Asia are parties to both the Warsaw Convention and the Hague Protocol. However, the Republic of Korea and Mongolia are parties only to the Hague Protocol, while Burma, Indonesia and Sri Lanka are parties only to the Warsaw Convention. Thailand and Taiwan are parties to neither the Convention nor the Protocol. Among Asian states, Indonesia, the Philippines and Pakistan are also parties to the Guadalajara Convention, but no country in Asia has signed the Guatemala City Protocol of 1971 or the Montreal Additional Protocols, which have not yet been put into force. The People's Republic of China has declared that the Warsaw Convention shall apply to the entire Chinese territory, including Taiwan. The application of the Warsaw Convention to one-way air carriage between a state which is a party only to the Warsaw Convention and a state which is a party only to the Hague Protocol is of particular importance to Korea, as Korea is a signatory only to the Hague Protocol but is involved in a great deal of air transportation to and from the United States, which in turn is a party only to the Warsaw Convention. The opinion of the Supreme Court of Korea appears to be that parties to the Warsaw Convention were intended to be parties to the Hague Protocol, whether they actually signed it or not. The effect of this decision is that in Korea, the United States and Korea will be considered by the courts to be in a treaty relationship, though neither State is a signatory to the same instrument as the other. The first wrongful death claim in Korea related to international carriage by air under the Convention was made in the Hyun-Mo Bang, et al. v. Korean Air Lines Co., Ltd. case. In this case, the plaintiffs claimed damages based upon breach of contract as well as upon tort under the Korean Civil Code. The issue in the case was whether the time limitation provisions of the Convention should be applicable to a claim based in tort as well as to a claim based in contract. The Appellate Court ruled on 29 August 1983 that 'however founded' in Article 24(1) of the Convention should be construed to mean that the Convention applies to the claim regardless of whether the cause of action was based in tort or in breach of contract, and that the plaintiffs' rights to damages had therefore been extinguished by the time limitation set forth in Article 29(1) of the Convention. The difficult and often debated question of what exactly is meant by the words 'such default equivalent to wilful misconduct' in Article 25(1) of the Warsaw Convention has also been litigated. The Supreme Court of Japan dealt with this issue in the Suzuki Shinjuten Co. v. Northwest Airlines Inc. case.
The Supreme Court upheld the Appellate Court's ruling, deciding that 'such default equivalent to wilful misconduct' under Article 25(1) of the Convention fell within the meaning of 'gross negligence' under the Japanese Commercial Code. The issue of the conversion of the 'franc' into national currencies, as provided in Article 22 of the Warsaw Convention as amended by the Hague Protocol, has also been raised in a Korean court case now before the District Court of Seoul. In this case, the plaintiff argues that the gold franc equivalent must be converted into Korean Won in accordance with the free market price of gold in Korea, as Korea has not enacted any law, order or regulation prescribing the proper method of calculating the equivalent in its national currency. It is unclear whether the court will accept this position, or instead the last official price of gold of the United States (as in the famous Franklin Mint case), the Special Drawing Right (SDR), or the current French franc; Korean Air Lines has argued in favor of the last official price of gold of the United States, by which the airline converted such francs into US Dollars in its General Conditions of Carriage. It is my understanding that in India an appellate court adopted the free market price valuation, and there is a report as well saying that if a lawsuit concerning this issue were brought in Pakistan, the free market price of gold would be applied there too. Speaking specifically about the future of the Warsaw System in Asia, though I have been informed that Thailand is actively considering acceding to the Warsaw Convention, the attitudes of most Asian governments towards the Warsaw System are still not well known. There is little evidence that Asian countries are moving to deal concretely with the conversion of the franc into their own local currencies. Nor can it be said that they are moving to adhere to the Montreal Additional Protocols Nos. 3 & 4, which attempt to solve many of the current problems with the Warsaw System by adopting the SDR as the unit of currency, by establishing the carrier's absolute liability and an unbreakable limit, by increasing the carrier's passenger limit of liability to SDR 100,000, and by permitting the domestic introduction of supplemental compensation. To summarize my own sentiments regarding the future: given that Asian airlines are now world leaders in both overall size and rate of growth, and that Asian individuals and governments alike are becoming more and more reliant on global civil aviation networks as their economies grow ever stronger, I am hopeful that Asian nations will henceforth play a bigger role in ensuring the orderly and speedy development of a workable, unified system of rules governing international commercial air carriage.


Critical Success Factor of Noble Payment System: Multiple Case Studies (새로운 결제서비스의 성공요인: 다중사례연구)

  • Park, Arum;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.59-87
    • /
    • 2014
  • In the MIS field, research on payment services has focused on adoption factors, using behavioral theories such as TRA (Theory of Reasoned Action), TAM (Technology Acceptance Model), and TPB (Theory of Planned Behavior). Previous studies have presented various adoption factors according to the type of payment service, nation, culture, and so on, and adoption factors for the same payment service have even been presented differently by different researchers. The payment service industry has relatively strong path dependency on existing payment methods, so research results on the same payment service differ with the payment culture of each nation. This paper aims to identify, and provide evidence for, a success factor for the adoption of novel payment services that holds regardless of a nation's culture and payment characteristics. In previous research, the common adoption factors of payment services are convenience, ease of use, security, speed, etc. But real cases prove that these factors are not always critical to successful market penetration. For example, PayByPhone, an NFC-based parking payment service, successfully penetrated the early market and grew. In contrast, the Google Wallet service failed to be adopted by users despite an NFC-based payment method providing convenience, security, and ease of use. As these cases show, there remains an unexplained aspect. The present research question therefore emerged: what more essential and fundamental factor should take precedence over factors such as convenience, security, and ease of use for successful market penetration? This paper analyzes four cases predicated on the following hypothesis and demonstrates it: "To successfully penetrate a market and grow sustainably, a new payment service should find non-customers of the existing payment services and provide a novel payment method that they can use." We give plausible explanations for the hypothesis using multiple case studies. Diners Club, Danal, PayPal, and Square were selected as typical and successful cases in each category of payment service. The discussion of the cases is primarily a non-customer analysis of whom each novel payment service targets, in order to find the most crucial factor in the early market; we do not attempt to consider factors for later business growth. We clarified three tiers of non-customers of the payment methods that new payment services target and elaborated how each new payment service satisfies them. In the case of the credit card, the service targets the first tier of non-customers: those who cannot pay because they temporarily lack cash but have a regular income. The credit card thus provides an opportunity to engage in economic activity by delaying the date of payment. The case study of wireless phone payment shows that this service targets the second tier of non-customers: those who cannot use online payment because they are concerned about security, or must go through a complex process and learn how to use an online payment method. Wireless phone payment therefore provides a very convenient payment method; in particular, it enabled young people to pay small amounts without a credit card. The case study of PayPal, an online payment service, shows that it targets the second tier of non-customers: those who refuse to use online payment services out of concern about leaks of sensitive information such as passwords and credit card details.
Accordingly, the PayPal service allows users to pay online without providing sensitive information. Finally, the Square case, a mobile POS-based payment service, shows that it also targets the second tier of non-customers: individuals and small merchants who cannot accept card payments offline. Square therefore provides a dongle that functions as a POS terminal when plugged into the earphone jack. As a result, all four cases turned non-customers into customers, penetrated the early market, and extended their market share. Consequently, all cases supported the hypothesis, which is highly probable according to the 'analytic generalization' that case study methodology suggests. For judging the quality of research designs, we use the following criteria. Construct validity, internal validity, external validity, and reliability are common to all social science methods and have been summarized in numerous textbooks (Yin, 2014). In case study methodology, they have also served as a framework for assessing large groups of case studies (Gibbert, Ruigrok & Wicki, 2008). Construct validity means identifying correct operational measures for the concepts being studied; to satisfy it, we use multiple sources of evidence such as academic journals, magazines, and articles. Internal validity means establishing a causal relationship whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships; to satisfy it, we perform explanation building through the four case analyses. External validity means defining the domain to which a study's findings can be generalized; to satisfy it, replication logic across the multiple case studies is used. Reliability means demonstrating that the operations of a study, such as the data collection procedures, can be repeated with the same results; to satisfy it, we use a case study protocol. In Korea, competition among stakeholders in the mobile payment industry is intensifying. Not only the three main telecom companies but also smartphone companies and service providers like KakaoTalk have announced that they will enter the mobile payment industry, which is becoming highly competitive. Yet the industry still lacks momentum, notwithstanding positive forecasts that it will grow very fast. Mobile payment services are categorized by underlying technology, such as IC mobile cards and application payment services based on the cloud, NFC, sound waves, BLE (Bluetooth Low Energy), biometric recognition technology, etc. Mobile payment services are discontinuous innovations: users must change their behavior, and new infrastructure must be installed. This requires users to learn how to use the services and imposes installation costs on shopkeepers. Additionally, the payment industry has strong path dependency. In spite of these obstacles, mobile payment services, which as discontinuous innovations should provide dramatically improved value, are focusing on convenience, security, and so on. We suggest the following for a mobile payment service to succeed. First, non-customers of the existing payment services need to be identified. Second, their needs should be understood. Then, the novel payment service should provide those who could not pay with the previous payment methods with a method they can use. In conclusion, mobile payment services can create a new market and extend the payment market.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future financial price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The image in which each graph is drawn is 40×40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions of the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that of the output layer to the softmax function. To validate our model, CNN-FG, we applied it to the prediction of KOSPI200 for 2,026 days over eight years (2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
The experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
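A Keras sketch of the CNN-FG classifier as described above follows: 40×40 RGB fluctuation-graph images, two 5×5 convolutions with 6 and 9 feature maps, 2×2 max pooling, fully connected layers of 900 and 32 ReLU units, and a 2-way softmax output. The exact interleaving of the pooling layer(s) and the optimizer are assumptions, since the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(40, 40, 3)),           # R, G, B matrices of the graph image
    layers.Conv2D(6, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(9, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),      # first hidden layer: 900 nodes
    layers.Dense(32, activation="relu"),       # second hidden layer: 32 nodes
    layers.Dense(2, activation="softmax"),     # upward vs. downward market direction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```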

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the knowledge and experience of the maintainer make a difference in the time and quality of the work required to solve the problem, and hence in the resulting availability of the vehicle. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose faults and take action quickly by applying personal know-how. Since this knowledge exists in tacit form, it is difficult to pass on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of text and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed by collecting the rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to cover the essential terminology and failure codes specific to the railway rolling stock sector. Based on the deployed case base, a new failure is matched against past cases, and the three most similar failure cases are retrieved so that the actions actually taken in those cases can be proposed as a diagnostic guide. To compensate for the limitation of keyword-matching case retrieval in previous case-based expert system studies on rolling stock failures, this study applied various dimensionality reduction techniques that calculate similarity while taking into account the meaningful relationships among failure descriptions, and verified their usefulness through experiments. Specifically, similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of each failure and measure the cosine distance between the resulting vectors. Precision, recall, and F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques, an analysis of variance confirmed that the performance differences among the five algorithms, including a baseline that randomly extracts failure cases with identical failure codes and a baseline that applies cosine similarity directly to words, were statistically significant. In addition, optimal techniques for practical application were derived by verifying performance differences depending on the number of reduced dimensions. The analysis showed that word-based cosine similarity performed better than Non-negative Matrix Factorization (NMF) and Latent Semantic Analysis (LSA), and that the Doc2Vec-based algorithm performed best.
Furthermore, regarding the dimensionality reduction techniques, performance improved as the number of dimensions grew to an appropriate level. Through this study, we confirmed the usefulness of effective methods for extracting the characteristics of data and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one considered here. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract the characteristics of a failure, complementing keyword-based case search. It is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately on site.
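As an illustration of the best-performing retrieval step, the following sketch embeds failure descriptions with gensim's Doc2Vec and returns the three most similar past cases, with their recorded actions, by cosine similarity. The tiny case base and the naive whitespace tokenization are stand-ins; the paper uses a custom railway-domain dictionary, which is not reproduced here.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical case base: (case_id, failure_description, action_taken).
past_cases = [
    ("C001", "traction motor overheat alarm during acceleration",
     "inspect cooling fan and replace air filter"),
    ("C002", "pantograph contact loss at high speed",
     "check contact strip wear and spring tension"),
    ("C003", "brake pressure drop in trailer car",
     "test compressor and replace leaking valve"),
]

corpus = [TaggedDocument(words=desc.split(), tags=[cid])
          for cid, desc, _ in past_cases]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

def top3_actions(new_failure_text):
    """Return the three most similar past cases and their actions."""
    vec = model.infer_vector(new_failure_text.split())
    hits = model.dv.most_similar([vec], topn=3)   # cosine similarity ranking
    actions = {cid: act for cid, _, act in past_cases}
    return [(cid, round(sim, 3), actions[cid]) for cid, sim in hits]

print(top3_actions("overheat warning from traction motor"))
```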

A Study on Chinese Traditional Auspicious Fish Pattern Application in Corporate Identity Design (중국 전통 길상 어(魚)문양을 응용한 중국 기업의 아이덴티티 디자인 동향)

  • ZHANG, JINGQIU
    • Cartoon and Animation Studies
    • /
    • s.50
    • /
    • pp.349-382
    • /
    • 2018
  • China is a great civilization formed by the combination of various ethnic groups over a long history. As an important component of traditional culture, the auspicious pattern has passed through the ideological upheavals of China's historical changes and has become an important element that stirs the emotions of the Chinese nation. Built up over the nation's long history, the auspicious pattern forms the basis, indeed the core, of China's rich traditional cultural resources. It is a way of expressing Chinese history and national emotion; it is an important part of people's living habits, feelings, and cultural background; and it carries the value of surname-totem beliefs. It also functions as a carrier of information: symbols were created from auspicious patterns to indicate their conceptual content. There are many kinds of auspicious patterns, and researching all of them has its limitations, so this study focuses on the auspicious fish pattern. The fish pattern is the first auspicious pattern created by the Chinese nation, dating back about 6,000 years. Its distinctive form and auspicious meaning embody the particular inherent culture and spirit of the Chinese nation, and it is an important component of Chinese traditional culture. Research on the traditional fish pattern has focused on its historical continuity and pattern recognition, and has seldom carried the meaning of the pattern into modern design practice. By examining the auspicious meaning and forms of the fish pattern, this study aims to analyze the real value of the fish pattern in modern corporate identity design. The method is to trace the historical development, evolution, and auspicious meaning of the traditional fish pattern, and to analyze its symbolic and cultural meanings at the levels of nation, culture, art, and life. Through extensive real-world examples of corporate identity designs using the traditional fish pattern, the study analyzes how the pattern works positively for enterprises whose image is based on trust and credibility. In modern Chinese corporate identity design, the auspicious image is reinterpreted in a modern way. A questionnaire survey of consumers' perceptions and their recognition of corporate images was conducted; based on the results, one can infer improvements in consumer recognition and the potential for developing the traditional concept. The traditional fish pattern is thus an important core of modern design.

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model design studies for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease with data mining techniques were predominant. Domestic studies were not much different, though they focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used the Korea Health Panel 2012 dataset. The Korea Health Panel produces basic data on levels of health expenditure, health status, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalization, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression. Among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group and analyzed; six variables (age, BMI, education level, marital status, smoking status, and gender), excluding income level and smoking period, were selected at a significance level of 0.1. Second, the C4.5 decision tree algorithm was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used. For SVM, the input variables selected by the genetic algorithm consisted of 6 variables (age, marital status, education level, economic activity, smoking period, and physical activity status), and for the artificial neural network, of 3 variables (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, comparing classification performance using the TP rate and precision. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and the accuracy of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected by the stepwise method was slightly higher than that of models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three input variables selected by the decision tree were used. For the classification models based on the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%.
Finally, this study found that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as the input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms known for high accuracy. As a result, the classification accuracy of stacking as a meta-learner was higher than those of the other meta-learning algorithms. However, the predictive performance of the proposed meta-learning algorithm is the same as that of the best-performing single model, the SVM (88.6%). The limitations of this study are as follows. First, although various variable selection methods were tried, most variables used in the study were categorical dummy variables. With a large number of categorical variables, the results might differ if continuous variables were used, because models such as decision trees are better suited to categorical variables than general models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously. Improving model accuracy by applying various variable selection techniques is a meaningful result. In addition, our proposed model is expected to be effective for the prevention and management of hyperlipidemia.
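The best-performing configuration reported above, stacking with SVM and MLP base learners feeding an SVM meta-classifier, maps directly onto scikit-learn's StackingClassifier; a minimal sketch follows. The hyperparameters, scaling, and cross-validation settings are illustrative assumptions, not the paper's.

```python
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Base learners: SVM and MLP, each producing class probabilities that the
# meta-level SVM consumes as its input variables.
base_learners = [
    ("svm", make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", probability=True))),
    ("mlp", make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000))),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=SVC(kernel="rbf"),
                           stack_method="predict_proba", cv=5)

# X: selected input variables (e.g. age, marital status, education level);
# y: hyperlipidemia status (1 = patient, 0 = non-patient).
# print(cross_val_score(stack, X, y, cv=5).mean())
```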

Actual Conditions and Perception of Safety Accidents by School Foodservice Employees in Chungbuk (충북지역 학교급식 조리종사원의 안전사고 실태 및 인식)

  • Cho, Hyun A;Lee, Young Eun;Park, Eun Hye
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.43 no.10
    • /
    • pp.1594-1606
    • /
    • 2014
  • The purpose of this study was to examine safety accidents related to school foodservice, the working and operating environments of school foodservice, the status of and awareness of safety education, educational needs, and information for the qualitative improvement of school foodservice. The subjects of this study were 234 cooks in charge of cooking at elementary and secondary schools in Chungbuk. A survey was conducted from July 30 to August 8, 2012; of the 202 questionnaires gathered, 194 completed questionnaires were analyzed. Statistical analyses were performed using SPSS version 19.0. The main results were as follows: 44.3% of workers had experienced safety accidents, most frequently a single accident (60.5%), and most safety accidents took place between June and August (31.4%) and between 8 and 11 am. Most safety accidents happened during cooking (52.3%) and while using a soup pot or frying pot (52.4%). The most common injuries were burns, wrist and arm pain, and slips and falls. Of the respondents who had experienced safety accidents, 57.6% dealt with their injuries at their own expense, and only 35.3% utilized industrial accident insurance. In terms of the operating environment, the score for 'offering information and application' was highest (3.76 points), whereas that for 'security of budget' was lowest (1.77 points). As for safety education, employees received education approximately 3.45 times and 5.10 hours per year. Improving the working environment of school foodservice cooks requires administrative and financial support. Furthermore, educational materials and guidelines based on the working environment and safety accident status of school foodservice cooks are required in order to minimize potential risk factors and control safety accidents in school foodservice.