• Title/Summary/Keyword: support parameters

Search Results: 1,391

The Ways to Improve Competitiveness and Performance for Salesmen of Small and Medium IT Company: Focusing on Organizational Citizenship Behavior and Corporate Performance (중소 IT기업 영업사원의 경쟁력 강화를 위한 성과 창출 제고 방안: 조직시민행동 및 경영성과 제고 방안을 중심으로)

  • Lee, Gyu-Don;Lee, Sang-Jin;Lee, Chul-Gyu
    • The Journal of Society for e-Business Studies / v.21 no.3 / pp.101-128 / 2016
  • To improve the competitiveness and performance of salespeople at small and medium-sized IT companies, this study examines how value orientation, leadership, and justice affect organizational citizenship behavior and corporate performance, and explores the role of adaptive selling practices as a mediating variable (parameter). The data collected from 314 sales employees at more than 200 IT companies were analyzed using regression analysis. The research model identifies the effects of the value orientation, leadership, and justice of IT-company salespeople on organizational citizenship behavior and corporate performance, motivated by the observation that unfair sales strategies are widely adopted for short-term profit and survival despite the importance of upholding business ethics for long-term, sustainable growth. The hypotheses of this study are as follows. First, value orientation, leadership, and justice affect organizational citizenship behavior and corporate performance. Second, adaptive selling practices mediate the relationships between the independent and dependent variables. The analysis, including tests of mediating effects, confirms the following: 1. Value orientation has a positive (+) effect on adaptive selling practices, which in turn have positive (+) effects on organizational citizenship behavior and corporate performance. 2. Adaptive selling practices fully mediate the relationship between value orientation and organizational citizenship behavior, and partially mediate the relationship between value orientation and corporate performance. 3. Leadership has a positive (+) effect on adaptive selling practices, which have positive (+) effects on organizational citizenship behavior and corporate performance. 4. Adaptive selling practices partially mediate the relationship between leadership and organizational citizenship behavior, and fully mediate the relationship between leadership and corporate performance. The study therefore concludes that establishing and executing sales strategies that reflect value orientation and fairness is critical for IT companies seeking sustainable corporate management, and that IT companies should proactively introduce educational programs that help their salespeople uphold business ethics and values.
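Since the abstract hinges on distinguishing full from partial mediation, the following is a minimal sketch of a Baron-Kenny style mediation check using OLS regression, assuming the survey items have already been aggregated into scale scores; the file and column names are hypothetical, not the authors' instrument.

```python
# Minimal sketch of a Baron-Kenny style mediation test with OLS.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("salesforce_survey.csv")  # hypothetical: 314 aggregated responses

# Step 1: independent variable -> dependent variable (total effect)
total = smf.ols("corporate_performance ~ value_orientation", data=df).fit()
# Step 2: independent variable -> mediator
a_path = smf.ols("adaptive_selling ~ value_orientation", data=df).fit()
# Step 3: mediator and independent variable -> dependent variable
b_path = smf.ols("corporate_performance ~ adaptive_selling + value_orientation",
                 data=df).fit()

print(total.params, a_path.params, b_path.params)
# Full mediation: the direct effect of value_orientation becomes non-significant
# in step 3; partial mediation: it shrinks but remains significant.
```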

One-year evaluation of the national health screening program for infants and children in Korea (영유아 건강검진 시행 초기 1년의 결과 분석)

  • Moon, Jin Soo;Lee, Soon Young;Eun, Baik-Lin;Kim, Seong Woo;Kim, Young Key;Shin, Son Moon;Lee, Hye Kyoung;Chung, Hee Jung
    • Clinical and Experimental Pediatrics / v.53 no.3 / pp.307-313 / 2010
  • Purpose: Results of the Korea National Health Screening Program for Infants and Children, which was launched in November 2007, were evaluated for future research and policy development. Methods: Data from a total of 2,729,340 cases were analyzed, covering five visiting ages: 4, 9, 18, 30, and 60 months. Parameters such as stunting, obesity, and the positive rate of developmental screening were also analyzed. A telephone survey was conducted with 1,035 users, and 262 doctors participated in the provider survey. Results: The overall participation rate was 35.3%, and participation tended to decline with older visiting ages and lower income. Only 6.9% of users participated in oral screening. Health screening was performed mainly in private clinics (82.6%). The recall rate of 4-month program users at the age of 9 months was 57.3%. The positive rate of screening was 3.1% and was higher in the low-income group. In the telephone survey, users reported that the questionnaires were not difficult (94%) and that overall satisfaction was good (73%); longer counseling was associated with higher satisfaction, and counseling and health education were helpful to users (73.2%). Doctors agreed that the program was helpful to children (98.5%). Conclusion: The Korea National Health Screening Program for Infants and Children was launched successfully. The participation rate should be improved, and a quality control program needs to be developed. More intensive support for children of low-income families following the screening may lead to effective interventions for controlling health inequality. Periodic updates of the guidelines are also needed.

Continuous Wet Oxidation of TCE over Supported Metal Oxide Catalysts (금속산화물 담지촉매상에서 연속 습식 TCE 분해반응)

  • Kim, Moon Hyeon;Choo, Kwang-Ho
    • Korean Chemical Engineering Research / v.43 no.2 / pp.206-214 / 2005
  • Heterogeneously catalyzed oxidation of aqueous-phase trichloroethylene (TCE) over supported metal oxides was conducted to establish an approach for eliminating ppm levels of organic compounds in water. A continuous-flow reactor system was designed to examine the predominant reaction parameters determining the catalytic activity of the catalysts, with wet TCE decomposition as a model reaction. The 5 wt.% $CoO_x/TiO_2$ catalyst exhibited a transient period in its activity versus on-stream time behavior, suggesting that the surface structure of the $CoO_x$ might be altered with on-stream hours; nevertheless, it appears to be the most promising catalyst. The bare support was inactive for the wet decomposition reaction at $36^{\circ}C$, and no TCE removal occurred by adsorption on the $TiO_2$ surface. The catalytic activity was independent of all particle sizes used, indicating no mass-transfer limitation from intraparticle diffusion. Very low TCE conversion was observed for $TiO_2$-supported $NiO_x$ and $CrO_x$ catalysts. The wet oxidation performance of supported Cu and Fe catalysts, prepared by incipient wetness and ion exchange techniques, depended primarily on the kind of metal oxide, in addition to the acidic solid supports and the preparation routes. The 5 wt.% $FeO_x/TiO_2$ catalyst showed no activity in the oxidation reaction at $36^{\circ}C$, while 1.2 wt.% Fe-MFI was active for the wet decomposition depending on time on stream. The noticeable difference in activity between the two catalysts suggests that the Fe oxidation states involved in the catalytic redox cycle during the course of the reaction play a significant role in catalyzing the wet decomposition as well as in maintaining the time-on-stream activity. Based on the results obtained with different $CoO_x$ loadings at $36^{\circ}C$ and with different reaction temperatures over $CoO_x/TiO_2$, the catalyst possessed an optimal $CoO_x$ amount, and higher reaction temperatures facilitated the catalytic TCE conversion. Small amounts of the active ingredient could be dissolved by acidic leaching, but such a process caused no appreciable activity loss of the $CoO_x$ catalyst.

Experimental Studies on Lead Toxicity in Domestic Cats 1. Symptomatology and Diagnostic Laboratory Parameters (고양이의 납중독에 관한 실험적 연구 1. 임상증상 및 실험실적 평가)

  • Hong Soon-Ho;Han Hong-Ryul
    • Journal of Veterinary Clinics / v.10 no.1 / pp.111-130 / 1993
  • Lead toxicity was evaluated in forty-five cats on a balanced diet, treated orally with 0 (control), 10, 100 (low), 1,000, 2,000, and 4,000 (high) ppm of lead acetate on a body weight basis. The objectives were to establish the toxic dosage level of lead in cats, to characterize changes in behavior and clinical pathology, and to demonstrate how blood lead concentrations correlate with the known dosages of lead. Some high-dose cats showed projectile vomiting, hyperactivity, and seizures. Growth rates did not appear to be altered in any of the dosed groups. Normal blood lead concentrations in cats were lower than those of humans, dogs, and cattle. Blood lead concentrations of 3 to 20 µg/100 ml could be termed a 'subclinical' range in the cat, whereas cats with clinical lead toxicity may have blood lead concentrations ranging from 20 to 120 µg/100 ml. Zinc protoporphyrin (ZPP) concentrations were proportional to lead dosages, and a significant ZPP elevation, greater than 50 µg/100 ml, may be indicative of clinical lead toxicity. The enzyme aminolevulinic acid dehydratase (ALAD) showed an inverse dose-response relationship for all lead dosages and appears to be a good indicator of lead exposure in cats. Urinary aminolevulinic acid concentrations generally increased with lead dosage, but individual values varied. Hair lead concentrations rose proportionately to lead dosages. Lead, at least in high doses, appears to inhibit the chemotactic activity of polymorphonuclear cells and monocytes. No consistent dose-response relationships were observed in hemoglobin, RBC, WBC, neutrophil, lymphocyte, monocyte, or eosinophil counts. There were no consistent dose-related changes in total protein, plasma protein, BUN, or ALT values. Reticulocyte counts did not increase significantly at most lead dosage levels and are probably of little value in diagnosing lead toxicity in cats. The fact that no significant changes were found in nerve conduction velocities may support the conclusion that there was no segmental demyelination resulting from lead ingestion. The lethal dose in cats appears to range from 60 to 150 mg/kg body weight. A reliable diagnosis of lead poisoning can be made using blood lead, ZPP, ALAD, and hair lead concentrations.


A study on the rock mass classification in boreholes for a tunnel design using machine learning algorithms (머신러닝 기법을 활용한 터널 설계 시 시추공 내 암반분류에 관한 연구)

  • Lee, Je-Kyum;Choi, Won-Hyuk;Kim, Yangkyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.6 / pp.469-484 / 2021
  • Rock mass classification results have a great influence on the construction schedule and budget as well as on tunnel stability in tunnel design. A total of 3,526 tunnels have been constructed in Korea, and the associated tunnel design and construction techniques have been continuously developed; however, few studies have examined how to assess rock mass quality and grade more accurately, and many cases show large differences in results depending on the inspector's experience and judgement. Hence, this study aims to suggest a more reliable rock mass classification (RMR) model using machine learning algorithms, which are becoming increasingly available, through analyses based on various rock and rock mass information collected from borehole investigations. For this, 11 learning parameters (depth, rock type, RQD, electrical resistivity, UCS, Vp, Vs, Young's modulus, unit weight, Poisson's ratio, RMR) from 13 local tunnel cases were selected, 337 training data sets and 60 test data sets were prepared, and 6 machine learning algorithms (DT, SVM, ANN, PCA & ANN, RF, XGBoost) were tested with various hyperparameters for each algorithm. The results show that the mean absolute errors in RMR value from the five algorithms other than Decision Tree were less than 8, and the Support Vector Machine model was the best. The applicability of the model established through this study was confirmed, and the prediction model can be applied for more reliable rock mass classification as additional data are continuously accumulated.
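As a rough illustration of the pipeline this abstract describes (predicting RMR from borehole features with an SVM, tuned over hyperparameters and scored by mean absolute error), here is a minimal sketch; the CSV file, column names, and hyperparameter grid are assumptions, not the authors' data or settings.

```python
# Minimal sketch: SVM regression of RMR on borehole features with a small
# hyperparameter search and MAE evaluation. File and column names are assumed.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("borehole_data.csv")  # hypothetical: 397 rows, 10 features + RMR
features = ["depth", "rock_type", "rqd", "resistivity", "ucs",
            "vp", "vs", "youngs_modulus", "unit_weight", "poisson_ratio"]
X, y = df[features], df["rmr"]  # assumes rock_type is already numerically encoded

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60, random_state=0)

model = make_pipeline(StandardScaler(), SVR())
grid = GridSearchCV(
    model,
    {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 0.01]},
    scoring="neg_mean_absolute_error", cv=5,
)
grid.fit(X_train, y_train)

print("Test MAE:", mean_absolute_error(y_test, grid.predict(X_test)))
```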

The Effect of Distal Hooks in Thoracolumbar Fusion Using a Pedicle Screw in Elderly Patients (척추경 나사못을 이용한 고령 환자의 흉요추부 유합에서 원위부 갈고리의 효과)

  • Lee, Dong-Hyun;Kim, Sung-Soo;Kim, Jung-Hoon;Lim, Dong-Ju;Choi, Byung-Wan;Kim, Jin-Hwan;Kim, Jin-Hyok;Park, Byung-Ook
    • Journal of the Korean Orthopaedic Association / v.52 no.1 / pp.83-91 / 2017
  • Purpose: To investigate the clinical outcomes of distal hook augmentation with pedicle screws in thoracolumbar fusion in elderly patients. Materials and Methods: This retrospective multicenter study recruited 20 patients aged 65 years or older who underwent anterior support and long-level posterior fusion at the thoracolumbar junction, with a follow-up of one year. To assess the effect of distal hook augmentation, the patients were divided into two groups: the pedicle screw with hook group (PH group, n=10) and the pedicle screw alone group (PA group, n=10). Results: The average age was 72.4 years (65-83 years), and the average fusion length was 4.6 segments (3-6 segments). There were no significant differences in age, sex, causative diseases, bone mineral density of the lumbar spine and proximal femur, number of patients with osteoporosis, or number of fused segments between the two groups (p≥0.05). At the 1-year follow-up after surgery, parameters related to distal screw pullout were significantly worse in the PA group: no patients in the PH group had distal screw pullout, whereas six patients (60%, 6/10) in the PA group did. There were no significant differences in the progression of distal junctional kyphosis between the two groups. Conclusion: Distal hook augmentation is an effective procedure for protecting distal pedicle screws against pullout when long-level thoracolumbar fusion is performed in elderly patients aged 65 years or older.

One-probe P300 based concealed information test with machine learning (기계학습을 이용한 단일 관련자극 P300기반 숨김정보검사)

  • Hyuk Kim;Hyun-Taek Kim
    • Korean Journal of Cognitive Science / v.35 no.1 / pp.49-95 / 2024
  • Polygraph examination, statement validity analysis, and the P300-based concealed information test are the three major examination tools used to determine a person's truthfulness and credibility in criminal procedure. Although the polygraph examination is the most common in criminal procedure, it has little admissibility as evidence owing to its weak scientific basis. In the 1990s, to compensate for this weakness, Farwell and Donchin proposed the P300-based concealed information test. The P300-based concealed information test has two strong points: it is easy to conduct together with the polygraph, and it has a plentiful scientific basis. Nevertheless, its use is infrequent because of the number of probe stimuli it requires. A probe stimulus contains undisclosed information relevant to the crime or other investigated situation. The traditional P300-based concealed information test protocol requires three or more probe stimuli, but it is hard to acquire that many because most crime-relevant information is disclosed during the investigation. In addition, the P300-based concealed information test uses the oddball paradigm, which creates an imbalance between the numbers of probe and irrelevant stimuli; this imbalance may cause systematic underestimation of the P300 amplitude of irrelevant stimuli. To overcome these two limitations, a one-probe P300-based concealed information test protocol was explored with various machine learning algorithms. According to this study, the parameters of the modified one-probe protocol are as follows. For female and male face stimuli, the recommended stimulus duration is 400 ms, the recommended number of repetitions is 60, the recommended analysis method for P300 amplitude is the peak-to-peak method, the recommended cut-off for the guilty condition is 90%, and the recommended cut-off for the innocent condition is 30%. For two-syllable word stimuli, the recommended stimulus duration is 300 ms, the recommended number of repetitions is 60, the recommended analysis method for P300 amplitude is the peak-to-peak method, the recommended cut-off for the guilty condition is 90%, and the recommended cut-off for the innocent condition is 30%. It was also confirmed that logistic regression (LR), linear discriminant analysis (LDA), and k-nearest neighbors (KNN) are viable methods for analyzing P300 amplitude. The one-probe P300-based concealed information test with a machine learning protocol helps increase the utilization of the P300-based concealed information test and supports determining a person's truthfulness and credibility together with the polygraph examination in criminal procedure.
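The peak-to-peak amplitude measure and the classifier comparison (LR, LDA, KNN) mentioned above can be sketched roughly as follows; the sampling rate, epoch window, and placeholder data are assumptions for illustration only, not the study's recording parameters.

```python
# Minimal sketch: peak-to-peak P300 amplitude plus a simple comparison of
# LR, LDA, and KNN classifiers. Sampling rate, window, and data are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

FS = 250  # Hz, assumed sampling rate

def peak_to_peak_p300(epoch, fs=FS):
    """Largest positivity in an assumed 300-800 ms window minus the most
    negative value that follows it within the window."""
    win = epoch[int(0.3 * fs):int(0.8 * fs)]
    peak_idx = int(np.argmax(win))
    trough = np.min(win[peak_idx:])
    return win[peak_idx] - trough

# epochs: (n_trials, n_samples) single-channel epochs; labels: 1 = guilty, 0 = innocent
epochs = np.random.randn(120, FS)            # placeholder data
labels = np.random.randint(0, 2, size=120)   # placeholder labels
X = np.array([[peak_to_peak_p300(e)] for e in epochs])

for clf in (LogisticRegression(), LinearDiscriminantAnalysis(), KNeighborsClassifier()):
    scores = cross_val_score(clf, X, labels, cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))
```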

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength lies in interpreting images, so the proposed model adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2; each graph is drawn as a 40 × 40 pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation datasets; 80% of the total dataset was used for training and the remaining 20% for validation. CNN classifiers are then trained on the training images in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that of the output layer to the softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (2009 to 2016). To balance the proportions of the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to business problem solving.
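The architecture enumerated above (40 × 40 RGB inputs, two 5×5 convolutions with 6 and 9 filters, 2×2 max pooling, hidden layers of 900 and 32 nodes, 2-way softmax output) can be sketched roughly in Keras as below; the abstract does not state where the pooling layer sits, so its placement here is an assumption.

```python
# Minimal sketch of a CNN-FG-like classifier for 40x40 RGB fluctuation graphs.
# Layer ordering details not given in the abstract are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(40, 40, 3)),
    layers.Conv2D(6, (5, 5), activation="relu"),   # first 5x5 convolution, 6 filters
    layers.Conv2D(9, (5, 5), activation="relu"),   # second 5x5 convolution, 9 filters
    layers.MaxPooling2D((2, 2)),                   # 2x2 max pooling (placement assumed)
    layers.Flatten(),
    layers.Dense(900, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),         # upward vs. downward
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```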

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available and easy to collect, and they can affect a business: in marketing, real-world information from customers is gathered from websites rather than surveys, and whether posts are positive or negative is reflected in sales, so businesses try to identify this information. However, many reviews on a website are not well written and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, whereas recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDb, Facebook, and so on. However, accuracy remains limited because sentiment calculations change according to the subject, the paragraph, the direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to bag-of-words when processing a sentence in vector format, but it does not consider sequential attributes of the data. RNN handles order well because it takes the temporal information of the data into account, but it suffers from long-term dependency problems; LSTM is used to solve the problem of long-term dependence. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how and why the models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers with massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for CNN's inability to capture long-range sequential dependencies. Furthermore, when LSTM is used after CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be designed simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernel is trained step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure with LSTM has the advantage of improving learning layer by layer. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
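A minimal Keras sketch of the kind of integrated CNN-LSTM classifier described above, with the LSTM placed after the convolution and pooling layers; all hyperparameters (embedding size, filter count, sequence length) are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: CNN extracts local n-gram features, the LSTM after the
# pooling layer models their order, and a sigmoid unit predicts polarity.
# All hyperparameters below are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20000, 200  # assumed vocabulary size and review length

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```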

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.137-154 / 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the infectious diseases have continued to occur. Avian influenza was first reported in 1878 and has become a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally; in a nation where the disease has not spread, it is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and because quarantine is costly. In a society where the whole nation is connected as one zone of daily life, there is no way to fully prevent the spread of infectious disease. Hence, there is a need to detect the occurrence of the disease and to take action before it spreads. When a case of human or animal infectious disease is confirmed, an epidemiological investigation of the diagnosed case is conducted and measures are taken to prevent the spread of the disease according to the results. The foundation of an epidemiological investigation is figuring out where someone has been and whom he or she has met; from a data perspective, this can be defined as an effort to predict the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relational data. Recently, attempts have been made to develop infectious disease prediction models using big data and deep learning technology, but there has been little research on model building and few case reports. KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as logistic regression, Lasso, support vector machine, and random forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and the performance of the model was improved by tuning the model's hyperparameters in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning model. The differences between the 2016 and 2017 models are that the later model uses visiting information on facilities such as feed factories and slaughterhouses, as well as information on poultry that had been limited to chickens and ducks but was expanded to geese and quail. In addition, an explanation of the results was added in 2017 to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of big data on hazardous vehicle movements, farms, and the environment. Its significance is that it describes the evolution of a prediction model using big data in the field; the model is expected to become more complete if the type of virus is taken into consideration. This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will provide more effective and preventive disease control.
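A minimal sketch of the model-comparison step described above (logistic regression, Lasso, SVM, and random forest evaluated with ROC AUC and a confusion matrix); the data file, feature names, and the use of an L1-penalized logistic regression as the "Lasso" variant are assumptions for illustration, not the project's actual data or configuration.

```python
# Minimal sketch: compare four classifiers on farm-level features derived from
# livestock-vehicle movements. File, features, and labels are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

df = pd.read_csv("farm_visits.csv")  # hypothetical: one row per farm
X = df[["visit_count", "feed_factory_visits", "slaughterhouse_visits", "nearby_outbreaks"]]
y = df["infected"]  # 1 if the farm later reported an outbreak

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    "svm": SVC(probability=True),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    prob = m.predict_proba(X_te)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, prob), 3))
    print(confusion_matrix(y_te, m.predict(X_te)))
```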