• Title/Summary/Keyword: logistic information


Optimum Radiotherapy Schedule for Uterine Cervical Cancer based-on the Detailed Information of Dose Fractionation and Radiotherapy Technique (처방선량 및 치료기법별 치료성적 분석 결과에 기반한 자궁경부암 환자의 최적 방사선치료 스케줄)

  • Cho, Jae-Ho;Kim, Hyun-Chang;Suh, Chang-Ok;Lee, Chang-Geol;Keum, Ki-Chang;Cho, Nam-Hoon;Lee, Ik-Jae;Shim, Su-Jung;Suh, Yang-Kwon;Seong, Jinsil;Kim, Gwi-Eon
    • Radiation Oncology Journal
    • /
    • v.23 no.3
    • /
    • pp.143-156
    • /
    • 2005
  • Background: The best dose-fractionation regimen of definitive radiotherapy for cervix cancer remains to be clearly determined. This seems to be partially attributable to the complexity of the affecting factors and the lack of detailed information on external and intracavitary fractionation. To find optimal practice guidelines, our experiences with the combination of external beam radiotherapy (EBRT) and high-dose-rate intracavitary brachytherapy (HDR-ICBT) were reviewed, with detailed information on the various treatment parameters obtained from a large cohort of women treated homogeneously at a single institute. Materials and Methods: The subjects were 743 cervical cancer patients (Stage IB 198, IIA 77, IIB 364, IIIA 7, IIIB 89, and IVA 8) treated by radiotherapy alone between 1990 and 1996. A total EBRT dose of 23.4–59.4 Gy (median 45.0) was delivered to the whole pelvis. HDR-ICBT was also performed using various fractionation schemes. A midline block (MLB) was initiated after the delivery of 14.4–43.2 Gy (median 36.0) of EBRT in 495 patients, while in the other 248 patients the MLB could not be used due to slow tumor regression or the huge initial bulk of tumor. The point A, actual bladder, and rectal doses were individually assessed in all patients. The biologically effective dose (BED) to the tumor (α/β = 10) and to late-responding tissues (α/β = 3) was calculated for both EBRT and HDR-ICBT. The total BED values at point A and at the actual bladder and rectal reference points were the summation of the EBRT and HDR-ICBT contributions. In addition to all the details on dose-fractionation, the other factors (i.e., the overall treatment time and physicians' preference) that can affect the schedule of definitive radiotherapy were also thoroughly analyzed. The association between MD-BED Gy₃ and the risk of complication was assessed using serial multiple logistic regression models.
The associations between R-BED Gy₃ and rectal complications and between V-BED Gy₃ and bladder complications were assessed using multiple logistic regression models after adjustment for age, stage, tumor size, and treatment duration. Serial Cox proportional hazards regression models were used to estimate the relative risks of recurrence due to MD-BED Gy₁₀ and the treatment duration. Results: The overall complication rate for RTOG Grade 1–4 toxicities was 33.1%. The 5-year actuarial pelvic control rate for all 743 patients was 83%. The midline cumulative BED dose, which is the sum of the external midline BED and the HDR-ICBT point A BED, ranged from 62.0 to 121.9 Gy₁₀ (median 93.0) for tumors and from 93.6 to 187.3 Gy₃ (median 137.6) for late-responding tissues. The median cumulative values of the actual rectal (R-BED Gy₃) and bladder point BED (V-BED Gy₃) were 118.7 Gy₃ (range 48.8–265.2) and 126.1 Gy₃ (range 54.9–267.5), respectively. MD-BED Gy₃ showed a good correlation with rectal (p=0.003) but not with bladder complications (p=0.095). R-BED Gy₃ had a very strong association (p<0.0001) and was more predictive of rectal complications than A-BED Gy₃. B-BED Gy₃ also showed significance in the prediction of bladder complications in a trend test (p=0.0298). No statistically significant dose-response relationship for pelvic control was observed. The Sandwich and Continuous techniques, which differ according to when the ICR was inserted during the EBRT and reflect physicians' preference, showed no differences in local control and complication rates; there were also no differences between the 3 and 5 Gy fraction sizes of HDR-ICBT.
Conclusion: The main reasons optimal dose-fractionation guidelines are not easily established are the absence of a dose-response relationship for tumor control as a result of the high-dose gradient of HDR-ICBT, individual differences in tumor responses to radiation therapy, and the complexity of the affecting factors. Therefore, in our opinion, there is a need for individualized, tailored therapy, along with general guidelines, in the definitive radiation treatment of cervix cancer. This study also demonstrated the strong predictive value of the actual rectal and bladder reference doses; therefore, vaginal gauze packing might be very important. To keep the BED below the threshold resulting in complications, early midline shielding and a reduction of the HDR-ICBT total dose and fractional dose should be considered.
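The BED figures above follow the standard linear-quadratic formula BED = nd(1 + d/(α/β)); a minimal sketch of the calculation in Python (the fraction numbers below are illustrative, not taken from the paper's actual schedules):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1 + dose_per_fraction / alpha_beta)

# Illustrative schedules (not the paper's data):
# EBRT: 25 fractions of 1.8 Gy, tumor alpha/beta = 10
ebrt_tumor = bed(25, 1.8, 10)    # 45 Gy total -> 53.1 Gy10
# HDR-ICBT: 6 fractions of 5 Gy, late-responding tissue alpha/beta = 3
icbt_late = bed(6, 5.0, 3)       # 30 Gy total -> 80.0 Gy3
# Cumulative BED at a reference point is the sum of the two contributions
total_late = bed(25, 1.8, 3) + icbt_late
print(ebrt_tumor, icbt_late, total_late)
```

The larger fraction sizes of brachytherapy are penalized much more heavily for late-responding tissues (α/β = 3) than for tumor (α/β = 10), which is why the paper tracks the two BED values separately.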

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. In order to prevent this, the quarantine authorities have made various human and material efforts, but the infectious diseases have continued to occur. Avian influenza was first identified in 1878 and rose to a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In nations where the disease has not spread, foot-and-mouth disease is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and quarantine is also costly. In a society where the whole nation is connected as a single living zone, there is no way to fully prevent the spread of an infectious disease. Hence, there is a need to be aware of the occurrence of the disease and to take action before it spreads. As soon as either a human or an animal infectious disease is confirmed, an epidemiological investigation of the definitively diagnosed subject is carried out, and measures are taken to prevent the spread of the disease according to the investigation results. The foundation of an epidemiological investigation is figuring out where one has been and whom he or she has met. From a data perspective, this can be defined as an action taken to predict the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relational data. Recently, attempts have been made to develop prediction models of infectious disease using big data and deep learning technology, but there has been little active research in the form of model-building studies and case reports.
KT and the Ministry of Science and ICT have been carrying out big data projects since 2014 as part of national R&D projects to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine, and Random Forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and the performance of the model was improved by tuning the hyper-parameters of the modeling in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning models. The difference between the 2016 model and the 2017 model is that the later model used visiting information on facilities such as feed factories and slaughterhouses, as well as information on poultry that was expanded from chicken and duck to goose and quail. In addition, an explanation of the results was added in 2017 to help the authorities in making decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of big data on hazardous vehicle movement, farms, and the environment. The significance of this study is that it describes the evolution of a prediction model using big data as used in the field, and the model is expected to be more complete if the type of virus is taken into consideration. This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will provide more proactive and effective prevention.
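The model-comparison step described above (fit several classifiers, then compare confusion matrices and ROC/AUC) can be sketched with scikit-learn. The features below are synthetic stand-ins; the real inputs (vehicle visits, facility types, livestock categories) are not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))   # stand-ins for e.g. visit counts per facility type
# Synthetic outbreak label driven by the first two features plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

aucs = {}
for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    aucs[type(model).__name__] = roc_auc_score(y_te, prob)
    print(type(model).__name__, confusion_matrix(y_te, model.predict(X_te)))
print(aucs)
```

Comparing AUC values across models on the same held-out set mirrors the ROC-curve comparison the study used to judge the 2017 model against the earlier ones.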

Environmental Equity Analysis of the Accessibility of Urban Neighborhood Parks in Daegu City (대구시 도시근린공원의 접근성에 따른 환경적 형평성 분석)

  • Seo, Hyun-Jin;Jun, Byong-Woon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.14 no.4
    • /
    • pp.221-237
    • /
    • 2011
  • This study aims to investigate the environmental equity of accessibility to urban neighborhood parks in the city of Daegu. The spatial distribution of urban neighborhood parks was explored by spatial statistics, and the spatial accessibility to them was then evaluated by both minimum distance and coverage approaches. Descriptive and inferential statistics such as the proximity ratio, the Mann-Whitney U test, and logistic regression were used to compare socioeconomic characteristics across different levels of accessibility to the neighborhood parks and to test the distributional inequity hypothesis. The results from the minimum distance method indicated that Dalseo-gu had the best accessibility to the neighborhood parks while Dong-gu had the worst. With the coverage method, it was apparent that Dalseo-gu had the best accessibility whereas Dong-gu and Nam-gu had the worst accessibility to the neighborhood parks at 500 m and 1,000 m buffer distances. A spatial pattern of environmental inequity existed in old towns with respect to population density and the percentage of people under the age of 18. The spatial pattern of environmental inequity in new towns was explored on the basis of the percentage of people over the age of 65, the percentage of people below the poverty level, and the percentage of free-of-charge rental housing. These results were closely related to the development process of urban parks in Daegu, shaped by the quantitative urban park policy, the urban development process, and residential location patterns such as permanent rental housing and free-of-charge rental housing. This study further extends the existing research topics of environmental justice, which have focused on the distributional inequity of environmental disamenities and hazards, by focusing on environmental amenities such as urban neighborhood parks.
The results from this study can be used in making decisions for urban park management and in setting up urban park policy while considering the social geography of Daegu.
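The Mann-Whitney U test used above compares a socioeconomic variable between areas with better and worse park access without assuming normality. A minimal sketch with SciPy; the two groups and their values are fabricated for illustration:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical % of residents over 65 in tracts near vs. far from a park
near_park = rng.normal(10.0, 2.0, size=150)
far_park = rng.normal(12.0, 2.0, size=150)

stat, p_value = mannwhitneyu(near_park, far_park, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p_value:.3g}")  # a small p suggests unequal distributions
```

A significant difference between the two accessibility groups on such a variable is the kind of evidence the distributional inequity hypothesis is tested against.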

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the inception of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given the trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks. So we compare three deep learning frameworks which can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is a sort of predecessor of the preceding two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With the partial derivatives, we can let software compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is simply based on the lengths of the codes; the learning curve and the ease of coding are not the main concern.
According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or any new search methods that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate each framework. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. Also, for someone learning deep learning, it matters whether there are enough examples and references.
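The chain-rule mechanics behind automatic differentiation can be illustrated in miniature with dual numbers. Note this is a forward-mode sketch of the principle only; the frameworks discussed actually perform reverse-mode differentiation over computational graphs:

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    each value carries its derivative, propagated by the chain rule."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.grad + other.grad)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)
    __rmul__ = __mul__

# d/dx of f(x) = 3x^2 + 2x at x = 2 is 6x + 2 = 14
x = Dual(2.0, 1.0)   # grad=1.0 marks x as the differentiation variable
f = 3 * x * x + 2 * x
print(f.val, f.grad)  # 16.0 14.0
```

In a framework, the same propagation happens edge-by-edge over the computational graph, which is what lets the software differentiate any node with respect to any variable.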

Characteristics of Exposure to Humidifier Disinfectants and Their Association with the Presence of a Person Who Experienced Adverse Health Effects in General Households in Korea (일반 가구의 가습기살균제 노출 특성 및 건강이상 경험과의 연관성)

  • Lee, Eunsun;Cheong, Hae-Kwan;Paek, Domyung;Kim, Solhwee;Leem, Jonghan;Kim, Pangyi;Lee, Kyoung-Mu
    • Journal of Environmental Health Sciences
    • /
    • v.46 no.3
    • /
    • pp.285-296
    • /
    • 2020
  • Objective: The objective of this study was to describe the characteristics of exposure to humidifier disinfectants (HDs) and their association with the presence of a person who experienced adverse health effects in general households in Korea. Methods: During the month of December 2016, a nationwide online survey was conducted among adults over 20 years of age who had experience using HDs. It provided information on exposure characteristics and the experience of health effects. The final survey respondents consisted of 1,555 people who provided information on themselves and their household members during the use of HDs. Exposure characteristics at the household level included the average days of HD use per week, average hours of HD use per day, the duration within which one bottle of HD was emptied, the average input frequency of HD, the amount of HD (cc) used each time, and the active ingredients of the HD products (PHMG, CMIT/MIT, PGH, or others). The risk of the presence of a person who experienced adverse health effects in the household was evaluated by estimating odds ratios (ORs) and 95% confidence intervals (CIs) adjusted for monthly income and region using a multiple logistic regression model. Subgroup analyses were conducted for households with a child (≤7 years) and households with a newborn infant during HD use. Results: The level of exposure to HD tended to be higher for households with a child or newborn infant for several variables, including average days of HD use per week (P<0.0001) and average hours of HD use per day (P<0.0001). The proportion of households in which at least one person experienced adverse health effects such as rhinitis, asthma, pneumonia, or atopy/skin disease was 20.6% for all households, 25.3% for households with children, and 29.9% for households with newborn infants.
The presence of a person who experienced adverse health effects in the household was significantly associated with average hours of HD use per day (Ptrend<0.001), duration within which one bottle of HD was emptied (Ptrend<0.001), average input frequency of HD (Ptrend<0.001), amount of HD per one use (Ptrend=0.01), and use of HDs containing PHMG (OR=2.23, 95% CI=1.45-3.43). Similar results were observed in subgroup analyses. Conclusion: Our results suggest that level of exposure to HD tended to be higher for households with a child or newborn infant and that exposure to HD is significantly associated with the presence of a person who experienced adverse health effects in the household.
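In a logistic regression, the odds ratio is the exponential of the fitted coefficient, with a 95% CI of exp(β ± 1.96·SE). A small sketch; the coefficient and standard error below are illustrative values back-calculated to reproduce the reported PHMG result (OR=2.23, 95% CI=1.45-3.43), not the study's actual estimates:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative coefficient/SE chosen to match the reported PHMG odds ratio
or_, lo, hi = odds_ratio_ci(0.802, 0.220)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.23 1.45 3.43
```

A CI that excludes 1 (as here) corresponds to a statistically significant association between the exposure and the outcome.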

A Study on the Application of Block Chain Technology on EVMS (EVMS 업무의 블록체인 기술 적용 방안 연구)

  • Kim, Il-Han;Kwon, Sun-Dong
    • Management & Information Systems Review
    • /
    • v.39 no.2
    • /
    • pp.39-60
    • /
    • 2020
  • Block chain technology is one of the core elements for realizing the 4th industrial revolution, and many efforts have been made by governments and companies to provide services based on block chain technology. In this study, we analyzed the benefits of block chain technology for EVMS and designed an EVMS block chain platform with increased data security and work efficiency for project management data, which are important assets in monitoring progress, foreseeing future events, and managing post-completion. We conducted case studies on the benefits of block chain technology and then a survey study on the security, reliability, and efficiency of block chain technology, targeting 18 block chain experts and project developers. We then interviewed an EVMS system operator on the compatibility between block chain technology and EVM systems. The results of the case studies showed that block chain technology can be applied to financial, logistic, medical, and public services to simplify the insurance claim process and to improve reliability by distributing transaction data storage and applying security/encryption features. Also, our research on the characteristics and necessity of block chain technology in EVMS revealed the potential to improve the security, reliability, and efficiency of the management and distribution of EVMS data. Finally, we designed a network model, a block structure, and a consensus algorithm model and combined them to construct a conceptual block chain model for the EVM system. This study makes the following contributions. First, we showed that block chain technology is suitable for application in the defense sector and proposed a conceptual model. Second, we derived the effects that can be obtained by applying block chain technology to EVMS and the possibility of improving the existing business process.
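The block structure part of such a design can be sketched as a hash-linked chain. A minimal illustration; the field names and the EVMS payload are hypothetical, and no network or consensus algorithm is modeled here:

```python
import hashlib
import json
import time

def make_block(index, prev_hash, data):
    """Build a block whose hash covers its contents and the previous block's hash."""
    block = {"index": index, "timestamp": time.time(),
             "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(0, "0" * 64, {"evms": "genesis"})
record = make_block(1, genesis["hash"], {"evms": "earned-value progress record"})
# Tampering with genesis["data"] would change its hash and break the link below
print(record["prev_hash"] == genesis["hash"])
```

Because each block's hash covers the previous block's hash, altering any stored project-management record invalidates every later block, which is the tamper-evidence property the abstract relies on.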

Consumer Perceptions of Food-Related Hazards and Correlates of Degree of Concerns about Food (주부의 식품안전에 대한 인식과 안전성우려의 관련 요인)

  • Choe, Jeong-Sook;Chun, Hye-Kyung;Hwang, Dae-Yong;Nam, Hee-Jung
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.34 no.1
    • /
    • pp.66-74
    • /
    • 2005
  • This survey was conducted to assess consumer perceptions of food-related hazards among 500 housewives from all over Korea. The subjects were selected by a stratified random sampling method. The survey was performed using a structured questionnaire administered through telephone interviews by skilled interviewers. The results showed that 34.6% of the respondents felt secure and were not concerned about food safety, while 65.4% were concerned. Logistic regression analysis showed that increasing concern about food brands, food additives (such as food preservatives and artificial colors), and imported foodstuffs indicated the currently increasing concern about food safety. Other related factors indicating increasing concern about food safety were education level and care for children's health. The respondents who cared about food safety expressed a high degree of concern about processed foodstuffs such as commercial boxed lunches (93.3%), imported foods (92.7%), fast foods (89.9%), processed meat products (88.7%), dining out (85.6%), canned and frozen foods (83.5%), and instant foods (82.0%). The lowest degree of concern was about rice. All the respondents perceived that residues of chemical substances such as pesticides and food additives, and endocrine disrupters, were the greatest potential food risk factors, followed by food-borne pathogens and GMOs (Genetically Modified Organisms). However, these results were not consistent with scientific judgment. Therefore, more education and information are needed to raise consumers' awareness of the facts and myths about food safety. In addition, the results showed that consumers put low trust in food product information such as food labels, cultivation methods (organic or not), quality labels, and the place of origin. Nevertheless, the respondents expressed their desire to overcome alienation and recognized the importance of knowing the origin or the producers of food.
They identified the people who need to take extreme precautions against food contamination as the producers, government officials, food companies, consumers, the consumers' association, and marketers, arranged in order from highest to lowest. They also believed that the agricultural production stage was the most important step for improving the level of food safety. Therefore, the results indicated that there is a need to introduce safety systems in the production of agricultural products, such as Good Agricultural Practice (GAP), Hazard Analysis and Critical Control Point (HACCP), and a Traceability System.

A Study on Developing Web based Logistic Information System(KT-Logis) (웹 기반 통합물류정보시스템(KT-Logis) 개발에 관한 연구)

  • 오상호;김태준
    • Proceedings of the Korean DIstribution Association Conference
    • /
    • 2001.11b
    • /
    • pp.125-141
    • /
    • 2001
  • In this paper, the current problems of the logistics industry in Korea and their possible solutions are discussed. With Korea Telecom's KT-Logis, suppliers and demanders of logistics services would not have to invest large sums of money in their own computer systems; all they need is a computer with an internet connection. KT-Logis influences the logistics industry in the following ways: 1. Many logistics service suppliers and demanders can do business on the web with one computer system. 2. This web-based system works not only in the office but also for field workers such as delivery personnel, or even forwarders with mobile phones. 3. KT-Logis is an integrated system which covers a broad range of logistics management, from truck management to customer relationship management. 4. Finally, KT-Logis is a web-based system which suits the current e-business and mobile environment. In the future, more studies should be done to develop more progressive integrated logistics information systems with enterprise resource planning (ERP) and supply chain management (SCM).


The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales sites, and SNS, and dataset characteristics are equally diverse. In order to secure corporate competitiveness, it is necessary to improve decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm is appropriate depending on the characteristics of the dataset has been a task that required expertise and effort. This is because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among those, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, in the meta-features to replace IR (Imbalance Ratio). We also added a new index, the Reverse ReLU Silhouette Score, to the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice) were selected. The class of each dataset was predicted using the classification algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, we applied the 10-fold cross validation method.
An oversampling method of 10% to 100% was applied for each fold, and the meta-features of the dataset were measured. The meta-features selected are HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score. F1-score was selected as the dependent variable. The results of this study showed that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance. (1) The HHI meta-feature proposed in this study was significant for classification performance. (2) The number of variables has a significant effect on classification performance and, unlike the number of classes, its effect is positive. (3) The number of classes has a negative effect on classification performance. (4) Entropy has a significant effect on classification performance. (5) The Reverse ReLU Silhouette Score also significantly affects classification performance at a significance level of 0.01. (6) The nonlinearity of linear classifiers has a significant negative effect on classification performance. The results of the analysis by classification algorithm were also consistent: in the regression analysis by classification algorithm, the Naïve Bayes algorithm, unlike the other classification algorithms, showed no significant effect of the number of variables. This study makes two theoretical contributions: (1) two new meta-features (HHI, Reverse ReLU Silhouette Score) were shown to be significant; (2) the effects of data characteristics on classification performance were investigated using meta-features. Practically, (1) the results can be utilized in developing a classification algorithm recommendation system according to the characteristics of datasets.
(2) Many data scientists repeatedly test and adjust algorithm parameters to find the optimal algorithm for a situation because the characteristics of each dataset differ; in this process, excessive resources are wasted on hardware, cost, time, and manpower. This study is expected to be useful for machine learning and data mining researchers, practitioners, and machine learning-based system developers. This study consists of an introduction, related research, the research model, the experiment, and the conclusion and discussion.
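The HHI meta-feature above can be computed directly from class labels as the sum of squared class proportions; a minimal sketch (not necessarily the authors' exact implementation):

```python
from collections import Counter

def hhi(labels):
    """Herfindahl-Hirschman Index over class shares: sum of squared proportions.
    Equals 1/k for a perfectly balanced k-class set and approaches 1 as one
    class dominates, so it summarizes class imbalance in a single number."""
    n = len(labels)
    return sum((count / n) ** 2 for count in Counter(labels).values())

print(hhi([0, 1, 2, 0, 1, 2]))   # balanced 3-class set -> 1/3
print(hhi([0, 0, 0, 0, 0, 1]))   # imbalanced -> (5/6)**2 + (1/6)**2
```

Unlike the pairwise Imbalance Ratio, this single scalar accounts for the share of every class at once, which is what makes it attractive as a multi-class meta-feature.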

Building battery deterioration prediction model using real field data (머신러닝 기법을 이용한 납축전지 열화 예측 모델 개발)

  • Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.243-264
    • /
    • 2018
  • Although the worldwide battery market has recently been spurring the development of lithium secondary batteries, lead-acid batteries (rechargeable batteries), which perform well and can be reused, are consumed in a wide range of industrial fields. However, lead-acid batteries have a serious problem in that deterioration progresses quickly once degradation begins in even one of the several cells packed in a battery. To overcome this problem, previous studies have attempted to identify the mechanism of battery deterioration in many ways. However, most previous studies have used data obtained in a laboratory, not data obtained in the real world, to analyze the mechanism of battery deterioration. The use of real data can increase the feasibility and applicability of a study's findings. Therefore, this study aims to develop a model which predicts battery deterioration using data obtained in the real world. To this end, we collected data presenting changes of battery state by attaching sensors that monitor the battery condition in real time to dozens of golf carts operated on a real golf course. As a result, a total of 16,883 samples were obtained. We then developed a model which predicts a precursor phenomenon representing battery deterioration by analyzing the data collected from the sensors using machine learning techniques.
As initial independent variables, we used 1) inbound time of a cart, 2) outbound time of a cart, 3) duration (from outbound time to charge time), 4) charge amount, 5) used amount, 6) charge efficiency, 7) lowest temperature of battery cells 1 to 6, 8) lowest voltage of battery cells 1 to 6, 9) highest voltage of battery cells 1 to 6, 10) voltage of battery cells 1 to 6 at the beginning of operation, 11) voltage of battery cells 1 to 6 at the end of charge, 12) used amount of battery cells 1 to 6 during operation, 13) used amount of battery during operation (Max-Min), 14) duration of battery use, and 15) highest current during operation. Since the values of the cell-level independent variables (lowest temperature, lowest voltage, highest voltage, voltage at the beginning of operation, voltage at the end of charge, and used amount during operation of battery cells 1 to 6) are similar across the cells of each battery, we conducted principal component analysis using varimax orthogonal rotation in order to mitigate the multicollinearity problem. Based on the results, we made new variables by averaging the values of the independent variables clustered together and used them as final independent variables instead of the original variables, thereby reducing the dimension. We used decision tree, logistic regression, and Bayesian network as algorithms for building prediction models. We also built prediction models using the bagging and boosting of each of them, as well as Random Forest. Experimental results show that the prediction model using the bagging of decision trees yields the best accuracy of 89.3923%. This study has some limitations in that additional variables which affect battery deterioration, such as weather (temperature, humidity) and driving habits, were not considered; we would like to include them in future research.
However, the battery deterioration prediction model proposed in the present study is expected to enable effective and efficient management of batteries used in the field and to dramatically reduce the costs caused by undetected battery deterioration.
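The best-performing configuration reported above, bagging of decision trees, can be sketched with scikit-learn. The battery sensor features and 16,883 real samples are replaced here by a generated dataset, so the accuracy will not match the reported 89.39%:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier  # default base learner is a decision tree
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the (PCA-reduced) battery sensor features
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           random_state=0)

# Bagging: each of the 50 trees is trained on a bootstrap resample,
# and their votes are averaged to reduce variance
model = BaggingClassifier(n_estimators=50, random_state=0)
acc = cross_val_score(model, X, y, cv=10).mean()  # 10-fold cross validation
print(f"mean accuracy: {acc:.4f}")
```

Averaging many high-variance trees trained on resampled data is the reason bagging tends to beat a single decision tree, consistent with the study's finding.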