• Title/Abstract/Keyword: system parameters

Search results: 2,376 (processing time: 0.03 s)

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems / Vol.24 No.4 / pp.137-154 / 2018
  • Animal infectious diseases such as avian influenza and foot-and-mouth disease occur almost every year and cause huge economic and social damage to the country. To prevent this, quarantine authorities have invested considerable human and material resources, but outbreaks have continued. Avian influenza was first reported in 1878 and became a national issue because of its high lethality. Foot-and-mouth disease is regarded as the most critical animal infectious disease internationally. In countries where it has not spread, it is recognized as an economic or political disease, because it restricts international trade by complicating the import of processed and unprocessed livestock, and because quarantine is costly. In a society where the whole nation is connected within a single zone of daily life, the spread of infectious disease cannot be prevented completely; there is therefore a need to detect an outbreak and act before the disease spreads. As with human infectious diseases, once an animal case is confirmed, an epidemiological investigation of the diagnosed subject is carried out and control measures are taken according to its results. The foundation of such an investigation is figuring out where a subject has been and whom it has contacted. From a data perspective, this can be defined as predicting the cause of an outbreak, its location, and future infections by collecting and analyzing geographic and relational data. Recently, attempts have been made to build infectious-disease prediction models using big data and deep learning, but model-building studies and case reports remain scarce.
KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis of vehicle movement data. A more accurate model was then constructed with machine learning algorithms such as logistic regression, lasso, support vector machines, and random forests. In particular, the 2017 model added the risk of diffusion to facilities, and its performance was improved by tuning the model's hyper-parameters in various ways. The confusion matrix and ROC curve show that the 2017 model is superior to the earlier machine learning model. The difference between the 2016 and 2017 models is that the later model also used visit information on facilities such as feed factories and slaughterhouses, and its bird-livestock information, previously limited to chicken and duck, was expanded to include goose and quail. In addition, an explanation of the results was added in 2017 to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system built on big data about hazardous-vehicle movement, farms, and the environment. Its significance is that it describes the evolution of a prediction model using big data in the field; the model is expected to become more complete once virus characteristics are taken into account. This will contribute to data utilization and analysis-model development in related fields, and we expect the system constructed in this study to provide more effective prevention.
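The model-comparison step described in this abstract can be sketched as below. This is a minimal illustration, not the study's pipeline: the synthetic features stand in for vehicle-movement and facility-visit data, and the lasso variant is realized as L1-regularized logistic regression.

```python
# Hypothetical sketch of comparing the four algorithms named in the
# abstract (logistic regression, lasso, SVM, random forest) on
# synthetic stand-in data for farm-risk classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in for vehicle-movement / facility-visit features
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
for name, acc in sorted(scores.items()):
    print(f"{name}: {acc:.3f}")
```

In practice the abstract's hyper-parameter tuning would replace the fixed settings above with a grid or randomized search, and accuracy would be supplemented with the confusion matrix and ROC curve the authors report.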

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / Vol.25 No.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data in a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status is defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at close distance, and whether the user is actively conversing with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks is proposed for classifying accompaniment and conversation. First, a data preprocessing method is introduced that consists of time synchronization of multimodal data from different physical sensors, data normalization, and sequence-data generation; nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors.
Normalization is performed on each x, y, z axis of the sensor data, and sequence data are generated with a sliding window. The sequences are input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, in order to preserve the temporal information of the sequence. The LSTM recurrent network then receives the feature maps and learns long-term dependencies from them; it consists of two layers, each with 128 cells, and the extracted features are classified with a softmax classifier. The loss function is cross entropy, and the weights are initialized randomly from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimizer with a mini-batch size of 128, and dropout is applied to the inputs of the LSTM layers to prevent overfitting. The initial learning rate is 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. On these data, the model classified accompaniment and conversation with accuracies of 98.74% and 98.83%, respectively; both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor synchronization methods that minimize timestamp differences, and on transfer learning methods that adapt models trained on the training data to evaluation data drawn from a different distribution.
Such a model is expected to show robust recognition performance against changes in the data that were not considered at training time.
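The architecture described in this abstract can be sketched as follows. Layer counts follow the abstract (3 convolutional layers with no pooling, 2 LSTM layers of 128 cells, dropout on the LSTM inputs, softmax output); the channel counts, kernel sizes, sensor-channel count, and window length are illustrative assumptions, not values from the paper.

```python
# Minimal PyTorch sketch of the CNN-LSTM classifier in the abstract.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=9, n_classes=2):
        super().__init__()
        # 3 convolutional layers, no pooling, preserving temporal length
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)       # dropout on LSTM inputs
        self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)  # softmax applied via the loss

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)     # -> (batch, time, features)
        out, _ = self.lstm(self.dropout(h))
        return self.fc(out[:, -1])           # logits from last time step

model = CNNLSTM()
logits = model(torch.randn(4, 9, 128))      # 4 windows of 128 samples
print(logits.shape)                          # torch.Size([4, 2])
```

Training would pair these logits with `nn.CrossEntropyLoss` and the Adam optimizer at the abstract's learning rate of 0.001 with per-epoch exponential decay.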

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / Vol.21 No.4 / pp.1-16 / 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to World Health Organization statistics, approximately 1.24 million deaths occurred on the world's roads in 2010. To reduce future accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, and driver-training programs. Records of traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between accidents and related factors such as vehicle design, road design, weather, and driver behavior; the insight derived from such analysis can inform accident-prevention measures. Traffic accident data mining is the activity of finding useful, not-yet-well-known knowledge about such relationships that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most have focused on predicting accident risk from accident-related factors, using supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks. However, the prediction models these algorithms derive are too complex for people to understand, because their main purpose is prediction, not explanation of the data. Some studies use unsupervised clustering to divide the data into groups, but the derived groups themselves are still not easy to interpret, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive knowledge about the target domain in a comprehensible form: they produce a set of if-then rules representing the relationship between the target feature and the other features.
Rules are fairly easy for people to understand, so they help provide insight and comprehensible results. Association rule learning and subgroup discovery are representative rule-based learning methods for descriptive tasks; they have been used in areas ranging from transaction analysis and accident data analysis to detecting statistically significant patient risk groups and discovering key persons in social communities. We use both association rule learning and subgroup discovery to find useful patterns in a traffic accident dataset whose features include driver profile, accident location, accident type, vehicle information, and regulation violations. Association rule learning, an unsupervised method, searches for frequent item sets in the data and translates them into rules. In contrast, subgroup discovery is a supervised method that discovers rules about user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule-learning algorithms. After a set of rules is derived, postprocessing steps make the rule set more compact and easier to understand by removing uninteresting or redundant rules. We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised mode to compare these rule-based learning algorithms. The experiments reveal that association rule learning, in its pure unsupervised mode, can discover hidden relationships among the features.
Under a supervised setting with a combinatorial target feature, however, subgroup discovery finds good rules much more easily than association rule learning, which requires considerable effort to tune its parameters.
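The frequent-itemset-to-rule translation described above can be sketched in a few lines. The records, item names, and thresholds here are invented for illustration and are not from the paper's dataset; the support and confidence definitions are the standard ones.

```python
# Toy association rule learning on accident-like records:
# find frequent item pairs, then derive if-then rules with
# support = P(A and B) and confidence = P(B | A).
from itertools import combinations

records = [
    {"night", "wet_road", "speeding"},
    {"night", "wet_road", "injury"},
    {"day", "dry_road", "speeding"},
    {"night", "wet_road", "injury"},
    {"night", "dry_road"},
]

def support(itemset):
    """Fraction of records containing every item in the set."""
    return sum(itemset <= r for r in records) / len(records)

# frequent pairs: support >= 0.4
items = sorted(set().union(*records))
frequent = [frozenset(p) for p in combinations(items, 2)
            if support(set(p)) >= 0.4]

# translate each frequent pair into rules A -> B in both directions
for pair in frequent:
    for a in pair:
        (b,) = pair - {a}
        conf = support(pair) / support({a})
        print(f"{{{a}}} -> {{{b}}}  "
              f"support={support(pair):.2f} confidence={conf:.2f}")
```

A subgroup discovery run would instead fix a target feature (for example, `injury`) and search for antecedents whose target distribution is unusually different from the whole dataset's, which is why it benefits from the synthetic target feature the abstract describes.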

A Theoretical Model for the Analysis of Residual Motion Artifacts in 4D CT Scans (이론적 모델을 이용한 4DCT에서의 Motion Artifact 분석)

  • Kim, Tae-Ho;Yoon, Jai-Woong;Kang, Seong-Hee;Suh, Tae-Suk
    • Progress in Medical Physics / Vol.23 No.3 / pp.145-153 / 2012
  • In this study, we quantify the residual motion artifact in 4D-CT scans using a dynamic lung phantom that simulates respiratory target motion, and we propose a simple one-dimensional theoretical model to explain and characterize the source of motion artifacts in 4D-CT scanning. We set up regular 1D sinusoidal motion and adjusted three levels of amplitude (10, 20, 30 mm) with a fixed period (4 s). The 4D-CT scans were acquired in helical mode, with phase information provided by a belt-type respiratory monitoring system. The images were sorted into ten phase bins ranging from 0% to 90%. The reconstructed images were then imported into a treatment planning system (CorePLAN, SC&J) for target delineation using a fixed contour window, and the dimensions of the three targets were measured along the direction of motion. The target dimension in each phase image showed the same trend. The error was minimal at the 50% phase in all cases (10, 20, 30 mm): relative to the static target diameter (2 cm), ΔS (the change in target dimension) for the 10, 20, and 30 mm amplitudes was 0 (0%), 0.1 (5%), and 0.1 (5%) cm, respectively. The error was maximal at the 30% and 80% phases, where ΔS for the 10, 20, and 30 mm amplitudes was 0.2 (10%), 0.7 (35%), and 0.9 (45%) cm, respectively. Based on these results, we analyze the residual motion artifact using the simple one-dimensional theoretical model, for which we also developed a simulation program. Our results explain the effect of residual motion on the target displacement at each phase and show that the residual motion artifact is governed by the target velocity at each phase. This study focuses on providing a more intuitive understanding of the residual motion artifact and on explaining the relationships among the motion parameters of the scanner, the treatment couch, and the tumor. In conclusion, our results can help in choosing the appropriate reconstruction phase and CT parameters to reduce the residual motion artifact in 4D-CT.
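The velocity dependence the abstract reports can be illustrated with a back-of-the-envelope model: if the target follows z(t) = A·cos(2πt/T), the apparent blurring at a given phase scales with the target speed there times the time the target spends in the imaging window. This is a hedged reconstruction of the idea, not the paper's model; the imaging-window duration `t_scan_s` is an invented parameter.

```python
# Toy 1D model: residual-artifact magnitude ~ |target velocity| * scan time.
# For z(t) = A*cos(2*pi*t/T), velocity is -A*(2*pi/T)*sin(2*pi*phase),
# which vanishes at the 50% phase (full exhale) and peaks mid-cycle,
# matching the phantom result (minimum error at 50%, maximum near 30%/80%).
import math

def delta_s(amplitude_mm, period_s, phase_fraction, t_scan_s=0.5):
    """Approximate target-dimension change (mm) at a given phase."""
    v = -amplitude_mm * (2 * math.pi / period_s) \
        * math.sin(2 * math.pi * phase_fraction)
    return abs(v) * t_scan_s

for amp in (10, 20, 30):
    print(amp, "mm:",
          "50% phase:", round(delta_s(amp, 4.0, 0.50), 2),
          "30% phase:", round(delta_s(amp, 4.0, 0.30), 2))
```

The absolute numbers depend entirely on the assumed `t_scan_s`; only the phase and amplitude trends are meant to mirror the measured ΔS pattern.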

A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE: Information Networking / Vol.29 No.3 / pp.286-298 / 2002
  • The Remote Procedure Call (RPC) has traditionally been used for inter-process communication (IPC) among processes in distributed computing environments. As distributed applications have grown more complicated, the mobile agent paradigm for IPC has emerged. Because there are several IPC paradigms, research evaluating and comparing their performance has appeared recently. However, the performance models used in previous research did not reflect real distributed computing environments correctly, because they did not consider the elements required to provide security services. Since real distributed environments are open, they are vulnerable to a variety of attacks; to execute applications securely, security services that protect applications and information against those attacks must be considered. In this paper, we evaluate and compare the performance of RPC with that of the mobile agent paradigm. We examine the security services needed to execute applications securely and propose new performance models that take those services into account. We design performance models, describing an information retrieval system backed by N database services, using Petri nets, and we compare the two paradigms by assigning numerical values to the parameters and measuring execution time. The comparison of the two performance models with security services for secure communication shows that the execution time of the RPC model increases sharply because of the many inter-host communications protected by heavyweight cryptography, while the execution time of the mobile agent model increases only gradually because the mobile agent paradigm reduces the amount of communication between hosts.
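The qualitative result above can be illustrated with a crude cost model, which is not the paper's Petri-net model: RPC pays the network-plus-cryptography cost on every one of the N queries, while the mobile agent pays it only for one encrypted migration out and back. All cost constants below are invented for the sketch.

```python
# Toy cost model contrasting RPC and mobile-agent IPC under security
# overhead. Units are arbitrary; only the growth with n matters.
def rpc_time(n, net=10.0, crypto=5.0, query=1.0):
    # each of n requests crosses the network encrypted, both directions
    return n * (2 * (net + crypto) + query)

def agent_time(n, net=10.0, crypto=5.0, query=1.0, local=0.2):
    # one encrypted migration out and back, then n local queries
    return 2 * (net + crypto) + n * (query + local)

for n in (1, 10, 50):
    print(f"n={n:3d}  RPC={rpc_time(n):7.1f}  agent={agent_time(n):7.1f}")
```

With these constants the agent is slightly slower for a single query (it still pays the round-trip migration) but wins rapidly as N grows, which is the shape of the result the paper reports.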

Feasibility of Mixed-Energy Partial Arc VMAT Plan with Avoidance Sector for Prostate Cancer (전립선암 방사선치료 시 회피 영역을 적용한 혼합 에너지 VMAT 치료 계획의 평가)

  • Hwang, Se Ha;NA, Kyoung Su;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy / Vol.32 / pp.17-29 / 2020
  • Purpose: The purpose of this work was to investigate the dosimetric impact of a mixed-energy partial-arc technique on prostate cancer VMAT. Materials and Methods: This study involved prostate-only patients planned with 70 Gy in 30 fractions to the planning target volume (PTV). The femoral heads, bladder, and rectum were considered organs at risk (OARs). Mixed-energy partial arcs (MEPA) were generated with gantry angles of 180°~230° and 310°~50° for the 6 MV arc, and 130°~50° and 310°~230° for the 15 MV arc. Each arc used an avoidance sector: gantry angles 230°~310° and 50°~130° for the first arc, and 50°~310° for the second. The two arcs were then summed, and the plan was compared with 6 MV 1-arc and 6 MV, 10 MV, and 15 MV 2-arc plans in terms of the dosimetric parameters of each structure: maximum dose, mean dose, D2%, homogeneity index (HI), and conformity index (CI) for the PTV; maximum dose, mean dose, V70Gy, V50Gy, V30Gy, and V20Gy for the OARs; and monitor units (MU). Results: In MEPA, the maximum dose, mean dose, and D2% were lower than in the 6 MV 1-arc plan (p<0.0005). However, the average maximum dose was 0.24%, 0.39%, and 0.60% (p<0.450, 0.321, 0.139) higher than in the 6 MV, 10 MV, and 15 MV 2-arc plans, respectively, and D2% was 0.42%, 0.49%, and 0.59% (p<0.073, 0.087, 0.033) higher than in the compared plans. The average mean dose was 0.09% lower than in the 10 MV 2-arc plan but 0.27% and 0.12% (p<0.184, 0.521) higher than in the 6 MV and 15 MV 2-arc plans, respectively. HI was 0.064±0.006, the lowest value (p<0.005, 0.357, 0.273, 0.801) among all the plans. For CI there were no significant differences: 1.12±0.038 for MEPA versus 1.12±0.036, 1.11±0.024, 1.11±0.030, and 1.12±0.027 for the 6 MV 1-arc and the 6 MV, 10 MV, and 15 MV 2-arc plans, respectively. MEPA produced a significantly lower rectal dose; in particular, V70Gy, V50Gy, V30Gy, and V20Gy were 3.40, 16.79, 37.86, and 48.09, lower than in the other plans. For the bladder, V30Gy and V20Gy were lower than in the other plans.
However, the mean doses of the two femoral heads were 9.69±2.93 and 9.88±2.5 Gy, which were 2.8 Gy~3.28 Gy higher than in the other plans. The mean MU of MEPA was 19.53% lower than that of the 6 MV 1-arc plan and 5.7% lower than that of the 10 MV 2-arc plan. Conclusion: This study of prostate radiotherapy demonstrated that MEPA VMAT has the potential to minimize doses to the OARs and improve homogeneity in the PTV, at the expense of a moderate increase in the maximum and mean dose to the femoral heads.
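The plan-quality indices compared above can be computed from dose data as sketched below. The abstract does not state which formulas were used, so this sketch adopts common definitions: HI = (D2% − D98%) / D50% (ICRU Report 83) and a simple CI of prescription-isodose volume over PTV volume; the dose values are synthetic toy data.

```python
# Hedged helpers for the homogeneity and conformity indices, using
# common (assumed, not the paper's stated) definitions.
import numpy as np

def hi(ptv_doses):
    """ICRU 83 homogeneity index from per-voxel PTV doses (lower = better)."""
    # Dx% = minimum dose to the hottest x% of volume, i.e. the
    # (100 - x)th percentile of the per-voxel dose distribution
    d2, d50, d98 = np.percentile(ptv_doses, [98, 50, 2])
    return (d2 - d98) / d50

def ci(isodose_volume_cc, ptv_volume_cc):
    """Simple conformity index: prescription isodose volume / PTV volume."""
    return isodose_volume_cc / ptv_volume_cc

# toy per-voxel PTV doses around the 70 Gy prescription
ptv_doses = np.random.default_rng(0).normal(70.0, 1.0, 10_000)
print("HI:", round(hi(ptv_doses), 3), " CI:", round(ci(112.0, 100.0), 2))
```

Other CI variants (e.g. the Paddick index) penalize irradiation of normal tissue as well; which one a planning system reports should be checked before comparing values across studies.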