Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of differences in capital structure and debt-to-equity ratio, construction companies' bankruptcies are more difficult to forecast than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, places a greater burden on banks that lend to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but these models target companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. Because of this unique capital structure, the criteria used to judge the financial risk of companies in general cannot be applied effectively to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula and classifies the result into three categories, evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. With the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are extracted from a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group. The machine learning models previously used in bankruptcy forecasting include Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining these models. Existing studies, whether using the traditional Z-score technique or machine learning, focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we show that AdaBoost is the most appropriate forecasting model for construction companies when analyzed by company size. We classified construction companies into three groups - large, medium, and small - based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
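As a rough illustration of this kind of approach (not the authors' exact pipeline), the sketch below trains an AdaBoost classifier on synthetic financial-ratio features and scores it with 10-fold cross-validation; the feature set, data, and parameters are illustrative assumptions only.

```python
# Hedged sketch of AdaBoost-based bankruptcy classification on synthetic
# data; the paper's actual features and data set are not reproduced here.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical financial ratios (e.g. debt-to-equity, current ratio, ...)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 2, size=500)  # 1 = bankrupt, 0 = solvent

# AdaBoost sequentially reweights misclassified firms across iterations.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```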

Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.1-17 / 2015
  • In this paper, we report a prediction model for accidents in the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise enlisted men. Despite their efforts, many commanders have failed to prevent accidents by their subordinates. One important duty of officers is to take care of their subordinates and prevent unexpected accidents, but accidents are hard to prevent, so a proper method must be found. Our motivation is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core of preventing such accidents is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings; this requires considerable time and effort, and the results differ significantly depending on the capabilities of the commanders. In this paper, we seek to predict accidents with objective data that can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper applies data mining to identify soldiers' interests and to predict accidents using internal and external (SNS) data. We propose a combination of topic analysis and the decision tree method, conducted in two steps. First, topic analysis is performed on the enlisted men's SNS data. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the occurrence of any accident. To analyze the SNS data, we require tools such as text mining and topic analysis; we used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach to finding soldiers' interests has three main phases: collecting data, topic analysis, and converting the topic analysis results into scores used as independent variables. In the first phase, we collect enlisted men's SNS data by commander's ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from them; for simplicity, 5 topics (vacation, friends, stress, training, and sports) were extracted from 20,000 articles. In the third phase, we quantify these 5 topics as personal scores and include them among the independent variables, which also comprise 15 internal data sets. We then build two decision trees: the first uses internal data only, and the second uses the external (SNS) data as well as the internal data. Comparing the misclassification rates from SAS Enterprise Miner, the first model's misclassification rate is 12.1%, while the second model's is 7.8%; that is, the second model predicts accidents with an accuracy of approximately 92%. The gap between the two models is 4.3 percentage points. Finally, we test whether the difference between them is significant using the McNemar test; the result is significant (p-value: 0.0003). This study has two limitations. First, the results cannot be generalized, mainly because the experiment is limited to a small number of enlisted men's records. Second, various independent variables in the decision tree model are used as categorical variables instead of continuous variables, so information is lost. Despite extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military, and this study is expected to contribute to the prevention of accidents through scientific analysis of enlisted men and their proper management.
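For readers who want to see the model-comparison step in code, here is a minimal sketch of McNemar's test on two classifiers' predictions; the labels are synthetic, and only the reported error rates (12.1% and 7.8%) are mimicked, so the p-value will not match the paper's.

```python
# Minimal sketch: McNemar's test comparing two models' correctness on
# synthetic labels whose error rates mimic the reported 12.1% and 7.8%.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
# Flip each prediction with probability equal to that model's error rate.
pred_internal = np.where(rng.random(500) < 0.121, 1 - y_true, y_true)
pred_combined = np.where(rng.random(500) < 0.078, 1 - y_true, y_true)

ok_a, ok_b = pred_internal == y_true, pred_combined == y_true
# 2x2 table of (model A correct?, model B correct?) counts.
table = [[np.sum(ok_a & ok_b), np.sum(ok_a & ~ok_b)],
         [np.sum(~ok_a & ok_b), np.sum(~ok_a & ~ok_b)]]
print("p-value:", mcnemar(table, exact=True).pvalue)
```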

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM can be successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce multi-class computation time, but they may deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often produce a default classifier with a skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights of misclassified observations through iterations, so observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted. Boosting thus attempts to produce new classifiers that are better able to predict the examples on which the current ensemble performs poorly; in this way, it reinforces training on the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
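To make the geometric mean criterion concrete, here is a small sketch (our own illustration, not the paper's code) that computes geometric mean-based accuracy from per-class recalls; a classifier that entirely ignores a minority class gets a geometric mean of zero, which is what makes the measure suitable for imbalanced rating data.

```python
# Sketch: geometric mean of per-class recalls, the accuracy notion that
# MGM-Boost builds on; zero recall in any class zeroes the whole score.
import numpy as np
from sklearn.metrics import confusion_matrix

def geometric_mean_accuracy(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    recalls = np.diag(cm) / cm.sum(axis=1)  # recall per rating class
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Example: three rating classes, one of them never predicted correctly.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 1, 0]                     # class 2 always missed
print(geometric_mean_accuracy(y_true, y_pred))  # 0.0
```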

A Study of Guidelines for Genetic Counseling in Preimplantation Genetic Diagnosis (PGD) (착상전 유전진단을 위한 유전상담 현황과 지침개발을 위한 기초 연구)

  • Kim, Min-Jee;Lee, Hyoung-Song;Kang, Inn-Soo;Jeong, Seon-Yong;Kim, Hyon-J.
    • Journal of Genetic Medicine / v.7 no.2 / pp.125-132 / 2010
  • Purpose: Preimplantation genetic diagnosis (PGD), also known as embryo screening, is a pre-pregnancy technique used to identify genetic defects in embryos created through in vitro fertilization. PGD is considered a means of prenatal diagnosis of genetic abnormalities. It is used when one or both genetic parents has a known genetic abnormality; testing is performed on an embryo to determine whether it also carries the abnormality. The main advantage of PGD is the avoidance of selective pregnancy termination, as it imparts a high likelihood that the baby will be free of the disease under consideration. The application of PGD to genetic practices, reproductive medicine, and genetic counseling is becoming a key component of fertility practice because of the need to develop a custom PGD design for each couple. Materials and Methods: In this study, a survey on the contents of genetic counseling in PGD was carried out via direct contact or e-mail with patients and specialists who had experienced PGD during the three months from February to April 2010. Results: A total of 91 persons responded to the survey: 60 patients, of whom 49 had a chromosomal disorder and 11 a single gene disorder, and 31 PGD specialists. Analysis of the survey results revealed that all respondents were well aware of the importance of genetic counseling in all steps of PGD, including planning, operation, and follow-up. The patient group responded that genetic counseling should cover the possibility of unexpected results (51.7%), genetic risk assessment and recurrence risk (46.7%), reproduction options (46.7%), the procedure and limitations of PGD (43.3%), and information on PGD technology (35.0%). Approximately 96.7% of the specialists replied that a non-M.D. genetic counselor is necessary for effective and systematic genetic counseling in PGD, because it is difficult for physicians to offer satisfactory information to patients due to a lack of counseling time and of specific knowledge of the disorders. Conclusions: The survey provides important insight into the overall present situation of genetic counseling for PGD in Korea. The results demonstrate a general awareness that genetic counseling is essential for PGD, suggesting that appropriate genetic counseling may play an important role in the success of PGD. The establishment of genetic counseling guidelines for PGD may contribute to better planning and management strategies for PGD.

An Empirical Study on Perceived Value and Continuous Intention to Use of Smart Phone, and the Moderating Effect of Personal Innovativeness (스마트폰의 지각된 가치와 지속적 사용의도, 그리고 개인 혁신성의 조절효과)

  • Han, Joonhyoung;Kang, Sungbae;Moon, Taesoo
    • Asia Pacific Journal of Information Systems / v.23 no.4 / pp.53-84 / 2013
  • With the rapid development of ICT (Information and Communications Technology), new services created by the convergence of mobile networks and application technology began to appear. Today, the smart phone, with its new ICT convergence capabilities, is exceedingly popular and very useful as a tool for developing business opportunities. Previous studies based on the Technology Acceptance Model (TAM) suggested critical factors to consider for acquiring new customers and retaining existing users in the smart phone market, but they were limited by their focus on technology acceptance rather than a value-based approach. Prior studies on customers' adoption of electronic devices such as smart phones showed that antecedents such as perceived benefit and perceived sacrifice can explain the causality between what is perceived and what is acquired across diverse contexts. This research therefore conceptualizes perceived value as a trade-off between perceived benefit and perceived sacrifice, and examines perceived value to understand users' continuous intention to use smart phones. The purpose of this study is to investigate the structural relationships between the benefits (quality, usefulness, playfulness) and sacrifices (technicality, cost, security risk) perceived by smart phone users, perceived value, and continuous intention to use. In addition, this study analyzes the differences between two subgroups of smart phone users distinguished by their degree of personal innovativeness, which can help explain the moderating effect between how perceptions are formed and continuous intention to use. This study surveyed smart phone users through e-mail, direct mail, and interviews; empirical analysis based on 330 respondents was conducted to test the hypotheses. First, among the three perceived-benefit factors, perceived usefulness has the highest positive impact on perceived value, followed by perceived playfulness and perceived quality. Second, among the three perceived-sacrifice factors, perceived cost has a significantly negative impact on perceived value, whereas technicality and security risk have no significant impact. Perceived value, in turn, has a significant direct impact on continuous intention to use. In this regard, marketing managers of smart phone companies should pay more attention to improving the task efficiency and performance of smart phones, including their rate systems. Additionally, to test the moderating effect of personal innovativeness, this research conducted a multi-group analysis by the users' degree of personal innovativeness. In the high-innovativeness group, perceived usefulness had the highest positive influence on perceived value; in the low-innovativeness group, perceived playfulness was the strongest positive factor. The result for the high-innovativeness group suggests that innovators and early adopters can cope with higher levels of cost and risk and expect to develop more positive intentions toward higher performance through the use of an innovation. Hedonic behavior in the low-innovativeness group, by contrast, reflects self-fulfilling value to the users rather than the instrumental value of the utilitarian perspective. With regard to perceived sacrifice, both groups showed negative impacts on perceived value, though the high-innovativeness group showed smaller negative impacts than the low-innovativeness group across all factors. In both groups, perceived cost had the strongest negative influence on perceived value. In the high-innovativeness group, perceived technicality was a positive factor influencing perceived value, while in the low-innovativeness group, perceived security risk was the second strongest negative factor. Unlike previous studies, this study focuses on the factors influencing continuous intention to use a smart phone, rather than initial purchase and adoption. First, perceived value, which has been used to explain users' adoption behavior, mediates between perceived benefit, perceived sacrifice, and continuous intention to use. Second, perceived usefulness has the highest positive influence on perceived value, while perceived cost has a significant negative influence. Third, perceived value, as in prior studies, has a strong positive influence on continuous intention to use. Fourth, in the multi-group analysis, perceived usefulness had the highest positive influence on perceived value in the high-innovativeness group, while perceived playfulness was the strongest positive factor in the low-innovativeness group. This shows that early adopters intend to adopt smart phones as a tool to make their work useful, whereas market followers intend to adopt them as a tool to make their time enjoyable. In terms of marketing strategy, managers should pay more attention to identifying customers' lifetime value by phase of smart phone adoption, as well as to understanding customers' intention to accept risk and uncertainty positively. The primary academic contribution of this study is employing the Value-based Adoption Model (VAM) as a conceptual foundation, in contrast to the TAM widely used in previous studies. VAM is more useful than TAM for understanding continuous intention to use a smart phone as a new IT utility adopted by individuals. Perceived value dominantly influences continuous intention to use, and the results justify our research model's treatment of each antecedent of perceived value as a benefit or a sacrifice component. While TAM is widely used for user acceptance of new technology, it is limited in explaining the adoption of a new IT product like the smart phone, where customers' behavioral intention depends on the perceived value of the object. In terms of theoretical approach, this study contributes to the development, design, and marketing of smart phones. The practical contribution of this study is to suggest useful decision alternatives for formulating marketing strategies to acquire and retain long-term smart phone customers. Since potential customers weigh both benefit and sacrifice when evaluating the value of a smart phone, marketing managers must put more effort into creating customer value of low sacrifice and high benefit so that customers will continue to use smart phones. In particular, this study shows that innovators and early adopters with high innovativeness show higher adoption than market followers with low innovativeness in terms of perceived usefulness and perceived cost. To formulate a marketing strategy for smart phone diffusion, marketing managers should identify not only their customers' benefit and sacrifice components but also their customers' lifetime value.
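For readers curious about the shape of such a structural model, the sketch below fits a simplified VAM-style path model with the semopy package on synthetic data; the path coefficients, variable names, and data are all illustrative assumptions, not the authors' measurement model or results.

```python
# Hedged sketch of a simplified VAM-style path model in semopy, fit on
# synthetic data with invented coefficients; not the paper's model.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(2)
n = 330  # matches the paper's sample size
df = pd.DataFrame(rng.normal(size=(n, 4)),
                  columns=["usefulness", "playfulness", "quality", "cost"])
# Benefits raise perceived value, sacrifice lowers it; value drives intention.
df["value"] = (0.5 * df.usefulness + 0.3 * df.playfulness
               + 0.2 * df.quality - 0.4 * df.cost
               + rng.normal(scale=0.5, size=n))
df["intention"] = 0.7 * df["value"] + rng.normal(scale=0.5, size=n)

desc = """
value ~ usefulness + playfulness + quality + cost
intention ~ value
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # estimated path coefficients and p-values
```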

The Effects of Storage of Human Saliva on DNA Isolation and Stability (인체타액의 보관이 DNA 분리와 안정도에 미치는 영향)

  • Kim, Yong-Woo;Kim, Young-Ku
    • Journal of Oral Medicine and Pain / v.31 no.1 / pp.1-16 / 2006
  • The most important progress in the diagnostic sciences is the increased sensitivity and specificity of diagnostic procedures due to the development of micromethodologies and the increasing availability of immunological and molecular biological reagents. These technological advances have led to consideration of the diagnostic use of saliva for an array of analytes and as a DNA source. The purpose of the present study was to compare DNA from saliva with DNA from blood and buccal swabs, to evaluate the diagnostic and forensic applications of saliva, to investigate the changes in genomic DNA in saliva according to the storage temperature and period of the samples, and to evaluate the integrity of DNA from saliva stored under various conditions by PCR analysis. Peripheral venous blood, unstimulated whole saliva, stimulated whole saliva, and buccal swabs were obtained from 10 healthy subjects (mean age: 29.9 ± 9.8 years), and genomic DNA was extracted using a commercial kit. To study the effects of various storage conditions on genomic DNA from saliva, stimulated whole saliva was obtained from 20 healthy subjects (mean age: 32.3 ± 6.6 years). Aliquots of fresh saliva were stored at room temperature, 4°C, −20°C, and −70°C; saliva samples subjected to lyophilization and dry-out procedures were stored at room temperature. After 1, 3, and 5 months, the same experiment was performed to investigate the changes in genomic DNA in the saliva samples; for aliquots stored at room temperature and dried-out samples, results at 2 weeks were also included. The integrity of DNA from saliva stored under the various conditions was evaluated by PCR amplification of a β-globin gene fragment (989 bp). The results were as follows: 1. The concentration of genomic DNA extracted from saliva was lower than that from blood (p<0.05), but there were no significant differences among the various types of saliva samples. The purities of genomic DNA extracted from stimulated whole saliva and lyophilized saliva were significantly higher than that from blood (p<0.05). The purity of genomic DNA extracted from buccal swabs was lower than those from the various saliva samples (p<0.05). 2. The concentration of genomic DNA from saliva stored at room temperature showed gradual reduction after 1 month and decreased significantly at 3 and 5 months (p<0.05 and p<0.01, respectively). The purities of DNA from saliva stored for 3 and 5 months differed significantly from those of fresh saliva and saliva stored for 1 month (p<0.05). 3. For saliva stored at 4°C and −20°C, there were no significant changes in the concentration of genomic DNA within 3 months; the concentration decreased significantly at 5 months (p<0.05). 4. There were no significant differences in the concentration of genomic DNA from saliva stored at −70°C or from lyophilized saliva over the storage period, though the concentration showed a decreasing tendency at 5 months. 5. The concentration of genomic DNA extracted immediately from saliva dried on a Petri dish was 60% of that of fresh saliva, and the concentration of DNA from dried-out saliva stored at room temperature showed rapid reduction within 2 weeks (p<0.05). 6. Amplification of the β-globin gene by PCR was successful in all lyophilized saliva samples stored for 5 months. At 1 month, the β-globin gene was successfully amplified in all saliva samples stored at −20°C and −70°C and in some samples stored at 4°C, but amplification failed in saliva stored at room temperature and in dried-out saliva.

Evaluation of Combine IGRT using ExacTrac and CBCT In SBRT (정위적체부방사선치료시 ExacTrac과 CBCT를 이용한 Combine IGRT의 유용성 평가)

  • Ahn, Min Woo;Kang, Hyo Seok;Choi, Byoung Joon;Park, Sang Jun;Jung, Da Ee;Lee, Geon Ho;Lee, Doo Sang;Jeon, Myeong Soo
    • The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.201-208 / 2018
  • Purpose: The purpose of this study is to compare and analyze the set-up errors found when using combined IGRT with ExacTrac followed by CBCT in stereotactic body radiotherapy (SBRT). Methods and materials: Patients treated with SBRT at Ulsan University Hospital from May 2014 to November 2017 were classified by treatment area: three brain, nine spine, and three pelvis. First, the set-up error was corrected with ExacTrac in the lateral (Lat), longitudinal (Lng), vertical (Vrt), roll, pitch, and yaw directions; then, after applying the ExacTrac couch shifts, CBCT was used to correct the set-up error in the Lat, Lng, Vrt, and rotation (Rtn) directions. Results: With ExacTrac, the errors in the brain region were Lat 0.18 ± 0.25 cm, Lng 0.23 ± 0.04 cm, Vrt 0.30 ± 0.36 cm, Roll 0.36 ± 0.21°, Pitch 1.72 ± 0.62°, Yaw 1.80 ± 1.21°; in the spine, Lat 0.21 ± 0.24 cm, Lng 0.27 ± 0.36 cm, Vrt 0.26 ± 0.42 cm, Roll 1.01 ± 1.17°, Pitch 0.66 ± 0.45°, Yaw 0.71 ± 0.58°; and in the pelvis, Lat 0.20 ± 0.16 cm, Lng 0.24 ± 0.29 cm, Vrt 0.28 ± 0.29 cm, Roll 0.83 ± 0.21°, Pitch 0.57 ± 0.45°, Yaw 0.52 ± 0.27°. When CBCT was performed after the couch movement, the residual errors in the brain region were Lat 0.06 ± 0.05 cm, Lng 0.07 ± 0.06 cm, Vrt 0.00 ± 0.00 cm, Rtn 0.0 ± 0.0°; in the spine, Lat 0.06 ± 0.04 cm, Lng 0.16 ± 0.30 cm, Vrt 0.08 ± 0.08 cm, Rtn 0.00 ± 0.00°; and in the pelvis, Lat 0.06 ± 0.07 cm, Lng 0.04 ± 0.05 cm, Vrt 0.06 ± 0.04 cm, Rtn 0.0 ± 0.0°. Conclusion: Combined IGRT with ExacTrac followed by CBCT during SBRT reduced patient set-up errors compared to ExacTrac alone. However, combined IGRT increases the patient set-up verification time and the absorbed dose from image acquisition. Therefore, depending on the patient's situation, using combined IGRT to reduce set-up error can increase the effectiveness of radiation treatment.

Antioxidant and Antibacterial Activities of Glycyrrhiza uralensis Fisher (Jecheon, Korea) Extracts Obtained by various Extract Conditions (한국 제천 감초(Glycyrrhiza uralensis Fisher)의 추출 조건별 추출물의 항산화 및 항균 활성 평가)

  • Ha, Ji Hoon;Jeong, Yoon Ju;Seong, Joon Seob;Kim, Kyoung Mi;Kim, A Young;Fu, Min Min;Suh, Ji Young;Lee, Nan Hee;Park, Jino;Park, Soo Nam
    • Journal of the Society of Cosmetic Scientists of Korea / v.41 no.4 / pp.361-373 / 2015
  • This study was carried out to evaluate the antioxidant and antibacterial activities of Glycyrrhiza uralensis Fisher (Jecheon, Korea) extracts obtained under various extraction conditions (85% ethanol; different heating temperatures and times), and to establish the optimal extraction conditions for applying G. uralensis as a cosmetic ingredient. The extracts obtained under the different conditions were either concentrated into powder (sample-1) or used as crude extract solutions without concentration (sample-2). The antioxidant effects were determined by free radical scavenging activity (FSC50), ROS scavenging activity (OSC50), and cellular protective effects. Antibacterial activity was determined by the minimum inhibitory concentration (MIC) against human skin flora. The DPPH free radical scavenging activity of sample-1 (100 μg/mL) was 10% higher in the group extracted for 6 h than in the group extracted for 12 h, but sample-2 did not show any significant difference. The extraction yield at the same temperature for 12 h was 2.6 times higher than for 6 h, but the total flavonoid content was only 1.1 times higher, indicating that total flavonoid content hardly increases with longer extraction time. Free radical scavenging activity, ROS scavenging activity, and cellular protective effects depended not on the extraction yield but on the total flavonoid content of the extract. The antibacterial activity of sample-1 against three skin flora (S. aureus, B. subtilis, P. acnes) was evaluated at the same concentration across the extraction conditions, and the groups extracted at 25 and 40°C showed activity 16 times higher than methyl paraben (2,500 μg/mL). In conclusion, 85% ethanol extracts of G. uralensis extracted at 40°C for 6 h showed the highest antioxidant and antibacterial activity. These results indicate that, for product manufacturing, the extraction conditions should be optimized through comprehensive evaluation of the extraction yield under various conditions, the yield of active components, and activity tests across concentrations, including the activity of the 100% extract.

Application and Expansion of the Harm Principle to the Restrictions of Liberty in the COVID-19 Public Health Crisis: Focusing on the Revised Bill of the March 2020 「Infectious Disease Control and Prevention Act」 (코로나19 공중보건 위기 상황에서의 자유권 제한에 대한 '해악의 원리'의 적용과 확장 - 2020년 3월 개정 「감염병의 예방 및 관리에 관한 법률」을 중심으로 -)

  • You, Kihoon;Kim, Dokyun;Kim, Ock-Joo
    • The Korean Society of Law and Medicine / v.21 no.2 / pp.105-162 / 2020
  • In a pandemic of infectious disease, restrictions of individual liberty have been justified in the name of public health and the public interest. In March 2020, the National Assembly of the Republic of Korea passed the revised bill of the 「Infectious Disease Control and Prevention Act」. The revised bill newly established a legal basis for forced testing and for disclosure of information on confirmed cases, and raised the penalties for violation of self-isolation and for treatment refusal. This paper examines whether and how these liberty-limiting clauses can be justified, and if so, on what ethical and philosophical grounds. The authors review theories in the philosophy of law on the justifiability of liberty-limiting measures by the state and conceptualize the dual aspect of applying the liberty-limiting principle to the infected patient. In the COVID-19 pandemic crisis, the infected person becomes the 'Patient as Victim and Vector (PVV)', who sits in the overlapping area of 'harm to self' and 'harm to others'. In order to apply the liberty-limiting principle proposed by Joel Feinberg to a pandemic with uncertainties, it is necessary to extend the harm principle from 'harm' to 'risk'. Under a crisis with many uncertainties like the COVID-19 pandemic, this shift from 'harm' to 'risk' justifies the state's preemptive limitation on individual liberty based on the precautionary principle. At the same time, this raises concerns about overcriminalization, i.e., excessive limitation of individual liberty without sufficient grounds. In this article, we propose principles for balancing the precautionary principle, which supports preemptive restrictions of liberty, against the concern of overcriminalization. A public health crisis such as the COVID-19 pandemic requires a population approach, where the 'population' rather than the 'individual' serves as the unit of analysis. We propose a second expansion of the harm principle, applied to the 'population', in order to address the public interest and public health. The new concept of 'risk to population', derived from the two arguments stated above, should be introduced to explain a public health crisis like the COVID-19 pandemic. We theorize 'the extended harm principle' to include 'risk to population' as a third liberty-limiting principle, following 'harm to others' and 'harm to self'. Lastly, we examine whether the restrictions of liberty in the revised 「Infectious Disease Control and Prevention Act」 can be justified under the extended harm principle. First, we conclude that forced isolation of infected patients can be justified in a pandemic by satisfying the 'risk to population' criterion. Second, forced examination for COVID-19 does not violate the extended harm principle either, given the high infectivity of asymptomatic infected people. Third, however, the provision of forced treatment cannot be justified under either the traditional or the extended harm principle. It is therefore necessary to add clauses to the provision in order to justify the punishment of treatment refusal, even in a pandemic.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in a chart rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting has improved in performance, long-term forecasting power remains limited, so such models are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. Those studies find a meaningful pattern, find a point that matches the pattern, and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge greatly from reality. Existing research tries to find patterns with stock price prediction power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple, because each can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, there have been no reports of their performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a real trading situation because it assumes that both the buy and the sell were executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second method, the high-low line zig-zag, the high price that meets the n-day high price line is taken as the peak, and the low price that meets the n-day low price line is taken as the valley. In the third method, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Genetic algorithms (GA) were the most suitable solution, since the number of cases in this simulation was too large to exhaustively search for patterns with high success rates. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the optimization section and the application section separately, allowing us to respond appropriately to market changes. In this study, we optimize at the level of the stock portfolio, because optimizing variables for each individual stock risks over-optimization; we selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market divided into six categories. In the results, the small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that more volatility is not always better.
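As an illustration of the third turning-point method, the following sketch (our own simplified reading of the swing wave rule, not the authors' code) marks a bar as a peak when it exceeds the n bars on each side, and as a valley when it is below them.

```python
# Sketch of the swing wave turning-point rule: a peak beats the n bars
# on each side, and a valley is below the n bars on each side.
import numpy as np

def swing_points(prices: np.ndarray, n: int = 5):
    peaks, valleys = [], []
    for i in range(n, len(prices) - n):
        left, right = prices[i - n:i], prices[i + 1:i + n + 1]
        if prices[i] > left.max() and prices[i] > right.max():
            peaks.append(i)
        elif prices[i] < left.min() and prices[i] < right.min():
            valleys.append(i)
    return peaks, valleys

# Toy usage: a noisy sine wave should yield alternating peaks and valleys.
t = np.linspace(0, 4 * np.pi, 200)
prices = 100 + 10 * np.sin(t) + np.random.default_rng(3).normal(0, 0.5, 200)
print(swing_points(prices, n=5))
```

Five consecutive turning points from such a sequence can then be matched against the reclassified M & W pattern groups described above.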