Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun; Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches building the predictive models from the perspective of two different analyses. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction time. In order to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one year, two years, or three years later. In total, six prediction models are therefore developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method; it builds decision trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST and C5.0. Among them, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the other algorithms. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records of 658 listed firms in total. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. In total, 84 variables from the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intention to conduct rights issues has become more evident. The experimental results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different types of industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider variety of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy need to be compared using other data mining techniques such as neural networks, logistic regression and SVM. Second, new prediction models should be developed and evaluated that include the variables which research on capital structure theory has identified as relevant to rights issues.
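The paper builds its rule sets with the C5.0 node in PASW Modeler. As a rough illustration only, the sketch below reproduces the same workflow (financial indices as inputs, a binary rights-issue label, a 60/40 build/test split) with scikit-learn's CART-based DecisionTreeClassifier standing in for C5.0; the file name, column names, and hyperparameters are hypothetical.

```python
# Hypothetical sketch: the paper uses C5.0 in PASW Modeler; scikit-learn's
# CART-based DecisionTreeClassifier is only an analogous stand-in.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Assumed input: one row per firm-year with the financial-analysis indices
# (categorical fields such as the industry code already numerically encoded)
# plus a binary label 'rights_issue' (issued / not issued).
data = pd.read_csv("financial_indices.csv")          # hypothetical file
X = data.drop(columns=["rights_issue"])
y = data["rights_issue"]

# 60% of the records for model building, 40% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=42, stratify=y
)

model = DecisionTreeClassifier(max_depth=6, min_samples_leaf=50, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```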

An Analysis of the Comparative Importance of Systematic Attributes for Developing an Intelligent Online News Recommendation System: Focusing on the PWYW Payment Model (지능형 온라인 뉴스 추천시스템 개발을 위한 체계적 속성간 상대적 중요성 분석: PWYW 지불모델을 중심으로)

  • Lee, Hyoung-Joo; Chung, Nuree; Yang, Sung-Byung
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.75-100 / 2018
  • Mobile devices have become an important channel for news content usage in our daily lives. However, online news readers' resistance to monetization is more serious than in other digital content businesses, such as webtoons, music, videos, and games. Since major portal sites distribute online news content free of charge to increase their traffic, customers have become accustomed to free news content; this makes it difficult for online news providers to switch their policies on business models (i.e., monetization policy). As a result, most online news providers are highly dependent on the advertising business model, which can lead to an increasing number of false, exaggerated, or sensational advertisements inside the news website to maximize advertising revenue. To reduce this advertising dependency, many online news providers have attempted to convert their 'free' readers into 'paid' users, but most of them failed. Recently, however, some online news media have successfully applied the Pay-What-You-Want (PWYW) payment model, which allows readers to voluntarily pay fees for their favorite news content. These successful cases suggest to the managers of online news content providers that the PWYW model can serve as an alternative business model. In this study, therefore, we collected 379 online news articles from Ohmynews.com, which has been successfully employing the PWYW model, and analyzed the comparative importance of systematic attributes of online news content for readers' voluntary payment. More specifically, we derived six systematic attributes (i.e., Type of Article Title, Image Stimulation, Article Readability, Article Type, Dominant Emotion, and Article-Image Similarity) and three or four levels within each attribute based on previous studies. We then conducted content analysis to measure five of the attributes; the Article Readability attribute was measured by the Flesch readability score. Before the main content analysis, the face reliabilities of the chosen attributes were checked by three doctoral-level researchers with 37 sample articles, and the inter-coder reliabilities of the three coders were verified. The main content analysis was then conducted for two months from March 2017 with 379 online news articles. All 379 articles were reviewed by the same three coders, and 65 articles that showed inconsistency among coders were excluded before the conjoint analysis. Finally, we examined the comparative importance of the six systematic attributes (Study 1) and of the levels within each attribute (Study 2) through conjoint analysis of 314 online news articles. From the results of the conjoint analysis, we found that Article Readability, Article-Image Similarity, and Type of Article Title are the most significant factors affecting online news readers' voluntary payment. First, if the readability level of an online news article is in line with the readers' reading level, readers will voluntarily pay more. Second, similarity between the content of the article and the image within it helps readers accept the information and transmits the message of the article more effectively. Third, readers expect the article title to reveal the content of the article, and this expectation influences their understanding of and satisfaction with the article.
Therefore, it is necessary to write articles at an appropriate readability level and to use images and titles well matched with the content in order to make readers voluntarily pay more. We also examined the comparative importance of the levels within each attribute in more detail. Based on the findings of the two studies, two major and nine minor propositions are suggested for future empirical research. This study has academic implications in that it is one of the first studies to apply content analysis and conjoint analysis together to examine readers' actual voluntary payment behavior rather than their intention to pay. In addition, online news content creators, providers, and managers can find practical insights in this research regarding how they should produce news content to make readers voluntarily pay more for their online news content.
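As a rough sketch of how a ratings-based conjoint analysis can yield the relative importance of such attributes, the snippet below dummy-codes the six systematic attributes, estimates part-worths by OLS, and reports each attribute's share of the total part-worth range. The data file, column names, and the use of the payment amount as the response variable are assumptions for illustration, not the authors' actual coding scheme.

```python
# Hypothetical sketch of a ratings-based conjoint analysis:
# dummy-coded attribute levels -> OLS part-worths -> relative importance (%).
import pandas as pd
import statsmodels.formula.api as smf

articles = pd.read_csv("coded_articles.csv")  # hypothetical: one row per coded article

attributes = ["title_type", "image_stimulation", "readability_level",
              "article_type", "dominant_emotion", "article_image_similarity"]

# Voluntary payment per article regressed on the coded systematic attributes.
formula = "voluntary_payment ~ " + " + ".join(f"C({a})" for a in attributes)
fit = smf.ols(formula, data=articles).fit()

# Relative importance = range of an attribute's part-worths / sum of all ranges.
ranges = {}
for a in attributes:
    worths = [0.0] + [v for k, v in fit.params.items() if k.startswith(f"C({a})")]
    ranges[a] = max(worths) - min(worths)

total = sum(ranges.values())
for a, r in sorted(ranges.items(), key=lambda kv: -kv[1]):
    print(f"{a}: {100 * r / total:.1f}%")
```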

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang; Kwahk, Kee-Young; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.101-124 / 2018
  • Recently, most technologies have developed in various forms through the advancement of a single technology or interaction with other technologies. In particular, these technologies have the characteristic of convergence caused by the interaction between two or more techniques. In addition, efforts to respond to technological change in advance are continuously increasing through forecasting the promising convergence technologies that will emerge in the near future. In line with this, many researchers are attempting various analyses to forecast promising convergence technologies. A convergence technology has the characteristics of multiple technologies according to the principle of its generation. Therefore, forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been confirmed in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted with the themes of discovering new convergence technologies and analyzing their trends. Accordingly, information about new convergence technologies is being provided more abundantly than in the past. However, existing methods for analyzing convergence technology have some limitations. Firstly, most studies dealing with convergence technology analyze data through predefined technology classifications. Technologies appearing recently tend to have convergence characteristics and thus consist of technologies from various fields; in other words, a new convergence technology may not belong to the predefined classification. Therefore, the existing methods do not properly reflect the dynamic change of the convergence phenomenon. Secondly, in order to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators, which does not fully utilize the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates. Based on that, it can grow into an independent field or disappear rapidly, according to changes in the technologies it depends on. In existing analyses, the growth potential of a convergence technology is judged through traditional, general-purpose indicators. However, these indicators do not reflect the principle of convergence; that is, they do not reflect the characteristics of convergence technology, namely that new technologies emerge through two or more mature technologies, and that grown technologies in turn affect the creation of other technologies. Thirdly, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. In studies of convergence technology, research on forecasting promising technologies has been relatively scarce due to the complexity of the field. Therefore, it is difficult to find a method to evaluate the accuracy of a model that forecasts promising convergence technologies. In order to activate the field of forecasting promising convergence technologies, it is important to establish a method for objectively verifying and evaluating the accuracy of the models proposed by each study.
To overcome these limitations, we propose a new method for the analysis of convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content. It reflects the dynamic change of the actual technology market, not an existing fixed classification standard. In addition, we identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. We also devise a centrality indicator, potential growth centrality (PGC), to forecast the future growth of a technology by utilizing the centrality information of each technology. It reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. As a result, we confirm that the forecast model based on the proposed centrality indicator achieves a maximum forecast accuracy about 2.88 times higher than that of forecast models based on currently used network indicators.
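A minimal sketch of the general pipeline described above, assuming a hypothetical file of patent abstracts: LDA topics serve as a data-driven technology classification, per-document topic weights define a weighted topic network, and a standard weighted-degree centrality stands in as a placeholder; the paper's own PGC (potential growth centrality) indicator is not reproduced here.

```python
# Hypothetical pipeline sketch: topic modeling -> topic co-occurrence network
# -> standard centrality (placeholder for the paper's PGC indicator).
import networkx as nx
from itertools import combinations
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Assumed input: one patent abstract per line in a hypothetical text file.
patent_texts = open("patent_abstracts.txt", encoding="utf-8").read().splitlines()

vec = CountVectorizer(max_df=0.9, min_df=5, stop_words="english")
dtm = vec.fit_transform(patent_texts)

lda = LatentDirichletAllocation(n_components=50, random_state=0)
doc_topic = lda.fit_transform(dtm)   # per-document topic weights

# Link two topics whenever both are prominent in the same document,
# weighting the edge by the product of their weights in that document.
G = nx.Graph()
for weights in doc_topic:
    prominent = [t for t, w in enumerate(weights) if w > 0.1]
    for a, b in combinations(prominent, 2):
        w = float(weights[a] * weights[b])
        if G.has_edge(a, b):
            G[a][b]["weight"] += w
        else:
            G.add_edge(a, b, weight=w)

# Weighted degree as a simple, standard centrality for ranking topics.
centrality = dict(G.degree(weight="weight"))
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:10])
```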

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • Large amounts of data are now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network toward the outputs. A CNN has a layer structure well suited to image classification, as it comprises convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully-connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of a professional model wearing the apparel. Such images may not be effective for training a classification model intended to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained with far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide the training into two steps. First, we pre-train our architecture with a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentations, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Since we could not find any previously published public runway image dataset, we collected the dataset from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieves an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset.
Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we could reduce the time spent training the classification model to about 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can be used in a mobile application service during fashion week to facilitate brand search, street style query images can be classified during fashion editorial tasks to label the brand or style, and website query images can be processed by an e-commerce multi-complex service providing item information or recommending similar items.
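A minimal sketch of the pre-train/fine-tune pattern described above, using Keras with an ImageNet-pretrained InceptionV3 as a readily available stand-in for GoogLeNet (the paper itself uses TensorFlow Slim checkpoints); the dataset directory layout and training settings are assumptions for illustration.

```python
# Hypothetical transfer-learning sketch: frozen ImageNet backbone + new softmax head.
import tensorflow as tf

NUM_BRANDS = 32  # 32 fashion brands in the runway dataset

# ImageNet-pretrained backbone; InceptionV3 is only an analogous stand-in for GoogLeNet.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3))
base.trainable = False  # keep the pre-trained features; fine-tune only the new head

model = tf.keras.Sequential([
    tf.keras.Input(shape=(299, 299, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # InceptionV3 expects [-1, 1]
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: runway_images/train/<brand_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images/train", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=5)
```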

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for traditional methods to handle. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It avoids investment risk structurally, so it provides stability in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can handle billions of examples in limited-memory environments and learns much faster than traditional boosting methods. It is frequently used in various fields of data analysis and has many advantages. In this study, therefore, we propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates the investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample because of the long test period. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed an improvement in portfolio performance achieved by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements the limitations of traditional methods and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization; we suggest a new method to reduce estimation errors in an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
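A simplified sketch of the core idea under stated assumptions: XGBoost predicts each sector's next-period volatility from trailing realized volatilities over a 1,000-observation in-sample window, and the predictions drive the allocation. Inverse-volatility weighting is used here as a simplified stand-in for a full equal-risk-contribution optimization, and the feature set and file name are hypothetical.

```python
# Hypothetical sketch: XGBoost-predicted volatility feeding a naive risk-parity
# (inverse-volatility) allocation, rolled forward with a moving window.
import pandas as pd
from xgboost import XGBRegressor

returns = pd.read_csv("sector_returns.csv", index_col=0, parse_dates=True)  # hypothetical

WINDOW, HORIZON = 1000, 20  # in-sample / out-of-sample sizes, as in the paper
weights_by_period = []

for start in range(0, len(returns) - WINDOW - HORIZON, HORIZON):
    in_sample = returns.iloc[start:start + WINDOW]
    predicted_vol = {}
    for sector in returns.columns:
        r = in_sample[sector]
        # Features: trailing realized volatilities; target: volatility over the next 20 days.
        feats = pd.DataFrame({f"vol_{w}": r.rolling(w).std() for w in (5, 20, 60)})
        target = r.rolling(HORIZON).std().shift(-HORIZON)
        mask = feats.notna().all(axis=1) & target.notna()
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(feats[mask], target[mask])
        predicted_vol[sector] = float(model.predict(feats.iloc[[-1]])[0])

    # Naive risk-parity proxy: weights proportional to inverse predicted volatility.
    inv = pd.Series({s: 1.0 / v for s, v in predicted_vol.items()})
    weights_by_period.append(inv / inv.sum())
```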

The efficacy of continuous positive airway pressure (CPAP) for patients with left breast cancer (좌측 유방암 방사선치료에서 CPAP(Continuous Positive Airway Pressure)의 유용성 평가)

  • Jung, Il Hun; Ha, Jin Sook; Chang, Won Suk; Jeon, Mi Jin; Kim, Sei Joon; Jung, Jin Wook; Park, Byul Nim; Shin, Dong Bong; Lee, Ik Jae
    • The Journal of Korean Society for Radiation Therapy / v.31 no.2 / pp.43-49 / 2019
  • Purpose: This study examined changes in the position of the heart and lungs depending on the patient's breathing method during left breast cancer radiotherapy and used treatment plans to compare the resulting radiation dose. Materials and methods: The participants consisted of 10 patients with left breast cancer. A CT simulator (SIEMENS SOMATOM AS, Germany) was used to obtain images with three different breathing methods: free breathing (FB), deep inspiration breath hold (DIBH with Abches, DIBH), and inspiration breath hold (IBH with CPAP, CPAP). RayStation (5.0.2.35, Sweden) was used for treatment planning, the treatment method was volumetric modulated arc therapy (VMAT) with one partial arc of the same angle, and the prescribed dose to the planning target volume (PTV) was a total dose of 50 Gy (2 Gy/day). In the treatment plan analysis, the 95% dose (D95) to the PTV, the conformity index (CI), and the homogeneity index (HI) were compared. The lungs, heart, and left anterior descending artery (LAD) were selected as the organs at risk (OARs). Results: The mean volume of the ipsilateral lung for FB, DIBH, and CPAP was 1245.58±301.31 ㎤, 1790.09±362.43 ㎤, and 1775.44±476.71 ㎤, respectively. The mean D95 for the PTV was 46.67±1.89 Gy, 46.85±1.72 Gy, and 46.97±23.4 Gy, and the mean CI and HI were 0.95±0.02, 0.96±0.02, 0.95±0.02 and 0.91±0.01, 0.90±0.01, 0.92±0.02. The V20 of the whole lung was 10.74±4.50%, 8.29±3.14%, and 9.12±3.29%, the V20 of the ipsilateral lung was 20.45±8.65%, 17.18±7.04%, and 18.85±7.85%, the Dmean of the heart was 7.82±1.27 Gy, 6.10±1.27 Gy, and 5.67±1.56 Gy, and the Dmax of the LAD was 20.41±7.56 Gy, 14.88±3.57 Gy, and 14.96±2.81 Gy. The distance from the thoracic wall to the LAD was 11.33±4.70 mm, 22.40±6.01 mm, and 20.14±6.23 mm. Conclusion: During left breast cancer radiotherapy, the lung volume was 46.24% larger for DIBH than for FB, and 43.11% larger for CPAP than for FB. The larger lung volume increases the distance between the thoracic wall and the heart, so the LAD, one of the nearby OARs, can be more effectively protected while still satisfying the treatment plan. The lung volume was largest for DIBH, and the distance between the LAD and the thoracic wall was also the greatest. However, when performing treatment with DIBH, the intra-fraction error cannot be ignored, and communication between the patient and the radiotherapist is also an important factor in DIBH treatment. When communication is problematic, or if the patient has difficulty holding their breath, we believe that CPAP could be used as an alternative to DIBH. In order to verify the clinical efficacy of CPAP, long-term follow-up of a greater number of patients will be necessary.

Herbicidal Phytotoxicity under Adverse Environments and Countermeasures (불량환경하(不良環境下)에서의 제초제(除草劑) 약해(藥害)와 경감기술(輕減技術))

  • Kwon, Y.W.; Hwang, H.S.; Kang, B.H.
    • Korean Journal of Weed Science / v.13 no.4 / pp.210-233 / 1993
  • The herbicide has become as indispensable as nitrogen fertilizer in Korean agriculture since 1970. It is estimated that in 1991 more than 40 herbicides were registered for the rice crop and applied to an area 1.41 times the rice acreage; more than 30 herbicides were registered for field crops and applied to 89% of the crop area; and the treatment acreage of 3 non-selective foliar-applied herbicides reached 2,555 thousand hectares. During the last 25 years herbicides have benefited Korean farmers substantially in labor, cost and time of farming. Any herbicide that causes crop injury in ordinary use is not allowed to be registered in most countries. Herbicides, however, can cause crop injury when they are misused, abused or used under adverse environments. Herbicide use on more than 100% of the crop acreage implies an increased probability that herbicides are used wrongly or under adverse conditions. This is evidenced by the fact that about 25% of farmers have experienced herbicide-caused crop injury more than once during the last 10 years, according to the authors' nationwide surveys in 1992 and 1993; one-half of the injury incidences involved crop yield losses greater than 10%. Crop injury caused by herbicides had not occurred to a serious extent in the 1960s, when fewer than 5 herbicides were used by farmers on less than 12% of the total acreage. Farmers ascribed about 53% of the herbicidal injury incidences in their fields to their own misuse, such as overdose, careless or improper application, off-time application or wrong choice of herbicide, while 47% of the incidences were mainly due to adverse natural conditions. Such misuse can be reduced to a minimum through enhanced education and extension services for correct use and, although undesirable, through farmers' increased experience of phytotoxicity. The most difficult primary problem arises from the lack of countermeasures for farmers to cope with various adverse environmental conditions. At present almost all herbicides carry "Do not use!" instructions on the label to avoid crop injury under adverse environments. These "Do not use!" situations include sandy, highly percolating, or infertile soils; paddies fed by cool gushing water; poorly draining paddies; terraced paddies; too wet or dry soils; and days of abnormally cool or high air temperature. Meanwhile, the cultivated lands are in poor condition: the average organic matter content ranges from 2.5 to 2.8% in paddy soil and from 2.0 to 2.6% in upland soil; the cation exchange capacity ranges from 8 to 12 meq; approximately 43% of paddy and 56% of upland fields are of sandy to sandy-gravel soil; and only 42% of paddy and 16% of upland fields are on flat land. The present situation means that about 40 to 50% of soil-applied herbicides are used on fields where the label instructs "Do not use!". Yet no positive effort has been made for 25 years by the government or companies to develop countermeasures. It is a truly complicated social problem. In the 1960s and 1970s a subsidy program to incorporate hillside red clayish soil into sandy paddies, as well as a campaign for increased application of compost to the field, had been operating; yet the majority of the sandy soils remain sandy, and the program and campaign have been stopped. With regard to this sandy soil problem, the authors have developed a method of split application of a herbicide onto sandy soil fields. A model case study has been carried out with success and is introduced with its key procedure in this paper. Climate is by nature variable.
Among the climatic components, a sudden fall or rise in temperature is hardly avoidable for a crop plant. Our spring air temperature fluctuates greatly; for example, the daily mean air temperature of Inchon city on April 20, an early seeding time of crops, varied from 6.31 to 16.81°C within a ±2 SD range of 30-year records. Seeding early in the season means an increased liability to phytotoxicity, and this will be more evident in direct water-seeding of rice. About 20% of farmers depend on cold underground water pumped for rice irrigation. If the well is deeper than 70 m, the fresh water may be as cold as about 10°C. The water should be warmed to about 20°C before irrigation, but this is not practiced well by farmers. In addition to the aforementioned adverse conditions, there are many other aspects to be amended. Among them, the worst for liquid-spray herbicides is the almost total lack of proper knowledge of nozzle types, and of concern for even spraying, among administrators, rural extension officers, companies and farmers. Nozzles and sprayers appropriate for herbicide spraying are not even available on the market. Most people perceive all pesticide sprayers as the same and are concerned with the speed and ease of spraying rather than with correct spraying. There are many points to be improved to minimize herbicidal phytotoxicity in Korea and many ways to achieve that goal. First of all, it is suggested that 1) the present evaluation of a new herbicide at standard and double doses in registration trials be extended to standard, double and triple doses to exploit the response slope when making decisions on approval and on recommending different doses for different situations on the label; 2) the government recognize the facts and nature of the present problem, correct the present misperceptions, and develop an appropriate national program for improvement of soil conditions, spray equipment, extension manpower and services; 3) researchers enhance research on countermeasures; and 4) herbicide makers and dealers correct their misperceptions and sales policies, develop databases on the detailed use conditions of each consumer, and serve consumers with direct counsel based on those databases.

Retrograde Autologous Priming: Is It Really Effective in Reducing Red Blood Cell Transfusions during Extracorporeal Circulation? (역행성 자가혈액 충전법: 체외순환 중 동종적혈구 수혈량을 줄일 수 있는가?)

  • Lim, Cheong; Son, Kuk-Hui; Park, Kay-Hyun; Jheon, Sang-Hoon; Sung, Sook-Whan
    • Journal of Chest Surgery / v.42 no.4 / pp.473-479 / 2009
  • Background: Retrograde autologous priming (RAP) is known to be useful in decreasing the need for transfusions in cardiac surgery because it prevents the excessive hemodilution caused by the crystalloid priming of the cardiopulmonary bypass circuit. However, there are also negative aspects in terms of blood conservation. We analyzed the intraoperative blood-conserving effect of RAP and also investigated the efficacy of autotransfusion and ultrafiltration as supplemental methods for RAP. Material and Method: From January 2005 to December 2007, 117 patients who underwent isolated coronary artery bypass operations using cardiopulmonary bypass (CPB) were enrolled. Mean age was 63.9±9.1 years (range 36–83 years) and 34 patients were female. There were 62 patients in the RAP group and 55 patients in the control group. Intraoperative autotransfusion was performed via the arterial line. RAP was done just before initiating CPB using retrograde drainage of the crystalloid priming solution. Conventional (CUF) and modified (MUF) ultrafiltration were performed during and after CPB, respectively. The transfusion threshold was a hematocrit of less than 20%. Result: Autotransfusion was performed in 79 patients (67.5%) and the average amount was 142.5±65.4 mL (range 30–320 mL). Homologous red blood cell (RBC) transfusion was done in 47 patients (40.2%) and the mean amount of transfused RBC was 404.3±222.6 mL. Risk factors for transfusion were body surface area (OR 0.01, 95% CI 0.00–0.63, p=0.030) and cardiopulmonary bypass time (OR 1.04, 95% CI 1.01–1.08, p=0.019). RAP was not effective in terms of the rate of transfusion (34.5% vs 45.2%, p=0.24). However, the amount of transfused RBC was significantly decreased (526.3±242.3 mL vs 321.4±166.3 mL, p=0.001). Autotransfusion and ultrafiltration revealed additive and cumulative effects in decreasing the transfusion amount (one: 600.0±231.0 mL, two: 533.3±264.6 mL, three: 346.7±176.7 mL, four: 300.0±146.1 mL, p=0.002). Conclusion: Even though RAP did not appear to be effective in terms of the number of patients receiving intraoperative RBC transfusions, it could conserve blood in terms of the amount transfused, together with the additive effects of autotransfusion and ultrafiltration. To maximize the blood-conserving effect of RAP, more aggressive control will be necessary, such as a higher threshold for the transfusion trigger or strict regulation of crystalloid infusion.

Effect of Potassium Application Time on Rice Plant under The Limed Condition (석회(石灰)의 시용(施用)과 가리추비량(加里追肥量)에 관한 연구(硏究))

  • Oh, W.K.; Kim, T.S.; Han, K.W.; Park, C.H.; Kim, S.B.
    • Korean Journal of Soil Science and Fertilizer / v.12 no.3 / pp.141-151 / 1979
  • To investigate the relationship between the amounts of potassium basal and top dressing and their effect on rice plants under limed conditions, a pot experiment was conducted with the rice variety Milyang 21. Growth status, yield components and chemical components of the rice plants were determined, and soils were analyzed along the growing stages; the results obtained are as follows. 1. The control treatment without lime application showed good vegetative growth compared with the lime-treated one; however, grain yield was higher in the lime-treated pot when potassium was applied as basal and top dressing. 2. There was no big difference between the potassium-applied and control treatments in the growth status of the rice plants until 20 days after transplanting. However, in the case of the lime-treated pot, big differences were observed after 20 days from transplanting, resulting in lower grain yield compared with the control treatment; this trend was severe in the lime-treated treatment. 3. In the control treatment, grain yield increased with the amount of potassium basal dressing, and the highest yield was obtained when all the potassium was applied as basal dressing. However, in the case of the lime-treated pot, when two-thirds of the potassium was applied as basal dressing, the potassium content of the rice became lower at the reproductive stage and resulted in lower yield. When all the potassium was applied as basal dressing, there was no difference from the control treatment in terms of grain yield. 4. Under soil conditions that cause potassium absorption disorders in rice plants, such as unlimed conditions, potassium should be applied as a basal dressing. However, under limed conditions, where potassium absorption disorders scarcely occur and the potassium content in the soil is insufficient, a large amount of potassium as basal dressing with the rest as top dressing is recommended. 5. A higher potassium content in the rice plant at the reproductive growing stage results in heavier tillers compared with a lower content, and heavier tillers produce higher grain yield. 6. At the vigorous growing stage there was a positive correlation between the electrical conductivity of the soil and the amount of potassium absorbed by the rice plant. This suggests that, to obtain higher yield, a large amount of potassium top dressing at the late vegetative growing stage is necessary so that the potassium content in the rice plant increases, resulting in higher yield.

Analysis of Bone Mineral Density and Related Factors after Pelvic Radiotherapy in Patients with Cervical Cancer (골반부 방사선 치료를 받은 자궁경부암 환자의 골밀도 변화와 관련 인자 분석)

  • Yi, Sun-Shin; Jeung, Tae-Sig
    • Radiation Oncology Journal / v.27 no.1 / pp.15-22 / 2009
  • Purpose: This study was designed to evaluate the effects on bone mineral density (BMD), and related factors, according to the distance of different sites from the radiation field in patients with uterine cervical cancer who received pelvic radiotherapy. Materials and Methods: We selected 96 patients with cervical cancer who underwent BMD determination from November 2002 to December 2006 after pelvic radiotherapy at Kosin University Gospel Hospital. The T-scores and Z-scores for the first lumbar spine (L1), fourth lumbar spine (L4) and femur neck (F) were analyzed to determine the differences in BMD among the sites using ANOVA and post-hoc tests. The study subjects were evaluated for age, body weight, body mass index (BMI), post-radiotherapy follow-up duration, intracavitary radiotherapy (ICR) and hormone replacement therapy (HRT). Associations between the characteristics of the study subjects and the T-score for each site were evaluated using Pearson's correlation and multiple regression analysis. Results: The average T-score for all ages was -1.94 for L1, -0.42 for L4 and -0.53 for F. The average Z-score for all ages was -1.11 for L1, -0.40 for L4 and -0.48 for F. The T-scores and Z-scores for L4 and F were significantly different from those for L1 (p<0.05); there was no significant difference between L4 and F. Results for patients younger than 60 years were the same as for all ages. Age and ICR were negatively correlated, and body weight and HRT were positively correlated, with the T-score for all sites (p<0.05). BMI was positively correlated with the T-score for L4 and F (p<0.05). In the multiple regression analysis, age was negatively associated with the T-score for L1 and F and positively associated with the T-score for L4 (p<0.05). Body weight was positively associated with the T-score for all sites (p<0.05). ICR was negatively associated with the T-score for L1 (p<0.05). HRT was positively associated with the T-score for L4 and F (p<0.05). Conclusion: The T-scores and Z-scores for L4 and F were significantly higher than those for L1, a finding in contrast to some previous studies on normal women. It was thought that radiation could partly influence BMD, given the higher T-scores and Z-scores for sites around the radiotherapy field. We suggest that a further long-term study is necessary to determine the clinical significance of these findings, which will influence the diagnosis of osteoporosis based on BMD in patients with cervical cancer who have received radiotherapy.