
Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin; Kim, Suhwan; Shin, Kyungshik
    • Journal of Intelligence and Information Systems, v.21 no.3, pp.1-17, 2015
  • In this paper, we report our findings regarding an accident prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise enlisted men: despite those efforts, many commanders have failed to prevent accidents by their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents. However, accidents are hard to prevent, so a proper method must be found. Our motivation for presenting this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core method of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings. This requires considerable time and effort, and the results vary significantly with the capabilities of the commanders. In this paper, we seek to predict accidents with objective data that can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper concerns the application of data mining to identify soldiers' interests and predict accidents using internal and external (SNS) data. We propose a combination of topic analysis and the decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the SNS posts of enlisted men. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. Analyzing the SNS posts requires tools such as text mining and topic analysis. We used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding soldiers' interests is composed of three main phases: collection, topic analysis, and conversion of topic analysis results into points usable as independent variables. In the first phase, we collect enlisted men's SNS data by commander ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from them. For simplicity, five topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, using these five topics, we quantify them as personal points. After quantifying the topics, we include these results among the independent variables, which also comprise 15 internal data sets. Then, we build two decision trees. The first tree is composed of the internal data only; the second is composed of the external data (SNS) as well as the internal data. After that, we compare the misclassification rates from SAS Enterprise Miner. The first model's misclassification rate is 12.1%, whereas the second model's is 7.8%; that is, the second model predicts accidents with an accuracy of approximately 92%. The gap between the two models is 4.3 percentage points. Finally, we test whether the difference between them is meaningful using the McNemar test; the difference is statistically significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small amount of enlisted men's data. Additionally, the various independent variables used in the decision tree model are treated as categorical variables instead of continuous variables, so information is lost. In spite of extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military. This study is expected to contribute to the prevention of accidents in the military through the scientific analysis of enlisted men and their proper management.
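A minimal sketch of the two-model comparison described above, using scikit-learn decision trees and the McNemar test from statsmodels. The feature matrices, topic points, and labels below are random placeholders, not the authors' military data, and all variable names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 1000
X_internal = rng.normal(size=(n, 15))   # stand-in for the 15 internal variables
X_topics = rng.random(size=(n, 5))      # stand-in for per-soldier points on 5 SNS topics
y = rng.integers(0, 2, size=n)          # 1 = accident occurred (toy labels)

X_tr_i, X_te_i, X_tr_t, X_te_t, y_tr, y_te = train_test_split(
    X_internal, X_topics, y, test_size=0.3, random_state=0)

m1 = DecisionTreeClassifier(random_state=0).fit(X_tr_i, y_tr)                       # internal only
m2 = DecisionTreeClassifier(random_state=0).fit(np.hstack([X_tr_i, X_tr_t]), y_tr)  # internal + SNS

p1 = m1.predict(X_te_i)
p2 = m2.predict(np.hstack([X_te_i, X_te_t]))

# 2x2 agreement table for McNemar: rows = model 1 correct/wrong, cols = model 2
tbl = [[np.sum((p1 == y_te) & (p2 == y_te)), np.sum((p1 == y_te) & (p2 != y_te))],
       [np.sum((p1 != y_te) & (p2 == y_te)), np.sum((p1 != y_te) & (p2 != y_te))]]
print("misclassification:", np.mean(p1 != y_te), np.mean(p2 != y_te))
print(mcnemar(tbl, exact=False))
```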

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they are based on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically, and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only a few representative samples near the class boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they may deteriorate classification performance. Third, a central difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often lead to a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations. Observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, reinforcing the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; the cross-validated folds are thus tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results suggest that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
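The following sketch illustrates the two ingredients MGM-Boost combines, under stated assumptions: a SAMME-style AdaBoost loop over SVM base learners, and evaluation by the geometric mean of per-class recalls. The paper's exact weight-update rule is not reproduced here; this is an illustrative skeleton on synthetic imbalanced data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score
from sklearn.datasets import make_classification

K = 3                                            # number of rating classes (toy)
X, y = make_classification(n_samples=600, n_classes=K, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)
n = len(y)
w = np.full(n, 1.0 / n)                          # observation weights
models, alphas = [], []

for _ in range(10):                              # 10 boosting rounds
    clf = SVC(kernel="rbf").fit(X, y, sample_weight=w * n)
    pred = clf.predict(X)
    err = max(np.sum(w[pred != y]) / np.sum(w), 1e-10)
    if err >= 1 - 1.0 / K:                       # no better than chance: stop
        break
    alpha = np.log((1 - err) / err) + np.log(K - 1)   # SAMME multi-class step
    w *= np.exp(alpha * (pred != y))             # up-weight misclassified cases
    w /= w.sum()
    models.append(clf)
    alphas.append(alpha)

votes = np.zeros((n, K))                         # weighted vote of the ensemble
for clf, a in zip(models, alphas):
    votes[np.arange(n), clf.predict(X)] += a
final = votes.argmax(axis=1)
per_class = recall_score(y, final, average=None)
print("geometric-mean accuracy:", np.prod(per_class) ** (1 / K))
```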

Stock-Index Investment Model Using News Big Data Opinion Mining (뉴스와 주가 : 빅데이터 감성분석을 통한 지능형 투자의사결정모형)

  • Kim, Yoo-Sin; Kim, Nam-Gyu; Jeong, Seung-Ryul
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.143-156, 2012
  • People easily believe that news and the stock index are closely related. They think that securing news before anyone else can help them forecast stock prices and enjoy great profit, or capture an investment opportunity. However, it is no easy feat to determine to what extent the two are related, to derive an investment decision from news, or to find out whether such investment information is valid. If the significance of news and its impact on the stock market can be analyzed, it becomes possible to extract information that assists investment decisions. The reality, however, is that the world is inundated with a massive wave of news in real time, and news is unstructured text. This study proposes a stock-index investment model based on "News Big Data" opinion mining that systematically collects, categorizes, and analyzes news to create investment information. To verify the validity of the model, the relationship between the results of news opinion mining and the stock index was empirically analyzed using statistics. The steps of the mining process that converts news into information for investment decision making are as follows. First, news is indexed after being supplied by a provider that collects news in real time. Not only the contents of the news but also information such as the medium, time, and news type is collected and classified, and then reworked into variables from which investment decisions can be inferred. The next step is to derive polarity-bearing words by separating the news text into morphemes, and to tag the positive/negative polarity of each word by comparison with a sentiment dictionary. Third, the positive/negative polarity of each news item is judged using the indexed classification information and a scoring rule, and the final investment decision information is derived according to daily scoring criteria. For this study, the KOSPI index and its fluctuation range were collected for the 63 days the stock market was open during the three months from July to September 2011 on the Korea Exchange, and news data were collected by parsing 766 articles from economic news media company M carried under stock information > news > main news on the portal site Naver.com. During the three months, the index rose on 33 days and fell on 30 days, and the news contents comprised 197 articles published before the opening of the stock market, 385 during the session, and 184 after the close. Mining the collected news contents and comparing them with stock prices showed that the positive/negative opinion of news content had a significant relation with the stock price, and that changes in the index could be better explained when news opinion was derived as a positive/negative ratio instead of a simplified binary judgment. In addition, to check whether news had an effect on stock price fluctuation, or at least preceded it, stock price changes were compared only with news published before the opening of the stock market; this, too, was verified to be statistically significant. Furthermore, because news comprises various types and topics, such as social, economic, and overseas news, corporate earnings, industry conditions, market outlook, and general market conditions, the influence on the stock market was expected to differ according to the type of news. Each type of news was therefore compared with stock price fluctuation, and the results showed that market conditions, outlook, and overseas news were the most useful in explaining stock price fluctuation. In contrast, news about individual companies was not statistically significant, though its opinion mining value tended to move opposite to the stock price; the reason is thought to be promotional and planned news released to keep stock prices from falling. Finally, multiple regression analysis and logistic regression analysis were carried out to derive an investment decision function based on the relation between positive/negative news opinion and stock prices. The regression equation using the market conditions, outlook, and overseas news variables before the opening of the stock market was statistically significant, and the classification accuracy of the logistic regression was 70.0% for stock price rises, 78.8% for falls, and 74.6% on average. This study first analyzed the relation between news and stock prices by quantifying the sentiment of unstructured news content using opinion mining, one of the big data analysis techniques, and then proposed and verified a smart investment decision-making model that systematically carries out opinion mining and derives investment information. This shows that news can be used as a variable to predict the stock index for investment, and it is expected that the model can serve as a real investment support system if implemented and verified in the future.
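As a rough illustration of the scoring step, the sketch below tags word polarity against a sentiment dictionary and feeds a daily positive/negative ratio into a logistic regression. The lexicon, whitespace tokenization (standing in for Korean morpheme analysis), the articles, and the labels are all invented placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

LEXICON = {"surge": +1, "gain": +1, "record": +1,    # hypothetical entries
           "fall": -1, "loss": -1, "risk": -1}

def polarity_ratio(article: str) -> float:
    """Share of polarity-tagged words that are positive (0.5 if none occur)."""
    hits = [LEXICON[w] for w in article.lower().split() if w in LEXICON]
    return 0.5 if not hits else sum(h > 0 for h in hits) / len(hits)

# One score per trading day from pre-opening articles (toy data)
pre_open_news = ["exporters gain on record orders",
                 "banks fall on credit risk concerns"]
daily_scores = np.array([[polarity_ratio(a)] for a in pre_open_news])
labels = np.array([1, 0])                            # 1 = index rose that day

model = LogisticRegression().fit(daily_scores, labels)
print(model.predict_proba([[0.8]]))   # P(fall), P(rise) for a positive news day
```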

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung; Kim, Kyoung-Jae; Han, In-Goo
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.77-97, 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detecting market timing means determining when to buy and sell so as to earn excess return from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the pattern of the market is uncertain. The numeric data for rough set analysis must be discretized, because rough sets only accept categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values lying within an interval are transformed into the same value. In general, there are four discretization methods for rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or expert interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization derives categorical values by naïvely scaling the data, then finds the optimal discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various discretization methods on trading performance. In this study, we compare stock market timing models that use rough set analysis with the various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method on the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable on the validation sample. Moreover, expert's knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
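As an example of one of the four methods, here is a minimal sketch of equal frequency scaling: cuts are placed at quantiles so that roughly the same number of observations falls into each interval, yielding the categorical codes rough set analysis requires. The indicator values are random placeholders for a technical indicator series.

```python
import numpy as np

def equal_frequency_cuts(values: np.ndarray, n_intervals: int) -> np.ndarray:
    """Interior cut points producing n_intervals equal-frequency bins."""
    quantiles = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, quantiles)

rng = np.random.default_rng(0)
indicator = rng.uniform(0, 100, size=660)  # e.g. one indicator over 660 trading days
cuts = equal_frequency_cuts(indicator, 4)
codes = np.digitize(indicator, cuts)       # categorical values 0..3 for rough set input
print(cuts, np.bincount(codes))            # ~165 samples per interval
```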

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk; Kim, Taeyeon; Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.1-25, 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image over a private LTE network to a cloud server, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them. Some applications, however, need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, and specifications, are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into spatial-sequential feature vectors; and the third is a bidirectional long short-term memory (LSTM) network that converts the spatial-sequential information into character strings by time-series analysis, mapping feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount. The device ID consists of 12 Arabic numerals and the gas usage amount of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request from the mobile device onto an input queue with a FIFO (First In First Out) structure. The slave process consists of the three deep neural networks, which carry out the character recognition, and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. When requests from the master process are present in the input queue, the slave process converts the image in the input queue into the device ID string, the gas usage amount string, and the position information of the strings, returns the information to the output queue, and switches back to idle mode, polling the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for the training, validation, and testing of the three deep neural networks. Of these, 22,985 images were used for training and validation and 4,135 for testing. For each training epoch, we randomly split the 22,985 images into training and validation sets at an 8:2 ratio. The 4,135 test images were categorized into five types: normal, noise, reflex, scale, and slant. Normal data are clean images; noise means images with noise signals; reflex means images with light reflection in the gasometer region; scale means images with small object sizes due to long-distance capturing; and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
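The master-slave queue flow described above can be sketched with Python's standard library, assuming placeholder components: recognize() is a hypothetical stand-in for the detection CNN, feature CNN, and BiLSTM pipeline, and in-process queues stand in for the AWS input/output queues.

```python
import queue
import threading

input_q: "queue.Queue[bytes]" = queue.Queue()   # FIFO of raw gasometer images
output_q: "queue.Queue[dict]" = queue.Queue()   # recognized strings + positions

def recognize(image: bytes) -> dict:
    """Placeholder for the three-network recognition pipeline."""
    return {"device_id": "000000000000", "usage": "0000", "boxes": []}

def slave_worker() -> None:
    """GPU-side worker: poll the input queue, recognize, push to output queue."""
    while True:
        image = input_q.get()          # blocks (idle) until a request arrives
        output_q.put(recognize(image))
        input_q.task_done()

threading.Thread(target=slave_worker, daemon=True).start()

# Master side: enqueue a reading request from a mobile device, await the result
input_q.put(b"<jpeg bytes>")
print(output_q.get())
```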

Studies on the Estimation of the Growth Pattern of Cut-up Parts in Four Broiler Strains with Growing Body Weight (육용계에 있어서 계통간 산육능력 및 체중증가에 따른 각 부위별 증가양상 추정에 관한 연구)

  • 양봉국; 조병욱
    • Korean Journal of Poultry Science, v.17 no.3, pp.141-156, 1990
  • The experiments were conducted to investigate the possibility of improving the effectiveness of the existing method for estimating the edible meat weight in live broiler chickens. A total of 360 birds, five male and five female chicks from each line, were sacrificed at Trial 1 (body weight 900-1,000 g), Trial 2 (body weight 1,200-1,400 g), Trial 3 (body weight 1,600-1,700 g), and Trial 4 (body weight 2,000 g) in order to measure the body weight, the edible meat weight of the breast, thigh, and drumsticks, and various components of body weight. Each line was reared at the Poultry Breeding Farm, Seoul National University, from July 2 to September 13, 1987. The results obtained from this study are summarized as follows: 1. The average body weights of the lines (H, T, M, A) at 7 weeks of age were $2150.5\pm34.9$, $2133.0\pm26.2$, $1960.0\pm23.1$, and $2319.3\pm27.9$ g, respectively. The feed to body weight gain ratio for each line was 2.55, 2.13, 2.08, and 2.03, respectively, for 0 to 7 weeks of age. The viability of each line was 99.7, 99.7, 100.0, and 100.0%, respectively, for 0 to 7 weeks of age. It was noticed that A line chicks grew significantly heavier than T, H, and M line chicks from 0 to 7 weeks of age. The regression coefficients of the growth curves for the lines were bA=1.015, bH=0.265, bM=0.950, and bT=0.242, respectively. 2. Among the body weight components, the feather, abdominal fat, breast, and thigh and drumsticks increased in weight percentage as the birds grew older, while the neck, head, giblets, and inedible viscera decreased. No difference was apparent in the shanks, wings, and back. 3. The weight percentages of the breast in the edible part for the lines were 19.2, 19.0, 19.9, and 19.0% at Trial 4, respectively. The weight percentages of the thigh and drumsticks in the edible part were 23.1, 23.3, 22.8, and 23.0% at Trial 4, respectively. 4. The percentage meat yields from the breast were 77.2, 78.9, 73.5, and 74.8% at Trial 4 in the H, T, M, and A lines, respectively. For the thigh and drumsticks, the values were 80.3, 78.4, 79.7, and 80.2%. These data indicate that the percentage meat yield increases as the birds grow older. 5. The correlation coefficients between body weight and blood, head, shanks, breast, and thigh-drumstick weights were high. The correlation between abdominal fat (%) and the percentage of edible meat was extremely low at all times, but that between abdominal fat (%) and inedible viscera was significantly high.


Clinical Outcomes of Corrective Surgical Treatment for Esophageal Cancer (식도암의 외과적 근치 절제술에 대한 임상적 고찰)

  • Ryu Se Min; Jo Won Min; Mok Young Jae; Kim Hyun Koo; Cho Yang Hyun; Sohn Young-sang; Kim Hark Jei; Choi Young Ho
    • Journal of Chest Surgery, v.38 no.2 s.247, pp.157-163, 2005
  • Background: Clinical outcomes of esophageal cancer have not been satisfactory in spite of the development of surgical skills and adjuvant therapy protocols. We analyzed the results of patients who underwent corrective surgery for esophageal cancer from January 1992 to July 2002. Material and Method: Among 129 patients with esophageal cancer, this study was performed on the 68 patients who received corrective surgery. The sex ratio was 59:9 (male:female) and the mean age was $61.07\pm7.36$ years. The chief complaints of these patients were dysphagia, epigastric pain, and weight loss. The locations of the esophageal cancer were the upper esophagus in 4 patients, the middle in 36, the lower in 20, and the esophagogastric junction in 8. Sixty patients had squamous cell cancer, 7 had adenocarcinoma, and 1 had malignant melanoma. Five patients had neoadjuvant chemotherapy. Result: The numbers of patients in postoperative stages I, IIA, IIB, III, and IV were 7, 25, 12, 17, and 7, respectively. The conduits for replacement of the esophagus were the stomach (62 patients) and the colon (6 patients). Neck anastomosis was performed in 28 patients and intrathoracic anastomosis in 40. The anastomosis techniques were hand-sewing (44 patients) and stapling (24 patients). Among the early complications, anastomotic leakage occurred in 3 patients, all of whom had only radiologic leakage that recovered spontaneously. The anastomosis technique had no correlation with postoperative leakage: stapling in 2 patients versus hand-sewing in 1. There were 3 cases of respiratory failure, 6 of pneumonia, 1 of fulminant hepatitis, 1 of bleeding, and 1 of sepsis. The 2 early postoperative deaths were due to fulminant hepatitis and sepsis. Among the 68 patients, 23 had postoperative adjuvant therapy and 55 were followed up. The follow-up period was $23.73\pm22.18$ months ($1{\sim}76$ months). There were 5 patients in stage I, 21 in stage IIA, 9 in stage IIB, 15 in stage III, and 5 in stage IV. The 1-, 3-, and 5-year survival rates of the patients who could be followed up completely were $58.43\pm6.5\%$, $35.48\pm7.5\%$, and $18.81\pm7.7\%$, respectively. Statistical analysis showed that the long-term survival difference was associated with stage, T stage, and N stage (p<0.05) but not with histology, sex, anastomosis location, tumor location, or pre- and postoperative adjuvant therapy. Conclusion: Early diagnosis, aggressive operative resection, and adequate postoperative treatment may have contributed to the observed increase in survival for esophageal cancer patients.
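For readers unfamiliar with the actuarial survival rates quoted above, the Kaplan-Meier product-limit estimator $S(t) = \prod_{t_i \le t} (1 - d_i/n_i)$ can be computed as in the sketch below; the follow-up times and censoring flags are invented toy values, not the study's patient data.

```python
import numpy as np

def kaplan_meier(months: np.ndarray, event: np.ndarray):
    """Product-limit estimate: S(t) = prod over event times t_i <= t of (1 - d_i/n_i)."""
    order = np.argsort(months)
    months, event = months[order], event[order]
    at_risk = len(months)
    times, surv, s = [], [], 1.0
    for t in np.unique(months):
        d = np.sum((months == t) & (event == 1))   # deaths at time t
        if d:
            s *= 1 - d / at_risk
            times.append(t)
            surv.append(s)
        at_risk -= np.sum(months == t)             # remove deaths + censored at t
    return np.array(times), np.array(surv)

follow_up = np.array([3, 12, 12, 24, 36, 40, 60, 76])  # months (toy values)
died =      np.array([1,  1,  0,  1,  1,  0,  1,  0])  # 0 = censored
t, s = kaplan_meier(follow_up, died)
print(dict(zip(t, np.round(s, 2))))
```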

The Comparison Study of Early and Midterm Clinical Outcome of Off-Pump versus On-Pump Coronary Artery Bypass Grafting in Patients with Severe Left Ventricular Dysfunction (LVEF${\le}35{\%}$) (심한 좌심실 부전을 갖는 환자에서 시행한 Off-Pump CABG와 On-Pump CABG의 중단기 성적비교)

  • Youn Young Nam; Lee Kyo Joon; Bae Mi Kyung; Shim Yeon Hee; Yoo Kyung-Jong
    • Journal of Chest Surgery, v.39 no.3 s.260, pp.184-193, 2006
  • Background: Off-pump coronary artery bypass grafting (OPCAB) has been proven to result in less morbidity. Patients with left ventricular dysfunction may benefit from avoiding the adverse effects of cardiopulmonary bypass. The present study compared the early and midterm outcomes of off-pump versus on-pump coronary artery bypass grafting (on-pump CABG) in patients with severe left ventricular dysfunction. Material and Method: Nine hundred forty-six patients underwent isolated coronary artery bypass grafting by one surgeon between January 2001 and February 2005. Data were collected on the 100 patients who had a left ventricular ejection fraction (LVEF) below $35\%$ (68 OPCAB; 32 on-pump CABG). The mean age of the patients was $62.9\pm9.0$ years in the OPCAB group and $63.8\pm8.0$ years in the on-pump CABG group. We compared the preoperative risk factors and evaluated early and midterm outcomes. Result: In the OPCAB and on-pump CABG groups, the mean numbers of grafts used per patient were $2.75\pm0.72$ and $2.78\pm0.55$, and the mean numbers of distal anastomoses were $3.00\pm0.79$ and $3.16\pm0.72$, respectively. There was one perioperative death in the OPCAB group ($1.5\%$). The operation time, ventilation time, ICU stay, CK-MB on the first postoperative day, and complication rate were significantly lower in the OPCAB group. The mean follow-up time was $26.6\pm12.8$ months ($4{\sim}54$ months). The mean LVEF improved significantly from $27.1\pm4.5\%$ to $40.7\pm13.0\%$ in the OPCAB group and from $26.9\pm5.4\%$ to $33.3\pm13.7\%$ in the on-pump CABG group. The 4-year actuarial survival rates of the OPCAB and on-pump CABG groups were $92.2\%$ and $88.3\%$, and the 4-year freedom rates from cardiac death were $97.7\%$ and $96.4\%$, respectively. There were no significant differences between the two groups in the 4-year freedom rates from cardiac events and angina. Conclusion: OPCAB improves myocardial function and favors early and mid-term outcomes in patients with severe left ventricular dysfunction compared to on-pump CABG. Therefore, OPCAB is a preferable operative strategy even in patients with severe left ventricular dysfunction.

Influence of Change of Atmospheric Pressure and Temperature on the Occurrence of Spontaneous Pneumothorax (기압과 기온변화가 자발성 기흉 발생에 미치는 영향)

  • Lee, Gun; Lim, Chang-Young; Lee, Hyeon-Jae
    • Journal of Chest Surgery, v.40 no.2 s.271, pp.122-127, 2007
  • Background: Spontaneous pneumothorax is a common respiratory condition and has been postulated to develop from the rupture of subpleural blebs. Although the morphology and ultrastructure of the causative lesions are well known, the reason subpleural blebs rupture is not absolutely clear, and no broad consensus exists concerning the role of meteorological factors in spontaneous pneumothorax. The aim of this study was to examine the influence of changes of atmospheric pressure and temperature on the occurrence of spontaneous pneumothorax. Material and Method: One hundred twenty-eight consecutive spontaneous pneumothorax events that occurred between January 2003 and December 2004 were selected. Changes of meteorological factors on particular days from the day before, over 5 consecutive days, were calculated and compared between days with pneumothorax occurrence (SP days) and days without (non-SP days). The correlation between changes of pressure and temperature and the occurrence of SP was evaluated. Result: SP occurred on 117 days (16.0%) in the 2-year period. Although there were no significant differences in the pressure-change factors in the 4 days prior to SP occurrence compared to the 4 days prior to non-SP days, the change of mean pressure was higher (+0.934 vs. -0.191 hPa, RR 1.042, CI $1.003{\sim}1.082$, p=0.033) and the maximum pressure fall was lower (3.280 vs. 4.791 hPa, RR 1.051, CI $1.013{\sim}1.090$, p=0.009) on the 4th day before SP days. There were significant differences in the temperature-change factors on the 2 days before and on the day of SP: the changes of mean temperature (-0.576 vs. $+0.099^{\circ}C$, RR 0.886, 95% CI $0.817{\sim}0.962$, p=0.004) and maximum temperature rise (7.231 vs. $8.079^{\circ}C$, RR 0.943, CI $0.896{\sim}0.993$, p=0.027) were lower on the 2 days before SP, but the changes of mean temperature (0.533 vs. $-0.103^{\circ}C$, RR 1.141, CI $1.038{\sim}1.255$, p=0.006) and maximum temperature rise (9.209 vs. $7.754^{\circ}C$, RR 1.123, CI $1.061{\sim}1.190$, p=0.006) were higher on the SP days themselves. Conclusion: Changes of atmospheric pressure and temperature seem to influence the chance of occurrence of SP. A meteorological pattern of a pressure rise 4 days prior to SP followed by a temperature fall and rise might explain the occurrence of SP. Further studies should be conducted in the future.

Postoperative Chemoradiotherapy in Locally Advanced Rectal Cancer (국소 진행된 직장암에서 수술 후 화학방사선요법)

  • Chai, Gyu-Young; Kang, Ki-Mun; Choi, Sang-Gyeong
    • Radiation Oncology Journal, v.20 no.3, pp.221-227, 2002
  • Purpose: To evaluate the role of postoperative chemoradiotherapy in locally advanced rectal cancer, we retrospectively analyzed the treatment results of patients treated with curative surgical resection and postoperative chemoradiotherapy. Materials and Methods: From April 1989 through December 1998, 119 patients were treated with curative surgery and postoperative chemoradiotherapy for rectal carcinoma at Gyeongsang National University Hospital. Patient age ranged from 32 to 73 years, with a median of 56 years. Low anterior resection was performed in 59 patients and abdominoperineal resection in 60. Forty-three patients were AJCC stage II and 76 were stage III. Radiation was delivered with 6 MV X-rays using either AP-PA two fields, AP-PA plus both lateral four fields, or PA plus both lateral three fields. The total radiation dose ranged from 40 Gy to 56 Gy. In 73 patients, bolus infusions of 5-FU $(400\;mg/m^2)$ were given during the first and fourth weeks of radiotherapy; after completion of radiotherapy, an additional four to six cycles of 5-FU were given. Oral 5-FU (Furtulone) was given for nine months in 46 patients. Results: Forty $(33.7\%)$ of the 119 patients showed treatment failure. Local failure occurred in 16 $(13.5\%)$ patients: 1 $(2.3\%)$ of the 43 stage II patients and 15 $(19.7\%)$ of the 76 stage III patients. Distant failure occurred in 31 $(26.1\%)$ patients, among whom 5 $(11.6\%)$ were stage II and 26 $(34.2\%)$ were stage III. Five-year actuarial survival was $56.2\%$ overall: $71.1\%$ in stage II patients and $49.1\%$ in stage III patients (p=0.0008). Five-year disease-free survival was $53.3\%$ overall: $68.1\%$ in stage II and $45.8\%$ in stage III (p=0.0006). Multivariate analysis showed that T stage and N stage were significant prognostic factors for five-year survival, and that T stage, N stage, and preoperative CEA value were significant prognostic factors for five-year disease-free survival. Bowel complications occurred in 22 patients and were treated surgically in 15 $(12.6\%)$ and conservatively in 7 $(5.9\%)$. Conclusion: Postoperative chemoradiotherapy was confirmed to be an effective modality for local control of rectal cancer, but the distant failure rate remained high. More effective modalities should be investigated to lower the distant failure rate.