• Title/Summary/Keyword: Determining

Search results: 11,358

Clinical Characteristics of Recurred Patients with Stage I,II Non-Small Cell Lung Cancer (근치적 절제 후 재발한 1,2기 비소세포폐암 환자의 임상상)

  • Ham, Hyoung-Suk;Kang, Soo-Jung;An, Chang-Hyeok;Ahn, Jong-Woon;Kim, Ho-Cheol;Lim, Si-Young;Suh, Gee-Young;Kim, Kwhan-Mien;Chung, Man-Pyo;Kim, Ho-Joong;Kim, Jhin-Gook;Kwon, O-Jung;Shim, Yong-Mog;Rhee, Choong-H.
    • Tuberculosis and Respiratory Diseases / v.48 no.4 / pp.428-437 / 2000
  • Background: The five-year survival rate of postoperative stage I non-small cell lung cancer (NSCLC) reaches 66%. In the remaining one third of patients, however, the cancer recurs, and the overall survival of NSCLC remains dismal. To evaluate the clinical and pathologic characteristics of recurrent NSCLC, the patterns of and factors for postoperative recurrence in patients with stage I and II NSCLC were studied. Method: A retrospective analysis was performed in 234 patients who underwent radical resection for pathologic stage I and II NSCLC. All patients who were followed up for at least one year were included in this study. Results: 1) There were 177 men and 57 women. The median age was 63. The median duration of the follow-up period was 732 days (range 365~1,695 days). The overall recurrence rate was 26.5%, and recurrence occurred 358.8 ± 239.8 days after operation. 2) The ages of recurrent NSCLC patients were higher (63.2 ± 8.8 years) than those of non-recurrent patients (60.3 ± 9.8 years) (p=0.043). The recurrence rate was higher in stage II (46.9%) than in stage I (18.8%) NSCLC (p<0.001). The size of the primary lung mass was larger in recurrent (5.45 ± 3.22 cm) than in non-recurrent NSCLC (3.74 ± 1.75 cm, p<0.001). Interestingly, there were no recurrent cases when the resected primary tumor was less than 2 cm. 3) Distant recurrence was more frequent than locoregional recurrence (66.1% vs. 33.9%). The distant recurrence rate was higher in females and in cases of adenocarcinoma. Brain metastasis was more frequent in patients with adenocarcinoma than in those with squamous cell carcinoma (p=0.024). Conclusion: Tumor size and stage were two important factors for determining the possibility of recurrence. Because distant brain metastasis was more frequent in patients with adenocarcinoma, a prospective study should be conducted to evaluate the effectiveness of preoperative brain imaging.
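
The group comparisons reported in this abstract (tumor size between recurrent and non-recurrent patients, recurrence rate by stage) are standard two-sample tests. A minimal sketch in Python, with hypothetical patient-level numbers standing in for the study's data:

```python
# Illustrative sketch (not the authors' analysis): comparing recurrent vs. non-recurrent
# groups with standard two-sample tests, as the abstract describes.
import numpy as np
from scipy import stats

# Hypothetical tumor sizes (cm) per patient in each group
size_recurred = np.array([5.1, 6.0, 4.8, 7.2, 5.5])
size_non_recurred = np.array([3.2, 4.0, 3.8, 2.9, 4.4, 3.5])

# Two-sample t-test for tumor size (abstract: 5.45 ± 3.22 cm vs 3.74 ± 1.75 cm, p < 0.001)
t_stat, p_size = stats.ttest_ind(size_recurred, size_non_recurred, equal_var=False)

# Chi-square test for recurrence rate by stage (abstract: 18.8% in stage I vs 46.9% in stage II)
#                        recurred  not recurred   (illustrative counts only)
contingency = np.array([[30, 130],    # stage I
                        [31, 35]])    # stage II
chi2, p_stage, dof, _ = stats.chi2_contingency(contingency)

print(f"tumor size: t = {t_stat:.2f}, p = {p_size:.4f}")
print(f"stage vs recurrence: chi2 = {chi2:.2f}, p = {p_stage:.4f}")
```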

An Analysis on the Conditions for Successful Economic Sanctions on North Korea : Focusing on the Maritime Aspects of Economic Sanctions (대북경제제재의 효과성과 미래 발전 방향에 대한 고찰: 해상대북제재를 중심으로)

  • Kim, Sang-Hoon
    • Strategy21 / s.46 / pp.239-276 / 2020
  • The failure of early economic sanctions aimed at hurting the overall economies of targeted states called for a more sophisticated design of economic sanctions. This paved the way for the advent of 'smart sanctions,' which target the supporters of the regime instead of the mass public. Despite controversies over the effectiveness of economic sanctions as a coercive tool for changing the behavior of a targeted state, the transformation from 'comprehensive sanctions' to 'smart sanctions' is gaining the status of a legitimate method for punishing states that do not conform to international norms, in this paper's particular context the nonproliferation of weapons of mass destruction. The five permanent members of the United Nations Security Council proved that they could come to an accord on imposing economic sanctions rather than adopting resolutions on waging military war with targeted states. The North Korean nuclear issue has been the biggest security threat to countries in the region, even for China, out of fear that further development of nuclear weapons in North Korea might lead to a 'domino effect' of nuclear proliferation in the Northeast Asia region. Economic sanctions were adopted by the UNSC as early as 2006, after the first North Korean nuclear test, and the sanctions measures have been continually strengthened at each stage of North Korean weapons development. While the effectiveness of early sanctions on North Korea was dubious, recent sanctions that limit North Korea's exports of coal and imports of oil seem to have had an impact on the regime, inducing Kim Jong-un to commit to peaceful talks since 2018. The purpose of this paper is to add a variable to the factors determining the success of economic sanctions on North Korea: preventing North Korea's evasion efforts by conducting illegal transshipments at sea. I first analyze the cause of the recent success of the economic sanctions that led Kim Jong-un to engage in talks and then add the maritime element to the argument. There are three conditions for the success of the sanctions regime: (1) smart sanctions targeting commodities and support groups (elites) vital to regime survival, (2) China's faithful participation in the sanctions regime, and finally (3) preventing North Korea's maritime evasion efforts.

The Expression of Adhesion Molecules on Alveolar Macrophages and Lymphocytes and Soluble ICAM-1 Level in Serum and Bronchoalveolar Lavage (BAL) Fluid of Patients with Diffuse Interstitial Lung Diseases (DILD) (간질성 폐질환환자들의 기관지 폐포세척액내 폐포 대식세포와 임파구의 접착분자 발현 및 Soluble ICAM-1 농도에 관한 연구)

  • Kim, Dong-Soon;Choi, Kang-Hyun;Yeom, Ho-Kee;Park, Myung-Jae;Lim, Chai-Man;Koh, Yoon-Suck;Kim, Woo-Sung;Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases / v.42 no.4 / pp.569-583 / 1995
  • Background: The expression of adhesion molecules on the cell surface is important in the movement of cells and the modulation of the immune response. DILD starts as an alveolitis and progresses to pulmonary fibrosis, so the expression of adhesion molecules in these patients is expected to be increased. There are several reports about adhesion molecules in DILD in terms of the percentage of positive cells on immunostaining, in which the interpretation is subjective and the data were variable. Methods: We therefore measured the relative median fluorescence intensity (RMFI), the ratio of the fluorescence intensity (FI) emitted by the bound primary monoclonal antibody to the FI emitted by the isotypic control antibody, for cells in the BALF of 28 patients with DILD (IPF: 10, collagen disease: 7, sarcoidosis: 9, hypersensitivity pneumonitis: 2) and 9 healthy controls. Results: The RMFI of ICAM-1 on AM (3.30 ± 1.16) and lymphocytes (5.39 ± 0.70) in DILD was significantly higher than in normal controls (0.93 ± 0.18 and 1.06 ± 0.21, respectively; p=0.001, p=0.003). The RMFI of CD18 on lymphocytes was also higher (24.9 ± 14.9) than normal (4.59 ± 3.77, p=0.0023). There was a correlation between the RMFI of ICAM-1 on AM and the % of AM (r=-0.66, p=0.0001) and of lymphocytes (r=0.447, p=0.0116) in BALF. The RMFI of ICAM-1 on lymphocytes also correlated (r=0.593, p=0.075) with the % of IL-2R(+) lymphocytes in BALF. Soluble ICAM (sICAM) in serum was also significantly elevated in DILD (499.7 ± 222.2 ng/ml) compared to normal (199.0 ± 38.9) (p=0.00097), and sICAM in BAL fluid was likewise significantly higher than in the normal control group (41.8 ± 23.0 ng/ml vs 20.1 ± 13.6 ng/ml). There was a significant correlation between the sICAM level in serum and the expression of ICAM-1 on AM (r=0.554, p=0.0259). Conclusion: These data suggest that in DILD the expression of adhesion molecules is increased on AM and BAL lymphocytes, with elevated serum sICAM, and these parameters may be useful in determining disease activity.
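
For readers unfamiliar with the RMFI measure defined above, here is a minimal sketch, assuming a simple per-sample workflow and using illustrative intensities (not the authors' data): the ratio of median fluorescence with the specific antibody to that with the isotype control, followed by a correlation against BAL cell percentages.

```python
# Minimal sketch of the RMFI definition in the abstract and a simple correlation check.
# All numbers are hypothetical placeholders.
import numpy as np
from scipy import stats

def rmfi(specific_fi, isotype_fi):
    """Relative median fluorescence intensity: median(specific) / median(isotype)."""
    return np.median(specific_fi) / np.median(isotype_fi)

# Hypothetical flow-cytometry intensities for ICAM-1 on alveolar macrophages
specific = np.array([310.0, 295.0, 330.0, 280.0, 305.0])
isotype = np.array([95.0, 100.0, 90.0, 105.0, 98.0])
print(f"RMFI (ICAM-1 on AM) = {rmfi(specific, isotype):.2f}")

# Correlation between per-patient RMFI and % lymphocytes in BAL fluid (illustrative values)
rmfi_per_patient = np.array([3.1, 2.4, 4.0, 1.8, 3.6, 2.9])
lymphocyte_pct = np.array([22.0, 15.0, 30.0, 10.0, 27.0, 20.0])
r, p = stats.pearsonr(rmfi_per_patient, lymphocyte_pct)
print(f"r = {r:.3f}, p = {p:.4f}")
```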

Comparison of Imposed Work of Breathing Between Pressure-Triggered and Flow-Triggered Ventilation During Mechanical Ventilation (기계환기시 압력유발법과 유량유발법 차이에 의한 부가적 호흡일의 비교)

  • Choi, Jeong-Eun;Lim, Chae-Man;Koh, Youn-Suck;Lee, Sang-Do;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases / v.44 no.3 / pp.592-600 / 1997
  • Background: The level of imposed work of breathing (WOB) is important for patient-ventilator synchrony and during weaning from mechanical ventilation. The triggering method and the sensitivity of the demand system are important determining factors of the imposed WOB. The flow triggering method is available on several modern ventilators and is believed to impose less work on a patient-triggered breath than the pressure triggering method. We intended to compare the level of imposed WOB between the two triggering methods and also at different sensitivity levels within each method (0.7 L/min vs 2.0 L/min on flow triggering; -1 cmH2O vs -2 cmH2O on pressure triggering). Methods: The subjects were 12 patients (64.8 ± 4.2 yrs) on mechanical ventilation who were stable in respiratory pattern on CPAP 3 cmH2O. Four different triggering sensitivities were applied in random order. For determination of the imposed WOB, tracheal end pressure was measured through the monitoring lumen of a Hi-Lo Jet tracheal tube (Mallinckrodt, New York, USA) using a pneumotachograph/pressure transducer (CP-100 pulmonary monitor, Bicore, Irvine, CA, USA). Other respiratory mechanics data were also obtained with the CP-100 pulmonary monitor. Results: The imposed WOB was decreased by 37.5% at 0.7 L/min on flow triggering compared to -2 cmH2O on pressure triggering, and also decreased by 14% at -1 cmH2O compared to -2 cmH2O on pressure triggering (p < 0.05 for each). The PTP (pressure-time product) was also decreased significantly at 0.7 L/min on flow triggering and at -1 cmH2O on pressure triggering compared to -2 cmH2O on pressure triggering (p < 0.05 for each). The proportion of imposed WOB in total WOB ranged from 37% to 85%, with no significant changes among the different methods and sensitivities. The physiologic WOB showed no significant changes among the different triggering methods and sensitivities. Conclusion: To reduce the imposed WOB, flow triggering with a sensitivity of 0.7 L/min would be a better method than pressure triggering with a sensitivity of -2 cmH2O.
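
As background to the imposed WOB and PTP values above, the following rough sketch shows the definitions as commonly stated (an assumption on my part, not the CP-100 monitor's proprietary algorithm): imposed WOB as the pressure-volume integral of the tracheal pressure drop below baseline during inspiration, and PTP as the corresponding pressure-time integral.

```python
# Rough sketch of the commonly used definitions; signals and baseline are hypothetical.
import numpy as np

def imposed_wob(p_tracheal, flow, dt, p_baseline=0.0):
    """Joules per breath: integral of (baseline - Ptr) dV over the inspiratory phase."""
    insp = flow > 0.0                                      # inspiratory portion only
    dv = flow[insp] * dt                                   # volume increments (L)
    dp = np.clip(p_baseline - p_tracheal[insp], 0, None)   # pressure deficit (cmH2O)
    return np.sum(dp * dv) * 0.098                         # cmH2O*L -> Joules

def pressure_time_product(p_tracheal, dt, p_baseline=0.0):
    """PTP: integral of the pressure deficit below baseline over time (cmH2O*s)."""
    dp = np.clip(p_baseline - p_tracheal, 0, None)
    return np.sum(dp) * dt

# Hypothetical 1-second inspiration sampled at 100 Hz
dt = 0.01
t = np.arange(0, 1, dt)
flow = 0.5 * np.sin(np.pi * t)      # L/s, crude inspiratory flow shape
p_tr = -1.5 * np.sin(np.pi * t)     # cmH2O, tracheal pressure drop below baseline

print(f"imposed WOB ~ {imposed_wob(p_tr, flow, dt):.3f} J/breath")
print(f"PTP ~ {pressure_time_product(p_tr, dt):.2f} cmH2O*s")
```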

Clinical Study of Corrosive Esophagitis (부식성 식도염에 관한 임상적 고찰)

  • 이원상;정승규;최홍식;김상기;김광문;홍원표
    • Proceedings of the KOR-BRONCHOESO Conference / 1981.05a / pp.6-7 / 1981
  • With the improvement of the living standard and the educational level of the people, there is increasing awareness of the dangers of toxic substances and lethal drugs. In addition, governmental control of these substances has led to a progressive decrease in accidents with corrosive substances. However, there are still sporadic suicidal attempts with these substances due to the imbalance between cultural development in society and individual emotion. The problem is compounded by the variety of corrosive agents easily available to the people owing to considerable industrial development and industrialization. Salzen (1920) and Bokey (1924) were pioneers on the subject of corrosive esophagitis and esophageal stenosis using the dilatation method. Since then there has been continuing progress on the subject, with research on various acid (Pitkin, 1935; Carmody, 1936) and alkali (Tree, 1942; Tucker, 1951) corrosive agents, and the use of steroids (Spain, 1950) and antibiotics. Recently, early esophagoscopic examination has been emphasized for the purpose of determining the course of treatment in corrosive esophagitis patients. In order to find an effective treatment for such patients in the future, the authors selected 96 corrosive esophagitis patients who were admitted and treated at the ENT department of Severance Hospital from 1971 to March 1981 for a clinical study. 1. Sex incidence……male : female = 1 : 1.7; age incidence……21-30 years age group, 38 cases (39.6%). 2. Suicidal attempt……80 cases (83.3%); accidental ingestion……16 cases (16.7%). Among those who ingested the substance accidentally, children below ten years were most numerous, with nine patients. 3. Incidence of agents……acetic acid……41 cases (41.8%), lye……20 cases (20.4%), HCl……17 cases (17.3%). There was a trend of rapid rise in the incidence of acidic corrosive agents, especially acetic acid. 4. Lavage……57 cases (81.1%). 5. Nasogastric tube insertion……80 cases (83.3%); no insertion……16 cases (16.7%): late admittance……10 cases, failure……4 cases, other……2 cases. 6. Tracheostomy……17 cases (17.7%): respiratory problems (75.0%), mental problems (25.0%). 7. Early endoscopy……11 cases (11.5%), within 48 hours……6 cases (54.4%). Endoscopic results: moderate mucosal ulceration……8 cases (72.7%), mild mucosal erythema……2 cases (18.2%), severe mucosal ulceration……1 case (9.1%); among those who underwent early endoscopic examination, 6 patients were confirmed to have mild lesions and were discharged after endoscopy. The average period of admission in the cases of nasogastric tube insertion was 4 weeks. 8. Nasogastric tube indwelling period……average 11.6 days; recently, our treatment of corrosive esophagitis patients with an indwelling nasogastric tube has been determined according to the findings of early endoscopy. 9. Patients in whom steroid administration was withheld or delayed……7 cases (48.9%); causes: kind of drug (acid, unknown)……12 cases, late admittance……11 cases, mild case……9 cases, contraindication……7 cases, other……8 cases. 10. Management of stricture: bougienage……7 cases, feeding gastrostomy……6 cases, other surgical management……4 cases. 11. Complications……27 cases (28.1%): cardio-pulmonary……10 cases, visceral rupture……8 cases, massive bleeding……6 cases, renal failure……4 cases, other……2 cases; death or moribund discharge……8 cases. 12. Number of follow-up cases……23 cases; esophageal stricture……13 cases, and site of stricture: hypopharynx……1 case, mid third of esophagus……5 cases, upper third of esophagus……3 cases, lower third of esophagus……3 cases, pylorus……1 case, diffuse esophageal stenosis……1 case.

Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook;Park, Sungbum
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.151-171 / 2014
  • Firms today seek management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce capital cost. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the internet. It embodies the idea of on-demand, pay-per-use, utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industries and e-commerce. In this study, therefore, we seek to quantify the value that SaaS has on business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing from prior literature on SaaS, technology maturity, and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as the low-cost strategy and the differentiation strategy. Finally, we consider customer acquisition as the business performance measure. In this sense, the specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost strategy and the differentiation strategy of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collects data on SaaS providers and their lines of applications registered in the database of CNK (Commerce net Korea) in Korea, using a questionnaire administered by a professional research institution. The unit of analysis in this study is the SBU (strategic business unit) within the software provider. A total of 199 SBUs are used for analyzing and testing our hypotheses. With regard to the measurement of firm strategy, we take three measurement items for the differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has a diversified supply chain), and the number of specialized experts, and two items for the low-cost strategy, namely the subscription fee and the initial set-up fee. We employ a hierarchical regression analysis technique for testing the moderation effects of SaaS technology maturity and follow Baron and Kenny's procedure for determining whether firm strategies affect customer acquisition through technology maturity. The empirical results revealed, first, that when the differentiation strategy is applied to attain business performance such as customer acquisition, the effect of the strategy is moderated by the technology maturity level of the SaaS provider. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they can acquire more customers when their level of SaaS technology maturity is higher rather than lower. Second, the results indicate that pursuing a differentiation strategy or a low-cost strategy effectively helps SaaS providers obtain customers, which means that continuously differentiating their service from others or lowering their service fees (subscription fee or initial set-up fee) is helpful for business success in terms of acquiring customers. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition. That is, based on our research design, customers usually perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this, in turn, affects their decision on whether or not to subscribe.
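
The moderation and mediation analyses described above can be illustrated with a short, hedged sketch: hierarchical regression with an interaction term for the moderation test, and Baron and Kenny's steps for mediation. Variable names and data are synthetic placeholders, not the study's CNK dataset.

```python
# Illustrative sketch of the moderation (hierarchical regression with interaction) and
# Baron & Kenny mediation steps named in the abstract, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 199  # number of SBUs in the study
df = pd.DataFrame({
    "differentiation": rng.normal(size=n),   # differentiation strategy score (placeholder)
    "low_cost": rng.normal(size=n),          # low-cost strategy score (placeholder)
    "maturity": rng.normal(size=n),          # SaaS technology maturity level (placeholder)
})
df["customer_acq"] = (0.3 * df.differentiation + 0.2 * df.low_cost + 0.25 * df.maturity
                      + 0.2 * df.differentiation * df.maturity + rng.normal(size=n))

# Moderation: step 1 main effects, step 2 add the interaction term
m1 = smf.ols("customer_acq ~ differentiation + maturity", data=df).fit()
m2 = smf.ols("customer_acq ~ differentiation * maturity", data=df).fit()
print("Delta R^2 from interaction:", round(m2.rsquared - m1.rsquared, 4))

# Baron & Kenny mediation: X -> M, X -> Y, then Y on X and M together
a = smf.ols("maturity ~ low_cost", data=df).fit()
c = smf.ols("customer_acq ~ low_cost", data=df).fit()
cprime = smf.ols("customer_acq ~ low_cost + maturity", data=df).fit()
print("c (total effect):", round(c.params["low_cost"], 3),
      "c' (direct effect):", round(cprime.params["low_cost"], 3))
```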

Intelligent VOC Analyzing System Using Opinion Mining (오피니언 마이닝을 이용한 지능형 VOC 분석시스템)

  • Kim, Yoosin;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.113-125 / 2013
  • Every company wants to know its customers' requirements and makes an effort to meet them. Because of this, communication between customers and companies has become a core competency of business, and its importance is increasing continuously. There are several strategies for finding customers' needs, but VOC (voice of customer) is one of the most powerful communication tools, and gathering VOC through several channels such as telephone, post, e-mail, and websites is very meaningful. Thus, most companies gather VOC and operate a VOC system. VOC is important not only to business organizations but also to public organizations such as governments, educational institutes, and medical centers that must drive up public service quality and customer satisfaction. Accordingly, they build VOC gathering and analysis systems and then use them for creating and upgrading products and services. In recent years, innovations in the internet and ICT have created diverse channels such as SNS, mobile, websites, and call centers for collecting VOC data. Although a lot of VOC data is collected through these diverse channels, proper utilization is still difficult. This is because VOC data consists of very emotional content, delivered by voice or in informal text, and its volume is very large. Such unstructured big data is difficult for humans to store and analyze, so organizations need a system that automatically collects, stores, classifies, and analyzes unstructured VOC big data. This study proposes an intelligent VOC analysis system based on opinion mining that classifies unstructured VOC data automatically and determines the polarity as well as the type of VOC. The basis of the VOC opinion analysis system, a domain-oriented sentiment dictionary, is then created, and the corresponding stages are presented in detail. An experiment is conducted with 4,300 VOC data items collected from a medical website to measure the effectiveness of the proposed system, and they are used to develop the sentiment dictionary by determining the special sentiment vocabulary and its polarity values in the medical domain. The experiment shows that positive terms such as "칭찬, 친절함, 감사, 무사히, 잘해, 감동, 미소" have high positive opinion values, and negative terms such as "퉁명, 뭡니까, 말하더군요, 무시하는" have strongly negative opinion values. These terms are in general use, and the experimental results indicate a high probability of correct opinion polarity. Furthermore, the accuracy of the proposed VOC classification model is compared, and the highest classification accuracy of 77.8% is confirmed at an opinion classification threshold of -0.50. Through the proposed intelligent VOC analysis system, real-time opinion classification and the response priority of VOC can be predicted. Ultimately, the system is expected to catch customer complaints at an early stage and deal with them quickly with fewer staff operating the VOC system, freeing up human resources and time in the customer service department. Above all, this study is a new attempt at automatically analyzing unstructured VOC data using opinion mining, and it shows that the system can be used to classify the positive or negative polarity of VOC opinions. It is expected to suggest a practical framework for VOC analysis for diverse uses, and the model can serve as a real VOC analysis system if implemented. Despite the experimental results and expectations, this study has several limitations. First of all, the sample data were collected from only one hospital website. This means that the sentiment dictionary built from the sample data may lean too heavily toward that hospital and website. Therefore, future research should consider several channels, such as call centers and SNS, and other domains such as government, financial companies, and educational institutes.
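
A minimal sketch of the dictionary-based scoring the abstract describes, with illustrative polarity values and the reported -0.50 threshold; the paper's actual domain-oriented dictionary and weighting scheme will differ.

```python
# Toy dictionary-based VOC polarity classifier (illustrative values, not the paper's dictionary).
from typing import Dict

# Toy domain-oriented sentiment dictionary (term -> polarity value)
sentiment_dict: Dict[str, float] = {
    "칭찬": 0.9, "친절함": 0.8, "감사": 0.8, "감동": 0.9, "미소": 0.7,
    "퉁명": -0.8, "무시하는": -0.9, "뭡니까": -0.6,
}

def classify_voc(text: str, threshold: float = -0.50) -> str:
    """Sum matched term polarities and label the VOC as positive or negative."""
    score = sum(value for term, value in sentiment_dict.items() if term in text)
    # The abstract reports the best accuracy (77.8%) at a classification threshold of -0.50
    return "negative" if score <= threshold else "positive"

print(classify_voc("상담원이 퉁명스럽고 무시하는 태도였습니다"))   # -> negative
print(classify_voc("친절함에 감동했고 감사합니다"))                # -> positive
```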

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers / v.7 no.1 / pp.861-876 / 1965
  • During my stay in the Netherlands, I studied the following, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to make a unit hydrograph, but I explain here how to derive a unit hydrograph from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording a rainfall intensity averaging 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on the rainfall intensity throughout the catchment area. As it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff flow. I also tried to keep the difference between the calculated discharge amount and the measured discharge less than 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to principles of design presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the summation of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean tide is adequate for determining the sluice dimension, because the spring tide is the worst case and the neap tide the best condition for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is that point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with negligible daily variation, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h = \frac{V^2}{2g}$ and must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 times the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 times the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. As the maximum velocities are higher than this limit, we must use other construction methods for closing the gap. This can be done with dump cars from each side or by using a cable way.
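
Two of the calculations described above reduce to simple arithmetic: the two-hour unit hydrograph is obtained by dividing the direct-runoff ordinates by the rainfall intensity, and the gap velocity follows from the head difference via $h = \frac{V^2}{2g}$. A worked sketch with illustrative numbers (not the Naju or Mokpo data):

```python
# Worked sketch of the unit-hydrograph division and the head-velocity relation from the
# abstract; all numbers are illustrative placeholders.
import numpy as np

# (1) Unit hydrograph: subtract base flow, then divide by effective rainfall (here 9.4 mm/h)
total_flow = np.array([50.0, 180.0, 420.0, 310.0, 150.0, 70.0])   # m^3/s at 2-h steps
base_flow = 40.0
rain_intensity_mm_per_h = 9.4
direct_runoff = np.clip(total_flow - base_flow, 0, None)
unit_hydrograph = direct_runoff / rain_intensity_mm_per_h          # m^3/s per mm/h
print("unit hydrograph ordinates:", np.round(unit_hydrograph, 1))

# (2) Velocity in the closure gap from the head between outer (tidal) and inner water level
g = 9.81                              # m/s^2
head = 0.8                            # m, inner/outer water level difference (hypothetical)
velocity = np.sqrt(2 * g * head)      # from h = V^2 / (2g)
print(f"gap velocity ~ {velocity:.2f} m/s (barge closure limit cited is 3 m/s)")
```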

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they are based on strict assumptions. Such strict assumptions include linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. Moreover, SVM does not require many data samples for training, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification problems as much as SVM does for binary-class classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used for effective multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which can occur when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to the skewed boundary and thus reduce the classification accuracy of the classifier. SVM ensemble learning is one machine learning approach for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Boosting thus attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors over the multiple classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine the feasibility of MGM-Boost. Ten-fold cross-validation is performed three times with different random seeds in order to ensure that the comparison among the three different classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and then each set is in turn used as the test set while the classifier trains on the other nine sets. That is, cross-validated folds have been tested independently for each algorithm. Through these steps, we obtained the results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost is significantly different from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
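
The following is an illustrative sketch, not the paper's MGM-Boost implementation: a few rounds of an AdaBoost-style (SAMME) loop with SVM base classifiers on a synthetic imbalanced dataset, followed by the geometric-mean accuracy (the geometric mean of per-class recalls) that MGM-Boost is designed to improve.

```python
# Sketch of AdaBoost-style boosting with SVM base learners and geometric-mean accuracy.
# Synthetic data and a hand-rolled SAMME loop; this is not the paper's MGM-Boost code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)

weights = np.full(len(y), 1.0 / len(y))
learners, alphas = [], []
for _ in range(5):                                          # a few boosting rounds
    clf = SVC(kernel="linear").fit(X, y, sample_weight=weights)
    pred = clf.predict(X)
    err = np.sum(weights[pred != y]) / np.sum(weights)
    alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(2)  # SAMME weight, K = 3 classes
    weights *= np.exp(alpha * (pred != y))                   # up-weight misclassified samples
    weights /= weights.sum()
    learners.append(clf)
    alphas.append(alpha)

# Weighted vote of the boosted SVMs
votes = np.zeros((len(y), 3))
for clf, a in zip(learners, alphas):
    votes[np.arange(len(y)), clf.predict(X)] += a
ensemble_pred = votes.argmax(axis=1)

# Geometric-mean accuracy: geometric mean of the recall in each class
recalls = recall_score(y, ensemble_pred, average=None)
print("per-class recalls:", np.round(recalls, 3))
print("geometric-mean accuracy:", round(float(np.prod(recalls) ** (1 / len(recalls))), 3))
```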

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detection of market timing means determining when to buy and sell in order to get excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trading signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using a control function, it does not generate a trading signal when the pattern of the market is uncertain. Numeric data for rough set analysis must be discretized, because rough sets only accept categorical data for analysis. Discretization searches for proper "cuts" for numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each of the intervals. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews with experts. Minimum entropy scaling implements an algorithm based on recursively partitioning the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïvely scaling the data, then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance when using rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. The KOSPI 200 is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industry, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis and decision trees, experimenting with C4.5 for comparison purposes. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
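
As a concrete example of one of the four discretization methods listed above, here is a small sketch of equal frequency scaling: cuts are chosen so that roughly the same number of samples fall into each interval. The indicator values are hypothetical; the study's other methods (expert knowledge, minimum entropy, naïve/Boolean reasoning) are not shown.

```python
# Equal-frequency discretization sketch: quantile-based cuts, then interval labels.
# Indicator values are hypothetical placeholders, not the study's KOSPI 200 data.
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return interior cut points that split `values` into equally populated bins."""
    quantiles = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, quantiles)

def discretize(values, cuts):
    """Map each numeric value to the index of the interval it falls into."""
    return np.searchsorted(cuts, values, side="right")

# Hypothetical technical-indicator values (e.g., a momentum oscillator)
indicator = np.array([12.0, 55.3, 33.1, 78.4, 41.0, 66.7, 25.9, 90.2, 48.6, 59.5])
cuts = equal_frequency_cuts(indicator, n_intervals=3)
print("cuts:", np.round(cuts, 2))
print("categories:", discretize(indicator, cuts))
```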