• Title/Summary/Keyword: classification of pattern


Antibiotic Susceptibility of Salmonella spp. Isolated From Diarrhoea Patients in Seoul From 1996 to 2001 (서울 시내 설사환자에서 분리한 살모넬라의 항생제 감수성의 년도별 변화 추이)

  • 박석기;박성규;정지헌;진영희
    • Journal of Food Hygiene and Safety
    • /
    • v.17 no.2
    • /
    • pp.61-70
    • /
    • 2002
  • To investigate the classification and antibiotic resistance of Salmonella species, 718 isolates were obtained from diarrhoea patients in Seoul from 1996 to 2001. Two hundred and ninety-eight isolates (41.5%) were identified as Sal. Enteritidis, followed by Sal. Typhi with 218 isolates (30.4%) and Sal. Typhimurium with 87 isolates (12.1%). The isolates were most frequently resistant to tetracycline (32.7%), followed by streptomycin (28.0%), ticarcillin (18.1%) and ampicillin (12.4%). Among Sal. Enteritidis isolates, 34.7% were resistant to tetracycline, 32.3% to streptomycin, 23.2% to ticarcillin and 13.5% to ampicillin. Of Sal. Typhi isolates, 13.8% were resistant to streptomycin and 10.6% to tetracycline. Of Sal. Typhimurium isolates, 66.7% were resistant to tetracycline, 42.5% to streptomycin, 28.7% to ticarcillin, 26.4% to ampicillin and 17.2% to chloramphenicol. Of the 718 isolates, 324 (45.1%) were resistant to one or more drugs; of these, 64 (19.8%) were resistant to one drug, 132 (40.7%) to two drugs, 50 (15.4%) to three drugs, 27 (8.3%) to four drugs, 27 (8.3%) to five drugs and 22 (6.8%) to six drugs. The most prevalent multiple-resistance pattern was tetracycline-kanamycin (35.5%), followed by tetracycline-kanamycin-ticarcillin (8.3%) and tetracycline-kanamycin-ticarcillin-ampicillin (7.4%). The antibiotic resistance rate of Sal. Typhimurium was 73.6%, followed by Sal. Enteritidis at 53.7% and Sal. Typhi at 19.3%. Most resistant Sal. Enteritidis isolates were resistant to one or two drugs, whereas Sal. Typhi and Sal. Typhimurium were more often resistant to five (16.7%) or six drugs (26.6%). Resistance to older-generation antibiotics such as ampicillin, tetracycline and streptomycin increased year by year more than resistance to newer-generation antibiotics such as ceftriaxone, ciprofloxacin or cefoxitin.
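
The resistance-frequency and multidrug-pattern figures reported above are simple tallies over per-isolate susceptibility profiles. A minimal sketch of how such a tally could be computed, assuming a hypothetical list of per-isolate records (the field names and example data are illustrative, not the study's data):

```python
from collections import Counter

# Hypothetical per-isolate records: serovar plus the set of antibiotics it resists.
isolates = [
    {"serovar": "Enteritidis", "resistant_to": {"tetracycline", "kanamycin"}},
    {"serovar": "Typhimurium", "resistant_to": {"tetracycline", "streptomycin", "ticarcillin"}},
    {"serovar": "Typhi",       "resistant_to": set()},
    # ... one record per isolate
]

resistant = [rec for rec in isolates if rec["resistant_to"]]

# Distribution of isolates by the number of drugs resisted.
by_drug_count = Counter(len(rec["resistant_to"]) for rec in resistant)
# Frequency of each exact multidrug-resistance pattern.
by_pattern = Counter("-".join(sorted(rec["resistant_to"])) for rec in resistant)

total = len(resistant)
for n_drugs, count in sorted(by_drug_count.items()):
    print(f"resistant to {n_drugs} drug(s): {count} ({count / total:.1%})")
print("most common patterns:", by_pattern.most_common(3))
```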

On-Line Determination Steady State in Simulation Output (시뮬레이션 출력의 안정상태 온라인 결정에 관한 연구)

  • 이영해;정창식;경규형
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 1996.05a
    • /
    • pp.1-3
    • /
    • 1996
  • Automating experiments is an active area of research and development in simulation-based system analysis. In simulations of computer and communication systems, for example, running simulations over a large number of models calls for automated experiment control. Unless the experimental procedure — the number of runs, the run length, the data-collection method, and so on — is automated, the time and human resources required grow considerably and analyzing the output data also becomes difficult. To automate the experimental procedure while analyzing simulation output efficiently, the initial bias that arises in every simulation run must first be removed: only when the data used for output analysis are collected in the steady state, free of initial bias, is a valid interpretation of the real system possible. In practice, the most important and most difficult problem in simulation output analysis is estimating the steady-state mean of the stochastic process formed by the output data, together with a confidence interval (c.i.) for that mean. The information contained in a confidence interval tells the decision maker how accurately the mean can be estimated. However, because the output data obtained from a single simulation run are generally nonstationary and autocorrelated, classical statistical techniques cannot be applied directly, and specialized simulation output-analysis techniques are used instead. This paper proposes two new techniques for finding the truncation point needed to remove initial bias: one based on the Euclidean distance (ED) and one based on the backpropagation neural network (BNN) algorithm, which is widely used for pattern classification problems. Unlike most existing techniques, these methods require no pilot run and can determine the truncation point online during a single simulation run. Existing work on the truncation point is as follows. Conway's rule declares an observation the truncation point if it is neither the maximum nor the minimum of the subsequent data; by construction, it cannot determine the truncation point online. The Modified Conway Rule (MCR), by contrast, declares the current observation the truncation point if it is neither the maximum nor the minimum of the preceding data, and can therefore be applied online. The Crossings-of-the-Mean Rule (CMR) uses the cumulative mean and decides on the basis of how many times the observations cross that mean from above or below; a crossing count must be chosen, and a count chosen for one system is generally not applicable to other systems. The Cumulative-Mean Rule (CMR2) plots the grand cumulative mean of output data collected from several pilot runs and picks the steady-state point by eye; because it uses the cumulative mean of the means of data from several runs, the truncation point cannot be determined online, and the analyst must decide it subjectively from the graph. Welch's Method (WM) uses a Brownian-bridge statistic, exploiting the property that, as n approaches infinity, it converges to the Brownian bridge distribution; batches are formed from the simulation output data and one batch is used as the sample. Its drawbacks are a complicated algorithm and the need to estimate a value. The Law-Kelton Method (LKM) is based on regression theory: after the simulation ends, a regression line is fitted to the cumulative-mean data, and the point at which the null hypothesis of zero slope is accepted is taken as the truncation point; because the data are processed in the reverse of the order in which they were collected, after the run ends, it cannot be applied online. Welch's Procedure (WP) determines the truncation point visually from moving averages of data collected over five or more simulation runs; because it relies on a repeated-deletion procedure, online determination of the truncation point is impossible, and a window size for the moving average must also be chosen. As this survey shows, existing methods are weak from the standpoint of online truncation-point determination during a single simulation run. Moreover, current commercial simulation software leaves the truncation point to the analyst's judgment, so it cannot be determined accurately and quantitatively for the system under study. When the user picks the truncation point arbitrarily, the initial-bias problem is unlikely to be handled effectively: either far more data than necessary are deleted, or too little is deleted to remove the bias. In addition, most existing methods require pilot runs to find the truncation point, that is, simulation runs performed only to locate the steady-state point; because these runs are not used for the output analysis itself, they entail a substantial loss of time.

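The Modified Conway Rule summarized in the abstract above has a very compact online form: the truncation point for removing initial bias is the first observation that is neither the running maximum nor the running minimum of the output seen so far. A minimal sketch of that rule only (not of the paper's proposed ED- or BNN-based methods), with illustrative data:

```python
def modified_conway_truncation(stream):
    """Return the index of the first observation that is neither the running
    maximum nor the running minimum of all earlier observations (Modified
    Conway Rule). Returns None if no such observation occurs."""
    running_min = float("inf")
    running_max = float("-inf")
    for i, y in enumerate(stream):
        if i > 0 and running_min < y < running_max:
            return i  # treat data from this index on as steady-state output
        running_min = min(running_min, y)
        running_max = max(running_max, y)
    return None

# Example: a transient ramp followed by noisy steady-state output.
output = [1.0, 2.0, 3.0, 4.0, 3.5, 4.2, 3.8, 4.1]
print(modified_conway_truncation(output))  # -> 4
```

Because the rule needs only the running minimum and maximum, it can be applied during a single simulation run, which is the online property the abstract emphasizes.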

Research on The Utility of Acquisition of Oblique Views of Bilateral Orbit During the Dacryoscintigraphy (눈물길 조영검사 시 양측 안 와 사위 상 획득의 유용성에 대한 연구)

  • Park, Jwa-Woo;Lee, Bum-Hee;Park, Seung-Hwan;Park, Su-Young;Jung, Chan-Wook;Ryu, Hyung-Gi;Kim, Ho-Shin
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.76-81
    • /
    • 2014
  • Purpose: During dacryoscintigraphy, findings such as lachrymal duct deformities and the passage of tracer into the nasal cavity, as well as epiphora occurring during the test, are not always expressed precisely on the anterior image alone, and a method is needed to distinguish passage into the naso-lachrymal duct from epiphora. We therefore evaluated the utility of acquiring oblique views of both orbits in addition to the anterior view. Materials and Methods: The subjects were 78 patients examined for epiphora due to lachrymal duct obstruction from January 2013 to August 2013 (mean age 56.96 ± 13.36 years). Using a micropipette, 1-2 drops of 99mTcO4- (3.7 MBq, 0.1 mCi; 10 μL per drop) were instilled into the inferior conjunctival fold, and a dynamic acquisition was performed for 20 minutes at 20 frames per minute. If passage from both eyes into the nasal cavity was confirmed immediately after the dynamic acquisition, oblique views were obtained at once; if passage was not seen on either side, oblique views of the orbits were obtained after a frontal image at 40 minutes. A gamma camera with a pin-hole collimator (Siemens Orbiter, Hoffman Estates, IL, USA) was used. Results: Of the 78 patients who underwent dacryoscintigraphy, 35 showed passage into the nasal cavity on the anterior view. Of these 35 patients, 15 showed passage on both sides, and the oblique views demonstrated the passage pattern more clearly in 8 of them on both sides, in 2 on the left side, and in 1 on the right side. Twenty patients showed passage on one side only, and in 10 of them the oblique views showed the passage more clearly than the anterior view. Thirteen patients showed possible passage, and 30 patients showed no evidence of tracer movement. In total, 21 of the 35 patients (60%) showed a clearer passage pattern on the additional oblique views than on the anterior view. On a 5-point scale assessing usefulness, readers responded that the oblique views helped diagnose passage, delayed passage, and obstruction of the naso-lachrymal duct by revealing areas not well seen on the anterior view, and that in cases of epiphora the oblique views were better at distinguishing passage into the naso-lachrymal duct from flow over the skin (anterior: 4.14 ± 0.3, oblique: 4.55 ± 0.4). Conclusion: Acquiring oblique views of both orbits in addition to the anterior view during dacryoscintigraphy is considered to improve diagnostic reading, because areas not visible on the anterior view can be assessed, showing whether the tracer has drained beyond the naso-lachrymal duct and how epiphora flows over the skin.


Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil;Kim, Nam Hee;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.171-183
    • /
    • 2012
  • With the development of technology, the exhibition industry has received much attention from governments and companies as an important marketing channel, and exhibitors likewise regard exhibitions as a new channel for marketing activities. However, the growth of exhibitions in net square feet and in visitor numbers naturally creates a competitive environment. To use effective marketing tools in this environment, exhibitors have planned and implemented many promotion techniques; in particular, a smart exhibition environment that provides visitors with real-time information lets them run various kinds of promotion. Promotions that ignore visitors' diverse needs and preferences, however, lose their original purpose: indiscriminate promotions feel like spam to visitors and fail to achieve their goals. What is needed is an STP approach that segments visitors on sound evidence (Segmentation), selects target visitors (Targeting), and delivers appropriate services to them (Positioning). To apply an STP strategy in the smart exhibition environment, three characteristics must be considered. First, an exhibition is a market event of a specific duration held at intervals, so exhibitors plan different events and promotions at each exhibition; a system adopting a traditional STP strategy must therefore work with limited information on existing visitors while still guaranteeing performance. Second, cluster analysis, a standard data-mining technique, can be adopted for automatic segmentation. In the smart exhibition environment, visitor information can be acquired in real time and services based on it must also be provided in real time; yet many clustering algorithms have scalability problems on large databases and require domain knowledge to set their input parameters, so a suitable methodology must be selected and fitted to support real-time service. Finally, the data available in the smart exhibition environment should be exploited: because useful data such as booth-visit records and event-participation records exist, the STP strategy for the smart exhibition rests on behavioral as well as demographic segmentation. In this study, we therefore analyze a case of a promotion methodology with which exhibitors can provide differentiated services to segmented visitors in the smart exhibition environment. First, considering the characteristics of the smart exhibition environment, we derive evidence for segmentation and fit a clustering methodology suitable for real-time service. Many studies have classified visitors, but we adopt a segmentation based on visitors' behavioral traits: through direct observation, Veron and Levasseur classified visitors into four groups, likening their traits to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time per visit, can be estimated in the smart exhibition environment, their scheme provides a theoretical and practical basis for our system. Next, we build a pilot system that automatically selects suitable visitors according to the objectives of each promotion and instantly sends promotion messages to them.
That is, based on our segmentation methodology, the system automatically selects suitable visitors according to the characteristics of each promotion. We applied this system to a real exhibition environment and analyzed the resulting data. Classifying visitors into four types by their behavioral patterns in the exhibition yields insights for researchers building smart exhibition environments and promotion strategies fitted to each cluster. First, visitors of the ANT type show high response rates to promotion messages except for experience promotions: they are attracted by tangible benefits in the exhibition area and dislike promotions that take a long time. By contrast, visitors of the GRASSHOPPER type respond well only to experience promotions. Second, visitors of the FISH type favor coupon and content promotions; although they do not look at exhibits in detail, they prefer to obtain further information such as brochures, so exhibitors who want to deliver a lot of information in a limited time should pay attention to this type. These promotion strategies are expected to give exhibitors useful insights when planning and organizing their activities and to improve their performance.
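As background for the segmentation step described above, the two behavioral traits the abstract singles out (number of booth visits and average time per visit) can be clustered into four groups with a standard algorithm. A minimal sketch, assuming hypothetical visitor features and using k-means purely for illustration; it is not the pilot system's actual implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-visitor features: [number of booth visits, average visit time in minutes].
visitors = np.array([
    [25, 1.5], [20, 2.0], [22, 8.5], [24, 9.5],
    [ 6, 9.0], [ 5, 8.0], [ 4, 2.0], [ 3, 1.5],
])

# Standardize both traits so they contribute comparably, then form four clusters,
# mirroring the four behavioral visitor types used in the study.
features = StandardScaler().fit_transform(visitors)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for (visits, minutes), seg in zip(visitors, segments):
    print(f"visits={visits:4.0f}, avg_time={minutes:4.1f} min -> segment {seg}")
```

A promotion engine could then map each cluster to the message types it responds to best (for example, experience promotions for one cluster, coupon or content promotions for another), along the lines of the response-rate findings above.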

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, used in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services in combination with big data. The data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-oriented smart systems, since they enable customized intelligent services through analysis of user environments and patterns. Recently this has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as addressing traffic and crime problems with CCTV. In particular, when planning underground services or building a passenger-flow control information system to improve the convenience of citizens and commuters under congested public transportation such as subways and urban railways, both the ease of securing real-time service data and the stability of security must be considered together. Previous studies that use image data, however, are limited by privacy issues and by degraded object-detection performance under abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be used effectively to build intelligent public services for an unspecified public. We use IoT-based infrared sensor devices for an intelligent pedestrian-tracking system in a metro service that many people use daily, with the temperature data measured by the sensors transmitted in real time. The experimental environment for collecting the real-time sensor readings was set up at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where passenger flow is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data were preprocessed by setting reference values for the 16 areas and computing, per unit time, the difference between each area's temperature and its reference value; this maximizes the signal from movement within the detection area. In addition, the values were scaled up by a factor of 10 to reflect temperature differences between areas more sensitively: for example, a sensor reading of 28.5℃ at a given time was analyzed as 285. The data collected from the sensors thus have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), which combines a CNN, with its superior performance in image classification, and an LSTM, which is especially suitable for analyzing time-series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas.
We verified the proposed model by comparing its performance with other artificial-intelligence algorithms: the Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM) and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In the experiments, the proposed CNN-LSTM hybrid model showed the best predictive performance among MLP, LSTM and RNN-LSTM. By using the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency-response services, are expected to be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and only data collected over a short period were used for prediction, so application in other environments remains to be verified. In the future, the proposed model is expected to become more reliable if experimental data are collected in more varied environments or if the training data are supplemented with measurements from other sensors.
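A minimal Keras sketch of a CNN-LSTM of the kind described above, taking a short sequence of 4×4 thermal frames and regressing the number of passing persons. The sequence length, layer sizes, and use of TimeDistributed convolutions are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 20  # assumed number of consecutive 4x4 frames per sample

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 4, 4, 1)),
    # CNN part: extract spatial features from each 4x4 frame independently.
    layers.TimeDistributed(layers.Conv2D(16, kernel_size=2, padding="same", activation="relu")),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM part: model how the spatial features evolve over time.
    layers.LSTM(32),
    # Regression head: predicted passenger count for the detection area.
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Toy call on random data, just to show the expected tensor shapes.
x = np.random.rand(8, SEQ_LEN, 4, 4, 1).astype("float32")
y = np.random.randint(0, 5, size=(8, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
model.summary()
```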

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Exposed to increasingly competitive environments, most industries have recently become aware of the importance of customer lifetime value. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers, because retaining existing customers is far more economical than acquiring new ones; indeed, the cost of acquiring a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent churn and improve retention rates are known not only to increase profitability but also to improve their brand image through higher customer satisfaction. Customer churn prediction, long a sub-area of CRM research, has recently become more important as a big-data-based performance-marketing theme owing to advances in business machine learning. Research on churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. These studies, however, focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features effective in forecasting churn, or developing new ensemble techniques, and their practical utility was limited because most treated the entire customer base as a single group when developing the predictive model. The main purpose of existing research, in other words, was to improve model performance, and relatively little work has aimed at improving the overall churn prediction process. In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat the entire customer base as a single group. For effective churn prediction in heterogeneous businesses, it is desirable to segment customers by classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement, because clustering is a mechanical, exploratory grouping technique that computes distances over input variables and does not reflect a firm's strategic intent, such as its view of loyalty. Assuming that successful churn management depends more on improving the overall process than on the performance of the model itself, this study proposes a segment-based churn prediction process, CCP/2DL (Customer Churn Prediction based on Two-Dimensional Loyalty segmentation), built on two-dimensional customer loyalty.
CCP/2DL is a churn prediction process that segments customers along two loyalty dimensions, quantitative and qualitative, performs a secondary grouping of the segments according to their churn patterns, and then independently applies a separate churn prediction model to each churn-pattern group. To assess the relative merit of the proposed process, its performance was compared with the two most commonly applied approaches: the general churn prediction process and the clustering-based churn prediction process. The general process here refers to predicting churn for the customer base as a single group with a commonly used machine learning model, while the clustering-based process first segments customers with clustering techniques and then builds a churn prediction model for each group. In an application with a global NGO, the proposed CCP/2DL showed better churn prediction performance than the other methodologies. The proposed process is not only effective for predicting churn but can also serve as a strategic basis for gaining a variety of customer insights and carrying out related performance-marketing activities.
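A minimal sketch of the segment-then-model idea behind CCP/2DL: customers are first placed on a two-dimensional loyalty grid (quantitative versus qualitative loyalty) and a separate churn classifier is trained for each cell. The loyalty thresholds, features, choice of gradient boosting, and the omission of the secondary churn-pattern grouping are all illustrative simplifications, not the paper's exact procedure:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 400

# Hypothetical customer table: two loyalty scores, one behavioral feature, churn label.
customers = pd.DataFrame({
    "quant_loyalty": rng.random(n),              # e.g. normalized purchase frequency/amount
    "qual_loyalty":  rng.random(n),              # e.g. survey-based attitudinal score
    "recency_days":  rng.integers(1, 365, n),
    "churned":       rng.integers(0, 2, n),
})

# Two-dimensional loyalty segmentation: split each axis at its median (a 2x2 grid).
customers["segment"] = (
    (customers["quant_loyalty"] > customers["quant_loyalty"].median()).astype(int) * 2
    + (customers["qual_loyalty"] > customers["qual_loyalty"].median()).astype(int)
)

# Train an independent churn model for each loyalty segment.
features = ["quant_loyalty", "qual_loyalty", "recency_days"]
segment_models = {
    seg: GradientBoostingClassifier(random_state=0).fit(grp[features], grp["churned"])
    for seg, grp in customers.groupby("segment")
}

# Scoring routes each customer to the model of his or her segment.
new_customer = customers.iloc[[0]]
seg = int(new_customer["segment"].iloc[0])
print("churn probability:", segment_models[seg].predict_proba(new_customer[features])[0, 1])
```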

Radiation Therapy for Carcinoma of the Oropharynx (구인두암의 방사선치료)

  • Park, In-Kyu;Kim, Jae-Choel
    • Radiation Oncology Journal
    • /
    • v.14 no.2
    • /
    • pp.95-103
    • /
    • 1996
  • Purpose: A retrospective analysis of patients with oropharyngeal carcinoma treated with radiation was performed to assess treatment results and patterns of failure, and to identify factors that might influence survival. Materials and Methods: From March 1985 through June 1993, 53 patients with oropharyngeal carcinoma were treated with either radiation therapy alone or a combination of neoadjuvant chemotherapy and radiation therapy at the Department of Radiation Oncology, Kyungpook National University Hospital. Patients' ages ranged from 31 to 73 years, with a median of 54 years. There were 47 men and 6 women. Forty-two patients (79.2%) had squamous cell carcinoma, 10 patients (18.9%) had undifferentiated carcinoma, and 1 patient (1.9%) had adenoid cystic carcinoma. Two patients had stage I disease, 12 stage II, 12 stage III, and 27 stage IV. According to the TNM classification, patients were distributed as follows: T1 7, T2 28, T3 10, T4 7, TX 1; N0 17, N1 13, N2 21, N3 2. The primary tumor sites were the tonsillar region in 36 patients (67.9%), the base of the tongue in 12 patients (22.6%), and the soft palate in 5 patients (9.4%). Twenty-five patients were treated with radiation therapy alone and twenty-eight with one to three courses of chemotherapy followed by radiation therapy. The chemotherapeutic regimens were either CF (cisplatin and 5-fluorouracil) or CVB (cisplatin, vincristine and bleomycin). Radiation therapy was delivered at 180-200 cGy daily, five times a week, using 6 MV X-rays with or without 8-10 MeV electron beams. The tumor dose ranged from 4500 cGy to 7740 cGy, with a median of 7100 cGy. Follow-up ranged from 4 to 99 months, with a median of 21 months. Results: Thirty-seven patients (69.8%) achieved a complete response (CR) and 16 patients (30.2%) a partial response (PR) after radiation therapy. The overall survival rates were 47% at 2 years and 42% at 3 years, and the median survival time was 23 months. Overall stage (p=0.02) and response to radiation therapy (p=0.004) were significant prognostic factors for overall survival. The 2-year disease-free survival rate was 45.5%. T stage (p=0.03), N stage (p=0.04) and overall stage (p=0.04) were significant prognostic factors for disease-free survival. Age, sex, histology, primary tumor site, radiation dose, and the addition of chemotherapy were not significantly associated with disease-free survival. Among the 32 evaluable patients with a CR to radiation therapy, 12 were considered to have failed; of these, 8 failed locoregionally and 4 distantly. Conclusion: T stage, N stage and overall stage were significant prognostic factors for disease-free survival in the treatment of oropharyngeal cancer. Since locoregional failure was the predominant pattern of relapse, methods to improve locoregional control with radiation therapy should be pursued. More controlled clinical trials should be completed before chemotherapy is accepted as part of the treatment of oropharyngeal carcinoma.

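The survival rates and prognostic-factor tests reported above are the standard outputs of a Kaplan-Meier / log-rank analysis. A minimal sketch of such an analysis with the lifelines library, on entirely hypothetical follow-up data (not the study's patients), is shown only to illustrate the computation:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months of follow-up, event indicator (1 = relapse/death),
# and a binary grouping by overall stage (1 = stage III-IV, 0 = stage I-II).
df = pd.DataFrame({
    "months":  [4, 9, 15, 21, 30, 45, 60, 12, 18, 24, 36, 48, 72, 99],
    "event":   [1, 1,  1,  1,  0,  0,  0,  1,  1,  0,  1,  0,  0,  0],
    "stage34": [1, 1,  1,  1,  1,  1,  1,  0,  0,  0,  0,  0,  0,  0],
})

# Kaplan-Meier estimate of survival for each stage group.
kmf = KaplanMeierFitter()
for label, group in df.groupby("stage34"):
    kmf.fit(group["months"], group["event"], label=f"stage34={label}")
    print(kmf.survival_function_.tail(1))

# Log-rank test for the prognostic effect of overall stage.
high, low = df[df.stage34 == 1], df[df.stage34 == 0]
result = logrank_test(high["months"], low["months"], high["event"], low["event"])
print("log-rank p-value:", result.p_value)
```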

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market we cannot construct a trading rule that consistently outperforms the average stock market return. This study proposes a machine learning algorithm to improve the trading performance of an intraday short volatility strategy that exploits the asymmetric volatility spillover effect, and analyzes the resulting performance improvement. In general, stock market volatility is negatively related to stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: increases and decreases in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, the VIX, the KOSPI 200 index, and the V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover from the VIX to the V-KOSPI 200. Interestingly, the spillover was also asymmetric: whereas a rise in the VIX is fully reflected in the opening volatility of the V-KOSPI 200, a fall in the VIX is only partially reflected at the open, and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for such an asymmetric spillover effect to exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday short volatility strategy (SVS) that sells the Korean volatility market short in the morning after US stock market volatility closes down, and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, with 68% of trades profitable. The strategy showed a higher average annual return of 129%, versus 33% for the benchmark. Its maximum drawdown (MDD) of -41% was also smaller in magnitude than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08. The Sharpe ratio considers return and risk simultaneously, being calculated as return divided by risk, so a higher Sharpe ratio indicates better performance when comparing strategies with different risk and return structures. Real-world trading also incurs costs, including brokerage and slippage; when trading costs are considered, the performance difference becomes clear, at 76% versus -10% average annual returns. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the V-KOSPI 200 (VK) open return at day t, and the output is the up/down classification of the VK open-to-close return at day t. The training period is 2008 to 2014 and the testing period is 2015 to 2018. The kernel functions are the linear, radial basis, and polynomial functions. We propose a modified short volatility strategy (m-SVS) that sells the VK in the morning when the SVM output is Down and takes no position when the SVM output is Up. The trading performance improved remarkably: over the testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy.
The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the risk factor, MDD, also improved significantly from -41% to -29%.
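A minimal sketch of the SVM signal described above: the three inputs (the VIX close-to-close and open-to-close returns at day t-1 and the V-KOSPI 200 open return at day t) classify whether the V-KOSPI 200 open-to-close return at day t will be up or down, and volatility is sold only on predicted-down days. The column names, synthetic data, and hyperparameters are illustrative assumptions, not the paper's dataset or tuned model:

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 2500  # roughly a decade of daily observations

# Synthetic stand-in for the daily input/target table described in the abstract.
data = pd.DataFrame({
    "vix_cc_ret_lag1": rng.normal(0, 0.05, n),  # VIX close-to-close return, day t-1
    "vix_oc_ret_lag1": rng.normal(0, 0.05, n),  # VIX open-to-close return, day t-1
    "vk_open_ret":     rng.normal(0, 0.03, n),  # V-KOSPI 200 open return, day t
    "vk_oc_up":        rng.integers(0, 2, n),   # target: VK open-to-close up (1) / down (0)
})

X = data[["vix_cc_ret_lag1", "vix_oc_ret_lag1", "vk_open_ret"]]
y = data["vk_oc_up"]
train, test = slice(0, 1750), slice(1750, None)  # chronological split, no shuffling

# RBF-kernel SVM; linear or polynomial kernels can be swapped in via kernel=...
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[train], y[train])

# Modified short-volatility rule: take a short volatility position only on predicted "down" days.
pred = clf.predict(X[test])
sell_signal = pred == 0
print("days with a short-volatility position:", int(sell_signal.sum()))
```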