• Title/Summary/Keyword: Performance Distribution


The Role of Radiation Therapy in the Treatment of Intracranial Glioma : Retrospective Analysis of 96 Cases (뇌 교종 96예에 대한 방사선치료 성적의 후향적 분석)

  • Kim Yeon Sil;Kang Ki Mun;Choi Byung Ock;Yoon Sei Chul;Shinn Kyung Sub;Kang Jun Gi
    • Radiation Oncology Journal / v.11 no.2 / pp.249-258 / 1993
  • Between March 1983 and December 1989, ninety-six patients with intracranial glioma were treated in the Department of Therapeutic Radiology, Kangnam St. Mary's Hospital, Catholic University Medical College. We retrospectively reviewed each case to evaluate the factors influencing the treatment results and to develop an optimal therapy policy. Median follow-up was 57 months (range: 31~133 months). Of the 96 patients, 60 (63%) were males and 36 (37%) were females. Ages ranged from 3 to 69 years (median 42 years). The most common presenting symptoms were headache (67%), followed by cerebral motor and sensory discrepancy (54%), nausea and vomiting (34%), seizure (19%), mental change (10%), and memory and calculation impairment (8%). Eighty-five patients (88.5%), all except 11 (11.5%) with brain stem lesions, had biopsy-proven intracranial glioma. The distribution by histologic type was 64 astrocytomas (75%), 4 mixed oligoastrocytomas (5%), and 17 oligodendrogliomas (20%). Forty-nine patients (58%) had grade I or II histology and 36 (42%) had grade III or IV histology. Of the 96 patients, 64 (67%) received postoperative RT and 32 (33%) were treated with primary radiotherapy. Gross total resection was performed in 14 (16%) patients, subtotal resection in 29 (34%), partial resection in 21 (25%), and biopsy only in 21 (25%). Median survival time was 53 months (range 2~133 months), and the 2- and 5-year survival rates were 69% and 49%, respectively. The 5-year survival rate by histologic grade was 70% for grade I, 58% for grade II, 28% for grade III, and 15% for grade IV. Multivariate analysis demonstrated that age at diagnosis (p=0.0121), Karnofsky Performance Status (KPS) (p=0.0002), histologic grade (p=0.0001), postoperative radiation therapy (p=0.0278), surgical extent (p=0.024), and cerebellar location of tumor (p=0.0095) were significant prognostic factors influencing survival.
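
The survival figures above (2- and 5-year rates, multivariate prognostic factors) correspond to standard Kaplan-Meier and Cox proportional hazards analyses. Below is a minimal sketch of how such an analysis might be reproduced with the `lifelines` library; the file and column names (`months`, `died`, `age`, `kps`, `grade`) are hypothetical placeholders, not the paper's actual dataset.

```python
# Sketch only: Kaplan-Meier curve and Cox regression on a hypothetical glioma cohort.
# Column names (months, died, age, kps, grade) are illustrative, not from the paper.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("glioma_cohort.csv")  # hypothetical file: one row per patient

# Overall survival curve (basis for 2- and 5-year survival rates)
km = KaplanMeierFitter()
km.fit(durations=df["months"], event_observed=df["died"])
print("2-year survival:", km.predict(24))
print("5-year survival:", km.predict(60))

# Multivariate Cox model for prognostic factors (age, KPS, histologic grade)
cph = CoxPHFitter()
cph.fit(df[["months", "died", "age", "kps", "grade"]],
        duration_col="months", event_col="died")
cph.print_summary()  # hazard ratios and p-values per covariate
```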


Trends of Assessment Research in Science Education (과학 교육에서의 평가 연구 동향)

  • Chung, Sue-Im;Shin, Dong-Hee
    • Journal of The Korean Association For Science Education / v.36 no.4 / pp.563-579 / 2016
  • This study seeks educational implications by analyzing research papers dealing with science assessment published in Korea over the most recent 30 years. The main purpose of the study is to analyze the trends in published papers on science assessment, their purpose, methodology, and keywords, concentrating especially on the cognitive and affective domains. We selected 273 research articles and categorized them by research object, subject, methodology, and contents. To examine the factors that affect the research trend, we also tried to contextualize the papers' themes in terms of changes in the national curriculum and assessment system during the contemporary period. As a result, the overall research trend reflects changes in the science curriculum and in assessment events such as the implementation of the college scholastic ability test or performance assessment. There is an unequal distribution across various aspects of the research, with cognitive domains receiving more attention than affective ones. Quantitative studies, which use standardized data obtained through national and international assessments of educational achievement in science, outnumber qualitative ones. Studies on the cognitive domain use a variety of written and performance-based tests, whereas most studies of the affective domain rely on written tests. Applied research and evaluation research predominate over basic research, and most of the research methodology is based on statistics. Lastly, we found that keywords and subjects tend to become more subdivided and detailed rather than general and comprehensive as time goes on. These trends will be helpful in elaborating and refining assessment tools, which have been regarded as a problem.

Recent Progress in Air-Conditioning and Refrigeration Research: A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2008 (설비공학 분야의 최근 연구 동향: 2008년 학회지 논문에 대한 종합적 고찰)

  • Han, Hwa-Taik;Choi, Chang-Ho;Lee, Dae-Young;Kim, Seo-Young;Kwon, Yong-Il;Choi, Jong-Min
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.21 no.12 / pp.715-732 / 2009
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2008. It is intended to understand the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. Conclusions are as follows. (1) Research trends in thermal and fluid engineering have been surveyed in the categories of general fluid flow, fluid machinery and piping, new and renewable energy, and fire. Well-developed CFD technologies were widely applied in developing facilities and their systems. New research topics include fire, fuel cells, and solar energy. Research was mainly focused on flow distribution and optimization in the fields of fluid machinery and piping. Topics related to the development of fans and compressors had been popular but were no longer investigated widely. Research papers on micro heat exchangers using nanofluids and on micro pumps were also not presented during this period. There were some studies on thermal reliability and performance in the field of new and renewable energy. Numerical simulations of smoke ventilation and the spread of fire were the main topics in the field of fire. (2) Research works on heat transfer presented in 2008 have been reviewed in the categories of heat transfer characteristics, industrial heat exchangers, and ground heat exchangers. Research on heat transfer characteristics included thermal transport in cryogenic vessels, dish solar collectors, radiative thermal reflectors, variable conductance heat pipes, and flow condensation and evaporation of refrigerants. In the area of industrial heat exchangers, research on micro-channel plate heat exchangers, liquid-cooled cold plates, fin-tube heat exchangers, and the frost behavior of heat exchanger fins was examined. Measurements of ground thermal conductivity and of the thermal diffusion characteristics of ground heat exchangers were reported. (3) In the field of refrigeration, many studies were presented on simultaneous heating and cooling heat pump systems. Switching between various operation modes and optimizing the refrigerant charge were considered in this research. Studies of heat pump systems using unutilized energy sources such as sewage water and river water were reported. Evaporative cooling was studied both theoretically and experimentally as a potential alternative to conventional methods. (4) Research papers on building facilities have been reviewed and divided into studies on heat and cold sources, air conditioning and air cleaning, ventilation, automatic control of heat sources with piping systems, and sound reduction in hydraulic turbine dynamo rooms. In particular, efficient and effective uses of energy, resulting in reduced environmental pollution and operating costs, were considered. (5) In the field of building environments, many studies focused on health and comfort. Ventilation system performance was considered important in improving indoor air conditions. Due to high oil prices, various tests were planned to examine building energy consumption and to cut life cycle costs.

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.161-177 / 2015
  • With the rapid evolution of technology, the size, number, and type of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns from agricultural product wholesale transaction records. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. The demand for agricultural products continues to grow, and the use of electronic auction systems raises the efficiency of wholesale market operations. The number of unusual transactions can also be assumed to increase in proportion to the trading volume, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions and the corresponding fraud occurring in the agricultural product wholesale market because the types of fraud are more sophisticated than ever before. Fraud can be detected by verifying the overall transaction records manually, but this requires a significant amount of human resources and ultimately is not a practical approach. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale fraud because it is committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously and efforts made to prevent fraud, because fraud not only disturbs the fair trade order of the market but also rapidly reduces the credibility of the market. Applying data mining to such an environment is very useful since it can properly discover unknown fraud patterns or features from a large volume of transaction data. The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data mining based fraud detection model. One of the major frauds is the phantom transaction, a colluding transaction by the seller (auction company or forwarder) and buyer (intermediary wholesaler). They pretend to fulfill the transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to the overstatement of sales performance and to illegal money transfers, which reduce the credibility of the market. This paper reviews the environment of the wholesale market, including the types of transactions, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of the following four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final model, built with a decision-tree induction approach, revealed that the monthly average trading price of items offered by forwarders is a key variable in detecting phantom transactions. The verification procedure also confirmed the suitability of the results.
However, even though the performance of the model is satisfactory, sensitive issues still remain for improving classification accuracy and the conciseness of rules. One such issue is the robustness of the data mining model. Data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or situations. It is therefore evident that this non-robustness requires continuous remodeling as data or situations change. We hope that this paper suggests valuable guidelines to organizations and companies that consider introducing or constructing a fraud detection model in the future.
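
The four-module process described above ends with a decision-tree classifier verified by hit ratio. The following is a minimal sketch of that classification step with scikit-learn; the CSV file and feature names such as `monthly_avg_price` are hypothetical stand-ins for the paper's variables.

```python
# Sketch only: decision-tree induction for phantom-transaction detection.
# Feature names and the CSV file are hypothetical; the paper's real variables differ.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

df = pd.read_csv("wholesale_transactions.csv")
X = df[["monthly_avg_price", "quantity", "unit_price", "settlement_lag_days"]]
y = df["is_phantom"]  # 1 = phantom (fraudulent) transaction, 0 = normal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50, random_state=0)
tree.fit(X_train, y_train)

# "Hit ratio" style verification on held-out transactions
print("hit ratio:", accuracy_score(y_test, tree.predict(X_test)))
# Inspect the induced rules, e.g. splits on monthly average trading price
print(export_text(tree, feature_names=list(X.columns)))
```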

The KMA Global Seasonal forecasting system (GloSea6) - Part 2: Climatological Mean Bias Characteristics (기상청 기후예측시스템(GloSea6) - Part 2: 기후모의 평균 오차 특성 분석)

  • Hyun, Yu-Kyung;Lee, Johan;Shin, Beomcheol;Choi, Yuna;Kim, Ji-Yeong;Lee, Sang-Min;Ji, Hee-Sook;Boo, Kyung-On;Lim, Somin;Kim, Hyeri;Ryu, Young;Park, Yeon-Hee;Park, Hyeong-Sik;Choo, Sung-Ho;Hyun, Seung-Hwon;Hwang, Seung-On
    • Atmosphere / v.32 no.2 / pp.87-101 / 2022
  • In this paper, the performance improvement for the new KMA's Climate Prediction System (GloSea6), which has been built and tested in 2021, is presented by assessing the bias distribution of basic variables from 24 years of GloSea6 hindcasts. Along with the upgrade from GloSea5 to GloSea6, the performance of GloSea6 can be regarded as notable in many respects: improvements in (i) negative bias of geopotential height over the tropical and mid-latitude troposphere and over polar stratosphere in boreal summer; (ii) cold bias of tropospheric temperature; (iii) underestimation of mid-latitude jets; (iv) dry bias in the lower troposphere; (v) cold tongue bias in the equatorial SST and the warm bias of Southern Ocean, suggesting the potential of improvements to the major climate variability in GloSea6. The warm surface temperature in the northern hemisphere continent in summer is eliminated by using CDF-matched soil-moisture initials. However, the cold bias in high latitude snow-covered area in winter still needs to be improved in the future. The intensification of the westerly winds of the summer Asian monsoon and the weakening of the northwest Pacific high, which are considered to be major errors in the GloSea system, had not been significantly improved. However, both the use of increased number of ensembles and the initial conditions at the closest initial dates reveals possibility to improve these biases. It is also noted that the effect of ensemble expansion mainly contributes to the improvement of annual variability over high latitudes and polar regions.
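
The bias assessment described here reduces, in essence, to subtracting an observed climatology from the hindcast climatology for each variable. A minimal sketch with xarray is given below; the file names, the variable name `t2m`, and the dimension names are assumptions, since the abstract does not specify the layout of the GloSea6 hindcast archive or the reference dataset.

```python
# Sketch only: climatological mean bias of a hindcast against a reference dataset.
# File names, variable name ("t2m"), and dimension names are assumptions.
import xarray as xr

hindcast = xr.open_dataset("glosea6_hindcast_1993_2016.nc")   # dims: (init, member, time, lat, lon)
reference = xr.open_dataset("reference_climatology_1993_2016.nc")  # dims: (time, lat, lon)

# Climatological means: average over ensemble members, initializations, and years
model_clim = hindcast["t2m"].mean(dim=["member", "init", "time"])
obs_clim = reference["t2m"].mean(dim="time")

bias = model_clim - obs_clim  # positive = warm bias, negative = cold bias
print(bias.mean().item(), "K mean bias over the domain")
bias.to_netcdf("t2m_mean_bias.nc")
```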

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not yet benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given set of vocabulary, and the aim is to match the vocabulary to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Keyword extraction algorithms thus classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and therefore cannot be generated through keyword extraction algorithms. Our preliminary experimental result also shows that 37% of author-assigned keywords are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, which is a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and, indeed, has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
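
The five-step assignment process (keyword-set vectors, a term-frequency document vector, cosine similarity, top-scoring sets) can be illustrated compactly. The sketch below is a simplified illustration of that idea, not the authors' actual IVSM implementation; the keyword-set weights, the tokenizer, and the example document are made up.

```python
# Sketch only: assigning keywords by cosine similarity between a document vector
# and keyword-set vectors (inverse vector space model idea). Data are hypothetical.
import math
from collections import Counter

# (1) Keyword sets with per-term weights (illustrative values)
keyword_sets = {
    "logistics":   {"shipping": 0.9, "port": 0.7, "distribution": 0.6},
    "data mining": {"classification": 0.8, "clustering": 0.7, "pattern": 0.5},
}

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# (2)-(3) Preprocess the target document and build a term-frequency vector
document = "Container shipping volume at the port grew, improving distribution networks."
doc_vec = Counter(w.strip(".,").lower() for w in document.split())

# (4)-(5) Rank keyword sets by similarity and emit the best matches
scores = {name: cosine(vec, doc_vec) for name, vec in keyword_sets.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```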

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions, but more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the lowest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
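
The core of the methodology is a gravity model whose friction factors are calibrated until the modeled trip length frequency matches the observed OD TLF. Below is a minimal sketch of one doubly-constrained gravity distribution step in numpy, with made-up zone data standing in for the Wisconsin network; the exponential friction factor form and the parameter value are assumptions for illustration.

```python
# Sketch only: one gravity-model trip distribution with an exponential friction factor.
# Productions, attractions, and the travel-time matrix are made-up illustrative data.
import numpy as np

productions = np.array([1200.0, 800.0, 400.0])   # truck trip productions per zone
attractions = np.array([900.0, 700.0, 800.0])    # truck trip attractions per zone
travel_time = np.array([[5.0, 20.0, 40.0],
                        [20.0, 5.0, 25.0],
                        [40.0, 25.0, 5.0]])       # zone-to-zone impedance

def gravity(productions, attractions, impedance, beta=0.1, iters=50):
    """Doubly-constrained gravity model with friction factor F = exp(-beta * t)."""
    friction = np.exp(-beta * impedance)
    a = np.ones_like(productions)   # row balancing factors
    b = np.ones_like(attractions)   # column balancing factors
    for _ in range(iters):
        a = 1.0 / (friction @ (b * attractions))
        b = 1.0 / (friction.T @ (a * productions))
    return (a * productions)[:, None] * (b * attractions)[None, :] * friction

trips = gravity(productions, attractions, travel_time)
print(trips.round(1))        # zone-to-zone truck trip table
print(trips.sum(axis=1))     # approximately reproduces the input productions
```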


Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since the 1987 Black Monday, stock market prices have become very complex and have shown a great deal of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index. This index comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations. We used 1,187 days to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models are estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the statistical metric MSE shows better results for the asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility. The polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if tomorrow's forecasted volatility is lower, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable trading percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return and the SVR-based symmetric S-GARCH shows a +526.4% return. The MLE-based asymmetric E-GARCH shows a -72% return and the SVR-based asymmetric E-GARCH shows a +245.6% return. The MLE-based asymmetric GJR-GARCH shows a -98.7% return and the SVR-based asymmetric GJR-GARCH shows a +126.3% return.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4% and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
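
Substituting SVR for MLE in GARCH estimation amounts to learning the conditional variance recursion from lagged variance proxies. The sketch below shows one common way to set this up with scikit-learn's SVR and the three kernels mentioned above; the feature construction, the returns file, and the rolling-variance proxy are assumptions and may differ from the authors' exact formulation.

```python
# Sketch only: an SVR stand-in for the GARCH(1,1) variance equation.
# The feature construction is one common setup, not necessarily the paper's.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

returns = np.loadtxt("kospi200_returns.txt")       # hypothetical daily log returns
sq = returns ** 2                                   # squared returns as variance proxy

# GARCH(1,1)-style features: lagged squared return and a lagged rolling variance proxy
roll_var = np.convolve(sq, np.ones(5) / 5, mode="same")
X = np.column_stack([sq[:-1], roll_var[:-1]])
y = sq[1:]

split = len(y) - 300                                # last 300 days held out, as in the paper
models = {k: SVR(kernel=k) for k in ("linear", "poly", "rbf")}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print(name, "test MSE:", mean_squared_error(y[split:], pred))
```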

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research / v.17 no.1 / pp.37-63 / 2012
  • In the franchise business, exclusive sales territory (EST) protection is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises issues of social and political conflict. When the franchisee is not familiar with the related laws and regulations, the franchisor has a high chance of exploiting this. Exclusive sales territory protection by the manufacturer and distributors (wholesalers or retailers) means a sales area restriction by which only certain distributors have the right to sell products or services. A distributor who has been granted an exclusive sales territory can protect its own territory but may be prohibited from entering other regions. Even though exclusive sales territory is a quite critical problem in the franchise business, there is not much rigorous research on its reasons, results, evaluation, and future direction based on empirical data. This paper tries to address this problem not only in terms of logical and nomological validity but also through empirical validation. In pursuing an empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission instead of the conventional survey method, which is usually criticized for its measurement error. Existing theories about exclusive sales territory can be summarized into two groups. The first concerns the effectiveness of exclusive sales territory from both the franchisor and franchisee points of view. In fact, the outcome of exclusive sales territory can be positive for franchisors but negative for franchisees; it can also be positive in terms of sales but negative in terms of profit. Therefore, variables and viewpoints should be set properly. The second concerns the motive or reason why exclusive sales territory is protected. The reasons can be classified into four groups: industry characteristics, franchise system characteristics, capability to maintain exclusive sales territory, and strategic decision. Within these four groups of reasons, there are more specific variables and theories. Based on these theories, we develop nine hypotheses. In order to validate the hypotheses, data are collected from the government (FTC) homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operation data, from 2006 to 2008. Within the sample, 627 have an exclusive sales territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data are also collected from other government agency homepages, such as Statistics Korea. We also combine data from various secondary sources to create meaningful variables. All variables are dichotomized by mean or median split if they are not inherently dichotomized by definition, since each hypothesis is composed of multiple variables and there is no solid statistical technique that incorporates all of these conditions to test the hypotheses. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions such as industry type, economic condition, company history, and various strategic purposes.
It is almost impossible to find samples that satisfy all of those conditions, and they cannot be manipulated in experimental settings. More advanced statistical techniques work well on clean data without exogenous variables but not on real, complex data. The chi-square test is applied by grouping the samples into four cells using two criteria: whether they use exclusive sales territory protection or not, and whether they satisfy the conditions of each hypothesis. The test then examines whether the proportion of sample franchisors that satisfy the conditions and protect the exclusive sales territory significantly exceeds the proportion of samples that satisfy the conditions but do not protect it. In fact, the chi-square test is equivalent to Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When the attitude toward risk is high, so that the royalty fee is determined according to sales performance, EST protection produces poor results, as expected. When the franchisor protects the EST in order to recruit franchisees easily, EST protection produces better results. Also, when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance. High efficiency is achieved because the EST prevents free riding by franchisees who exploit others' marketing efforts, encourages proper investment, and distributes franchisees evenly across multiple regions. The other hypotheses are not supported by the significance tests. Exclusive sales territories should be protected with proper motives and administered for mutual benefit. Legal restrictions driven by government agencies like the FTC could be misused and cause misunderstandings, so more careful monitoring of real practices and more rigorous studies by both academics and practitioners are needed.
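
The test logic described above (a 2x2 grouping of "protects EST or not" against "satisfies the hypothesis condition or not") maps directly onto a contingency-table chi-square test. A minimal sketch with scipy follows; the cell counts are invented placeholders, not figures from the 1,896-franchisor sample.

```python
# Sketch only: 2x2 chi-square test of independence for one hypothesis.
# The cell counts are invented; the real figures come from the FTC disclosure data.
import numpy as np
from scipy.stats import chi2_contingency

#                  protects EST   does not protect EST
table = np.array([[210,           417],     # satisfies hypothesis condition
                  [305,           964]])    # does not satisfy condition

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")

# Proportion protecting EST among those that satisfy vs. do not satisfy the condition
prop_satisfy = table[0, 0] / table[0].sum()
prop_other = table[1, 0] / table[1].sum()
print(f"EST protection rate: {prop_satisfy:.2%} vs {prop_other:.2%}")
```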


Antecedents of Manufacturer's Private Label Program Engagement : A Focus on Strategic Market Management Perspective (제조업체 Private Labels 도입의 선행요인 : 전략적 시장관리 관점을 중심으로)

  • Lim, Chae-Un;Yi, Ho-Taek
    • Journal of Distribution Research / v.17 no.1 / pp.65-86 / 2012
  • The 20th century was the era of manufacturer brands, which built higher brand equity for consumers. Consumers moved from the generic products of inconsistent quality produced by local factories in the 19th century to branded products from global manufacturers, and manufacturer brands reached consumers through distributors and retailers. Retailers were relatively small compared to their largest suppliers. However, sometime in the 1970s things began to slowly change as retailers started to develop their own national chains and began international expansion, and consolidation of the retail industry from mom-and-pop stores to global players was well under way (Kumar and Steenkamp 2007, p.2). In South Korea, the bulking up of retailers that started in the middle of the 1990s has changed the balance of power between manufacturers and retailers. Retailer private labels, generally referred to as own labels, store brands, distributors' own private labels, home brands, or own label brands, have also been performing strongly in every single local market (Bushman 1993; De Wulf et al. 2005). Private labels now account for one out of every five items sold every day in U.S. supermarkets, drug chains, and mass merchandisers (Kumar and Steenkamp 2007), and the market share in Western Europe is even larger (Euromonitor 2007). In the UK, the grocery market share of private labels grew from 39% of sales in 2008 to 41% in 2010 (Marian 2010). Planet Retail (2007, p.1) recently concluded that "[PLs] are set for accelerated growth, with the majority of the world's leading grocers increasing their own label penetration." Private labels have gained wide attention both in the academic literature and in the popular business press, and there is a growing body of academic research from the perspectives of manufacturers and retailers. Empirical research on private labels has mainly studied the factors explaining private label market shares across product categories and/or retail chains (Dahr and Hoch 1997; Hoch and Banerji 1993), the factors influencing the private label proneness of consumers (Baltas and Doyle 1998; Burton et al. 1998; Richardson et al. 1996), and how brand manufacturers react to PLs (Dunne and Narasimhan 1999; Hoch 1996; Quelch and Harding 1996; Verhoef et al. 2000). Nevertheless, empirical research on the factors influencing production from a manufacturer-retailer perspective is anecdotal rather than theory-based. The objective of this paper is to bridge the gap between these two types of research and explore the factors that influence a manufacturer's private label production based on two competing theories: the S-C-P (Structure-Conduct-Performance) paradigm and resource-based theory. In order to do so, the authors conducted in-depth interviews with marketing managers, reviewed the retail press and research, and present a conceptual framework that integrates the major determinants of private label production. From a manufacturer's perspective, supplying private labels often starts on a strategic basis. When a manufacturer engages in private labels, the manufacturer does not have to spend on advertising or retailer promotions, or maintain a dedicated sales force. Moreover, if a manufacturer has weak marketing capabilities, it can make use of the retailer's marketing capability to produce private labels, lessen its marketing cost, and increase its profit margin.
Figure 1 presents the theoretical framework, based on a strategic market management perspective that integrates both the S-C-P paradigm and resource-based theory. The model includes one mediating variable, marketing capabilities, and one moderating variable, competitive intensity. The manufacturer's national brand reputation, the firm's marketing investment, and its product portfolio are hypothesized to positively affect the manufacturer's marketing capabilities. Marketing capabilities, in turn, are hypothesized to negatively affect private label production. Moderating effects of competitive intensity are hypothesized on the relationship between marketing capabilities and private label production. To verify the proposed research model and hypotheses, data were collected from 192 manufacturers (212 responses) producing private labels in South Korea. Cronbach's alpha tests, exploratory/confirmatory factor analysis, and correlation analysis were employed to validate the hypotheses. The following results were drawn using structural equation modeling, and all hypotheses are supported. The findings indicate that a manufacturer's private label production is strongly related to its marketing capabilities. Marketing capabilities, in turn, are directly connected with the three strategic factors (marketing investment, the manufacturer's national brand reputation, and product portfolio). The relationship between marketing capabilities and private label production is moderated by competitive intensity. In conclusion, this research may be the first study to investigate the reasons manufacturers engage in private labels based on two competing theoretical views, the S-C-P paradigm and resource-based theory. The private label phenomenon has received growing attention from marketing scholars. In many industries, private labels represent formidable competition to manufacturer brands, and manufacturers face a dilemma in selling to as well as competing with their retailers. The current study suggests key factors for manufacturers to consider when engaging in private label production.
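
The hypothesized structure (three antecedents feeding marketing capabilities, which in turn affects private label production, moderated by competitive intensity) was tested with structural equation modeling. A rough regression-based approximation of the mediation and moderation paths can be sketched with statsmodels as below; the column names are hypothetical survey scales and this is not the authors' SEM specification.

```python
# Sketch only: regression approximation of the mediation/moderation structure.
# Column names are hypothetical survey scales, not the paper's measurement items.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("manufacturer_survey.csv")

# Path 1: antecedents -> marketing capabilities (mediator)
m1 = smf.ols("mktg_capability ~ brand_reputation + mktg_investment + product_portfolio",
             data=df).fit()

# Path 2: marketing capabilities -> private label production,
# with competitive intensity as a moderator (interaction term)
m2 = smf.ols("pl_production ~ mktg_capability * competitive_intensity", data=df).fit()

print(m1.summary())
print(m2.summary())
```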
