• Title/Summary/Keyword: Manufacturing Error


A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To enhance its competitive advantage in a constantly changing business environment, an enterprise must make the right decisions in many business activities based on both internal and external information. Thus, providing accurate information plays a prominent role in management's decision making. Intuitively, historical data can provide a feasible estimate through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as personnel, parts, and other facilities. In addition, the production department can prepare a load map for improving its product quality. Obtaining an accurate service forecast therefore appears to be critical to manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average models. However, these methods are only effective for data that are seasonal or cyclical; if the data are influenced by the special characteristics of the product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with unsupervised artificial neural network-based clustering analysis (i.e., self-organizing maps; SOM). We believe this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we apply CBR and SOM in a new forecasting domain, namely service demand forecasting; (2) we combine CBR and SOM in order to overcome the limitations of traditional statistical forecasting methods, and we have developed a service forecasting tool based on the proposed approach using an unsupervised artificial neural network and case-based reasoning. In this research, we conducted an empirical study on a real digital TV manufacturer (i.e., Company A) and empirically evaluated the proposed approach and tool using real sales and service-related data from that manufacturer. In our experiments, we explore the performance of our proposed service forecasting framework compared to the performance of two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecasting 144 times; each time, input data were randomly sampled for each service forecasting framework. To evaluate the accuracy of the forecasting results, we used Mean Absolute Percentage Error (MAPE) as the primary performance measure. We conducted a one-way ANOVA test with the 144 measurements of MAPE for the three different service forecasting approaches. The F-ratio of MAPE for the three approaches is 67.25 and the p-value is 0.000, which means that the difference between the MAPE of the three approaches is significant at the 0.000 level. Since there is a significant difference among the forecasting approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE differ significantly from one another.
In terms of MAPE, Tukey's HSD post hoc test grouped the three service forecasting approaches into three different subsets in the following order: our proposed approach > the traditional CBR-based service forecasting approach > the existing forecasting approach used by Company A. Consequently, our empirical experiments show that our proposed approach outperformed both the traditional CBR-based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides research background information such as a summary of CBR and SOM. Section 3 presents a hybrid service forecasting framework based on case-based reasoning and self-organizing maps, while the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
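The cluster-then-retrieve idea behind this framework can be illustrated with a minimal sketch. This is not the authors' implementation: scikit-learn's KMeans stands in for the SOM clustering step, the case base is synthetic, and the forecast for a query period is simply the distance-weighted average of the k most similar historical cases within the query's cluster, scored with MAPE as in the paper.

```python
# Minimal sketch of a cluster-then-retrieve (CBR-style) service demand forecast.
# KMeans stands in for the SOM clustering step; data, k, and cluster count are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 5))                                 # historical case features
y_hist = 100 + 10 * X_hist[:, 0] + rng.normal(scale=5, size=200)   # observed service demand

# 1) Cluster the case base (the paper uses a self-organizing map for this step).
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_hist)

def forecast(x_query, k=5):
    """Retrieve the k nearest cases in the query's cluster; return a distance-weighted mean demand."""
    cluster = km.predict(x_query.reshape(1, -1))[0]
    idx = np.where(km.labels_ == cluster)[0]
    d = np.linalg.norm(X_hist[idx] - x_query, axis=1)
    nearest = idx[np.argsort(d)[:k]]
    w = 1.0 / (np.sort(d)[:k] + 1e-9)
    return np.average(y_hist[nearest], weights=w)

# 2) Evaluate with MAPE, the primary performance measure used in the paper.
X_test = rng.normal(size=(20, 5))
y_test = 100 + 10 * X_test[:, 0] + rng.normal(scale=5, size=20)
y_pred = np.array([forecast(x) for x in X_test])
mape = np.mean(np.abs((y_test - y_pred) / y_test)) * 100
print(f"MAPE: {mape:.1f}%")
```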

The Relationship between Internet Search Volumes and Stock Price Changes: An Empirical Study on KOSDAQ Market (개별 기업에 대한 인터넷 검색량과 주가변동성의 관계: 국내 코스닥시장에서의 산업별 실증분석)

  • Jeon, Saemi;Chung, Yeojin;Lee, Dongyoup
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.81-96 / 2016
  • As the internet has become widespread and easy to access everywhere, it is common for people to search for information via online search engines such as Google and Naver in everyday life. Recent studies have used the online search volume of specific keywords as a measure of internet users' attention in order to predict disease outbreaks such as flu and cancer, the unemployment rate, and indices of a nation's economic condition. For stock traders, web search is also one of the major information resources for obtaining data about individual stock items. Therefore, the search volume of a stock item can reflect the amount of investors' attention on it. Investor attention has been regarded as a crucial factor influencing stock prices, but it has been measured by indirect proxies such as market capitalization, trading volume, and advertising expense. It has been theoretically and empirically shown that an increase of investors' attention on a stock item brings a temporary increase in the stock price, and that the price recovers in the long run. Recent developments in the internet environment make it possible to measure investor attention directly by the internet search volume of an individual stock item, which has been used to show attention-induced price pressure. Previous studies focus mainly on the Dow Jones and NASDAQ markets in the United States. In this paper, we investigate the relationship between individual investors' attention, measured by internet search volumes, and stock price changes of individual stock items in the KOSDAQ market in Korea, where trades by individual investors account for about 90% of the total. In addition, we examine how the influence of investors' attention on stock returns differs between industries. The internet search volumes of stocks were gathered weekly from the "Naver Trend" service between January 2007 and June 2015. A regression model with an AR(1) error covariance structure is used to analyze the data, since the weekly prices of a stock item are serially correlated. The market capitalization, trading volume, the increment of trading volume, and the month in which each trade occurs are included in the model as control variables. The fitted model shows that an abnormal increase in the search volume of a stock item has a positive influence on the stock return, and the size of the influence varies across industries. Stock items in the IT software, construction, and distribution industries are shown to be more influenced by abnormally large internet search volumes than the average across industries. On the other hand, stock items in the IT hardware, manufacturing, entertainment, finance, and communication industries are less influenced by abnormal search volumes than the average. In order to verify the price pressure caused by investors' attention in KOSDAQ, the stock return of the current week is modeled using the abnormal search volume observed one to four weeks earlier. On average, an abnormally large increment of the search volume increased the stock return of the current week and one week later, and decreased the stock return two and three weeks later. There is no significant relationship with the stock return after four weeks. This relationship differs among industries. An abnormal search volume brings particularly severe price reversal for stocks in the IT software industry, which are often targets of irrational investment by individual investors.
An abnormal search volume caused less severe price reversal for stocks in the manufacturing and IT hardware industries than on average across industries. The price reversal was not observed in the communication, finance, entertainment, and transportation industries, which are known to be influenced largely by macro-economic factors such as oil prices and currency exchange rates. The results of this study can be utilized to construct an intelligent trading system based on big data gathered from web search engines, social network services, and internet communities. In particular, the difference in the price reversal effect between industries may provide useful information for constructing a portfolio and building an investment strategy.
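The regression setup described in this abstract can be sketched as follows. This is a simplified, single-stock illustration rather than the authors' exact specification: the abnormal search volume is proxied here as the log deviation from an 8-week trailing median, the data are synthetic, the paper's control variables are omitted, and statsmodels' GLSAR supplies the AR(1) error covariance structure.

```python
# Sketch: weekly stock return regressed on abnormal search volume with AR(1) errors.
# Illustrative data only; the paper's controls (capitalization, trading volume, month) are omitted.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.linear_model import GLSAR

rng = np.random.default_rng(1)
weeks = 200
search = pd.Series(rng.lognormal(mean=3.0, sigma=0.3, size=weeks))

# Abnormal search volume: log of this week's volume relative to its trailing 8-week median.
asv = np.log(search / search.rolling(8).median()).dropna()
ret = 0.02 * asv + rng.normal(scale=0.03, size=len(asv))   # synthetic weekly return

X = sm.add_constant(asv.values)
model = GLSAR(ret.values, X, rho=1)        # rho=1 -> errors follow an AR(1) process
res = model.iterative_fit(maxiter=5)       # alternate between OLS and rho estimation
print(res.params, res.bse)
print("estimated AR(1) rho:", model.rho)
```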

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investors Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty in determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems that require accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk. On the other hand, artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared toward multiclass classification, as required in credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques to extend standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature. However, only a few types of MSVM have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of the techniques to a real-world case of credit rating in Korea.
Corporate bond rating, which rates specific debt issues or other financial obligations, is the most frequently studied area of credit rating. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another, and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
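Several of the multiclass strategies compared in this study map directly onto scikit-learn, which makes it easy to sketch the comparison. This is only an illustration on synthetic data, not the paper's experiment: SVC uses the one-against-one scheme internally, OneVsRestClassifier gives one-against-all, OutputCodeClassifier gives an ECOC variant, and LinearSVC with multi_class='crammer_singer' gives the Crammer-Singer formulation; DAGSVM, the Weston-Watkins method, and the paper's modified ECOC have no off-the-shelf implementation here and are not shown.

```python
# Sketch: comparing several multiclass SVM strategies on synthetic "rating" data.
# Real bond-rating data (financial ratios, 10+ rating classes) would replace make_classification.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.multiclass import OneVsRestClassifier, OutputCodeClassifier
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=1200, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "One-Against-One (SVC default)": SVC(kernel="rbf", C=1.0),
    "One-Against-All": OneVsRestClassifier(SVC(kernel="rbf", C=1.0)),
    "ECOC": OutputCodeClassifier(SVC(kernel="rbf", C=1.0), code_size=2.0, random_state=0),
    "Crammer-Singer": LinearSVC(multi_class="crammer_singer", max_iter=10000),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale financial ratios before the SVM
    pipe.fit(X_tr, y_tr)
    print(f"{name:32s} accuracy = {pipe.score(X_te, y_te):.3f}")
```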

Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference / 2000.06a / pp.16-23 / 2000
  • The increasing functional demands on top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, set new targets and requirements for modern paper quality. Most of these paper grades today have a relatively high filler content, are moderately or heavily calendered, and have many coating layers for the best appearance and performance. In practice, this means that many of the traditional quality assurance methods, mostly designed to measure papers made of pure, native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e., small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter of papermaking due to its influence on practically all the other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible because there always exist relatively large local variations in paper. However, these small-scale basis weight variations are the major reason for many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. The traditionally used visual inspection or optical measurement of the paper does not give a reliable understanding of the material variations in the paper, because in the modern papermaking process the optical behavior of paper is strongly affected by, for example, fillers, dyes, or coating colors. Furthermore, the opacity (optical density) of the paper changes at different process stages such as wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independently of sample basis weight or paper grade. This gives us the possibility to measure, compare, and judge papers made of different raw materials or different colors, and even to measure heavily calendered, coated, or printed papers. Scientific research in paper physics has shown that the orientation of the top-layer (paper surface) fibers of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation at the surface and middle layers of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives us a magnificent tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, resist liquid and gas penetration very strongly, being beyond the measurement range of traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact with the measurement head a challenge of its own.
Paper surface coating creates, as expected, a layer which has completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow under well-controlled conditions, allowing us to investigate the gas penetration of open structures, such as cigarette paper, tissue, or sack paper, and in the low-permeability range to analyze even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even to detect defects in barrier coatings! Even nitrogen or helium may be used as the gas, giving completely new possibilities to rank products or to find correlations with critical process or converting parameters. All modern paper machines include many on-line measuring instruments which are used to give the necessary information for automatic process control systems. Hence, the reliability of the information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate exactly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
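Since formation is defined above as the small-scale variation of basis weight, one common way to summarize it numerically is the coefficient of variation of local basis weight averaged over small areas. The following is a minimal sketch of that calculation on a synthetic basis-weight map; it does not model the beta transmission measurement itself, and the map size, block size, and noise level are illustrative assumptions.

```python
# Sketch: formation expressed as the coefficient of variation of local basis weight.
# A real basis-weight map would come from a beta transmission measurement; here it is synthetic.
import numpy as np

rng = np.random.default_rng(2)
nominal_gsm = 80.0                                              # nominal basis weight, g/m^2
bw_map = nominal_gsm + rng.normal(scale=3.0, size=(256, 256))   # 1 mm x 1 mm local readings

def formation_cov(bw, block=8):
    """Average basis weight over block x block mm areas, then CoV (%) across the blocks."""
    h, w = (bw.shape[0] // block) * block, (bw.shape[1] // block) * block
    blocks = bw[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return 100.0 * blocks.std() / blocks.mean()

print(f"formation CoV over 8 mm blocks: {formation_cov(bw_map):.2f} %")
```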

Numerical Prediction for Fluidized Bed Chlorination Reaction of Ilmenite Ore (일메나이트광의 유동층 염화반응에 대한 수치적 예측)

  • Chung, Dong-Kyu;Jung, Eun-Jin;Lee, Mi Sun;Kim, Jinyoung;Song, Duk-Yong
    • Clean Technology / v.25 no.2 / pp.107-113 / 2019
  • A numerical model that considers the shrinking core model together with the elutriation and degradation of particles was developed to predict the selective chlorination of ilmenite and the carbo-chlorination of TiO2 in a two-stage fluidized bed chlorination furnace. The model makes it possible to analyze the fluidized bed chlorination reaction while reflecting the particle size distribution in the mass balances and the chlorination reaction. The numerical model showed an error of less than 6% compared with fluidized bed experiments. The chlorination degree was greater for smaller particle sizes, and there was a 100 min difference in the time needed to reach a chlorination degree of 1 between 75 μm and 275 μm particles. The effect of temperature variation (800 to 1000 °C) was much smaller: there was only a 10 min difference in the time needed to reach a chlorination degree of 0.9. In the first selective chlorination process, the mass reduction rate approached the theoretical value of 0.4735 after 180 min, and chlorination converted the Fe component into FeCl2 or FeCl3 with a chlorination degree of nearly 1. In the second carbo-chlorination process, the chlorination degree of TiO2 approached 0.98 and the mass fraction reached 0.02 with conversion into TiCl4. In the first selective chlorination process, 98% of the TiO2 was produced at 180 min, and this was converted into 99% TiCl4 after an additional 90 min. Also, the mass reduction rate of TiO2 reached 99% in the second continuous carbo-chlorination process.
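As a point of reference for the particle-size effect reported above, the textbook shrinking core model in the surface-reaction-controlled limit gives 1 - (1 - X)^(1/3) = t/τ, where X is the conversion and the complete-conversion time τ is proportional to the particle size. The sketch below evaluates this relation for two particle sizes; it is only the reaction-controlled limit, it omits the elutriation and degradation terms of the paper's model, and the proportionality constant is made up.

```python
# Sketch: reaction-controlled shrinking core model, conversion X(t) for two particle sizes.
# 1 - (1 - X)^(1/3) = t / tau, with tau proportional to particle size (illustrative constant).
import numpy as np

def conversion(t_min, diameter_um, tau_per_um=1.5):
    """tau_per_um is an illustrative constant (min per um of particle diameter)."""
    tau = tau_per_um * diameter_um               # time for complete conversion, minutes
    frac = np.minimum(t_min / tau, 1.0)
    return 1.0 - (1.0 - frac) ** 3

t = np.arange(0, 301, 50)                        # minutes
for d in (75, 275):
    print(f"d = {d:3d} um, X(t) =", np.round(conversion(t, d), 2))
```

With τ linear in particle size, the smaller particle reaches full conversion much earlier, which is qualitatively the behavior reported in the abstract.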

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. Link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33%, respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35%, respectively, whereas the average ground counts are 481, 1383, 1532, and 3154, respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
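The core of the gravity model and SELINK steps described above can be sketched compactly: trips from zone i to zone j are proportional to the origin's productions, the destination's attractions, and a friction factor that decays with travel cost, and the SELINK step rescales productions and attractions by the ratio of a link's ground count to its assigned volume. The example below is a toy, production-constrained gravity model with an exponential friction factor plus a %RMSE check against made-up ground counts; it is not WisDOT's calibrated model, and all numbers are illustrative.

```python
# Sketch: production-constrained gravity model with an exponential friction factor,
# plus a link adjustment factor and %RMSE check of the kind used in the SELINK analysis.
import numpy as np

P = np.array([500., 300., 200.])                 # zonal truck trip productions (toy values)
A = np.array([400., 400., 200.])                 # zonal attractions
cost = np.array([[5., 20., 35.],                 # zone-to-zone travel times (min)
                 [20., 5., 15.],
                 [35., 15., 5.]])

def gravity(P, A, cost, beta=0.08):
    """T_ij = P_i * A_j * exp(-beta * c_ij) / sum_j A_j * exp(-beta * c_ij)."""
    F = np.exp(-beta * cost)                     # friction factors
    return P[:, None] * (A * F) / (A * F).sum(axis=1, keepdims=True)

T = gravity(P, A, cost)
print("trip table:\n", np.round(T, 1))

# Link adjustment factor: ground count / assigned volume for each "selected link".
assigned = np.array([620., 480., 250.])          # assigned link volumes (toy)
ground   = np.array([600., 515., 230.])          # ground counts (toy)
adj = ground / assigned
rmse_pct = 100 * np.sqrt(np.mean((assigned - ground) ** 2)) / ground.mean()
print("link adjustment factors:", np.round(adj, 3), " %RMSE:", round(rmse_pct, 1))
```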
