• Title/Summary/Keyword: error reduction

Search Results: 1,415

On the Vibration Influence on Running Power Plant Facilities When the Foundation Is Excavated by Cautious Blasting Works (노천굴착에서 발파진동의 크기를 감량 시키기 위한 정밀파실험식)

  • Huh Ginn
    • Explosives and Blasting
    • /
    • v.9 no.1
    • /
    • pp.3-13
    • /
    • 1991
  • The cautious blasting works were carried out with emulsion explosives and electric MS delay caps. Drill depth ranged from 3 m to 6 m with a Crawler Drill (${\phi}$70 mm) on calcareous sandstone (soft to moderate to semi-hard rock). The total number of test blasts was 88, and the scaled distances ranged from 15.52 to 60.32. The propagation law for blasting vibration was applied as follows: $V = K\left(\frac{D}{W^b}\right)^{-n}$, where V is the peak particle velocity (cm/sec); D is the distance between the explosion and recording sites (m); W is the maximum charge per delay period of eight milliseconds or more (kg); K is the ground transmission constant, empirically determined by the rock, explosive, drilling pattern, etc.; b is the charge exponent; and n is the reduction exponent. The quantity $\frac{D}{W^b}$ is known as the scaled distance. This equation was developed by the U.S. Bureau of Mines to determine peak particle velocity. The propagation law can be categorized into three groups: cube-root scaling of charge per delay, square-root scaling of charge per delay, and site-specific scaling of charge per delay. Plots of peak particle velocity versus distance were made on log-log coordinates, and the data were grouped by test and peak particle velocity. The linear grouping of the data permits their representation by an equation of the form $V = K\left(\frac{D}{W^{1/3}}\right)^{-n}$. The values of K (41 or 124) and n (1.41 or 1.66) were determined for each set of data by the method of least squares. Statistical tests showed that a common slope, n, could be used for all data of a given component. The charge and reduction exponents were obtained by multiple regression analysis. The data were divided into under-100 m and over-100 m ranges because the dominant frequency varies with the distance from the blast site. The empirical equations for cautious blasting vibration are as follows: over 30 m and under 100 m, $V = 41\left(D/\sqrt{W}\right)^{-1.41}$ (A); over 100 m, $V = 121\left(D/\sqrt[3]{W}\right)^{-1.66}$ (B), where V is the peak particle velocity in cm/sec, D is the distance in m, and W is the maximum charge weight per delay in kg. The K value in the above equations needs to be specified more precisely; to further understand the effects of explosives, rock strength, and drilling pattern on vibration levels, it is necessary to carry out more tests. (A numerical sketch of the propagation law follows this entry.)

  • PDF
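
As a quick illustration of the propagation law above, here is a minimal Python sketch that evaluates $V = K(D/W^b)^{-n}$ with the two parameter sets reported in the abstract (K = 41, n = 1.41 with square-root scaling for 30-100 m; K = 121, n = 1.66 with cube-root scaling beyond 100 m). The distances and charge weights in the example are illustrative values, not data from the paper.

```python
def peak_particle_velocity(distance_m, charge_kg, k, b, n):
    """USBM propagation law V = K * (D / W**b) ** (-n).

    distance_m -- distance D between blast and recording site (m)
    charge_kg  -- maximum charge W per 8 ms delay period (kg)
    k, b, n    -- site-specific constants from the least-squares fit
    Returns the peak particle velocity in cm/s.
    """
    scaled_distance = distance_m / charge_kg ** b
    return k * scaled_distance ** (-n)

# Fits reported in the abstract: square-root scaling for 30-100 m (eq. A),
# cube-root scaling beyond 100 m (eq. B); 50 m, 150 m, and 10 kg are made up.
v_near = peak_particle_velocity(50.0, 10.0, k=41.0, b=0.5, n=1.41)
v_far = peak_particle_velocity(150.0, 10.0, k=121.0, b=1.0 / 3.0, n=1.66)
print(f"PPV at 50 m: {v_near:.2f} cm/s; at 150 m: {v_far:.3f} cm/s")
```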

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM achieves in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing the classification accuracy of the classifier. SVM ensemble learning is one machine learning approach to coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus Boosting attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor; in this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take into account the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results are obtained for each classifier on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performances of the classifiers across the 30 folds differ significantly. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
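
The evaluation protocol described above (10-fold cross-validation repeated three times with different seeds, scored by both arithmetic and geometric mean accuracy) can be sketched as follows. This is not the paper's MGM-Boost implementation; it reproduces only the comparison harness, reading geometric mean-based accuracy as the geometric mean of per-class recalls, with off-the-shelf scikit-learn classifiers as stand-ins.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls; unlike arithmetic accuracy,
    it collapses toward zero when a minority class is largely missed."""
    cm = confusion_matrix(y_true, y_pred)
    recalls = np.diag(cm) / cm.sum(axis=1)
    return float(recalls.prod() ** (1.0 / len(recalls)))

def repeated_cv(model, X, y, n_repeats=3, n_splits=10):
    """3 x 10-fold cross-validation: 30 (arithmetic, geometric) fold scores."""
    scores = []
    for seed in range(n_repeats):
        folds = StratifiedKFold(n_splits=n_splits, shuffle=True,
                                random_state=seed)
        for train_idx, test_idx in folds.split(X, y):
            model.fit(X[train_idx], y[train_idx])
            pred = model.predict(X[test_idx])
            scores.append((accuracy_score(y[test_idx], pred),
                           geometric_mean_accuracy(y[test_idx], pred)))
    return np.asarray(scores)

# Usage, given a feature matrix X and bond-rating labels y:
#   svm_scores = repeated_cv(SVC(), X, y)
#   ada_scores = repeated_cv(AdaBoostClassifier(), X, y)
# A paired t-test over the 30 fold scores (scipy.stats.ttest_rel) then
# mirrors the significance comparison reported in the study.
```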

Usefulness of an Abdominal Compressor in Stereotactic Body Radiotherapy for Hepatocellular Carcinoma Patients Using Tomotherapy (토모테라피를 이용한 간암환자의 정위적 방사선치료시 복부압박장치의 유용성 평가)

  • Woo, Joong-Yeol;Kim, Joo-Ho;Kim, Joon-Won;Baek, Jong-Geal;Park, Kwang-Soon;Lee, Jong-Min;Son, Dong-Min;Lee, Sang-Kyoo;Jeon, Byeong-Chul;Cho, Jeong-Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.24 no.2
    • /
    • pp.157-165
    • /
    • 2012
  • Purpose: We evaluated the usefulness of an abdominal compressor for stereotactic body radiotherapy (SBRT) in patients with unresectable hepatocellular carcinoma (HCC), hepato-biliary cancer, and metastatic liver cancer. Materials and Methods: From November 2011 to March 2012, we selected HCC patients who achieved a reduction in diaphragm movement of more than 1 cm with an abdominal compressor (Diaphragm Control, Elekta, Sweden) for helical tomotherapy (Hi-Art Tomotherapy, USA). Planning computed tomography (CT) images and four-dimensional (4D) images were acquired with a 4D CT scanner (Somatom Sensation, Siemens, Germany). The gross tumor volume (GTV) included the gross tumor plus margins accounting for tumor movement. The planning target volume (PTV) added a 5 to 7 mm safety margin around the GTV. Patients were classified into two groups according to the distance between the tumor and the organs at risk (OAR: stomach, duodenum, bowel). Patients with a distance of 1 cm or more formed the first group and received SBRT in 4 or 5 fractions; patients with a distance of less than 1 cm formed the second group and received tomotherapy in 20 fractions. Megavoltage computed tomography (MVCT) was performed at 4 or 10 of the fractions, and the MVCT fusion was verified giving priority to the liver rather than to the bone-matching technique. The MVCT images were sent to MIM Vista (MIM Software, ver. 5.4, USA), where the stomach, duodenum, and bowel were re-delineated as bowel_organ and the liver was delineated. First, the MVCT images were analyzed to check the setup variation. Second, the dose differences between the tumor and the OAR were compared on the basis of the adaptive dose, using the adaptive planning station and MIM Vista. Results: The average setup variation from MVCT was $-0.66\pm1.53$ mm (left-right), $0.39\pm4.17$ mm (superior-inferior), $0.71\pm1.74$ mm (anterior-posterior), and $-0.18\pm0.30$ degrees (roll); the first group ($d\geq1$ cm) and the second group (d<1 cm) showed similar setup variations. For the first group, $V_{diff3\%}$ (the volume with a 3% dose difference) from the adaptive planning station was $0.78\pm0.05\%$ for the GTV and $9.97\pm3.62\%$ for the PTV; $V_{diff5\%}$ was 0.0% for the GTV and $2.9\pm0.95\%$ for the PTV; and the maximum dose difference rate of bowel_organ was $-6.85\pm1.11\%$. For the second group, $V_{diff3\%}$ was $1.62\pm0.55\%$ for the GTV and $8.61\pm2.01\%$ for the PTV; $V_{diff5\%}$ was 0.0% for the GTV and $5.33\pm2.32\%$ for the PTV; and the maximum dose difference rate of bowel_organ was $28.33\pm24.41\%$. Conclusion: Although fluoroscopy showed diaphragm movement of more than 5 mm after use of the abdominal compressor, the average setup variation from MVCT was less than 5 mm, so the setup error could be estimated to lie within 5 mm. The dose difference rates for the target were similar in the two groups, whereas the maximum dose difference rates of bowel_organ differed between the groups by more than 35%, with the first group's being smaller than the second's. When applying SBRT to HCC, an abdominal compressor is useful for controlling diaphragm movement in selected patients with a tumor-to-bowel_organ distance of more than 1 cm. (A sketch of the $V_{diff}$ computation follows this entry.)

  • PDF
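
For readers unfamiliar with the $V_{diff}$ notation in the abstract above, the metric can be read as simple voxel counting on two dose grids. The sketch below assumes dose arrays already masked to a single structure and normalizes the difference by the maximum planned dose; the abstract does not spell out its normalization, so treat this only as one plausible reading.

```python
import numpy as np

def v_diff_percent(dose_planned, dose_recalc, threshold_pct=3.0):
    """Percentage of a structure's voxels whose recalculated (adaptive) dose
    differs from the planned dose by more than threshold_pct.

    Both inputs are 1-D arrays of voxel doses for one structure; the
    difference is expressed relative to the maximum planned dose, which is
    an assumption rather than the paper's stated definition.
    """
    rel_diff_pct = 100.0 * np.abs(dose_recalc - dose_planned) / dose_planned.max()
    return 100.0 * float(np.mean(rel_diff_pct > threshold_pct))

# v_diff_percent(gtv_planned, gtv_adaptive, 3.0) would correspond to V_diff3%.
```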

The Effect of Corporate Association on the Perceived Risk of the Product (소비자의 제품 지각 위험에 대한 기업연상과 효과: 지식과 관여의 조절적 역활을 중심으로)

  • Cho, Hyun-Chul;Kang, Suk-Hou;Kim, Jin-Yong
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.4
    • /
    • pp.1-32
    • /
    • 2008
  • Brown and Dacin (1997) investigated the relationship between corporate associations and product evaluations. Their study focused on the effects of associations with a company's corporate ability (CA) and its corporate social responsibility (CSR) on consumers' product evaluations, and found that both CA and CSR associations influenced product evaluations, with CA associations having the stronger effect. Brown and Dacin (1997) noted, however, that there is little research on how corporate associations impact product responses. Accordingly, some researchers have sought variables that moderate or mediate the relationship between corporate associations and product responses. In particular, a few studies have tested the influence of reputation on product-relevant perceived risk, but the effects of the two types of corporate association on product-relevant perceived risk have not yet been identified. The primary goal of this article is to identify and empirically examine variables that moderate the effects of CA and CSR associations on the perceived risk of the product. In this article, we adopt the concept of corporate associations proposed by Brown and Dacin (1997): CA associations are those related to the company's expertise in producing and delivering its outputs, and CSR associations reflect the organization's status and activities with respect to its perceived societal obligations. This study defines risk as the uncertainty or potential loss concerning the product and the company that consumers take on in a particular purchase decision or after purchase. Risk is classified into product-relevant performance risk and financial risk. Performance risk is the possibility or consequence of a product not functioning at some expected level, and financial risk is the monetary loss one perceives to be incurring if a product does not function at some expected level. With respect to consumer knowledge, expert consumers have substantial experience with or knowledge of the product, whereas novice consumers do not. The model tested in this article is shown in Figure 1. The model indicates that both CA and CSR associations influence performance risk and financial risk, and that the effects of CA and CSR are moderated by product category knowledge (product knowledge) and product category involvement (product involvement). The relationships between corporate associations and product-relevant perceived risk are hypothesized as follows: for example, Hypothesis 1a ($H_{1a}$) states that CA associations have a positive influence on consumers' performance risk. Hypotheses identifying variables that moderate the effects of the two types of corporate association on the perceived risk of the product are also laid down: one interaction-effect hypothesis, Hypothesis 3a ($H_{3a}$), states that consumers' product knowledge moderates the negative relationship between CA associations and product-relevant performance risk. A field experiment was conducted to examine our model. The company tested was fictitious, to preserve internal validity, and water purifiers were used as the product.
Four scenarios describing the imaginary company were developed: Type A with both superior CA and CSR, Type B with superior CSR and inferior CA, Type C with superior CA and inferior CSR, and Type D with both inferior CA and CSR. Respondents were classified into four groups, and each respondent received a questionnaire containing one of the four scenarios (Type A, B, C, or D). Data were collected by means of a self-administered questionnaire from a convenience sample. A total of 300 respondents filled out the questionnaire, and 207 responses were used for further analysis. Table 1 indicates that the scales in this study are reliable: the Cronbach's $\alpha$ coefficients range from 0.85 to 0.92, composite reliability is in the range of 0.85 to 0.92, and average variance extracted is in the 0.72-0.98 range, above the 0.6 threshold. As shown in Table 2, the values for CFI, NNFI, root-mean-square error of approximation (RMSEA), and standardized root-mean-square residual (SRMR) are acceptably close to the standards suggested by Hu and Bentler (1999): .95 for CFI and NNFI, .06 for RMSEA, and .08 for SRMR. We also tested discriminant validity using the procedure of Fornell and Larcker (1981) and, as shown in Table 2, found strong evidence for discriminant validity between each possible pair of latent constructs in all samples. Given that these batteries of overall goodness-of-fit indices were acceptable, that the model was developed on theoretical bases, and that there was a high level of consistency across samples, we proceeded with the previously defined scales. We used moderated hierarchical regression analysis to test the influence of the corporate associations (CA and CSR associations) on product-relevant perceived risk (performance and financial risks) and to identify the variables moderating the relationship between the corporate associations and product-relevant performance risk. The dependent variables are performance and financial risk; CA and CSR associations are the independent variables; and the moderating variables are product category knowledge and product category involvement. As expected, the results show that CA associations have a statistically significant influence on the perceived risk of the product, but CSR associations do not. Product category knowledge and involvement moderate the relationship between CA associations and the perceived risk of the product; however, the effect of CSR associations on the perceived risk of the product is not moderated by consumers' knowledge and involvement. Given this result, a corporation should communicate its CA associations to customers more than its CSR associations so that customers perceive reduced risk. An important theoretical contribution of this research is that the two types of corporate association proposed by Brown and Dacin (1997) and Brown (1998) replicated the difference in their effects on product evaluation. According to Hunter (2001), establishing the validity of a particular finding is important, and roughly ten studies are needed before a rigorous conclusion can be drawn. This study also contributes by finding that the effects of corporate associations on the perceived risk of the product vary with the moderator variables.
In particular, the moderating effect of knowledge on the relationship between corporate associations and product-relevant perceived risk had not previously been tested in Korea. As a managerial implication, we suggest stressing the company's ability to manufacture the product well (CA associations) rather than its fulfillment of social obligations (CSR associations). This study suffers from various limitations that imply future research directions. The moderating effects of product category knowledge and involvement on the relationship between corporate associations and perceived risk need to be replicated. Future research could also explore whether perceived risk mediates the relationship between corporate associations and consumers' product purchases. In addition, using a real rather than a fictitious company will be needed to ensure the external validity of the study. (A regression sketch illustrating the moderation tests follows this entry.)

  • PDF
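
The moderated hierarchical regression described above comes down to testing interaction terms between the association measures and the moderators. Below is a minimal statsmodels sketch; the variable names and synthetic data are hypothetical stand-ins, not the study's survey instrument.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data; in the study these would be survey scales.
rng = np.random.default_rng(0)
n = 207
df = pd.DataFrame({
    "ca_assoc": rng.normal(size=n),      # corporate ability association
    "csr_assoc": rng.normal(size=n),     # social responsibility association
    "knowledge": rng.normal(size=n),     # product category knowledge
    "involvement": rng.normal(size=n),   # product category involvement
})
df["performance_risk"] = (-0.4 * df["ca_assoc"]
                          + 0.2 * df["ca_assoc"] * df["knowledge"]
                          + rng.normal(size=n))

# The interaction terms (ca_assoc:knowledge, etc.) carry the moderation
# tests; a significant interaction corresponds to a hypothesis like H3a.
model = smf.ols(
    "performance_risk ~ ca_assoc * knowledge + ca_assoc * involvement"
    " + csr_assoc * knowledge + csr_assoc * involvement",
    data=df,
).fit()
print(model.summary())
```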

Development of a Statewide Truck Traffic Forecasting Method by Using Limited O-D Survey Data (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed, and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, and no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As with the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions / total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model, with four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations. (A sketch of the gravity model balancing and the SELINK factor follows this entry.)

  • PDF
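
Two computations anchor the methodology above: the doubly constrained gravity model that distributes zonal productions and attractions through friction factors, and the SELINK link adjustment factor (ground count divided by assigned volume). The following minimal sketch shows both; the zone arrays and friction factors are placeholders, not WisDOT data.

```python
import numpy as np

def gravity_model(productions, attractions, friction, iters=100):
    """Doubly constrained gravity model T_ij = a_i * P_i * b_j * A_j * F_ij.

    The balancing factors a and b are iterated so that row sums of the trip
    table match productions and column sums match attractions; friction is
    the matrix of friction factors from the calibrated curve per trip type.
    """
    a = np.ones(len(productions))
    b = np.ones(len(attractions))
    for _ in range(iters):
        a = 1.0 / (friction @ (b * attractions))
        b = 1.0 / (friction.T @ (a * productions))
    return np.outer(a * productions, b * attractions) * friction

def selink_factor(ground_count, assigned_volume):
    """SELINK link adjustment factor: observed over assigned link volume.

    As described above, this factor is applied to the productions and
    attractions of every zone whose trips are assigned to the selected link.
    """
    return ground_count / assigned_volume

# Toy example with 3 zones (placeholder numbers).
P = np.array([100.0, 200.0, 50.0])
A = np.array([120.0, 150.0, 80.0])
F = np.exp(-0.1 * np.array([[1.0, 5.0, 9.0],
                            [5.0, 1.0, 4.0],
                            [9.0, 4.0, 1.0]]))
trips = gravity_model(P, A, F)
print(trips.round(1), selink_factor(950.0, 1000.0))
```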