• Title/Summary/Keyword: Average run length


Design of the Robust CV Control Chart using Location Parameter (위치모수를 이용한 로버스트 CV 관리도의 설계)

  • Chun, Dong-Jin;Chung, Young-Bae
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.39 no.1
    • /
    • pp.116-122
    • /
    • 2016
  • Recently, production cycles in manufacturing have become shorter, and different types of product are produced on the same process line. In this case, a control chart based on the coefficient of variation (CV) is applicable to the process. The rule that a normally distributed random variable falls within three standard deviations of its mean underlies the control charts used to monitor manufacturing processes, and it applies to the CV control chart as well. The estimates ${\bar{x}}$ and $s$ used in the coefficient of variation are computed from all of the data, so the upper control limit, center line, and lower control limit are affected by abnormal values; consequently, the chart can lose its ability to detect assignable causes. The purpose of this study was to present a control chart more robust than the CV control chart for a normal process. To this end, the trimmed estimators ${\bar{x_{\alpha}}}$ and $s_{\alpha}$, based on the location parameter, were used, and the resulting robust chart was named the Trim-CV control chart. The simulation results are summarized as follows. First, the P values (the probability of falling outside the control limits) of the Trim-CV control chart were larger than those of the CV control chart for a normal process. Second, the ARL (average run length) values of the Trim-CV control chart were smaller than those of the CV control chart for a normal process. In particular, the performance difference between the two charts became clearer as the process shift grew larger. Therefore, the Trim-CV control chart proposed in this paper would be a more efficient tool than the CV control chart in small-lot batch production.
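The trimmed-estimator idea behind a Trim-CV chart can be sketched in a few lines; this is a minimal illustration under assumed settings (10% trimming, subgroups of 20, simulated N(50, 5) data, 3-sigma limits on the CV statistic), not the paper's simulation code:

```python
import random
import statistics

def trimmed(values, alpha=0.1):
    """Drop the lowest and highest alpha fraction of the sorted sample."""
    xs = sorted(values)
    k = int(len(xs) * alpha)
    return xs[k:len(xs) - k] if k > 0 else xs

def trim_cv(sample, alpha=0.1):
    """Coefficient of variation from the trimmed mean and trimmed std."""
    core = trimmed(sample, alpha)
    return statistics.stdev(core) / statistics.mean(core)

def control_limits(subgroups, alpha=0.1):
    """3-sigma limits for the Trim-CV statistic across subgroups."""
    cvs = [trim_cv(g, alpha) for g in subgroups]
    center = statistics.mean(cvs)
    spread = statistics.stdev(cvs)
    return center - 3 * spread, center, center + 3 * spread

random.seed(1)
subgroups = [[random.gauss(50, 5) for _ in range(20)] for _ in range(30)]
lcl, cl, ucl = control_limits(subgroups)
```

Because the trimmed estimators discard the extreme observations, an outlier-contaminated subgroup shifts the limits far less than it would for the plain CV chart.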

A Design of Economic CUSUM Control Chart Incorporating Quality Loss Function (품질손실을 고려한 경제적 CUSUM 관리도)

  • Kim, Jungdae
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.41 no.4
    • /
    • pp.203-212
    • /
    • 2018
  • Quality requirements of manufactured products or parts are given in the form of specification limits on the quality characteristics of individual units. If a product is to meet the customer's fitness-for-use criteria, it should be produced by a process that is stable or repeatable; in other words, the process must be capable of operating with little variability around the target or nominal value of the product's quality characteristic. In order to maintain and improve product quality, we apply statistical process control techniques such as the histogram, check sheet, Pareto chart, cause-and-effect diagram, and control chart. Among these techniques, the most important is control charting. Cumulative sum (CUSUM) control charts have been used in statistical process control (SPC) in industry for monitoring process shifts and supporting online measurement. The objective of this research is to apply Taguchi's quality loss function concept to cost-based CUSUM control chart design. In this study, a modified quality loss function was developed to reflect quality loss situations where the general quadratic loss curve is not appropriate. This research also provides a methodology for designing CUSUM charts using the Taguchi quality loss function concept under the minimum cost-per-hour criterion. The new model differs from previous models in that it assumes quality loss is incurred even during the in-control period. The model was compared with the cost-based CUSUM models of Wu and Goel; according to a numerical sensitivity analysis, the proposed model results in a longer average run length in the in-control period than the other two models.
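The average-run-length comparison discussed here can be illustrated with the standard tabular CUSUM; the reference value k, decision interval h, and simulation settings below are assumptions for illustration, not the paper's economic-design parameters:

```python
import random

def cusum_run_length(samples, mu0=0.0, k=0.5, h=5.0):
    """Tabular CUSUM: return the index of the first out-of-control signal."""
    c_plus = c_minus = 0.0
    for i, x in enumerate(samples, start=1):
        c_plus = max(0.0, x - (mu0 + k) + c_plus)
        c_minus = max(0.0, (mu0 - k) - x + c_minus)
        if c_plus > h or c_minus > h:
            return i
    return None  # no signal within the sample

def average_run_length(shift, n_runs=200, horizon=2000):
    """Monte Carlo ARL for a N(shift, 1) process monitored at target mu0 = 0."""
    total = 0
    for _ in range(n_runs):
        xs = (random.gauss(shift, 1.0) for _ in range(horizon))
        rl = cusum_run_length(xs)
        total += rl if rl is not None else horizon
    return total / n_runs

random.seed(42)
arl_in_control = average_run_length(0.0)  # a long in-control ARL is desirable
arl_shifted = average_run_length(1.0)     # a short out-of-control ARL is desirable
```

A longer in-control ARL, as claimed for the proposed model, means fewer false alarms per hour of stable operation, which is exactly what a cost-per-hour criterion rewards.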

Characterisation of runs of homozygosity and inbreeding coefficients in the red-brown Korean native chickens

  • John Kariuki Macharia;Jaewon Kim;Minjun Kim;Eunjin Cho;Jean Pierre Munyaneza;Jun Heon Lee
    • Animal Bioscience
    • /
    • v.37 no.8
    • /
    • pp.1355-1366
    • /
    • 2024
  • Objective: The analysis of runs of homozygosity (ROH) has been applied to assess the level of inbreeding and to identify selection signatures in various livestock species. The objectives of this study were to characterize the ROH pattern, estimate the rate of inbreeding, and identify signatures of selection in red-brown Korean native chickens. Methods: Illumina 60K single nucleotide polymorphism chip data from 651 chickens were used in the analysis. Runs of homozygosity were analysed using the PLINK v1.9 software. Inbreeding coefficients were estimated using the GCTA software, and their correlations were examined. Genomic regions with high levels of ROH were explored to identify selection signatures. Results: A total of 32,176 ROH segments were detected in this study. The majority of the ROH segments were shorter than 4 Mb. The average ROH inbreeding coefficients (FROH) varied with the length of the ROH segments. The means of the inbreeding coefficients calculated with the different methods were also variable. The correlations between the different inbreeding coefficients were positive and highly variable (r = 0.18-1). Five ROH islands harbouring important quantitative trait loci were identified. Conclusion: This study assessed the level of inbreeding and the patterns of homozygosity in red-brown Korean native chickens. The results suggest that the level of recent inbreeding is low, which indicates substantial progress in the conservation of red-brown Korean native chickens. Additionally, candidate genomic regions associated with important production traits were detected in the homozygous regions.
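The FROH statistic reported above is conventionally the summed length of an individual's ROH segments divided by the autosomal genome length covered by the SNP chip. A minimal sketch, in which both the segment lengths and the 965 Mb autosome length are hypothetical illustrative values, not figures from the study:

```python
def f_roh(roh_lengths_mb, autosome_length_mb=965.0):
    """F_ROH = total length of ROH segments / total autosomal genome length."""
    return sum(roh_lengths_mb) / autosome_length_mb

# Hypothetical ROH segments (Mb) for one individual
segments = [1.2, 3.8, 0.9, 5.5]
f = f_roh(segments)
```

Restricting `segments` to ROH above a minimum length (e.g. > 4 Mb) yields the length-class-specific FROH values whose variation the abstract mentions.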

An Experimental Study on Development of the Opening Apparatus for Oil Boom (오일펜스 전개장치 개발에 관한 실험적 연구)

  • Jang Duck-Jong;Na Sun-Chol
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.9 no.1
    • /
    • pp.45-54
    • /
    • 2006
  • This study examined, through experiments, methods by which a ship can unfold and tow an oil boom by attaching an opening apparatus to it. The shape and dimensions of the opening apparatus were designed considering the measured towing tension load of the oil boom and the dimensions of the winch drum of the boom installed in the ship. For the field experiment to verify the performance of the opening apparatus, apparatuses of $3.0m^2$ and $6.0m^2$, corresponding to 91% and 75% of the calculated values for types B and C respectively, were prepared. As a result, the tension T (kg) in the type B oil boom as a function of towing speed (v) when towed by two ships was found to be $T=920v^{1.1}$ and $T=500v^{0.9}$ for ship distances of 100 m and 50 m respectively. Based on this result, the dimensions of the opening apparatus for the type B and C oil booms were calculated as $3.3m^2$ and $8.0m^2$ respectively. When unfolding and towing with the opening apparatus and a 200 m towing line attached at both ends of the type B and type C booms, the maximum opening width averaged 114 m and 95 m respectively (opening width/total boom length: 33% and 57%) at a towing speed of 1.5 kt. The opening apparatus was judged to concentrate spilled oil effectively. However, regarding the increase of the boom opening width with towing line length, the rate of increase dropped markedly when the line was lengthened from 100 m to 150 m and to 200 m, although it rose sharply, by 31% and 40%, when the line was lengthened from 50 m to 100 m. Therefore, the towing line should be kept at about 100 m to obtain good spread efficiency of the opening apparatus.
Additionally, at towing speeds above 1.5 kt the opening width narrowed because of reduced spread efficiency, and the shape of the oil boom could become unstable because of partial sinking of the boom, waves running over it, or flapping of the skirt. Thus a reasonable towing speed for operating the opening apparatus is within 1.5 kt.
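The reported tension curves are simple power laws in towing speed, so the load at a given speed can be evaluated directly; a sketch using the type B coefficients quoted in the abstract (the function name is ours, and 1.5 kt is the experiment's operating speed):

```python
def towing_tension_kg(v_knots, a, b):
    """Empirical towing tension T = a * v**b (kg), with v in knots."""
    return a * v_knots ** b

# Type B oil boom towed by two ships at 1.5 kt
t_100m_spacing = towing_tension_kg(1.5, 920, 1.1)  # ships 100 m apart
t_50m_spacing = towing_tension_kg(1.5, 500, 0.9)   # ships 50 m apart
```

The roughly doubled tension at 100 m spacing versus 50 m spacing is what drives the sizing of the opening apparatus and the winch drum.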


Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure Curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands I studied the following topics, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to derive a unit hydrograph, but here I explain how to derive one from an actual runoff curve at Naju. A discharge curve produced by a single rain storm depends on the rainfall intensity per hour. After constructing the hydrograph at two-hour intervals, a two-hour unit hydrograph is obtained by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area; as it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff, and I kept the difference between the calculated discharge amount and the measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area.
To calculate the amount of water discharged through the sluice per half hour, the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir; the calculated water level must equal the estimated water level. The mean tide is adequate for determining the sluice dimension, because the spring tide is the worst case and the neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account.
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. Critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (the inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (the outer water level), the velocity can be calculated with the formula $h=\frac{V^2}{2g}$, and it must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir"; the flow over the weir is then dependent on the higher water level and not on the difference between high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, owing to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec.
As the maximum velocities are higher than this limit, other construction methods must be used to close the gap. This can be done with dump cars from each side or by using a cableway.
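The iterative balance described in part 3 equates two independent velocity estimates for the closing gap: one from the head difference via $h = V^2/2g$, and one from the current obtained as storage area times the rate of water-level rise divided by gap area. A minimal sketch with hypothetical closure-stage numbers (the storage area, level rise, and gap area are invented for illustration):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def velocity_from_head(head_m):
    """Velocity from the head difference via h = V^2 / (2g)."""
    return math.sqrt(2 * G * head_m)

def velocity_from_current(storage_area_m2, level_rise_m, dt_s, gap_area_m2):
    """Velocity from the current: Q = storage area * level rise / dt, V = Q / A_gap."""
    q = storage_area_m2 * level_rise_m / dt_s
    return q / gap_area_m2

# Hypothetical closure-stage numbers, chosen only to show the balance check
v_head = velocity_from_head(0.46)
v_flow = velocity_from_current(2.0e7, 0.5, 3600.0, 925.0)
# If the two estimates disagree, the inner water level is re-estimated
# and both velocities are recomputed until they match.
```

In this invented example both estimates come out just above 3 m/sec, the stated limit for closure with barges, which is precisely the situation where dump cars or a cableway become necessary.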


Studies on Soil Conservation Effects of the Straw-mat Mulching (III) -Effects of the Mat Structures and Its Practicality- (볏짚거적덮기공법(工法)의 사방효과(砂防效果)에 관(關)한 연구(硏究)(III) -거적 밀도(密度)의 영향(影響) 및 공법(工法)의 실용성(實用性)-)

  • Woo, Bo-Myeong
    • Journal of Korean Society of Forest Science
    • /
    • v.27 no.1
    • /
    • pp.5-14
    • /
    • 1975
  • Eroded sloping faces on hillsides, including cut-bank slopes, are liable to both surface erosion and landslides, and the key to controlling these forms of erosion lies in drainage of excessive runoff and establishment of dense vegetation, including surface mulching, on the slopes. Micro-plots of $1.6m^2$ (1 m in width and 1.6 m in slope length, with a gradient of 1:1.2) on banking slopes of coarse sandy soil were used to establish the order of magnitude of the differences in soil erosion control, water runoff, and survival rating, with three replicated experimental plots at three coverage levels of coarse straw-mat mulching: 90% (dense), 70% (moderate), and 50% (sparse). The main results may be summarized as follows: 1. The rates of surface runoff were 13.13% for the dense mulching, 14.21% for the moderate mulching, and 15.57% for the sparse mulching. 2. The total amounts of soil loss were about 1.24 t/ha for the dense mulching, about 1.33 t/ha for the moderate mulching, and about 1.44 t/ha for the sparse mulching. The amounts of soil loss under these treatments are much lower than the USDA erosion standard (Bennett, 1939). 3. The average numbers of germinated seedlings were 80 for the dense mulching, 132 for the moderate, and 121 for the sparse. Large numbers of seedlings were suppressed and died during growth under the dense mulching, mainly due to mechanical obstruction. 4. A coarse straw mat with about 70% coverage density is the most suitable mulch for both soil erosion control and vegetation establishment. 5. The coarse straw-mat mulching method is the most recommendable measure for establishing vegetation cover with less soil erosion on denuded gentle hillside slopes in Korea at present.


Distribution and Frequency of SSR Motifs in the Chrysanthemum SSR-enriched Library through 454 Pyrosequencing Technology (국화 SSR-enriched library에서 SSR 반복염기의 분포 및 빈도)

  • Moe, Kyaw Thu;Ra, Sang-Bog;Lee, Gi-An;Lee, Myung-Chul;Park, Ha-Seung;Kim, Dong-Chan;Lee, Cheol-Hwi;Choi, Hyun-Gu;Jeon, Nak-Beom;Choi, Byung-Jun;Jung, Ji-Youn;Lee, Kyu-Min;Park, Yong-Jin
    • Journal of the Korean Society of International Agriculture
    • /
    • v.23 no.5
    • /
    • pp.546-551
    • /
    • 2011
  • Chrysanthemums, often called mums or chrysanths, belong to the genus Chrysanthemum, which includes about 30 species of perennial flowering plants in the family Asteraceae. We extracted DNA from Dendranthema grandiflorum ('Smileball') to construct a simple sequence repeat (SSR)-enriched library, using a modified biotin-streptavidin capture method. GS FLX (Genome Sequencer FLX System which provides the flexibility to perform the broad range of applications) sequencing (at the 1/8 run specification) resulted in 18.83 mega base pairs (Mbp) with an average read length of 280.06 bp. Sequence analyses of all SSR-containing clones revealed a predominance of di-nucleotide motifs (16,375, 61.5%) followed by tri-nucleotide motifs (6,616, 24.8%), tetra-nucleotide motifs (1,674, 6.3%), penta-nucleotide motifs (1,283, 4.8%), and hexa-nucleotide motifs (693, 2.6%). Among the di-nucleotide motifs, the AC/CA class was the most frequently identified (93.5% of all di-nucleotide types), followed by the GA/AG class (6.1%), the AT/TA class (0.4%), and the CG/GC class (0.03%). When we analyzed the distribution of different repeat motifs and their respective numbers of repeats, regardless of the motif class, of 100 SSR markers, we found a higher number of di-nucleotide motifs with 70 to 80 repeats; we also found two di-nucleotide motifs with 83 and 89 repeats, respectively, but their product lengths were within optimum size (297 and 300 bp). In future work, we will screen for polymorphisms of possible primer pairs. The results will provide a useful tool for assessing molecular diversity and investigating the population structure among and within Chrysanthemum species.
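Motif classes like those tabulated above can be located with a simple repeat scan over a read; this regex-based sketch only illustrates SSR detection (the sequence and the 5-repeat threshold are invented), not the pipeline used on the GS FLX reads:

```python
import re

def find_ssrs(seq, min_repeats=5):
    """Find simple sequence repeats: a 2-6 bp unit repeated >= min_repeats times.

    Returns (unit, repeat_count) pairs; overlapping phases of the same run
    (e.g. AC vs CA) are reported separately, matching the AC/CA class idea."""
    hits = []
    for unit_len in range(2, 7):
        # Lookahead so runs starting at every position are examined.
        pattern = re.compile(r"(?=(([ACGT]{%d})\2{%d,}))" % (unit_len, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.group(2), len(m.group(1)) // unit_len))
    return hits

# Invented test sequence: an AC di-nucleotide run and a GATA tetra-nucleotide run
seq = "TT" + "AC" * 6 + "GG" + "GATA" * 5 + "CC"
motifs = find_ssrs(seq)
```

Counting the detected units by length class (di-, tri-, tetra-nucleotide, and so on) gives frequency tables of the kind reported in the abstract.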

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the OD survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed, and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged, and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available; for this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, selected-link-based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected-link-based analyses are conducted using both 16 and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions; but more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals; link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation; however, the %RMSE for the ISH shows the smallest value, while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results: no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied; ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
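The link adjustment factor at the core of the SELINK process is simply the ground count divided by the total assigned volume, applied to every zone whose trips use the selected link. A toy sketch (the zone names and volumes are invented):

```python
def selink_adjustment(ground_count, assigned_volume):
    """Link adjustment factor: observed ground count over total assigned volume."""
    return ground_count / assigned_volume

def adjust_trip_ends(productions, attractions, zones_using_link, factor):
    """Scale productions/attractions of every zone whose trips use the selected link."""
    for z in zones_using_link:
        productions[z] *= factor
        attractions[z] *= factor
    return productions, attractions

prods = {"A": 1000.0, "B": 800.0, "C": 600.0}
attrs = {"A": 900.0, "B": 700.0, "C": 500.0}

# Assignment put 5000 trucks on the selected link; the ground count saw 4500.
factor = selink_adjustment(ground_count=4500.0, assigned_volume=5000.0)
prods, attrs = adjust_trip_ends(prods, attrs, zones_using_link=["A", "C"], factor=factor)
```

After scaling, a new assignment is run and the process is repeated, which is the iteration whose stability the study checks over four repetitions.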
