• Title/Summary/Keyword: input reduction

Search Result 1,378

Feasibility of Tax Increase in Korean Welfare State via Estimation of Optimal Tax burden Ratio (적정조세부담률 추정을 통한 한국 복지국가 증세가능성에 관한 연구)

  • Kim, SeongWook
    • 한국사회정책 / v.20 no.3 / pp.77-115 / 2013
  • The purpose of this study is to present empirical evidence for the discussion of financing social welfare by estimating the optimal tax burden in the main OECD member countries, using the Hausman-Taylor method to account for the endogeneity of explanatory variables. The author also constructed an international tax comparison (ITC) index reflecting theoretical hypotheses on the revenue-expenditure nexus within the model, in order to compare the real tax burden across countries and to examine the feasibility of a tax increase in Korea. The analysis showed that the higher the level of tax burden, the higher the level of welfare expenditure, indicating a link between high burden and high welfare in terms of scale. The results also indicated that the subject countries have recently entered a state of low tax burden. Korea maintained a low burden until the late 1990s, but its tax burden rose sharply after the IMF financial crisis. However, owing to external economic conditions and tax reduction policy, it re-entered the low-burden state after 2009. At the same time, the degree to which social welfare expenditure reduces the tax burden has gradually strengthened since the crisis. In this context, the optimal tax burden ratio of Korea as of 2010 is estimated at 25.8%~26.5% of GDP when welfare expenditure variables are included. At this level, Korea was classified as a 'high tax burden-low ITC' country for which a tax increase of 0.7~1.4%p may be feasible and for which tax reform aimed at raising revenue has a higher probability of success than in other countries. However, raising social security contributions and consumption tax was found to be less appropriate from a fiscal management perspective than increases in other tax items, given their relatively higher ITC. A tax increase is not necessarily required even though there is room for one; the optimal tax burden ratio should be understood as the level achievable on average relative to other nations, not as the "proper" level. Thus, discussion of tax increases should be accompanied by a comprehensive understanding of differences in economic development models across nations and of the institutional and historical attributes embedded in each specific tax mix.
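As a rough illustration (not from the paper), the quoted optimal range and feasible increase can be combined to back out the actual 2010 burden they jointly imply; the derivation below is an assumption-labelled sketch, not a reported figure.

```python
# Back-of-the-envelope check of the figures quoted in the abstract
# (optimal burden 25.8%-26.5% of GDP, feasible increase 0.7-1.4 percentage points).
optimal_low, optimal_high = 25.8, 26.5      # optimal tax burden, % of GDP
increase_low, increase_high = 0.7, 1.4      # feasible increase, percentage points

# Implied actual 2010 tax burden consistent with both ends of the ranges
actual_from_low = optimal_low - increase_low     # 25.1
actual_from_high = optimal_high - increase_high  # 25.1
print(f"implied actual burden: {actual_from_low:.1f}% / {actual_from_high:.1f}% of GDP")
```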

Analysis of Economic and Environmental Effects of Remanufactured Furniture Through Case Studies (사례분석을 통한 사용 후 가구 재제조의 경제적·환경적 효과 분석)

  • Lee, Jong-Hyo;Kang, Hong-Yoon;Hwang, Yong Woo;Hwang, Hyeon-Jeong
    • Resources Recycling / v.31 no.5 / pp.67-76 / 2022
  • The furniture industry has high potential to create added value and new jobs, owing to an industrial structure that consists mainly of small and medium-sized enterprises (SMEs). Recently, however, used furniture with sufficient reuse value has been crushed and used as solid refuse fuel (SRF). In addition, the number of waste treatment companies continues to decrease, causing congestion of wood waste. Developing a business model for remanufacturing used furniture can be suggested as an alternative because of its high circular-economy efficiency. Remanufacturing businesses, including those in the furniture industry, create positive economic, environmental, and job-creation effects; in other words, remanufacturing is an effective way of recycling that reduces the resources and energy consumed in production. The economic analysis shows that the expected annual revenue of a single-worker furniture remanufacturing site was 104 million won, 3.11 times the average income of a single-worker household in Korea, and its B/C ratio was estimated at about 30, indicating high business feasibility. On a per-weight basis, revenue from furniture remanufacturing was also about 320 times higher than that from SRF production. Furthermore, the GHG reduction from furniture remanufacturing is 2.2 ton CO2-eq. per year, comparable to the annual GHG absorption of 937 pine trees or 622 Korean oak trees. These results demonstrate the importance of adopting an appropriate recycling method that considers both economic and environmental effects at the end-of-life stage.
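A rough illustration of the equivalence arithmetic behind the figures above; the per-tree absorption rates and implied household income are derived here for illustration only, not values reported by the study.

```python
# Back-of-the-envelope check of the equivalences quoted in the abstract.
# Input values come from the abstract; the per-tree rates are derived, not reported.
ghg_reduction_kg = 2.2 * 1000          # 2.2 t CO2-eq/year expressed in kg
pine_trees, oak_trees = 937, 622

pine_rate = ghg_reduction_kg / pine_trees   # ~2.3 kg CO2-eq per pine tree per year
oak_rate = ghg_reduction_kg / oak_trees     # ~3.5 kg CO2-eq per Korean oak per year
print(f"implied absorption: pine {pine_rate:.2f} kg/yr, oak {oak_rate:.2f} kg/yr")

annual_revenue_won = 104_000_000              # single-worker remanufacturing site
household_income = annual_revenue_won / 3.11  # implied average single-worker household income
print(f"implied average household income: {household_income:,.0f} won/yr")
```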

A Study on the Resource Recovery of Fe-Clinker generated in the Recycling Process of Electric Arc Furnace Dust (전기로 제강분진의 재활용과정에서 발생되는 Fe-Clinker의 자원화에 관한 연구)

  • Jae-hong Yoon;Chi-hyun Yoon;Hirofumi Sugimoto;Akio Honjo
    • Resources Recycling / v.32 no.1 / pp.50-59 / 2023
  • The amount of dust generated during the melting of scrap in an electric arc furnace is approximately 1.5% of the scrap metal input, and it is collected primarily in a bag filter. Electric arc furnace dust consists mainly of zinc and iron. Zinc processing starts with conversion of the dust into pellet form by adding a carbon-based reducing agent (coke, anthracite) and limestone (C/S control). These pellets then undergo reduction, volatilization, and re-oxidation in a rotary kiln or RHF reactor to recover crude zinc oxide (60% w/w). The iron is discharged from the electric arc furnace dust as a solid called Fe clinker (a secondary Fe-based by-product). The Fe clinker is then treated by several methods that vary by country, including landfilling and recycling (e.g., as subbase course material, aggregate for concrete, or an Fe source for cement manufacturing). However, landfilling has several drawbacks, including environmental pollution due to leaching, high landfill costs, and the waste of iron resources. To improve Fe recovery from the clinker, we pulverized it into optimally sized particles and employed specific-gravity and magnetic separation methods to isolate the metal. A carbon-based reducing agent and a binding material were added to the separated coarse powder (>10 ㎛) to prepare briquette clinker. A small amount (1-3% w/w) of the briquette clinker was charged together with the scrap into an electric arc furnace to evaluate its feasibility as an additive (carbonaceous material, heat-generating material, and Fe source).
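A rough mass-balance illustration of the rates quoted above; the 100 t scrap charge is an assumed example value, not a figure from the paper.

```python
# Hypothetical mass-balance sketch using the rates quoted in the abstract
# (dust ~1.5% of scrap input; briquette clinker charged at 1-3% w/w of the scrap).
# The 100 t scrap charge is an assumed example, not from the paper.
scrap_charge_t = 100.0

dust_t = scrap_charge_t * 0.015          # EAF dust generated per charge
briquette_low = scrap_charge_t * 0.01    # lower bound of briquette addition
briquette_high = scrap_charge_t * 0.03   # upper bound of briquette addition

print(f"dust generated: {dust_t:.1f} t")
print(f"briquette clinker charged: {briquette_low:.1f}-{briquette_high:.1f} t per charge")
```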

A Study on the Optimal Process Parameters for Recycling of Electric Arc Furnace Dust (EAFD) by Rotary Kiln (Rotary Kiln에 의한 전기로 제강분진(EAFD)의 재활용을 위한 최적의 공정변수에 관한 연구)

  • Jae-hong Yoon;Chi-hyun Yoon;Myoung-won Lee
    • Resources Recycling / v.33 no.4 / pp.47-61 / 2024
  • The most widely commercialized technology for recovering the large amount of zinc contained in electric arc furnace dust (EAFD) is the Waelz kiln process. In this process, components such as Zn and Pb in the EAFD are reduced and volatilized (endothermic reaction) in a high-temperature kiln, re-oxidized (exothermic reaction) in the gas phase, and recovered as crude zinc oxide (60 wt% Zn) in the bag filter installed at the rear end of the kiln. In this study, an experimental Waelz kiln was built to determine the optimal process parameter values for practical application to commercial-scale recycling in a large kiln. Pellets containing EAFD, reducing agent, and limestone were loaded continuously into the kiln, and the feed rate, heating temperature, and residence time were examined to obtain the optimal crude zinc oxide recovery rate. The optimal pellet manufacturing conditions (drum tilt angle, moisture addition, mixing time, etc.) were also investigated. Referring to the SiO2-CaO-FeO ternary system diagram, the formation behavior of low-melting-point compounds, which are reaction products inside the kiln that depend on the basicity of the pellets, and their reactivity with (adhesion to) the castable lining on the inner wall of the kiln were examined. Finally, to quantitatively assess whether anthracite can substitute for coke as the reducing agent, the changes in the temperature distribution inside the kiln, where oxidation/reduction reactions occur, with increasing anthracite addition, the quality of the crude zinc oxide, and the behavior of the tar contained in anthracite were also investigated.
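For readers unfamiliar with the basicity (C/S) control mentioned above, the sketch below shows the standard binary CaO/SiO2 basicity index; the pellet composition is an assumed example, not data from the study.

```python
# Minimal sketch of the binary basicity (C/S = CaO/SiO2) calculation referred to in the
# abstract when limestone addition and pellet basicity are discussed.
# The composition values below are assumed examples, not data from the paper.
def binary_basicity(cao_wt_pct: float, sio2_wt_pct: float) -> float:
    """Return the CaO/SiO2 mass ratio commonly used as a basicity index."""
    return cao_wt_pct / sio2_wt_pct

pellet = {"CaO": 8.0, "SiO2": 5.0, "FeO": 30.0}   # hypothetical pellet analysis, wt%
print(f"pellet basicity C/S = {binary_basicity(pellet['CaO'], pellet['SiO2']):.2f}")
```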

The Effect of Structured Information on the Sleep Amount of Patients Undergoing Open Heart Surgery (계획된 간호 정보가 수면량에 미치는 영향에 관한 연구 -개심술 환자를 중심으로-)

  • 이소우
    • Journal of Korean Academy of Nursing / v.12 no.2 / pp.1-26 / 1982
  • The main purpose of this study was to test the effect of structured information on the sleep amount of patients undergoing open heart surgery. The study specifically addressed the following two basic research questions: (1) Would the structured information reduce sleep disturbance related to anxiety and physical stress before and after the operation? and (2) What would be the effects of the structured information on the level of preoperative state anxiety, the hormonal change, and the degree of behavioral change in patients undergoing open heart surgery? A quasi-experimental design was used to answer these questions, with one experimental group and one control group. Subjects in both groups were matched as closely as possible to avoid effects of differences inherent in group characteristics. Baseline data were also collected on both groups for 7 days prior to the experiment and showed that subjects in both groups had comparable sleep patterns, trait anxiety, hormonal levels, and behavioral levels. Structured information, as the experimental input, was given to subjects in the experimental group only. Data were collected and compared between the experimental and control groups on the sleep amount of the consecutive pre- and postoperative days, on the preoperative state anxiety level, and on hormonal and behavioral changes. To test the effectiveness of the structured information, two main hypotheses and three sub-hypotheses were formulated as follows. Main hypothesis 1: The experimental group receiving structured information will have a greater sleep amount than the control group without structured information on the night before the open heart surgery. Main hypothesis 2: The experimental group with structured information will have a greater sleep amount than the control group without structured information during the week following the open heart surgery. Sub-hypothesis 1: The experimental group with structured information will have a lower level of state anxiety than the control group without structured information on the night before the open heart surgery. Sub-hypothesis 2: The experimental group with structured information will have a lower hormonal level than the control group without structured information on the 5th day after the open heart surgery. Sub-hypothesis 3: The experimental group with structured information will show a lower level of behavioral change than the control group without structured information during the week after the open heart surgery. The research was conducted in a national university hospital in Seoul, Korea. The 53 subjects who participated were assigned to the experimental and control groups by random sampling; 26 were placed in the experimental group and 27 in the control group. Instruments: (1) Structured information: the structured information, as the independent variable, was constructed by the researcher on the basis of Roy's adaptation model, covering physiologic needs, self-concept, role function, and interdependence needs as related to sleep, as well as the operative procedures. (2) Sleep amount measure: sleep amount, as the main dependent variable, was measured by trained nurses through observation based on established criteria such as closed or open eyes, regular or irregular respiration, body movement, posture, responses to light and questioning, facial expressions, and self-report after sleep.
(3) State anxiety measure: state anxiety, as a sub-dependent variable, was measured by Spielberger's STAI anxiety scale. (4) Hormonal change measure: the hormonal level, as a sub-dependent variable, was measured by the cortisol level in plasma. (5) Behavior change measure: behavior, as a sub-dependent variable, was measured by Wyatt's Behavior and Mood Rating Scale. The data were collected over a period of four months, from June to October 1981, after a pretest period of two months. For the analysis of the data and the tests of the hypotheses, t-tests of mean differences and analysis of covariance were used. The results for the instruments were as follows: (1) The STAI measurements of trait and state anxiety, analyzed with Cronbach's alpha coefficient for item analysis and reliability, showed reliability levels of r = .90 and r = .91, respectively. (2) The Behavior and Mood Rating Scale measurements were analyzed by principal component analysis. The seven factors retained were anger, anxiety, hyperactivity, depression, bizarre behavior, suspicious behavior, and emotional withdrawal; their cumulative percentage of variance was 71.3%. The results of the hypothesis tests were as follows: (1) Main hypothesis 1 was not supported. The experimental group had 282 minutes of sleep compared with 255 minutes in the control group; the sleep amount was thus higher in the experimental group, but the difference was not statistically significant at the .05 level. (2) Main hypothesis 2 was not supported. The mean sleep amounts of the experimental and control groups were 297 minutes and 278 minutes, respectively; the experimental group slept more, but the difference was not statistically significant at the .05 level. (3) Sub-hypothesis 1 was not supported. The mean state anxiety scores of the experimental and control groups were 42.3 and 43.9; the experimental group had slightly lower state anxiety, but the difference was not statistically significant at the .05 level. (4) Sub-hypothesis 2 was not supported. The mean hormonal levels of the experimental and control groups were 338 ㎍ and 440 ㎍, respectively; the experimental group showed a lower hormonal level, but the difference was not statistically significant at the .05 level. (5) Sub-hypothesis 3 was supported. The mean behavioral scores of the experimental and control groups were 29.60 and 32.00, respectively; the experimental group showed a lower level of behavioral change, and the difference was statistically significant at the .05 level. In summary, the structured information did not influence the sleep amount, state anxiety, or hormonal level of subjects undergoing open heart surgery at a statistically significant level, although definite trends appeared in these relationships, and its effect on the level of behavioral change was significant. It can further be speculated that the large individual differences in variables such as sleep amount, state anxiety, and fluctuations in hormonal level may be partly responsible for the statistical insensitivity of the experiment.
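A minimal sketch of the kind of mean-difference test described above; the per-subject values are synthetic illustrations (only the group sizes and group means come from the abstract, and the spread is an assumption).

```python
# Illustration of an independent-samples t-test on sleep amount (minutes) between
# the experimental (n=26) and control (n=27) groups. The per-subject values are
# synthetic; they are NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(loc=282, scale=60, size=26)  # group mean from the abstract
control = rng.normal(loc=255, scale=60, size=27)       # group mean from the abstract

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}  (not significant if p >= .05)")
```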


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows with 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolved the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and made it possible to reflect the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although prediction of corporate default risk using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation method: the credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that combine various machine learning models. This makes it possible to capture the complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to compute. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven parts, and the sub-models were trained on these divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed between the stacking ensemble model and each individual model.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts making up each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
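A minimal sketch of a stacking ensemble in the spirit described above, using generic scikit-learn stand-ins; the sub-models, features, and binary target are placeholders, not the paper's actual models, financial variables, or Merton-model risk measure.

```python
# Sketch of a stacking ensemble: out-of-fold sub-model predictions feed a meta-learner.
# Data, sub-models, and target are generic placeholders, not the study's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,  # seven splits, echoing the seven-part division described in the abstract
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```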

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • Convolutional Neural Network (ConvNet) is one class of the powerful Deep Neural Network that can analyze and learn hierarchies of visual features. Originally, first neural network (Neocognitron) was introduced in the 80s. At that time, the neural network was not broadly used in both industry and academic field by cause of large-scale dataset shortage and low computational power. However, after a few decades later in 2012, Krizhevsky made a breakthrough on ILSVRC-12 visual recognition competition using Convolutional Neural Network. That breakthrough revived people interest in the neural network. The success of Convolutional Neural Network is achieved with two main factors. First of them is the emergence of advanced hardware (GPUs) for sufficient parallel computation. Second is the availability of large-scale datasets such as ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires lots of effort to gather large-scale dataset to train a ConvNet. Moreover, even if we have a large-scale dataset, training ConvNet from scratch is required expensive resource and time-consuming. These two obstacles can be solved by using transfer learning. Transfer learning is a method for transferring the knowledge from a source domain to new domain. There are two major Transfer learning cases. First one is ConvNet as fixed feature extractor, and the second one is Fine-tune the ConvNet on a new dataset. In the first case, using pre-trained ConvNet (such as on ImageNet) to compute feed-forward activations of the image into the ConvNet and extract activation features from specific layers. In the second case, replacing and retraining the ConvNet classifier on the new dataset, then fine-tune the weights of the pre-trained network with the backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying features with high dimensional complexity that is directly extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers address the different characteristics of the image which means better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. Firstly, images from target task are given as input to ConvNet, then that image will be feed-forwarded into pre-trained AlexNet, and the activation features from three fully connected convolutional layers are extracted. Secondly, activation features of three ConvNet layers are concatenated to obtain multiple ConvNet layers representation because it will gain more information about an image. When three fully connected layer features concatenated, the occurring image representation would have 9192 (4096+4096+1000) dimension features. However, features extracted from multiple ConvNet layers are redundant and noisy since they are extracted from the same ConvNet. Thus, a third step, we will use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify image more accurately, and the performance of transfer learning can be improved. 
To evaluate proposed method, experiments are conducted in three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representation by using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% accuracy achieved by FC7 layer on the Caltech-256 dataset, 73.1% accuracy compared to 69.2% accuracy achieved by FC8 layer on the VOC07 dataset, 52.2% accuracy compared to 48.7% accuracy achieved by FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, 2.8%, 2.1% and 3.1% accuracy improvement on Caltech-256, VOC07, and SUN397 dataset respectively compare to existing work.
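A minimal sketch of the described pipeline under stated assumptions: pre-trained AlexNet fully connected activations are concatenated (4096+4096+1000 = 9192 dims), reduced with PCA, and fed to a linear classifier. The input tensors and labels below are random placeholders, not one of the benchmark datasets, and downloading the torchvision weights is assumed.

```python
# Sketch: AlexNet as a fixed feature extractor over multiple FC layers + PCA + linear SVM.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def fc_features(images: torch.Tensor) -> torch.Tensor:
    """Concatenate the activations of AlexNet's three fully connected layers."""
    x = torch.flatten(alexnet.avgpool(alexnet.features(images)), 1)
    feats = []
    for layer in alexnet.classifier:          # Dropout/Linear/ReLU stack of AlexNet
        x = layer(x)
        if isinstance(layer, torch.nn.Linear):
            feats.append(x)                   # FC6 (4096), FC7 (4096), FC8 (1000)
    return torch.cat(feats, dim=1)            # 9192-dimensional representation

# Usage with placeholder tensors `images` (N,3,224,224) and integer `labels` (N,)
images, labels = torch.randn(64, 3, 224, 224), torch.randint(0, 10, (64,))
features = fc_features(images).numpy()
reduced = PCA(n_components=32).fit_transform(features)   # keep the salient components
clf = LinearSVC().fit(reduced, labels.numpy())
print("training accuracy:", clf.score(reduced, labels.numpy()))
```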

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using origin-destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network and will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the OD survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting OD trip length frequency (OD TLF) distributions by trip type are compared with the corresponding TLFs produced by the gravity model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is whether multiple friction factor curves or a single friction factor curve per trip type are needed to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database; given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but these data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Selected-link-based (SELINK) analyses are then used to adjust the productions and attractions and, if necessary, recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume, and this factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected-link-based analyses are conducted using both 16 and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions, and, more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and vehicle miles of travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals; LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively, implying that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the state highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value, a pattern consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, and no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As in the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions / total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin, which suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. Additional independent variables, including zonal employment data by industry type (office employees and manufacturing employees), zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of these factors in the 3.0 range. No obvious explanation for the frequency distribution could be found, but the revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecast by the GM truck forecasting model, 2.975 billion, with the WisDOT-computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecast by using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied; ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
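A minimal sketch of the doubly constrained gravity model with friction factors that the study calibrates; the zone data and the exponential friction parameter below are assumed examples, not values from the research.

```python
# Doubly constrained gravity model: T_ij = A_i * O_i * B_j * D_j * f(c_ij),
# where f is a friction-factor function of travel cost. All inputs are hypothetical.
import numpy as np

origins = np.array([1200.0, 800.0, 500.0])        # zonal trip productions O_i (assumed)
destinations = np.array([900.0, 900.0, 700.0])    # zonal trip attractions D_j (assumed)
cost = np.array([[5.0, 20.0, 40.0],
                 [20.0, 5.0, 25.0],
                 [40.0, 25.0, 5.0]])              # zone-to-zone travel costs c_ij (assumed)

friction = np.exp(-0.1 * cost)                    # assumed exponential friction factors
A = np.ones(3)
B = np.ones(3)
for _ in range(50):                               # iteratively balance A_i and B_j
    A = 1.0 / (friction @ (B * destinations))
    B = 1.0 / (friction.T @ (A * origins))

trips = (A * origins)[:, None] * (B * destinations)[None, :] * friction
print(np.round(trips, 1))                         # row sums ~ O_i, column sums ~ D_j
```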
