• Title/Summary/Keyword: assessment curve

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense the predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. A partial derivative can be attached to each edge of a computational graph; with these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding, in descending order, is CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and the ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives coding flexibility. With low-level coding such as Theano's, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of execution speed is that there is no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important, and for a user who is still learning deep learning, the availability of sufficient examples and references matters as well.
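
To make the computational-graph description above concrete, here is a minimal reverse-mode automatic differentiation sketch in plain Python. It is not code from Theano, Tensorflow, or CNTK; the Node class and its methods are illustrative inventions.

```python
# Minimal reverse-mode automatic differentiation over a computational graph.
# Each Node stores its value and a list of (parent, local_gradient) edges;
# backward() applies the chain rule from the output node toward every input.

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = list(parents)  # tuples of (parent_node, d_out/d_parent)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self):
        # Naive traversal; a full implementation would process nodes in
        # reverse topological order to handle arbitrary graph shapes.
        self.grad = 1.0
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, local_grad in node.parents:
                parent.grad += node.grad * local_grad  # chain rule
                stack.append(parent)

# f(x, y) = x * y + x  =>  df/dx = y + 1, df/dy = x
x, y = Node(3.0), Node(4.0)
f = x * y + x
f.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

The frameworks compared above build exactly this kind of graph from user code and then generate the gradient computation automatically, on CPU or GPU.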

Toxicity Assessment of Silver Ions Compared to Silver Nanoparticles in Aqueous Solutions and Soils Using Microtox Bioassay (Microtox 생물검정법을 이용한 은 이온과 은 나노입자의 수용액과 토양에서의 독성 비교 평가)

  • Wie, Min-A;Oh, Se-Jin;Kim, Sung-Chul;Kim, Rog-Young;Lee, Sang-Phil;Kim, Won-Il;Yang, Jae E.
    • Korean Journal of Soil Science and Fertilizer / v.45 no.6 / pp.1114-1119 / 2012
  • This study was conducted to assess the microbial toxicity of an ionic silver solution (Ag⁺N) and a silver nanoparticle suspension (Ag⁰NP) based on the Microtox bioassay. In this test, the light inhibition of luminescent bacteria was measured after 15 and 30 min of exposure to aqueous solutions and soils spiked with dilution series of Ag⁺N and Ag⁰NP. The resulting dose-response curves were used to derive the effective concentrations (EC₂₅, EC₅₀, EC₇₅) and effective doses (ED₂₅, ED₅₀, ED₇₅) that caused 25, 50, and 75% inhibition of luminescence. In aqueous solutions, the EC₅₀ value of Ag⁺N after 15 min of exposure was determined to be < 2 mg L⁻¹, remarkably lower than the EC₅₀ value of Ag⁰NP at 251 mg L⁻¹. This revealed that Ag⁺N was more toxic to luminescent bacteria than Ag⁰NP. In soil extracts, however, the ED₅₀ value of Ag⁺N, 196 mg kg⁻¹, was higher than the ED₅₀ value of Ag⁰NP, 104 mg kg⁻¹, indicating less toxicity of Ag⁺N in soils. The reduced toxicity of Ag⁺N in soils can be attributed to partial adsorption of ionic Ag⁺ on soil colloids and humic acid, as well as partial formation of insoluble AgCl with the NaCl of the Microtox diluent. This resulted in a lower concentration of active Ag in soil extracts obtained after 1 hour of shaking with Ag⁺N than in those spiked with Ag⁰NP. With longer exposure time, the EC and ED values of both Ag⁺N and Ag⁰NP decreased, so their toxicity increased. The toxic characteristics of silver nanomaterials thus differed depending on the form of Ag (Ag⁺, Ag⁰), the reaction medium (aqueous solution, soil), and the exposure time.
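
As a worked illustration of how EC values are derived from a dose-response curve, the following sketch fits a log-logistic model to inhibition data and inverts it for EC₂₅/EC₅₀/EC₇₅. The concentrations and inhibition fractions are invented for illustration, not the study's measurements.

```python
# Sketch: deriving EC values from a dose-response curve, Microtox-style.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, ec50, slope):
    """Fraction of luminescence inhibited at concentration c."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

conc = np.array([0.5, 1, 2, 4, 8, 16, 32])                  # mg/L (illustrative)
inhib = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97])

(ec50, slope), _ = curve_fit(log_logistic, conc, inhib, p0=[4.0, 1.0])
print(f"EC50 = {ec50:.2f} mg/L, slope = {slope:.2f}")

# EC25 / EC75 follow from inverting the fitted curve:
for frac in (0.25, 0.75):
    c = ec50 * (frac / (1 - frac)) ** (1 / slope)
    print(f"EC{int(frac * 100)} = {c:.2f} mg/L")
```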

Valuation of the Water Pollution Reduction: An Application of the Imaginary Emission Market Concept (수질오염물질 감소의 편익 추정 -수질총량제하 가상배출권시장 개념의 적용-)

  • Han, Tak-Whan;Lee, Hyo Chang
    • Environmental and Resource Economics Review / v.23 no.4 / pp.719-746 / 2014
  • This study attempts to estimate the value of water quality improvement by deriving the equilibrium price of water pollutant emission permits in an imaginary water pollutant emission trading market. It is reasonable to say that there is already an implicit social agreement on the unit value of water pollutants, since the government set the Total Water Pollutant Loading System for the major river basins, particularly the Nakdong River Basin, as part of the Comprehensive Measures for Water Management. Therefore, we can derive the unit value of water pollutant emission that is already implied in the pollution allowance assigned to each city or county by the Total Water Pollutant Loading System. Once estimated, it will be useful for the economic assessment of water quality related projects. An imaginary water pollutant emission trading system for the Nakdong River Basin, where the Total Water Pollutant Loading System is already in effect, is constructed to estimate the equilibrium price of a water pollutant permit. By estimating the marginal abatement cost curve for each city or county, we can compute the equilibrium price of the permit, which is then regarded as the economic value of the water pollutant. The marginal net benefit function results from the relationship between emissions and benefits, and the equilibrium price of the permit comes from constructing the excess demand function for permits using the total allowable permits of the local government entities. The equilibrium price of the permit is estimated to be 1,409.3 won/kg·BOD. This is within a reasonable range compared to permit prices in foreign examples. This permit price can be applied to calculate the economic value of water quality pollutants, and is also expected to be usable directly in B/C analyses of projects involving water quality change.
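
The permit-price logic can be sketched as follows: each region abates up to the point where its marginal abatement cost (MAC) equals the permit price, and the equilibrium price clears the market against the total allowable load. All numbers below (baselines, MAC slopes, cap) are illustrative placeholders, not the Nakdong River data, and a linear MAC is assumed.

```python
# Sketch: equilibrium permit price from marginal abatement cost curves.
# A region facing price p abates a = p / slope (where MAC(a) = slope * a),
# so its permit demand is baseline - abatement. Equilibrium: demand = cap.
from scipy.optimize import brentq

regions = [  # (baseline emission, MAC slope) - illustrative values only
    (1000.0, 2.0),
    (1500.0, 1.5),
    (800.0, 3.0),
]
cap = 2400.0  # total allowable load under the loading system (illustrative)

def permit_demand(p):
    return sum(max(base - p / slope, 0.0) for base, slope in regions)

def excess_demand(p):
    return permit_demand(p) - cap

price = brentq(excess_demand, 1e-6, 1e6)  # price at which the market clears
print(f"equilibrium permit price ~ {price:.1f} won/kg BOD")
```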

Residue levels of phthalic acid esters (PAEs) and diethylhexyl adipate (DEHA) in various industrial wastewaters (업종별 산업폐수 중 프탈산에스테르와 디에틸헥실아디페이트의 잔류수준)

  • Kim, Hyesung;Park, Sangah;Lee, Hyeri;Lee, Jinseon;Lee, Suyeong;Kim, Jaehoon;Im, Jongkwon;Choi, Jongwoo;Lee, Wonseok
    • Analytical Science and Technology / v.29 no.2 / pp.57-64 / 2016
  • Many phthalic acid esters (PAEs), including DMP, DEP, DBP, BBP, and DEHP, as well as DEHA, are widely used as plasticizers in plastics. An analytical method was developed and used to analyze these compounds at 41 industrial facilities. The coefficient of determination (R²) for each constructed calibration curve was higher than 0.98. The method detection limit (MDL) values were 0.4–0.7 μg/L for PAEs and 0.6 μg/L for DEHA. In addition, the recovery rate was 77.0–92.3%, while the relative standard deviation was in the range of 5.8–10.5%. DMP (n = 3), DEP (n = 2), DBP (n = 2), BBP (n = 2), and DEHA (n = 3) were detected in the influent at detection frequencies of 2.2–11.1%. DEHP was the predominant compound and was detected at > MDL in both the influent (n = 16, 35.6%) and the effluent (n = 4, 10.0%), with a high removal efficiency (92–100%). The highest residue levels in industrial wastewater influent were 137.4 μg/L of DEHP at a plastic products manufacturing facility, 12.5 μg/L of DEHA at a chemical manufacturing facility, and 14.0 μg/L of DEP at an electronics facility. The highest concentration in effluent was 12.5 μg/L of DEHP at a chemical manufacturing facility, which is below the allowable concentration (800 μg/L). Therefore, the levels of PAEs and DEHA discharged into nearby streams should not influence the health of the ecosystem.
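
The calibration-curve R² and MDL figures quoted above come from standard procedures; a sketch of how such values are typically computed follows, with invented numbers and assuming the common EPA-style MDL definition (t-statistic times the standard deviation of replicate low-level spikes).

```python
# Sketch: calibration curve R^2 and method detection limit (MDL).
import numpy as np
from scipy import stats

# Calibration standards (ug/L) vs instrument response (arbitrary units)
conc = np.array([1, 5, 10, 50, 100], dtype=float)
resp = np.array([0.9, 5.2, 9.8, 51.0, 99.5])

slope, intercept, r, _, _ = stats.linregress(conc, resp)
print(f"R^2 = {r**2:.4f}")  # acceptance criterion in the study: > 0.98

# EPA-style MDL: n replicate low-level spikes, MDL = t(n-1, 0.99) * s
replicates = np.array([0.52, 0.48, 0.55, 0.47, 0.50, 0.53, 0.49])  # ug/L
t99 = stats.t.ppf(0.99, len(replicates) - 1)
print(f"MDL = {t99 * replicates.std(ddof=1):.2f} ug/L")
```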

An Evaluation of Direct Payment on Agricultural Income Effect Using Farm Manager Registration Information (농업경영체 등록정보를 활용한 농업직불제 소득효과 분석)

  • Han, Suk-Ho;Chae, Gwang-Seok
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.5 / pp.195-202 / 2016
  • The government has run various forms of direct payment systems, such as the paddy and field direct payments, to ease the instability of farm incomes in the face of market opening and to preserve farm income. Direct payments to the agricultural sector are a key policy instrument that plays an important role in income stabilization. Despite the large amount of spending at the farm level, analyses of the status of direct payments and their policy effects, such as their contribution to income stability, are insufficient. This paper, using the farm unit DB for 2014 and 2015, performed a farm-level analysis of direct payments and derived implications for the performance evaluation system. As a result, the distribution of direct payments was considerably skewed to the left compared to the normal distribution curve. Approximately half of the farms (49.3%) in the 2014 DB received less than 100,000 won per year in direct payments. Larger farms showed significantly greater income effects and income-stabilizing effects, because direct payments contribute to farm income in proportion to area. Among more elderly farmers, direct payments were found to make a high contribution to farm income; however, in small farms of less than 0.5 ha, the contribution of direct payments to farm household income was only 3%, while in large farms of 10 ha or more it was 29.4%. The income of large farms was 10 times that of small farms, while the direct payment entitlements they received were 110 times larger. These results indicate that the direct payment system requires future improvement and modification.

Quantitative Elemental Analysis in Soils by Using Laser Induced Breakdown Spectroscopy (LIBS) (레이저유도붕괴분광법을 활용한 토양의 정량분석)

  • Zhang, Yong-Seon;Lee, Gye-Jun;Lee, Jeong-Tae;Hwang, Seon-Woong;Jin, Yong-Ik;Park, Chan-Won;Moon, Yong-Hee
    • Korean Journal of Soil Science and Fertilizer / v.42 no.5 / pp.399-407 / 2009
  • Laser induced breakdown spectroscopy (LIBS) is a simple analysis method for directly quantifying many kinds of soil micro-elements on site, using a small laser and no pre-treatment, for materials in any state (solid, liquid, or gas). The purposes of this study were to find the optimum conditions for LIBS measurement, including the wavelengths for quantifying soil elements; to relate spectral properties to the concentrations of soil elements using LIBS as a simultaneous, non-destructive quantitative analysis technology that can be applied to the safety assessment of agricultural products and to precision agriculture; and to compare the results with a standardized chemical analysis method. Soil samples classified as fine-silty, mixed, thermic Typic Hapludalf (Memphis series) were collected from grassland and uplands in Tennessee, USA, then crushed and prepared for further analysis or LIBS measurement. The samples were measured using LIBS over the range 200 to 600 nm (0.03 nm interval) with a Nd:YAG laser at 532 nm, a beam energy of 25 mJ per pulse, a pulse width of 5 ns, and a repetition rate of 10 Hz. The optimum wavelengths (λ, nm) of LIBS for estimating soil and plant elements were 308.2 nm for Al, 428.3 nm for Ca, 247.8 nm for T-C, 438.3 nm for Fe, 766.5 nm for K, 285.2 nm for Mg, 330.2 nm for Na, 213.6 nm for P, 180.7 nm for S, 288.2 nm for Si, and 351.9 nm for Ti, respectively. Coefficients of determination (r²) of the calibration curves constructed from standard reference soil samples for each element ranged from 0.863 to 0.977. For comparison with ICP-AES (inductively coupled plasma atomic emission spectroscopy) measurements, measurement errors in terms of relative standard error were calculated. The silicon dioxide (SiO₂) concentrations estimated by the two methods showed good agreement, with a relative standard error of -3.5%; the relative standard errors for the other elements were high. This implies low prediction accuracy, which might be caused by matrix effects such as the particle size and constituents of the soils. The measurement and prediction accuracy of LIBS needs to be enhanced, by improving the pretreatment process, the standard reference soil samples, and the measurement method, before it can serve as a reliable quantification method.
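
A sketch of the calibration-curve step and the relative-standard-error comparison against ICP-AES is given below. The intensities and concentrations are invented; only the 288.2 nm Si line is taken from the list above.

```python
# Sketch: relating LIBS peak intensity at an element's analytical line to
# concentration via a calibration curve, then comparing with ICP-AES.
import numpy as np

# Standard reference soils: known Si concentration (%) vs intensity at 288.2 nm
known = np.array([10.0, 20.0, 30.0, 40.0])
intensity = np.array([1050.0, 2080.0, 3150.0, 4100.0])

coef = np.polyfit(intensity, known, 1)           # linear calibration curve
predict = np.poly1d(coef)

sample_intensity = 2600.0                        # unknown sample (illustrative)
libs_conc = predict(sample_intensity)

icp_conc = 25.8                                  # reference value from ICP-AES
rse = (libs_conc - icp_conc) / icp_conc * 100    # relative standard error (%)
print(f"LIBS: {libs_conc:.1f} %, RSE vs ICP-AES: {rse:+.1f} %")
```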

Developing a Tool to Assess Competency to Consent to Treatment in the Mentally Ill Patient: Reliability and Validity (정신장애인의 치료동의능력 평가 도구 개발 : 신뢰도와 타당화)

  • Seo, Mi-Kyoung;Rhee, MinKyu;Kim, Seung-Hyun;Cho, Sung-Nam;Ko, Young-hun;Lee, Hyuk;Lee, Moon-Soo
    • Korean Journal of Health Psychology / v.14 no.3 / pp.579-596 / 2009
  • This study aimed to develop a Korean tool for assessing competency to consent to psychiatric treatment and to analyze the reliability and validity of this tool. The developed tool's efficiency in determining whether a patient possesses treatment consent competence was also checked using the Receiver Operating Characteristic (ROC) curve and the relevant indices. A total of 193 patients with mental illness, who were hospitalized in a mental hospital or were in a community mental health center, participated in this study. We administered a questionnaire consisting of 14 questions concerning understanding, appreciation, reasoning ability, and expression of a choice. To investigate the validity of the tool, we administered the K-MMSE, an insight test, estimated IQ, and the BPRS. The tool's reliability and usefulness were examined via Cronbach's alpha, ICC, and ROC analysis, and criterion-related validation was performed. The tool showed relatively high internal consistency and agreement between raters (ICC .80–.98, Cronbach's alpha .56–.83), and the confirmatory factor analysis for construct validation showed that the tool was valid. Estimated IQ and MMSE were significantly correlated with understanding, appreciation, expression of a choice, and reasoning ability; the BPRS, however, did not show a significant correlation with any subcompetence. In the ROC analysis, a full-scale cutoff score of 18.5 was suggested. Subscale cutoff scores were 4.5 for understanding, 8.5 for appreciation, 3.5 for reasoning ability, and 0.5 for expression of a choice. These results suggest that this assessment tool is reliable, valid, and diagnostically efficient. Finally, limitations and implications of this study are discussed.
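
A sketch of ROC-based cutoff selection of the kind used for the full-scale score, with simulated scores and criterion labels rather than the study's data, and assuming Youden's J as the cutoff criterion (the abstract does not state which index was used):

```python
# Sketch: choosing a cutoff score for a competency scale with ROC analysis.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# 1 = competent, 0 = not competent (criterion judgment); competent patients
# are assumed to score higher on the 14-item scale.
labels = np.r_[np.ones(100), np.zeros(93)].astype(int)
scores = np.r_[rng.normal(22, 4, 100), rng.normal(15, 4, 93)]

fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr           # J = sensitivity + specificity - 1
best = thresholds[np.argmax(youden_j)]
print(f"AUC = {roc_auc_score(labels, scores):.3f}, suggested cutoff = {best:.1f}")
```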

Estimation of Rice Canopy Height Using Terrestrial Laser Scanner (레이저 스캐너를 이용한 벼 군락 초장 추정)

  • Dongwon Kwon;Wan-Gyu Sang;Sungyul Chang;Woo-jin Im;Hyeok-jin Bak;Ji-hyeon Lee;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.387-397 / 2023
  • Plant height is a growth parameter that provides visible insight into a plant's growth status and is highly correlated with yield, so it is widely used in crop breeding and cultivation research. Growth characteristics of crops such as plant height have generally been investigated directly by humans using a ruler, but with the recent development of sensing and image analysis technology, research is under way to digitize growth measurement so that crop growth can be investigated efficiently. In this study, the canopy height of rice grown at various nitrogen fertilization levels was measured using a laser scanner capable of precise measurement over a wide range, and a comparative analysis was performed against the actual plant height. Comparing the point cloud data collected with the laser scanner to the actual plant height confirmed that the estimated height based on the average height of the top 1% of points showed the highest correlation with the actual plant height (R² = 0.93, RMSE = 2.73). Based on this, a linear regression equation was derived and used to convert the canopy height measured with the laser scanner to actual plant height. The rice growth curves drawn by combining the actual and estimated plant heights collected under various nitrogen fertilization conditions and growth periods show that laser scanner-based canopy height measurement can be effectively utilized for assessing the plant height and growth of rice. In the future, 3D images derived from laser scanners are expected to be applicable to crop biomass estimation, plant shape analysis, and similar tasks, and can serve as a technology for digitizing conventional crop growth assessment methods.
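
A sketch of the top-1% canopy-height estimator and the linear conversion to measured plant height described above, run on synthetic point clouds (all heights and plot values are invented):

```python
# Sketch: canopy height from a point cloud as the mean of the top 1% of
# point heights, then calibrated to ruler-measured plant height by
# linear regression.
import numpy as np

def top1_height(z):
    """Mean height of the top 1% of points in one plot's point cloud."""
    k = max(1, int(len(z) * 0.01))
    return np.sort(z)[-k:].mean()

rng = np.random.default_rng(42)
plots_lidar = [rng.normal(loc=h, scale=5.0, size=20000) for h in (60, 75, 90, 105)]
measured = np.array([63.0, 78.5, 92.0, 108.0])   # ruler-measured height (cm)

estimated = np.array([top1_height(z) for z in plots_lidar])
a, b = np.polyfit(estimated, measured, 1)        # calibration: actual = a*est + b
print(f"actual_height = {a:.2f} * canopy_height + {b:.2f}")
print("converted:", np.round(a * estimated + b, 1))
```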

Improvement in Regional Contractility of Myocardium after CABG (관상동맥 우회로 수술 환자에서 심근의 탄성도 변화)

  • Lee, Byeong-Il;Paeng, Jin-Chul;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine / v.39 no.4 / pp.224-230 / 2005
  • Purpose: The maximal elastance (Emax) of the myocardium has been established as a reliable load-independent contractility index. Recently, we developed a noninvasive method to measure regional contractility using gated myocardial SPECT and arterial tonometry data. In this study, we measured regional Emax (rEmax) in patients who underwent coronary artery bypass graft surgery (CABG) and assessed its relationship with other variables. Materials and Methods: 21 patients (M:F = 17:4, 58±12 y) who underwent CABG were enrolled. ²⁰¹Tl rest / dipyridamole-stress ⁹⁹ᵐTc-sestamibi gated SPECT was performed before and 3 months after CABG. For 15 myocardial regions, regional time-elastance curves were obtained using the pressure data from tonometry and the volume data from gated SPECT. To investigate the coupling with myocardial function, preoperative regional Emax was compared with regional perfusion and systolic thickening. In addition, the correlation between Emax and viability was assessed in dysfunctional segments (thickening <20% before CABG). Viability was defined as improvement of postoperative systolic thickening by more than 10%. Results: Regional Emax increased slightly after CABG, from 2.41±1.64 (pre) to 2.78±1.83 (post) mmHg/ml. Emax had a weak correlation with perfusion and thickening (r=0.35, p<0.001). In regions of preserved perfusion (≥60%), Emax was 2.65±1.67, while it was 1.30±1.24 in segments of decreased perfusion. With regard to thickening, Emax was 3.01±1.92 mmHg/ml for normal regions (thickening ≥40%), 2.40±1.19 mmHg/ml for mildly dysfunctional regions (<40% and ≥20%), and 1.13±0.89 mmHg/ml for severely dysfunctional regions (<20%). Emax improved after CABG in both the viable (from 1.27±1.07 to 1.79±1.48 mmHg/ml) and non-viable segments (from 0.97±0.59 to 1.22±0.71 mmHg/ml), but there was no correlation between Emax and thickening improvements (r=0.007). Conclusions: Preoperative regional Emax was relatively concordant with regional perfusion and systolic thickening on gated myocardial SPECT. In dysfunctional but viable segments, Emax improved after CABG but showed no correlation with thickening improvement. As a load-independent contractility index of dysfunctional myocardial segments, we suggest that regional Emax could be an independent parameter in the assessment of myocardial function.
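
A sketch of how a regional time-elastance curve and its maximum might be computed from paired pressure and volume samples, assuming the common definition E(t) = P(t)/(V(t) − V₀); the frame values and the unstressed volume V₀ below are illustrative, not the paper's method or data.

```python
# Sketch: time-elastance curve from paired pressure (tonometry) and
# volume (gated SPECT) frames over one cardiac cycle; Emax is its maximum.
import numpy as np

pressure = np.array([80, 95, 110, 120, 115, 100, 85], dtype=float)  # mmHg
volume   = np.array([60, 55, 48, 42, 45, 52, 58], dtype=float)      # ml
v0 = 5.0  # unstressed volume (assumed known for the region)

elastance = pressure / (volume - v0)   # time-elastance curve, mmHg/ml
print(f"Emax = {elastance.max():.2f} mmHg/ml at frame {elastance.argmax()}")
```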

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed, and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with the corresponding TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged, and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions; more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. Link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run is 22% using 32 selected links and 31% using 16 selected links. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH), with 37 check points, the US highways (USH), with 50 check points, and the State highways (STH), with 67 check points, is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value, while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area, with 26 check points, the West area, with 36 check points, the East area, with 29 check points, and the South area, with 64 check points, is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As in the screenline and volume range analyses, the %RMSE is inversely related to the average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. Additional independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments appear, overall, to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed value; this estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
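
A toy sketch of gravity-model trip distribution with friction-factor adjustment against an observed trip length frequency, in the spirit of the micro-scale calibration described above; the zones, distance bins, and observed TLF are invented for illustration.

```python
# Sketch: gravity model T_ij = P_i * A_j * F(d_ij) / sum_k A_k * F(d_ik),
# with friction factors F iteratively scaled until the modeled trip length
# frequency (TLF) matches an observed TLF.
import numpy as np

P = np.array([100.0, 200.0, 150.0])                  # zonal productions
A = np.array([120.0, 180.0, 150.0])                  # zonal attractions
dist = np.array([[1, 2, 3], [2, 1, 2], [3, 2, 1]])   # distance bin per i-j pair
obs_tlf = np.array([0.5, 0.3, 0.2])                  # observed trip share per bin

F = np.ones(3)                                       # friction factor per bin
for _ in range(20):
    w = A[None, :] * F[dist - 1]
    T = P[:, None] * w / w.sum(axis=1, keepdims=True)          # gravity model
    model_tlf = np.array([T[dist == b].sum() for b in (1, 2, 3)]) / T.sum()
    F *= obs_tlf / model_tlf                         # adjust until TLFs match

print("trip table:\n", np.round(T, 1))
print("friction factors:", np.round(F, 3))
```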
