• Title/Summary/Keyword: Target accuracy

Search Results: 1,450

Comparisons of Soil Water Retention Characteristics and FDR Sensor Calibration of Field Soils in Korean Orchards (노지 과수원 토성별 수분보유 특성 및 FDR 센서 보정계수 비교)

  • Lee, Kiram;Kim, Jongkyun;Lee, Jaebeom;Kim, Jongyun
    • Journal of Bio-Environment Control
    • /
    • v.31 no.4
    • /
    • pp.401-408
    • /
    • 2022
  • As research on controlled-environment systems based on sensing of the crop growth environment has become important for the sustainable production of horticultural crops and their industrial use, studies on how to properly utilize soil moisture sensors for outdoor cultivation are being actively conducted. This experiment was conducted to suggest a proper method of utilizing the TEROS 12, an FDR (frequency domain reflectometry) sensor frequently used in industry and research, for orchard soils from three regions in Korea. We collected soil from each orchard where fruit trees were grown, investigated the soil characteristics and soil water retention curves, and compared TEROS 12 calibration equations relating the sensor output to the corresponding soil volumetric water content through linear and cubic regressions for each soil sample. Estimates from the calibration equation provided by the manufacturer were also compared. The soils collected from the three orchards showed different characteristics and different volumetric water contents at each soil water retention level. The cubic calibration equation for the TEROS 12 sensor showed the highest coefficient of determination (above 0.95) and the lowest RMSE for all soil samples. When volumetric water contents were estimated from the TEROS 12 output using the manufacturer's calibration equation, they were lower than the actual volumetric water contents, with differences of up to 0.09-0.17 m³·m⁻³ depending on the soil sample, indicating that an appropriate calibration for each soil should precede FDR sensor use. There was also a difference across soil samples in the range of volumetric water content corresponding to the soil water retention levels, suggesting that soil water retention information is required to properly interpret the volumetric water content of a soil. Moreover, soil with a high sand content had a relatively narrow range of volumetric water contents for irrigation, reducing the effective accuracy of an FDR sensor measurement. In conclusion, analyzing the water retention characteristics of the target soil and performing soil-specific calibration are necessary to properly quantify soil water status and determine an adequate irrigation point with an FDR sensor.
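
A minimal sketch of the soil-specific calibration step described in this abstract: fitting linear and cubic equations from raw sensor output to volumetric water content and comparing R² and RMSE. The paired data below are hypothetical, not the study's measurements.

```python
# Minimal sketch: soil-specific calibration of an FDR sensor, assuming
# paired (raw sensor output, measured VWC) samples for one orchard soil.
import numpy as np

raw = np.array([1800, 2000, 2200, 2400, 2600, 2800])   # hypothetical counts
vwc = np.array([0.05, 0.12, 0.20, 0.28, 0.35, 0.42])   # measured VWC (m^3/m^3)

# Fit linear and cubic calibration equations, as compared in the study.
lin_coef = np.polyfit(raw, vwc, deg=1)
cub_coef = np.polyfit(raw, vwc, deg=3)

for name, coef in [("linear", lin_coef), ("cubic", cub_coef)]:
    pred = np.polyval(coef, raw)
    rmse = np.sqrt(np.mean((pred - vwc) ** 2))
    ss_res = np.sum((vwc - pred) ** 2)
    ss_tot = np.sum((vwc - vwc.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot           # coefficient of determination
    print(f"{name}: R^2 = {r2:.3f}, RMSE = {rmse:.4f} m^3/m^3")
```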

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah;Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.251-273
    • /
    • 2022
  • Research on applying deep learning to text analysis has continued steadily. In particular, studies have been actively conducted to understand the meaning of words and to perform tasks such as summarization and sentiment classification through pre-trained language models trained on large datasets. However, existing pre-trained language models show limitations in that they do not understand specific domains well. Therefore, in recent years, research has shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models allow the model to better understand the knowledge of a particular domain and show performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive, since corpus data for the target domain must be acquired, and many cases have been reported in which the performance improvement after further pre-training is insignificant in some domains. It is therefore difficult to commit to developing a domain-specific pre-trained language model when it is not clear whether performance will improve substantially. In this paper, we present a way to proactively estimate the expected performance improvement from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved by further pre-training in each domain. We also developed and present new indicators to estimate the specificity of a domain based on the normalized frequency of the keywords used in that domain. Finally, we conducted classification using a generic pre-trained language model and domain-specific pre-trained language models for the three domains. As a result, we confirmed that the higher the domain specificity index, the greater the performance improvement obtained through further pre-training.
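
The abstract's specificity indicator is based on normalized keyword frequencies, but its exact formula is not given here. The following is a hedged, illustrative variant that scores a domain by how much more frequent its top keywords are in the domain corpus than in a general corpus.

```python
# Illustrative domain-specificity index from normalized keyword frequencies;
# this ratio-based formulation is an assumption, not the paper's exact metric.
from collections import Counter

def norm_freq(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def specificity_index(domain_tokens, general_tokens, top_k=100):
    dom = norm_freq(domain_tokens)
    gen = norm_freq(general_tokens)
    # Average, over the domain's top keywords, of how much more frequent
    # each keyword is in the domain corpus than in the general corpus.
    keywords = sorted(dom, key=dom.get, reverse=True)[:top_k]
    eps = 1e-9  # avoid division by zero for unseen keywords
    return sum(dom[w] / (gen.get(w, 0.0) + eps) for w in keywords) / len(keywords)

domain = "claim tort plaintiff court claim verdict plaintiff".split()
general = "the court said people claim many things every day".split()
print(specificity_index(domain, general, top_k=3))
```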

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.4
    • /
    • pp.281-298
    • /
    • 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration, and there is an ongoing demand for real-time information processing to accurately determine positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on real-time analysis of images of a simulated lunar base construction site, aimed at automatically quantifying the spatial information of key objects. The study involved transitioning from an existing region-based object recognition algorithm to a bounding-box-based algorithm, enhancing both object recognition accuracy and inference speed. To facilitate large-scale, data-based object matching training, the Batch Hard Triplet Mining technique was introduced, and both the training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, accompanied by visualization software for automatically matching identical objects within input images. Using simulated satellite-view video data to train the objects and video data of moving objects for inference, training and inference for identical-object matching were successfully executed. The outcomes of this research suggest the feasibility of building 3D spatial information from continuously captured video data of mobile platforms and utilizing it for positioning objects within regions of interest. These findings are expected to contribute to an integrated, automated on-site system for video-based construction monitoring and control of significant target objects within future lunar base construction sites.
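
A minimal sketch of the Batch Hard Triplet Mining loss mentioned above, under its usual formulation (hardest positive and hardest negative per anchor within a batch); the embedding size and labels below are illustrative.

```python
# Batch-hard triplet loss: for each anchor, take the farthest same-label
# embedding (hardest positive) and the closest different-label embedding
# (hardest negative) within the batch. PyTorch is an assumed framework choice.
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    dist = torch.cdist(embeddings, embeddings, p=2)     # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-label mask
    eye = torch.eye(len(labels), dtype=torch.bool)      # exclude self-pairs
    pos = dist.masked_fill(~same | eye, float("-inf")).max(dim=1).values
    neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return torch.relu(pos - neg + margin).mean()

emb = torch.randn(8, 128)                    # 8 image crops, 128-d embeddings
lbl = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]) # object identities
print(batch_hard_triplet_loss(emb, lbl))
```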

One-Dimensional Consolidation Simulation of Kaolinite using Geotechnical Online Testing Method (온라인 실험을 이용한 카올리나이트 점토의 일차원 압밀 시뮬레이션)

  • Kwon, Youngcheul
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.4C
    • /
    • pp.247-254
    • /
    • 2006
  • The online testing method is a numerical experiment method that uses experimental information directly in a numerical analysis. Its advantage is that the analysis can be conducted without an idealized mechanical model, because the mechanical properties used in the numerical analysis are updated from an element test in real time. The online testing method has mainly been used in geotechnical earthquake engineering, where the principal target material is sand. A version applicable to consolidation problems has recently been developed, and laboratory and field verifications have been attempted. Related research has so far mainly used a method that updates the average reaction for the numerical analysis by positioning an element test at the center of the consolidation layer, but it has been pointed out that the accuracy of the analysis can be impaired as the consolidation layer becomes thicker. To clarify the effectiveness and the applicable analysis scope of the online testing method for consolidation problems, results must be reviewed under experimental conditions that completely exclude this factor. This research examined how well the online consolidation test reproduces the consolidation settlement and the dissipation of excess pore water pressure of a clay specimen by comparing an online consolidation test with a separated-type consolidation test carried out under the same conditions. As a result, the online consolidation test reproduced the change of compressibility with effective stress of the clay without significant contradiction. The dissipation of excess pore water pressure in the online consolidation test, however, was slightly faster. In conclusion, the experimental procedure should be improved so that hydraulic conductivity can be updated in real time, in order to predict the dissipation of excess pore water pressure more precisely. Further research and improvement are also needed regarding the consolidation settlement after the dissipation of excess pore water pressure ends.
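
The online method feeds element-test results into the numerical analysis as it runs. As a hedged illustration only, the sketch below solves Terzaghi-type 1D consolidation (excess pore pressure dissipation) by explicit finite differences, with a commented hook where the coefficient of consolidation would be refreshed in real time; all parameter values and the `update_from_element_test` name are assumptions, not the paper's setup.

```python
# Terzaghi-type 1D consolidation by explicit finite differences:
# du/dt = cv * d^2u/dz^2, drained top, impervious base.
import numpy as np

H, n = 1.0, 21                 # layer thickness (m), grid points (assumed)
dz = H / (n - 1)
cv = 1e-7                      # coefficient of consolidation (m^2/s), assumed
dt = 0.4 * dz**2 / cv          # explicit stability requires dt <= 0.5*dz^2/cv
steps = 2000
u = np.full(n, 100.0)          # initial excess pore pressure (kPa)
u[0] = 0.0                     # drained top boundary

for _ in range(steps):
    # cv = update_from_element_test()  # hypothetical real-time update hook
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + cv * dt / dz**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_new[-1] = u_new[-2]      # zero-gradient at the impervious base
    u = u_new

print(f"avg excess pore pressure after {steps*dt/86400:.0f} days: {u.mean():.2f} kPa")
```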

Verification of Multi-point Displacement Response Measurement Algorithm Using Image Processing Technique (영상처리기법을 이용한 다중 변위응답 측정 알고리즘의 검증)

  • Kim, Sung-Wan;Kim, Nam-Sik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.3A
    • /
    • pp.297-307
    • /
    • 2010
  • Recently, maintenance engineering and technology for civil and building structures have begun to draw considerable attention, and the number of structures that need to be evaluated for structural safety due to deterioration and performance degradation is rapidly increasing. When stiffness decreases because of deterioration and member cracks, the dynamic characteristics of a structure change, and it is important to correctly evaluate the damaged areas and the extent of damage by analyzing the dynamic characteristics from the actual behavior of the structure. In general, the typical instruments used for structural monitoring are dynamic instruments. Existing dynamic instruments cannot easily obtain reliable data when the cable connecting the sensors to the device is long, and they are uneconomical because each sensor requires a one-to-one connection to the instrument. Therefore, a method for measuring vibration at long range without attaching sensors is required. Representative non-contact methods for measuring structural vibration are the laser Doppler effect, GPS-based methods, and image processing techniques. The laser Doppler method shows relatively high accuracy but is uneconomical, while GPS-based methods require expensive equipment, carry inherent signal error, and have a limited sampling rate. The method using image signals, by contrast, is simple and economical, and is well suited to obtaining the vibration and dynamic characteristics of inaccessible structures. Camera image signals have recently been used by many researchers in place of sensors, but the existing approach, which records a single point on a target attached to a structure and measures its vibration by image processing, is relatively limited in what it can measure. Therefore, this study conducted a shaking table test and a field load test to verify the validity of a method that can measure multi-point displacement responses of structures using an image processing technique.
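
A hedged sketch of the multi-point measurement idea: OpenCV template matching tracks several target patches across frames and reports pixel displacements relative to the first frame. The synthetic frames are illustrative, not the paper's algorithm; a pixel-to-mm scale from a known target dimension would convert these to physical displacement responses.

```python
# Track multiple target patches across video frames by template matching
# and return per-frame pixel displacements relative to the first frame.
import cv2
import numpy as np

def track_targets(frames, templates):
    origins = []
    for t in templates:
        res = cv2.matchTemplate(frames[0], t, cv2.TM_CCOEFF_NORMED)
        origins.append(cv2.minMaxLoc(res)[3])          # (x, y) of best match
    tracks = []
    for f in frames:
        row = []
        for t, o in zip(templates, origins):
            res = cv2.matchTemplate(f, t, cv2.TM_CCOEFF_NORMED)
            x, y = cv2.minMaxLoc(res)[3]
            row.append((x - o[0], y - o[1]))           # displacement in pixels
        tracks.append(row)
    return np.array(tracks)                            # (n_frames, n_targets, 2)

# Synthetic demo: a bright square target shifted 5 px in x and 3 px in y.
frame = np.zeros((200, 200), np.uint8)
cv2.rectangle(frame, (50, 50), (70, 70), 255, -1)
frame2 = np.roll(frame, (3, 5), axis=(0, 1))
tpl = frame[45:75, 45:75].copy()
print(track_targets([frame, frame2], [tpl]))
```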

A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae;Kim, Ho-Sung;Jung, Woon-Kwan
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.17-23
    • /
    • 2008
  • Purpose: Because bone density results vary with the testing environment, the equipment, and the precision and accuracy of the tester, quality control must be performed systematically. Equipment failures caused by overload from aging equipment and increasing patient numbers occur frequently, and the resulting replacement of equipment and additional purchases of new bone density equipment create a compatibility problem in tracking patients. This study examined whether the clinical changes in a patient's bone density can be accurately and precisely reflected when new equipment is used interchangeably with existing equipment after replacement and expansion. Materials and Methods: Two GE Lunar Prodigy Advance units (P1 and P2) and a HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each device scanned the phantom 20 times to acquire precision data (Group 1). Tester precision was measured by scanning the same patient twice, 15 subjects on each unit, among 120 women (average age 48.78, range 20-60 years) (Group 2). In addition, tester precision and cross-calibration data were obtained by scanning the HSP 20 times on each unit, based on the daily morning quality control data obtained with the aluminum spine phantom (ASP) (Group 3). Finally, each of the 120 women was scanned once on each unit alternately to obtain tester precision and cross-calibration data (Group 4). Results: According to the daily QC data (0.996 g/cm², %CV 0.08), the equipment was stable. The mean±SD and %CV values in Group 1 were P1: 1.064±0.002 g/cm², %CV=0.190 and P2: 1.061±0.003 g/cm², %CV=0.192. In Group 2 they were P1: 1.187±0.002 g/cm², %CV=0.164 and P2: 1.198±0.002 g/cm², %CV=0.163. In Group 3 the average error±2SD and %CV were P1 (spine: 0.001±0.03 g/cm², %CV=0.94; femur: 0.001±0.019 g/cm², %CV=0.96) and P2 (spine: 0.002±0.018 g/cm², %CV=0.55; femur: 0.001±0.013 g/cm², %CV=0.48). In Group 4 the average error±2SD, %CV, and r values were spine: 0.006±0.024 g/cm², %CV=0.86, r=0.995 and femur: 0±0.014 g/cm², %CV=0.54, r=0.998. Conclusion: Both the Lunar ASP %CV and the HOLOGIC spine phantom results fall within the ±2% error range defined by the ISCD, and the BMD measurements kept relatively constant values, showing excellent repeatability. The phantom has homogeneous characteristics, however, and is limited in reflecting clinical variation such as differences in patient body weight or body fat; quality control using a phantom is therefore useful mainly for detecting mis-calibration of the equipment. Comparing Groups 3 and 4 on a Bland-Altman plot, the values measured twice on one unit and those cross-measured on the two units all fell within 2SD, and r values of 0.99 or higher in linear regression analysis indicated high precision and correlation. Therefore, the two cross-calibrated units did not affect patient follow-up. Regular testing of equipment and tester capability, followed by appropriate calibration, is required to produce reliable BMD values.
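
A small sketch of the precision statistics used in this abstract: %CV for repeated scans and a Bland-Altman style mean difference with ±2SD limits for cross-calibration between units. The numeric inputs below are hypothetical.

```python
# Precision (%CV) and Bland-Altman limits for paired BMD measurements.
import numpy as np

def percent_cv(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

def bland_altman(a, b):
    """Mean difference and mean ± 2SD limits between paired measurements."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return d.mean(), d.mean() - 2 * d.std(ddof=1), d.mean() + 2 * d.std(ddof=1)

# Hypothetical repeated phantom scans (g/cm^2) on one unit:
scans = [1.064, 1.062, 1.066, 1.063, 1.065]
print(f"%CV = {percent_cv(scans):.3f}")

# Hypothetical paired spine BMD from units P1 and P2:
p1 = [0.98, 1.10, 1.21, 0.95, 1.05]
p2 = [0.97, 1.11, 1.20, 0.96, 1.04]
print("bias, lower limit, upper limit =", bland_altman(p1, p2))
```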


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, drew wide attention. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to obtain good performance. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, a traditional artificial neural network. Because not all network design alternatives can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate how well the models classify the class of interest, instead of overall accuracy. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features, but in business data the proximity of fields usually carries no meaning because the fields are independent. In this experiment, we therefore set the CNN filter size to the number of fields, so that the network learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models, which is interesting because the CNN performed well on a binary classification problem to which it has rarely been applied, as well as in the fields where its effectiveness has been proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
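
A hedged sketch of one of the compared conditions: an MLP with two hidden layers and dropout p = 0.5, scored by F1. Synthetic tabular data stands in for the Portuguese bank telemarketing set, and PyTorch is an assumed framework choice, not the paper's stated toolkit.

```python
# MLP with two hidden layers and dropout 0.5 on a synthetic binary task,
# evaluated with the F1 score as in the experiment described above.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

torch.manual_seed(0)
X = torch.randn(1000, 16)                     # 16 tabular input fields (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()     # synthetic binary target

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(32, 32), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

model.eval()                                   # disables dropout at inference
with torch.no_grad():
    pred = (torch.sigmoid(model(X).squeeze(1)) > 0.5).float()
print("F1 =", f1_score(y.numpy(), pred.numpy()))
```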

Comparison and evaluation between 3D-bolus and step-bolus, the assistive radiotherapy devices for the patients who had undergone modified radical mastectomy surgery (변형 근치적 유방절제술 시행 환자의 방사선 치료 시 3D-bolus와 step-bolus의 비교 평가)

  • Jang, Wonseok;Park, Kwangwoo;Shin, Dongbong;Kim, Jongdae;Kim, Seijoon;Ha, Jinsook;Jeon, Mijin;Cho, Yoonjin;Jung, Inho
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.1
    • /
    • pp.7-16
    • /
    • 2016
  • Purpose: This study compared and evaluated the efficiency of two devices, a 3D-bolus and a step-bolus, used for electron beam treatment of the chest wall in patients who had undergone modified radical mastectomy (MRM). Materials and Methods: A reverse hockey stick plan using photon and electron beams was prepared for six breast cancer patients, who were the subjects of this study. The prescribed electron beam dose to the anterior chest wall was 180 cGy per treatment, and a 3D-bolus produced with a 3D printer (CubeX, 3D Systems, USA) and a self-made conventional step-bolus were used respectively. The surface dose under the 3D-bolus and the step-bolus was measured at five points (isocenter, lateral, medial, superior, and inferior) using GAFCHROMIC EBT3 film (International Specialty Products, USA), and the measured doses at the five points were compared and analyzed. Treatment plans incorporating the 3D-bolus and the step-bolus were also devised and their results compared. Results: The average surface dose was 179.17 cGy with the 3D-bolus and 172.02 cGy with the step-bolus. The average error against the prescribed dose of 180 cGy was -0.47% with the 3D-bolus and -4.43% with the step-bolus. The maximum error at the isocenter was 2.69% with the 3D-bolus and 5.54% with the step-bolus. The maximum discrepancy in treatment accuracy was about 6% with the step-bolus and about 3% with the 3D-bolus. The difference in average target dose on the chest wall between the two treatment plans was insignificant, at only 0.3%. However, the average dose with the 3D-bolus plan was lower by 11% for the lung and 8% for the heart compared with the step-bolus plan. Conclusion: This research confirmed that dose uniformity is better with the 3D-bolus than with the step-bolus, because the 3D-bolus, produced to match the skin surface contour of the chest wall, fits the patient's skin more closely and guarantees the chest wall thickness more accurately. The 3D-bolus device is considered clinically valuable because it reduces the dose to adjacent organs and protects normal tissue without reducing the dose to the chest wall.
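
A small worked check of the error-rate arithmetic reported above: percent error of the measured average surface doses against the 180 cGy prescription, reproducing the -0.47% and -4.43% figures.

```python
# Percent error of measured average surface dose vs. the prescribed dose.
prescribed = 180.0  # cGy per treatment to the anterior chest wall

for name, measured in [("3D-bolus", 179.17), ("step-bolus", 172.02)]:
    err = 100.0 * (measured - prescribed) / prescribed
    print(f"{name}: average surface dose {measured} cGy, error {err:+.2f}%")
```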


Computed Tomography-guided Localization with a Hook-wire Followed by Video-assisted Thoracic Surgery for Small Intrapulmonary and Ground Glass Opacity Lesions (폐실질 내에 위치한 소결질 및 간유리 병변에서 흉부컴퓨터단층촬영 유도하에 Hook Wire를 이용한 위치 선정 후 시행한 흉강경 폐절제술의 유용성)

  • Kang, Pil-Je;Kim, Yong-Hee;Park, Seung-Il;Kim, Dong-Kwan;Song, Jae-Woo;Do, Kyoung-Hyun
    • Journal of Chest Surgery
    • /
    • v.42 no.5
    • /
    • pp.624-629
    • /
    • 2009
  • Background: Making a histologic diagnosis of small pulmonary nodules and ground glass opacity (GGO) lesions is difficult. CT-guided percutaneous needle biopsies often fail to provide enough specimen for diagnosis, and video-assisted thoracoscopic surgery (VATS) can be inefficient for non-palpable lesions. Preoperative localization of small intrapulmonary lesions provides a more obvious target and facilitates intraoperative resection. We evaluated the efficacy of CT-guided localization with a hook wire, followed by VATS, for the histologic diagnosis of small intrapulmonary nodules and GGO lesions. Material and Method: Eighteen patients (13 males) were included in this study from August 2005 to March 2008. Eighteen intrapulmonary lesions underwent preoperative localization with a CT-guided hook wire system prior to VATS resection. Clinical data such as the accuracy of localization, the rate of conversion to thoracotomy, the operation time, postoperative complications, and the histology of the pulmonary lesions were collected retrospectively. Result: Eighteen VATS resections were performed in 18 patients. Preoperative CT-guided localization with a hook wire was successful in all patients; dislodgement of the hook wire was observed in one case. There was no conversion to thoracotomy. The median diameter of the lesions was 8 mm (range: 3~15 mm), and the median depth of the lesions from the pleural surface was 5.5 mm (range: 1~30 mm). The median interval between preoperative CT-guided localization with the hook wire and VATS was 34.5 min (range: 10~226 min), and the median operative time was 43.5 min (range: 26~83 min). In two patients, clinically insignificant pneumothorax developed after CT-guided localization with the hook wire; there were no other complications. Histological examination confirmed 8 primary lung cancers, 3 metastases, 3 cases of inflammation, 2 intrapulmonary lymph nodes, and 2 other benign lesions. Conclusion: CT-guided localization with a hook wire followed by VATS for small intrapulmonary nodules and GGO lesions provided a low conversion-to-thoracotomy rate, a short operation time, and few localization-related or postoperative complications. The procedure was efficient for confirming intrapulmonary lesions and GGO lesions.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (O-D) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network and will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the O-D survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed, and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with comparable TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available O-D survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the O-D survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database; given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available; for this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Selected Link based (SELINK) analyses are then used to adjust the productions and attractions and, if necessary, recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume, and this factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions, and, more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis, using four screenlines with 28 check points, is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals, giving LV/GC ratios of 0.958 with 32 selected links and 1.001 with 16 selected links. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and final GM run is 22% with 32 selected links and 31% with 16 selected links; these results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the 154 available classification count check points. The truck traffic crossing the Interstate highways (ISH, 37 check points), the US highways (USH, 50 check points), and the State highways (STH, 67 check points) is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation; however, the %RMSE is smallest for the ISH and largest for the STH. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area (26 check points), the West area (36 check points), the East area (29 check points), and the South area (64 check points) is compared to the actual ground count totals. The four areas show similar results, with no specific patterns in the LV/GC ratio by area. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively; as for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors; overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that independent variables beyond population alone are needed for the development of the heavy truck trip generation model; additional variables such as zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. The revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of these factors in the 3.0 range. No obvious explanation for the frequency distribution could be found, but the revised SELINK adjustments appear overall to be reasonable. The heavy truck VMT analysis compares the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed figure of 3.642 billion VMT; the GM estimate is 18.3% lower. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types I-I, I-E (E-I), and E-E; in contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that 76.5% of truck traffic traverses the freeways and expressways.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips; this implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
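
A hedged sketch of the SELINK link adjustment step described above: the adjustment factor is the ground count divided by the assigned volume on the selected link, applied to every zone pair whose assigned path uses that link. The trip table and predicate below are illustrative stand-ins, not the study's data.

```python
# SELINK-style adjustment: scale trips for O-D pairs whose assigned path
# traverses the selected link by ground_count / assigned_volume.
def selink_factor(ground_count, assigned_volume):
    return ground_count / assigned_volume

def adjust_trip_table(trips, uses_link, factor):
    """trips: {(origin, dest): trucks}; uses_link: predicate saying whether
    an O-D pair's assigned path traverses the selected link."""
    return {od: t * factor if uses_link(od) else t for od, t in trips.items()}

trips = {("A", "B"): 120.0, ("A", "C"): 80.0, ("B", "C"): 60.0}
factor = selink_factor(ground_count=150.0, assigned_volume=180.0)  # ~0.833
adjusted = adjust_trip_table(trips, lambda od: od != ("B", "C"), factor)
print(factor, adjusted)
```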
