• Title/Summary/Keyword: R&D Project Management


Entrepreneurial Characteristics Affecting on Angel Investors's Decision making (엔젤투자자의 투자의사결정에 영향을 미치는 기업가특성에 관한 연구)

  • Yun, Young Sook; Hwangbo, Yun
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.9 no.3 / pp.47-61 / 2014
  • Many angel investors hesitate to invest in early-stage companies. Most early-stage companies have no sales and are still at the R&D stage or only beginning to approach the market, so they cannot be evaluated quantitatively. Many angel investors therefore rely on the CEO's characteristics to evaluate a company and make an investment decision. The purpose of this study is to identify the entrepreneurial characteristics of CEOs that affect angel investors' investment decisions and to determine their relative importance. To identify these characteristics, a Delphi survey was conducted with 20 experts, including angel investment club members, venture capitalists, CEOs, and officers. Three survey rounds yielded ten elements of entrepreneurial characteristics relevant to investment decision making: reliability, risk sensitivity, passion, perseverance, integrity, leadership, startup experience, organizational management skills, innovation, and social networking. In addition, the study derived the relative importance of these elements using the Analytic Hierarchy Process (AHP), maintaining logical consistency through pairwise comparison of each element (a minimal weighting sketch follows this entry). The resulting ranking is reliability (18.1%), integrity (15.9%), leadership (11.7%), organizational management skills (10.0%), social networking (9.5%), passion (9.1%), perseverance (8.4%), innovation (8.1%), startup experience (5.3%), and risk sensitivity (3.9%). The significance of this study is that it reduces, to some extent, the uncertainty facing angel investors and supports their decision making by identifying and ranking the entrepreneurial characteristics that most strongly influence investment decisions in early-stage companies.

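The AHP weighting mentioned above comes down to extracting the principal eigenvector of a pairwise comparison matrix and checking its consistency ratio (CR < 0.1 is the usual acceptance threshold). The sketch below illustrates this with a hypothetical 3x3 judgment matrix covering only three of the ten criteria; it is not the paper's survey data.

```python
# A minimal AHP sketch: priority weights from a pairwise comparison matrix
# plus Saaty's consistency ratio. The judgments below are hypothetical.
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Return priority weights and the consistency ratio (CR)."""
    n = pairwise.shape[0]
    # Principal eigenvector gives the priority weights.
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()
    # Consistency index against Saaty's random index (RI) for n up to 9.
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    cr = ci / ri if ri else 0.0
    return weights, cr

# Hypothetical judgments for reliability, integrity, leadership.
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```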

Comparative Analysis of Construction Productivity for Modernized Korean Housing (Hanok) (보급형 신한옥 개발을 위한 건설 생산성 분석)

  • Kim, Min; Kim, Yesol; Lee, YunSub; Jung, Youngsoo
    • Korean Journal of Construction Engineering and Management / v.14 no.3 / pp.107-114 / 2013
  • Interest in traditional Korean housing (Hanok) has greatly increased in the Korean housing market, but wide dissemination is difficult because of high construction costs. To facilitate Hanok construction effectively, the Korean government initiated a project to develop a new style of Korean housing that meets the requirements of low cost and a modernized lifestyle. Building cost is mainly determined by materials and construction methods, and Hanok involves several special commodities that significantly affect cost. To reduce costs effectively, well-organized cost planning is essential, and improving productivity through new materials and methods can also lower costs. In this context, this paper compares and analyzes two types of Korean housing: a modernized Korean house built with new materials and methods, and a traditional Korean house built purely with traditional methods. Productivity was compared for five major commodities between the two models, and the cost reduction achieved by the new model was analyzed from these comparative data. The results confirm that using new materials and methods can substantially increase productivity and reduce cost. In particular, the cost of Roofing was influenced more by the new materials, while Wood and Finishes were influenced more by the new construction methods. The construction costs of the Foundation (Earthwork, Concrete, Masonry) and Openings were influenced by both factors, the change of materials and the change of methods.

A Study on the Necessity of Verification about depot level maintenance plan through the Weapons System cases analysis (무기체계 사례 분석을 통한 창정비개발계획안 검증 필요성 연구)

  • Ahn, Jung-Jun; Kim, Su-Dong
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.2 / pp.76-82 / 2019
  • This study sought a way to remove the risk that arises from separating weapon system acquisition from operation and maintenance, from the viewpoint of the logistics commander who is responsible for stable operation and maintenance after a weapon system is acquired. At the system development stage, an unverified depot-level (overhaul) maintenance development plan may require additional manpower and cost after development and may even lower the reliability of the military. First, the research and development agency should therefore prepare the overhaul development plan at the system development stage, and the plan should be verified through evaluation and verification testing. Second, during research and development, institutional supplementation is needed so that the human and material resources required for the overhaul development plan can be calculated. Third, it should be possible to analyze the appropriate operation and maintenance plan and the cost of the overhaul plan at the pre-investigation stage. Fourth, the basis for developing the overhaul concept and overhaul factors should be included in the need and need-determination documents. Lastly, for weapon systems procured in small quantities and of high value, project management should specify these requirements at each acquisition stage of the weapon system in order to implement Article 28, Clauses 3 and 4 of the Defense Business Act.

Strategic Issues in Managing Complexity in NPD Projects (신제품개발 과정의 복잡성에 대한 주요 연구과제)

  • Kim, Jongbae
    • Asia Marketing Journal / v.7 no.3 / pp.53-76 / 2005
  • With rapid technological and market change, new product development (NPD) complexity is a significant issue that organizations continually face in their development projects. Numerous factors cause development projects to become increasingly costly and complex, and a product is more likely to be successfully developed and marketed when the complexity inherent in NPD projects is clearly understood and carefully managed. Based on previous studies, this study examines the nature and importance of complexity in developing new products and then identifies several issues in managing it, including the definition of complexity, the consequences of complexity, and methods for managing complexity in NPD projects. To achieve high performance in managing complexity in development projects, questions such as the following need to be addressed. A. Complexity inherent in NPD projects is multi-faceted and multidimensional. What factors need to be considered in defining and/or measuring complexity in a development project? For example, is it sufficient if complexity is defined only from a technological perspective, or is it more desirable to consider the entire array of complexity sources that NPD teams with different functions (e.g., marketing, R&D, manufacturing) face in the development process? Moreover, is it sufficient if complexity is measured only once during a development project, or is it more effective and useful to trace complexity changes over the entire development life cycle? B. Complexity inherent in a project can have negative as well as positive influences on NPD performance. Which complexity impacts are usually considered negative and which positive? Project complexity can also affect the entire organization, and complexity is better assessed from a broader and longer-term perspective. In what ways can the long-term impact of complexity on an organization be assessed and managed? C. Several approaches for managing complexity are derived from previous studies. What are the weaknesses and strengths of each approach? Is there a desirable hierarchy or order among these approaches when more than one is used? Are there differences in outcomes according to industry and product type (incremental or radical)? Answers to these and other questions can help organizations effectively manage the complexity inherent in most development projects. Complexity is worthy of additional attention from researchers and practitioners alike. Large-scale empirical investigations, jointly conducted by researchers and practitioners, will help generate useful insights into understanding and managing complexity. Organizations that can accurately identify, assess, and manage the complexity inherent in their projects are likely to gain important competitive advantages.


Scheme on Environmental Risk Assessment and Management for Carbon Dioxide Sequestration in Sub-seabed Geological Structures in Korea (이산화탄소 해양 지중저장사업의 환경위해성평가관리 방안)

  • Choi, Tae-Seob; Lee, Jung-Suk; Lee, Kyu-Tae; Park, Young-Gyu; Hwang, Jin-Hwan; Kang, Seong-Gil
    • Journal of the Korean Society for Marine Environment & Energy / v.12 no.4 / pp.307-319 / 2009
  • Carbon dioxide capture and storage (CCS) technology is regarded as one of the most feasible and practical options for reducing carbon dioxide (CO₂) emissions and thereby mitigating climate change. The Korean government has been running a 10-year R&D project on CO₂ storage in sub-seabed geological structures, including gas fields and deep saline aquifers, since 2005. The project covers the initial survey of suitable geological storage sites, monitoring of the behavior of the stored CO₂, basic design of the CO₂ transport and storage process, and risk assessment and management related to CO₂ leakage from engineered and geological processes. Leakage of CO₂ into the marine environment can change seawater chemistry, including pH and carbonate composition, and can adversely affect the diverse living organisms in marine ecosystems. Recently, the IMO (International Maritime Organization) developed a risk assessment and management framework for CO₂ sequestration in sub-seabed geological structures (CS-SSGS) and recognized such sequestration as a waste management option for mitigating greenhouse gas emissions. This framework aims to provide generic guidance to the Contracting Parties to the London Convention and Protocol, in order to characterize the risks to the marine environment from CS-SSGS on a site-specific basis and to collect the information needed to develop a management strategy that addresses uncertainties and any residual risks. An environmental risk assessment (ERA) plan for CO₂ storage should include site selection and characterization, exposure assessment with probable leak scenarios, risk assessment of direct and indirect impacts on living organisms, and a risk management strategy. Domestic trials of CO₂ capture and sequestration in marine geologic formations should likewise be accompanied by risk management based on ERA approaches consistent with the IMO framework. The risk assessment procedure for CO₂ marine storage should contain the following components: 1) prediction of leakage probabilities with reliable leakage scenarios for both the engineered and geological parts, 2) understanding of the physico-chemical fate of CO₂ in the marine environment, especially at candidate sites, 3) exposure assessment methods for various receptors in marine environments, 4) a database of the toxic effects of CO₂ on ecologically and economically important species, and 5) surveillance procedures for environmental changes with adequate monitoring techniques.


Development of the Deterioration Models for the Port Structures by the Multiple Regression Analysis and Markov Chain (다중 회귀분석 및 Markov Chain을 통한 항만시설물의 상태열화모델 개발)

  • Cha, Kyunghwa; Kim, Sung-Wook; Kim, Jung Hoon; Park, Mi-Yun; Kong, Jung Sik
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.3 / pp.229-239 / 2015
  • With the significant increase in the quantity of goods transported and the growth of the shipping industry, the frequency of use of port structures has increased, while the government's SOC budget for shipping and ports has been reduced. Port structures therefore require systematic and effective maintenance and management that reflects their growing usage. To build a productive maintenance system, it is essential to develop deterioration models for port structures that consider various characteristics such as location, type, use, construction quality, and maintenance state. Developing such models involves examining the factors that cause the structures to deteriorate, collecting condition data on deteriorating structures, and selecting estimation methods. The techniques used to develop the deterioration models are multiple regression analysis and Markov chain theory: multiple regression analysis can reflect changes over time, and a Markov chain can model condition-state changes probabilistically (a minimal transition-matrix sketch follows this entry). Using these processes, deterioration models for open-type and gravity-type wharves are proposed.
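The Markov part of the deterioration modelling described above amounts to propagating a condition-state distribution through an annual transition matrix. Below is a minimal sketch under that assumption; the five-state transition probabilities are hypothetical, not the paper's calibrated values, and the regression component is omitted.

```python
# A minimal Markov-chain deterioration sketch: the condition of a structure is
# one of five states (1 = best, 5 = worst), and an annual transition matrix
# propagates the state distribution over time. Probabilities are hypothetical.
import numpy as np

P = np.array([  # P[i, j] = probability of moving from state i+1 to state j+1 in one year
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # worst state is absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # a new structure starts in state 1
for year in range(1, 31):
    state = state @ P                          # one-year deterioration step
    expected_condition = state @ np.arange(1, 6)
    if year % 10 == 0:
        print(f"year {year}: expected condition index = {expected_condition:.2f}")
```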

A narrative research on the job and the job-related learning of a mechanical engineer - an exemplary study on the characteristic of job-related learning of engineer in work place and it's implication on engineering education (기계설계분야 중견 엔지니어의 일과 학습에 관한 내러티브 연구 - 엔지니어의 직무관련 학습의 맥락과 공학교육에 대한 시사점 찾기)

  • Lim, Se-Yung
    • 대한공업교육학회지 / v.38 no.2 / pp.1-26 / 2013
  • This study addressed the following research questions through a narrative research method: What was the job of an engineer in the mechanical design field? How did he carry out his job-related learning in the workplace? What were the context and the characteristics of that job-related learning? The implications of this job-related learning for engineering education are also discussed. We found that the research participant's career as a mechanical engineer developed through three stages. At first, he engaged in the conceptual design of a semiconductor test machine through self-initiated learning, from the basics to the whole machine system. In the second stage, he led a design group for the detailed design of a ball-type semiconductor test machine; in this stage he learned the meaning of cooperation and cooperative learning. In the third stage, he founded a start-up company specializing in the design of semiconductor test machines and became its CEO, learning R&D policy making through contacts with global companies and visits to exhibitions abroad. Ultimately, his main task as a mechanical engineer was problem solving in the machine design process. Through his work he experienced and learned project management, independent fulfillment of tasks, functional analysis and reverse engineering, conceptualization and testing, cohesive cooperation, dialogue and discussion, mediation of conflict, human relationships, and leadership. The proposed implication of the narrative analysis for engineering education is to give students more opportunities to experience and learn such activities.

Profile of sexual violence experiences among the survivors using victim support services in Korea (성폭력 피해특성에 따른 피해경험자 유형화와 지원 서비스 이용양태 연구)

  • Kim, Kihyun; Kim, Jae-Won; Park, Haeyoung; Ryou, Bee
    • Korean Journal of Social Welfare Studies / v.47 no.4 / pp.255-280 / 2016
  • This study examined the characteristics of sexual violence among individuals who used victim support services provided by national rape crisis intervention centers in Korea. The study is part of a Korean longitudinal study on sexual assault characteristics and their implications for post-abuse adjustment, supported by the Korean Mental Health Technology R&D Project. Eleven national rape crisis centers and 29 NGOs participated. The participating centers provided data on sexual abuse characteristics from their standardized case management systems, and cases were randomly selected from those systems; a total of 1,077 cases were used in the analysis. The results indicated that abuse characteristics differed by the victim's age (children and adolescents versus adults) as well as by the relationship with the perpetrator. Six profile groups could be identified based on detailed violence characteristics. The results underscore the importance of understanding the detailed characteristics of sexual violence and the notion that 'one size may not fit all.' The profile analyses have important implications for developing victim support programs and appropriately allocating agency resources according to the different profiles of service users.

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo; Lee, Junyeong; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their internally developed AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each domain. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge and expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, together with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework: first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team; second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system; and third, deep learning research expertise should be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three conditions at the usage stage, companies can increase the number of deep learning research developers, their ability to use the deep learning framework, and the availability of GPU resources. In the proliferation stage of the deep learning framework: fourth, the company should build a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by optimizing the hardware (GPU) environment automatically; and fifth, the deep learning framework tool service team should complement the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by organizing developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming that a deep learning methodology is the right method, confirming that a deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming the methodology, and confirming the tool) are pre-considerations for adopting a deep learning open source framework. Once these are clear, the next two steps (using the framework in the enterprise and spreading it across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five factors come into play for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (O-D) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network and will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the O-D survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with the corresponding TLFs produced by the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, zonal trip productions and attractions and region-wide OD TLFs are available; for this research, however, the information available for the development of the GM is limited to ground counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Selected Link based (SELINK) analyses are then used to adjust the productions and attractions and, if necessary, recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume, and this factor is then applied to all of the origin and destination zones of the trips using that selected link (a minimal sketch of the gravity model and this adjustment factor follows this entry). Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions, and, more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals; LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively, which implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation; however, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, and no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions divided by total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments causing increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. Additional independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of those factors around 3.0. No obvious explanation for the frequency distribution could be found, but the revised SELINK adjustments appear, overall, to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed value; this estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, in small versus large cities, and in in-state zones versus external stations.

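The gravity-model distribution and the SELINK link adjustment described in the abstract above can be illustrated in a few lines. The sketch below is a hypothetical three-zone, production-constrained example with made-up friction factors and counts; it is not the calibrated WisDOT model, and the full SELINK procedure (traffic assignment, repetition, recalibration) is not reproduced.

```python
# A minimal sketch of (1) a gravity-model trip distribution with friction
# factors and (2) a SELINK-style link adjustment factor, i.e. the ratio of the
# link's ground count to its total assigned volume, applied back to the
# productions/attractions of the zones using that link. Numbers are hypothetical.
import numpy as np

def gravity_model(productions, attractions, friction):
    """Production-constrained gravity distribution: T_ij proportional to P_i * A_j * F_ij."""
    trips = np.outer(productions, attractions) * friction
    # Scale each row so that the row sums match the zonal productions.
    trips *= (productions / trips.sum(axis=1))[:, None]
    return trips

# Hypothetical 3-zone example.
P = np.array([1000.0, 600.0, 400.0])      # zonal truck trip productions
A = np.array([800.0, 700.0, 500.0])       # zonal truck trip attractions
F = np.array([[1.0, 0.6, 0.3],            # friction factors (decay with separation)
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

T = gravity_model(P, A, F)                 # zone-to-zone truck trip table

# SELINK-style adjustment: suppose trips T[0, 2] and T[1, 2] use a selected
# link whose observed ground count is 900 trucks per day.
assigned_volume = T[0, 2] + T[1, 2]
ground_count = 900.0
link_adjustment = ground_count / assigned_volume   # counted / assigned volume
# Apply the factor to the productions/attractions of the zones using that link.
P[[0, 1]] *= link_adjustment
A[2] *= link_adjustment
print("trip table:\n", np.round(T, 1))
print("link adjustment factor:", round(link_adjustment, 3))
```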