
Wetting-Induced Collapse in Fill Materials for Concrete Slab Track of High Speed Railway (고속철도 콘크리트궤도 흙쌓기재료의 Wetting Collapse에 관한 연구)

  • Lee, Sung-Jin;Lee, Il-Wha;Im, Eun-Sang;Shin, Dong-Hoon;Cho, Sung-Eun
    • Journal of the Korean Geotechnical Society
    • /
    • v.24 no.4
    • /
    • pp.79-88
    • /
    • 2008
  • Recently, the high speed railway has come into the spotlight as an important and convenient piece of traffic infrastructure. In Korea, Kyung-Bu high speed train service began on an approximately 400 km section in 2004, and the Ho-Nam high speed railway will be constructed by 2017. The high speed trains will run at a design maximum speed of 300-350 km/hr. Since the trains are operated at high speed, differential settlement of the subgrade under the rail can cause a fatal disaster. Therefore, differential settlement of the embankment must be controlled with the greatest care, and the characteristics and causes of settlements that occur during and after construction should be investigated. A considerable number of studies have been conducted on the settlement of natural ground over the past several decades, but little attention has been given to the compression settlement of the embankment itself. The long-term settlement of compacted fill embankments is greatly influenced by post-construction wetting, a phenomenon called 'hydro collapse' or 'wetting collapse'. Although this wetting collapse problem has received little study, it has been recognized that compacted sands, gravels and rockfills exhibit low compressibility at low pressures but can undergo significant compression at high pressures due to grain crushing (Marachi et al. 1969, Nobari and Duncan 1972, Noorany et al. 1994, Houston et al. 1993, Wu 2004). The compression characteristics of fill materials depend on a number of factors such as soil/rock type, as-compacted moisture, density, stress level and wetting condition. Because of the complexity of these factors, it is not easy to predict the amount of compression quantitatively without extensive tests. Therefore, in this research I carried out wetting collapse tests focusing closely on soil/rock type, stress level and wetting condition.
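The collapse measured in such wetting tests is conventionally reported as a collapse potential: the additional strain a specimen under constant vertical stress undergoes when it is inundated (the definition used, for example, in ASTM D5333). A minimal sketch of that arithmetic, with hypothetical oedometer readings rather than values from this study:

```python
def collapse_potential(height_before_wetting_mm, height_after_wetting_mm,
                       initial_height_mm):
    """Collapse potential (%): the strain caused solely by inundation
    at constant vertical stress, 100 * dH / H0."""
    delta = height_before_wetting_mm - height_after_wetting_mm
    return 100.0 * delta / initial_height_mm

# Hypothetical oedometer dial readings (mm) for a compacted fill specimen.
cp = collapse_potential(19.40, 18.91, 20.00)
print(f"collapse potential = {cp:.2f} %")  # -> 2.45 %
```

Comparing this index across soil/rock types, stress levels and wetting conditions is the kind of comparison the test program described above supports.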

Hay Preparation Technology for Sorghum×Sudangrass Hybrid Using a Stationary Far-Infrared Dryer (정치식 원적외선 건조기를 이용한 수수×수단그라스 교잡종의 건초 조제 기술 연구)

  • Jong Geun Kim;Hyun Rae Kim;Won Jin Lee;Young Sang Yu;Yan Fen Li;Li Li Wang
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.43 no.1
    • /
    • pp.22-27
    • /
    • 2023
  • This experiment was conducted to confirm the feasibility of preparing Sorghum×sudangrass hybrid hay artificially using far-infrared rays in Korea. The machine used in this experiment is a drying device based on far-infrared rays, designed to control temperature, air flow rate, far-infrared radiation amount, and air flow speed. The Sorghum×sudangrass hybrids harvested in late September were wilted in the field for one day, and a drying test was then performed on them. Seven drying conditions were tested: the radiation amount was either held at a single level (42%, one treatment) or varied in two steps (four treatments) or three steps (two treatments). The air flow speed in the device was fixed at 60 m/s, and the run time was varied among 30, 60, and 90 minutes. The average dry matter (DM) content was 82.84%. The DM content was 59.94 and 76.91%, respectively, in drying conditions 1 and 3, which were not suitable for hay. The drying rate was significantly higher than 80% in treatments 5, 6 and 7, and power consumption was slightly high, averaging 5.7 kWh. As for the feed value under each drying condition, the crude protein (CP) content increased as the drying time increased, and there was no significant difference between treatments in ADF, NDF, IVDMD and TDN content. In terms of RFV, treatment 1, the single-level condition, was significantly lower than the complex conditions. From the above results, drying conditions 4 and 5 were judged the most advantageous when considering drying speed, power consumption, and quality.
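The dry matter figures quoted above come from simple mass arithmetic. A sketch of that calculation, with the roughly 80% DM threshold for acceptable hay treated as an assumption drawn from the abstract and the sample masses invented for illustration:

```python
def dry_matter_pct(wet_mass_g, oven_dry_mass_g):
    """Dry matter content (%) from wet and oven-dry sample masses."""
    return 100.0 * oven_dry_mass_g / wet_mass_g

def is_hay_safe(dm_pct, threshold=80.0):
    """The abstract treats ~80% DM as the cut-off for adequate hay;
    the threshold here is an assumption, not a value from the paper."""
    return dm_pct >= threshold

# Hypothetical sample masses (g): a run matching the reported 82.84% mean DM.
print(dry_matter_pct(100.0, 82.84))  # -> 82.84 (hay-safe)
print(is_hay_safe(59.94))            # condition 1 in the abstract -> False
```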

A Study on the Dimensions, Surface Area and Volume of Grains (곡립(穀粒)의 치수, 표면적(表面積) 및 체적(體積)에 관(關)한 연구(硏究))

  • Park, Jong Min;Kim, Man Soo
    • Korean Journal of Agricultural Science
    • /
    • v.16 no.1
    • /
    • pp.84-101
    • /
    • 1989
  • An accurate measurement of the size, surface area and volume of agricultural products is essential in many engineering operations such as handling and sorting, and in heat transfer studies of heating and cooling processes. Little information is available on these properties because of the grains' irregular shapes, and very little has been published for rough rice, soybean, barley, and wheat. Physical dimensions of grain, such as length, width, thickness, surface area, and volume, vary with variety, environmental conditions, temperature, and moisture content. In particular, recent research has emphasized the variation of these properties with important factors such as moisture content. The objectives of this study were to determine the length, width, thickness, surface area and volume of rough rice, soybean, barley, and wheat as functions of moisture content, to investigate the effect of moisture content on these properties, and to develop exponential equations to predict the surface area and volume of the grains as functions of the physical dimensions. The rough rice varieties used in this study were Akibare, Milyang 15, Seomjin, Samkang, Chilseong, and Yongmun; the soybean samples were Jangyeobkong and Hwangkeumkong; the barley samples were Olbori and Salbori; and the wheat samples were Eunpa and Guru. The physical properties of the grain samples were determined at four moisture content levels, with ten or fifteen replications at each moisture level for each variety. The results of this study are summarized as follows. 1. Comparing the surface area and volume of the 0.0375 m diameter sphere measured in this study with the values calculated by formula, the percent errors showed least values of 0.65% and 0.77%, respectively, at a rotational interval of 15 degrees. 2. The statistical tests (t-tests) of the physical properties between the types of rough rice, and between the varieties of soybean and wheat, indicated significant differences at the 5% level. 3. The physical dimensions varied linearly with moisture content; the ratios of length to thickness (L/T) and of width to thickness (W/T) in rough rice decreased with increasing moisture content, while they increased in soybean, and no consistent tendency was observed in barley and wheat. In all of the sample grains except Olbori, sphericity decreased with increasing moisture content. 4. Over the experimental moisture levels, the surface area and volume were in the ranges of about $45{\sim}51{\times}10^{-6}m^2$ and $25{\sim}30{\times}10^{-9}m^3$ for Japonica-type rough rice, about $42{\sim}47{\times}10^{-6}m^2$ and $21{\sim}26{\times}10^{-9}m^3$ for Indica${\times}$Japonica-type rough rice, about $188{\sim}200{\times}10^{-6}m^2$ and $277{\sim}300{\times}10^{-9}m^3$ for Jangyeobkong, about $180{\sim}201{\times}10^{-6}m^2$ and $190{\sim}253{\times}10^{-9}m^3$ for Hwangkeumkong, about $60{\sim}69{\times}10^{-6}m^2$ and $36{\sim}45{\times}10^{-9}m^3$ for covered barley, about $47{\sim}60{\times}10^{-6}m^2$ and $22{\sim}28{\times}10^{-9}m^3$ for naked barley, about $51{\sim}20{\times}10^{-6}m^2$ and $23{\sim}31{\times}10^{-9}m^3$ for Eunpa wheat, and about $57{\sim}69{\times}10^{-6}m^2$ and $27{\sim}34{\times}10^{-9}m^3$ for Guru wheat, respectively. 5. The rate of increase of surface area and volume with moisture content was higher in soybean than in the other sample grains, and slightly higher in Japonica-type than in Indica${\times}$Japonica-type rough rice. 6. Regression equations for the physical dimensions, surface area and volume were developed as functions of moisture content; exponential equations for surface area and volume were developed as functions of the physical dimensions; and regression equations for surface area as a function of volume were developed for all grain samples.

  • PDF
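The sphericity and the "exponential equations" relating surface area to the physical dimensions in the grain-dimension study above can be sketched as follows. The sphericity definition is the standard geometric-mean-diameter form; the dimension and area data, and therefore the fitted coefficients, are invented for illustration and are not the paper's:

```python
import numpy as np

def sphericity(length, width, thickness):
    """Geometric mean diameter divided by length (Mohsenin's definition)."""
    return (length * width * thickness) ** (1.0 / 3.0) / length

def fit_power_law(x, y):
    """Fit y = a * x**b by least squares in log-log space.
    This mirrors the form of the abstract's exponential equations only;
    the paper's actual coefficients are not reproduced here."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Hypothetical L*W*T products (mm^3) and measured surface areas (mm^2).
lwt = np.array([20.0, 24.0, 28.0, 33.0])
area = np.array([46.0, 51.5, 56.0, 62.0])
a, b = fit_power_law(lwt, area)
print(f"area ~= {a:.3f} * (LWT)^{b:.3f}")
```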

The Effectiveness of Fiscal Policies for R&D Investment (R&D 투자 촉진을 위한 재정지원정책의 효과분석)

  • Song, Jong-Guk;Kim, Hyuk-Joon
    • Journal of Technology Innovation
    • /
    • v.17 no.1
    • /
    • pp.1-48
    • /
    • 2009
  • Recently we have found some symptoms that R&D fiscal incentives might not be working as intended, through analysis of current statistics on firms' R&D. First, the growth rate of R&D investment in the private sector has slowed over the recent decade: the average growth rate (in real terms) of R&D investment was 7.1% from 1998 to 2005, versus 13.9% from 1980 to 1997. Second, the relative share of R&D investment by SMEs decreased from 29% ('01) to 21% ('05), even though the tax credit for SMEs has been more generous than that for large firms. Third, the R&D expenditure of large firms (excluding the three leading firms) has not increased since the late 1990s. We therefore need evidence on whether fiscal incentives are effective in increasing firms' R&D investment. For the econometric model we use firm-level unbalanced panel data for four years (2002 to 2005) drawn from the MOST database compiled from the annual survey, "Report on the Survey of Research and Development in Science and Technology". We use a fixed effect model (Hausman test results accept the fixed effect model at the 1% significance level) and estimate it for all firms, large firms and SMEs respectively. The analysis yields the following results. For large firms: i) R&D investment responds elastically (1.20) to sales volume; ii) government R&D subsidy induces R&D investment only weakly (0.03); iii) the tax price elasticity is almost unity (-0.99); iv) for large firms, tax incentives are more effective than R&D subsidies. For SMEs: i) sales volume increases R&D investment only weakly (0.043); ii) government R&D subsidy crowds out SME R&D investment, though not seriously (-0.0079); iii) the tax price elasticity is very inelastic (-0.054). For comparison with other studies, Koga (2003) reports a similar tax price elasticity for Japanese firms (-1.0036), Hall (1992) reports a unit tax price elasticity, and Bloom et al. (2002) report $-0.354{\sim}-0.124$ in the short run. From these results we recommend that government R&D subsidies focus on areas such as basic research and the public sector (defense, energy, health, etc.) that do not overlap with private R&D, and that for SMEs the government focus on establishing R&D infrastructure. To promote tax incentive policy, the tax incentive scheme for large firms' R&D investment needs strengthening; we recommend that the tax credit for large firms be extended to the total volume of R&D investment.

  • PDF
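The fixed effect model estimated above removes firm-specific intercepts by demeaning each firm's observations over time (the "within" transformation) before running OLS. A minimal sketch on a fabricated toy panel; the variable names and numbers are illustrative, not the paper's:

```python
import numpy as np

def within_ols(firm_ids, y, X):
    """Fixed-effects (within) estimator: demean y and X within each firm,
    then run OLS on the demeaned data. Handles unbalanced panels."""
    firm_ids = np.asarray(firm_ids)
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    yd, Xd = y.copy(), X.copy()
    for fid in np.unique(firm_ids):
        m = firm_ids == fid
        yd[m] -= y[m].mean()
        Xd[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

# Toy unbalanced panel: log R&D built from log sales (slope 1.2) plus
# a firm-specific intercept, which the within transformation removes.
firms = [1, 1, 1, 2, 2, 3, 3, 3, 3]
log_sales = np.array([[4.0], [4.2], [4.5], [6.0], [6.3],
                      [5.0], [5.1], [5.4], [5.6]])
log_rnd = 1.2 * log_sales[:, 0] + np.array([0.5]*3 + [-1.0]*2 + [0.2]*4)
print(within_ols(firms, log_rnd, log_sales))  # -> approximately [1.2]
```

With noiseless data the estimator recovers the true slope exactly; on real survey data it recovers it only up to sampling error, which is where the paper's elasticities come from.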

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused on only new cars or on only used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of the used car in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model with car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, with new and used cars of different models all substitutable at the first stage.
The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car then settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and thus suggests less aggressive used car price discounts in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.

  • PDF
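The two-stage structure described above (model choice at the first stage, new-vs-used at the second) is captured by the nested logit's inclusive value and dissimilarity parameter. A minimal sketch of the choice probabilities with hypothetical utilities and an assumed dissimilarity parameter; none of these numbers are the paper's estimates:

```python
import math

def nested_logit_probs(utilities, lam):
    """Two-stage nested logit: car-model choice at the top level,
    new-vs-used within each model nest.
    utilities: {model: {"new": V, "used": V}} systematic utilities.
    lam: dissimilarity (inclusive value) parameter; it must lie in
    (0, 1] for consistency -- the abstract's mis-specification check
    is exactly a test of this condition."""
    # Inclusive value of each nest: log-sum of scaled within-nest utilities.
    iv = {m: math.log(sum(math.exp(v / lam) for v in alts.values()))
          for m, alts in utilities.items()}
    denom = sum(math.exp(lam * ivm) for ivm in iv.values())
    probs = {}
    for m, alts in utilities.items():
        p_nest = math.exp(lam * iv[m]) / denom          # P(model)
        within = sum(math.exp(v / lam) for v in alts.values())
        for alt, v in alts.items():
            probs[(m, alt)] = p_nest * math.exp(v / lam) / within
    return probs

# Hypothetical utilities for two compact-car models.
p = nested_logit_probs({"Jetta": {"new": 1.0, "used": 0.8},
                        "Elantra": {"new": 0.9, "used": 0.7}}, lam=0.6)
print(p)  # probabilities over the four alternatives sum to 1
```

Setting lam=1 collapses the model to the IIA multinomial logit the paper uses as its comparison benchmark.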

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (O-D) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network and thereby provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the O-D survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. Additional independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF
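The core machinery of the abstract above, a doubly-constrained gravity model balanced against zonal productions and attractions, plus the SELINK-style link adjustment factor (ground count divided by assigned volume), can be sketched as follows. The zones, friction matrix and counts are invented for illustration; the friction factors stand in for the calibrated curves, which are not reproduced here:

```python
import numpy as np

def gravity_model(productions, attractions, friction, n_iter=50):
    """Doubly-constrained gravity model: T_ij proportional to
    P_i * A_j * F_ij, balanced by iterative proportional fitting so that
    row sums match productions and column sums match attractions.
    Requires sum(productions) == sum(attractions)."""
    T = np.outer(productions, attractions) * friction
    for _ in range(n_iter):
        T *= (productions / T.sum(axis=1))[:, None]   # match productions
        T *= (attractions / T.sum(axis=0))[None, :]   # match attractions
    return T

def link_adjustment_factor(ground_count, assigned_volume):
    """SELINK-style factor: actual link volume (ground count) divided
    by the total volume assigned to that link."""
    return ground_count / assigned_volume

# Hypothetical 3-zone example; friction factors decay with separation.
P = np.array([100.0, 200.0, 150.0])   # zonal productions
A = np.array([180.0, 120.0, 150.0])   # zonal attractions (same total)
F = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.6],
              [0.3, 0.6, 1.0]])
T = gravity_model(P, A, F)
print(T.sum(axis=1))                            # -> close to [100. 200. 150.]
print(link_adjustment_factor(957.0, 1000.0))    # -> 0.957
```

In the study, factors like the 0.957 above are propagated back to the origin and destination zones of every trip using the selected link, which is how the population-based first guess at productions and attractions gets corrected.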