• Title/Summary/Keyword: Vehicle Network


Effects of Lipopolysaccharide-induced Stressor on the Expression of Stress-related Genes in Two Breeds of Chickens (Lipopolysaccharide 감염처리가 닭의 품종간 스트레스연관 유전자 발현에 미치는 영향)

  • Jang, In Surk;Sohn, Sea Hwan;Moon, Yang Soo
    • Korean Journal of Poultry Science
    • /
    • v.44 no.1
    • /
    • pp.1-9
    • /
    • 2017
  • The objective of the present study was to determine the expression of genes associated with lipopolysaccharide (LPS)-induced stress in two breeds of chickens: the Korean native chicken (KNC) and the White Leghorn chicken (WLH). Forty chickens per breed, aged 40 weeks, were randomly allotted to the control (CON, administered the saline vehicle) and LPS-injected stress groups. Samples were collected at 0 and 48 h post-LPS injection, and total RNA was extracted from the chicken livers for RNA microarray and quantitative real-time polymerase chain reaction (qRT-PCR) analyses. In response to LPS, 1,044 and 1,193 genes were upregulated, and 1,000 and 1,072 genes were downregulated in the KNC and WLH, respectively, using a ≥2-fold cutoff change. A functional network analysis revealed that stress-related genes were downregulated in both KNC and WLH after LPS infection. The qRT-PCR results for the mRNA expression of heat shock protein 90 (HSP90), 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR), activating transcription factor 4 (ATF4), sterol regulatory element-binding protein 1 (SREBP1), and X-box binding protein 1 (XBP1) were consistent with the microarray results. There was a significant difference in the expression of stress-associated genes between the control and LPS-injected KNC and WLH groups. The qRT-PCR analysis revealed that the stress-related HSP90α and HMGCR genes were downregulated in both LPS-injected KNC and WLH groups. However, the HSP70 and HSP90β genes were upregulated only in the LPS-injected KNC group. The results suggest that the mRNA expression of stress-related genes is differentially affected by LPS stimulation, and some of the responses varied with the chicken breed. A better understanding of LPS-induced infective stressors in chickens using qRT-PCR and RNA microarray analyses may contribute to improving animal welfare and husbandry practices.
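A minimal sketch of the ≥2-fold cutoff used above for calling genes up- or down-regulated; the gene names and fold-change values here are invented for illustration only:

```python
def classify_genes(fold_changes, cutoff=2.0):
    """Split genes into up- and down-regulated sets using a fold-change cutoff."""
    up = {g for g, fc in fold_changes.items() if fc >= cutoff}
    down = {g for g, fc in fold_changes.items() if fc <= 1.0 / cutoff}
    return up, down

# Invented fold-change values (treated vs. control) for illustration.
example = {"HSP70": 2.4, "HSP90A": 0.4, "HMGCR": 0.45, "ATF4": 1.3}
up, down = classify_genes(example)  # up: {"HSP70"}; down: {"HSP90A", "HMGCR"}
```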

Effectiveness Analysis and Application of Phosphorescent Pavement Markings for Improving Visibility (축광노면표시 시인성 개선에 따른 경제성 분석 및 적용방안)

  • Yi, Yongju;Lee, Kyujin;Kim, Sangtae;Choi, Keechoo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.37 no.5
    • /
    • pp.815-825
    • /
    • 2017
  • The visibility of lane markings is impaired at night and in the rain, which threatens traffic safety. Recently, various studies and technologies have been developed to improve lane marking visibility, such as extending lane marking life expectancy (up to 1.5 times), improving the productivity of lane marking equipment, and improving visibility by applying paint mixed with phosphorescent material. A cost-benefit analysis was performed considering the various benefit items that can be expected. About 45% of relevant traffic accidents could be prevented by improving lane marking visibility. The accident reduction benefit and the traffic congestion reduction benefit (from a longer repainting cycle due to enhanced durability) were calculated at 246 billion KRW per year and 12 billion KRW per year, respectively. A further 45 billion KRW per year is expected from the improved lane detection performance of autonomous vehicles. Meanwhile, the total additional cost of introducing phosphorescent paint on the 91,195 km nationwide road network was estimated at 1,922 billion KRW per year. Accordingly, economic feasibility could not be secured when the paint is applied to the road network as a whole, with a cost-benefit ratio of 0.16. However, for "accident hot spots" (400 m analysis windows with at least one fatality or two injuries per year, or one injury for roads with fewer than two lanes per direction, caused by pavement-marking-related accidents), economic feasibility was secured: a cost-benefit ratio of 3.91 was estimated by comparing the installation cost for the 5,697 accident hot spots with the accident reduction benefit. Some limitations and a future research agenda are also discussed.
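The network-wide cost-benefit ratio quoted above follows from simple arithmetic on the stated figures (all in billions of KRW per year):

```python
# Figures from the abstract, in billions of KRW per year.
accident_benefit = 246      # fewer accidents from better visibility
congestion_benefit = 12     # longer repainting cycle, less work-zone congestion
autonomous_benefit = 45     # better lane detection for autonomous vehicles
total_benefit = accident_benefit + congestion_benefit + autonomous_benefit

network_cost = 1922         # nationwide application over 91,195 km of road
bc_network = total_benefit / network_cost  # ~0.16: not economically feasible
```

The hot-spot ratio of 3.91 is not reproducible here because the per-hot-spot installation cost is not given in the abstract.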

Protective Effects on A2Kb Transgenic Mice That Were Immunized with Hepatitis B Virus X Antigen Peptides by the Activation of CD8+ T Cells; XEP-3 Specific CTL Responses in the in vitro Culture (B형 간염 바이러스 X 항원을 면역한 A2Kb Transgenic Mice에서 CD8+ T Cell의 활성화에 의한 X 항원 표현 재조합 Vaccinia Virus에 대한 방어 효과; in vitro 배양을 통한 XEP-3 특이적인 CTL의 반응)

  • Hwang, Yu Kyeong;Kim, Hyung-Il;Kim, Nam Kyung;Park, Jung Min;Cheong, Hong Seok
    • IMMUNE NETWORK
    • /
    • v.2 no.1
    • /
    • pp.41-48
    • /
    • 2002
  • Background: Viral antigens presented on the cell surface in association with MHC class I molecules are recognized by CD8+ T cells. MHC-restricted peptides are important in eliciting cellular immune responses. As peptide antigens have weak immunogenicity, pH-sensitive liposomes were used for peptide delivery to induce effective cytotoxic T lymphocyte (CTL) responses. Because the HBx peptides could induce specific CTLs in vitro in our previous study, we tested whether HLA-A2/Kb (A2Kb) transgenic mice immunized with HBx-derived peptides could be protected from a viral challenge. Methods: HBx peptides encapsulated in pH-sensitive liposomes were prepared. A2Kb transgenic mice were immunized i.m. on days one and seven with the indicated concentrations of liposome-encapsulated peptides. Three weeks later, the mice were infected with 1×10⁷ pfu/head of recombinant vaccinia virus (rVV)-HBx via i.p. administration. The ovaries were extracted from the mice, and the presence of rVV-HBx in the ovaries was analyzed using human TK-143B cells. IFN-γ secretion by these cells was directly assessed using a peptide-pulsed target cell stimulation assay with either peptide-pulsed antigen-presenting cells (APCs), concanavalin A (2 μg/ml), or a vehicle. To generate peptide-specific CTLs, splenocytes obtained from the immunized mice were stimulated with 20 μg/ml of each peptide and restimulated with peptide-pulsed APCs four times. The cytotoxic activity of the CTLs was assessed by a standard ⁵¹Cr-release assay and an intracellular IFN-γ assay. Results: Immunization of the transgenic mice with these peptides as a mixture in pH-sensitive liposomes induced a good protective effect against viral challenge by inducing peptide-specific CD8+ T cells.
Mice immunized with 50 μg/head were much better protected against viral challenge than those immunized with 5 μg/head, whereas mice immunized with empty liposomes were not protected at all. After in vitro CTL culture with peptide stimulation, however, specific cytotoxicity was much higher in CTLs from mice immunized with 5 μg/head than in the 50 μg/head group. The increase in the number of intracellular IFN-γ-secreting cells among CD8+ T cells showed a similar result. Conclusion: Mice immunized with XEPs in pH-sensitive liposomes were protected against viral challenge. The protective effect depended on the amount of antigen used during immunization. XEP-3-specific CTLs could be induced by in vitro peptide stimulation of splenocytes obtained from immunized mice. The cytotoxic effect of the CTLs was measured by ⁵¹Cr-release assay, and the percentage of accumulated intracellular IFN-γ-secreting cells after in vitro restimulation was measured by flow cytometric analysis. The results of the ⁵¹Cr-release cytotoxicity test correlated well with those of the flow cytometric analysis. Viral protection was effective in the group immunized with 50 μg/head, while the in vitro restimulation showed a more specific response in the 5 μg/head group.

Deep Learning Approaches for Accurate Weed Area Assessment in Maize Fields (딥러닝 기반 옥수수 포장의 잡초 면적 평가)

  • Hyeok-jin Bak;Dongwon Kwon;Wan-Gyu Sang;Ho-young Ban;Sungyul Chang;Jae-Kyeong Baek;Yun-Ho Lee;Woo-jin Im;Myung-chul Seo;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.1
    • /
    • pp.17-27
    • /
    • 2023
  • Weeds are one of the factors that reduce crop yield through competition for nutrients and photosynthesis. Quantification of weed density is an important part of making accurate decisions for precision weeding. In this study, we quantified the density of weeds in images of maize fields taken by an unmanned aerial vehicle (UAV). UAV image data were collected in maize fields from May 17 to June 4, 2021, when the maize was in its early growth stage. UAV images were labeled as maize pixels and non-maize pixels and then cropped for use as input data for the semantic segmentation network of the maize detection model. We trained models to separate maize from the background using the deep learning segmentation networks DeepLabV3+, U-Net, LinkNet, and FPN. All four models showed a pixel accuracy of 0.97; the mIoU score was 0.76 for DeepLabV3+ and 0.74 for U-Net, higher than the 0.69 for LinkNet and FPN. Weed density was calculated as the difference between the green area classified by ExGR (Excess Green minus Excess Red) and the maize area predicted by the model. Each image evaluated for weed density was recombined to quantify and visualize the distribution and density of weeds across a wide range of maize fields. We propose a method to quantify weed density for accurate weeding by effectively separating weeds, maize, and background in UAV images of maize fields.
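The weed-density computation described above can be sketched as follows, assuming an RGB image array and a binary maize mask produced by the segmentation model. The ExG/ExR formulas below are the standard vegetation-index definitions, and the toy pixel values are invented:

```python
import numpy as np

def weed_density(rgb, maize_mask):
    """Fraction of image pixels that are vegetation (ExGR > 0) but not maize."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    exg = 2 * g - r - b           # Excess Green index
    exr = 1.4 * r - g             # Excess Red index
    vegetation = (exg - exr) > 0  # ExGR > 0 -> vegetation pixel
    weeds = vegetation & ~maize_mask.astype(bool)
    return weeds.sum() / vegetation.size

# Toy 2x2 image: two green pixels (one maize, one weed) and two soil pixels.
rgb = np.array([[[40, 200, 40], [40, 200, 40]],
                [[120, 100, 90], [130, 110, 100]]], dtype=np.uint8)
maize = np.array([[1, 0], [0, 0]])
density = weed_density(rgb, maize)  # 1 weed pixel out of 4 -> 0.25
```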

An Analysis of Accessibility to Hydrogen Charging Stations in Seoul Based on Location-Allocation Models (입지배분모형 기반의 서울시 수소충전소 접근성 분석)

  • Sang-Gyoon Kim;Jong-Seok Won;Yong-Beom Pyeon;Min-Kyung Cho
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.2
    • /
    • pp.339-350
    • /
    • 2024
  • Purpose: This study analyzes the accessibility of the 10 existing hydrogen charging stations in Seoul and identifies areas that are difficult to reach. It then re-analyzes accessibility after adding new locations chosen for equity and safety of placement, and draws implications by comparing the improvement effects. Method: Areas with weak access were identified by applying the location-allocation model and the service area model based on network analysis in the ArcGIS program. The 'Minimize Facilities' method was applied for location selection, considering the need for rapid arrival at the insufficient number of hydrogen charging stations. The limit distance for arrival within a specific time was derived by applying the 2022 average vehicle traffic speed (23.1 km/h, Seoul Open Data Square) to three categories: 3,850 m (10 minutes), 5,775 m (15 minutes), and 7,700 m (20 minutes). To minimize conflicts over the installation of hydrogen charging stations, special standards of the Ministry of Trade, Industry and Energy were applied to derive candidate sites for additional stations among existing gas stations and LPG/CNG charging stations. Result: The analysis confirmed that accessibility was significantly improved by installing 5 new hydrogen charging stations at relatively safe gas stations and LPG/CNG charging stations in areas where the existing 10 stations cannot be reached within 20 minutes. Nevertheless, some areas with poor access remain. Conclusion: By using the location-allocation model to identify areas with difficult access to hydrogen charging stations and to prioritize installation, decision-making on station siting can be supported by scientific evidence.
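The three catchment distances in the study follow directly from the stated average speed of 23.1 km/h:

```python
SPEED_KMH = 23.1  # 2022 average vehicle traffic speed in Seoul (from the abstract)

def limit_distance_m(minutes):
    """Distance reachable within the given time at the average speed, in metres."""
    return round(SPEED_KMH * 1000 * minutes / 60)

distances = {m: limit_distance_m(m) for m in (10, 15, 20)}
# -> {10: 3850, 15: 5775, 20: 7700}, the three catchment categories in the study
```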

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, there is increasing demand for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, and plastic, which can lead to serious problems or malfunctions, including fires in electric vehicles. To address this, it is necessary to determine whether foreign materials are present inside the product, and many tests are performed by non-destructive methods such as ultrasound or X-ray inspection. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower the foreign material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images that make it challenging to detect foreign substances. First, the global contrast of the X-ray image is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen high-frequency signals and local contrast. Third, unsharp masking is applied to sharpen edges, making objects more visible. Fourth, the Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is trained and employed to detect foreign objects.
Experimental results show that the proposed method improves performance metrics such as precision by more than 10% compared to the unprocessed low-quality images.
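The first of the five steps (global histogram stretching) might look like the following sketch on an 8-bit grayscale image; the array values are invented, and the later steps (local contrast enhancement, unsharp masking, RDB super-resolution, YOLOv5 detection) would operate on the stretched result:

```python
import numpy as np

def histogram_stretch(img):
    """Linearly rescale pixel intensities to span the full 0-255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:                 # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

# Toy low-contrast X-ray patch: intensities clustered in 100-150.
xray = np.array([[100, 110], [120, 150]], dtype=np.uint8)
stretched = histogram_stretch(xray)  # now spans 0-255
```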

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33%, respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35%, respectively, whereas the average ground counts are 481, 1383, 1532, and 3154, respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
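The SELINK link adjustment described in the abstract reduces to a ratio and a rescaling; the counts and zone names below are invented for illustration:

```python
def link_adjustment_factor(ground_count, assigned_volume):
    """Ratio of observed link volume (ground count) to model-assigned volume."""
    return ground_count / assigned_volume

def adjust_zones(productions, factor):
    """Scale the productions of every zone whose trips use the selected link."""
    return {zone: p * factor for zone, p in productions.items()}

# Invented example: the model over-assigns the selected link by 25%.
factor = link_adjustment_factor(ground_count=1200, assigned_volume=1500)  # 0.8
adjusted = adjust_zones({"zone_a": 500, "zone_b": 250}, factor)
```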


A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. However, there are some exceptions. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply the nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model that assumes no decision hierarchy, with new and used cars of different models all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, a used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car settles down to a lower market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
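A minimal sketch of the two-level nested logit structure the study uses (car model at the upper level, new versus used at the lower level). The utilities and the dissimilarity parameter below are invented; a dissimilarity parameter below 1 is what consistency with utility maximization requires:

```python
import math

def nested_logit_probs(utilities, mu):
    """utilities: {model: {"new": u, "used": u}} -> joint choice probabilities."""
    # Inclusive value of each nest, scaled by the dissimilarity parameter mu.
    inclusive = {
        m: mu * math.log(sum(math.exp(u / mu) for u in opts.values()))
        for m, opts in utilities.items()
    }
    denom = sum(math.exp(v) for v in inclusive.values())
    probs = {}
    for m, opts in utilities.items():
        p_model = math.exp(inclusive[m]) / denom      # upper level: model choice
        lower = sum(math.exp(u / mu) for u in opts.values())
        for cond, u in opts.items():                  # lower level: new vs used
            probs[(m, cond)] = p_model * math.exp(u / mu) / lower
    return probs

# Invented utilities for two of the models named in the abstract.
probs = nested_logit_probs(
    {"jetta": {"new": 1.0, "used": 0.5},
     "elantra": {"new": 0.8, "used": 0.7}},
    mu=0.6,
)
```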
