• Title/Summary/Keyword: Network structure


Popularization of Marathon through Social Network Big Data Analysis : Focusing on JTBC Marathon (소셜 네트워크 빅데이터 분석을 통한 마라톤 대중화 : JTBC 마라톤대회를 중심으로)

  • Lee, Ji-Su;Kim, Chi-Young
    • Journal of Korea Entertainment Industry Association / v.14 no.3 / pp.27-40 / 2020
  • The marathon has long been established as a representative sport for all ages. With the recent spread of the work-life balance trend across society, the marathon, with its relatively low barrier to entry, is gaining popularity among young people in their 20s and 30s. By analyzing the issues and related words surrounding the marathon event, this study identifies through keywords the spottainment elements that make the event popular among young people and suggests a development plan for a differentiated event. To analyze keywords and related words, blogs, cafes and news provided by Naver and Daum were selected as analysis channels, and 'JTBC Marathon' and 'Culture' were chosen as key words for the data search. The data analysis period was limited to the three months from August 13, 2019, when applications for the 2019 JTBC Marathon opened, to November 13, 2019. For data collection and analysis, frequency and matrix data were extracted with the social-matrix program Textom. In addition, the strength of relationships was quantified by analyzing the connection structure and the degree centrality of the words. Although the marathon is an individual sport, young people share the common denominator of 'running' and form a new cultural group, the 'running crew', with other young people. Through this, it was found that a marathon competition culture has formed as a festival venue where people train and participate together, moving away from the image of the marathon as a run done alone in a battle with oneself.
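As an illustration of the degree-centrality measure the study computes with Textom, here is a minimal pure-Python sketch over a keyword co-occurrence network. The edge list below is invented for illustration, not the study's data:

```python
# Degree-centrality scoring over a keyword co-occurrence network.
# The study used the Textom program on Naver/Daum blog, cafe and news
# text; the keyword pairs here are hypothetical stand-ins.
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: degree / (n - 1)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical co-occurrence pairs extracted from posts.
edges = [
    ("JTBC marathon", "running crew"),
    ("JTBC marathon", "festival"),
    ("JTBC marathon", "culture"),
    ("running crew", "culture"),
]
scores = degree_centrality(edges)
ranked = sorted(scores, key=scores.get, reverse=True)
```

The most central keyword (here the search term itself) surfaces first in `ranked`, mirroring how the study reads off the words most strongly connected to the event.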

A Comparative Study on the Characteristics of Cultural Heritage in China and Vietnam (중국과 베트남의 문화유산 특성 비교 연구)

  • Shin, Hyun-Sil;Jun, Da-Seul
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.40 no.2 / pp.34-43 / 2022
  • This study compared the characteristics of cultural heritage in China and Vietnam, which have developed under mutual geopolitical and cultural influence throughout history, and drew the following conclusions. First, the definition of cultural heritage has a similar meaning in both countries. In the classification of cultural heritage, both countries introduced the legal concept of intangible cultural heritage through UNESCO and show similarities in how they treat intangible heritage. Second, while China has separate laws for managing tangible and intangible cultural heritage, Vietnam manages both types under a single integrated law. Vietnam introduced the concept of cultural heritage later than China, but its system shows a higher degree of integration. Third, cultural heritage in both countries is graded, and the grading is applied differently depending on the type of heritage. The designation methods are similar in that both countries use a vertical, step-by-step structure. By restoring the value of heritage and reinforcing its integrity through such step-by-step review, both countries seek balanced national development through tourism that lets people enjoy heritage and creates economic effects. Fourth, both countries have a central government agency for cultural heritage management, but in China the authority of local governments is greater than in Vietnam. In addition, unlike Vietnam, where tangible and intangible cultural heritage are managed by an integrated institution, China has a separate institution in charge of intangible cultural heritage. Fifth, China is establishing a conservation management policy focused on sustainability that balances the protection and utilization of heritage. Vietnam is making efforts to integrate the contents and spirit of the convention into laws, programs, and projects related to cultural heritage, especially intangible heritage, and into economic and social life as a whole; however, it remains dependent on the influence of international organizations. Sixth, China and Vietnam are now paying attention to the recently introduced category of intangible heritage, breaking away from protection policies centered on tangible heritage. They also aim to unite their people through cultural heritage and to achieve unified national policy goals. The two countries need to use intangible heritage as an efficient means of preserving local communities and regions, and a cultural heritage preservation network should be established for each subject so that the components of intangible heritage can be integrated into one unit, laying a foundation for public enjoyment. This study is limited to comparing the cultural heritage systems and the state of preservation management in China and Vietnam; a comparison of cultural heritage policies by heritage type remains a task for future research.

A Study on the Needs Analysis of University-Regional Collaborative Startup Co-Space Composition (대학-지역 연계 협업적 창업공간(Co-Space) 구성 요구도 분석)

  • Kim, In-Sook;Yang, Ji-Hee;Lee, Sang-Seub
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.159-172 / 2023
  • The purpose of this study is to explore how to organize a collaborative start-up space (Co-Space) linking universities and regions, through a needs analysis of such a space's composition. To this end, a survey was conducted for the needs analysis, and the collected data were analyzed with the t-test and the Locus for Focus model. In addition, focus group interviews (FGI) were conducted with entrepreneurs, and the direction for composing the university-region Co-Space was derived from various angles. The results of this study are as follows. First, in the analysis of the necessity of a university-region Co-Space, both the necessity of opening start-up spaces to local residents and the necessity of building start-up spaces in the region were rated highly. In addition, men recognized the need to build a community start-up space more highly than women did. Second, the difference between the current importance and the future necessity of the university-region Co-Space was statistically significant. Third, in the analysis of the composition of the start-up space through university-region cooperation, demands differed regarding the balance of openness and closedness and regarding the size of the space. The implications of the study are as follows. First, Co-Spaces need to be built in conjunction with universities according to the demands of local start-up companies at each stage of growth. Second, a customized Co-Space that takes the size and operation of the start-up space into account is needed. Third, an experience-based open space for local residents should be established in the remaining space of the university. Fourth, a Co-Space enabling an organic network among local communities, start-up investors, start-up support institutions, and start-up companies should be established. This study is significant in that it proposed a regional start-up ecosystem and a cooperative start-up space structure for strengthening start-up sustainability through cooperation between universities and local communities. The results are expected to serve as useful basic data for Co-Space construction aimed at building regional start-up ecosystems, at a time when the importance of start-up space, a major factor affecting start-up companies, is being emphasized.
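The Locus for Focus model used in the needs analysis plots each item by current importance (x) and future necessity (y) and treats items above both means (the HH quadrant) as top priorities. A minimal sketch, with entirely hypothetical item names and scores:

```python
# Locus for Focus quadrant classification: items above the mean on
# both current importance and future necessity are priorities.
# Item names and scores below are fabricated for illustration.
def locus_for_focus(items):
    xs = [x for _, x, _ in items]
    ys = [y for _, _, y in items]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return [name for name, x, y in items if x > mx and y > my]

items = [
    ("open lounge",      3.2, 4.5),  # (importance, necessity) on a 5-point scale
    ("meeting rooms",    4.1, 4.6),
    ("storage",          3.9, 3.1),
    ("resident program", 2.8, 3.0),
]
priorities = locus_for_focus(items)
```

Only items high on both axes survive the cut, which is how the model narrows a long wish list down to the compositions worth building first.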


An Empirical Study on the Effects of Seniors' Growth·Fixed Mindset and Entrepreneurial Ability on Entrepreneurial Intentions: Focusing on the Mediating Effects of Entrepreneurship Efficacy (시니어의 성장·고정 마인드셋과 창업역량이 창업의도에 미치는 영향에 관한 실증연구: 창업효능감의 매개효과 중심으로)

  • Lee, Jae Yul;Ha, Tae Kwan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.17 no.6 / pp.89-104 / 2022
  • Entrepreneurship by seniors who have accumulated skills and expertise in industry is very important from a social point of view. This study set out to identify the major entrepreneurial competencies of seniors in an economic situation where instability and uncertainty have been amplified by the job structure changed by COVID-19 amid the 4th industrial revolution, rapidly rising interest rates, and global supply chain problems, and to empirically verify how these variables affect entrepreneurial intention. In addition, from the perspective of mindset, a psychological characteristic of prospective entrepreneurs, we sought to verify empirically whether growth and fixed mindsets have a significant effect on seniors' entrepreneurial intention. By approaching founders' psychological characteristics through the lens of mindset, we attempted to apply the concept to the field of entrepreneurship and obtain practical implications. The study empirically analyzed the effects of growth mindset and fixed mindset, together with technical, network, and funding competencies, on senior entrepreneurial intention, and tested the mediating effect of entrepreneurial efficacy. The analysis verified that growth mindset and technical competency have a positive (+) effect on entrepreneurial intention, and that the mediating effect of entrepreneurial efficacy is significant in the influence of growth mindset and technical competency on entrepreneurial intention; growth mindset and technical competency are thus important variables in senior entrepreneurship. The results provide the following policy implications. First, to maximize the effect of founder education and activate senior entrepreneurship, programs such as customized entrepreneurship education matched to founders' growth-mindset characteristics are needed. Second, the base of technology start-ups should be broadened by expanding government support, such as low-interest policy financing, for senior start-ups with technological capabilities and expertise. Third, institutional support, such as start-up programs offered even before retirement, is needed so that the expertise and technology accumulated by seniors can carry over into start-ups after retirement.
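The mediation structure tested here (growth mindset → entrepreneurial efficacy → entrepreneurial intention) can be sketched in its simplest product-of-coefficients form, where the indirect effect is the X→M slope times the M→Y slope. The data and variable names below are fabricated; a real test would use a bootstrap or Sobel test on the survey responses:

```python
# Simplest product-of-coefficients mediation sketch:
#   X (growth mindset) -> M (entrepreneurial efficacy) -> Y (intention).
# All numbers are made up for illustration.
def slope(x, y):
    """Least-squares slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

mindset  = [1.0, 2.0, 3.0, 4.0]   # X: growth mindset score
efficacy = [1.5, 2.5, 3.5, 4.5]   # M: entrepreneurial efficacy
intent   = [1.0, 3.0, 5.0, 7.0]   # Y: entrepreneurial intention

a = slope(mindset, efficacy)      # X -> M path
b = slope(efficacy, intent)       # M -> Y path (X not partialled out here)
indirect = a * b                  # mediated effect of X on Y
```

Note the sketch omits controlling for X in the M→Y regression; the paper's actual mediation test would do so.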

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.151-164 / 2024
  • In water treatment plants supplying potable water, managing the chlorine concentration in processes that involve pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of the sedimentation basins in the treatment process. An AI-based model, which learns from past water quality observations to predict future water quality, offers a simpler and more efficient approach than complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at the study plant using multiple regression models and AI-based models such as Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model took as independent variables the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, raw water inflow, alkalinity, NH3, and so on, with the desired residual chlorine concentration of the sedimentation basin effluent as the dependent variable. The independent variables were selected from data observable at the plant that influence the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that the Random Forest-based model had the lowest error among the multiple regression, neural network, model tree, and Random Forest models compared. The optimal prediction of downstream residual chlorine concentration presented in this study is expected to enable real-time control of chlorine dosing in the preceding treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
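The study's Random Forest and LSTM models need a machine-learning library, but the multiple-regression baseline they are compared against can be sketched in miniature. Below, a single predictor (upstream residual chlorine) stands in for the paper's full set of independent variables, and the observations are fabricated:

```python
# Miniature of the multiple-regression baseline: predict downstream
# residual chlorine from upstream observations. One predictor only,
# and the (x, y) pairs are fabricated for illustration (mg/L).
def fit_simple_ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

upstream_cl   = [1.0, 1.2, 1.4, 1.6]    # residual chlorine upstream of basin
downstream_cl = [0.8, 0.95, 1.1, 1.25]  # residual chlorine downstream

intercept, slope = fit_simple_ols(upstream_cl, downstream_cl)
predicted = intercept + slope * 1.8     # forecast for a new upstream reading
```

The real model would add turbidity, pH, temperature, conductivity, inflow, alkalinity and NH3 as further regressors, and the AI models replace this linear map with learned nonlinear ones.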

Analysis of the Case of Separation of Mixtures Presented in the 2015 Revised Elementary School Science 4th Grade Authorized Textbook and Comparison of the Concept of Separation of Mixtures between Teachers and Students (2015 개정 초등학교 과학과 4학년 검정 교과서에 제시된 혼합물의 분리 사례 분석 및 교사와 학생의 혼합물 개념 비교)

  • Chae, Heein;Noh, Sukgoo
    • Journal of Korean Elementary Science Education / v.43 no.1 / pp.122-135 / 2024
  • The purpose of this study was to analyze the examples presented in the 'Separation of Mixtures' unit of the 2015 revised authorized science textbooks introduced in elementary schools in 2022, and to examine how teachers and students understand the concept. To that end, 96 keywords for the components of the mixtures presented in the textbooks were extracted through three cleansing passes. To analyze teachers' perceptions, responses to a survey were collected from 32 elementary school teachers in Gyeonggi-do, and a survey of 92 fourth graders who had learned mixture separation from an authorized textbook in 2022 was used for the analysis. As a result, solids accounted for the largest share of the separation cases, 54 of 96 (56.3%), with the most examples presented in accordance with students' developmental stage, followed by living things, liquids, other objects and substances, and gases. Through network analysis, the structure of and interrelationships among the 96 extracted keywords were systematized, and the connections among keywords belonging to the same mixture were analyzed. The teachers partially recognized the complex mixture separations presented in the textbooks, but most students did not. Teachers' and students' responses to the seven separation categories in the survey were not based on a clear conception of mixture separation; rather, they tended to vary with the characteristics of each individual category. It was therefore concluded that clearer examples of mixture separation need to be presented so that students can better understand this somewhat abstract concept.

Open Skies Policy : A Study on the Alliance Performance and International Competition of FFP (항공자유화정책상 상용고객우대제도의 제휴성과와 국제경쟁에 관한 연구)

  • Suh, Myung-Sun;Cho, Ju-Eun
    • The Korean Journal of Air & Space Law and Policy / v.25 no.2 / pp.139-162 / 2010
  • In terms of the international air transport, the open skies policy implies freedom in the sky or opening the sky. In the normative respect, the open skies policy is a kind of open-door policy which gives various forms of traffic right to other countries, but on the other hand it is a policy of free competition in the international air transport. Since the Airline Deregulation Act of 1978, the United States has signed an open skies agreement with many countries, starting with the Netherlands, so that competitive large airlines can compete in the international air transport market where there exist a lot of business opportunities. South Korea now has an open skies agreement with more than 20 countries. The frequent flyer program (FFP) is part of a broad-based marketing alliance which has been used as an airfare strategy since the U.S. government's airline deregulation. The membership-based program is an incentive plan that provides mileage points to customers for using airline services and rewards customer loyalty in tangible forms based on their accumulated points. In its early stages, the frequent flyer program was focused on marketing efforts to attract customers, but now in the environment of intense competition among airlines, the program is used as an important strategic marketing tool for enhancing business performance. Therefore, airline companies agree that they need to identify customer needs in order to secure loyal customers more effectively. The outcomes from an airline's frequent flyer program can have a variety of effects on international competition. First, the airline can obtain a more dominant position in the air flight market by expanding its air route networks. Second, the availability of flight products for customers can be improved with an increase in flight frequency. Third, the airline can preferentially expand into new markets and thus gain advantages over its competitors. 
However, there are few empirical studies on airline frequent flyer programs. Accordingly, this study aims to explore the effects of the program on international competition, after reviewing the types of strategic alliances between airlines. Forming strategic airline alliances is a worldwide trend resulting from the open skies policy, and South Korea also needs to make its open skies agreements more substantive to promote the growth and competitiveness of domestic airlines. The present study concerns the performance of the airline frequent flyer program and international competition under the open skies policy. With a sample of five global alliance groups (Star, Oneworld, Wings, Qualiflyer and Skyteam), it empirically examines the effects that the resource structures and the levels of information technology held by the airlines in each group have on the type of alliance; one-way analysis of variance and regression analysis were used to test the hypotheses. The findings suggest that both large airlines and small and medium-sized airlines in an alliance group with global networks and organizations are able to achieve high performance and secure international competitiveness. Airline passengers earn mileage points by using non-flight services through an alliance network of hotels, car-rental services, duty-free shops, travel agents and more, and show high interest in and preference for the related service benefits. Therefore, Korean airlines should develop more aggressive marketing programs based on multilateral alliances with other services, including hotels, as well as with other airlines.


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to predefined shapes. These kinds of functions can cover a large spectrum of applications with limited use of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can be registered as well: for each of the given fuzzy sets, many elements of the universe of discourse may have a membership value equal to zero, and it has been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. The numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension in bits of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24, and the memory dimension is therefore 128 × 24 bits. Had we chosen to memorize all values of the membership functions, each memory row would have held the membership value of every fuzzy set: the fuzzy-set word dimension would be 8 × 5 bits, and the memory dimension would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64 and 96 of the universe of discourse, they are memorized as shown. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net); if the index equals the bus value, one of the non-null weights derived from the rule is produced as output, otherwise the output is zero (fig. 2). The memory dimension of the antecedent is thus reduced, since only non-null values are memorized, while the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse; from our study of fuzzy systems, typically nfm ≤ 3, and there are at most 16 membership functions. In any case, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
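The memory-sizing arithmetic in the abstract can be checked directly. Using the abstract's own figures (128 universe elements, 8 fuzzy sets, 32 discretization levels, at most nfm = 3 non-null memberships per element):

```python
# Word-length and memory-size arithmetic from the abstract:
#   Length = nfm * (dm(m) + dm(fm))
# with 128 universe elements, 8 fuzzy sets, 32 truth levels.
def word_length(nfm, dm_m, dm_fm):
    """Bits per memory row under the sparse (non-null-only) scheme."""
    return nfm * (dm_m + dm_fm)

dm_m  = 5   # bits for 32 discretized membership levels
dm_fm = 3   # bits to index one of 8 membership functions
nfm   = 3   # max non-null memberships per universe element

length    = word_length(nfm, dm_m, dm_fm)  # bits per row (24)
proposed  = 128 * length                   # sparse scheme, total bits
vectorial = 128 * (8 * dm_m)               # memorizing every membership value
```

The sparse scheme stores 128 × 24 bits against 128 × 40 for full vectorial memorization, the saving the paper claims, with no loss of time performance since the weights are recovered by the combinatorial network.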


A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.125-140 / 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class, 'imbalanced data set'. Most of the classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a certain classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class, and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions which are correctly identified as such. Specificity measures the proportion of churns which are correctly identified as such. The poor performance of the classification techniques on imbalanced data sets is due to the low value of specificity. Many previous researches on imbalanced data sets employed 'oversampling' technique where members of the minority class are sampled more than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN) and decision tree, that improves specificity while maintaining sensitivity. We named this hybrid model 'hybrid SVM model.' The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I model and ANN_I model are constructed using the imbalanced data set, and SVM_B model is constructed using the balanced data set. SVM_I model is superior in sensitivity and SVM_B model is superior in specificity. For a record on which both SVM_I model and SVM_B model make the same prediction, that prediction becomes the final solution. 
If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree. For records on which SVM_I and SVM_B make different predictions, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. What this research presents is the structure or framework of the hybrid SVM model, not a specific threshold value such as 0.285; the threshold in the discrimination rules can therefore be set to any value depending on the data. To evaluate the performance of the hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, better than that of either SVM_I or SVM_B. The points worth noting are its sensitivity, 95.02%, and specificity, 69.24%: the sensitivity of SVM_I is 94.65%, and the specificity of SVM_B is 67.00%. The hybrid SVM model developed in this research therefore improves the specificity of SVM_B while maintaining the sensitivity of SVM_I.
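The hybrid model's combination step reduces to a small decision rule, which can be sketched as follows. Training the underlying SVM and ANN models needs a library; only the agreement/disagreement logic is shown, with the paper's 0.285 threshold as a configurable default:

```python
# Combination step of the hybrid SVM model: when SVM_I (trained on the
# imbalanced set) and SVM_B (trained on the oversampled set) agree,
# their answer is final; on disagreement, the ANN_I output is
# thresholded by the rule the paper mined with a decision tree.
def hybrid_predict(svm_i_pred, svm_b_pred, ann_i_output, threshold=0.285):
    """Return 'retention' or 'churn' for one record."""
    if svm_i_pred == svm_b_pred:
        return svm_i_pred                      # models agree: done
    # Disagreement: fall back to the discrimination rule.
    return "retention" if ann_i_output < threshold else "churn"
```

As the abstract notes, 0.285 is specific to the UCI churn data; for other data sets the threshold would be re-learned from the disagreement records.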

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are, however, some exceptions. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of the reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as the focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and consequently suggests a less aggressive used car price discount in response to the new car's rebate than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, remains a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be greater than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure arising from a typical car dealership. A typical dealership displays both new and used cars of the same model. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study: customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
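The headline result of the simulations, that the IIA model understates the cross elasticity between new and used cars of the same model, can be checked with a small finite-difference sketch. The utilities and price coefficient below are hypothetical, not the study's estimates; the share function mirrors the two-stage structure described above.

```python
import math

def nl_shares(utils, lam):
    """Two-level nested logit shares; lam = 1 reduces to plain MNL (IIA)."""
    iv = {m: math.log(sum(math.exp(v / lam) for v in c.values()))
          for m, c in utils.items()}
    denom = sum(math.exp(lam * x) for x in iv.values())
    out = {}
    for m, c in utils.items():
        top = math.exp(lam * iv[m]) / denom
        within = sum(math.exp(v / lam) for v in c.values())
        for k, v in c.items():
            out[(m, k)] = top * math.exp(v / lam) / within
    return out

def cross_elasticity(utils, lam, changed, observed, beta, price, eps=1.0):
    """Percent change in `observed`'s share per percent change in
    `changed`'s price, by finite differences (beta: price coefficient)."""
    base = nl_shares(utils, lam)[observed]
    bumped = {m: dict(c) for m, c in utils.items()}
    bumped[changed[0]][changed[1]] -= beta * eps  # raise price by eps dollars
    new = nl_shares(bumped, lam)[observed]
    return ((new - base) / base) / (eps / price)
```

For a same-model pair (e.g., the used Jetta's share with respect to the new Jetta's price), the cross elasticity at `lam < 1` exceeds the `lam = 1` (IIA) value, which is the mechanism behind the more aggressive used car discounts the nested model recommends.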
