• Title/Summary/Keyword: The Industry Network


Standardization of Identification-number for Processed Food in Food-traceability-system (가공식품에 대한 이력추적관리번호 부여체계의 표준화 방안)

  • Choi, Joon-Ho
    • Journal of Food Hygiene and Safety
    • /
    • v.27 no.2
    • /
    • pp.194-201
    • /
    • 2012
  • Facing a number of global food-related accidents, many countries have designed and introduced food-traceability concepts and systems to manage food-safety risks. To connect and harmonize the various pieces of traceability information in a food-traceability system along the food supply chain, the coding system for the food-traceability identification number has to be standardized. The globally standardized and implemented GTIN (Global Trade Item Number) barcode system is reviewed together with the mandatory food-labeling regulation on the expiration date of processed foods. Integrating the GTIN-13 barcode system into food traceability is a crucial factor in expanding its function across food-related industries. This paper proposes a standard coding system for the food-traceability identification number: a 20-digit code combining the GTIN-13 barcode (13 digits), the expiration date (6 digits), and an additional classification code (1 digit). The proposed coding system has several advantages in applications such as blocking the sale of hazardous goods, food recalls, and querying food-traceability information. It could also enhance the food-traceability system by communicating and harmonizing information with national networks such as UNI-PASS and the electronic tax-invoice system. For global application, the food-traceability identification number needs to be harmonized with upcoming global standards such as the GS1-128 barcode and GS1 DataBar.
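The 20-digit composition described above (GTIN-13, six-digit expiration date, one-digit classification code) can be sketched as follows. The GTIN value, the class code, and the function names are hypothetical illustrations, not part of the proposal itself; the check-digit routine is the standard GS1 mod-10 rule.

```python
from datetime import date

def gtin13_check_digit(first12: str) -> int:
    """Standard GS1 mod-10 check digit over the first 12 digits of a GTIN-13
    (weights alternate 1, 3 from the leftmost digit)."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def traceability_number(gtin13: str, expiry: date, class_code: int) -> str:
    """20-digit ID: GTIN-13 (13) + expiration date as YYMMDD (6) + class code (1)."""
    if len(gtin13) != 13 or not gtin13.isdigit():
        raise ValueError("GTIN-13 must be 13 digits")
    if int(gtin13[-1]) != gtin13_check_digit(gtin13[:12]):
        raise ValueError("invalid GTIN-13 check digit")
    if not 0 <= class_code <= 9:
        raise ValueError("classification code must be a single digit")
    return f"{gtin13}{expiry:%y%m%d}{class_code:1d}"

# "8801234567893" is a made-up GTIN whose check digit happens to validate.
tid = traceability_number("8801234567893", date(2012, 6, 30), 5)
print(tid)  # 20 digits: 13 + 6 + 1
```

Validating the GTIN-13 check digit before concatenation ensures a corrupted barcode read cannot silently produce a well-formed but wrong traceability number.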

A study on security independent behavior in social game using expanded health belief model (건강신념모델을 확장한 소셜게임(Social Game) 보안의지행동에 관한 연구)

  • Ahn, Ho-Jeong;Kim, Sung-Jun;Kwon, Do-Soon
    • Management & Information Systems Review
    • /
    • v.35 no.2
    • /
    • pp.99-118
    • /
    • 2016
  • With the development of the Internet and the popularization of smartphones in recent years, social network services have grown rapidly. On top of this, the smartphone gaming market is growing fast and the use of mobile social games is rising significantly. Game-data manipulation and personal-information leakage targeting these services highlight the importance of social-game security. This study identifies the factors that affect the security-dependent behavior of social game users in Korea and empirically examines the causal relationships through which these factors influence such behavior via perceived behavioral control and attitudes toward privacy infringement, in order to propose effective and efficient development plans for social game services. To this end, we propose a research model that expands the HBM (Health Belief Model) and applies its constructs as the major variables influencing users' security-dependent behavior. To validate the model empirically, a survey was conducted among students at two Seoul-based universities, K University and S University, who had experience using social game services. The findings are as follows. First, perceived severity had a positive effect on trust, but not on self-efficacy. Second, perceived susceptibility had no positive effect on either self-efficacy or trust. Third, perceived benefits had a positive effect on both self-efficacy and trust. Fourth, perceived barriers had no positive effect on self-efficacy or trust. Fifth, self-efficacy had a positive effect on trust but not on security-dependent behavior. Sixth, trust had no positive effect on security-dependent behavior. Based on these findings, this study offers strategic suggestions for raising social game users' awareness of their security perceptions and their willingness to behave securely.

  • PDF

Theoretical Study on Modeling Success Factors of Overseas Agricultural Startups (해외 농업스타트업 성공요인 모델링에 관한 이론적 고찰)

  • Jinhwan, Park;Sangsoon, Kim
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.1
    • /
    • pp.85-106
    • /
    • 2023
  • This study reviewed and derived the success factors of overseas agricultural startups and developed an integrated research model for them. From a resource-based perspective, agricultural startups share with general startups the problem of scarce resources and infrastructure after founding, but the primary-industry nature of agriculture requires an approach differentiated from that of general startups. We examined company-internal factors (human resources, vision, distribution-network capacity, capital capacity, cultivated crops, physical resources, farming technology, etc.) and external factors (agricultural infrastructure, laws and regulations, relationships with the surrounding community, etc.), and sought to build an integrated research model centered on existing research models, success factors, and entrepreneurship. Through this, we aim to present an integrated model that is practically helpful for business performance to entrepreneurs, practitioners, and researchers who need an integrated understanding of agricultural startups at home and abroad. To model agricultural-startup success factors, a standard model was established through three approaches according to size and characteristics (existing agricultural startups, small and medium-sized business startups, and multinational companies), together with a comprehensive approach. From these, a total of nine success factors were derived: agricultural management, external environment, manager/founder characteristics, corporate identity, business management, organizational culture, infrastructure, commercialization capability, and sustainable growth. The implication of this study is that the success factors of agricultural startups were comprehensively presented on the basis of 'entrepreneurship' across various domestic and foreign agricultural-startup cases. By confirming this systematic categorization, the study presents a standard model for future agricultural-startup success factors, laying a foundation for systematic research and for the practical effectiveness of related work in the future.

  • PDF

Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young;Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.59-88
    • /
    • 2008
  • Digital convergence means integration across industries, technologies, and contents; in marketing, it usually brings new types of products and services built on digital technology as digitalization progresses in electro-communication industries, including telecommunications, home appliances, and computers. Digital convergence appears not only in devices such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required for the production, modification, distribution, and reproduction of information. Convergence in contents started around 1990; convergence in networks and services began as broadcasting and telecommunication integrated, and DMB (digital multimedia broadcasting), launched in May 2005, is the symbolic icon of this trend. There are both positive and negative expectations about DMB. These opposite expectations coexist because DMB arose not from customer needs but from technology development, so customers may have a hard time interpreting what DMB really means. Time is critical for a high-tech product like DMB, because another product with the same function from a different technology can replace it within a short period. If DMB does not position itself in customers' minds quickly, other products such as WiBro, IPTV, or HSDPA could replace it before it even spreads. Therefore, positioning strategy is critical to DMB's success. To build a correct positioning strategy, one needs to understand how consumers interpret DMB and how that interpretation can be changed via communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how advertising strategy changes that perception. More specifically, we segment consumers into sub-groups based on their DMB perceptions and compare their characteristics in order to understand how they perceive DMB.
We then expose them to different printed ads whose messages guide consumers to think of DMB in a specific way, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to their perceptions of DMB and compare the characteristics of the segments. Research Question 2: Compare perceptions of DMB after exposure to an ad that induces categorization of DMB in a given direction for each segment. If a firm can understand and predict the direction in which consumers will perceive a new product, it can select target customers easily. We segment consumers according to their perceptions and analyze their characteristics to find variables that can influence perceptions, such as prior experience, usage, or habit; marketers can then use these variables to identify target customers and predict their perceptions. Knowing how customers' perceptions change via ad messages allows communication strategy to be constructed properly; in particular, information from segmented customers helps develop an efficient ad strategy for each segment with its prior perception. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumers' perceptions and characteristics. Based on it, we segment consumers into two groups: one perceives DMB as similar to a cellular phone, the other as similar to a TV. We compare the characteristics of the two segments to find out why they perceive DMB differently. Next, we expose subjects to two kinds of ads: one describes DMB as a cellular phone, the other as a personal TV. At exposure time, subjects are not sorted by their prior perception of DMB, i.e., by whether they belong to the 'similar-to-cellular-phone' or 'similar-to-TV' segment; however, we analyze the ad's effect separately for each segment. In the research design, the final observation investigates the ad effect.
Perception before the ad is compared with perception after the ad, for each segment and for each ad. For the segment that perceives DMB as similar to a TV, an ad describing DMB as a cellular phone could change the prior perception, while an ad describing DMB as a personal TV could reinforce it. For data collection, subjects are selected from undergraduate students because they have basic knowledge about most digital equipment and an open attitude toward new products and media. The total number of subjects is 240. To measure perception of DMB, we use an indirect measurement: comparison with other similar digital products. To select these, we pre-surveyed students and finally chose PDA, Car-TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment is conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB and the other digital devices are asked, and their similarities and perceived characteristics are measured. Then two kinds of manipulated color-printed ads are distributed, and the similarities and perceived characteristics of DMB are re-measured. Finally, purchase intention, ad attitude, manipulation checks, and demographic variables are asked, and subjects are given a small gift for participation. The stimuli are color-printed advertisements of actual size A4, created after several pre-tests with advertising professionals and students. As a result, consumers are segmented into two subgroups based on their perceptions of DMB, using the similarity measure between DMB and cellular phone and the similarity measure between DMB and TV. Subjects whose first measure is less than the second are classified into segment A, characterized as perceiving DMB like a TV; otherwise, they are classified into segment B, which perceives DMB like a cellular phone.
Discriminant analysis on these groups, with their usage and attitude characteristics, shows that segment A knows much about DMB and uses many digital devices, whereas segment B, which thinks of DMB as a cellular phone, knows little about DMB and is less familiar with other digital devices. Thus, consumers with more knowledge perceive DMB as similar to a TV, because the launch advertising for DMB led consumers to think of it as a TV; consumers with less interest in digital products are less aware of the DMB advertising and therefore think of DMB as a cellular phone. To investigate perceptions of DMB and the other digital devices, we apply PROXSCAL, a multidimensional scaling technique in the SPSS statistical package. In the first step, subjects are presented with 21 pairs of the 7 digital devices and give similarity judgments on a 7-point scale; for each segment, the similarity judgments are averaged and a similarity matrix is constructed. Second, PROXSCAL analyses of segments A and B are run. In the third stage, similarity judgments between DMB and the other devices are obtained after ad exposure. Lastly, the similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB', put into the matrices made in the first stage, and PROXSCAL is applied to check the positional difference between DMB and 'after DMB'. The map of segment A, which perceives DMB as similar to a TV, shows DMB positioned closer to TV than to cellular phone, as expected; the map of segment B shows DMB positioned closer to cellular phone than to TV, as expected. The stress values and R-squares are acceptable. The results after the manipulated advertising stimuli show that the ad bends the DMB perception toward cellular phone when the cellular-phone-like ad is shown, and moves the DMB position toward Car-TV, the more personalized device, when the TV-like ad is shown. This holds consistently for both segments A and B.
Furthermore, we apply correspondence analysis to the same data and find almost the same results. The paper answers the two main research questions: first, perception of a new product is formed mainly from prior experience; second, advertising is effective in changing and reinforcing perception. In addition, we extend perception change to purchase intention: purchase intention is high when the ad reinforces the original perception, and the ad that shows DMB as a TV produces the worst intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current approach cannot provide a statistical test of the perceptual change, since classical MDS models such as PROXSCAL and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. Next, the advertising messages need to be developed more rigorously from theoretical and managerial perspectives. The experimental procedure could also be improved for more realistic data collection, for example with web-based experiments, real product stimuli, multimedia presentation, or products displayed together in a simulated shop. In addition, demand and social-desirability threats to internal validity could influence the results; to handle these threats, results of the model-intended advertising could be compared with other "pseudo" advertising. Furthermore, one could vary the level of innovativeness to check whether it produces different results (cf. Moon 2006). Finally, if one could create a hypothetical product that is genuinely innovative, it would help create a blank impression state and allow studying impression formation more rigorously.
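The similarity-to-map step described above can be sketched as follows. The paper uses SPSS PROXSCAL; as a stand-in, this sketch computes a classical (Torgerson) MDS with NumPy on an illustrative, randomly generated 7-point similarity matrix — the matrix values are placeholders, not the study's data.

```python
import numpy as np

products = ["PDA", "Car-TV", "Cellular", "MP3", "TV", "PSP", "DMB"]
n = len(products)

# Illustrative averaged 7-point similarity judgments for one segment
# (symmetric, diagonal = 7 meaning identical); real values come from the survey.
rng = np.random.default_rng(42)
upper = np.triu(rng.integers(1, 7, size=(n, n)), 1)
sim = upper + upper.T + 7 * np.eye(n, dtype=int)

D = 7.0 - sim                            # convert similarity to dissimilarity
# Classical (Torgerson) MDS: double-center the squared dissimilarities,
# then take the top-2 eigenvectors scaled by sqrt of their eigenvalues.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

for name, (x, y) in zip(products, coords):
    print(f"{name:>8}: ({x:+.2f}, {y:+.2f})")
```

The positional comparison in the study (DMB before vs. 'after DMB') would then amount to measuring distances between the corresponding rows of two such coordinate maps.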

  • PDF

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) examined the new and used car markets simultaneously to study the effect of new-car model launches on used-car prices, but their studies are limited in that they used the average used-car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this data problem. Park (1998) recognized this problem and used actual prices; his work is notable in that he investigated the qualitative effect of new-car model launches on used-car pricing policy in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model with car-model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, with new and used cars of different models all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales-transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new-car and used-car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during January-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra; these models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car-model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted using the modified Lanczos iterative method, which is intuitively appealing. For example, suppose a new car offers a certain rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo, and the new car settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo, and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as the focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that the reacting cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and thus suggests a less aggressive used-car price discount in response to the new car's rebate than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the Jetta result and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see the best response for the Elantra's new and used cars; interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure produced by a typical car dealership, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
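The two-stage structure described above (car-model choice first, then new versus used within the chosen model's nest) can be sketched as a textbook nested logit. The utility values and the dissimilarity parameter below are hypothetical illustrations, not estimates from the paper.

```python
import math

def nested_logit_probs(utils: dict, lam: float) -> dict:
    """Two-level nested logit: car model at the upper level, new vs. used
    within each model nest. lam is the dissimilarity (inclusive value)
    parameter; 0 < lam <= 1, and lam = 1 collapses to plain multinomial logit."""
    # Inclusive value of each nest: lam * log(sum of exp(V/lam)).
    iv = {m: lam * math.log(sum(math.exp(v / lam) for v in nest.values()))
          for m, nest in utils.items()}
    denom = sum(math.exp(x) for x in iv.values())
    probs = {}
    for m, nest in utils.items():
        p_model = math.exp(iv[m]) / denom              # upper-level choice
        within = sum(math.exp(v / lam) for v in nest.values())
        for cond, v in nest.items():                   # conditional choice
            probs[(m, cond)] = p_model * math.exp(v / lam) / within
    return probs

# Hypothetical utilities (price, rebate, etc. folded into a single number).
utils = {"Jetta":   {"new": 1.0, "used": 0.6},
         "Elantra": {"new": 0.8, "used": 0.9}}
p = nested_logit_probs(utils, lam=0.5)
```

A dissimilarity parameter below 1 makes new and used cars of the same model closer substitutes than cars of different models, which is exactly the asymmetry the IIA model cannot express.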

  • PDF

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • Park, Man-Bae (박만배)
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck-traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck-traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with the corresponding TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
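The gravity-model distribution step described above can be sketched as the standard singly-constrained formulation, where trips from zone i to zone j are proportional to zone i's production, zone j's attraction, and a friction factor that falls with travel impedance. The zone counts, productions, attractions, and friction values below are hypothetical illustrations, not WisDOT data.

```python
import numpy as np

def gravity_model(productions, attractions, friction):
    """Singly-constrained gravity model: T[i, j] = P[i] * A[j] * F[i, j]
    normalized so that each origin row sums to its production P[i]."""
    P = np.asarray(productions, dtype=float)
    A = np.asarray(attractions, dtype=float)
    F = np.asarray(friction, dtype=float)
    weights = A[None, :] * F                          # A_j * F_ij per origin
    return P[:, None] * weights / weights.sum(axis=1, keepdims=True)

# Two hypothetical zones; friction factors would come from calibrated
# friction factor curves per trip type (I-I, I-E, E-E).
P = [100.0, 200.0]
A = [150.0, 150.0]
F = [[1.0, 0.5],
     [0.5, 1.0]]
T = gravity_model(P, A, F)
```

Calibration in the study amounts to adjusting the friction factor curves that generate F until the trip-length frequency distribution of T matches the observed OD TLF.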
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results, because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created, and the GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database; given the very limited trip-generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but these data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation; then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and, if necessary, recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume, and this factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected-link analyses are conducted using both 16 selected links and 32 selected links. SELINK analysis using 32 selected links provides the smallest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck-estimating model is preserved when using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable, because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions; more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional-class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points evaluates the adequacy of the overall model: the total trucks crossing the screenlines are compared to the ground count totals, yielding LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links.
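The SELINK adjustment step just described can be sketched as follows: compute the link adjustment factor as ground count over assigned volume, then scale the productions and attractions of every zone pair whose trips traverse the selected link. The zone names, volumes, and O-D pairs are hypothetical illustrations.

```python
def selink_adjustment(ground_count: float, assigned_volume: float,
                      link_od_pairs, productions: dict, attractions: dict) -> float:
    """Compute the link adjustment factor (ground count / total assigned volume)
    and apply it to the production of each origin zone and the attraction of
    each destination zone whose trips use the selected link."""
    factor = ground_count / assigned_volume
    for origin, dest in link_od_pairs:
        productions[origin] *= factor
        attractions[dest] *= factor
    return factor

# Illustrative: the link carries 1,070 assigned trucks against a ground
# count of 1,000, so zones feeding it are scaled down slightly.
prods = {"A": 500.0, "B": 800.0}
attrs = {"B": 600.0, "C": 700.0}
f = selink_adjustment(1000.0, 1070.0, [("A", "B"), ("A", "C")], prods, attrs)
```

Zones not contributing trips to the selected link (here, production zone B) are left untouched, which is why multiple selected links are needed for statewide coverage.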
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively; these results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively, implying that the SELINK analysis results are reasonable for all sections of the state. Functional-class and route-specific volume analysis is possible using the available 154 classification-count check points. The truck traffic crossing the Interstate highways (ISH, 37 check points), the US highways (USH, 50 check points), and the State highways (STH, 67 check points) is compared to the actual ground count totals. The magnitude of the overall link-volume-to-ground-count ratio by route does not show any specific pattern of over- or underestimation; however, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest, a pattern consistent with the screenline analysis and the overall relationship between %RMSE and ground-count volume groups. Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area (26 check points), the West area (36 check points), the East area (29 check points), and the South area (64 check points) is compared to the actual ground count totals. The four areas show similar results, with no specific pattern in the LV/GC ratio by area. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As in the screenline and volume-range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip-generation model can now be revised with a new trip-production rate (total adjusted productions/total population) and a new trip-attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors; overall, both small and large values are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy-truck trip-generation model; additional variables such as zonal employment data (office and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of these factors around 3.0. No obvious explanation for this frequency distribution could be found, but the revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
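The 18.3% figure quoted above follows directly from the two VMT totals; a quick check, using only the numbers given in the abstract:

```python
# Check of the VMT comparison: the GM model forecast vs. the WisDOT total.
gm_vmt = 2.975      # billion VMT, GM truck forecasting model (1990)
wisdot_vmt = 3.642  # billion VMT, WisDOT computation (1990)
pct_below = 100.0 * (wisdot_vmt - gm_vmt) / wisdot_vmt
print(round(pct_below, 1))  # → 18.3
```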
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that attempts to simulate the biological evolution phenomenon. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. 
In particular, the mutation operator prevents GA from falling into local optima, so we can find the globally optimal or a near-optimal solution using it. GA has been popularly applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry and their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e. the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the total data for each class was used for training, and the remaining 20 percent was used for validation. To overcome the small sample size, we applied five-fold cross validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software package, and Evolver 5.5, a commercial software package that enables GA. The other comparative models were tested using various statistical and AI packages, such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models. 
In addition, the model was found to use fewer independent variables but to show higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (the index related to cash flows from operating activity), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same among the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
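The chromosome encoding and GA loop described in the abstract, and the McNemar comparison used at the end, can be sketched in pure Python as follows. This is a minimal skeleton, not the authors' LIBSVM/Evolver implementation: the fitness function below is a stand-in for cross-validated MSVM accuracy, and the population size, parameter ranges, and disagreement counts are illustrative assumptions (only the 14 candidate features come from the abstract).

```python
import random

N_FEATURES = 14  # candidate financial ratios, as in the study

def random_chromosome(rng):
    # One chromosome jointly encodes the RBF kernel parameters (C, gamma)
    # and a feature-selection bit mask, as in GAMSVM.
    c = rng.uniform(0.1, 100.0)     # penalty parameter C (assumed range)
    gamma = rng.uniform(1e-3, 1.0)  # RBF kernel width (assumed range)
    mask = [rng.random() < 0.5 for _ in range(N_FEATURES)]
    return (c, gamma, mask)

def fitness(chrom):
    # Stand-in objective; a real run would train an MSVM with (c, gamma)
    # on the masked features and return cross-validated accuracy instead.
    c, gamma, mask = chrom
    return sum(mask) * gamma / (1.0 + abs(c - 10.0))

def crossover(a, b, rng):
    c = a[0] if rng.random() < 0.5 else b[0]
    gamma = a[1] if rng.random() < 0.5 else b[1]
    mask = [x if rng.random() < 0.5 else y for x, y in zip(a[2], b[2])]
    return (c, gamma, mask)

def mutate(chrom, rng, rate=0.1):
    # Mutation keeps the search from stalling in local optima.
    c, gamma, mask = chrom
    if rng.random() < rate:
        c = rng.uniform(0.1, 100.0)
    if rng.random() < rate:
        gamma = rng.uniform(1e-3, 1.0)
    mask = [(not m) if rng.random() < rate else m for m in mask]
    return (c, gamma, mask)

def run_ga(generations=30, pop_size=20, seed=42):
    rng = random.Random(seed)
    pop = [random_chromosome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(rng.choice(parents),
                                     rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

def mcnemar_statistic(b, c):
    """McNemar chi-square with continuity correction (1 d.f.).
    b: cases model A classifies correctly and model B does not; c: reverse."""
    return (abs(b - c) - 1) ** 2 / (b + c)

best = run_ga()
print(sum(best[2]), "features selected")
# Illustrative disagreement counts; critical values at 1 d.f.:
# 3.841 (5% level), 6.635 (1% level).
print(round(mcnemar_statistic(25, 8), 2))  # → 7.76
```

The usual design choice here is that selection and crossover exploit good regions of the joint (kernel parameter, feature mask) space, while mutation preserves diversity; swapping the stand-in fitness for cross-validated accuracy turns the skeleton into a wrapper-style optimizer of the GAMSVM kind.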

The Effect of Mutual Trust on Relational Performance in Supplier-Buyer Relationships for Business Services Transactions (재상업복무교역중적매매관계중상호신임대관계적효적영향(在商业服务交易中的买卖关系中相互信任对关系绩效的影响))

  • Noh, Jeon-Pyo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.4
    • /
    • pp.32-43
    • /
    • 2009
  • Trust has been studied extensively in psychology, economics, and sociology, and its importance has been emphasized not only in marketing but in business disciplines in general. Unlike past relationships between suppliers and buyers, which took considerable advantage of private networks and may have involved unethical business practices, partnerships between suppliers and buyers are at the core of success for industrial marketing amid intense global competition in the 21st century. A high level of mutual cooperation occurs through an exchange relationship based on trust, which brings long-term benefits, competitive enhancements, and transaction cost reductions, among other benefits, for both buyers and suppliers. In spite of the important role of trust, existing studies of buyer-supplier situations overlook the role of trust and do not systematically analyze the effect of trust on relational performance. Consequently, an in-depth study that determines the relation of trust to the relational performance between buyers and suppliers of business services is absolutely needed. Business services in this study, which include those supporting the manufacturing industry, are drawing attention as the economic growth engine for the next generation. The Korean government has selected business services as a strategic area for the development of manufacturing sectors. Since the demands for opening business services markets are becoming fiercer, the competitiveness of the business service industry must be promoted now more than ever. The purpose of this study is to investigate the effect of mutual trust between buyers and suppliers on relational performance. Specifically, this study proposed a theoretical model of trust-relational performance in the transactions of business services and empirically tested the hypotheses delineated from the framework. The study suggests strategic implications based on the research findings. 
Empirical data were collected via multiple methods, including telephone, mail, and in-person interviews. Sample companies were knowledge-based companies supplying and purchasing business services in Korea. The present study collected data on a dyadic basis. Each pair of sample companies includes a buying company and its corresponding supplying company. Mutual trust was traced for each pair of companies. This study proposes a model of the trust-relational performance of buying and supplying for business services. The model consists of trust and its antecedents and consequences. The trust of buyers is classified into trust toward the supplying company and trust toward salespersons. Viewing trust at both the individual level and the organizational level is based on the research of Doney and Cannon (1997). Normally, buyers are the subject of trust, but this study supposes that suppliers are subjects as well. In other words, suppliers, like buyers, are subjects of trust, since transactions are normally bilateral. From this point of view, suppliers' trust in buyers is as important as buyers' trust in suppliers. A supplier's trust is influenced by the extent to which it trusts the buying company and the individual buyers. Trust affects the process of supplier selection, which works in a bilateral manner. Suppliers are actively involved in the supplier selection process, working very closely with buyers. In addition, the process is affected by the extent to which each party trusts its partners. The selection process consists of certain steps: recognition, information search, supplier selection, and performance evaluation. 
As a result of the process, both buyers and suppliers evaluate the performance and take corrective actions on the basis of such outcomes as tangible, intangible, and/or side effects. The measurement of trust used for the present study was developed on the basis of the studies of Mayer, Davis and Schoorman (1995) and Mayer and Davis (1999). Based on their recommendations, the three dimensions of trust used for the study include ability, benevolence, and integrity. The original questions were adjusted to the context of the transactions of business services. For example, a question such as "He/she has professional capabilities" has been changed to "The salesperson showed professional capabilities while we talked about our products." The measurement used for this study differs from those used in previous studies (Rotter 1967; Sullivan and Peterson 1982; Dwyer and Oh 1987). The measurements of the antecedents and consequences of trust used for this study were developed on the basis of Doney and Cannon (1997). The original questions were adjusted to the context of transactions in business services. In particular, questions were developed for both buyers and suppliers to address the following factors: reputation (integrity, customer care, good-will), market standing (company size, market share, positioning in the industry), willingness to customize (product, process, delivery), information sharing (proprietary information, private information), willingness to maintain relationships, perceived professionalism, authority empowerment, buyer-seller similarity, and contact frequency. As a consequential variable of trust, relational performance was measured. Relational performance is classified into tangible effects, intangible effects, and side effects. 
Tangible effects include financial performance; intangible effects include improvements in relations, network development, and internal employee satisfaction; side effects include those not included in either the tangible or intangible effects. Three hundred fifty pairs of companies were contacted, and one hundred ten pairs of companies responded. After deleting five company pairs because of incomplete responses, one hundred five pairs of companies were used for data analysis. The response ratio of the companies used for data analysis is 30% (105/350), which is above the average response ratio in industrial marketing research. As for the characteristics of the respondent companies, the majority operate service businesses for both buyers (85.4%) and suppliers (81.8%). The majority of buyers (76%) deal with consumer goods, while the majority of suppliers (70%) deal with industrial goods. This may imply that buyers process the incoming material, parts, and components to produce finished consumer goods. As indicated by their reported length of acquaintance with their partners, suppliers appear to have longer business relationships than do buyers. Hypothesis 1 tested the effects of buyer-supplier characteristics on trust. The salesperson's professionalism (t=2.070, p<0.05) and authority empowerment (t=2.328, p<0.05) positively affected buyers' trust toward suppliers. On the other hand, authority empowerment (t=2.192, p<0.05) positively affected supplier trust toward buyers. For both buyers and suppliers, the degree of authority empowerment plays a crucial role in the maintenance of their trust in each other. Hypothesis 2 tested the effects of buyer-seller relational characteristics on trust. Buyers tend to trust suppliers, as suppliers make every effort to contact buyers (t=2.212, p<0.05). This tendency has also been shown to be much stronger for suppliers (t=2.591, p<0.01). 
On the other hand, suppliers trust buyers because suppliers perceive buyers as being similar to themselves (t=2.702, p<0.01). This finding confirms the results of Crosby, Evans, and Cowles (1990), which reported that suppliers and buyers build relationships through regular meetings, whether for business or personal matters. Hypothesis 3 tested the effects of trust on perceived risk. It has been found that for both suppliers and buyers, the lower the trust, the higher the perceived risk (t=-6.621, p<0.01 for buyers; t=-2.437, p<0.05 for suppliers). Interestingly, this tendency has been shown to be much stronger for buyers than for suppliers. One possible explanation for this higher level of perceived risk is that buyers normally perceive higher risks than do suppliers in transactions involving business services. For this reason, it is necessary for suppliers to implement risk reduction strategies for buyers. Hypothesis 4 tested the effects of trust on information searching. It has been found that for both suppliers and buyers, contrary to expectation, trust depends on their partner's reputation (t=2.929, p<0.01 for buyers; t=2.711, p<0.05 for suppliers). This finding shows that suppliers with good reputations tend to be trusted. Prior experience did not show any significant relationship with trust for either buyers or suppliers. Hypothesis 5 tested the effects of trust on supplier/buyer selection. Unlike buyers, suppliers tend to trust buyers when they think that previous transactions with buyers were important (t=2.913, p<0.01). However, this study did not show any significant relationship between source loyalty and the trust of buyers in suppliers. Hypothesis 6 tested the effects of trust on relational performance. For buyers and suppliers, financial performance reportedly improved when they trusted their partners (t=2.301, p<0.05 for buyers; t=3.692, p<0.01 for suppliers). It is interesting that this tendency was much stronger for suppliers than it was for buyers. 
Similarly, competitiveness was reported to improve when buyers and suppliers trusted their partners (t=3.563, p<0.01 for buyers; t=3.042, p<0.01 for suppliers). For suppliers, efficiency and productivity were reportedly improved when they trusted buyers (t=2.673, p<0.01). Other performance indices showed insignificant relationships with trust. The findings of this study have some strategic implications. First and most importantly, trust-based transactions are beneficial for both suppliers and buyers. As verified in the study, financial performance can be improved through efforts to build and maintain mutual trust. Similarly, competitiveness can be increased through the same kinds of effort. Second, trust-based transactions can facilitate the reduction of perceived risks inherent in the purchasing situation. This finding has implications for both suppliers and buyers. It is generally believed that buyers perceive higher risks in a highly involved purchasing situation. To reduce risks, previous studies have recommended that suppliers devise risk-reducing tactics. Moving beyond these recommendations, the present study uniquely focused on the bilateral perspective of perceived risk. In other words, suppliers are also susceptible to perceived risks, especially when they supply services that require very technical and sophisticated manipulations and maintenance. Consequently, buyers and suppliers must solve problems together in close collaboration. Hence, mutual trust plays a crucial role in the problem-solving process. Third, as found in this study, the more authority a salesperson has, the more he or she can be trusted. This finding is very important with regard to tactics. Building trust is a long-term assignment; however, when mutual trust has not been developed, suppliers can overcome the problems they encounter by empowering a salesperson with the authority to make certain decisions. This finding applies to suppliers as well.
