• Title/Summary/Keyword: Price index


Effects of Nonverbal Communication of Flight Attendants on Customer Engagement and Brand Intimacy (항공사 승무원의 비언어 커뮤니케이션이 고객 인게이지먼트 및 브랜드 친밀감에 미치는 영향)

  • Yuna Choi;Namho Chung
    • Knowledge Management Research
    • /
    • v.24 no.2
    • /
    • pp.185-209
    • /
    • 2023
  • The air travel industry, which had shrunk with COVID-19, is gaining wings again. Accordingly, this study investigated whether the non-verbal communication factors experienced through interaction with airline flight attendants affect customer engagement and brand intimacy, for passengers who had traveled abroad on domestic airlines within the past year. A total of 285 samples were collected, and the SPSS 28 and AMOS 26 programs were used to verify the reliability and validity of the research instrument, the fit of the model, and the hypotheses. The empirical analysis confirmed that paralanguage and proxemics in flight attendants' non-verbal communication had a significant effect on customer engagement. Although this differs from the results of previous studies, reflecting changes in perspective after COVID-19, it once again confirmed the importance of airline crew communication in providing face-to-face services at the interface with passengers in order to induce customer engagement, a new customer satisfaction management index. In addition, customer engagement was confirmed to have a significant effect on brand intimacy. These results support the view that new customer management indicators of emotion and relationship marketing need to be established in addition to existing marketing centered on price reduction or securing loyalty. Interactions with flight attendants can contribute to customer engagement, and these results have important implications for those working in the air transportation industry.

The Dynamics of Film Genre Box Office Success: Macro-Economic Conditions, Fashion Momentum, and Inter-Genre Competition (영화 장르 흥행의 동학: 거시경제, 유행의 동력, 장르 간 경쟁의 효과)

  • Dong-Il Jung;Yeseul Kim;Chaewon Ahn;Youngmin Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.2
    • /
    • pp.389-397
    • /
    • 2023
  • This study examines how macro-economic conditions, fashion momentum, and inter-genre competition affect the popularity of movie genres, thus shaping fashion trends in the feature film market in Korea. Using panel data analysis of genre-specific audience sizes with 6 genre categories and 132 monthly time points, we found that favorable economic conditions generate a fashion trend toward the action/crime genre, while deteriorating economic conditions lead to its decline. The finding implies that economic situations influence cultural consumers' psychological states, which in turn shape the fashion trend in a certain direction. Furthermore, we found that the action/crime genre has greater fashion momentum and stronger competitive power than other genres, suggesting that this genre has a longer fashion cycle even when other genres rise to the top in popularity. We argue that the lengthened fashion cycle and competitive strength of the action/crime genre are associated with its niche width and audience loyalty. Scholarly and practical implications are discussed.

Automatic 3D data extraction method of fashion image with mannequin using watershed and U-net (워터쉐드와 U-net을 이용한 마네킹 패션 이미지의 자동 3D 데이터 추출 방법)

  • Youngmin Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.825-834
    • /
    • 2023
  • The demand from people who purchase fashion products through Internet shopping is gradually increasing, and attempts are being made to provide user-friendly 3D content and web 3D software instead of the pictures and videos of products currently provided. Behind this issue, which has emerged as one of the most important in the fashion web shopping industry, are growing complaints that the product received differs from the image shown at the time of purchase. Various image processing technologies have been introduced to solve this problem, but there is a limit to the quality of 2D images. In this study, we propose an automatic conversion technology that converts 2D images into 3D, grafts them onto web 3D technology that allows customers to inspect products from various positions, and reduces the cost and computation time required for conversion. We developed a system that photographs a mannequin placed on a rotating turntable using only 8 cameras. To extract only the clothing from the images taken by this system, markers are removed using U-net, and an algorithm is proposed that extracts only the clothing area by identifying the color feature information of the background and mannequin areas. Using this algorithm, extracting the clothing area from a captured image takes 2.25 seconds per image, or a total of 144 seconds (2 minutes and 24 seconds) for the 64 images of one piece of clothing. The method extracts 3D objects with very good performance compared to existing systems.
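The color-feature extraction step described above can be illustrated with a minimal sketch. The known background and mannequin colors, the tolerance, and the tiny synthetic frame are all assumptions for illustration, and the U-net marker-removal stage is not reproduced here:

```python
import numpy as np

def clothing_mask(img, bg_color, mannequin_color, tol=30):
    """Keep pixels whose color is far from both the background and the
    mannequin colors (sum of absolute RGB differences > tol)."""
    img = img.astype(int)
    d_bg = np.abs(img - np.array(bg_color)).sum(axis=2)
    d_mq = np.abs(img - np.array(mannequin_color)).sum(axis=2)
    return (d_bg > tol) & (d_mq > tol)

# Tiny synthetic frame: green background, gray mannequin, one red garment pixel
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:] = (0, 255, 0)              # background
img[1:3, 1:3] = (128, 128, 128)   # mannequin
img[2, 2] = (200, 0, 0)           # garment
mask = clothing_mask(img, (0, 255, 0), (128, 128, 128))
print(mask.sum())  # 1 clothing pixel detected
```

A real pipeline would estimate the two reference colors from the image itself (e.g. from border pixels and exposed mannequin regions) rather than hard-coding them.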

Risk of Flood Damage Potential and Design Frequency (홍수피해발생 잠재위험도와 기왕최대강수량을 이용한 설계빈도의 연계)

  • Park, Seok Geun;Lee, Keon Haeng;Kyung, Min Soo;Kim, Hung Soo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.5B
    • /
    • pp.489-499
    • /
    • 2006
  • The Potential Flood Damage (PFD) index is widely used to represent the degree of potential flood damage. However, it cannot be related to the design frequency of a river basin, which limits its use in the water resources field. Therefore, in this study, the concept of the Potential Risk for Flood Damage Occurrence (PRFD), which can be related to the design frequency, was introduced and estimated. The PRFD has three main elements: hazard, exposure, and vulnerability. Hazard means the probability of occurrence of a flood event, exposure represents the degree to which property is exposed to the flood hazard, and vulnerability represents the weakness of flood prevention measures. These elements were divided into sub-elements: hazard is explained by the frequency-based rainfall; exposure has two sub-elements, population density and official land price; and vulnerability has two sub-elements, an underdevelopment index and flood defence capability. Each sub-element is estimated, and the estimated values are rescaled to the range of 0 to 100. The Analytic Hierarchy Process (AHP) is applied to determine the weighting coefficients in the PRFD equation. The PRFD for the Anyang river basin and the corresponding design frequency are estimated using the maximum recorded rainfall. The existing design frequency for the Anyang river basin is in the range of 50 to 200, while the design frequency estimated from the PRFD in this study is in the range of 110 to 130. The developed method estimates the PRFD and the design frequency for administrative districts; its application to watersheds and river channels is left for future study.
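The composite-index construction described above (sub-elements rescaled to 0-100, then combined with AHP weights) can be sketched as follows. All bounds, raw values, and weights here are illustrative assumptions, not the paper's calibration:

```python
def rescale(x, lo, hi):
    """Linearly rescale a raw sub-element value onto the 0-100 range."""
    return 100.0 * (x - lo) / (hi - lo)

def composite(values, weights):
    """AHP-style weighted sum of already-rescaled sub-elements."""
    assert abs(sum(weights) - 1.0) < 1e-9   # AHP weights sum to one
    return sum(v * w for v, w in zip(values, weights))

# Hypothetical raw values and bounds for one administrative district
hazard        = rescale(350, 200, 400)                      # frequency-based rainfall (mm)
exposure      = 0.5 * rescale(12_000, 0, 20_000) \
              + 0.5 * rescale(3.0e6, 0, 5.0e6)              # population density, land price
vulnerability = 0.5 * rescale(40, 0, 100) \
              + 0.5 * rescale(55, 0, 100)                   # underdevelopment, flood defence
prfd = composite([hazard, exposure, vulnerability], [0.5, 0.3, 0.2])
print(prfd)
```

In the paper the top-level and sub-element weights come from AHP pairwise comparisons; the equal sub-weights above are placeholders.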

Categorizing Quality Features of Franchisees: In the case of Korean Food Service Industry (프랜차이즈 매장 품질요인의 속성분류: 국내 외식업을 중심으로)

  • Byun, Sook-Eun;Cho, Eun-Seong
    • Journal of Distribution Research
    • /
    • v.16 no.1
    • /
    • pp.95-115
    • /
    • 2011
  • Food service is the major part of the franchise business in Korea, accounting for 69.9% of the brands in the market. As the food service industry matures, many franchisees have struggled to survive in the market. In general, consumers have higher expectations of the service quality of franchised outlets compared to that of (non-franchised) independent ones. They also tend to believe that franchisees deliver standardized service at a uniform food price, regardless of their locations. Such beliefs seem to be important reasons that consumers prefer franchised outlets to independent ones. Nevertheless, few studies have examined the impact of quality features of franchisees on customer satisfaction so far. To this end, this study examined the characteristics of various quality features of franchisees in the food service industry with regard to their relationship with customer satisfaction and dissatisfaction. The quality perceptions of heavy users were also compared with those of light users in order to find insights for developing differentiated marketing strategies for the two segments. Customer satisfaction has been understood as a one-dimensional construct, although recent studies insist on the two-dimensional nature of the construct. In this regard, Kano et al. (1984) suggested categorizing the quality features of a product or service into five types, based on their relation to customer satisfaction and dissatisfaction: Must-be quality, Attractive quality, One-dimensional quality, Indifferent quality, and Reverse quality. According to the Kano model, customers are more dissatisfied when Must-be quality (M) is not fulfilled, but their satisfaction does not rise above neutral no matter how fully the quality is fulfilled. In comparison, customers are more satisfied with a full provision of Attractive quality (A) but will accept its absence. One-dimensional quality (O) results in satisfaction when fulfilled and dissatisfaction when not fulfilled.
For Indifferent quality (I), its presence or absence influences neither customer satisfaction nor dissatisfaction. Lastly, Reverse quality (R) refers to features whose high degree of achievement results in customer dissatisfaction rather than satisfaction. Meanwhile, the basic guidelines of the Kano model have a limitation in that the quality type of each feature is simply determined by calculating the mode statistic. To overcome this limitation, the relative importance of each feature for customer satisfaction (Better value; b) and dissatisfaction (Worse value; w) was calculated following the formulas below (Timko, 1993). The Better value indicates how much customer satisfaction is increased by providing the quality feature in question. In contrast, the Worse value indicates how much customer dissatisfaction is decreased by providing the quality feature. Better = (A + O)/(A + O + M + I); Worse = -(O + M)/(A + O + M + I). An on-line survey was performed in order to understand the nature of the quality features of franchisees in the food service industry by applying the Kano model. A total of twenty quality features (refer to Table 2) were identified from a literature review of franchise business and a pre-test with fifty college students in Seoul. The potential respondents of our main survey were limited to customers who had visited more than two restaurants/stores of the same franchise brand. Survey invitation e-mails were sent out to the panels of a market research company, and a total of 257 responses were used for analysis. Following the guidelines of the Kano model, each of the twenty quality features was classified into one of the five types based on customers' responses to a set of questions: "(1) how do you feel if the following quality feature is fulfilled in the franchise restaurant that you visit," and "(2) how do you feel if the following quality feature is not fulfilled in the franchise restaurant that you visit."
The analyses revealed that customers' dissatisfaction with franchisees is commonly associated with a poor level of cleanliness of the store (w=-0.872), kindness of the staff (w=-0.890), conveniences such as parking lots and restrooms (w=-0.669), and expertise of the staff (w=-0.492). Such quality features were categorized as Must-be quality in this study. While standardization or uniformity across franchisees has been emphasized in franchise business, this study found that consumers are interested only in uniformity of price across franchisees (w=-0.608), not in standardization of menu items, interior designs, customer service procedures, or food tastes. Customers appeared to be more satisfied when the franchise brand has promotional events such as giveaways (b=0.767), good accessibility (b=0.699), customer loyalty programs (b=0.659), an award-winning history (b=0.641), and outlets in overseas markets (b=0.506). The results are summarized in matrix form in Table 1. The Better (b) and Worse (w) indices indicate the relative importance of each quality feature for customer satisfaction and dissatisfaction, respectively. Meanwhile, there were differences in the perception of quality features between light users and heavy users of a specific franchise brand in the food service industry. Expertise of the staff was labeled Must-be quality for heavy users but Indifferent quality for light users. Light users seemed indifferent to overseas expansion of the brand and to the regular offering of new menu items, while heavy users appeared to perceive them as Attractive quality. Such differences may come from the different levels of involvement of the two segments when eating out. The results are shown in Table 2. The findings of this study help practitioners understand the quality features they need to focus on to strengthen their competitive power in the food service market. Above all, removing the factors that cause customer dissatisfaction seems to be the most critical task for franchisees.
To retain loyal customers of the franchise brand, it is also recommended that franchisors invest resources in the development of new menu items as well as training programs for the staff. Lastly, if resources allow, promotional events, loyalty programs, overseas expansion, and an award-winning history can be considered as tools for attracting more customers to the business.
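The Better/Worse computation above (Timko, 1993) can be sketched in a few lines; the category counts below are hypothetical, not the survey's data:

```python
def kano_indices(A, O, M, I):
    """Timko's customer-satisfaction coefficients from Kano category
    counts: Better = (A+O)/(A+O+M+I), Worse = -(O+M)/(A+O+M+I)."""
    total = A + O + M + I           # Reverse/Questionable answers excluded
    better = (A + O) / total        # satisfaction gain when feature is present
    worse = -(O + M) / total        # dissatisfaction when feature is absent
    return round(better, 3), round(worse, 3)

# Hypothetical response counts for one quality feature (n = 200)
b, w = kano_indices(A=20, O=60, M=100, I=20)
print(b, w)  # 0.4 -0.8
```

A feature with a strongly negative Worse value and a modest Better value, as in this example, would land in the Must-be region of the Table 1 matrix.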


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. This solved the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can also be derived appropriately. This makes it possible to provide stable default risk assessment services to companies that are difficult to evaluate with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risk with machine learning has been studied actively in recent years, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation method.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts from each sub-model used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the Stacking Ensemble model and each individual model, pairs of forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, we used the nonparametric Wilcoxon rank sum test to check whether the two forecasts in each pair showed statistically significant differences. The analysis showed that the forecasts of the Stacking Ensemble model differed significantly from those of the MLP model and the CNN model.
In addition, this study can provide a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The Stacking Ensemble techniques proposed in this study can also help meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
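The out-of-fold scheme described above (training data split into seven pieces, each sub-model forecasting the piece it was not trained on) can be sketched as follows. The toy nearest-centroid base model and synthetic data merely stand in for the paper's Random Forest/MLP/CNN sub-models and corporate data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(210, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary "default" label

def centroid_fit(Xtr, ytr):
    """Toy base model: store the mean feature vector of each class."""
    return {c: Xtr[ytr == c].mean(axis=0) for c in (0, 1)}

def centroid_predict(model, Xte):
    d0 = np.linalg.norm(Xte - model[0], axis=1)
    d1 = np.linalg.norm(Xte - model[1], axis=1)
    return (d1 < d0).astype(int)          # predict the nearer class centroid

# Split the training data into 7 folds; each base-model forecast for a
# fold comes from a model that never saw that fold (no leakage).
K = 7
folds = np.array_split(np.arange(len(X)), K)
meta = np.empty(len(X), dtype=int)
for i, test_idx in enumerate(folds):
    train_idx = np.hstack([f for j, f in enumerate(folds) if j != i])
    model = centroid_fit(X[train_idx], y[train_idx])
    meta[test_idx] = centroid_predict(model, X[test_idx])

# `meta` is one column of out-of-fold forecasts; stacking would collect
# such columns from every sub-model and train a second-stage model on them.
print(round((meta == y).mean(), 2))
```

The point of the fold structure is that the second-stage model is trained only on forecasts the sub-models made for unseen data, which is what keeps the ensemble's advantage from being an artifact of overfitting.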

Establishment of the Heart Failure Model in Swine for the Experiment of the Pneumatic Ventricular Assist Device (공압식 심실보조기의 실험을 위한 돼지에서의 심부전 모델의 개발)

  • 박성식;서필원;이상훈;강봉진;문상호;김삼현
    • Journal of Chest Surgery
    • /
    • v.36 no.3
    • /
    • pp.123-130
    • /
    • 2003
  • Background: In order to develop an acute heart failure model for animal experiments with the pneumatic ventricular assist device, we decided to use young pigs, whose coronary artery distribution is almost the same as that of humans and which are also inexpensive. The purpose of this study was to develop a stable, reproducible acute ischemic heart failure model in swine using a coronary artery ligation method. Material and Method: Five young pigs weighing about as much as adult humans were used in the experiment. Each pig was endotracheally intubated and connected to a mechanical ventilator. Through a left lateral thoracotomy, we exposed the heart and induced ischemic heart failure by coronary artery ligation. The ligation began at the distal part of the left anterior descending coronary artery (LAD). After 5 minutes of initial ligation, we reperfused the artery and then re-ligated it. Before and after each ligation-reperfusion procedure we assessed the left ventricular end-diastolic pressure, arterial pressure, and cardiac index. We also measured the left ventricular end-diastolic dimension, end-systolic dimension, fractional shortening, and ejection fraction using intraoperative epicardial echocardiography. After appropriate heart failure was established with the sequential (from the distal part of the LAD to more proximal locations) ligation-reperfusion-ligation procedure, we inserted and operated the ventricular assist device. Result: We established stable acute ischemic heart failure in 3 of the 5 young pigs with this sequential ligation-reperfusion-ligation procedure, and maintained an ejection fraction 50% below the pre-procedure value according to intraoperative epicardial echocardiography. We also observed none of the ventricular arrhythmia usually associated with simple coronary artery ligation in large animals, and no cardiac arrest associated with ventricular arrhythmia or myocardial stunning.
In the pathologic specimens, we observed scattered ischemic myocardium throughout the ischemic field induced by coronary artery ligation. Conclusion: Using the concept of ischemic preconditioning, we developed a safe and reproducible acute ischemic heart failure model in swine using a sequential coronary artery ligation-reperfusion-ligation method.

A Study on the Power Supply and Demand Policy to Minimize Social Cost in Competitive Market (경쟁시장 하에서 사회적 비용을 고려한 전력수급정책 방향에 관한 연구)

  • Kwon, Byung-Hun;Song, Byung Gun;Kang, Seung-Jin
    • Environmental and Resource Economics Review
    • /
    • v.14 no.4
    • /
    • pp.817-838
    • /
    • 2005
  • In this paper, resource adequacy as well as the optimum fuel mix is obtained by the following procedure. First, the regulatory body, a government agency, determines the reliability index as well as the optimum portfolio of the fuel mix over the planning horizon. Here, resources with the characteristics of public goods, such as demand-side management and renewable resources, are assigned in advance. The optimum portfolio is determined by reflecting economics, environmental characteristics, public acceptance, regional supply and demand, etc. Second, the government announces the required amount of each fuel type among new resources over the planning horizon, and the market participants bid to the government based on their own estimated fixed costs. The government announces the winners of each auction by plant type, and the guaranteed fixed cost is determined by the marginal auction price by plant type. Third, the energy market is run, and the surplus of each plant beyond its cost (guaranteed fixed cost and operating cost) is withdrawn by the regulatory body. To induce the generators to reduce their operating costs, incentives are given to each generator based on its performance, which is determined by the mechanism of performance-based regulation (PBR). Free-riding performance should be subtracted to guarantee transparent competition. Although the suggested mechanism looks heavily regulated, it provides two mechanisms of competition: one in the resource construction auction and the other in the energy spot market. The advantages of the proposed method are that it guarantees proper resource adequacy as well as the desired fuel mix. However, this mechanism should be sustained only during the transitional period of deregulation. Therefore, generation resource planning procedures and market mechanisms are suggested to minimize possible stranded costs.


Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Therefore, researchers have tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, k-nearest neighbor, etc. Recently, support vector machines (SVMs) have been popularly applied in this research area because they do not require huge training data and have a low possibility of overfitting. However, a user must determine several design factors heuristically in order to use an SVM. For example, the selection of an appropriate kernel function and its parameters and proper feature subset selection are major design factors of SVM. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of SVM by eliminating irrelevant and distorting training instances. Nonetheless, there have been few studies that have applied instance selection to SVM, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data. It may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process together with parameter optimization simultaneously. We call this model ISVM (SVM with Instance selection). Experiments on stock market data are implemented using ISVM. In this study, the GA searches for optimal or near-optimal values of kernel parameters and relevant instances for SVMs. This study needs two sets of parameters in the chromosomes of the GA setting: codes for the kernel parameters and codes for instance selection.
For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1. As the stopping condition, 50 generations are permitted. The application data used in this study consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI). The total number of samples is 2218 trading days. We separate the whole data set into three subsets: training, test, and hold-out data sets, containing 1056, 581, and 581 samples respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1056 original training instances are used to produce the result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% statistical significance level.
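The GA-driven instance selection idea can be sketched as below. A 1-NN classifier stands in for the SVM so the sketch stays dependency-free, the kernel-parameter genes are omitted, and the GA settings are deliberately smaller than the paper's (population 50, crossover 0.7, mutation 0.1, 50 generations); everything here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 2))
y = (X[:, 0] > 0).astype(int)
y[:12] = 1 - y[:12]                      # noisy instances worth discarding
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

def fitness(mask):
    """Validation accuracy of 1-NN trained on the selected instances."""
    keep = mask.astype(bool)
    if keep.sum() < 2:
        return 0.0
    d = np.linalg.norm(Xva[:, None] - Xtr[keep][None], axis=2)
    return float((ytr[keep][d.argmin(axis=1)] == yva).mean())

POP, GENS = 30, 40
pop = rng.integers(0, 2, size=(POP, len(Xtr)))       # binary chromosomes
for _ in range(GENS):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[fit.argsort()[-10:]]               # truncation selection
    kids = parents[rng.integers(0, 10, POP)].copy()
    mates = parents[rng.integers(0, 10, POP)]
    cuts = rng.integers(1, len(Xtr), POP)            # one-point crossover
    for k in range(POP):
        kids[k, cuts[k]:] = mates[k, cuts[k]:]
    flip = rng.random(kids.shape) < 0.05             # bit-flip mutation
    kids[flip] = 1 - kids[flip]
    kids[0] = parents[-1]                            # elitism: keep the best
    pop = kids

best = max(pop, key=fitness)
print(round(fitness(best), 2), int(best.sum()), "instances kept")
```

In ISVM the chromosome would additionally carry encoded kernel parameters, and fitness would be the hold-out accuracy of an SVM trained on the selected instances with those parameters.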

Economic Impact of the Tariff Reform : A General Equilibrium Approach (관세율(關稅率) 조정(調整) 경제적(經濟的) 효과분석(效果分析) : 일반균형적(一般均衡的) 접근(接近))

  • Lee, Won-yong
    • KDI Journal of Economic Policy
    • /
    • v.12 no.1
    • /
    • pp.69-91
    • /
    • 1990
  • A major change in tariff rates was made in January 1989 in Korea. The benchmark tariff rate, which applies to about two thirds of all commodity items, was lowered from 20 percent to 15 percent. In addition, the variation in tariff rates among different types of commodities was reduced. This paper examines the economic impact of the tariff reform using a multisectoral general equilibrium model of the Korean economy introduced by Lee and Chang (1988) and by Lee (1988). More specifically, this paper attempts to find the changes in imports, exports, domestic production, consumption, prices, and employment in 31 different sectors of the economy induced by the reform in tariff rates. The policy simulations are made according to three different methods. First, tariff changes in industries are calculated strictly according to the change in legal tariff rates, which tends to over-estimate the size of the tariff reduction given the tariff-drawback system and tariff exemptions applied to various import items. Second, tariff changes in industries are obtained by dividing the estimated tariff revenues of each industry by the estimated imports for that industry; these are often called actual tariff rates. According to the first method, the import-weighted average tariff rate is lowered from 15.2% to 10.2%, while under the second method the average tariff rate falls from 6.2% to 4.2%. In the third method, the tariff-drawback system is internalized in the model. This paper reports the results of the policy simulation according to all three methods, comparing them with one another. It is argued that the second method yields the most realistic estimate of the changes in macro-economic variables, while the third method is useful in delineating the differences in impact across industries. The findings, according to the second method, show that the tariff reform induces more imports in most sectors.
Garments, leather products, and wood products are the industries in which imports increase by more than 5 percent. On the other hand, imports in the agricultural, mining, and service sectors are least affected. Domestic production increases in all sectors except the following: leather products, non-metallic products, chemicals, paper and paper products, and wood products. The increase in production and employment is largest in export industries, followed by service industries. The impact on macroeconomic variables is also simulated. The tariff reform increases nominal GNP by 0.26 percent, lowers the consumer price index by 0.49 percent, increases employment by 0.24 percent, and worsens the trade balance by 480 million US dollars, through a rise in exports of 540 million US dollars and a rise in imports of 1.02 billion US dollars.
