• Title/Summary/Keyword: New Business


A Study on the Passenger Liability of the Carrier under the Montreal Convention (몬트리올협약상의 항공여객운송인의 책임(Air Carrier's Liability for Passenger on Montreal Convention 1999))

  • Kim, Jong-Bok
    • The Korean Journal of Air & Space Law and Policy / v.23 no.2 / pp.31-66 / 2008
  • Until the Montreal Convention was established in 1999, the Warsaw System was the undisputed private international air law regime and played the major role in governing the carrier's liability in the international air transport industry. But the Warsaw System as a whole, although revised many times to keep pace with the rapid development of the air transport industry, became complicated, tangled and outdated. This thesis therefore aims to introduce the Montreal Convention as a new legal instrument on the air carrier's liability, especially liability toward passengers, by interpreting it and analyzing the related issues. The Montreal Convention markedly changed the rules governing international carriage by air, modernizing and consolidating the old Warsaw System of private international air law instruments into a single legal instrument. One of its most significant features is that it shifted priority from protecting the carrier, which the original Warsaw Convention intended in order to shelter the then-fledgling international air transport business, to protecting the interests of consumers. Two major features adopted by the Montreal Convention are the two-tier liability system and the fifth jurisdiction. For death or bodily injury to passengers, the Convention introduces a two-tier liability system: the first tier imposes strict liability up to 100,000 SDR, irrespective of the carrier's fault, while the second tier is based on the presumed fault of the carrier and has no limit of liability. Regarding jurisdiction, the Convention expands the four jurisdictions in which the carrier could be sued by adding a fifth: a passenger can bring suit in the country in which he or she has a permanent and principal residence and in which the carrier provides services for the carriage of passengers, either with its own aircraft or through a commercial agreement. Other features include advance payments, electronic ticketing, compulsory insurance, and rules on contracting and actual carriers. As these major features show, the Convention heralds the single biggest change in international aviation liability, and there can be no doubt that it will prevail in the international air transport world in the future. Our government signed the Convention on 20 September 2007, and it came into effect domestically on 29 December 2007; it was thus recognized that domestic carriers can adequately and independently manage the changed liability risks. I therefore suggest that our country's aviation industry, including the newly established low-cost carriers, prepare the domestic countermeasures necessary for the enforcement of the Convention.
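
The two-tier liability rule summarized above has a simple threshold structure. Below is a minimal, purely illustrative sketch of that rule, assuming the 100,000 SDR first-tier limit cited in the abstract; the claim amounts and the function name are hypothetical and this is not legal advice.

```python
# Illustrative sketch of the two-tier liability rule described in the abstract
# (first-tier limit of 100,000 SDR; all amounts below are hypothetical).

def liability_exposure(claim_sdr: float, carrier_proves_no_fault: bool) -> float:
    """Return the amount (in SDR) the carrier is exposed to under the two tiers."""
    FIRST_TIER_LIMIT = 100_000  # strict liability regardless of fault

    if claim_sdr <= FIRST_TIER_LIMIT:
        return claim_sdr            # first tier: strict liability, no fault defence
    if carrier_proves_no_fault:
        return FIRST_TIER_LIMIT     # second tier avoided by rebutting presumed fault
    return claim_sdr                # second tier: presumed fault, no liability cap


print(liability_exposure(80_000, carrier_proves_no_fault=False))   # 80000
print(liability_exposure(250_000, carrier_proves_no_fault=True))   # 100000
print(liability_exposure(250_000, carrier_proves_no_fault=False))  # 250000
```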


Differential Effects of Recovery Efforts on Product Attitudes (제품태도에 대한 회복노력의 차별적 효과)

  • Kim, Cheon-GIl;Choi, Jung-Mi
    • Journal of Global Scholars of Marketing Science / v.18 no.1 / pp.33-58 / 2008
  • Previous research has presupposed that the evaluation of a consumer who received some recovery after experiencing a product failure should be better than that of a consumer who received no recovery. The major purposes of this article are to examine the impact of product defect failures rather than service failures, and to explore the effects of recovery on post-recovery product attitudes. First, this article deals with the occurrence of severe and non-severe failures and the corresponding recovery for tangible products rather than intangible services. Contrary to intangible services, purchase and usage are separable for tangible products. This difference makes it clear that executing a recovery strategy for a tangible product is not feasible right after the consumer discovers the failure. Consumers may think about the background and causes of the unpleasant event during the time gap between product failure and recovery, and this deliberation may dilute the positive effects of recovery efforts. The recovery strategies provided to consumers experiencing product failures can be classified into three types. A recovery strategy can be implemented by providing consumers with a new product replacing the old defective one, a complimentary product for free, a discount at the time of the failure incident, or a coupon that can be used on the next visit; this strategy is defined as "a rewarding effort." Meanwhile, a product failure may arise in exchange for a benefit, in which case the provider can offer a detailed explanation that the defect is hard to avoid because it is closely related to a specific advantage of the product; this strategy may be called "a strengthening effort." Another possible strategy is to recover the negative attitude toward one's own brand by giving prominence to the disadvantages of a competing brand rather than the advantages of one's own brand; this strategy is labeled "a weakening effort." This paper emphasizes that, in order to confirm its effectiveness, a recovery strategy should be compared to doing nothing in response to the product failure, so the three types of recovery efforts are discussed in comparison with the situation involving no recovery effort. The strengthening strategy claims high relatedness of the product failure to another advantage and expects this two-sidedness to ease consumers' complaints. The weakening strategy emphasizes the non-aversiveness of the product failure even if consumers choose a competing brand. The two strategies can be effective in restoring the original state, by providing plausible motives to accept the condition of product failure or by informing consumers of non-responsibility in the failure case. However, both may be less effective than the rewarding strategy, since the latter addresses the rehabilitation needs of consumers. In particular, the relative effect of the strengthening effort versus the weakening effort may differ with the severity of the product failure. A consumer who perceives a highly severe failure is likely to attach importance to the property that caused the failure, which implies that the strengthening effort would be less effective under high failure severity. Meanwhile, the failing property is not diagnostic information under low failure severity; consumers would not pay attention to non-diagnostic information and are not likely to change their attitudes because of it.
This implies that the strengthening effort would be more effective under low failure severity. A 2 (product failure severity: high or low) X 4 (recovery strategy: rewarding, strengthening, weakening, or doing nothing) between-subjects design was employed. The particular levels of product failure severity and the types of recovery strategies were determined after a series of expert interviews. The dependent variable was product attitude after the recovery effort was provided. Subjects were 284 consumers who had experience with cosmetics. Subjects were first given a product failure scenario and were asked to rate the comprehensibility of the failure scenario, the probability of raising complaints against the failure, and the subjective severity of the failure. After a recovery scenario was presented, its comprehensibility and overall evaluation were measured. Subjects assigned to the no-recovery condition were instead exposed to a short news article on the cosmetics industry. Next, subjects answered filler questions: 42 items on the need for cognitive closure and 16 items on the need to evaluate. On the succeeding page, product attitude was measured on a five-item, six-point scale and repurchase intention on a three-item, six-point scale. After the demographic variables of age and sex were asked, ten items measuring the subject's objective knowledge were checked. The results showed that subjects formed more favorable evaluations after receiving rewarding efforts than after receiving either strengthening or weakening efforts. This is consistent with Hoffman, Kelley, and Rotalsky (1995) in that a tangible service recovery can be more effective than intangible efforts. Strengthening and weakening efforts were also effective compared to no recovery effort, so in general any recovery increased product attitudes. The results suggest that a recovery strategy such as a strengthening or weakening effort, although it does not contain a specific reward, may still have an effect on consumers experiencing severe dissatisfaction and strong complaints. Meanwhile, strengthening and weakening efforts did not increase product attitudes under low failure severity. We can conclude that only a physical recovery effort may be recognized favorably as a firm's willingness to recover from its fault by consumers with low involvement. The results of the present experiment are explained in terms of attribution theory. This article has the limitation that it utilized fictitious scenarios; future research should test the realistic effect of recovery on actual consumers. Recovery involves a direct, firsthand experience of ex-users and does not apply to non-users. The experience of receiving recovery efforts can be more salient and accessible for ex-users than for non-users, so a recovery effort might be more likely to improve product attitude for ex-users than for non-users. Also, the present experiment did not include consumers who had no experience with the products or who did not perceive the occurrence of the product failure. For non-users and unaware consumers, recovery efforts might lead to decreased product attitude and purchase intention, because the recovery attempt may give them an opportunity to notice the product failure.
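
The 2 x 4 between-subjects design described above is typically analyzed with a two-way ANOVA on the post-recovery attitude measure. The sketch below illustrates one way such an analysis could be run; the CSV file and column names are hypothetical stand-ins, not the authors' actual data.

```python
# Minimal sketch of analyzing a 2 (failure severity) x 4 (recovery strategy)
# between-subjects design with a two-way ANOVA; the CSV file and column names
# are hypothetical stand-ins for the authors' data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("recovery_experiment.csv")  # hypothetical: one row per subject

# product_attitude: mean of the five 6-point items; severity: "high"/"low";
# strategy: "rewarding"/"strengthening"/"weakening"/"none"
model = ols("product_attitude ~ C(severity) * C(strategy)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # main effects + interaction
print(anova_table)

# Cell means, to see which recovery strategy works best at each severity level
print(df.groupby(["severity", "strategy"])["product_attitude"].mean().unstack())
```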


Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.93-111 / 2013
  • As Internet and information technology (IT) continue to develop and evolve, the issue of big data has moved to the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by conventional information systems, and the term also refers to the new technologies designed to extract value effectively from such data. With the widespread dissemination of IT systems, continual efforts have been made in fields such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and use it to solve various problems. Since IT has converged with many industries, digital data are now generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data currently receiving the most attention include information available within companies, such as consumer characteristics, purchase records, logistics information and log information indicating how consumers use products and services, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the internet in order to make efficient and rational choices. Google has made its information on the web search traffic of online users publicly available through a service named Google Trends, and research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other focuses on using web search traffic information to observe consumer behavior, for example identifying the product attributes that consumers regard as important or tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the extent of our knowledge, hardly any brand-related studies have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers enter search words on the web, they may use a single keyword, but they also often enter multiple keywords to seek related information (referred to here as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand.
Web search traffic data show that the volume of simultaneous searches using certain keywords increases when the keywords are more closely related in the consumer's mind, so the relations between keywords can be derived by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers, and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, focusing on tablet PCs, which belong to an innovative product group.
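
The co-search idea described above can be illustrated with a small network built from simultaneous-search counts. The sketch below is only a toy illustration of that approach: the brand and attribute keywords and their counts are hypothetical, standing in for the Google Trends-style traffic the study uses.

```python
# Toy sketch of the co-search (simultaneous search) network idea: edge weights
# are hypothetical counts of queries containing both keywords.
import networkx as nx

co_search_counts = {            # hypothetical simultaneous-search volumes
    ("iPad", "Galaxy Tab"): 120,
    ("iPad", "display"): 80,
    ("Galaxy Tab", "display"): 45,
    ("Galaxy Tab", "price"): 95,
    ("iPad", "price"): 60,
}

G = nx.Graph()
for (a, b), weight in co_search_counts.items():
    G.add_edge(a, b, weight=weight)

# A brand's position can then be read off network measures, e.g. how central
# it is and which attributes it is most strongly tied to.
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
for brand in ("iPad", "Galaxy Tab"):
    neighbors = sorted(G[brand].items(), key=lambda kv: -kv[1]["weight"])
    print(brand, "->", [(n, d["weight"]) for n, d in neighbors])
```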

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the rising penetration rate of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who request the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is increasingly expected to be performed by the analysis demanders themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with much attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This leads to long processing times when topic modeling is applied to a large number of documents, and it creates a scalability problem: the processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents in each location can be analyzed without first combining all the documents to be analyzed. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the difference between a local topic and the corresponding global topic needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. In addition, through a further experiment, we confirm that the proposed methodology can provide results similar to topic modeling on the entire collection, and we propose a reasonable method for comparing the results of both approaches.
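
The divide-and-conquer idea described above can be sketched with off-the-shelf LDA: fit a topic model on each local set, fit another on a reduced global set, and map local topics to global topics by the similarity of their topic-word distributions. The toy example below illustrates that mapping step only; it is not the authors' exact RGS procedure, and the documents shown are hypothetical.

```python
# Rough sketch of divide-and-conquer topic modeling: fit LDA locally on each
# sub-cluster, fit a "global" LDA on a reduced set, and map local topics to
# global topics by cosine similarity of their topic-word distributions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market falls", "team wins the final", "election results announced",
        "market rally continues", "coach praises players", "new policy bill passed"]
local_sets = [docs[:3], docs[3:]]                 # toy "local" document clusters
reduced_global_set = [docs[0], docs[1], docs[5]]  # stand-in for delegate documents

vec = CountVectorizer().fit(docs)                 # shared vocabulary for comparability

def topic_word_matrix(texts, n_topics=2):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(vec.transform(texts))
    # normalize rows so each topic is a probability distribution over words
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

global_topics = topic_word_matrix(reduced_global_set)
for i, local in enumerate(local_sets):
    local_topics = topic_word_matrix(local)
    sim = cosine_similarity(local_topics, global_topics)
    mapping = sim.argmax(axis=1)                  # each local topic -> closest global topic
    print(f"local set {i}: local->global mapping {mapping.tolist()}")
```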

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.163-176 / 2014
  • Social media is becoming the platform on which users communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs, but measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch; in a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Modeling the time-related characteristics of features is therefore key when measuring TV ratings through microblogs, and we show that capturing the time-dependency of features is vital for improving accuracy. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. The data set contains about 300 thousand posts; after excluding advertising and promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of a public channel, which broadcasts the program at a predetermined time. Our analysis shows that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the text of the tweets have a relatively high correlation with TV ratings, and some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We also find a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since a TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation for the program or their disappointment at not being able to watch it. The features that are highly correlated before the broadcast differ from those after the broadcast, which shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show a high relevance despite carrying a negative meaning.
Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time-dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
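
One way to see the time-dependency described above is to correlate per-episode word frequencies with ratings separately for tweets posted before and after the broadcast time. The sketch below illustrates that split; the file names, column names, and example feature words are hypothetical stand-ins for the authors' data.

```python
# Sketch of the time-dependency analysis: correlate word-frequency features
# with TV ratings separately for tweets posted before and after broadcast time.
# File names, column names, and the example words are hypothetical stand-ins.
import numpy as np
import pandas as pd

tweets = pd.read_csv("tweets.csv", parse_dates=["posted_at"])  # text, posted_at, episode
ratings = pd.read_csv("ratings.csv")                           # episode, rating, air_time
ratings["air_time"] = pd.to_datetime(ratings["air_time"])

df = tweets.merge(ratings, on="episode")
df["phase"] = np.where(df["posted_at"] >= df["air_time"], "after", "before")

def word_correlation(frame, word):
    # per-episode count of tweets containing the word vs. that episode's rating
    counts = (frame.assign(hit=frame["text"].str.contains(word, case=False))
                    .groupby("episode")
                    .agg(hit=("hit", "sum"), rating=("rating", "first")))
    return counts["hit"].corr(counts["rating"])

for word in ["기대", "본방사수", "못봤다"]:   # hypothetical candidate feature words
    before = word_correlation(df[df["phase"] == "before"], word)
    after = word_correlation(df[df["phase"] == "after"], word)
    print(f"{word}: corr before={before:.2f}, after={after:.2f}")
```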

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.131-145 / 2020
  • In line with the trend of industrial innovation, IoT technology deployed in a variety of fields is emerging as a key element in the creation of new business models and the provision of user-friendly services when combined with big data. The data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-oriented smart systems, as they enable customized intelligent services through the analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems with CCTV. In particular, when planning underground services or establishing a passenger flow control information system to enhance the convenience of citizens and commuters under the congestion of public transportation such as subways and urban railways, both the ease of securing real-time service data and the stability of security must be considered comprehensively. However, previous studies that utilize image data face limitations in object detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not require the identification of individuals, and can therefore be used effectively to build intelligent public services for the general, unspecified public. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use on a daily basis; the temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting the data detected in real time was established at the equally spaced midpoints of a 4x4 grid in the ceiling of subway entrances where actual passenger flow is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values per unit of time were calculated; this corresponds to a methodology that maximizes the signal of movement within the detection area. In addition, the values were scaled up by a factor of 10 in order to reflect temperature differences by area more sensitively; for example, if the temperature collected from a sensor at a given time was 28.5℃, the analysis used the value 285. The data collected from the sensors thus have the characteristics of both time series data and image data with 4x4 resolution. Reflecting the characteristics of the measured, preprocessed data, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which has superior performance for image classification, with an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4x4 detection areas.
We verified the validity of the proposed model by comparing its performance with other artificial intelligence algorithms such as the Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM) and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). As a result of the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared to MLP, LSTM and RNN-LSTM. By utilizing the proposed devices and models, it is expected that various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, can be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period of time were used for prediction, so verification in other environments remains a limitation. In the future, the proposed model is expected to become more reliable if experimental data are collected in a sufficient variety of environments or if the training data are augmented by measurements from other sensors.
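
A minimal sketch of a CNN-LSTM of the kind described above is given below: a small CNN is applied to each 4x4 temperature-difference frame and an LSTM is run over the resulting sequence to predict a passenger count. The sequence length, layer sizes, and training data are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of a CNN-LSTM for sequences of 4x4 temperature-difference
# frames, as described in the abstract. Layer sizes, sequence length and the
# toy data are assumptions, not the authors' exact configuration.
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN = 20                       # assumed number of time steps per sample
INPUT_SHAPE = (SEQ_LEN, 4, 4, 1)   # 4x4 sensor grid, one channel

model = models.Sequential([
    # CNN applied to every frame in the sequence
    layers.TimeDistributed(layers.Conv2D(16, (2, 2), activation="relu", padding="same"),
                           input_shape=INPUT_SHAPE),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM across the time dimension
    layers.LSTM(32),
    layers.Dense(1)                # predicted number of passing persons
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Toy data mimicking the preprocessing: (temperature - reference) * 10 per cell
X = np.random.randn(64, *INPUT_SHAPE).astype("float32")
y = np.random.randint(0, 5, size=(64, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:3]).ravel())
```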

A Study on the Prediction of Korean NPL Market Return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is short, however, as bad debt only began to increase again after the global financial crisis of 2009 due to the real economic recession. NPLs have become a major investment in recent years, as domestic capital market funds began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it remains scarce because the history of capital market investment in the domestic NPL market is short. In addition, decision-making based on more scientific and systematic analysis is required because of declining profitability and price fluctuations driven by the real estate business cycle. In this study, we propose a prediction model that can determine whether the benchmark yield will be achieved, using NPL market data, in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, with a total of 2,291 items. As independent variables, only those related to the dependent variable were selected from the 11 variables describing the characteristics of the underlying real estate. The variables were selected using one-to-one t-tests, stepwise logistic regression, and a decision tree, yielding seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the usefulness of the model; moreover, for a special purpose company, whether or not to purchase the property is the main concern, so knowing whether a certain level of return will be achieved is sufficient for the decision. For the dependent variable, we constructed and compared predictive models built with adjusted threshold values to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. The model built with the dependent variable defined by the 12% standard rate of return achieved the best average hit ratio, 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the seven independent variables, we constructed and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic-algorithm linear model. To do this, 10 sets of training and testing data were extracted using a 10-fold validation method; after building the models on these data, the hit ratio of each set was averaged and the performance compared. The average hit ratios of the prediction models constructed with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic-algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively.
The model using the artificial neural network was confirmed to be the best. This study shows that it is effective to use the seven independent variables and an artificial neural network prediction model in the NPL market. The proposed model predicts in advance whether the 12% return on new items will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
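
The model comparison described above can be sketched with a standard 10-fold cross-validation loop over several classifiers on the binary benchmark-return target. In the sketch below, the feature names follow the abstract, but the data file is a hypothetical stand-in and the genetic-algorithm linear model is not reproduced.

```python
# Sketch of the model comparison on the binary target (12% benchmark return
# reached or not) using 10-fold cross-validation. The feature names follow the
# abstract; the CSV file is a hypothetical stand-in, and the genetic-algorithm
# linear model from the paper is not reproduced here.
import pandas as pd
from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("npl_deals.csv")   # hypothetical: one row per NPL item
features = ["purchase_year", "spc", "municipality", "appraisal_value",
            "purchase_cost", "opb", "holding_period"]          # 7 predictors
X, y = pd.get_dummies(df[features]), df["hit_12pct_return"]    # y: 0/1 target

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "Logistic": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "NeuralNet": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, m in models.items():
    scores = cross_val_score(m, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean hit ratio = {scores.mean():.3f}")
```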

Formative Stages of Establishing Royal Tombs Steles and Kings' Calligraphic Tombstones in Joseon Dynasty (조선시대 능비(陵碑)의 건립과 어필비(御筆碑)의 등장)

  • Hwang, Jung Yon
    • Korean Journal of Heritage: History & Science / v.42 no.4 / pp.20-49 / 2009
  • This paper explores Korean royal tomb steles, such as monumental steles and tombstone marks (神道碑, 表石), across three broad periods: the 15th-16th centuries, the 17th-18th centuries, and the 19th century. Unlike private custom, the royal tomb steles were built according to the intentions of the kings who succeeded the deceased. During the 15th-17th centuries, monumental steles were constructed and reconstructed. In the late Joseon period, monumental steles were replaced by tombstone marks, a number of which were built bearing, for the first time, the king's own calligraphy carved in stone. During the Great Han Empire (大韓帝國), when the Joseon state was elevated to an empire, Emperors Gojong and Sunjong devoted themselves to honoring their ancestors by rebuilding royal tombstone marks. Given these period trends, it is no exaggeration to say that the tradition of erecting royal tomb steles took shape in the late Joseon period. The royal tomb monument type originated in the shapeless forms of the Three Kingdoms era, while the new stele type of the Tang Dynasty (唐碑) influenced the monuments of Unified Silla and the Buddhist honorific monuments (塔碑) of the Goryeo Dynasty. From the 15th century, successive kings wished to commemorate their predecessors' achievements; nevertheless, officials opposed this on the grounds that the kings' deeds were already fully recorded in the national history (國史), so there was no need to erect tomb steles. Although few in number, the Heonneung and Jereung monumental steles, rebuilt in 1695 and 1744 respectively, are valuable in showing the royal sculpture of the late Joseon period. After the 15th century, when the construction of royal monumental steles was interrupted, tombstone marks with a simpler format began to be erected within the tomb precincts. The Yeongneung tombstone mark (寧陵表石), built in 1682, is the first to show a magnificent scale and delicate sculptural technique. Many tombstone marks were erected on a large scale from the 1740s, largely prompted by King Yeongjo's commitment to honorific projects for his predecessors; thanks to this effort, over 20 tombstone marks were established during his reign. The fact that his handwritten calligraphy was carved on tombstones for the first time was a remarkable phenomenon that had never appeared before. Since the 18th century, a capstone raised above the roof (加?石) and a rectangular base have been accepted as the typical format of tombstone marks, and the front of the stele was generally engraved with seal-script calligraphy modeled after the brushwork of the Tang dynasty calligrapher Li Yangbing (李陽氷). In 1897, when King Gojong declared the Empire, tombstone marks were once again produced in large numbers, because he sought the legitimacy of the Empire in the history of the Joseon dynasty and its four founding fathers, creating monuments whose front and back sides carried his own calligraphy as a ruler, representing his symbolic authority. The tombstone marks made during this period show abstract sculptural features, somewhat awkward techniques, and long, slim strokes. As discussed above, the construction of monumental steles and tombstone marks is a remarkable historical phenomenon that reveals royal funeral customs, sculptural techniques, and successive kings' efforts to honor their royal predecessors.

A Study on Profitability of the Allianced Discount Program with Credit Cards and Loyalty Cards in Food & Beverage Industry (제휴카드 할인프로그램이 외식업의 수익성에 미치는 영향)

  • Shin, Young Sik;Cha, Kyoung Cheon
    • Asia Marketing Journal / v.12 no.4 / pp.55-78 / 2011
  • Strategic alliances between business firms have recently become prevalent as a way to overcome increasing competitive threats and to supplement the resource limitations of individual firms. As one such allied sales promotion activity, a new type of discount program, the so-called "alliance card discount," has been introduced through partnerships between credit cards and loyalty cards. The program mainly pursues short-term sales growth through a larger discount scheme while spending less, thanks to cost sharing among the alliance partners; it can thus be regarded as a cost-efficient discount promotion. But because there is no solid evidence that it can really deliver profitable sales growth, an empirical study of its effects on sales and profit is needed. This study addresses two basic research questions concerning the effects of the allied discount program: 1) the possibility of a sales increase, and 2) the profitability of the discount-driven sales. In the food and beverage industry, sales increases mainly come from increased guest counts. In family restaurants in particular, increasing the number of guests requires enlarging the size of the visiting group (the number of visitors per group), because customers visit in groups on special occasions. And because they pay the bill by group (table), the increase in sales per table is a key measure of sales improvement. Past research on price and discount sensitivity and on reference discount rates shows that price-sensitive consumers have a narrow reference discount zone and make rational purchase decisions. Unlike the all-time discount schemes of regular sales promotions, the alliance card discount program only provides the right to obtain a discount, much like a discount coupon. Because it is usually a once-a-month opportunity granted on the basis of the previous month's usage, customers tend to perceive the alliance card discount as a rare chance, so we can expect customers to try to maximize the discount effect when they use the limited opportunity. Considering the group visiting practice and the low visit frequency of family restaurants, the way to maximize the discount effect is to increase the size of the visiting group, while consumers' sensitivity to discounts and rational consumption behavior discourage additional spending on high-priced menu items, even though they obtain considerable savings from the discount. From the analysis of four months of sales data paid with alliance discount cards, we found the following: 1) the relation between discount rate and the number of guests per table is positive, with a 25% discount yielding roughly one additional guest; 2) the relation between discount rate and spending per guest is negative; 3) nevertheless, total profit per table increases as the discount rate increases; 4) reward point accumulation and redemption did not show any significant relationship with the increase in the number of guests. These results suggest that the allied discount program contributes substantially to sales increase and profit improvement by increasing the number of guests per table. Although spending per guest decreases as the discount rate increases, total profit per table improves; the incremental profit from the increased guest count appears to offset the profit decrease. An additional intriguing finding is that the point reward system has no significant impact on the increase in guest count, even though the point accumulation and redemption of a loyalty program are usually regarded by customers as another form of savings.
In sum, because the allied discount program with credit cards and loyalty cards proves effective for both sales growth and profit increase, the alliance card program can be recommended as a strategically viable program.
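
The finding that discounts add guests per table while lowering spend per guest lends itself to simple profit-per-table arithmetic. The sketch below shows that arithmetic under stated assumptions: only the "25% discount, roughly one extra guest" relation comes from the abstract, and every other coefficient (base spend, margin, cost share, spend decline) is hypothetical.

```python
# Back-of-the-envelope sketch of the profit-per-table logic implied by the
# findings: discounts add guests per table but reduce spending per guest.
# Only the "+1 guest per 25% discount" relation is from the abstract; every
# other coefficient below is a hypothetical assumption.

def profit_per_table(discount_rate, base_guests=3.0, base_spend=20000.0,
                     margin=0.65, cost_share=0.3, spend_decline=0.3):
    guests = base_guests + discount_rate / 0.25            # +1 guest per 25% discount
    spend = base_spend * (1 - spend_decline * discount_rate)  # assumed drop per guest
    revenue = guests * spend
    # the restaurant bears only part of the discount under the alliance cost share
    discount_cost = revenue * discount_rate * cost_share
    return revenue * margin - discount_cost

# Whether the extra guests offset the discount cost depends on these assumed values.
for rate in (0.0, 0.15, 0.25, 0.35):
    print(f"discount {rate:.0%}: profit per table = {profit_per_table(rate):,.0f} KRW")
```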


The Effect of Price Discount Rate According to Brand Loyalty on Consumer's Acquisition Value and Transaction Value (브랜드애호도에 따른 가격할인율의 차이가 소비자의 획득가치와 거래가치에 미치는 영향)

  • Kim, Young-Ei;Kim, Jae-Yeong;Shin, Chang-Nag
    • Journal of Global Scholars of Marketing Science / v.17 no.4 / pp.247-269 / 2007
  • In recent years, one of the major reasons for the fierce competition among firms is that they strive to increase their market share and customer acquisition rate in the same market with similar and apparently undifferentiated products in terms of quality and perceived benefit. Because of this change in the marketing environment, differentiated after-sales service and diversified promotion strategies have become more important for gaining competitive advantage. Price promotion is the favorite strategy that most retailers use to achieve short-term sales increases, induce consumers' brand switching, introduce new products into the market, and so forth. However, if marketers apply or copy an identical price promotion strategy without considering differences in product characteristics and consumer preference, serious problems can arise: the discounted price itself can make people skeptical about product quality, and the changes in perceived value may appear differently depending on other factors such as consumer involvement or brand attitude. Previous studies showed that price promotion certainly increases sales, and that a discounted price compared to the regular price enhances the consumer's perceived value. On the other hand, a discounted price can also make people depreciate or become skeptical about product quality and can reduce consumers' positivity bias, because consumers may be unsure whether the current price promotion is the retailer's best offer. Moreover, we cannot say that a discounted price absolutely enhances perceived value regardless of product category and purchase situation. The factors that affect consumers' value perceptions and buying behavior are so diverse that studies of the same dependent variable produce different results depending on which variables were used and how the experimental conditions were designed. The majority of previous research on the effect of price-comparison advertising has used consumers' buying behavior as the dependent variable; to explain buying behavior theoretically, however, an analysis of the value perceptions that influence buying intentions is needed. In addition, previous research did not combine independent variables such as brand loyalty and price discount rate. For this reason, this paper examines the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions, and provides theoretical and managerial implications: marketers need to consider variables such as product attributes, brand loyalty, and consumer involvement at the same time, and then establish a differentiated pricing strategy case by case in order to enhance consumers' perceived value properly. Three research concepts were used in this study, each defined on the basis of past research. Perceived acquisition value was defined as the perceived net gain associated with the products or services acquired; that is, the perceived acquisition value of a product is positively influenced by the benefits buyers believe they obtain by acquiring and using the product, and negatively influenced by the money given up to acquire it. Perceived transaction value was defined as the perception of psychological satisfaction or pleasure obtained from taking advantage of the financial terms of the price deal.
Lastly, brand loyalty was defined as a favorable attitude toward a purchased product. A consumer loyal to a brand has an emotional attachment to the brand or firm, whereas repeat purchasers may continue to buy the same brand without such an attachment. We assumed that if the degree of brand loyalty is high, perceived acquisition value and perceived transaction value will increase when a higher discount rate is provided. However, the empirical analysis found no significant differences in values between the two discount rates, meaning that the price reduction did not significantly affect consumers' brand choice: the perceived sacrifice decreased only a little, and customers are already satisfied with the product's benefits when brand loyalty is high. From this result, we confirmed that consumers with a high degree of brand loyalty to a specific product are less sensitive to price changes. Thus, using a price promotion strategy merely in the expectation of a sales increase is not advisable; instead of discounting the price, marketers should strengthen consumers' brand loyalty and maintain a skimming strategy. On the contrary, when the degree of brand loyalty is low, perceived acquisition value and perceived transaction value decreased significantly when a higher discount rate was provided. Generally, brands considered inferior might be able to draw attention away from product quality by making consumers focus more on the sacrifice component of price. But considering that consumers with a low degree of brand loyalty are known to be unsatisfied with the product's benefits and to hold relatively negative brand attitudes, the larger price reduction offered in this paper's experimental condition made consumers depreciate the product's quality and benefits even further, and consumers' psychological perceived sacrifice increased while perceived values decreased accordingly. We infer that, in the case of an inferior brand, a drastic price cut or frequent price promotion may increase consumers' uncertainty about the overall components of the product. Therefore, reinforcing the augmented product, such as after-sales service, delivery and credit terms, which constitute one of the levels of the product concept, would be more effective in practice than competing with a product that holds high brand loyalty by reducing the sale price. Although this study examined the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions, it has several limitations. The study was conducted under controlled conditions in which a high-involvement product and two levels of discount rate were applied; with a low-involvement product, the results reported here might have been different, so these results explain only the specific situation studied. Second, the sample consisted of university students in their twenties, so we cannot claim that the results hold firmly for all generations. Future research that manipulates the level of discount along with consumer involvement might lead to a more robust understanding of the effects of various discount rates. Finally, we used a cellular phone as the product stimulus, so it would be very interesting to analyze the results when the stimulus is an intangible product such as a service.
It would also be valuable to analyze whether the change in perceived value affects consumers' final buying behavior positively or negatively.
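
The definitions of perceived acquisition value and perceived transaction value given above have a simple arithmetic reading: benefits net of the price paid, and the gap between an internal reference price and the price paid. The toy sketch below illustrates that reading only; the functional forms and numbers are assumptions, not the authors' measurement model.

```python
# Toy sketch of the value definitions used above: acquisition value weighs
# perceived benefits against the price paid, transaction value compares the
# paid price to an internal reference price. The functional forms and numbers
# are illustrative assumptions, not the authors' measurement model.

def acquisition_value(perceived_benefit, price_paid):
    return perceived_benefit - price_paid            # net gain from owning/using

def transaction_value(reference_price, price_paid):
    return max(reference_price - price_paid, 0)      # "deal" pleasure component

regular_price = 500_000         # hypothetical handset price (KRW)
benefit_high_loyalty = 650_000  # assumed: loyal buyers perceive higher benefit
benefit_low_loyalty = 480_000

for discount in (0.05, 0.20):
    paid = regular_price * (1 - discount)
    print(f"discount {discount:.0%}: "
          f"AV(high loyalty)={acquisition_value(benefit_high_loyalty, paid):,.0f}, "
          f"AV(low loyalty)={acquisition_value(benefit_low_loyalty, paid):,.0f}, "
          f"TV={transaction_value(regular_price, paid):,.0f}")
```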
